Commit Graph

32 Commits

Author SHA1 Message Date
Miek Gieben 40060b4a4b
zoneparser: document not checking syntax (#1245)
* zoneparser: document not checking syntax

Add a couple of lines documenting that we're not syntax checking.

Closes: #948

Signed-off-by: Miek Gieben <miek@miek.nl>

* Update wording after review.

Signed-off-by: Miek Gieben <miek@miek.nl>
2021-03-22 16:21:34 +01:00
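
A minimal sketch of what the documentation change above describes, using only the public ZoneParser API: each record is parsed on its own and no cross-record or semantic checks are made, so a zone that mixes a CNAME with other data at the same owner name still parses without error (zone content below is made up).

~~~
package main

import (
	"fmt"
	"strings"

	"github.com/miekg/dns"
)

func main() {
	// A CNAME next to an A record at the same owner name is semantically
	// invalid, but the parser only parses each record; it does not
	// cross-check them.
	zone := `$ORIGIN example.org.
$TTL 3600
www IN CNAME host.example.org.
www IN A     192.0.2.1
`
	zp := dns.NewZoneParser(strings.NewReader(zone), "example.org.", "")
	for rr, ok := zp.Next(); ok; rr, ok = zp.Next() {
		fmt.Println(rr)
	}
	if err := zp.Err(); err != nil {
		fmt.Println("parse error:", err)
	}
}
~~~
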
Josh Soref 883641f4a9
Spelling (#1222)
* spelling: artifacts

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: encoding

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: exponent

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: ignoring

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: implemented

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: implements

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: next

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: numeric

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: previous

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: positions

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: presentation

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: resetting

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: stringifying

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: subsequent

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: validated

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

Co-authored-by: Miek Gieben <miek@miek.nl>
2021-02-25 17:08:05 +01:00
Shubhendra Singh Chauhan 2f14d104f3
improve code quality (#1228)
* Combine multiple `append`s into a single call

* Fix Yoda conditions

* Fix check for empty string

* revert "combine multiple `append`s"
2021-02-25 17:01:55 +01:00
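
An illustrative sketch (not the actual diff from #1228) of the style issues named above, with made-up names and values:

~~~
package main

import "fmt"

// styleExamples shows the kind of cleanups #1228 describes.
func styleExamples(dst []byte, a, b byte, err error) []byte {
	// Combine multiple appends into a single call (this particular change
	// was reverted again within the same PR):
	//   dst = append(dst, a)
	//   dst = append(dst, b)
	dst = append(dst, a, b)

	// Yoda condition rewritten in the conventional order:
	//   if nil != err {
	if err != nil {
		fmt.Println("error:", err)
	}
	return dst
}

func main() {
	fmt.Printf("%q\n", styleExamples(nil, 'a', 'b', nil))
}
~~~
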
Tom Thorogood 13238cb6ad
Support parsing known RR types in RFC 3597 format (#1211)
* Support parsing known RR types in RFC 3597 format

This is the format used for "Unknown DNS Resource Records", but it's
also useful to support parsing known RR types in this way.

RFC 3597 says:

   An implementation MAY also choose to represent some RRs of known type
   using the above generic representations for the type, class and/or
   RDATA, which carries the benefit of making the resulting master file
   portable to servers where these types are unknown.  Using the generic
   representation for the RDATA of an RR of known type can also be
   useful in the case of an RR type where the text format varies
   depending on a version, protocol, or similar field (or several)
   embedded in the RDATA when such a field has a value for which no text
   format is known, e.g., a LOC RR [RFC1876] with a VERSION other than
   0.

   Even though an RR of known type represented in the \# format is
   effectively treated as an unknown type for the purpose of parsing the
   RDATA text representation, all further processing by the server MUST
   treat it as a known type and take into account any applicable type-
   specific rules regarding compression, canonicalization, etc.

* Correct mistakes in TestZoneParserAddressAAAA

This was spotted when writing TestParseKnownRRAsRFC3597.

* Eliminate canParseAsRR

This has the advantage that concrete types will now be returned for
parsed ANY, NULL, OPT and TSIG records.

* Expand TestDynamicUpdateParsing for RFC 3597

This ensures we're properly handling empty RDATA for RFC 3597 parsed
records.
2021-01-30 14:05:25 +01:00
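
A small sketch of the behaviour this commit adds, assuming the parser unpacks the generic RDATA into the concrete type as the message describes: a known type written in the RFC 3597 \# form comes back as its normal Go type.

~~~
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	// Type A (type code 1) written in the generic RFC 3597 form:
	// 4 octets of RDATA, 192.0.2.1 encoded as hex.
	rr, err := dns.NewRR(`example.org. 3600 IN A \# 4 c0000201`)
	if err != nil {
		panic(err)
	}
	if a, ok := rr.(*dns.A); ok {
		fmt.Println("parsed as *dns.A:", a.A) // 192.0.2.1
	}
}
~~~
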
JINMEI Tatuya da812eed45
fix and enhance stringToCm to parse LOC RR optional fields (#1148)
Automatically submitted.
2020-08-17 07:08:03 +00:00
JINMEI Tatuya 81df27db17
validate LOC's lat/long field values not to be out of range (#1149)
Automatically submitted.
2020-08-17 07:07:46 +00:00
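
A sketch covering this entry and the one above it (assumed behaviour, based on the commit titles): the optional size and precision fields of a LOC record are parsed by stringToCm, and out-of-range latitude/longitude values are rejected.

~~~
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	// LOC record with the optional size, horizontal precision and vertical
	// precision fields present; each is parsed into centimeters.
	rr, err := dns.NewRR("example.org. IN LOC 52 22 23.000 N 4 53 32.000 E -2.00m 2m 10000m 10m")
	if err != nil {
		panic(err)
	}
	fmt.Println(rr)

	// A latitude of 92 degrees is out of range and should now be rejected.
	_, err = dns.NewRR("example.org. IN LOC 92 22 23.000 N 4 53 32.000 E -2.00m")
	fmt.Println("out-of-range latitude:", err)
}
~~~
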
taciomcosta d128d10d17
refactor: remove ParseZone and parseZone (#1099) 2020-04-28 09:24:18 +02:00
Miek Gieben 9dcf47a409
Doc updates (#1075)
* Doc updates

Was reading https://pkg.go.dev/github.com/miekg/dns?tab=doc and spotted
some types and things that could be slightly better.

Make v unexported, as this version stuff should not be part of the
public API.

Signed-off-by: Miek Gieben <miek@miek.nl>

* fix test

Signed-off-by: Miek Gieben <miek@miek.nl>
2020-02-14 22:47:21 +01:00
chantra 9b7437f11d [zone parser] disallow nested $GENERATE directive (#1033)
While the range of a $GENERATE is now limited, one could pass
a line with two $GENERATE directives that would exponentially increase the
time spent generating RRs.
Limit to only one per line.
Fixes #1020
2019-10-23 10:41:32 +01:00
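
An illustrative use of $GENERATE through the ZoneParser, for reference alongside this and the following entry (#1020); the zone content is made up. The range is capped at 65535 steps and only one $GENERATE per line is accepted.

~~~
package main

import (
	"fmt"
	"strings"

	"github.com/miekg/dns"
)

func main() {
	// Expands to host-1 ... host-4; a range wider than 65535 steps, or a
	// second $GENERATE on the same line, is rejected by the parser.
	zone := `$ORIGIN example.org.
$GENERATE 1-4 host-$ 3600 IN A 10.0.0.$
`
	zp := dns.NewZoneParser(strings.NewReader(zone), "example.org.", "")
	for rr, ok := zp.Next(); ok; rr, ok = zp.Next() {
		fmt.Println(rr)
	}
	if err := zp.Err(); err != nil {
		fmt.Println("parse error:", err)
	}
}
~~~
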
Miek Gieben 76b57d0384
Limit $GENERATE range to 65535 steps (#1020)
* Limit $GENERATE range to 65535 steps

Having these checks means all tests in TestCrasherString() are not
reached because we bail out earlier - removed that test altogether.

Fixes #1019

Signed-off-by: Miek Gieben <miek@miek.nl>

* bring back testcase

Signed-off-by: Miek Gieben <miek@miek.nl>

* bring back crash test

Signed-off-by: Miek Gieben <miek@miek.nl>
2019-10-03 20:01:28 +01:00
chantra 557870346a [scan] fix crashers when parsing comment (#1018)
* [scan] fix crashers when parsing comment

When dealing with comments, the parser was potentially incrementing the comi
variable twice. During the second access to com[], comi was possibly
larger than maxTok, causing an out-of-bounds error:
panic: runtime error: index out of range [2048] with length 2048

* Keep only 1 crasher test string.
* Move tests from scan_test.go to fuzz_test.go
2019-10-03 19:09:39 +01:00
Tom Thorogood 25cacca8ca Prohibit newlines before record data in the ZoneParser (#979)
* Merge setRR into ZoneParser.Next

* Remove file argument from RR.parse

This was only used to fill in the ParseError file field. Instead we now
fill in that field in ZoneParser.Next.

* Move dynamic update check out of RR.parse

This consolidates all the dynamic update checks into one place.

* Check for unexpected newline before parsing RR data

* Move rr.parse call into if-statement

* Allow dynamic updates for TKEY and RFC3597 records

* Document that ParseError file field is unset from parse

* Inline allowDynamicUpdate into ZoneParser.Next

* Improve and simplify TestUnexpectedNewline
2019-06-10 07:38:54 +01:00
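
A minimal sketch of the behaviour this change enforces (the exact error text is not shown here): RDATA may no longer start on the line after the RR header.

~~~
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	// The newline between the type and its data now produces a ParseError
	// rather than a silently misparsed record.
	_, err := dns.NewRR("example.org. 3600 IN A\n192.0.2.1")
	fmt.Println(err)
}
~~~
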
Tom Thorogood 29b9bf368b Remove pointless casts (#895)
* Remove pointless casts

These are all casts where the value was already of the same type.

* Use var style for zero-value not cast style
2019-01-04 10:30:55 +00:00
Tom Thorogood 09499bd07f Use IsFqdn and Fqdn helper functions more (#892) 2019-01-04 08:13:00 +00:00
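
For reference, the two helpers this commit leans on:

~~~
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	fmt.Println(dns.IsFqdn("example.org"))  // false: no trailing dot
	fmt.Println(dns.Fqdn("example.org"))    // "example.org."
	fmt.Println(dns.IsFqdn("example.org.")) // true
}
~~~
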
Tom Thorogood 2533d75276 Move scanner comment handling out of scanRR (#877)
* Add comment getter to zlexer

* Use zlexer.Comment instead of lex.comment

* Move comment handling out of setRR code

* Move comment field from lex to zlexer

* Eliminate ZoneParser.com field

* Return empty string from zlexer.Comment on error

* Only reset zlexer.comment field once per Next

* Remove zlexer merge TODO

I'm pretty sure these have to remain separate which is okay.
2018-12-31 10:20:26 +01:00
Tom Thorogood 274da7d3ef
Add new ZoneParser API (#794)
* Improve ParseZone tests

* Add new ZoneParser API

* Use the ZoneParser API directly in ReadRR

* Merge parseZoneHelper into ParseZone

* Make generate string building slightly more efficient

* Add SetDefaultTTL method to ZoneParser

This makes it possible for external consumers to implement ReadRR.

* Make $INCLUDE directive opt-in

The $INCLUDE directive opens a user controlled file and parses it as
a DNS zone file. The error messages may reveal portions of sensitive
files, such as:
	/etc/passwd: dns: not a TTL: "root0:0:root:/root:/bin/bash" at line: 1:31
	/etc/shadow: dns: not a TTL: "root:$6$<redacted>::0:99999:7:::" at line: 1:125

Both ParseZone and ReadRR are currently opt-in for backward
compatibility.

* Disable $INCLUDE support in ReadRR

ReadRR and NewRR are often passed untrusted input. At the same time,
$INCLUDE isn't really useful for ReadRR as it only ever returns the
first record.

This is a breaking change, but the current behaviour represents a slight
security risk.

* Document the need to drain the ParseZone chan

* Cleanup the documentation of NewRR, ReadRR and ParseZone

* Document the ZoneParser API

* Deprecated the ParseZone function

* Add whitespace to ZoneParser.Next

* Remove prevName field from ZoneParser

This doesn't track anything meaningful as both zp.prevName and h.Name
are only ever set at the same point and to the same value.

* Use uint8 for ZoneParser.include field

It has a maximum value of 7 which easily fits within uint8.

This reduces the size of ZoneParser from 160 bytes to 152 bytes.

* Add setParseError helper to ZoneParser

* Surface $INCLUDE os.Open error in error message

* Rename ZoneParser.include field to includeDepth

* Make maximum $INCLUDE depth a const

* Add ParseZone and ZoneParser benchmarks

* Parse $GENERATE directive with a single ZoneParser

This should be more efficient than calling NewRR for each generated
record.

* Run go fmt on generate_test.go

* Add a benchmark for $GENERATE directives

* Use a custom reader for generate

This avoids the overhead and memory usage of building the zone string.

name         old time/op    new time/op    delta
Generate-12     165µs ± 4%     157µs ± 2%   -5.06%  (p=0.000 n=25+25)

name         old alloc/op   new alloc/op   delta
Generate-12    42.1kB ± 0%    31.8kB ± 0%  -24.42%  (p=0.000 n=20+23)

name         old allocs/op  new allocs/op  delta
Generate-12     1.56k ± 0%     1.55k ± 0%   -0.38%  (p=0.000 n=25+25)

* Return correct ParseError from generateReader

The last commit made these regular errors while they had been
ParseErrors before.

* Return error message as string from modToPrintf

This is slightly simpler and they don't need to be errors.

* Skip setting includeDepth in generate

This sub parser isn't allowed to use $INCLUDE directives anyway.

Note: If generate is ever changed to allow $INCLUDE directives, then
      this line must be added back. Without doing that, it would
      be possible to exceed maxIncludeDepth.

* Make generateReader errors sticky

ReadByte should not be called after an error has been returned, but
this is cheap insurance.

* Move file and lex fields to end of generateReader

These are only used for creating a ParseError and so are unlikely to be
accessed.

* Don't return offset with error in modToPrintf

Along for the ride, are some whitespace and style changes.

* Add whitespace to generate and simplify step

* Use a for loop instead of goto in generate

* Support $INCLUDE directives inside $GENERATE directives

This was previously supported and may be useful. This is now more
rigorous as the maximum include depth is respected and relative
$INCLUDE directives are now supported from within $GENERATE.

* Don't return any lexer tokens after read error

Without this, read errors are likely to be lost and become parse errors
of the remaining str. The $GENERATE code relies on surfacing errors from
the reader.

* Support $INCLUDE in NewRR and ReadRR

Removing $INCLUDE support from these is a breaking change and should
not be included in this pull request.

* Add test to ensure $GENERATE respects $INCLUDE support

* Unify TestZoneParserIncludeDisallowed with other tests

* Remove stray whitespace from TestGenerateSurfacesErrors

* Move ZoneParser SetX methods above Err method

* $GENERATE should not accept step of 0

If step is allowed to be 0, then generateReader (and the code it
replaced) will get stuck in an infinite loop.

This is a potential DoS vulnerability.

* Fix ReadRR comment for file argument

I missed this previously. The file argument is also used to
resolve relative $INCLUDE directives.

* Prevent test panics on nil error

* Rework ZoneParser.subNext

This is slightly cleaner and will close the underlying *os.File even
if an error occurs.

* Make ZoneParser.generate call subNext

This also moves the calls to setParseError into generate.

* Report errors when parsing rest of $GENERATE directive

* Report proper error location in $GENERATE directive

This makes error messages much clearer.

* Simplify modToPrintf func

Note: When width is 0, the leading 0 of the fmt string is now excluded.
      This should not alter the formatting of numbers in any way.

* Add comment explaining sub field

* Remove outdated error comment from generate
2018-10-20 11:47:56 +10:30
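
A short sketch of the ZoneParser API this PR introduces (zone content is made up): SetDefaultTTL supplies a TTL when the zone has no $TTL, and $INCLUDE stays disabled unless explicitly allowed.

~~~
package main

import (
	"fmt"
	"strings"

	"github.com/miekg/dns"
)

func main() {
	zone := `$ORIGIN example.org.
@   IN SOA ns1 hostmaster 1 7200 3600 1209600 3600
www IN A   192.0.2.1
`
	zp := dns.NewZoneParser(strings.NewReader(zone), "example.org.", "db.example.org")
	zp.SetDefaultTTL(3600)      // the zone above sets no $TTL
	zp.SetIncludeAllowed(false) // $INCLUDE is opt-in; enable only for trusted files

	for rr, ok := zp.Next(); ok; rr, ok = zp.Next() {
		fmt.Println(rr)
	}
	if err := zp.Err(); err != nil {
		fmt.Println("parse error:", err)
	}
}
~~~
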
Tom Thorogood 17c1bc6792
Eliminate lexer goroutines (#792)
* Eliminate zlexer goroutine

This replaces the zlexer goroutine and channels with a zlexer struct
that maintains state and provides a channel-like API.

* Eliminate klexer goroutine

This replaces the klexer goroutine and channels with a klexer struct
that maintains state and provides a channel-like API.

* Merge scan into zlexer and klexer

This does result in tokenText existing twice, but it's pretty simple
and small so it's not that bad.

* Avoid using text/scanner.Position to track position

* Track escape within zlexer.Next

* Avoid zl.commt check on space and tab in zlexer

* Track stri within zlexer.Next

* Track comi within zlexer.Next

There is one special case at the start of a comment that needs to be
handled, otherwise this is as simple as stri was.

* Use a single token buffer in zlexer

This is safe as there is never both a non-empty string buffer and a
non-empty comment buffer.

* Don't hardcode length of zl.tok in zlexer

* Eliminate lex.length field

This is always set to len(l.token) and is only queried in a few places.

It was added in 47cc5b052d without any
obvious need.

* Add whitespace to klexer.Next

* Track lex within klexer.Next

* Use a strings.Builder in klexer.Next

* Simplify : case in klexer.Next

* Add whitespace to zlexer.Next

* Change for loop style in zlexer.Next and klexer.Next

* Surface read errors in zlexer

* Surface read errors from klexer

* Remove debug line from parseKey

* Rename tokenText to readByte

* Make readByte return ok bool

Also change the for loop style to match the Next for loops.

* Make readByte errors sticky

klexer.Next calls readByte separately from within the loop. Without
readByte being sticky, an error that occurs during that readByte call
may be lost.

* Panic in testRR if the error is non-nil

* Add whitespace and unify field setting in zlexer.Next

* Remove eof fields from zlexer and klexer

With readByte having sticky errors, this is no longer needed. zl.eof = true
was also in the wrong place and could mask an unbalanced brace error.

* Merge zl.tok blocks in zlexer.Next

* Split the tok buffer into separate string and comment buffers

The invariant of stri > 0 && comi > 0 never being true was broken when
x == '\n' && !zl.quote && zl.commt && zl.brace != 0 (the
"If not in a brace this ends the comment AND the RR" block).

Split the buffer back out into two separate buffers to avoid clobbering.

* Replace token slices with arrays in zlexer

* Add a NewRR benchmark

* Move token buffers into zlexer.Next

These don't need to be retained across Next calls and can be stack
allocated inside Next. This drastically reduces memory consumption as
they accounted for nearly half of all the memory used.

name      old alloc/op   new alloc/op   delta
NewRR-12    9.72kB ± 0%    4.98kB ± 0%  -48.72%  (p=0.000 n=10+10)

* Add a ReadRR benchmark

Unlike NewRR, this will use an io.Reader that does not implement any
methods aside from Read. In particular it does not implement
io.ByteReader.

* Avoid using a bufio.Reader for io.ByteReader readers

At the same time use a smaller buffer size of 1KiB rather than the
bufio.NewReader default of 4KiB.

name       old time/op    new time/op    delta
NewRR-12     11.0µs ± 3%     9.5µs ± 2%  -13.77%  (p=0.000 n=9+10)
ReadRR-12    11.2µs ±16%     9.8µs ± 1%  -13.03%  (p=0.000 n=10+10)

name       old alloc/op   new alloc/op   delta
NewRR-12     4.98kB ± 0%    0.81kB ± 0%  -83.79%  (p=0.000 n=10+10)
ReadRR-12    4.87kB ± 0%    1.82kB ± 0%  -62.73%  (p=0.000 n=10+10)

name       old allocs/op  new allocs/op  delta
NewRR-12       19.0 ± 0%      17.0 ± 0%  -10.53%  (p=0.000 n=10+10)
ReadRR-12      19.0 ± 0%      19.0 ± 0%     ~     (all equal)

* Surface any remaining comment from zlexer.Next

* Improve comment handling in zlexer.Next

This both fixes a regression where comments could be lost under certain
circumstances and now emits comments that occur within braces.

* Remove outdated comment from zlexer.Next and klexer.Next

* Delay converting LF to space in braced comment

* Fixup TestParseZoneComments

* Remove tokenUpper field from lex

Not computing this for every token, and instead computing it only
when needed, is a substantial performance improvement.

name       old time/op    new time/op    delta
NewRR-12     9.56µs ± 0%    6.30µs ± 1%  -34.08%  (p=0.000 n=9+10)
ReadRR-12    9.93µs ± 1%    6.67µs ± 1%  -32.77%  (p=0.000 n=10+10)

name       old alloc/op   new alloc/op   delta
NewRR-12       824B ± 0%      808B ± 0%   -1.94%  (p=0.000 n=10+10)
ReadRR-12    1.83kB ± 0%    1.82kB ± 0%   -0.87%  (p=0.000 n=10+10)

name       old allocs/op  new allocs/op  delta
NewRR-12       17.0 ± 0%      17.0 ± 0%     ~     (all equal)
ReadRR-12      19.0 ± 0%      19.0 ± 0%     ~     (all equal)

* Update ParseZone documentation to match comment changes

The zlexer code was changed to return comments more often, so update the
ParseZone documentation to match.
2018-10-15 17:42:31 +10:30
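
An illustrative pattern only, not the library's actual types: a lexer that used to run in its own goroutine and send tokens over a channel can expose the same pull-style API from a plain struct whose Next method advances on demand.

~~~
package main

import "fmt"

// lexer is a hypothetical stand-in for zlexer: it keeps its own position
// instead of running in a goroutine and sending tokens over a channel.
type lexer struct {
	input string
	pos   int
}

// Next behaves like a channel receive: it returns the next token and false
// once the input is exhausted.
func (l *lexer) Next() (string, bool) {
	for l.pos < len(l.input) && l.input[l.pos] == ' ' {
		l.pos++
	}
	if l.pos >= len(l.input) {
		return "", false
	}
	start := l.pos
	for l.pos < len(l.input) && l.input[l.pos] != ' ' {
		l.pos++
	}
	return l.input[start:l.pos], true
}

func main() {
	l := &lexer{input: "example.org. 3600 IN A 192.0.2.1"}
	for tok, ok := l.Next(); ok; tok, ok = l.Next() {
		fmt.Println(tok)
	}
}
~~~
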
Tom Thorogood 7f61c6631b
Fix dominikh/go-tools nits (#758)
* Remove unused functions and consts

* Address gosimple nits

* Address staticcheck nits

This excludes several that were intentional or weren't actual errors.

* Reduce size of lex struct

This reduces the size of the lex struct by 8 bytes from:
  lex.token string: 0-16 (size 16, align 8)
  lex.tokenUpper string: 16-32 (size 16, align 8)
  lex.length int: 32-40 (size 8, align 8)
  lex.err bool: 40-41 (size 1, align 1)
  lex.value uint8: 41-42 (size 1, align 1)
  padding: 42-48 (size 6, align 0)
  lex.line int: 48-56 (size 8, align 8)
  lex.column int: 56-64 (size 8, align 8)
  lex.torc uint16: 64-66 (size 2, align 2)
  padding: 66-72 (size 6, align 0)
  lex.comment string: 72-88 (size 16, align 8)
to:
  lex.token string: 0-16 (size 16, align 8)
  lex.tokenUpper string: 16-32 (size 16, align 8)
  lex.length int: 32-40 (size 8, align 8)
  lex.err bool: 40-41 (size 1, align 1)
  lex.value uint8: 41-42 (size 1, align 1)
  lex.torc uint16: 42-44 (size 2, align 2)
  padding: 44-48 (size 4, align 0)
  lex.line int: 48-56 (size 8, align 8)
  lex.column int: 56-64 (size 8, align 8)
  lex.comment string: 64-80 (size 16, align 8)

* Reduce size of response struct

This reduces the size of the response struct by 8 bytes from:
  response.msg []byte: 0-24 (size 24, align 8)
  response.hijacked bool: 24-25 (size 1, align 1)
  padding: 25-32 (size 7, align 0)
  response.tsigStatus error: 32-48 (size 16, align 8)
  response.tsigTimersOnly bool: 48-49 (size 1, align 1)
  padding: 49-56 (size 7, align 0)
  response.tsigRequestMAC string: 56-72 (size 16, align 8)
  response.tsigSecret map[string]string: 72-80 (size 8, align 8)
  response.udp *net.UDPConn: 80-88 (size 8, align 8)
  response.tcp net.Conn: 88-104 (size 16, align 8)
  response.udpSession *github.com/tmthrgd/dns.SessionUDP: 104-112 (size 8, align 8)
  response.writer github.com/tmthrgd/dns.Writer: 112-128 (size 16, align 8)
  response.wg *sync.WaitGroup: 128-136 (size 8, align 8)
to:
  response.msg []byte: 0-24 (size 24, align 8)
  response.hijacked bool: 24-25 (size 1, align 1)
  response.tsigTimersOnly bool: 25-26 (size 1, align 1)
  padding: 26-32 (size 6, align 0)
  response.tsigStatus error: 32-48 (size 16, align 8)
  response.tsigRequestMAC string: 48-64 (size 16, align 8)
  response.tsigSecret map[string]string: 64-72 (size 8, align 8)
  response.udp *net.UDPConn: 72-80 (size 8, align 8)
  response.tcp net.Conn: 80-96 (size 16, align 8)
  response.udpSession *github.com/tmthrgd/dns.SessionUDP: 96-104 (size 8, align 8)
  response.writer github.com/tmthrgd/dns.Writer: 104-120 (size 16, align 8)
  response.wg *sync.WaitGroup: 120-128 (size 8, align 8)
2018-09-27 04:02:05 +09:30
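
The struct-size reductions above come purely from field ordering; a self-contained illustration of the effect (the types here are made up):

~~~
package main

import (
	"fmt"
	"unsafe"
)

// loose wastes padding: each bool sits alone in an 8-byte word.
type loose struct {
	a bool
	b int64
	c bool
}

// packed groups the small fields so they share one word.
type packed struct {
	b int64
	a bool
	c bool
}

func main() {
	fmt.Println(unsafe.Sizeof(loose{}))  // 24 on 64-bit platforms
	fmt.Println(unsafe.Sizeof(packed{})) // 16
}
~~~
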
Miek Gieben dcdbddd810
ClassANY: don't convert CLASS255 to ANY (#618)
* ClassANY: don't convert CLASS255 to ANY

Class "ANY" is wireformat only. In zonefile you can use CLASS255, but
when String-ing we convert this into "ANY" which is wrong. I.e. this
means we can't read back our own update.

Bit of a kludge to work around this, as I'm not sure we can just remove
ANY from the ClassToString map.
2018-01-07 17:57:04 +00:00
Miek Gieben 3bbde607ac
relative include: now tested! (#602)
* relative include: now tested!

If you take the effort of creating includePath, actually use it when
opening the file. Now tested (again) with CoreDNS (with a zone file that
includes two others).

Failure to include leads to:

~~~
2017/12/07 16:47:00 plugin/file: /tmp/example.org: dns: failed to include `a/1include1.org' as `/tmp/a/1include1.org': "a/1include1.org" at line: 15:24
~~~

* dont change the error line
2017-12-07 17:12:20 +00:00
Miek Gieben c438b740fe
Allow $INCLUDE to reference relative file (#598)
When using a relative file in an $INCLUDE, the file was resolved from
the cwd of the calling process; this changes it to be resolved relative to
the directory of the file that contains the $INCLUDE.

Code from https://github.com/miekg/dns/issues/537#issuecomment-342932962

Fixes #537
2017-12-06 22:03:54 +00:00
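
A sketch of the relative-$INCLUDE behaviour that this entry and the one above it establish, written against the current ZoneParser API (which arrived later, in #794); the file names are made up. The relative path in $INCLUDE is resolved against the directory of the including file, not the process cwd.

~~~
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"github.com/miekg/dns"
)

func main() {
	dir, err := os.MkdirTemp("", "zone")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// The included file lives next to the (virtual) including file.
	incl := []byte("sub 3600 IN A 192.0.2.2\n")
	if err := os.WriteFile(filepath.Join(dir, "sub.org"), incl, 0o600); err != nil {
		panic(err)
	}

	zone := "www 3600 IN A 192.0.2.1\n$INCLUDE sub.org\n"
	zp := dns.NewZoneParser(strings.NewReader(zone), "example.org.", filepath.Join(dir, "db.example.org"))
	zp.SetIncludeAllowed(true)

	for rr, ok := zp.Next(); ok; rr, ok = zp.Next() {
		fmt.Println(rr)
	}
	if err := zp.Err(); err != nil {
		fmt.Println("parse error:", err)
	}
}
~~~
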
Miek Gieben 2a67631d76
cleanup: remove debug.Printf from scanner (#573)
Remove the debug.Printf stuff from scanner and some other style nits.
2017-11-17 10:48:42 +00:00
Miek Gieben cfe41281c2
txt parser: fix goroutine leak (#570)
* txt parser: fix goroutine leak

When a higher level (grammar or syntax) error was encountered, the lower
level zlexer goroutine would be left open, trying to send more tokens
on the channel c. This leaked a goroutine per failed parse...

This PR fixes this by signalling the error - by canceling a context - and
retrieving any remaining items from the channel, so zlexer can return.

It also adds a goroutine leak test that can be re-used in other tests,
the TestParseBadNAPTR test uses this leak detector.

The private key parsing code had the same bug and is also fixed in this
PR.

Fixes #586
Fixes https://github.com/coredns/coredns/issues/1233

* sem not needed anymore
2017-11-17 10:47:28 +00:00
Miek Gieben b38dc3dcb7
Cleanup: gofmt -w -s *.go (#548)
Some renames of internal names to make go lint happier.
2017-11-03 16:15:35 +00:00
Richard Gibson eccf8bbe83 Correctly parse omitted TTLs and relative domains (#513)
* Fix $TTL handling
* Error when there is no TTL for an RR
* Fix relative name handling
* Error when a relative name is used without an origin (cf. https://tools.ietf.org/html/rfc1035#section-5.1 )

Fixes #484
2017-09-26 11:15:37 -04:00
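
A brief sketch of the errors this change introduces, written against the current ZoneParser API for brevity (assumed behaviour): a relative owner name with no origin, and an omitted TTL with no $TTL or default TTL in effect, are now parse errors instead of producing bogus records.

~~~
package main

import (
	"fmt"
	"strings"

	"github.com/miekg/dns"
)

func main() {
	// Relative owner name with an empty origin: now a parse error.
	zp := dns.NewZoneParser(strings.NewReader("www 3600 IN A 192.0.2.1\n"), "", "")
	_, _ = zp.Next()
	fmt.Println("relative name, no origin:", zp.Err())

	// Omitted TTL with no $TTL directive and no default TTL: also an error.
	zp = dns.NewZoneParser(strings.NewReader("www.example.org. IN A 192.0.2.1\n"), "", "")
	_, _ = zp.Next()
	fmt.Println("missing TTL:", zp.Err())
}
~~~
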
Miek Gieben e420576857 scan: Fix $INCLUDE arguments to parseZone (#508)
When an $INCLUDE was seen, the arguments to parseZone were in the wrong
order, meaning the filename was used as the `neworigin` instead of the
actual origin we need.

Extend the testcase to check for the full name of the record.
2017-08-18 14:14:42 +01:00
Tim Esselens bbca4873b3 variable shadowing of token (#503)
* Added test for $INCLUDE statement parser in zone files

* FIX: localized l to the switch statement; it shadowed the later call to os.Open(l.token)
2017-08-08 15:19:10 -07:00
Miek Gieben babbdab23a parsing: error on unbalanced braces (#489)
When done parsing, check if we have balanced braces, if not error out.

Fixes #488
2017-05-23 11:21:56 +01:00
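
A one-line sketch of the case this guards against (assumed behaviour): an opening parenthesis that is never closed now yields a parse error at end of input.

~~~
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	// The "(" is never closed, so parsing should fail instead of silently
	// accepting the record.
	_, err := dns.NewRR(`example.org. 3600 IN TXT ("unbalanced"`)
	fmt.Println(err)
}
~~~
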
Jon Nappi c862b7e359 Replace Atoi with ParseUint where appropriate (#470)
* replace Atoi with ParseUint where appropriate

* more Atoi replacements
2017-03-10 21:57:03 +00:00
Roland Bracewell Shoemaker 574f29b9d6 Always set tokenUpper when setting token (#403) 2016-10-03 15:10:26 +01:00
Michael Haro 1be7320498 Use t.Errorf in tests and make the error variable naming more consistent. (#367)
* Make the error variable always named err.

Sometimes the error variable was named 'err', sometimes 'e'. Sometimes
'e' referred to an EDNS or string and not an error type.

* Use t.Errorf instead of t.Logf & t.Fail.
2016-06-09 07:00:08 +01:00
Miek Gieben 475ab80867 Remove (most) reflection
Remove the use of reflection when packing and unpacking, instead
generate all the pack and unpack functions using msg_generate.
This will generate zmsg.go which in turn calls the helper functions from
msg_helper.go.

This increases the speed by about 30% while cutting back on memory
usage. Not all RRs are using it yet, but that will be rectified in an
upcoming PR.

Most of the speed increase is in the header/question section parsing.
These functions are *not* generated, but are straightforward enough. The
implementation can be found in msg.go.

The new code has been fuzzed by go-fuzz, which turned up some issues.

All files that started with 'z' and are not autogenerated were renamed,
e.g. zscan.go is now scan.go.

Reflection is still used; in subsequent PRs it will be removed entirely.
2016-06-03 12:45:22 +01:00
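
The generated pack and unpack helpers sit behind the ordinary message API; a minimal round trip through it (record contents are made up):

~~~
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	m.SetQuestion("example.org.", dns.TypeA)

	// Pack and Unpack are the public entry points to the (now largely
	// generated) pack/unpack code.
	wire, err := m.Pack()
	if err != nil {
		panic(err)
	}

	var back dns.Msg
	if err := back.Unpack(wire); err != nil {
		panic(err)
	}
	fmt.Println(back.Question[0].Name, dns.TypeToString[back.Question[0].Qtype])
}
~~~
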