// dns/scan_test.go

package dns
import (
"io"
"io/ioutil"
"net"
"os"
"strings"
"testing"
)
func TestZoneParserGenerate(t *testing.T) {
zone := "$ORIGIN example.org.\n$GENERATE 10-12 foo${2,3,d} IN A 127.0.0.$"
wantRRs := []RR{
&A{Hdr: RR_Header{Name: "foo012.example.org."}, A: net.ParseIP("127.0.0.10")},
&A{Hdr: RR_Header{Name: "foo013.example.org."}, A: net.ParseIP("127.0.0.11")},
&A{Hdr: RR_Header{Name: "foo014.example.org."}, A: net.ParseIP("127.0.0.12")},
}
wantIdx := 0
z := NewZoneParser(strings.NewReader(zone), "", "")
for rr, ok := z.Next(); ok; rr, ok = z.Next() {
if wantIdx >= len(wantRRs) {
t.Fatalf("expected %d RRs, but got more", len(wantRRs))
}
if got, want := rr.Header().Name, wantRRs[wantIdx].Header().Name; got != want {
t.Fatalf("expected name %s, but got %s", want, got)
}
a, okA := rr.(*A)
if !okA {
t.Fatalf("expected *A RR, but got %T", rr)
}
if got, want := a.A, wantRRs[wantIdx].(*A).A; !got.Equal(want) {
t.Fatalf("expected A with IP %v, but got %v", got, want)
}
wantIdx++
}
if err := z.Err(); err != nil {
t.Fatalf("expected no error, but got %s", err)
}
if wantIdx != len(wantRRs) {
t.Errorf("too few records, expected %d, got %d", len(wantRRs), wantIdx)
}
}
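
// The loop shape used in the tests above is how the ZoneParser API is meant
// to be driven: call Next until it reports false, then consult Err exactly
// once. The helper below is only a minimal sketch of that pattern for a
// caller who wants every record collected into a slice; collectRRs is an
// illustrative name, not part of the upstream API.
func collectRRs(r io.Reader, origin, file string) ([]RR, error) {
	z := NewZoneParser(r, origin, file)

	var rrs []RR
	for rr, ok := z.Next(); ok; rr, ok = z.Next() {
		rrs = append(rrs, rr)
	}

	// Err must be checked after the loop, because Next signals both "done"
	// and "failed" through its ok result.
	return rrs, z.Err()
}
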
func TestZoneParserInclude(t *testing.T) {
tmpfile, err := ioutil.TempFile("", "dns")
if err != nil {
t.Fatalf("could not create tmpfile for test: %s", err)
}
defer os.Remove(tmpfile.Name())
if _, err := tmpfile.WriteString("foo\tIN\tA\t127.0.0.1"); err != nil {
t.Fatalf("unable to write content to tmpfile %q: %s", tmpfile.Name(), err)
}
if err := tmpfile.Close(); err != nil {
t.Fatalf("could not close tmpfile %q: %s", tmpfile.Name(), err)
}
zone := "$ORIGIN example.org.\n$INCLUDE " + tmpfile.Name() + "\nbar\tIN\tA\t127.0.0.2"
var got int
z := NewZoneParser(strings.NewReader(zone), "", "")
z.SetIncludeAllowed(true)
for rr, ok := z.Next(); ok; rr, ok = z.Next() {
switch rr.Header().Name {
case "foo.example.org.", "bar.example.org.":
default:
t.Fatalf("expected foo.example.org. or bar.example.org., but got %s", rr.Header().Name)
}
got++
}
if err := z.Err(); err != nil {
t.Fatalf("expected no error, but got %s", err)
}
if expected := 2; got != expected {
t.Errorf("failed to parse zone after include, expected %d records, got %d", expected, got)
}
os.Remove(tmpfile.Name())
z = NewZoneParser(strings.NewReader(zone), "", "")
z.SetIncludeAllowed(true)
z.Next()
if err := z.Err(); err == nil ||
!strings.Contains(err.Error(), "failed to open") ||
!strings.Contains(err.Error(), tmpfile.Name()) ||
!strings.Contains(err.Error(), "no such file or directory") {
t.Fatalf(`expected error to contain: "failed to open", %q and "no such file or directory" but got: %s`,
tmpfile.Name(), err)
}
}
func TestZoneParserIncludeDisallowed(t *testing.T) {
tmpfile, err := ioutil.TempFile("", "dns")
if err != nil {
t.Fatalf("could not create tmpfile for test: %s", err)
}
defer os.Remove(tmpfile.Name())
if _, err := tmpfile.WriteString("foo\tIN\tA\t127.0.0.1"); err != nil {
t.Fatalf("unable to write content to tmpfile %q: %s", tmpfile.Name(), err)
}
if err := tmpfile.Close(); err != nil {
t.Fatalf("could not close tmpfile %q: %s", tmpfile.Name(), err)
}
zp := NewZoneParser(strings.NewReader("$INCLUDE "+tmpfile.Name()), "example.org.", "")
for _, ok := zp.Next(); ok; _, ok = zp.Next() {
}
const expect = "$INCLUDE directive not allowed"
if err := zp.Err(); err == nil || !strings.Contains(err.Error(), expect) {
t.Errorf("expected error to contain %q, got %v", expect, err)
}
}
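
// SetDefaultTTL exists so that ReadRR-style parsing can be layered on top of
// ZoneParser. The function below is only a sketch of that layering, under the
// assumption that the caller wants the first record and a fixed fallback TTL;
// readFirstRR and the 3600 value are illustrative choices, not upstream code.
func readFirstRR(r io.Reader, file string) (RR, error) {
	z := NewZoneParser(r, ".", file)
	// Records that carry no explicit TTL and follow no $TTL directive get
	// this default.
	z.SetDefaultTTL(3600)

	rr, _ := z.Next()
	return rr, z.Err()
}
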
func TestZoneParserAddressAAAA(t *testing.T) {
tests := []struct {
record string
want *AAAA
}{
{
record: "1.example.org. 600 IN AAAA ::1",
want: &AAAA{Hdr: RR_Header{Name: "1.example.org."}, AAAA: net.IPv6loopback},
},
{
record: "2.example.org. 600 IN AAAA ::FFFF:127.0.0.1",
want: &AAAA{Hdr: RR_Header{Name: "2.example.org."}, AAAA: net.ParseIP("::FFFF:127.0.0.1")},
},
}
for _, tc := range tests {
got, err := NewRR(tc.record)
if err != nil {
t.Fatalf("expected no error, but got %s", err)
}
aaaa, ok := got.(*AAAA)
if !ok {
t.Fatalf("expected *AAAA RR, but got %T", got)
}
if !aaaa.AAAA.Equal(tc.want.AAAA) {
t.Fatalf("expected AAAA with IP %v, but got %v", tc.want.AAAA, aaaa.AAAA)
}
}
}
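
// TestZoneParserAddressBad checks that A and AAAA records whose addresses are
// of the wrong family fail to parse.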
func TestZoneParserAddressBad(t *testing.T) {
records := []string{
"1.bad.example.org. 600 IN A ::1",
"2.bad.example.org. 600 IN A ::FFFF:127.0.0.1",
"3.bad.example.org. 600 IN AAAA 127.0.0.1",
}
for _, record := range records {
const expect = "bad A"
if got, err := NewRR(record); err == nil || !strings.Contains(err.Error(), expect) {
t.Errorf("NewRR(%v) = %v, want err to contain %q", record, got, expect)
}
}
}
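
// TestParseTA checks that a minimal TA record parses into a non-nil RR.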
func TestParseTA(t *testing.T) {
rr, err := NewRR(` Ta 0 0 0`)
if err != nil {
t.Fatalf("expected no error, but got %s", err)
}
if rr == nil {
t.Fatal(`expected a normal RR, but got nil`)
}
}
var errTestReadError = &Error{"test error"}
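
// errReader is an io.Reader that always fails with errTestReadError.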
type errReader struct{}
func (errReader) Read(p []byte) (int, error) { return 0, errTestReadError }
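
// TestParseZoneReadError checks that a read error from the underlying reader
// is surfaced by ReadRR and that no RR is returned.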
func TestParseZoneReadError(t *testing.T) {
rr, err := ReadRR(errReader{}, "")
if err == nil || !strings.Contains(err.Error(), errTestReadError.Error()) {
t.Errorf("expected error to contain %q, but got %v", errTestReadError, err)
}
if rr != nil {
t.Errorf("expected a nil RR, but got %v", rr)
}
}
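
// TestUnexpectedNewline checks that a newline in the middle of a record is
// reported with its position, while newlines inside braces are accepted.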
func TestUnexpectedNewline(t *testing.T) {
zone := `
example.com. 60 PX
1000 TXT 1K
`
zp := NewZoneParser(strings.NewReader(zone), "example.com.", "")
for _, ok := zp.Next(); ok; _, ok = zp.Next() {
}
const expect = `dns: unexpected newline: "\n" at line: 2:18`
if err := zp.Err(); err == nil || err.Error() != expect {
t.Errorf("expected error to contain %q, got %v", expect, err)
}
// Test that newlines inside braces still work.
zone = `
example.com. 60 PX (
1000 TXT 1K )
`
zp = NewZoneParser(strings.NewReader(zone), "example.com.", "")
var count int
for _, ok := zp.Next(); ok; _, ok = zp.Next() {
count++
}
if count != 1 {
t.Errorf("expected 1 record, got %d", count)
}
if err := zp.Err(); err != nil {
t.Errorf("unexpected error: %v", err)
}
}
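
// TestParseRFC3597InvalidLength checks that an RFC 3597 record whose declared
// RDATA length of 65536 does not fit in the 16-bit RDLENGTH field is rejected.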
func TestParseRFC3597InvalidLength(t *testing.T) {
// We need to space-separate the 00s, otherwise a single token would exceed
// the maximum token size of the zone lexer.
_, err := NewRR("example. 3600 CLASS1 TYPE1 \\# 65536 " + strings.Repeat("00 ", 65536))
if err == nil {
t.Error("should not have parsed excessively long RFC3579 record")
}
}
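
// TestParseKnownRRAsRFC3597 checks that known RR types written in the RFC 3597
// \# generic format are parsed into their concrete types, both with and
// without RDATA.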
func TestParseKnownRRAsRFC3597(t *testing.T) {
t.Run("with RDATA", func(t *testing.T) {
// This was found by oss-fuzz.
_, err := NewRR("example. 3600 tYpe44 \\# 03 75 0100")
if err != nil {
t.Errorf("failed to parse RFC3579 format: %v", err)
}
rr, err := NewRR("example. 3600 CLASS1 TYPE1 \\# 4 7f000001")
if err != nil {
t.Fatalf("failed to parse RFC3579 format: %v", err)
}
if rr.Header().Rrtype != TypeA {
t.Errorf("expected TypeA (1) Rrtype, but got %v", rr.Header().Rrtype)
}
a, ok := rr.(*A)
if !ok {
t.Fatalf("expected *A RR, but got %T", rr)
}
localhost := net.IPv4(127, 0, 0, 1)
if !a.A.Equal(localhost) {
t.Errorf("expected A with IP %v, but got %v", localhost, a.A)
}
})
t.Run("without RDATA", func(t *testing.T) {
rr, err := NewRR("example. 3600 CLASS1 TYPE1 \\# 0")
if err != nil {
t.Fatalf("failed to parse RFC3579 format: %v", err)
}
if rr.Header().Rrtype != TypeA {
t.Errorf("expected TypeA (1) Rrtype, but got %v", rr.Header().Rrtype)
}
a, ok := rr.(*A)
if !ok {
t.Fatalf("expected *A RR, but got %T", rr)
}
if len(a.A) != 0 {
t.Errorf("expected A with empty IP, but got %v", a.A)
}
})
}
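
// BenchmarkNewRR measures parsing a single MX record from a string.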
func BenchmarkNewRR(b *testing.B) {
const name1 = "12345678901234567890123456789012345.12345678.123."
const s = name1 + " 3600 IN MX 10 " + name1
for n := 0; n < b.N; n++ {
_, err := NewRR(s)
if err != nil {
b.Fatal(err)
}
}
}
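
// BenchmarkReadRR measures parsing a single MX record from a reader that only
// implements io.Reader.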
func BenchmarkReadRR(b *testing.B) {
const name1 = "12345678901234567890123456789012345.12345678.123."
const s = name1 + " 3600 IN MX 10 " + name1 + "\n"
for n := 0; n < b.N; n++ {
r := struct{ io.Reader }{strings.NewReader(s)}
// r is now only an io.Reader and won't benefit from the
// io.ByteReader special-case in zlexer.Next.
_, err := ReadRR(r, "")
if err != nil {
b.Fatal(err)
}
}
}
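
// benchZone exercises comments, parentheses and several RR types for
// BenchmarkZoneParser.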
const benchZone = `
foo. IN A 10.0.0.1 ; this is comment 1
foo. IN A (
10.0.0.2 ; this is comment 2
)
; this is comment 3
foo. IN A 10.0.0.3
foo. IN A ( 10.0.0.4 ); this is comment 4
foo. IN A 10.0.0.5
; this is comment 5
foo. IN A 10.0.0.6
foo. IN DNSKEY 256 3 5 AwEAAb+8l ; this is comment 6
foo. IN NSEC miek.nl. TXT RRSIG NSEC; this is comment 7
foo. IN TXT "THIS IS TEXT MAN"; this is comment 8
`
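
// BenchmarkZoneParser measures parsing benchZone with the ZoneParser API.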
func BenchmarkZoneParser(b *testing.B) {
for n := 0; n < b.N; n++ {
zp := NewZoneParser(strings.NewReader(benchZone), "example.org.", "")
for _, ok := zp.Next(); ok; _, ok = zp.Next() {
}
if err := zp.Err(); err != nil {
b.Fatal(err)
}
}
}