* Merge setRR into ZoneParser.Next
* Remove file argument from RR.parse
This was only used to fill in the ParseError file field. Instead we now
fill in that field in ZoneParser.Next.
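A hedged sketch of what this looks like inside ZoneParser.Next (field names such as zp.file and zp.parseErr are assumptions, not the library's confirmed internals):

    // Sketch: stamp the file name onto any ParseError returned by the
    // per-record parse, which no longer receives a file argument.
    if err := rr.parse(zp.c, zp.origin); err != nil {
        err.file = zp.file // the file field is now set in one place
        zp.parseErr = err
        return nil, false
    }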
* Move dynamic update check out of RR.parse
This consolidates all the dynamic update checks into one place.
* Check for unexpected newline before parsing RR data
* Move rr.parse call into if-statement
* Allow dynamic updates for TKEY and RFC3597 records
* Document that ParseError file field is unset from parse
* Inline allowDynamicUpdate into ZoneParser.Next
* Improve and simplify TestUnexpectedNewline
* Prevent IsDuplicate from panicking on wrong Rrtype
* Replace the isDuplicateRdata switch with an interface method
This is substantially simpler and avoids the need to use reflect.
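For illustration, the per-type method might look like the following sketch for the A record (the method name isDuplicate is from the commits above; the exact generated body is an assumption):

    // isDuplicate is implemented on each RR type; a mismatched Rrtype
    // simply compares unequal instead of panicking or using reflect.
    func (r1 *A) isDuplicate(_r2 RR) bool {
        r2, ok := _r2.(*A)
        if !ok {
            return false
        }
        return r1.A.Equal(r2.A)
    }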
* Add equal RR case to TestDuplicateWrongRrtype
* Move RR_Header duplicate checking back into IsDuplicate
This restores the previous behaviour that RR_Headers were never
equal to each other.
* Revert "Move RR_Header duplicate checking back into IsDuplicate"
This reverts commit a3d7eba50825d546074d81084e639bbecf9cbb57.
* Run go fmt on package
This was a mistake when I merged the master branch into this PR.
* Move isDuplicate method to end of RR interface
* Eliminate Variable bool from parserFunc
Instead we now check whether the last token read from the zlexer was
a zNewline or zEOF. The error check above should be tripped for any
record that ends prematurely.
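Sketched, the check replacing the Variable bool looks something like this (zNewline and zEOF are the package's token kinds; the surrounding shape and error message are illustrative):

    // The last token read must close the line or the input; anything
    // else means the record has trailing rdata or ended prematurely.
    switch l.value {
    case zNewline, zEOF:
        // record ended where expected
    default:
        return &ParseError{err: "unexpected token at end of record", lex: l}
    }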
* Use an interface method for parsing zone file records
* Prevent panic in TestOmittedTTL if no regexp match
* Move slurpRemainder into fixed length parse functions
This is consistent with the original logic in setRR and avoids potential
edge cases.
* Parse synthetic records according to RFC 3597
These records lack a presentation format and cannot be parsed otherwise.
This behaviour is consistent with how this previously operated.
* Use an interface method for unpacking records
* Eliminate err var declaration from unpack functions
* Remove pointless r.Data assignment in PrivateRR.unpack
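As a sketch of the resulting shape (assuming the package's existing unpackUint16 and UnpackDomainName helpers; the real generated code may differ), note how named results avoid a separate var err declaration:

    func (rr *MX) unpack(msg []byte, off int) (off1 int, err error) {
        rr.Preference, off, err = unpackUint16(msg, off)
        if err != nil {
            return off, err
        }
        rr.Mx, off, err = UnpackDomainName(msg, off)
        return off, err
    }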
* Add comment getter to zlexer
* Use zlexer.Comment instead of lex.comment
* Move comment handling out of setRR code
* Move comment field from lex to zlexer
* Eliminate ZoneParser.com field
* Return empty string from zlexer.Comment on error
* Only reset zlexer.comment field once per Next
* Remove zlexer merge TODO
I'm pretty sure these have to remain separate, which is okay.
* Avoid setting the Rdlength field when packing
The offset of the end of the header is now returned from the RR.pack
method, with the RDLENGTH record field being written in packRR.
To maintain compatibility with callers of PackRR who might be relying
on this old behaviour, PackRR will now set rr.Header().Rdlength for
external callers. Care must be taken by callers to ensure this won't
cause a data-race.
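A hedged sketch of the packing flow (helper and variable names are illustrative; ErrRdata is the package's existing error value):

    // rr.pack now also reports headerEnd, the offset just past the
    // 2-byte RDLENGTH placeholder at the end of the header.
    headerEnd, off1, err := rr.pack(msg, off, compression, compress)
    if err != nil {
        return off, err
    }
    rdlength := off1 - headerEnd
    if int(uint16(rdlength)) != rdlength { // rdata too long for a uint16
        return off, ErrRdata
    }
    binary.BigEndian.PutUint16(msg[headerEnd-2:], uint16(rdlength))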
* Prevent panic if TestClientLocalAddress fails
This came up during testing of the previous change.
* Change style of overflow check in packRR
* Reduce compression memory use with map[string]uint16
map[string]uint16 uses 25% less memory per entry than a map[string]int:
(16+2)/(16+8) = 0.75. All entries in the compression map are bounded by
maxCompressionOffset, which is 14 bits and fits within a uint16.
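Sketch of the constraint that makes uint16 safe (the map usage is illustrative):

    // DNS compression pointers carry a 14-bit offset (RFC 1035 §4.1.4),
    // so every offset stored in the map fits in a uint16.
    const maxCompressionOffset = 1 << 14

    compression := make(map[string]uint16)
    if off < maxCompressionOffset {
        compression[name] = uint16(off)
    }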
* Add PackMsg benchmark with more RRs
* Add a comment to the compressionMap struct
* Remove fullSize return from compressionLenSearch
This wasn't used anywhere but TestCompressionLenSearch, and was very
wrong.
* Add generated compressedLen functions and use them
This replaces the confusing and complicated compressionLenSlice
function.
* Use compressedLenWithCompressionMap even for uncompressed
This leaves the len() functions unused and they'll soon be removed.
This also fixes the off-by-one error of compressedLen when a (Q)NAME
is ".".
* Use Len helper instead of RR.len private method
* Merge len and compressedLen functions
* Merge compressedLen helper into Msg.Len
* Remove compress bool from compressedLenWithCompressionMap
* Merge map insertion into compressionLenSearch
This eliminates the need to loop over the domain name twice when we're
compressing the name.
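A simplified single-pass sketch of the idea (a hypothetical compressionSearchInsert standing in for the real compressionLenSearch; it ignores escaped dots, which the real code must handle):

    // Walk each label suffix once: return the first already-seen
    // suffix, inserting unseen suffixes into the map along the way.
    func compressionSearchInsert(c map[string]uint16, name string, msgOff int) (uint16, bool) {
        for off := 0; off < len(name); {
            if pos, ok := c[name[off:]]; ok {
                return pos, true
            }
            if msgOff+off < maxCompressionOffset {
                c[name[off:]] = uint16(msgOff + off)
            }
            i := strings.IndexByte(name[off:], '.')
            if i < 0 {
                break
            }
            off += i + 1
        }
        return 0, false
    }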
* Use compressedNameLen for NSEC.NextDomain
This was a mistake.
* Remove compress from RR.len
* Add test case for multiple questions length
* Add test case for MINFO and SOA compression
These are the only RRs with multiple compressible names within the same
RR, and they were previously broken.
* Rename compressedNameLen to domainNameLen
It also handles the length of uncompressed domain names.
* Use off directly instead of len(s[:off])
* Move initial maxCompressionOffset check out of compressionLenMapInsert
This should allow us to avoid the call overhead of
compressionLenMapInsert in certain limited cases and may result in a
slight performance increase.
compressionLenMapInsert still has a maxCompressionOffset check inside
the for loop.
* Rename compressedLenWithCompressionMap to msgLenWithCompressionMap
This better reflects that it also calculates the uncompressed length.
* Merge TestMsgCompressMINFO with TestMsgCompressSOA
They're both testing the same thing.
* Remove compressionLenMapInsert
compressionLenSearch does everything compressionLenMapInsert did anyway.
* Only call compressionLenSearch in one place in domainNameLen
* Split if statement in domainNameLen
The last two commits worsened the performance of domainNameLen
noticeably; this change restores its original performance.
name                       old time/op  new time/op  delta
MsgLength-12                550ns ±13%   510ns ±21%     ~     (p=0.050 n=10+10)
MsgLengthNoCompression-12  26.9ns ± 2%  27.0ns ± 1%     ~     (p=0.198 n=9+10)
MsgLengthPack-12           2.30µs ±12%  2.26µs ±16%     ~     (p=0.739 n=10+10)
MsgLengthMassive-12        32.9µs ± 7%  32.0µs ±10%     ~     (p=0.243 n=9+10)
MsgLengthOnlyQuestion-12   9.60ns ± 1%  9.20ns ± 1%  -4.16%  (p=0.000 n=9+9)
* Remove stray newline from TestMsgCompressionMultipleQuestions
* Remove stray newline in length_test.go
This was introduced when resolving merge conflicts.
* Eliminate zlexer goroutine
This replaces the zlexer goroutine and channels with a zlexer struct
that maintains state and provides a channel-like API.
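The pattern, reduced to a self-contained toy (this shows the shape of the change, not the library's actual lexer):

    package main

    import "fmt"

    type token struct{ text string }

    // lexer keeps its state in the struct instead of on a goroutine stack.
    type lexer struct {
        src []string
        i   int
    }

    // Next mimics a channel receive: `tok, ok := <-ch` becomes l.Next().
    func (l *lexer) Next() (token, bool) {
        if l.i >= len(l.src) {
            return token{}, false
        }
        t := token{l.src[l.i]}
        l.i++
        return t, true
    }

    func main() {
        l := &lexer{src: []string{"example.org.", "3600", "IN", "A", "192.0.2.1"}}
        for tok, ok := l.Next(); ok; tok, ok = l.Next() {
            fmt.Println(tok.text)
        }
    }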
* Eliminate klexer goroutine
This replaces the klexer goroutine and channels with a klexer struct
that maintains state and provides a channel-like API.
* Merge scan into zlexer and klexer
This does result in tokenText existing twice, but it's pretty simple
and small so it's not that bad.
* Avoid using text/scanner.Position to track position
* Track escape within zlexer.Next
* Avoid zl.commt check on space and tab in zlexer
* Track stri within zlexer.Next
* Track comi within zlexer.Next
There is one special case at the start of a comment that needs to be
handled, otherwise this is as simple as stri was.
* Use a single token buffer in zlexer
This is safe as there is never both a non-empty string buffer and a
non-empty comment buffer.
* Don't hardcode length of zl.tok in zlexer
* Eliminate lex.length field
This is always set to len(l.token) and is only queried in a few places.
It was added in 47cc5b052d without any
obvious need.
* Add whitespace to klexer.Next
* Track lex within klexer.Next
* Use a strings.Builder in klexer.Next
* Simplify : case in klexer.Next
* Add whitespace to zlexer.Next
* Change for loop style in zlexer.Next and klexer.Next
* Surface read errors in zlexer
* Surface read errors from klexer
* Remove debug line from parseKey
* Rename tokenText to readByte
* Make readByte return ok bool
Also change the for loop style to match the Next for loops.
* Make readByte errors sticky
klexer.Next calls readByte separately from within the loop. Without
readByte being sticky, an error that occurs during that readByte call
may be lost.
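Roughly (the readErr field name is an assumption):

    // readByte with a sticky error: after the first failure, every
    // subsequent call keeps reporting !ok rather than losing the error.
    func (zl *zlexer) readByte() (byte, bool) {
        if zl.readErr != nil {
            return 0, false
        }
        c, err := zl.br.ReadByte()
        if err != nil {
            zl.readErr = err // stays set; surfaced to the caller later
            return 0, false
        }
        return c, true
    }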
* Panic in testRR if the error is non-nil
* Add whitespace and unify field setting in zlexer.Next
* Remove eof fields from zlexer and klexer
With readByte having sticky errors, this is no longer needed. zl.eof = true
was also in the wrong place and could mask an unbalanced brace error.
* Merge zl.tok blocks in zlexer.Next
* Split the tok buffer into separate string and comment buffers
The invariant of stri > 0 && comi > 0 never being true was broken when
x == '\n' && !zl.quote && zl.commt && zl.brace != 0 (the
"If not in a brace this ends the comment AND the RR" block).
Split the buffer back out into two separate buffers to avoid clobbering.
* Replace token slices with arrays in zlexer
* Add a NewRR benchmark
* Move token buffers into zlexer.Next
These don't need to be retained across Next calls and can be stack
allocated inside Next. This drastically reduces memory consumption as
they accounted for nearly half of all the memory used.
name      old alloc/op  new alloc/op  delta
NewRR-12   9.72kB ± 0%   4.98kB ± 0%  -48.72%  (p=0.000 n=10+10)
* Add a ReadRR benchmark
Unlike NewRR, this will use an io.Reader that does not implement any
methods aside from Read. In particular it does not implement
io.ByteReader.
* Avoid using a bufio.Reader for io.ByteReader readers
At the same time use a smaller buffer size of 1KiB rather than the
bufio.NewReader default of 4KiB.
name       old time/op    new time/op    delta
NewRR-12    11.0µs ± 3%     9.5µs ± 2%   -13.77%  (p=0.000 n=9+10)
ReadRR-12   11.2µs ±16%     9.8µs ± 1%   -13.03%  (p=0.000 n=10+10)

name       old alloc/op   new alloc/op   delta
NewRR-12    4.98kB ± 0%    0.81kB ± 0%   -83.79%  (p=0.000 n=10+10)
ReadRR-12   4.87kB ± 0%    1.82kB ± 0%   -62.73%  (p=0.000 n=10+10)

name       old allocs/op  new allocs/op  delta
NewRR-12      19.0 ± 0%      17.0 ± 0%   -10.53%  (p=0.000 n=10+10)
ReadRR-12     19.0 ± 0%      19.0 ± 0%      ~     (all equal)
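Sketched, the reader setup described above looks something like:

    // Only wrap when the caller's reader can't already serve single bytes.
    br, ok := r.(io.ByteReader)
    if !ok {
        br = bufio.NewReaderSize(r, 1024) // 1KiB, not bufio.NewReader's 4KiB default
    }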
* Surface any remaining comment from zlexer.Next
* Improve comment handling in zlexer.Next
This both fixes a regression where comments could be lost under certain
circumstances and now emits comments that occur within braces.
* Remove outdated comment from zlexer.Next and klexer.Next
* Delay converting LF to space in braced comment
* Fixup TestParseZoneComments
* Remove tokenUpper field from lex
Not computing this for every token, and instead computing it only
when needed, is a substantial performance improvement.
name       old time/op    new time/op    delta
NewRR-12    9.56µs ± 0%    6.30µs ± 1%   -34.08%  (p=0.000 n=9+10)
ReadRR-12   9.93µs ± 1%    6.67µs ± 1%   -32.77%  (p=0.000 n=10+10)

name       old alloc/op   new alloc/op   delta
NewRR-12      824B ± 0%      808B ± 0%    -1.94%  (p=0.000 n=10+10)
ReadRR-12   1.83kB ± 0%    1.82kB ± 0%    -0.87%  (p=0.000 n=10+10)

name       old allocs/op  new allocs/op  delta
NewRR-12      17.0 ± 0%      17.0 ± 0%      ~     (all equal)
ReadRR-12     19.0 ± 0%      19.0 ± 0%      ~     (all equal)
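Instead of an uppercased copy stored on every token, the few call sites that need case-insensitive matching can uppercase lazily; a sketch, assuming the package's existing StringToType map:

    // compute the uppercase form only at keyword-lookup sites
    tokenUpper := strings.ToUpper(l.token)
    if t, ok := StringToType[tokenUpper]; ok {
        h.Rrtype = t
    }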
* Update ParseZone documentation to match comment changes
The zlexer code was changed to return comments more often, so update the
ParseZone documentation to match.
copyHeader() is redundant: we allocate a header and then copy the
non-pointer elements into it. We don't need to do this, because if we
just assign rr.Hdr to something else we get the same result.
Remove copyHeader() and the generation and use of it in ztypes.go.
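For example, a generated copy function can embed the header by value (a sketch based on the A type; copyIP stands for a helper that deep-copies the address):

    // assigning rr.Hdr copies all of its non-pointer fields at once,
    // so no copyHeader call is needed
    func (rr *A) copy() RR {
        return &A{rr.Hdr, copyIP(rr.A)}
    }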
Make the reflection types a blacklist; these types use (or should use)
the tag 'size-xxx' in their struct definitions: HIP, IPSECKEY, NSEC3,
TSIG. All other types don't use reflection anymore.
* Return a pointer to the header when there is no rdata; this restores old
behavior. The rest of the conversion mostly hangs on getting size-hex
right, but then packStruct and packStructValue and the unpack variant
can be killed.
* Generate pack and unpack for all embedded types as well.
* Fix PrivateRRs: register an unpack function as well when you register
a new PrivateRR.
* Add the tags octet, nsec, []domains and more to msg_helper.go.
Add a dns:txt parsing helper to prevent compile errors. This allows
us to generate all unpack/pack functions.
Add pack to the RR interface definition and add this method to
PrivateRR.
We still use typeToUnpack to select which types don't use reflection.
Add a function that dedups a list of RRs. It works on strings, which
adds garbage, but this seems to be the least intrusive approach and
takes the least amount of memory; see the sketch below.
Some fmt changes snuck in as well.
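A sketch of the string-keyed dedup (the name dedup and the exact signature are illustrative; keying on the RR's text form allocates, but needs no reflection):

    func dedup(rrs []RR) []RR {
        seen := make(map[string]struct{}, len(rrs))
        out := rrs[:0]
        for _, rr := range rrs {
            key := rr.String()
            if _, ok := seen[key]; ok {
                continue
            }
            seen[key] = struct{}{}
            out = append(out, rr)
        }
        return out
    }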