When migrating zones to CoreDNS, it did not accept private key files
that the former BIND setup gracefully accepted. It turned out they had a
trailing newline for whatever reason. The dns library should handle
them gracefully, too.
* - implement deep-copy for OPT records + simple UT
* - add ztypes.go (generated).
* - properly comment the specific behavior for EDNS0
* - remove too narrow UT + down-scope copy() method to package level only
* - tune comment
* RFC 1996 allows SOA in answer in notify
The answer section of a notify can contain a SOA record that we should
not ignore in the DefaultAcceptFunc.
* End sentence
Signed-off-by: Miek Gieben <miek@miek.nl>
* Prevent IsDuplicate from panicking on wrong Rrtype
* Replace the isDuplicateRdata switch with an interface method
This is substantially simpler and avoids the need to use reflect.
* Add equal RR case to TestDuplicateWrongRrtype
* Move RR_Header duplicate checking back into IsDuplicate
This restores the previous behaviour that RR_Headers were never
equal to each other.
* Revert "Move RR_Header duplicate checking back into IsDuplicate"
This reverts commit a3d7eba50825d546074d81084e639bbecf9cbb57.
* Run go fmt on package
This was a mistake when I merged the master branch into this PR.
* Move isDuplicate method to end of RR interface
* Eliminate Variable bool from parserFunc
Instead we now check whether the last token read from the zlexer was
a zNewline or zEOF. The error check above should be tripped for any
record that ends prematurely.
* Use an interface method for parsing zone file records
* Prevent panic in TestOmittedTTL if no regexp match
* Move slurpRemainder into fixed length parse functions
This is consistent with the original logic in setRR and avoids potential
edge cases.
* Parse synthetic records according to RFC 3597
These records lack a presentation format and cannot be parsed otherwise.
This behaviour is consistent with how this previously operated.
* Flatten goroutine inside goroutine in Transfer.In
* Return an error for unknown question types
Previously this would just be silently ignored, leaving nothing to close
the returned channel or return an error.
* Avoid calling RR.Header more than once per RR
Header is an interface method so there's non-zero overhead when calling
it.
* Reset entire RR_Header in SIG.Sign
This is equivalent (while also clearing Rdlength) and simpler.
* Fork packDomainName for IsDomainName
* Eliminate msg buffer from packDomainName2
* Eliminate compression code from packDomainName2
* Remove off argument and return from packDomainName2
* Remove bs buffer from packDomainName2
* Merge packDomainName2 into IsDomainName
* Eliminate root label special case from IsDomainName
* Remove ls variable from IsDomainName
* Fixup comments in IsDomainName
* Remove msg == nil special cases from packDomainName
* Eliminate lenmsg variable from packDomainName
* Eliminate label counting from packDomainName
* Change off length check in IsDomainName
* Fix IsDomainName for escaped names
* Use strings.HasSuffix for IsFqdn
* Revert "Use strings.HasSuffix for IsFqdn"
I'll submit this as a separate PR.
This reverts commit 80bf8c83700d121ea45edac0f00db52817498166.
* Cross reference IsDomainName and packDomainName
* Correct IsDomainName max length comment
* Use an interface method for unpacking records
* Eliminate err var declaration from unpack functions
* Remove pointless r.Data assignment in PrivateRR.unpack
* Add comment getter to zlexer
* Use zlexer.Comment instead of lex.comment
* Move comment handling out of setRR code
* Move comment field from lex to zlexer
* Eliminate ZoneParser.com field
* Return empty string from zlexer.Comment on error
* Only reset zlexer.comment field once per Next
* Remove zlexer merge TODO
I'm pretty sure these have to remain separate which is okay.
Sorely missing from this library. Add it. As there is no presentation
format, the String method for this type puts a comment in front of it.
Signed-off-by: Miek Gieben <miek@miek.nl>
* Don't reject Nscount > 0
An IXFR request can carry a SOA RR in the NS (authority) section
RFC 1995, section 3: https://tools.ietf.org/html/rfc1995
* Only one RR in the NS section is acceptable
* Remove URL from comment
* Simplify TKEY presentation format
Just add ";" in front of it, instead of the whole pseudo option
text.
Fixes #855
Signed-off-by: Miek Gieben <miek@miek.nl>
* Add more fields to presentation format - convert time using the RRSIG routines
Signed-off-by: Miek Gieben <miek@miek.nl>
* Avoid setting the Rdlength field when packing
The offset of the end of the header is now returned from the RR.pack
method, with the RDLENGTH record field being written in packRR.
To maintain compatibility with callers of PackRR who might be relying
on this old behaviour, PackRR will now set rr.Header().Rdlength for
external callers. Care must be taken by callers to ensure this won't
cause a data-race.
* Prevent panic if TestClientLocalAddress fails
This came up during testing of the previous change.
* Change style of overflow check in packRR
* Reduce compression memory use with map[string]uint16
map[string]uint16 uses 25% less memory per-entry than a map[string]int
(16+2)/(16+8) = 0.75. All entries in the compression map are bound by
maxCompressionOffset which is 14-bits and fits within a uint16.
* Add PackMsg benchmark with more RRs
* Add a comment to the compressionMap struct
* Pretty print test compression map differences
* Use compressionMapsDifference in TestPackDomainNameCompressionMap
This isn't strictly needed as it only contains a small number of
entries, but is consistent nonetheless.
* Fix map ordering in compressionMapsDifference
* Stop compressing names in RT records
Although RFC 1183 allows names in the RT record to be compressed with:
"The concrete encoding is identical to the MX RR."
RFC 3597 specifically prohibits compressing names in any record not
defined in RFC 1035.
* Add comment to RT struct regarding compression
* Fix compressed length calculations for escaped names
* Add Len benchmark for escaped name
* Fix length with escaping after compression point
* Avoid calling escapedNameLen multiple times in domainNameLen
* Use regular quotes for question in TestMsgCompressLengthEscaped
* Remove fullSize return from compressionLenSearch
This wasn't used anywhere but TestCompressionLenSearch, and was very
wrong.
* Add generated compressedLen functions and use them
This replaces the confusing and complicated compressionLenSlice
function.
* Use compressedLenWithCompressionMap even for uncompressed
This leaves the len() functions unused and they'll soon be removed.
This also fixes the off-by-one error of compressedLen when a (Q)NAME
is ".".
* Use Len helper instead of RR.len private method
* Merge len and compressedLen functions
* Merge compressedLen helper into Msg.Len
* Remove compress bool from compressedLenWithCompressionMap
* Merge map insertion into compressionLenSearch
This eliminates the need to loop over the domain name twice when we're
compressing the name.
* Use compressedNameLen for NSEC.NextDomain
This was a mistake.
* Remove compress from RR.len
* Add test case for multiple questions length
* Add test case for MINFO and SOA compression
These are the only RRs with multiple compressible names within the same
RR, and they were previously broken.
* Rename compressedNameLen to domainNameLen
It also handles the length of uncompressed domain names.
* Use off directly instead of len(s[:off])
* Move initial maxCompressionOffset check out of compressionLenMapInsert
This should allow us to avoid the call overhead of
compressionLenMapInsert in certain limited cases and may result in a
slight performance increase.
compressionLenMapInsert still has a maxCompressionOffset check inside
the for loop.
* Rename compressedLenWithCompressionMap to msgLenWithCompressionMap
This better reflects that it also calculates the uncompressed length.
* Merge TestMsgCompressMINFO with TestMsgCompressSOA
They're both testing the same thing.
* Remove compressionLenMapInsert
compressionLenSearch does everything compressionLenMapInsert did anyway.
* Only call compressionLenSearch in one place in domainNameLen
* Split if statement in domainNameLen
The last two commits worsened the performance of domainNameLen
noticeably; this change restores its original performance.
name old time/op new time/op delta
MsgLength-12 550ns ±13% 510ns ±21% ~ (p=0.050 n=10+10)
MsgLengthNoCompression-12 26.9ns ± 2% 27.0ns ± 1% ~ (p=0.198 n=9+10)
MsgLengthPack-12 2.30µs ±12% 2.26µs ±16% ~ (p=0.739 n=10+10)
MsgLengthMassive-12 32.9µs ± 7% 32.0µs ±10% ~ (p=0.243 n=9+10)
MsgLengthOnlyQuestion-12 9.60ns ± 1% 9.20ns ± 1% -4.16% (p=0.000 n=9+9)
* Remove stray newline from TestMsgCompressionMultipleQuestions
* Remove stray newline in length_test.go
This was introduced when resolving merge conflicts.