* Avoid setting the Rdlength field when packing
The offset of the end of the header is now returned from the RR.pack
method, with the RDLENGTH record field being written in packRR.
To maintain compatibility with callers of PackRR who might be relying
on this old behaviour, PackRR will now set rr.Header().Rdlength for
external callers. Care must be taken by callers to ensure this won't
cause a data-race.
* Prevent panic if TestClientLocalAddress fails
This came up during testing of the previous change.
* Change style of overflow check in packRR
* Reduce compression memory use with map[string]uint16
map[string]uint16 uses 25% less memory per-entry than a map[string]int
(16+2)/(16+8) = 0.75. All entries in the compression map are bounded by
maxCompressionOffset, which is 14 bits and fits within a uint16.
* Add PackMsg benchmark with more RRs
* Add a comment to the compressionMap struct
* Pretty print test compression map differences
* Use compressionMapsDifference in TestPackDomainNameCompressionMap
This isn't strictly needed as it only contains a small number of
entries, but is consistent nonetheless.
* Fix map ordering in compressionMapsDifference
* Stop compressing names in RT records
Although RFC 1183 allows names in the RT record to be compressed with:
"The concrete encoding is identical to the MX RR."
RFC 3597 specifically prohibits compressing names in any record not
defined in RFC 1035.
* Add comment to RT struct regarding compression
* Fix compressed length calculations for escaped names
* Add Len benchmark for escaped name
* Fix length with escaping after compression point
* Avoid calling escapedNameLen multiple times in domainNameLen
* Use regular quotes for question in TestMsgCompressLengthEscaped
* Remove fullSize return from compressionLenSearch
This wasn't used anywhere but TestCompressionLenSearch, and was very
wrong.
* Add generated compressedLen functions and use them
This replaces the confusing and complicated compressionLenSlice
function.
* Use compressedLenWithCompressionMap even for uncompressed
This leaves the len() functions unused and they'll soon be removed.
This also fixes the off-by-one error of compressedLen when a (Q)NAME
is ".".
* Use Len helper instead of RR.len private method
* Merge len and compressedLen functions
* Merge compressedLen helper into Msg.Len
* Remove compress bool from compressedLenWithCompressionMap
* Merge map insertion into compressionLenSearch
This eliminates the need to loop over the domain name twice when we're
compressing the name.
* Use compressedNameLen for NSEC.NextDomain
This was a mistake.
* Remove compress from RR.len
* Add test case for multiple questions length
* Add test case for MINFO and SOA compression
These are the only RRs with multiple compressible names within the same
RR, and they were previously broken.
* Rename compressedNameLen to domainNameLen
It also handles the length of uncompressed domain names.
* Use off directly instead of len(s[:off])
* Move initial maxCompressionOffset check out of compressionLenMapInsert
This should allow us to avoid the call overhead of
compressionLenMapInsert in certain limited cases and may result in a
slight performance increase.
compressionLenMapInsert still has a maxCompressionOffset check inside
the for loop.
* Rename compressedLenWithCompressionMap to msgLenWithCompressionMap
This better reflects that it also calculates the uncompressed length.
* Merge TestMsgCompressMINFO with TestMsgCompressSOA
They're both testing the same thing.
* Remove compressionLenMapInsert
compressionLenSearch does everything compressionLenMapInsert did anyway.
* Only call compressionLenSearch in one place in domainNameLen
* Split if statement in domainNameLen
The last two commits noticeably worsened the performance of
domainNameLen; this change restores its original performance.
name                       old time/op  new time/op  delta
MsgLength-12                550ns ±13%   510ns ±21%    ~     (p=0.050 n=10+10)
MsgLengthNoCompression-12  26.9ns ± 2%  27.0ns ± 1%    ~     (p=0.198 n=9+10)
MsgLengthPack-12           2.30µs ±12%  2.26µs ±16%    ~     (p=0.739 n=10+10)
MsgLengthMassive-12        32.9µs ± 7%  32.0µs ±10%    ~     (p=0.243 n=9+10)
MsgLengthOnlyQuestion-12   9.60ns ± 1%  9.20ns ± 1%  -4.16%  (p=0.000 n=9+9)
* Remove stray newline from TestMsgCompressionMultipleQuestions
* Remove stray newline in length_test.go
This was introduced when resolving merge conflicts.
* Reduce allocations in UnpackDomainName by better sizing slice
The maximum size of a domain name in presentation format is bounded by
the maximum length of a name in wire octet form and the maximum length
of a label. As s doesn't escape from UnpackDomainName, we can safely
give it the maximum capacity and it will never need to grow.
* Benchmark UnpackDomainName with longest names possible
* Rename BenchmarkUnpackDomainNameLongestEscaped to match
* Improve maxDomainNamePresentationLength comment
* Further improve maxDomainNamePresentationLength comment
* Simplify maxDomainNameWireOctets checking in UnpackDomainName
* Don't return too long name in UnpackDomainName
* Simplify root domain return in UnpackDomainName
* Bail early from UnpackDomainName when name is too long
This drastically reduces the amount of garbage created
in UnpackDomainName for certain malicious names.
The wire formatted name
"\x3Faaabbbcccdddeeefffggghhhiiijjjkkklllmmmnnnooopppqqqrrrssstttuuu\xC0\x00"
would previously generate 1936B of garbage (36112B since maxCompressionPointers
was raised) before returning the "too many compression pointers" error, while
it now generates just 384B of garbage.
* Change +1 budget comment to reflect spec
This better reflects what maxDomainNameWireOctets is actually measuring.
* Remove budget check from after loop in UnpackDomainName
This can never be tripped as budget is always checked immediately after
subtracting inside the loop.
* Improve UnpackDomainName documentation
* Generalize srv.Unsafe and make it pluggable. Also add a default accept
function that allows discarding malformed DNS messages very early on,
before we allocate and parse anything further.
Also re-use the client's message when sending a reply.
Signed-off-by: Miek Gieben <miek@miek.nl>
* Increase the maximum number of allowed compression pointers
* Add a Pack+Unpack test case for many compression pointers
* Clarify maxCompressionPointers comment
* Use range loops in Msg.packBufferWithCompressionMap
* Remove rr set variables in Msg.packBufferWithCompressionMap
* Move Header var down in Msg.packBufferWithCompressionMap
* Move stripTsig comment into Msg.Unpack
* Use map[string]struct{} for compression map in Len
map[string]int requires 8 bytes per entry to store the unused position
information.
* Add MsgLength benchmark with more RRs
* Pass dns.Compress explicitly to packBufferWithCompressionMap
* Avoid creating compression map for question only Msg
This idea was inspired by:
"Skip dname compression for replies with no answers."
https://www.nlnetlabs.nl/bugs-script/show_bug.cgi?id=235
* Continue compressing multiple questions
* Remove ErrTruncated from the library
ErrTruncated is removed. This (correctly) assumes that a truncated
message will still be fully formed. Any message that isn't fully formed
will (most likely) return an unpack error.
Any program using ErrTruncated will fail to compile when it updates to
this version. This is by design: you're doing it wrong. To check whether
a message was truncated, check the msg.Truncated boolean, assuming the
unpack didn't fail.
Fixes #814
Signed-off-by: Miek Gieben <miek@miek.nl>
* Restore tests
Signed-off-by: Miek Gieben <miek@miek.nl>