* Set Rdlength in fromRFC3597
This was a bug found by oss-fuzz. My bad (#1211).
* Limit maximum length of Rdata in (*RFC3597).parse
RDATA must be a 16-bit unsigned integer.
* Validate Rdlength and off in UnpackRRWithHeader
* Revert "Validate Rdlength and off in UnpackRRWithHeader"
This reverts commit 2f6a8811b944b100af7605e53a6fb164944a6d65.
* Use hex.DecodedLen in (*RFC3597).fromRFC3597
While this isn't done elsewhere, it is clearer and more obvious.
* Support parsing known RR types in RFC 3597 format
This is the format used for "Unknown DNS Resource Records", but it's
also useful to support parsing known RR types in this way.
RFC 3597 says:
An implementation MAY also choose to represent some RRs of known type
using the above generic representations for the type, class and/or
RDATA, which carries the benefit of making the resulting master file
portable to servers where these types are unknown. Using the generic
representation for the RDATA of an RR of known type can also be
useful in the case of an RR type where the text format varies
depending on a version, protocol, or similar field (or several)
embedded in the RDATA when such a field has a value for which no text
format is known, e.g., a LOC RR [RFC1876] with a VERSION other than
0.
Even though an RR of known type represented in the \# format is
effectively treated as an unknown type for the purpose of parsing the
RDATA text representation, all further processing by the server MUST
treat it as a known type and take into account any applicable type-
specific rules regarding compression, canonicalization, etc.
* Correct mistakes in TestZoneParserAddressAAAA
This was spotted when writing TestParseKnownRRAsRFC3597.
* Eliminate canParseAsRR
This has the advantage that concrete types will now be returned for
parsed ANY, NULL, OPT and TSIG records.
* Expand TestDynamicUpdateParsing for RFC 3597
This ensures we're properly handling empty RDATA for RFC 3597 parsed
records.
* Merge setRR into ZoneParser.Next
* Remove file argument from RR.parse
This was only used to fill in the ParseError file field. Instead we now
fill in that field in ZoneParser.Next.
* Move dynamic update check out of RR.parse
This consolidates all the dynamic update checks into one place.
* Check for unexpected newline before parsing RR data
* Move rr.parse call into if-statement
* Allow dynamic updates for TKEY and RFC3597 records
* Document that ParseError file field is unset from parse
* Inline allowDynamicUpdate into ZoneParser.Next
* Improve and simplify TestUnexpectedNewline
* Prevent IsDuplicate from panicking on wrong Rrtype
* Replace the isDuplicateRdata switch with an interface method
This is substantially simpler and avoids the need to use reflect.
* Add equal RR case to TestDuplicateWrongRrtype
* Move RR_Header duplicate checking back into IsDuplicate
This restores the previous behaviour that RR_Header values were never
equal to each other.
* Revert "Move RR_Header duplicate checking back into IsDuplicate"
This reverts commit a3d7eba50825d546074d81084e639bbecf9cbb57.
* Run go fmt on package
This was a mistake when I merged the master branch into this PR.
* Move isDuplicate method to end of RR interface
* Eliminate Variable bool from parserFunc
Instead we now check whether the last token read from the zlexer was
a zNewline or zEOF. The error check above should be tripped for any
record that ends prematurely.
* Use an interface method for parsing zone file records
* Prevent panic in TestOmittedTTL if no regexp match
* Move slurpRemainder into fixed length parse functions
This is consistent with the original logic in setRR and avoids potential
edge cases.
* Parse synthetic records according to RFC 3597
These records lack a presentation format and cannot be parsed otherwise.
This behaviour is consistent with how this previously operated.
* Use an interface method for unpacking records
* Eliminate err var declaration from unpack functions
* Remove pointless r.Data assignment in PrivateRR.unpack
* Avoid setting the Rdlength field when packing
The offset of the end of the header is now returned from the RR.pack
method, with the RDLENGTH record field being written in packRR.
To maintain compatibility with callers of PackRR who might be relying
on this old behaviour, PackRR will now set rr.Header().Rdlength for
external callers. Care must be taken by callers to ensure this won't
cause a data-race.
* Prevent panic if TestClientLocalAddress fails
This came up during testing of the previous change.
* Change style of overflow check in packRR
* Reduce compression memory use with map[string]uint16
A map[string]uint16 uses 25% less memory per entry than a
map[string]int: (16+2)/(16+8) = 0.75. All entries in the compression
map are bounded by maxCompressionOffset, which is 14 bits and fits
within a uint16.
* Add PackMsg benchmark with more RRs
* Add a comment to the compressionMap struct
* Remove fullSize return from compressionLenSearch
This wasn't used anywhere but TestCompressionLenSearch, and was very
wrong.
* Add generated compressedLen functions and use them
This replaces the confusing and complicated compressionLenSlice
function.
* Use compressedLenWithCompressionMap even for uncompressed
This leaves the len() functions unused and they'll soon be removed.
This also fixes the off-by-one error of compressedLen when a (Q)NAME
is ".".
* Use Len helper instead of RR.len private method
* Merge len and compressedLen functions
* Merge compressedLen helper into Msg.Len
* Remove compress bool from compressedLenWithCompressionMap
* Merge map insertion into compressionLenSearch
This eliminates the need to loop over the domain name twice when we're
compressing the name.
* Use compressedNameLen for NSEC.NextDomain
This was a mistake.
* Remove compress from RR.len
* Add test case for multiple questions length
* Add test case for MINFO and SOA compression
These are the only RRs with multiple compressible names within the same
RR, and they were previously broken.
* Rename compressedNameLen to domainNameLen
It also handles the length of uncompressed domain names.
* Use off directly instead of len(s[:off])
* Move initial maxCompressionOffset check out of compressionLenMapInsert
This should allow us to avoid the call overhead of
compressionLenMapInsert in certain limited cases and may result in a
slight performance increase.
compressionLenMapInsert still has a maxCompressionOffset check inside
the for loop.
* Rename compressedLenWithCompressionMap to msgLenWithCompressionMap
This better reflects that it also calculates the uncompressed length.
* Merge TestMsgCompressMINFO with TestMsgCompressSOA
They're both testing the same thing.
* Remove compressionLenMapInsert
compressionLenSearch does everything compressionLenMapInsert did anyway.
* Only call compressionLenSearch in one place in domainNameLen
* Split if statement in domainNameLen
The last two commits worsened the performance of domainNameLen
noticeably; this change restores its original performance.
name                       old time/op  new time/op  delta
MsgLength-12                550ns ±13%   510ns ±21%    ~     (p=0.050 n=10+10)
MsgLengthNoCompression-12  26.9ns ± 2%  27.0ns ± 1%    ~     (p=0.198 n=9+10)
MsgLengthPack-12           2.30µs ±12%  2.26µs ±16%    ~     (p=0.739 n=10+10)
MsgLengthMassive-12        32.9µs ± 7%  32.0µs ±10%    ~     (p=0.243 n=9+10)
MsgLengthOnlyQuestion-12   9.60ns ± 1%  9.20ns ± 1%  -4.16%  (p=0.000 n=9+9)
* Remove stray newline from TestMsgCompressionMultipleQuestions
* Remove stray newline in length_test.go
This was introduced when resolving merge conflicts.
* Remove copyHeader() and the generation and use of it in ztypes.go
copyHeader() is redundant: we allocate a header and then copy the
non-pointer elements into it. We don't need to do this, because if we
just assign rr.Hdr to something else we get the same result.
* Cleanup and removals
Gut rawmsg.go as most functions are not used. Reword some documentation.
Add more types to be checked for name compression.
* Yeah, we do use these
* Remove this function as well - it was only used once
Add dns:txt parsing helper to prevent compile errors. This allows
us to generate all unpack/pack functions.
Add pack to the RR interface definition and add this method to
PrivateRR.
We still use typeToUnpack to select which types don't use reflection.
Formatters are not needed; you can access the members just fine.
However, the rdata Field access functions are handy and non-trivial;
extend them and add a basic test.
The dns package implements String() for all RR types, but sometimes you will
need more flexibility. The functions Printf, Sprintf, etc. implement formatted
I/O for the RR type.
Printing
The verbs:
Generic part of RRs:
%N the owner name of the RR
%C the class: IN, CH, CLASS15, etc.
%D the TTL in seconds
%Y the type: MX, A, etc.
The rdata of each RR differs, so we allow each field to be printed as a string.
Rdata:
%0 the first rdata field
%1 the second rdata field
%2 the third rdata field
.. ...
%9 the ninth rdata field
%R all rdata fields
The rdata fields remain a TODO, but will be implemented using
reflection.
Changes to domain name packing and unpacking:
* Escape dot, backslash, brackets, double-quote, semi-colon and space
* Tab, line feed and carriage return become \t, \n and \r
Changes to TXT string packing and unpacking:
* Escape backslash and double-quote
* Tab, line feed and carriage return become \t, \n and \r
* Other unprintables to \DDD
Stringers do the equivalent of putting domain names and TXT strings
to the wire and back.
There is some duplication of logic. I found performance suffered when
I broke the logic out into smaller functions. I think this may have
been due to functions not being inlined for various reasons.