* Modify the SVCBAlpn to properly parse/print
* Remove debug
* Change SVCB test from reflect to loop
* Refactor SVCB code to reduce indentation
* When stringifying SVCBAlpn, use strings.Builder for whole process
* Update comment in svcb.go
Co-authored-by: Miek Gieben <miek@miek.nl>
* Describe why we use a specific size for the alpn buffer
Co-authored-by: Miek Gieben <miek@miek.nl>
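For illustration, a minimal sketch of the single-pass stringification (not the library's actual method; the real code also pre-grows the builder to the worst-case size described above, and escaping is simplified here):
~~~
// A minimal sketch, not the library's method: build the alpn
// presentation string in one pass with a strings.Builder.
// (Escaping simplified; assumes `import "strings"`.)
func alpnString(alpn []string) string {
	var sb strings.Builder
	for i, a := range alpn {
		if i > 0 {
			sb.WriteByte(',')
		}
		for j := 0; j < len(a); j++ {
			if a[j] == ',' || a[j] == '\\' {
				sb.WriteByte('\\') // escape separator and backslash
			}
			sb.WriteByte(a[j])
		}
	}
	return sb.String()
}
~~~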
* Add SVCB dohpath key
The parameter is being added in [its own IETF draft][1] and also being used in
the [IETF draft about Discovery of Designated Resolvers][2].
Additionally, the mappings of the numeric key values to strings are exported,
under names consistent with the already existing exported mappings, to make it
easier for the clients of the module to validate and print SVCB keys.
Testing was done by sending SVCB queries for the "_dns.resolver.arpa" domain to
OpenDNS's 146.112.41.2 server.
[1]: https://datatracker.ietf.org/doc/html/draft-ietf-add-svcb-dns-02
[2]: https://datatracker.ietf.org/doc/html/draft-ietf-add-ddr-06.html
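For illustration, a hedged sketch of that query using this module (assumes the SVCBDoHPath value type with its Template field, as added in this change):
~~~
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	m.SetQuestion("_dns.resolver.arpa.", dns.TypeSVCB)
	in, _, err := new(dns.Client).Exchange(m, "146.112.41.2:53")
	if err != nil {
		log.Fatal(err)
	}
	for _, rr := range in.Answer {
		svcb, ok := rr.(*dns.SVCB)
		if !ok {
			continue
		}
		for _, kv := range svcb.Value {
			if p, ok := kv.(*dns.SVCBDoHPath); ok {
				fmt.Println(p.Template) // e.g. "/dns-query{?dns}"
			}
		}
	}
}
~~~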
* Fix template length, docs; reverse some changes
* Remove incorrect validations; improve docs
* Support parsing known RR types in RFC 3597 format
This is the format used for "Unknown DNS Resource Records", but it's
also useful to support parsing known RR types in this way.
RFC 3597 says:
An implementation MAY also choose to represent some RRs of known type
using the above generic representations for the type, class and/or
RDATA, which carries the benefit of making the resulting master file
portable to servers where these types are unknown. Using the generic
representation for the RDATA of an RR of known type can also be
useful in the case of an RR type where the text format varies
depending on a version, protocol, or similar field (or several)
embedded in the RDATA when such a field has a value for which no text
format is known, e.g., a LOC RR [RFC1876] with a VERSION other than
0.
Even though an RR of known type represented in the \# format is
effectively treated as an unknown type for the purpose of parsing the
RDATA text representation, all further processing by the server MUST
treat it as a known type and take into account any applicable type-
specific rules regarding compression, canonicalization, etc.
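For illustration, with this change a known type written in the generic syntax parses into its concrete Go type:
~~~
// Sketch, assuming github.com/miekg/dns is imported as dns:
rr, err := dns.NewRR(`example.org. 3600 IN A \# 4 7f000001`)
if err != nil {
	panic(err)
}
a := rr.(*dns.A) // concrete *dns.A after this change, not *dns.RFC3597
fmt.Println(a.A) // 127.0.0.1
~~~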
* Correct mistakes in TestZoneParserAddressAAAA
This was spotted when writing TestParseKnownRRAsRFC3597.
* Eliminate canParseAsRR
This has the advantage that concrete types will now be returned for
parsed ANY, NULL, OPT and TSIG records.
* Expand TestDynamicUpdateParsing for RFC 3597
This ensures we're properly handling empty RDATA for RFC 3597 parsed
records.
This reduces the time it takes to run the tests. Use shorter timeouts
on clients to avoid waiting for the default timeouts.
It also reduces the iterations in some test functions; this doesn't
seem to impact the tests, indicating those numbers were random to begin
with.
Use shorter crypto keys, as we don't need the strength in tests.
Stop using Google Public DNS and other remotes in tests as well: it's
faster, keeps things local and avoids spilling info to Google.
This brings the test duration down from ~8s to ~2s on my machine, a 4x
reduction.
~~~
PASS
ok github.com/miekg/dns 2.046s
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.
PASS
ok github.com/miekg/dns 7.915s
~~~
Signed-off-by: Miek Gieben <miek@miek.nl>
* Implement SVCB
* Fix serialization and deserialization of double quotes
* More effort (?)
Four-month-old commit
* DEBUG
* _
* Presentation format serialization/deserialization
* _
Remove generated
* Progress on presentation format parse & write
* _
* Finish parsing presentation format
* Regenerate
* Pack unpack
* Move to svcb.go
scan_rr.go and types.go should be untouched now
* 🐛
Thanks ghedo
* Definitions
* TypeHTTPSSVC
* Generated
and isDuplicate
* Goodbye lenient functions
Now private key=value pairs have to be defined as structs too. They are no longer automatically named KeyNNNNN.
* Encode/decode
* Experimental svc
* Read method
* Implement some of the methods, use trick...
to report where the error is while reading. This should be applied to EDNS too. TODO: find out whether the case can contain only e := new(SVC_ALPN), with the rest moved out.
Also fix two compile errors.
* Add SVC_LOCAL methods, reorder, remove alpn value, bugs
* Errors
* Alpn, make it build
* Correct testsuite
* Fully implement parser
Change from keeping a state variable to reading in one iteration until the key=value pair is fully consumed
* Simplify and document
EDNS should be simplified too
* Attempt to fix fuzzer
And Alpn bug
* A bug and change type values to match @ghedo's implementation
* IP bug
Also, there are two IP-duplicating patterns, one with copy and one with append. Maybe change them to be consistent.
* Check for strictly increasing keys as required
* Don't panic on invalid alpn
* Redundant check, don't modify original array
* Size calculation
* Fix the fuzzer, match the style
* 65535 is reserved too, don't delay errors
* Check keyNNN, check for aliasform having values
* IPvNHint is an array
* Fix ipvNHint
* Rename everything
* Unrecognized keys according to the updated specification
* Skip zero-length structs in generators. Fix CI
* Doc cleanup
* Off by one
* Add parse tests
* Check if private key doesn't collide with known key, invalid tests
* Disallow IPv4 as IPv6. More tests.
Related #1107
* Style fixes
* More consistency, more tests
* 🐛 Deep copy as in the documentation
// append copies only the []net.IP slice headers, so a[0] and b[0]
// share backing bytes until b[0] is reassigned; a deep copy is needed.
a := make([]net.IP, 1)
a[0] = net.ParseIP("1.1.1.1").To4()
b := append(make([]net.IP, 0, 1), a...)
b[0] = net.ParseIP("3.1.1.1").To4()
fmt.Println(a[0][0]) // prints 1; writing b[0][0] = 3 instead would have changed a
* Make tests readable
* Move valid parse tests to different file
* 🐛 One of the previous commits was not fully committed
* Test binary single value encoding/decoding and full encode/decode
* Add worst-case growth to builders, 🐛 Wrong visible character range, redundant tests
* Testing improvements
And don't convert to IPv4 twice
* Doc update only
* Document worst case allocations
and an IPv6 string can be at most 39 characters long, not 40
* Redundant IP copy, consistent IPv6 behavior, fix deep copy
* isDuplicate for SVCB
* Optimizations
* echoconfig
* Svc => SVCB
* Fix CI
* Regenerate after REBASE (2)
Rebased twice on 15th and 20th May
* Rename svc, use escapeByte.
* Fix parsing whitespace between quotes, rename ECHOHOConfig
* resolve
Remove svcbFieldLen
Use reverseInt
Uppercase SVCB
Rename key_value
"invalid" => bad
Alpn comments
> 65535 check
Unneeded slices
* a little more
read => parse
IP array meaning
Force pushed because I forgot to change read in svcb_test.go
* HTTPSSVC -> HTTPS
* Use new values
* mandatory code
https://github.com/MikeBishop/dns-alt-svc/pull/205
* Resolve comments
Rename svcb-pairs
Remove SVCB_PRIVATE ranges
Comment on SVCB_KEY65535
ParseError return l.token
rename svcbKeyToString and svcbStringToKey
privatize SVCBKeyToString, SVCBStringToKey
* Refactor 1
Rename sorted, originalPairs
Use append instead of copy
Use svcb_RESERVED instead of 65535, with it now being private
"type SVCBKey uint16"
* Refactor 2
svcbKeyToString as method
svcbStringToKey updated after key 0
🐛 mandatory has missing key
Rename str
idx < 0
* Refactor 3
Use l.token as z
var key, value string
Comment wrap
0:
Sentences with '.'
keyValue => kv
* Refactor 4
* Refactor 5
len() int
* Refactor 6
* Refactor 7
* Test remove parsing
* Error messages
* Rewrite two estimate comments
* parse shouldn't modify original array 🐛
* Remove two unneeded comments
* Address review comments
Push 2 because can't build fuzzer python
Push 3 to try again
* Simplify argument duplication as per tmthrgd's suggestion
And add the relevant test
Force push edit: Make sorting code fit into one line
* Rewrite ECHConfig and address the review
* Remove the optional tab
* Add To4() Check
* More cleanup and fix mandatory not sorting bug
* Fixed the default values of HorizPre and VertPre
According to RFC-1876 those fields should be:
"a pair of four-bit unsigned
integers, each ranging from zero to nine, with the most
significant four bits representing the base and the second
number representing the power of ten by which to multiply
the base. This allows sizes from 0e0 (<1cm) to 9e9
(90,000km) to be expressed"
Current values for HorizPre and VertPre (165=0xA5 and 162=0xA2)
are incorrect because the first hex digit is greater than 9.
The default values should be:
HorizPre = 10000m = 10000 * 100 cm = 10^6 cm = 0x16
VertPre = 10m = 10 * 100 cm = 10^3 cm = 0x13
Size = 1m = 1 * 100 cm = 10^2 cm = 0x12
The value of Size was correct, but this PR changes it to hex
representation to be more readable.
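For illustration, a minimal sketch (not the library's code) of how a value in centimeters maps to this nibble format:
~~~
// A minimal sketch: pack a value in centimeters into RFC 1876's nibble
// format (high nibble = base 0-9, low nibble = power of ten), assuming
// a representable input.
func encodeLOCPrecision(cm uint64) uint8 {
	exp := uint8(0)
	for cm >= 10 && exp < 9 {
		cm /= 10
		exp++
	}
	return uint8(cm)<<4 | exp
}

// encodeLOCPrecision(1000000) == 0x16 (HorizPre, 10000m)
// encodeLOCPrecision(1000)    == 0x13 (VertPre, 10m)
// encodeLOCPrecision(100)     == 0x12 (Size, 1m)
~~~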
* Informative comments
Made comments on LOC record default field values more informative
Co-authored-by: Richard Gibson <richard.gibson@gmail.com>
* APL record: add structure and code point
* APL record: add wire format support
* APL record: add presentation format support
* APL record: add isDuplicate implementation
* APL record: add copy implementation
* APL record: add len implementation
* APL record: run go generate
* APL record: fix condition checking for equality
* APL record: use switches to map family to address length (see the sketch after this list)
* APL record: check bounds of individual fields rather than whole header
* APL record: stylistic changes
* APL record: remove APLPrefix methods from public interface
* APL record: update README
* APL record: additional cleanup for code review
* APL record: change return type from pointer to struct
* APL record: refactor of pack and unpack to eliminate extra variables
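The family-to-address-length switch mentioned above, as a hedged sketch (not the library's exact code; assumes the net and fmt imports):
~~~
// Family numbers follow the IANA address family registry:
// 1 = IPv4, 2 = IPv6.
func aplAddrLen(family uint16) (int, error) {
	switch family {
	case 1:
		return net.IPv4len, nil // 4 bytes
	case 2:
		return net.IPv6len, nil // 16 bytes
	default:
		return 0, fmt.Errorf("unrecognized APL address family %d", family)
	}
}
~~~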
This follows BIND 9 and removes support for the DSA family of
algorithms. Any DNSSEC implementation should consider zones using it
insecure.
Signed-off-by: Miek Gieben <miek@miek.nl>
* Eliminate Variable bool from parserFunc
Instead we now check whether the last token read from the zlexer was
a zNewline or zEOF. The error check above should be tripped for any
record that ends prematurely.
* Use an interface method for parsing zone file records
* Prevent panic in TestOmittedTTL if no regexp match
* Move slurpRemainder into fixed length parse functions
This is consistent with the original logic in setRR and avoids potential
edge cases.
* Parse synthetic records according to RFC 3597
These records lack a presentation format and cannot be parsed otherwise.
This behaviour is consistent with how this previously operated.
Sorely missing from this library. Add it. As there is no presentation
format, the String method for this type puts a comment in front of it.
Signed-off-by: Miek Gieben <miek@miek.nl>
* Remove fullSize return from compressionLenSearch
This wasn't used anywhere but TestCompressionLenSearch, and was very
wrong.
* Add generated compressedLen functions and use them
This replaces the confusing and complicated compressionLenSlice
function.
* Use compressedLenWithCompressionMap even for uncompressed
This leaves the len() functions unused and they'll soon be removed.
This also fixes the off-by-one error of compressedLen when a (Q)NAME
is ".".
* Use Len helper instead of RR.len private method
* Merge len and compressedLen functions
* Merge compressedLen helper into Msg.Len
* Remove compress bool from compressedLenWithCompressionMap
* Merge map insertion into compressionLenSearch
This eliminates the need to loop over the domain name twice when we're
compressing the name.
* Use compressedNameLen for NSEC.NextDomain
This was a mistake.
* Remove compress from RR.len
* Add test case for multiple questions length
* Add test case for MINFO and SOA compression
These are the only RRs with multiple compressible names within the same
RR, and they were previously broken.
* Rename compressedNameLen to domainNameLen
It also handles the length of uncompressed domain names.
* Use off directly instead of len(s[:off])
* Move initial maxCompressionOffset check out of compressionLenMapInsert
This should allow us to avoid the call overhead of
compressionLenMapInsert in certain limited cases and may result in a
slight performance increase.
compressionLenMapInsert still has a maxCompressionOffset check inside
the for loop.
* Rename compressedLenWithCompressionMap to msgLenWithCompressionMap
This better reflects that it also calculates the uncompressed length.
* Merge TestMsgCompressMINFO with TestMsgCompressSOA
They're both testing the same thing.
* Remove compressionLenMapInsert
compressionLenSearch does everything compressionLenMapInsert did anyway.
* Only call compressionLenSearch in one place in domainNameLen
* Split if statement in domainNameLen
The last two commits worsened the performance of domainNameLen
noticeably; this change restores its original performance.
name old time/op new time/op delta
MsgLength-12 550ns ±13% 510ns ±21% ~ (p=0.050 n=10+10)
MsgLengthNoCompression-12 26.9ns ± 2% 27.0ns ± 1% ~ (p=0.198 n=9+10)
MsgLengthPack-12 2.30µs ±12% 2.26µs ±16% ~ (p=0.739 n=10+10)
MsgLengthMassive-12 32.9µs ± 7% 32.0µs ±10% ~ (p=0.243 n=9+10)
MsgLengthOnlyQuestion-12 9.60ns ± 1% 9.20ns ± 1% -4.16% (p=0.000 n=9+9)
* Remove stray newline from TestMsgCompressionMultipleQuestions
* Remove stray newline in length_test.go
This was introduced when resolving merge conflicts.
* Eliminate zlexer goroutine
This replaces the zlexer goroutine and channels with a zlexer struct
that maintains state and provides a channel-like API.
* Eliminate klexer goroutine
This replaces the klexer goroutine and channels with a klexer struct
that maintains state and provides a channel-like API.
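For illustration, a minimal sketch of the pattern both commits apply (token type simplified to a byte):
~~~
// Next mirrors the `tok, ok := <-ch` receive idiom, but lexes on
// demand instead of running a goroutine that sends on a channel.
type lexer struct {
	input string
	pos   int
}

func (l *lexer) Next() (byte, bool) {
	if l.pos >= len(l.input) {
		return 0, false // like receiving from a closed channel
	}
	tok := l.input[l.pos]
	l.pos++
	return tok, true
}
~~~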
* Merge scan into zlexer and klexer
This does result in tokenText existing twice, but it's pretty simple
and small so it's not that bad.
* Avoid using text/scanner.Position to track position
* Track escape within zlexer.Next
* Avoid zl.commt check on space and tab in zlexer
* Track stri within zlexer.Next
* Track comi within zlexer.Next
There is one special case at the start of a comment that needs to be
handled, otherwise this is as simple as stri was.
* Use a single token buffer in zlexer
This is safe as there is never both a non-empty string buffer and a
non-empty comment buffer.
* Don't hardcode length of zl.tok in zlexer
* Eliminate lex.length field
This is always set to len(l.token) and is only queried in a few places.
It was added in 47cc5b052df9b4e5d5a9900cfdd622607de10a6d without any
obvious need.
* Add whitespace to klexer.Next
* Track lex within klexer.Next
* Use a strings.Builder in klexer.Next
* Simplify : case in klexer.Next
* Add whitespace to zlexer.Next
* Change for loop style in zlexer.Next and klexer.Next
* Surface read errors in zlexer
* Surface read errors from klexer
* Remove debug line from parseKey
* Rename tokenText to readByte
* Make readByte return ok bool
Also change the for loop style to match the Next for loops.
* Make readByte errors sticky
klexer.Next calls readByte separately from within the loop. Without
readByte being sticky, an error that occurs during that readByte call
may be lost.
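A hedged sketch of the sticky behaviour described above (names hypothetical; assumes the io import):
~~~
// After the first failure every later call reports !ok, so an error
// that occurs between loop iterations cannot be lost.
type stickyReader struct {
	r   io.ByteReader
	err error
}

func (s *stickyReader) readByte() (byte, bool) {
	if s.err != nil {
		return 0, false // sticky: keep reporting the old failure
	}
	c, err := s.r.ReadByte()
	if err != nil {
		s.err = err
		return 0, false
	}
	return c, true
}
~~~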
* Panic in testRR if the error is non-nil
* Add whitespace and unify field setting in zlexer.Next
* Remove eof fields from zlexer and klexer
With readByte having sticky errors, this is no longer needed. zl.eof = true
was also in the wrong place and could mask an unbalanced brace error.
* Merge zl.tok blocks in zlexer.Next
* Split the tok buffer into separate string and comment buffers
The invariant of stri > 0 && comi > 0 never being true was broken when
x == '\n' && !zl.quote && zl.commt && zl.brace != 0 (the
"If not in a brace this ends the comment AND the RR" block).
Split the buffer back out into two separate buffers to avoid clobbering.
* Replace token slices with arrays in zlexer
* Add a NewRR benchmark
* Move token buffers into zlexer.Next
These don't need to be retained across Next calls and can be stack
allocated inside Next. This drastically reduces memory consumption as
they accounted for nearly half of all the memory used.
name old alloc/op new alloc/op delta
NewRR-12 9.72kB ± 0% 4.98kB ± 0% -48.72% (p=0.000 n=10+10)
* Add a ReadRR benchmark
Unlike NewRR, this will use an io.Reader that does not implement any
methods aside from Read. In particular it does not implement
io.ByteReader.
* Avoid using a bufio.Reader for io.ByteReader readers
At the same time use a smaller buffer size of 1KiB rather than the
bufio.NewReader default of 4KiB.
name old time/op new time/op delta
NewRR-12 11.0µs ± 3% 9.5µs ± 2% -13.77% (p=0.000 n=9+10)
ReadRR-12 11.2µs ±16% 9.8µs ± 1% -13.03% (p=0.000 n=10+10)
name old alloc/op new alloc/op delta
NewRR-12 4.98kB ± 0% 0.81kB ± 0% -83.79% (p=0.000 n=10+10)
ReadRR-12 4.87kB ± 0% 1.82kB ± 0% -62.73% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
NewRR-12 19.0 ± 0% 17.0 ± 0% -10.53% (p=0.000 n=10+10)
ReadRR-12 19.0 ± 0% 19.0 ± 0% ~ (all equal)
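For illustration, a minimal sketch of the reader selection described above (helper name hypothetical; assumes the bufio and io imports):
~~~
// Only wrap readers that can't already hand out single bytes, and use
// a 1KiB buffer instead of bufio.NewReader's 4KiB default.
func byteReader(r io.Reader) io.ByteReader {
	if br, ok := r.(io.ByteReader); ok {
		return br // e.g. *strings.Reader; no extra buffering needed
	}
	return bufio.NewReaderSize(r, 1024)
}
~~~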
* Surface any remaining comment from zlexer.Next
* Improve comment handling in zlexer.Next
This both fixes a regression where comments could be lost under certain
circumstances and now emits comments that occur within braces.
* Remove outdated comment from zlexer.Next and klexer.Next
* Delay converting LF to space in braced comment
* Fixup TestParseZoneComments
* Remove tokenUpper field from lex
Not computing this for every token, and instead computing it only when
needed, is a substantial performance improvement.
name old time/op new time/op delta
NewRR-12 9.56µs ± 0% 6.30µs ± 1% -34.08% (p=0.000 n=9+10)
ReadRR-12 9.93µs ± 1% 6.67µs ± 1% -32.77% (p=0.000 n=10+10)
name old alloc/op new alloc/op delta
NewRR-12 824B ± 0% 808B ± 0% -1.94% (p=0.000 n=10+10)
ReadRR-12 1.83kB ± 0% 1.82kB ± 0% -0.87% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
NewRR-12 17.0 ± 0% 17.0 ± 0% ~ (all equal)
ReadRR-12 19.0 ± 0% 19.0 ± 0% ~ (all equal)
* Update ParseZone documentation to match comment changes
The zlexer code was changed to return comments more often, so update the
ParseZone documentation to match.
* ClassANY: don't convert CLASS255 to ANY
Class "ANY" is wireformat only. In zonefile you can use CLASS255, but
when String-ing we convert this into "ANY" which is wrong. I.e. this
means we can't read back our own update.
Bit of a kludge to work around this, as I'm not sure we can just remove
ANY from the ClassToString map.
* Add support for Ed25519 DNSSEC signing from RFC 8080
Note: The test case from RFC 8080 has been modified
to correct the missing final brace, but is otherwise
present as-is.
* Explain why ed25519 is special cased in (*RRSIG).Sign
* Explain use of ed25519.GenerateKey in readPrivateKeyED25519
* Add dep
This is PR #458 with the dependency added into it.
Implement the CSYNC record.
Fixes #290
Long overdue, let's add this record. In a similar vein to NSEC/NSEC3,
we need to implement len() ourselves. Presentation format parsing and
tests are done as well.
This is CoreDNS running with CSYNC support; `dig` doesn't support this
at the moment, so:
~~~
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40323
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;csync.example.org. IN TYPE62
;; ANSWER SECTION:
csync.example.org. 10 IN TYPE62 \# 12 000335240042000460000008
;; AUTHORITY SECTION:
example.org. 10 IN NS a.iana-servers.net.
example.org. 10 IN NS b.iana-servers.net.
~~~
* txt parser: fix goroutine leak
When a higher-level (grammar or syntax) error was encountered, the
lower-level zlexer routine would be left open and trying to send more
tokens on the channel c. This leaks a goroutine per failed parse...
This PR fixes this by signalling the error (by canceling a context) and
retrieving any remaining items from the channel, so zlexer can return.
It also adds a goroutine leak test that can be re-used in other tests;
the TestParseBadNAPTR test uses this leak detector.
The private key parsing code had the same bug and is also fixed in this
PR.
Fixes #586
Fixes https://github.com/coredns/coredns/issues/1233
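For illustration, a rough sketch of such a reusable leak detector (names hypothetical; a real detector would poll a few times rather than sleep once; assumes the runtime, testing and time imports):
~~~
// Compare the goroutine count before and after the test body, giving
// finished goroutines a moment to exit.
func checkNoGoroutineLeak(t *testing.T, f func()) {
	before := runtime.NumGoroutine()
	f()
	time.Sleep(100 * time.Millisecond)
	if after := runtime.NumGoroutine(); after > before {
		t.Errorf("leaked %d goroutine(s)", after-before)
	}
}
~~~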
* sem not needed anymore
Move some of them to Errorf and friends, but most of them are just
gone: this makes go test -v actually readable.
Remove a bunch of tests that used IPv6 on localhost, as this does not
work on Travis.
* Fix $TTL handling
* Error when there is no TTL for an RR
* Fix relative name handling
* Error when a relative name is used without an origin (cf. https://tools.ietf.org/html/rfc1035#section-5.1)
Fixes #484
* Test for proper parsing of whitespace-separated (TXT) character-strings
* Properly parse whitespace-separated (TXT) character-strings
* Remove non-RFC treatment of backslash sequences in character-strings
Fixes gh-420
* For tests, remove non-RFC treatment of backslashes in domain names
1) Refactoring of tlsa.go
- moved the routine that creates the certificate rdata into its own
file, as this is shared between TLSA and SMIMEA records
2) Added support for creating an SMIMEA domain name
3) Developed in accordance with the draft-ietf-dane-smime-12 draft
Miek,
Submitting for your review. Happy to make any recommended changes or
address omissions.
Lightly tested against our internal DNS service which hosts DANE
SMIMEA records for our email certificates.
Parse tests are added.
When removing the reflection we inadvertently also removed the code for
handling empty salt values in NSEC3 and NSEC3PARAM. These are somewhat
annoying because the text representation is '-', which is not valid hex.
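A minimal sketch of the special case (helper name hypothetical; assumes the encoding/hex and fmt imports):
~~~
// "-" in a zone file means an empty salt and must not be hex-decoded.
func parseSalt(s string) (string, error) {
	if s == "-" {
		return "", nil
	}
	if _, err := hex.DecodeString(s); err != nil {
		return "", fmt.Errorf("bad NSEC3 salt %q: %v", s, err)
	}
	return s, nil
}
~~~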
Update the size-xxx-member tags to point to another field in the struct
that should be used for the length in that field. Fix NSEC3/HIP and
TSIG to use this and generate the correct pack/unpack functions for
them.
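For illustration, the shape such a tag takes, abridged to NSEC3's salt (field list shortened, tag spelling as a sketch):
~~~
// The size-hex tag names the sibling field holding the salt's length,
// so the generator can emit pack/unpack code that reads and writes
// exactly SaltLength bytes of hex.
type NSEC3 struct {
	SaltLength uint8
	Salt       string `dns:"size-hex:SaltLength"`
	// remaining fields elided
}
~~~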
Remove IPSECKEY from the lib and handle it as an unknown record - it is
such a horrible RR that it needed kludges before; now we just handle it
as an unknown RR.
All types now use generated pack and unpack functions. The blacklist is
removed.
Add a function that dedups a list of RRs. It works on strings, which
adds garbage, but seems to be the least intrusive approach and takes
the least amount of memory.
Some fmt changes snuck in as well.
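A minimal sketch of the approach (a real implementation would also need to normalize case and TTL before comparing; assumes the github.com/miekg/dns import):
~~~
func dedup(rrs []dns.RR) []dns.RR {
	seen := make(map[string]struct{}, len(rrs))
	out := rrs[:0]
	for _, rr := range rrs {
		key := rr.String() // string key: simple, but allocates garbage
		if _, ok := seen[key]; ok {
			continue
		}
		seen[key] = struct{}{}
		out = append(out, rr)
	}
	return out
}
~~~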
This will allow RRSIG.Sign to use generic crypto.Signer implementations.
This is an interface-breaking change, even if the required changes are
most likely just type assertions from crypto.PrivateKey to the
underlying type or crypto.Signer.
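For illustration, a hedged sketch of what the new signature permits (helper name hypothetical; assumes the crypto, crypto/ecdsa, crypto/elliptic, crypto/rand and github.com/miekg/dns imports):
~~~
// Any crypto.Signer works, here an *ecdsa.PrivateKey, but an
// HSM-backed key would do as well. sig's fields (Algorithm, KeyTag,
// SignerName, validity, ...) are assumed to be filled in by the caller.
func signRRset(sig *dns.RRSIG, rrset []dns.RR) error {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return err
	}
	var signer crypto.Signer = key // the new parameter type of RRSIG.Sign
	return sig.Sign(signer, rrset)
}
~~~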