This follows BIND 9, which removed support for the DSA family of
algorithms. Any DNSSEC implementation should consider zones signed with
these algorithms insecure.
Signed-off-by: Miek Gieben <miek@miek.nl>
* Eliminate Variable bool from parserFunc
Instead we now check whether the last token read from the zlexer was
a zNewline or zEOF. The error check above should be tripped for any
record that ends prematurely.
* Use an interface method for parsing zone file records
* Prevent panic in TestOmittedTTL if no regexp match
* Move slurpRemainder into fixed length parse functions
This is consistent with the original logic in setRR and avoids potential
edge cases.
* Parse synthetic records according to RFC 3597
These records lack a presentation format and cannot be parsed otherwise.
This behaviour is consistent with how this previously operated.
Sorely missing from this library. Add it. As there is no presentation
format the String method for this type puts a comment in front of it.
Signed-off-by: Miek Gieben <miek@miek.nl>
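As a rough illustration of the RFC 3597 presentation format mentioned above, here is a minimal, self-contained sketch of parsing the generic `\# <length> <hex>` rdata form; the `parseRFC3597` name and the exact behaviour are illustrative, not the library's actual code:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strconv"
	"strings"
)

// parseRFC3597 parses the RFC 3597 generic rdata form `\# <length> <hex...>`.
// The hex data may be split across several whitespace-separated words.
func parseRFC3597(s string) ([]byte, error) {
	fields := strings.Fields(s)
	if len(fields) < 2 || fields[0] != `\#` {
		return nil, fmt.Errorf("not RFC 3597 format: %q", s)
	}
	n, err := strconv.Atoi(fields[1])
	if err != nil {
		return nil, err
	}
	data, err := hex.DecodeString(strings.Join(fields[2:], ""))
	if err != nil {
		return nil, err
	}
	if len(data) != n {
		return nil, fmt.Errorf("length mismatch: declared %d, got %d", n, len(data))
	}
	return data, nil
}

func main() {
	rdata, err := parseRFC3597(`\# 4 C0000201`)
	fmt.Println(rdata, err) // [192 0 2 1] <nil>
}
```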
* Remove fullSize return from compressionLenSearch
This wasn't used anywhere but TestCompressionLenSearch, and was very
wrong.
* Add generated compressedLen functions and use them
This replaces the confusing and complicated compressionLenSlice
function.
* Use compressedLenWithCompressionMap even for uncompressed
This leaves the len() functions unused and they'll soon be removed.
This also fixes the off-by-one error of compressedLen when a (Q)NAME
is ".".
* Use Len helper instead of RR.len private method
* Merge len and compressedLen functions
* Merge compressedLen helper into Msg.Len
* Remove compress bool from compressedLenWithCompressionMap
* Merge map insertion into compressionLenSearch
This eliminates the need to loop over the domain name twice when we're
compressing the name.
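The idea of searching and inserting in a single pass over the labels can be sketched as follows; `findOrInsert` is an illustrative stand-in, not the real compressionLenSearch:

```go
package main

import (
	"fmt"
	"strings"
)

// findOrInsert walks the labels of a fully-qualified name once (root name
// not handled). It returns the offset of the longest suffix already in the
// map (or -1), inserting every unseen suffix that starts below the 14-bit
// compression pointer limit along the way.
func findOrInsert(m map[string]int, name string, off int) int {
	labels := strings.Split(strings.TrimSuffix(name, "."), ".")
	prefix := 0
	for i, l := range labels {
		suffix := strings.Join(labels[i:], ".") + "."
		if o, ok := m[suffix]; ok {
			return o // point here; shorter suffixes were inserted earlier
		}
		if off+prefix < 0x4000 { // compression pointers are 14 bits
			m[suffix] = off + prefix
		}
		prefix += len(l) + 1
	}
	return -1
}

func main() {
	m := make(map[string]int)
	fmt.Println(findOrInsert(m, "www.example.org.", 12)) // -1: nothing seen yet
	fmt.Println(findOrInsert(m, "example.org.", 40))     // 16: pointer to earlier suffix
}
```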
* Use compressedNameLen for NSEC.NextDomain
This was a mistake.
* Remove compress from RR.len
* Add test case for multiple questions length
* Add test case for MINFO and SOA compression
These are the only RRs with multiple compressible names within the same
RR, and they were previously broken.
* Rename compressedNameLen to domainNameLen
It also handles the length of uncompressed domain names.
* Use off directly instead of len(s[:off])
* Move initial maxCompressionOffset check out of compressionLenMapInsert
This should allow us to avoid the call overhead of
compressionLenMapInsert in certain limited cases and may result in a
slight performance increase.
compressionLenMapInsert still has a maxCompressionOffset check inside
the for loop.
* Rename compressedLenWithCompressionMap to msgLenWithCompressionMap
This better reflects that it also calculates the uncompressed length.
* Merge TestMsgCompressMINFO with TestMsgCompressSOA
They're both testing the same thing.
* Remove compressionLenMapInsert
compressionLenSearch does everything compressionLenMapInsert did anyway.
* Only call compressionLenSearch in one place in domainNameLen
* Split if statement in domainNameLen
The last two commits noticeably worsened the performance of
domainNameLen; this change restores its original performance.
name old time/op new time/op delta
MsgLength-12 550ns ±13% 510ns ±21% ~ (p=0.050 n=10+10)
MsgLengthNoCompression-12 26.9ns ± 2% 27.0ns ± 1% ~ (p=0.198 n=9+10)
MsgLengthPack-12 2.30µs ±12% 2.26µs ±16% ~ (p=0.739 n=10+10)
MsgLengthMassive-12 32.9µs ± 7% 32.0µs ±10% ~ (p=0.243 n=9+10)
MsgLengthOnlyQuestion-12 9.60ns ± 1% 9.20ns ± 1% -4.16% (p=0.000 n=9+9)
* Remove stray newline from TestMsgCompressionMultipleQuestions
* Remove stray newline in length_test.go
This was introduced when resolving merge conflicts.
* Eliminate zlexer goroutine
This replaces the zlexer goroutine and channels with a zlexer struct
that maintains state and provides a channel-like API.
* Eliminate klexer goroutine
This replaces the klexer goroutine and channels with a klexer struct
that maintains state and provides a channel-like API.
* Merge scan into zlexer and klexer
This does result in tokenText existing twice, but it's pretty simple
and small so it's not that bad.
* Avoid using text/scanner.Position to track position
* Track escape within zlexer.Next
* Avoid zl.commt check on space and tab in zlexer
* Track stri within zlexer.Next
* Track comi within zlexer.Next
There is one special case at the start of a comment that needs to be
handled, otherwise this is as simple as stri was.
* Use a single token buffer in zlexer
This is safe as there is never both a non-empty string buffer and a
non-empty comment buffer.
* Don't hardcode length of zl.tok in zlexer
* Eliminate lex.length field
This is always set to len(l.token) and is only queried in a few places.
It was added in 47cc5b052d without any
obvious need.
* Add whitespace to klexer.Next
* Track lex within klexer.Next
* Use a strings.Builder in klexer.Next
* Simplify : case in klexer.Next
* Add whitespace to zlexer.Next
* Change for loop style in zlexer.Next and klexer.Next
* Surface read errors in zlexer
* Surface read errors from klexer
* Remove debug line from parseKey
* Rename tokenText to readByte
* Make readByte return ok bool
Also change the for loop style to match the Next for loops.
* Make readByte errors sticky
klexer.Next calls readByte separately from within the loop. Without
readByte being sticky, an error that occurs during that readByte call
may be lost.
* Panic in testRR if the error is non-nil
* Add whitespace and unify field setting in zlexer.Next
* Remove eof fields from zlexer and klexer
With readByte having sticky errors, this is no longer needed. zl.eof = true
was also in the wrong place and could mask an unbalanced brace error.
* Merge zl.tok blocks in zlexer.Next
* Split the tok buffer into separate string and comment buffers
The invariant of stri > 0 && comi > 0 never being true was broken when
x == '\n' && !zl.quote && zl.commt && zl.brace != 0 (the
"If not in a brace this ends the comment AND the RR" block).
Split the buffer back out into two separate buffers to avoid clobbering.
* Replace token slices with arrays in zlexer
* Add a NewRR benchmark
* Move token buffers into zlexer.Next
These don't need to be retained across Next calls and can be stack
allocated inside Next. This drastically reduces memory consumption as
they accounted for nearly half of all the memory used.
name old alloc/op new alloc/op delta
NewRR-12 9.72kB ± 0% 4.98kB ± 0% -48.72% (p=0.000 n=10+10)
* Add a ReadRR benchmark
Unlike NewRR, this will use an io.Reader that does not implement any
methods aside from Read. In particular it does not implement
io.ByteReader.
* Avoid using a bufio.Reader for io.ByteReader readers
At the same time use a smaller buffer size of 1KiB rather than the
bufio.NewReader default of 4KiB.
name old time/op new time/op delta
NewRR-12 11.0µs ± 3% 9.5µs ± 2% -13.77% (p=0.000 n=9+10)
ReadRR-12 11.2µs ±16% 9.8µs ± 1% -13.03% (p=0.000 n=10+10)
name old alloc/op new alloc/op delta
NewRR-12 4.98kB ± 0% 0.81kB ± 0% -83.79% (p=0.000 n=10+10)
ReadRR-12 4.87kB ± 0% 1.82kB ± 0% -62.73% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
NewRR-12 19.0 ± 0% 17.0 ± 0% -10.53% (p=0.000 n=10+10)
ReadRR-12 19.0 ± 0% 19.0 ± 0% ~ (all equal)
* Surface any remaining comment from zlexer.Next
* Improve comment handling in zlexer.Next
This both fixes a regression where comments could be lost under certain
circumstances and now emits comments that occur within braces.
* Remove outdated comment from zlexer.Next and klexer.Next
* Delay converting LF to space in braced comment
* Fixup TestParseZoneComments
* Remove tokenUpper field from lex
Computing this only when needed, instead of for every token, is a
substantial performance improvement.
name old time/op new time/op delta
NewRR-12 9.56µs ± 0% 6.30µs ± 1% -34.08% (p=0.000 n=9+10)
ReadRR-12 9.93µs ± 1% 6.67µs ± 1% -32.77% (p=0.000 n=10+10)
name old alloc/op new alloc/op delta
NewRR-12 824B ± 0% 808B ± 0% -1.94% (p=0.000 n=10+10)
ReadRR-12 1.83kB ± 0% 1.82kB ± 0% -0.87% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
NewRR-12 17.0 ± 0% 17.0 ± 0% ~ (all equal)
ReadRR-12 19.0 ± 0% 19.0 ± 0% ~ (all equal)
* Update ParseZone documentation to match comment changes
The zlexer code was changed to return comments more often, so update the
ParseZone documentation to match.
* ClassANY: don't convert CLASS255 to ANY
Class "ANY" is wireformat only. In zonefile you can use CLASS255, but
when String-ing we convert this into "ANY" which is wrong. I.e. this
means we can't read back our own update.
Bit of a kludge to work around this, as I'm not sure we can just remove
ANY from the ClassToString map.
* Add support for Ed25519 DNSSEC signing from RFC 8080
Note: The test case from RFC 8080 has been modified
to correct the missing final brace, but is otherwise
present as-is.
* Explain why ed25519 is special cased in (*RRSIG).Sign
* Explain use of ed25519.GenerateKey in readPrivateKeyED25519
* Add dep
This is PR #458 with the dependency added into it.
Implement the CSYNC record.
Fixes #290
Long overdue, let's add this record. In a similar vein to NSEC/NSEC3, we
need to implement len() ourselves. Presentation format parsing and
tests are done as well.
This is CoreDNS running with CSYNC support; `dig` doesn't support this
record at the moment, so:
~~~
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40323
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;csync.example.org. IN TYPE62
;; ANSWER SECTION:
csync.example.org. 10 IN TYPE62 \# 12 000335240042000460000008
;; AUTHORITY SECTION:
example.org. 10 IN NS a.iana-servers.net.
example.org. 10 IN NS b.iana-servers.net.
~~~
* txt parser: fix goroutine leak
When a higher level (grammar or syntax) error was encountered the lower
level zlexer routine would be left open and trying to send more tokens
on the channel c. This leaks a goroutine, per failed parse...
This PR fixes this by signalling the error (by canceling a context) and
retrieving any remaining items from the channel, so zlexer can return.
It also adds a goroutine leak test that can be re-used in other tests,
the TestParseBadNAPTR test uses this leak detector.
The private key parsing code had the same bug and is also fixed in this
PR.
Fixes #586
Fixes https://github.com/coredns/coredns/issues/1233
* sem not needed anymore
Move some of them to Errorf and friends, but most of them are just
gone: this makes go test -v actually readable.
Remove a bunch of tests that used IPv6 on localhost, as this does not
work on Travis.
* Fix $TTL handling
* Error when there is no TTL for an RR
* Fix relative name handling
* Error when a relative name is used without an origin (cf. https://tools.ietf.org/html/rfc1035#section-5.1 )
Fixes#484
* Test for proper parsing of whitespace-separated (TXT) character-strings
* Properly parse whitespace-separated (TXT) character-strings
* Remove non-RFC treatment of backslash sequences in character-strings
Fixes gh-420
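Splitting whitespace-separated character-strings can be sketched with a tiny tokenizer; this illustrative version ignores escape sequences, unlike the real parser:

```go
package main

import "fmt"

// splitCharStrings splits TXT rdata in presentation format into its
// character-strings: quoted strings separated by whitespace. Escape
// sequences inside the strings are ignored in this sketch.
func splitCharStrings(s string) []string {
	var out []string
	var buf []byte
	inQuote := false
	for i := 0; i < len(s); i++ {
		switch c := s[i]; {
		case c == '"':
			if inQuote { // closing quote ends a character-string
				out = append(out, string(buf))
				buf = buf[:0]
			}
			inQuote = !inQuote
		case inQuote:
			buf = append(buf, c)
		}
	}
	return out
}

func main() {
	fmt.Printf("%q\n", splitCharStrings(`"Hello" "World of DNS"`)) // ["Hello" "World of DNS"]
}
```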
* For tests, remove non-RFC treatment of backslashes in domain names
1) Refactored tlsa.go
- moved the routine that creates the certificate rdata into its own Go
file, as this is shared between TLSA and SMIMEA records
2) Added support for creating an SMIMEA domain name
3) Developed in accordance with draft-ietf-dane-smime-12
Miek,
Submitting for your review. Happy to make any recommended changes or
address omissions.
Lightly tested against our internal DNS service which hosts DANE
SMIMEA records for our email certificates.
Parse tests are added.
When removing the reflection we inadvertently also removed the code for
handling empty salt values in NSEC3 and NSEC3PARAM. These are somewhat
annoying because the text representation is '-', which is not valid hex.
Update the size-xxx-member tags to point to another field in the struct
that should be used as the length of that field. Fix NSEC3/HIP and TSIG
to use this and generate the correct pack/unpack functions for them.
Remove IPSECKEY from the lib and handle it as an unknown record - it is
such a horrible RR, needed kludges before - now just handle it as an
unknown RR.
All types now use generated pack and unpack functions. The blacklist is
removed.
Add a function that dedups a list of RRs. It works on strings, which
adds garbage, but seems to be the least intrusive approach and uses the
least amount of memory.
Some fmt changes snuck in as well.
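The string-keyed dedup can be sketched like this; in the real library the key would be a normalized presentation string of the RR, while plain strings stand in here:

```go
package main

import "fmt"

// dedup removes duplicates from a list of records, keyed on their string
// form. Building string keys allocates (the "garbage"), but a single map
// pass keeps it simple and memory use low.
func dedup(rrs []string) []string {
	seen := make(map[string]struct{}, len(rrs))
	var out []string
	for _, rr := range rrs {
		if _, ok := seen[rr]; ok {
			continue
		}
		seen[rr] = struct{}{}
		out = append(out, rr)
	}
	return out
}

func main() {
	fmt.Println(dedup([]string{"miek.nl. A 1.2.3.4", "miek.nl. A 1.2.3.4", "miek.nl. MX 10 mx.miek.nl."}))
}
```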
This will allow RRSIG.Sign to use generic crypto.Signer implementations.
This is an interface-breaking change, even if the required changes are
most likely just type assertions from crypto.PrivateKey to the
underlying type or crypto.Signer.
TXT records consist of multiple 255-byte chunks. When parsing
a chunk that is too large, Go DNS would happily add it. This would
only fail when packing the message.
Change this to auto-chunk TXT records read from file into 255-byte
chunks.
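The auto-chunking itself is straightforward; a minimal sketch (illustrative, not the library's code):

```go
package main

import "fmt"

// chunk255 splits a long TXT string read from a zone file into the
// 255-byte character-strings the wire format allows.
func chunk255(s string) []string {
	var out []string
	for len(s) > 255 {
		out = append(out, s[:255])
		s = s[255:]
	}
	return append(out, s)
}

func main() {
	long := string(make([]byte, 600))
	for _, c := range chunk255(long) {
		fmt.Println(len(c)) // 255, 255, 90
	}
}
```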
Remove trailing \n from t.Log and t.Error messages, as it's unnecessary.
In some instances, combine multiple t.Error calls into one.
To provide more consistency across the tests, rename e to err and use %v
as the format arg for errors.
Replace Logf and Errorf with Log and Error when it made sense. For
example t.Errorf("%v", err) to t.Error(err)