Commit Graph

18 Commits

Author SHA1 Message Date
Tom Thorogood 17c1bc6792
Eliminate lexer goroutines (#792)
* Eliminate zlexer goroutine

This replaces the zlexer goroutine and channels with a zlexer struct
that maintains state and provides a channel-like API.
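
A minimal sketch of the general pattern, assuming hypothetical names (the
real zlexer differs): state lives in a struct and a Next method returns the
next token plus an ok flag, mirroring the `v, ok := <-ch` shape of the old
channel-based API.

~~~
package lexer

import "io"

// token and tokenLexer are illustrative stand-ins, not this library's types.
type token struct {
	value string
}

type tokenLexer struct {
	r   io.ByteReader
	err error // first read error, kept for later calls
}

// Next returns the next whitespace-separated token and ok=false once the
// input is exhausted, replacing the old goroutine that sent tokens over a
// channel.
func (l *tokenLexer) Next() (token, bool) {
	var buf []byte
	for l.err == nil {
		c, err := l.r.ReadByte()
		if err != nil {
			l.err = err
			break
		}
		if c == ' ' || c == '\t' || c == '\n' {
			if len(buf) == 0 {
				continue // skip leading whitespace
			}
			break
		}
		buf = append(buf, c)
	}
	if len(buf) == 0 {
		return token{}, false
	}
	return token{value: string(buf)}, true
}
~~~

Callers then iterate with `for tok, ok := l.Next(); ok; tok, ok = l.Next()`
instead of ranging over a channel.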

* Eliminate klexer goroutine

This replaces the klexer goroutine and channels with a klexer struct
that maintains state and provides a channel-like API.

* Merge scan into zlexer and klexer

This does result in tokenText existing twice, but it's pretty simple
and small so it's not that bad.

* Avoid using text/scanner.Position to track position

* Track escape within zlexer.Next

* Avoid zl.commt check on space and tab in zlexer

* Track stri within zlexer.Next

* Track comi within zlexer.Next

There is one special case at the start of a comment that needs to be
handled, otherwise this is as simple as stri was.

* Use a single token buffer in zlexer

This is safe as there is never both a non-empty string buffer and a
non-empty comment buffer.

* Don't hardcode length of zl.tok in zlexer

* Eliminate lex.length field

This is always set to len(l.token) and is only queried in a few places.

It was added in 47cc5b052d without any
obvious need.

* Add whitespace to klexer.Next

* Track lex within klexer.Next

* Use a strings.Builder in klexer.Next

* Simplify : case in klexer.Next

* Add whitespace to zlexer.Next

* Change for loop style in zlexer.Next and klexer.Next

* Surface read errors in zlexer

* Surface read errors from klexer

* Remove debug line from parseKey

* Rename tokenText to readByte

* Make readByte return ok bool

Also change the for loop style to match the Next for loops.

* Make readByte errors sticky

klexer.Next calls readByte separately from within the loop. Without
readByte being sticky, an error that occurs during that readByte call
may be lost.
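
A small sketch of the sticky-error idea, using a hypothetical lexer type
rather than the real one: the first read error is stored on the struct, so a
failure in one readByte call is still reported by every later call instead of
being lost between loop iterations.

~~~
package lexer

import "io"

// byteLexer is illustrative only.
type byteLexer struct {
	r   io.ByteReader
	err error // first read error, made sticky
}

// readByte returns the next byte and ok=false on any error. Because the
// error is remembered, callers that check l.err after their loop (or call
// readByte again later) still see the original failure.
func (l *byteLexer) readByte() (byte, bool) {
	if l.err != nil {
		return 0, false
	}
	c, err := l.r.ReadByte()
	if err != nil {
		l.err = err
		return 0, false
	}
	return c, true
}
~~~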

* Panic in testRR if the error is non-nil

* Add whitespace and unify field setting in zlexer.Next

* Remove eof fields from zlexer and klexer

With readByte having sticky errors, this is no longer needed. zl.eof = true
was also in the wrong place and could mask an unbalanced brace error.

* Merge zl.tok blocks in zlexer.Next

* Split the tok buffer into separate string and comment buffers

The invariant of stri > 0 && comi > 0 never being true was broken when
x == '\n' && !zl.quote && zl.commt && zl.brace != 0 (the
"If not in a brace this ends the comment AND the RR" block).

Split the buffer back out into two separate buffers to avoid clobbering.

* Replace token slices with arrays in zlexer

* Add a NewRR benchmark

* Move token buffers into zlexer.Next

These don't need to be retained across Next calls and can be stack
allocated inside Next. This drastically reduces memory consumption as
they accounted for nearly half of all the memory used.
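
Roughly the following shape, with a made-up buffer size and names: a
fixed-size array declared inside the function can be stack allocated when
escape analysis shows it doesn't leave the call, whereas a struct field lives
as long as the lexer does.

~~~
package lexer

const maxTok = 2048 // hypothetical buffer size

// Before (sketch): the buffer is a field, so its memory is retained for the
// whole lifetime of the lexer.
type lexerWithFieldBuffer struct {
	tok [maxTok]byte
}

// After (sketch): the buffer is a local of Next and only exists for the
// duration of the call.
func nextToken(readByte func() (byte, bool)) (string, bool) {
	var tok [maxTok]byte
	n := 0
	for {
		c, ok := readByte()
		if !ok {
			break
		}
		if c == ' ' || c == '\n' {
			if n == 0 {
				continue // skip leading whitespace
			}
			break
		}
		if n < len(tok) {
			tok[n] = c
			n++
		}
	}
	return string(tok[:n]), n > 0
}
~~~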

name      old alloc/op   new alloc/op   delta
NewRR-12    9.72kB ± 0%    4.98kB ± 0%  -48.72%  (p=0.000 n=10+10)

* Add a ReadRR benchmark

Unlike NewRR, this will use an io.Reader that does not implement any
methods aside from Read. In particular it does not implement
io.ByteReader.

* Avoid using a bufio.Reader for io.ByteReader readers

At the same time use a smaller buffer size of 1KiB rather than the
bufio.NewReader default of 4KiB.
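
A hedged sketch of that check (the helper name is made up): only wrap the
input when it doesn't already provide ReadByte, and size the bufio.Reader at
1KiB instead of the 4KiB default.

~~~
package lexer

import (
	"bufio"
	"io"
)

// byteReaderFor returns r unchanged when it already implements io.ByteReader
// (as *strings.Reader and *bytes.Buffer do), and otherwise wraps it in a
// bufio.Reader with a 1KiB buffer rather than bufio.NewReader's 4KiB default.
func byteReaderFor(r io.Reader) io.ByteReader {
	if br, ok := r.(io.ByteReader); ok {
		return br
	}
	return bufio.NewReaderSize(r, 1024)
}
~~~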

name       old time/op    new time/op    delta
NewRR-12     11.0µs ± 3%     9.5µs ± 2%  -13.77%  (p=0.000 n=9+10)
ReadRR-12    11.2µs ±16%     9.8µs ± 1%  -13.03%  (p=0.000 n=10+10)

name       old alloc/op   new alloc/op   delta
NewRR-12     4.98kB ± 0%    0.81kB ± 0%  -83.79%  (p=0.000 n=10+10)
ReadRR-12    4.87kB ± 0%    1.82kB ± 0%  -62.73%  (p=0.000 n=10+10)

name       old allocs/op  new allocs/op  delta
NewRR-12       19.0 ± 0%      17.0 ± 0%  -10.53%  (p=0.000 n=10+10)
ReadRR-12      19.0 ± 0%      19.0 ± 0%     ~     (all equal)

* Surface any remaining comment from zlexer.Next

* Improve comment handling in zlexer.Next

This both fixes a regression where comments could be lost under certain
circumstances and now emits comments that occur within braces.

* Remove outdated comment from zlexer.Next and klexer.Next

* Delay converting LF to space in braced comment

* Fixup TestParseZoneComments

* Remove tokenUpper field from lex

Not computing this for every token, and instead computing it only
when needed, is a substantial performance improvement.
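
The idea, sketched with hypothetical names: rather than storing an
uppercased copy on every token, uppercase a token only at the point where a
case-insensitive lookup actually needs it.

~~~
package lexer

import "strings"

// lex is illustrative; the real struct differs.
type lex struct {
	token string
	// Previously a tokenUpper field held strings.ToUpper(token) for every
	// token, whether or not anything ever used it.
}

// typeFor uppercases lazily, only for the tokens that reach a
// case-insensitive lookup such as a type-name table.
func typeFor(l lex, types map[string]uint16) (uint16, bool) {
	t, ok := types[strings.ToUpper(l.token)]
	return t, ok
}
~~~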

name       old time/op    new time/op    delta
NewRR-12     9.56µs ± 0%    6.30µs ± 1%  -34.08%  (p=0.000 n=9+10)
ReadRR-12    9.93µs ± 1%    6.67µs ± 1%  -32.77%  (p=0.000 n=10+10)

name       old alloc/op   new alloc/op   delta
NewRR-12       824B ± 0%      808B ± 0%   -1.94%  (p=0.000 n=10+10)
ReadRR-12    1.83kB ± 0%    1.82kB ± 0%   -0.87%  (p=0.000 n=10+10)

name       old allocs/op  new allocs/op  delta
NewRR-12       17.0 ± 0%      17.0 ± 0%     ~     (all equal)
ReadRR-12      19.0 ± 0%      19.0 ± 0%     ~     (all equal)

* Update ParseZone documentation to match comment changes

The zlexer code was changed to return comments more often, so update the
ParseZone documentation to match.
2018-10-15 17:42:31 +10:30
tr3e 501e858f67 Fix issue #742 (#745)
* Fix error comparison in SetTA

* Add testcase TestParseTA()
2018-09-22 18:36:01 +01:00
Tom Thorogood c9b812d1d9 Remove redundant parenthesis (#727)
* Remove redundant parenthesis

These were caught with:
    gofmt -r '(a) -> a' -w *.go

This commit only includes the changes where the formatting makes the
ordering of operations clear.

* Remove more redundant parenthesis

These were caught with:
    gofmt -r '(a) -> a' -w *.go

This commit includes the remaining changes where the formatting does not
make the ordering of operations as clear as the previous commit.
2018-08-16 17:05:27 +01:00
Tom Thorogood 77d95a53d0 Handle empty NSEC3 salt in scanner (#677)
Fixes #676
2018-05-14 20:07:52 +01:00
Miek Gieben e508eecd67
Some linter fixes from Go report card. (#601)
Implement small linter fixes.
2017-12-06 11:31:56 +00:00
spsholleman 052efef004 Add support for TKEY RRs (#567)
* Add support for TKEY RRs

- make sure Key and Data fields are variable-length hex fields
- check in output from 'go generate'
- add a TKEY-specific test to ensure this stays working

* go format changes

* address review comments

* add ability to parse TKEY via string

* handle review comments - change TKEY string output
2017-11-28 07:48:02 +00:00
Miek Gieben 2ae4695cc7
Implement CSYNC (#585)
Implement the CSYNC record.

Fixes #290

Long overdue, let's add this record. Similar in vein to NSEC/NSEC3, we
need to implement len() ourselves. Presentation format parsing and
tests are done as well.

This is CoreDNS running with CSYNC support; `dig` doesn't support this
record at the moment, so:

~~~
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40323
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;csync.example.org.		IN	TYPE62

;; ANSWER SECTION:
csync.example.org.	10	IN	TYPE62	\# 12 000335240042000460000008

;; AUTHORITY SECTION:
example.org.		10	IN	NS	a.iana-servers.net.
example.org.		10	IN	NS	b.iana-servers.net.
~~~
2017-11-25 08:19:06 +00:00
Miek Gieben acff9ce3fa
Fuzzing the text parser: a few fixes (#579)
I'm fuzzing the text parser and that turned up these two. Will do
further fuzzing with these fixes in.
2017-11-20 18:07:37 +00:00
Miek Gieben b38dc3dcb7
Cleanup: gofmt -w -s *.go (#548)
Some renames of internal names to make go lint happier.
2017-11-03 16:15:35 +00:00
Richard Gibson eccf8bbe83 Correctly parse omitted TTLs and relative domains (#513)
* Fix $TTL handling
* Error when there is no TTL for an RR
* Fix relative name handling
* Error when a relative name is used without an origin (cf. https://tools.ietf.org/html/rfc1035#section-5.1 )

Fixes #484
2017-09-26 11:15:37 -04:00
Miek Gieben 767422ac12 Add AVC record (#480)
See
https://www.iana.org/assignments/dns-parameters/AVC/avc-completed-template
for the template. This is a new record that is (again) a mirror of the TXT
record. For lack of a better name, the rdata field is named Txt, as it is in
SPF and TXT.
2017-03-29 22:17:13 +02:00
Jon Nappi c862b7e359 Replace Atoi with ParseUint where appropriate (#470)
* replace Atoi with ParseUint where appropriate

* more Atoi replacements
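
The motivation, as a hedged illustration for a 16-bit unsigned field:
strconv.Atoi yields a signed int that still needs a manual range check before
narrowing, while strconv.ParseUint enforces the bit width itself.

~~~
package example

import "strconv"

// parseUint16Field parses a decimal string into a uint16 (think of an MX
// preference); the function name is made up for the example.
func parseUint16Field(s string) (uint16, error) {
	// Before: i, err := strconv.Atoi(s) followed by i < 0 || i > 65535 checks.
	// After: ParseUint rejects negative and out-of-range input directly.
	u, err := strconv.ParseUint(s, 10, 16)
	if err != nil {
		return 0, err
	}
	return uint16(u), nil
}
~~~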
2017-03-10 21:57:03 +00:00
Richard Gibson f4d2b08694 For consistency with other types, allow empty UINFO RDATA (#424)
Ref https://github.com/miekg/dns/pull/421#discussion_r90610949
2016-12-02 22:38:56 +00:00
Richard Gibson 21314e1838 Fix TXT RDATA parsing (#421)
* Test for proper parsing of whitespace-separated (TXT) character-strings

* Properly parse whitespace-separated (TXT) character-strings

* Remove non-RFC treatment of backslash sequences in character-strings

Fixes gh-420

* For tests, remove non-RFC treatment of backslashes in domain names
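
A short usage check of the fixed behaviour (using the package's NewRR helper
and the TXT type's Txt field): two whitespace-separated quoted strings in the
presentation format should become two separate character-strings in the RDATA.

~~~
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	rr, err := dns.NewRR(`example.org. 3600 IN TXT "first string" "second string"`)
	if err != nil {
		log.Fatal(err)
	}
	txt := rr.(*dns.TXT)
	fmt.Println(len(txt.Txt), txt.Txt) // expect: 2 [first string second string]
}
~~~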
2016-12-02 09:34:49 +00:00
Miek Gieben 46df8c9462 Fix for miekg/dns issue #289: support the SMIMEA record (#410)
1) Refactoring of tlsa.go
   - moved routine to create the certificate rdata to its own go module
     as this is shared between TLSA and SMIMEA records
2) Added support for creating an SMIMEA domain name
3) Developed in accordance with draft-ietf-dane-smime-12 RFC

Miek,

Submitting for your review. Happy to make any recommended changes or
address omissions.

Lightly tested against our internal DNS service which hosts DANE
SMIMEA records for our email certificates.

Parse tests are added.
2016-10-17 18:09:52 +01:00
Miek Gieben dbffa4b057 Kill all reflection when packing/unpacking RR (#372)
Update the size-xxx-member tags to point to another field in the struct
that should be used for the length in that field. Fix NSEC3/HIP and TSIG
to use this and generate the correct pack/unpack functions for them.

Remove IPSECKEY from the lib and handle it as an unknown record - it is
such a horrible RR that it needed kludges before; now it is just handled
as an unknown RR.

All types now use generated pack and unpack functions. The blacklist is
removed.
2016-06-12 18:31:50 +01:00
Miek Gieben 799de7044d Remove WKS support
Support for WKS was incomplete, i.e. the len() method was incorrect.
Remove support for the record and handle it as an unknown one.

Fixes #361
2016-06-05 08:23:44 +01:00
Miek Gieben 475ab80867 Remove (most) reflection
Remove the use of reflection when packing and unpacking; instead,
generate all the pack and unpack functions using msg_generate.
This generates zmsg.go, which in turn calls the helper functions from
msg_helper.go.

This increases the speed by about 30% while cutting back on memory
usage. Not all RRs are using it, but that will be rectified in an
upcoming PR.

Most of the speed increase is in the header/question section parsing.
These functions are *not* generated, but are straightforward enough. The
implementation can be found in msg.go.

The new code has been fuzzed by go-fuzz, which turned up some issues.

All files that started with 'z' and were not autogenerated were renamed,
e.g. zscan.go is now scan.go.

Reflection is still used; in subsequent PRs it will be removed entirely.
2016-06-03 12:45:22 +01:00