Commit Graph

33 Commits

Miek Gieben ce48a4b9ef
small cleanups from Go Report Card (#1268)
I went through the list and cleaned things up here and there.

Signed-off-by: Miek Gieben <miek@miek.nl>
2021-06-17 11:05:49 +02:00
Josh Soref 883641f4a9
Spelling (#1222)
* spelling: artifacts

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: encoding

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: exponent

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: ignoring

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: implemented

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: implements

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: next

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: numeric

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: previous

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: positions

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: presentation

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: resetting

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: stringifying

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: subsequent

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

* spelling: validated

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>

Co-authored-by: Miek Gieben <miek@miek.nl>
2021-02-25 17:08:05 +01:00
Miek Gieben 9dcf47a409
Doc updates (#1075)
* Doc updates

Was reading https://pkg.go.dev/github.com/miekg/dns?tab=doc and spotted
some typos and things that could be slightly better.

Make v unexported, as this version stuff should not be part of the
public API.

Signed-off-by: Miek Gieben <miek@miek.nl>

* fix test

Signed-off-by: Miek Gieben <miek@miek.nl>
2020-02-14 22:47:21 +01:00
Tom Thorogood 25cacca8ca Prohibit newlines before record data in the ZoneParser (#979)
* Merge setRR into ZoneParser.Next

* Remove file argument from RR.parse

This was only used to fill in the ParseError file field. Instead we now
fill in that field in ZoneParser.Next.

* Move dynamic update check out of RR.parse

This consolidates all the dynamic update checks into one place.

* Check for unexpected newline before parsing RR data

* Move rr.parse call into if-statement

* Allow dynamic updates for TKEY and RFC3597 records

* Document that ParseError file field is unset from parse

* Inline allowDynamicUpdate into ZoneParser.Next

* Improve and simplify TestUnexpectedNewline
2019-06-10 07:38:54 +01:00
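
For context, a minimal sketch of how this error surfaces through the public API, assuming the current github.com/miekg/dns ZoneParser (NewZoneParser, Next, Err):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/miekg/dns"
)

func main() {
	// A newline between the RR header and its rdata is now a parse error.
	const zone = "example.org. 3600 IN A\n192.0.2.1\n"

	zp := dns.NewZoneParser(strings.NewReader(zone), "example.org.", "input")
	for rr, ok := zp.Next(); ok; rr, ok = zp.Next() {
		fmt.Println(rr)
	}
	if err := zp.Err(); err != nil {
		fmt.Println("parse error:", err) // the truncated A record is rejected
	}
}
```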
Tom Thorogood 92185d1e17 Simplify PrivateRR copying (#960)
This eliminates mkPrivateRR and reduces the risk of panic.
2019-04-30 09:14:58 +01:00
Tom Thorogood c5493ad28d
Use an interface method for IsDuplicate (#883)
* Prevent IsDuplicate from panicking on wrong Rrtype

* Replace the isDuplicateRdata switch with an interface method

This is substantially simpler and avoids the need to use reflect.

* Add equal RR case to TestDuplicateWrongRrtype

* Move RR_Header duplicate checking back into IsDuplicate

This restores the previous behaviour that RR_Headers were never
equal to each other.

* Revert "Move RR_Header duplicate checking back into IsDuplicate"

This reverts commit a3d7eba50825d546074d81084e639bbecf9cbb57.

* Run go fmt on package

This was a mistake when I merged the master branch into this PR.

* Move isDuplicate method to end of RR interface
2019-01-06 14:42:38 +10:30
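
A short usage sketch of the exported dns.IsDuplicate entry point this refactor backs (hedged; the per-type isDuplicate method itself is unexported):

```go
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	a1, _ := dns.NewRR("example.org. 3600 IN A 192.0.2.1")
	a2, _ := dns.NewRR("example.org. 1800 IN A 192.0.2.1")
	mx, _ := dns.NewRR("example.org. 3600 IN MX 10 mail.example.org.")

	fmt.Println(dns.IsDuplicate(a1, a2)) // true: rdata matches, TTL is ignored
	fmt.Println(dns.IsDuplicate(a1, mx)) // false: a mismatched Rrtype no longer panics
}
```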
Tom Thorogood db3d0ce13b
Use an interface method for parsing zone file records (#886)
* Eliminate Variable bool from parserFunc

Instead we now check whether the last token read from the zlexer was
a zNewline or zEOF. The error check above should be tripped for any
record that ends prematurely.

* Use an interface method for parsing zone file records

* Prevent panic in TestOmittedTTL if no regexp match

* Move slurpRemainder into fixed length parse functions

This is consistent with the original logic in setRR and avoids potential
edge cases.

* Parse synthetic records according to RFC 3597

These records lack a presentation format and cannot be parsed otherwise.
This behaviour is consistent with how this previously operated.
2019-01-06 14:36:16 +10:30
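
A sketch of the RFC 3597 presentation syntax this makes parseable (hedged against current dns.NewRR behaviour; the type code is from the private-use range):

```go
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	// RFC 3597 "unknown record" syntax: \# <rdlength> <hex rdata>.
	rr, err := dns.NewRR(`example.org. 3600 IN TYPE65280 \# 4 c0000201`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s (%T)\n", rr, rr) // parsed as a *dns.RFC3597
}
```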
Tom Thorogood 513c1ff221 Simplify and unify various returns (#893) 2019-01-04 10:19:42 +00:00
Tom Thorogood b955100a79
Move RR header packing out of generated code (#885) 2019-01-04 10:09:14 +10:30
Tom Thorogood 57b81e0614 Use an interface method for unpacking records (#884)
* Use an interface method for unpacking records

* Eliminate err var declaration from unpack functions

* Remove pointless r.Data assignment in PrivateRR.unpack
2019-01-03 17:35:32 +01:00
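
The exported entry point behind this is dns.UnpackRR, which dispatches to the per-type unpack method via the RR interface; a minimal round-trip sketch (hedged):

```go
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	rr, _ := dns.NewRR("example.org. 3600 IN A 192.0.2.1")

	buf := make([]byte, 512)
	off, err := dns.PackRR(rr, buf, 0, nil, false)
	if err != nil {
		log.Fatal(err)
	}

	// Unpack the record back out of wire format.
	rr2, _, err := dns.UnpackRR(buf[:off], 0)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(rr2)
}
```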
Tom Thorogood 2533d75276 Move scanner comment handling out of scanRR (#877)
* Add comment getter to zlexer

* Use zlexer.Comment instead of lex.comment

* Move comment handling out of setRR code

* Move comment field from lex to zlexer

* Eliminate ZoneParser.com field

* Return empty string from zlexer.Comment on error

* Only reset zlexer.comment field once per Next

* Remove zlexer merge TODO

I'm pretty sure these have to remain separate, which is okay.
2018-12-31 10:20:26 +01:00
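
The user-visible side of this is the ZoneParser's comment accessor; a sketch, assuming the (*dns.ZoneParser).Comment method:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/miekg/dns"
)

func main() {
	const zone = "example.org. 3600 IN A 192.0.2.1 ; primary address\n"

	zp := dns.NewZoneParser(strings.NewReader(zone), "example.org.", "input")
	for rr, ok := zp.Next(); ok; rr, ok = zp.Next() {
		// Comment returns any comment that accompanied the current RR.
		fmt.Println(rr, "//", zp.Comment())
	}
	if err := zp.Err(); err != nil {
		fmt.Println(err)
	}
}
```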
Tom Thorogood ff7d445081 Avoid setting the Rdlength field when packing records (#859)
* Avoid setting the Rdlength field when packing

The offset of the end of the header is now returned from the RR.pack
method, with the RDLENGTH record field being written in packRR.

To maintain compatibility with callers of PackRR who might be relying
on this old behaviour, PackRR will now set rr.Header().Rdlength for
external callers. Care must be taken by callers to ensure this won't
cause a data race.

* Prevent panic if TestClientLocalAddress fails

This came up during testing of the previous change.

* Change style of overflow check in packRR
2018-12-02 08:23:35 +00:00
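
A sketch of the compatibility behaviour described above, using the public dns.PackRR (hedged):

```go
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	rr, _ := dns.NewRR("example.org. 3600 IN TXT \"hello\"")

	buf := make([]byte, 512)
	off, err := dns.PackRR(rr, buf, 0, nil, false)
	if err != nil {
		log.Fatal(err)
	}

	// For external callers PackRR still fills in Rdlength; avoid sharing rr
	// between goroutines while packing, or this write is a data race.
	fmt.Println(off, rr.Header().Rdlength)
}
```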
Tom Thorogood 470f08e191
Reduce compression memory use with map[string]uint16 (#852)
* Reduce compression memory use with map[string]uint16

map[string]uint16 uses 25% less memory per entry than map[string]int:
(16+2)/(16+8) = 0.75. All entries in the compression map are bounded by
maxCompressionOffset, which is 14 bits and fits within a uint16.

* Add PackMsg benchmark with more RRs

* Add a comment to the compressionMap struct
2018-12-02 08:50:51 +10:30
Tom Thorogood 778aa4f83d
Properly calculate compressed message lengths (#833)
* Remove fullSize return from compressionLenSearch

This wasn't used anywhere but TestCompressionLenSearch, and was very
wrong.

* Add generated compressedLen functions and use them

This replaces the confusing and complicated compressionLenSlice
function.

* Use compressedLenWithCompressionMap even for uncompressed

This leaves the len() functions unused and they'll soon be removed.

This also fixes the off-by-one error of compressedLen when a (Q)NAME
is ".".

* Use Len helper instead of RR.len private method

* Merge len and compressedLen functions

* Merge compressedLen helper into Msg.Len

* Remove compress bool from compressedLenWithCompressionMap

* Merge map insertion into compressionLenSearch

This eliminates the need to loop over the domain name twice when we're
compressing the name.

* Use compressedNameLen for NSEC.NextDomain

This was a mistake.

* Remove compress from RR.len

* Add test case for multiple questions length

* Add test case for MINFO and SOA compression

These are the only RRs with multiple compressible names within the same
RR, and they were previously broken.

* Rename compressedNameLen to domainNameLen

It also handles the length of uncompressed domain names.

* Use off directly instead of len(s[:off])

* Move initial maxCompressionOffset check out of compressionLenMapInsert

This should allow us to avoid the call overhead of
compressionLenMapInsert in certain limited cases and may result in a
slight performance increase.

compressionLenMapInsert still has a maxCompressionOffset check inside
the for loop.

* Rename compressedLenWithCompressionMap to msgLenWithCompressionMap

This better reflects that it also calculates the uncompressed length.

* Merge TestMsgCompressMINFO with TestMsgCompressSOA

They're both testing the same thing.

* Remove compressionLenMapInsert

compressionLenSearch does everything compressionLenMapInsert did anyway.

* Only call compressionLenSearch in one place in domainNameLen

* Split if statement in domainNameLen

The last two commits worsened the performance of domainNameLen
noticeably; this change restores its original performance.

name                            old time/op    new time/op    delta
MsgLength-12                       550ns ±13%     510ns ±21%    ~     (p=0.050 n=10+10)
MsgLengthNoCompression-12         26.9ns ± 2%    27.0ns ± 1%    ~     (p=0.198 n=9+10)
MsgLengthPack-12                  2.30µs ±12%    2.26µs ±16%    ~     (p=0.739 n=10+10)
MsgLengthMassive-12               32.9µs ± 7%    32.0µs ±10%    ~     (p=0.243 n=9+10)
MsgLengthOnlyQuestion-12          9.60ns ± 1%    9.20ns ± 1%  -4.16%  (p=0.000 n=9+9)

* Remove stray newline from TestMsgCompressionMultipleQuestions

* Remove stray newline in length_test.go

This was introduced when resolving merge conflicts.
2018-11-30 10:03:41 +10:30
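
A sketch checking the property this PR establishes: Msg.Len should agree with the packed size even with compression enabled (hedged; SOA is used because it holds two compressible names in one RR):

```go
package main

import (
	"fmt"
	"log"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	m.SetQuestion("example.org.", dns.TypeSOA)
	soa, _ := dns.NewRR("example.org. 3600 IN SOA ns1.example.org. hostmaster.example.org. 1 7200 3600 1209600 3600")
	m.Answer = append(m.Answer, soa)
	m.Compress = true

	want := m.Len()
	packed, err := m.Pack()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(want, len(packed)) // the two should now agree
}
```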
Tom Thorogood 17c1bc6792
Eliminate lexer goroutines (#792)
* Eliminate zlexer goroutine

This replaces the zlexer goroutine and channels with a zlexer struct
that maintains state and provides a channel-like API.

* Eliminate klexer goroutine

This replaces the klexer goroutine and channels with a klexer struct
that maintains state and provides a channel-like API.

* Merge scan into zlexer and klexer

This does result in tokenText existing twice, but it's pretty simple
and small, so it's not that bad.

* Avoid using text/scanner.Position to track position

* Track escape within zlexer.Next

* Avoid zl.commt check on space and tab in zlexer

* Track stri within zlexer.Next

* Track comi within zlexer.Next

There is one special case at the start of a comment that needs to be
handled, otherwise this is as simple as stri was.

* Use a single token buffer in zlexer

This is safe as there is never both a non-empty string buffer and a
non-empty comment buffer.

* Don't hardcode length of zl.tok in zlexer

* Eliminate lex.length field

This is always set to len(l.token) and is only queried in a few places.

It was added in 47cc5b052d without any
obvious need.

* Add whitespace to klexer.Next

* Track lex within klexer.Next

* Use a strings.Builder in klexer.Next

* Simplify : case in klexer.Next

* Add whitespace to zlexer.Next

* Change for loop style in zlexer.Next and klexer.Next

* Surface read errors in zlexer

* Surface read errors from klexer

* Remove debug line from parseKey

* Rename tokenText to readByte

* Make readByte return ok bool

Also change the for loop style to match the Next for loops.

* Make readByte errors sticky

klexer.Next calls readByte separately from within the loop. Without
readByte being sticky, an error that occurs during that readByte call
may be lost.

* Panic in testRR if the error is non-nil

* Add whitespace and unify field setting in zlexer.Next

* Remove eof fields from zlexer and klexer

With readByte having sticky errors, this is no longer needed. zl.eof = true
was also in the wrong place and could mask an unbalanced brace error.

* Merge zl.tok blocks in zlexer.Next

* Split the tok buffer into separate string and comment buffers

The invariant of stri > 0 && comi > 0 never being true was broken when
x == '\n' && !zl.quote && zl.commt && zl.brace != 0 (the
"If not in a brace this ends the comment AND the RR" block).

Split the buffer back out into two separate buffers to avoid clobbering.

* Replace token slices with arrays in zlexer

* Add a NewRR benchmark

* Move token buffers into zlexer.Next

These don't need to be retained across Next calls and can be stack
allocated inside Next. This drastically reduces memory consumption as
they accounted for nearly half of all the memory used.

name      old alloc/op   new alloc/op   delta
NewRR-12    9.72kB ± 0%    4.98kB ± 0%  -48.72%  (p=0.000 n=10+10)

* Add a ReadRR benchmark

Unlike NewRR, this will use an io.Reader that does not implement any
methods aside from Read. In particular it does not implement
io.ByteReader.

* Avoid using a bufio.Reader for io.ByteReader readers

At the same time use a smaller buffer size of 1KiB rather than the
bufio.NewReader default of 4KiB.

name       old time/op    new time/op    delta
NewRR-12     11.0µs ± 3%     9.5µs ± 2%  -13.77%  (p=0.000 n=9+10)
ReadRR-12    11.2µs ±16%     9.8µs ± 1%  -13.03%  (p=0.000 n=10+10)

name       old alloc/op   new alloc/op   delta
NewRR-12     4.98kB ± 0%    0.81kB ± 0%  -83.79%  (p=0.000 n=10+10)
ReadRR-12    4.87kB ± 0%    1.82kB ± 0%  -62.73%  (p=0.000 n=10+10)

name       old allocs/op  new allocs/op  delta
NewRR-12       19.0 ± 0%      17.0 ± 0%  -10.53%  (p=0.000 n=10+10)
ReadRR-12      19.0 ± 0%      19.0 ± 0%     ~     (all equal)

* Surface any remaining comment from zlexer.Next

* Improve comment handling in zlexer.Next

This both fixes a regression where comments could be lost under certain
circumstances and now emits comments that occur within braces.

* Remove outdated comment from zlexer.Next and klexer.Next

* Delay converting LF to space in braced comment

* Fixup TestParseZoneComments

* Remove tokenUpper field from lex

Not computing this for every token, and instead computing it
only when needed, is a substantial performance improvement.

name       old time/op    new time/op    delta
NewRR-12     9.56µs ± 0%    6.30µs ± 1%  -34.08%  (p=0.000 n=9+10)
ReadRR-12    9.93µs ± 1%    6.67µs ± 1%  -32.77%  (p=0.000 n=10+10)

name       old alloc/op   new alloc/op   delta
NewRR-12       824B ± 0%      808B ± 0%   -1.94%  (p=0.000 n=10+10)
ReadRR-12    1.83kB ± 0%    1.82kB ± 0%   -0.87%  (p=0.000 n=10+10)

name       old allocs/op  new allocs/op  delta
NewRR-12       17.0 ± 0%      17.0 ± 0%     ~     (all equal)
ReadRR-12      19.0 ± 0%      19.0 ± 0%     ~     (all equal)

* Update ParseZone documentation to match comment changes

The zlexer code was changed to return comments more often, so update the
ParseZone documentation to match.
2018-10-15 17:42:31 +10:30
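
For reference, a sketch of the two entry points the benchmarks above exercise (hedged):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/miekg/dns"
)

func main() {
	// NewRR parses a single record from a string.
	rr1, err := dns.NewRR("example.org. 3600 IN MX 10 mail.example.org.")
	fmt.Println(rr1, err)

	// ReadRR parses from an io.Reader, which may not implement io.ByteReader.
	r := strings.NewReader("example.org. 3600 IN A 192.0.2.1\n")
	rr2, err := dns.ReadRR(r, "input")
	fmt.Println(rr2, err)
}
```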
Tom Thorogood 7f61c6631b
Fix dominikh/go-tools nits (#758)
* Remove unused functions and consts

* Address gosimple nits

* Address staticcheck nits

This excludes several that were intentional or weren't actual errors.

* Reduce size of lex struct

This reduces the size of the lex struct by 8 bytes from:
  lex.token string: 0-16 (size 16, align 8)
  lex.tokenUpper string: 16-32 (size 16, align 8)
  lex.length int: 32-40 (size 8, align 8)
  lex.err bool: 40-41 (size 1, align 1)
  lex.value uint8: 41-42 (size 1, align 1)
  padding: 42-48 (size 6, align 0)
  lex.line int: 48-56 (size 8, align 8)
  lex.column int: 56-64 (size 8, align 8)
  lex.torc uint16: 64-66 (size 2, align 2)
  padding: 66-72 (size 6, align 0)
  lex.comment string: 72-88 (size 16, align 8)
to:
  lex.token string: 0-16 (size 16, align 8)
  lex.tokenUpper string: 16-32 (size 16, align 8)
  lex.length int: 32-40 (size 8, align 8)
  lex.err bool: 40-41 (size 1, align 1)
  lex.value uint8: 41-42 (size 1, align 1)
  lex.torc uint16: 42-44 (size 2, align 2)
  padding: 44-48 (size 4, align 0)
  lex.line int: 48-56 (size 8, align 8)
  lex.column int: 56-64 (size 8, align 8)
  lex.comment string: 64-80 (size 16, align 8)

* Reduce size of response struct

This reduces the size of the response struct by 8 bytes from:
  response.msg []byte: 0-24 (size 24, align 8)
  response.hijacked bool: 24-25 (size 1, align 1)
  padding: 25-32 (size 7, align 0)
  response.tsigStatus error: 32-48 (size 16, align 8)
  response.tsigTimersOnly bool: 48-49 (size 1, align 1)
  padding: 49-56 (size 7, align 0)
  response.tsigRequestMAC string: 56-72 (size 16, align 8)
  response.tsigSecret map[string]string: 72-80 (size 8, align 8)
  response.udp *net.UDPConn: 80-88 (size 8, align 8)
  response.tcp net.Conn: 88-104 (size 16, align 8)
  response.udpSession *github.com/tmthrgd/dns.SessionUDP: 104-112 (size 8, align 8)
  response.writer github.com/tmthrgd/dns.Writer: 112-128 (size 16, align 8)
  response.wg *sync.WaitGroup: 128-136 (size 8, align 8)
to:
  response.msg []byte: 0-24 (size 24, align 8)
  response.hijacked bool: 24-25 (size 1, align 1)
  response.tsigTimersOnly bool: 25-26 (size 1, align 1)
  padding: 26-32 (size 6, align 0)
  response.tsigStatus error: 32-48 (size 16, align 8)
  response.tsigRequestMAC string: 48-64 (size 16, align 8)
  response.tsigSecret map[string]string: 64-72 (size 8, align 8)
  response.udp *net.UDPConn: 72-80 (size 8, align 8)
  response.tcp net.Conn: 80-96 (size 16, align 8)
  response.udpSession *github.com/tmthrgd/dns.SessionUDP: 96-104 (size 8, align 8)
  response.writer github.com/tmthrgd/dns.Writer: 104-120 (size 16, align 8)
  response.wg *sync.WaitGroup: 120-128 (size 8, align 8)
2018-09-27 04:02:05 +09:30
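
The lex savings come purely from field order: moving the small torc field up into the padding hole after value removes one 8-byte hole. A self-contained sketch with hypothetical mirror structs (field names are illustrative; sizes assume a 64-bit platform):

```go
package main

import (
	"fmt"
	"unsafe"
)

// Mirror of the original lex layout described above.
type lexBefore struct {
	token, tokenUpper string
	length            int
	err               bool
	value             uint8
	// 6 bytes padding here
	line, column int
	torc         uint16
	// 6 bytes padding here
	comment string
}

// Mirror of the reordered layout: torc sits in the old padding hole.
type lexAfter struct {
	token, tokenUpper string
	length            int
	err               bool
	value             uint8
	torc              uint16
	// only 4 bytes padding here
	line, column int
	comment      string
}

func main() {
	fmt.Println(unsafe.Sizeof(lexBefore{}), unsafe.Sizeof(lexAfter{})) // 88 80
}
```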
Miek Gieben a93f3e4f6b
copyHeader is redundant (#672)
copyHeader() is redundant: we allocate a header and then copy the
non-pointer elements into it. We don't need to do this, because if we
just assign rr.Hdr to something else we get the same result.

Remove copyHeader() and the generation and use of it in ztypes.go.
2018-05-10 14:50:26 +01:00
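
The observation in code form (a sketch; dns.RR_Header contains only value fields, so plain struct assignment is already a full copy):

```go
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	a, _ := dns.NewRR("example.org. 3600 IN A 192.0.2.1")

	hdr := *a.Header() // assignment copies every field; no pointers inside
	hdr.Ttl = 60       // mutating the copy leaves the original untouched

	fmt.Println(a.Header().Ttl, hdr.Ttl) // 3600 60
}
```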
Miek Gieben 6ae3b9f061 Skip reflection for most types (#369)
Make the reflection types a blacklist (these types use, or should use,
the tag 'size-xxx' in their struct definitions):

HIP, IPSECKEY, NSEC3, TSIG

All other types don't use reflection anymore.

* Return a pointer to the header when there is no rdata; this restores old
  behavior. The rest of the conversion mostly hangs on getting size-hex
  right, but then packStruct and packStructValue and the unpack variant
  can be killed.
* Generate pack and unpack for all embedded types as well.
* Fix PrivateRRs: register an unpack function as well when you register
  a new PrivateRR.
* Add the tags octet, nsec, []domains, and more to msg_helper.go
2016-06-12 16:09:37 +01:00
Miek Gieben 907a4aef57 Generate pack/unpack for all RRs (#360)
Add a dns:txt parsing helper to prevent compile errors. This allows
us to generate all pack/unpack functions.

Add pack to the RR interface definition and add this method to
PrivateRR.

We still use typeToUnpack to select which types don't use reflection.
2016-06-05 07:53:12 +01:00
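
How registration looks from the outside: a sketch of a toy private type, assuming the dns.PrivateHandle/dns.PrivateRdata API (the ISBN name and type code are illustrative only):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/miekg/dns"
)

// ISBN is a toy rdata type holding one opaque string.
type ISBN struct{ Data string }

func (i *ISBN) String() string                 { return i.Data }
func (i *ISBN) Parse(txt []string) error       { i.Data = strings.Join(txt, " "); return nil }
func (i *ISBN) Pack(buf []byte) (int, error)   { return copy(buf, i.Data), nil }
func (i *ISBN) Unpack(buf []byte) (int, error) { i.Data = string(buf); return len(buf), nil }
func (i *ISBN) Len() int                       { return len(i.Data) }
func (i *ISBN) Copy(dst dns.PrivateRdata) error {
	d, ok := dst.(*ISBN)
	if !ok {
		return dns.ErrRdata
	}
	d.Data = i.Data
	return nil
}

func main() {
	const typeISBN = 0xFF00 // private-use type code, illustrative only
	dns.PrivateHandle("ISBN", typeISBN, func() dns.PrivateRdata { return new(ISBN) })
	defer dns.PrivateHandleRemove(typeISBN)

	rr, err := dns.NewRR("example.org. 3600 IN ISBN 90-74645-35-8")
	fmt.Println(rr, err)
}
```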
Filippo Valsorda 023972bb19 Expose TypeToRR 2015-10-16 23:36:49 +01:00
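A sketch of what exposing it allows (hedged): TypeToRR maps a type code to a constructor for an empty RR of that type.

```go
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	rr := dns.TypeToRR[dns.TypeMX]() // fresh, zero-valued *dns.MX
	mx := rr.(*dns.MX)
	mx.Hdr = dns.RR_Header{Name: "example.org.", Rrtype: dns.TypeMX, Class: dns.ClassINET, Ttl: 3600}
	mx.Preference = 10
	mx.Mx = "mail.example.org."
	fmt.Println(mx)
}
```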
Miek Gieben 79547a0341 Add Dedup function
Add a function that dedups a list of RRs. It works on strings, which
adds garbage, but this seems to be the least intrusive approach and
takes the least amount of memory.

Some fmt changes snuck in as well.
2015-08-24 22:02:57 +01:00
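
Usage sketch of the new function (hedged against the dns.Dedup signature, which also takes an optional scratch map):

```go
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	a1, _ := dns.NewRR("example.org. 3600 IN A 192.0.2.1")
	a2, _ := dns.NewRR("example.org. 1800 IN A 192.0.2.1")
	a3, _ := dns.NewRR("example.org. 3600 IN A 192.0.2.2")

	// Passing nil lets Dedup allocate its own map keyed on RR strings.
	rrs := dns.Dedup([]dns.RR{a1, a2, a3}, nil)
	fmt.Println(len(rrs)) // 2: the first two records collapse into one
}
```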
Miek Gieben 9bf52083d1 golint fixes 2015-08-23 08:03:13 +01:00
Miek Gieben 64fea017a2 Move all docs to docs.go
Another golint change.
2015-02-19 13:47:50 +00:00
Miek Gieben b02ae55fbe golint changes, lose the underscores 2015-02-19 10:45:59 +00:00
Alex Sergeyev aaf867499e Attempting to fix #133
- Trying to get as close to the original state as possible
- Since private RRs should not run slurp, toggling Variable there.
2014-09-23 21:59:09 -04:00
Miek Gieben 45e66bfe1d Rename ParseTextSlice to Parse.
Shorter.
2014-09-21 19:09:13 +01:00
Miek Gieben 45f3dc89e4 Rename CopyRdata to just Copy.
Still not sure if this makes things clearer.
2014-09-21 18:30:16 +01:00
Miek Gieben 685f708a9e Rename RdataLen to just Len 2014-09-21 18:21:34 +01:00
Miek Gieben 6ee673464d Small doc update 2014-09-21 09:09:50 +01:00
Miek Gieben 9c9f3646ff More naming and some docs 2014-09-21 09:07:26 +01:00
Miek Gieben b90da23741 Fix tests and examples.
Put all examples in the same file and cleanup the tests to use
the new names.
2014-09-21 08:38:17 +01:00
Miek Gieben 9c455b0214 PrivateRR: naming naming naming
Try to find better (=more in sync with the rest of the lib) naming. My
guess is that these are better, but YMMV.
2014-09-21 08:28:38 +01:00
Miek Gieben 7e792338bb Move custom* filenames to private to be in sync with the type 2014-09-21 07:56:01 +01:00