Commit Graph

36 Commits

Author SHA1 Message Date
Miek Gieben ce48a4b9ef
small cleanups from the Go Report Card (#1268)
I went through the list and cleaned things up here and there.

Signed-off-by: Miek Gieben <miek@miek.nl>
2021-06-17 11:05:49 +02:00
joseph-stanton-ax 9922549621
Incorrect HIP RR public key length decode (#1262)
When decoding a HIP resource record, 'base64.StdEncoding.DecodedLen' can return a length larger than that of the decoded public key, because for padded input it only reports an upper bound. This change decodes the public key and then records the actual length. In our tests the public key length was being set to 33 instead of 32 (a short illustration follows this entry). Below is our offending resource record:
'23b5993f649c0827.a.b.c.      3600    IN      HIP     5 200100100020001523B5993F649C0827 Cm6k4jhir9YYoKq9JDqD3Ob1hBfCuwbWam1igFPhkGg='
2021-05-13 09:33:12 +02:00
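The commit above hinges on a subtlety of the standard library: `base64.StdEncoding.DecodedLen` only reports the maximum possible decoded size, while `Decode` returns the number of bytes actually written. A minimal sketch (not code from this repository) using the sample record's key:

~~~ go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// The 32-byte public key from the resource record quoted above.
	const b64 = "Cm6k4jhir9YYoKq9JDqD3Ob1hBfCuwbWam1igFPhkGg="

	// DecodedLen reports the *maximum* number of bytes the input can decode
	// to; for padded base64 this can exceed the real length.
	maxLen := base64.StdEncoding.DecodedLen(len(b64)) // 33

	// Decode returns the number of bytes actually written, which is the
	// length a field such as the HIP PublicKeyLength should reflect.
	buf := make([]byte, maxLen)
	n, err := base64.StdEncoding.Decode(buf, []byte(b64))
	if err != nil {
		panic(err)
	}
	fmt.Println(maxLen, n) // 33 32
}
~~~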
Miek Gieben db96610e5e
loc: use float64 to get enough precision (#1234)
Fixes: #1232

Signed-off-by: Miek Gieben <miek@miek.nl>
2021-03-01 15:19:31 +01:00
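The commit above moves the LOC parser's intermediate math from float32 to float64. As a rough illustration of why (assuming the coordinate is handled in RFC 1876's wire unit of thousandths of a second of arc, which needs up to nine significant digits):

~~~ go
package main

import "fmt"

func main() {
	// 89°59'59.999" expressed in thousandths of a second of arc,
	// the unit LOC uses on the wire (RFC 1876).
	const ms = 323999999.0

	// float32 carries only about seven significant decimal digits,
	// so the value is silently rounded; float64 holds it exactly.
	fmt.Println(uint32(float32(ms)), uint32(float64(ms))) // 324000000 323999999
}
~~~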
madestro 375601dc88
Implementation of zone digest (ZONEMD) (#1208)
* adding ZONEMD

* adding ZONEMD

* deleting extra code

* updating constants

* updating mod

* updating release

* Moving ZONEMD Implementation to project structure

* re-adding the indirect tools import

* case-insensitive digest

* fixing handling when the zone has RFC 3597 RRs

* remove .idea folder

* restore go.mod imports

* gofmt files

* pseudo rollback

* after go generate...

* parsing zonemd in rfc3597

* removing the check for a STRING as Hash Algorithm in ZONEMD; the RFC says only numbers go there

* adding ZONEMD constants

* Reverting changes in generate.go

un-gofmt-ing the generate.go file

* Reverting changes in generate.go

un-gofmt-ing the generate.go file

* remove ZoneMD reserved types

* remove zonemd RFC3597 branch in ZONEMD parser

* revert rfc3597 related modifications

* revert rfc3597 related modifications

* removing unintentional changes from go.sum and types.go

* add line break to go.sum

* removing spaces from types.go

* Use ZONEMD official RFC link as reference

* Add ZONEMD parsing test

* Update parse_test.go

Co-authored-by: Miek Gieben <miek@miek.nl>

Co-authored-by: Eduardo <eriveros@dcc.uchile.cl>
Co-authored-by: Eduardo <e.sdfbsadjhgskndwegit@xor.cl>
Co-authored-by: Eduardo <e.git@xor.cl>
2021-02-26 16:35:05 +01:00
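With ZONEMD support merged, the record can be parsed through the usual `dns.NewRR` entry point. A minimal sketch, assuming the standard presentation format (serial, scheme, hash algorithm, hex digest) and using a placeholder digest of the right length for SHA-384 rather than a real zone digest:

~~~ go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/miekg/dns"
)

func main() {
	// Serial 2021022600, scheme 1 (SIMPLE), hash algorithm 1 (SHA-384).
	// The digest is a 96-hex-character placeholder, not a real digest.
	digest := strings.Repeat("ab", 48)
	rr, err := dns.NewRR("example.org. 86400 IN ZONEMD 2021022600 1 1 " + digest)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(rr)
}
~~~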
Shubhendra Singh Chauhan 2f14d104f3
improve code quality (#1228)
* Combine multiple `append`s into a single call

* Fix Yoda conditions

* Fix check for empty string

* revert "combine multiple `append`s"
2021-02-25 17:01:55 +01:00
Tom Thorogood e6df8867af
Fix Rdlength related parsing bug in RFC3597 records (#1214)
* Set Rdlength in fromRFC3597

This was a bug found by oss-fuzz. My bad (#1211).

* Limit maximum length of Rdata in (*RFC3597).parse

The length of RDATA (RDLENGTH) is a 16-bit unsigned integer, so the Rdata cannot exceed 65535 octets (see the sketch after this entry).

* Validate Rdlength and off in UnpackRRWithHeader

* Revert "Validate Rdlength and off in UnpackRRWithHeader"

This reverts commit 2f6a8811b944b100af7605e53a6fb164944a6d65.

* Use hex.DecodedLen in (*RFC3597).fromRFC3597

While this isn't done elsewhere, it is clearer and more obvious.
2021-02-01 09:10:38 +01:00
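The two ideas in the commit above - `hex.DecodedLen` giving the decoded size of RFC 3597 `\# <length> <hex>` data, and that size having to fit the 16-bit RDLENGTH field - can be shown with a small hypothetical helper (not the library's `(*RFC3597).parse`):

~~~ go
package main

import (
	"encoding/hex"
	"errors"
	"fmt"
	"math"
)

// checkRdataLen returns the decoded length of hex-encoded rdata and rejects
// anything that could not be carried in a 16-bit RDLENGTH.
func checkRdataLen(hexData string) (int, error) {
	n := hex.DecodedLen(len(hexData)) // len/2, without decoding
	if n > math.MaxUint16 {
		return 0, errors.New("rdata longer than 65535 octets")
	}
	return n, nil
}

func main() {
	n, err := checkRdataLen("000335240042000460000008") // 12 octets of sample rdata
	fmt.Println(n, err)                                  // 12 <nil>
}
~~~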
JINMEI Tatuya 81df27db17
validate that LOC's lat/long field values are not out of range (#1149)
Automatically submitted.
2020-08-17 07:07:46 +00:00
Disconnect3d 86044e4e05
Fixes a TODO to "error out on > MAX_UINT32" (#1147)
Automatically submitted.
2020-08-17 05:59:54 +00:00
Catena cyber 5bfe94bb6e
Efficient string concatenation (#1105)
Found by oss-fuzz
2020-04-28 09:21:06 +02:00
Manabu Sonoda 1d3a971542
Fix NSEC3PARAM SaltLength when parsing (#1088)
* fix parsing of the NSEC3PARAM SaltLength

* move the division inside the cast
2020-03-11 10:24:25 +01:00
Miek Gieben f0dca1ef05
Fix all cases of error handling (#1087)
Automatically submitted.
2020-03-10 06:42:28 +00:00
Pavel Rybintsev 418631f446
correct default field values in LOC record (#1084)
* Fixed the default values of HorizPre and VertPre

According to RFC-1876 those fields should be:

"a pair of four-bit unsigned
integers, each ranging from zero to nine, with the most
significant four bits representing the base and the second
number representing the power of ten by which to multiply
the base.  This allows sizes from 0e0 (<1cm) to 9e9
(90,000km) to be expressed"

The current values for HorizPre and VertPre (165 = 0xA5 and 162 = 0xA2)
are incorrect because the first hex digit is greater than 9.

The default values should be:

HorizPre = 10000m = 10000 * 100 cm = 10^6 = 0x16
VertPre  = 10m    = 10 * 100 cm    = 10^3 = 0x13
Size     = 1m     = 1 * 100 cm     = 10^2 = 0x12

The value of Size was correct, but this PR changes it to hex
representation to make it more readable.

* Informative comments

Made comments on LOC record default field values more informative

Co-Authored-By: Richard Gibson <richard.gibson@gmail.com>

Co-authored-by: Richard Gibson <richard.gibson@gmail.com>
2020-03-02 08:48:01 +00:00
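The base/exponent scheme quoted from RFC 1876 above is easy to reproduce: the upper four bits hold a single decimal digit and the lower four bits hold a power of ten, both counted in centimetres. A small illustrative encoder (not the library's code):

~~~ go
package main

import "fmt"

// toLocE encodes a size or precision given in centimetres into the RFC 1876
// four-bit base / four-bit power-of-ten form used by Size, HorizPre and
// VertPre. Digits below the leading one are simply truncated.
func toLocE(cm uint64) uint8 {
	exp := uint8(0)
	for cm >= 10 {
		cm /= 10
		exp++
	}
	return uint8(cm)<<4 | exp
}

func main() {
	fmt.Printf("Size     1m   -> %#x\n", toLocE(100))     // 0x12
	fmt.Printf("VertPre  10m  -> %#x\n", toLocE(1000))    // 0x13
	fmt.Printf("HorizPre 10km -> %#x\n", toLocE(1000000)) // 0x16
}
~~~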
Jan Včelák c9b62b4215 APL record support (#1058)
* APL record: add structure and code point

* APL record: add wire format support

* APL record: add presentation format support

* APL record: add isDuplicate implementation

* APL record: add copy implementation

* APL record: add len implementation

* APL record: run go generate

* APL record: fix condition checking for equality

* APL record: use switches to map family to address length

* APL record: check bounds of individual fields rather than whole header

* APL record: stylistic changes

* APL record: remove APLPrefix methods from public interface

* APL record: update README

* APL record: additional cleanup for code review

* APL record: change return type from pointer to struct

* APL record: refactor of pack and unpack to eliminate extra variables
2020-01-03 13:41:45 +01:00
Tom Thorogood 25cacca8ca Prohibit newlines before record data in the ZoneParser (#979)
* Merge setRR into ZoneParser.Next

* Remove file argument from RR.parse

This was only used to fill in the ParseError file field. Instead we now
fill in that field in ZoneParser.Next.

* Move dynamic update check out of RR.parse

This consolidates all the dynamic update checks into one place.

* Check for unexpected newline before parsing RR data

* Move rr.parse call into if-statement

* Allow dynamic updates for TKEY and RFC3597 records

* Document that ParseError file field is unset from parse

* Inline allowDynamicUpdate into ZoneParser.Next

* Improve and simplify TestUnexpectedNewline
2019-06-10 07:38:54 +01:00
Tom F 487e4636d5 ZoneParser: error on parsing an IPv6 address in an A record (#923)
* ZoneParser: error on parsing an IPv6 address in an A record

And vice versa for IPv4 with AAAA.

The implementation of isIPv6 is inspired by e341bae08d/src/net/ip.go (L678-L681).

* Fix benchmarks that try to use ::1 as A record.

* Test A/AAAA parsing via NewRR rather than zone parser.

* Document why we distinguish IPv4 vs IPv6 via existence of ":".
2019-03-09 09:02:18 +00:00
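The check described in the last bullet is simple enough to restate as a sketch (the library's actual helper may differ): the textual form of every IPv6 address contains a colon and no textual IPv4 address does, so the presence of ':' distinguishes the two when validating A versus AAAA rdata.

~~~ go
package main

import (
	"fmt"
	"strings"
)

// isIPv6 mirrors the heuristic referenced above: a colon can only appear in
// the textual form of an IPv6 address.
func isIPv6(s string) bool {
	return strings.Contains(s, ":")
}

func main() {
	fmt.Println(isIPv6("192.0.2.1")) // false - fine for an A record
	fmt.Println(isIPv6("::1"))       // true  - must be rejected for an A record
}
~~~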
Tom Thorogood db3d0ce13b
Use an interface method for parsing zone file records (#886)
* Eliminate Variable bool from parserFunc

Instead we now check whether the last token read from the zlexer was
a zNewline or zEOF. The error check above should be tripped for any
record that ends prematurely.

* Use an interface method for parsing zone file records

* Prevent panic in TestOmittedTTL if no regexp match

* Move slurpRemainder into fixed length parse functions

This is consistent with the original logic in setRR and avoids potential
edge cases.

* Parse synthetic records according to RFC 3597

These records lack a presentation format and cannot be parsed otherwise.
This behaviour is consistent with how this previously operated.
2019-01-06 14:36:16 +10:30
Tom Thorogood 29b9bf368b Remove pointless casts (#895)
* Remove pointless casts

These are all casts where the value was already of the same type.

* Use var style for zero-value not cast style
2019-01-04 10:30:55 +00:00
Tom Thorogood 2533d75276 Move scanner comment handling out of scanRR (#877)
* Add comment getter to zlexer

* Use zlexer.Comment instead of lex.comment

* Move comment handling out of setRR code

* Move comment field from lex to zlexer

* Eliminate ZoneParser.com field

* Return empty string from zlexer.Comment on error

* Only reset zlexer.comment field once per Next

* Remove zlexer merge TODO

I'm pretty sure these have to remain separate which is okay.
2018-12-31 10:20:26 +01:00
Tom Thorogood 17c1bc6792
Eliminate lexer goroutines (#792)
* Eliminate zlexer goroutine

This replaces the zlexer goroutine and channels with a zlexer struct
that maintains state and provides a channel-like API.

* Eliminate klexer goroutine

This replaces the klexer goroutine and channels with a klexer struct
that maintains state and provides a channel-like API.

* Merge scan into zlexer and klexer

This does result in tokenText existing twice, but it's pretty simple
and small so it's not that bad.

* Avoid using text/scanner.Position to track position

* Track escape within zlexer.Next

* Avoid zl.commt check on space and tab in zlexer

* Track stri within zlexer.Next

* Track comi within zlexer.Next

There is one special case at the start of a comment that needs to be
handled, otherwise this is as simple as stri was.

* Use a single token buffer in zlexer

This is safe as there is never both a non-empty string buffer and a
non-empty comment buffer.

* Don't hardcode length of zl.tok in zlexer

* Eliminate lex.length field

This is always set to len(l.token) and is only queried in a few places.

It was added in 47cc5b052d without any
obvious need.

* Add whitespace to klexer.Next

* Track lex within klexer.Next

* Use a strings.Builder in klexer.Next

* Simplify : case in klexer.Next

* Add whitespace to zlexer.Next

* Change for loop style in zlexer.Next and klexer.Next

* Surface read errors in zlexer

* Surface read errors from klexer

* Remove debug line from parseKey

* Rename tokenText to readByte

* Make readByte return ok bool

Also change the for loop style to match the Next for loops.

* Make readByte errors sticky

klexer.Next calls readByte separately from within the loop. Without
readByte being sticky, an error that occurs during that readByte call
may be lost.

* Panic in testRR if the error is non-nil

* Add whitespace and unify field setting in zlexer.Next

* Remove eof fields from zlexer and klexer

With readByte having sticky errors, this is no longer needed. zl.eof = true
was also in the wrong place and could mask an unbalanced brace error.

* Merge zl.tok blocks in zlexer.Next

* Split the tok buffer into separate string and comment buffers

The invariant of stri > 0 && comi > 0 never being true was broken when
x == '\n' && !zl.quote && zl.commt && zl.brace != 0 (the
"If not in a brace this ends the comment AND the RR" block).

Split the buffer back out into two separate buffers to avoid clobbering.

* Replace token slices with arrays in zlexer

* Add a NewRR benchmark

* Move token buffers into zlexer.Next

These don't need to be retained across Next calls and can be stack
allocated inside Next. This drastically reduces memory consumption as
they accounted for nearly half of all the memory used.

name      old alloc/op   new alloc/op   delta
NewRR-12    9.72kB ± 0%    4.98kB ± 0%  -48.72%  (p=0.000 n=10+10)

* Add a ReadRR benchmark

Unlike NewRR, this will use an io.Reader that does not implement any
methods aside from Read. In particular it does not implement
io.ByteReader.

* Avoid using a bufio.Reader for io.ByteReader readers

At the same time use a smaller buffer size of 1KiB rather than the
bufio.NewReader default of 4KiB.

name       old time/op    new time/op    delta
NewRR-12     11.0µs ± 3%     9.5µs ± 2%  -13.77%  (p=0.000 n=9+10)
ReadRR-12    11.2µs ±16%     9.8µs ± 1%  -13.03%  (p=0.000 n=10+10)

name       old alloc/op   new alloc/op   delta
NewRR-12     4.98kB ± 0%    0.81kB ± 0%  -83.79%  (p=0.000 n=10+10)
ReadRR-12    4.87kB ± 0%    1.82kB ± 0%  -62.73%  (p=0.000 n=10+10)

name       old allocs/op  new allocs/op  delta
NewRR-12       19.0 ± 0%      17.0 ± 0%  -10.53%  (p=0.000 n=10+10)
ReadRR-12      19.0 ± 0%      19.0 ± 0%     ~     (all equal)

* Surface any remaining comment from zlexer.Next

* Improve comment handling in zlexer.Next

This both fixes a regression where comments could be lost under certain
circumstances and now emits comments that occur within braces.

* Remove outdated comment from zlexer.Next and klexer.Next

* Delay converting LF to space in braced comment

* Fixup TestParseZoneComments

* Remove tokenUpper field from lex

Not computing this for every token, and instead computing it only
when needed, is a substantial performance improvement.

name       old time/op    new time/op    delta
NewRR-12     9.56µs ± 0%    6.30µs ± 1%  -34.08%  (p=0.000 n=9+10)
ReadRR-12    9.93µs ± 1%    6.67µs ± 1%  -32.77%  (p=0.000 n=10+10)

name       old alloc/op   new alloc/op   delta
NewRR-12       824B ± 0%      808B ± 0%   -1.94%  (p=0.000 n=10+10)
ReadRR-12    1.83kB ± 0%    1.82kB ± 0%   -0.87%  (p=0.000 n=10+10)

name       old allocs/op  new allocs/op  delta
NewRR-12       17.0 ± 0%      17.0 ± 0%     ~     (all equal)
ReadRR-12      19.0 ± 0%      19.0 ± 0%     ~     (all equal)

* Update ParseZone documentation to match comment changes

The zlexer code was changed to return comments more often, so update the
ParseZone documentation to match.
2018-10-15 17:42:31 +10:30
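The core of the refactor above is replacing a goroutine that feeds tokens down a channel with a struct that keeps the lexer state and exposes a channel-like `Next` method. A schematic of that shape (deliberately simplified, not the library's zlexer):

~~~ go
package main

import "fmt"

// lexer holds the scanning state that the old implementation kept inside a
// goroutine; Next hands out one token at a time with a (value, ok) result,
// mirroring the "v, ok := <-ch" pattern without any goroutine or channel.
type lexer struct {
	input string
	pos   int
}

func (l *lexer) Next() (string, bool) {
	for l.pos < len(l.input) && l.input[l.pos] == ' ' {
		l.pos++
	}
	if l.pos >= len(l.input) {
		return "", false
	}
	start := l.pos
	for l.pos < len(l.input) && l.input[l.pos] != ' ' {
		l.pos++
	}
	return l.input[start:l.pos], true
}

func main() {
	l := &lexer{input: "example.org. 3600 IN A 192.0.2.1"}
	for tok, ok := l.Next(); ok; tok, ok = l.Next() {
		fmt.Println(tok)
	}
}
~~~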
tr3e 501e858f67 Fix issue #742 (#745)
* Fix error comparison in SetTA

* Add testcase TestParseTA()
2018-09-22 18:36:01 +01:00
Tom Thorogood c9b812d1d9 Remove redundant parenthesis (#727)
* Remove redundant parenthesis

These were caught with:
    gofmt -r '(a) -> a' -w *.go

This commit only includes the changes where the formatting makes the
ordering of operations clear.

* Remove more redundant parenthesis

These were caught with:
    gofmt -r '(a) -> a' -w *.go

This commit includes the remaining changes where the formatting does not
make the ordering of operations as clear as the previous commit.
2018-08-16 17:05:27 +01:00
Tom Thorogood 77d95a53d0 Handle empty NSEC3 salt in scanner (#677)
Fixes #676
2018-05-14 20:07:52 +01:00
Miek Gieben e508eecd67
Some linter fixes from Go report card. (#601)
Implement small linter fixes.
2017-12-06 11:31:56 +00:00
spsholleman 052efef004 Add support for TKEY RRs (#567)
* Add support for TKEY RRs

- make sure Key and Data fields are variable-length hex fields
- check in output from 'go generate'
- add a TKEY specific test to ensure this stays working

* go format changes

* address review comments

* add ability to parse TKEY via string

* handle review comments - change TKEY string output
2017-11-28 07:48:02 +00:00
Miek Gieben 2ae4695cc7
Implement CSYNC (#585)
Implement the CSYNC record.

Fixes #290

Long overdue, let's add this record. In a similar vein to NSEC/NSEC3, we
need to implement len() ourselves. Presentation format parsing and
tests are done as well.

This is CoreDNS running with CSYNC support; `dig` doesn't support this
record at the moment, so:

~~~
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40323
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;csync.example.org.		IN	TYPE62

;; ANSWER SECTION:
csync.example.org.	10	IN	TYPE62	\# 12 000335240042000460000008

;; AUTHORITY SECTION:
example.org.		10	IN	NS	a.iana-servers.net.
example.org.		10	IN	NS	b.iana-servers.net.
~~~
2017-11-25 08:19:06 +00:00
Miek Gieben acff9ce3fa
Fuzzing the text parser: a few fixes (#579)
I'm fuzzing the text parser and that turned up these two. Will do
further fuzzing with these fixes in.
2017-11-20 18:07:37 +00:00
Miek Gieben b38dc3dcb7
Cleanup: gofmt -w -s *.go (#548)
Some renames of internal names to make go lint happier.
2017-11-03 16:15:35 +00:00
Richard Gibson eccf8bbe83 Correctly parse omitted TTLs and relative domains (#513)
* Fix $TTL handling
* Error when there is no TTL for an RR
* Fix relative name handling
* Error when a relative name is used without an origin (cf. https://tools.ietf.org/html/rfc1035#section-5.1 )

Fixes #484
2017-09-26 11:15:37 -04:00
Miek Gieben 767422ac12 Add AVC record (#480)
See
https://www.iana.org/assignments/dns-parameters/AVC/avc-completed-template
for the template; this is a new record that is (again) a mirror of the TXT
record. For lack of a better name, the rdata field is named Txt, as we do in SPF
and TXT.
2017-03-29 22:17:13 +02:00
Jon Nappi c862b7e359 Replace Atoi with ParseUint where appropriate (#470)
* replace Atoi with ParseUint where appropriate

* more Atoi replacements
2017-03-10 21:57:03 +00:00
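The motivation for the swap above is that `strconv.ParseUint` both rejects negative input and bounds the result to the field's bit width, neither of which `strconv.Atoi` does. A quick comparison (illustrative only):

~~~ go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Atoi parses a signed int with no range control, so a negative or
	// oversized value for, say, a 16-bit RR field needs extra checks later.
	i, _ := strconv.Atoi("-1") // succeeds: -1

	// ParseUint rejects negatives outright and enforces the bit size.
	_, err := strconv.ParseUint("-1", 10, 16)  // error
	u, _ := strconv.ParseUint("65535", 10, 16) // largest value that fits uint16

	fmt.Println(i, err != nil, u) // -1 true 65535
}
~~~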
Richard Gibson f4d2b08694 For consistency with other types, allow empty UINFO RDATA (#424)
Ref https://github.com/miekg/dns/pull/421#discussion_r90610949
2016-12-02 22:38:56 +00:00
Richard Gibson 21314e1838 Fix TXT RDATA parsing (#421)
* Test for proper parsing of whitespace-separated (TXT) character-strings

* Properly parse whitespace-separated (TXT) character-strings

* Remove non-RFC treatment of backslash sequences in character-strings

Fixes gh-420

* For tests, remove non-RFC treatment of backslashes in domain names
2016-12-02 09:34:49 +00:00
Miek Gieben 46df8c9462 Fix for miekg/dns issue #289: support the SMIMEA record (#410)
1) Refactoring of tlsa.go
   - moved the routine that creates the certificate rdata to its own Go file,
     as this is shared between TLSA and SMIMEA records
2) Added support for creating an SMIMEA domain name
3) Developed in accordance with draft-ietf-dane-smime-12 RFC

Miek,

Submitting for your review. Happy to make any recommended changes or
address omissions.

Lightly tested against our internal DNS service which hosts DANE
SMIMEA records for our email certificates.

Parse tests are added.
2016-10-17 18:09:52 +01:00
Miek Gieben dbffa4b057 Kill all reflection when packing/unpacking RR (#372)
Update the size-xxx-member tags to point to another field in the struct
that should be used for the length in that field. Fix NSEC3/HIP and TSIG
to use this and generate the correct pack/unpack functions for them.

Remove IPSECKEY from the lib and handle it as an unknown record - it is
such a horrible RR that it needed kludges before - now we just handle it as an
unknown RR.

All types now use generated pack and unpack functions. The blacklist is
removed.
2016-06-12 18:31:50 +01:00
Miek Gieben 799de7044d Remove WKS support
Support for WKS was incomplete, i.e. the len() method was incorrect.
Remove support for the record and handle it as an unknown one.

Fixes #361
2016-06-05 08:23:44 +01:00
Miek Gieben 475ab80867 Remove (most) reflection
Remove the use of reflection when packing and unpacking, instead
generate all the pack and unpack functions using msg_generate.
This will generate zmsg.go which in turn calls the helper functions from
msg_helper.go.

This increases the speed by about 30% while cutting back on memory
usage. Not all RRs are using it yet, but that will be rectified in an upcoming
PR.

Most of the speed increase is in the header/question section parsing.
These functions are *not* generated, but are straightforward enough. The
implementation can be found in msg.go.

The new code has been fuzzed by go-fuzz, which turned up some issues.

All files that started with 'z' and are not autogenerated were renamed,
e.g. zscan.go is now scan.go.

Reflection is still used; in subsequent PRs it will be removed entirely.
2016-06-03 12:45:22 +01:00