Compare commits

...

33 Commits

Author SHA1 Message Date
Samantha Frank 8aafb31347
ratelimits: Small cleanup in transaction.go (#8275) 2025-06-26 17:43:02 -04:00
Aaron Gable 30eac83730
RFC 9773: Update ARI URL (#8274)
https://www.rfc-editor.org/rfc/rfc9773.html is no longer a draft; it
deserves a better-looking path!
2025-06-26 08:50:44 -07:00
Aaron Gable 4e74a25582
Restore TestAccountEmailError (#8273)
This integration test was removed in the early versions of
https://github.com/letsencrypt/boulder/pull/8245, because that PR had
removed all validation of contact addresses. However, later iterations
of that PR restored (most) contact validation, so this PR restores (most
of) the TestAccountEmailError integration test.
2025-06-25 16:35:52 -07:00
James Renken 21d022840b
Really fix GHA for IANA registries (#8271) 2025-06-25 15:58:44 -07:00
Aaron Gable e110ec9a03
Confine contact addresses to the WFE (#8245)
Change the WFE to stop populating the Contact field of the
NewRegistration requests it sends to the RA. Similarly change the WFE to
ignore the Contact field of any update-account requests it receives,
thereby removing all calls to the RA's UpdateRegistrationContact method.

Hoist the RA's contact validation logic into the WFE, so that we can
still return errors to clients which are presenting grossly malformed
contact fields, and have a first layer of protection against trying to
send malformed addresses to email-exporter.

A follow-up change (after a deploy cycle) will remove the deprecated RA
and SA methods.

Part of https://github.com/letsencrypt/boulder/issues/8199
2025-06-25 15:51:44 -07:00
James Renken ea23894910
Fix GHA for IANA registries (#8270)
Add `org: read` to the IANA GHA token's scope, so it can ask
boulder-developers for review.

Add a line break formatting change from IANA.
2025-06-25 13:30:47 -07:00
James Renken 9308392adf
iana: Embed & parse reserved IP registries from primary source (#8249)
Move `policy.IsReservedIP` to `iana.IsReservedAddr`.

Move `policy.IsReservedPrefix` to `iana.IsReservedPrefix`.

Embed & parse IANA's special-purpose address registries for IPv4 and
IPv6 in their original CSV format.

Fixes #8080
2025-06-25 12:05:25 -07:00
dependabot[bot] 901f2dba7c
build(deps): bump the aws group with 4 updates (#8263)
Bumps the aws group with 4 updates:
[github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2),
[github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2),
[github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2)
and [github.com/aws/smithy-go](https://github.com/aws/smithy-go).

Updates `github.com/aws/aws-sdk-go-v2` from 1.36.4 to 1.36.5
Updates `github.com/aws/aws-sdk-go-v2/config` from 1.29.16 to 1.29.17
Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.80.2 to 1.80.3
Updates `github.com/aws/smithy-go` from 1.22.2 to 1.22.4

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-25 14:08:41 -04:00
James Renken a29f2f37d6
va: Check for reserved IP addresses at dialer creation (#8257)
Fixes #8041
2025-06-25 10:09:47 -07:00
Aaron Gable c576a200d0
Remove id-kp-clientAuth from intermediate ceremony (#8265)
Fixes https://github.com/letsencrypt/boulder/issues/8264
2025-06-24 16:19:31 -07:00
Matthew McPherrin 5ddd5acf99
Print key hash as hex in admin tool. (#8266)
The ProtoText printing of this structure renders the binary string as
escaped UTF-8 text, which is essentially gibberish for my purposes.


Co-authored-by: Aaron Gable <aaron@letsencrypt.org>
2025-06-23 17:36:06 -07:00
Jacob Hoffman-Andrews cd02caea99
Add verify-release-ancestry.sh (#8268)
And run it from the release workflow.
2025-06-23 17:22:47 -07:00
Samantha Frank ddc4c8683b
email-exporter: Don't waste limited attempts on cached entries (#8262)
Currently, we check the cache only immediately before attempting to send
an email address. However, we only reach that point if the rate limiter
(used to respect the daily API quota) permits it. As a result, around
40% of sends are wasted on email addresses that are ultimately skipped
due to cache hits.

Replace the pre-send cache `Seen` check with an atomic `StoreIfAbsent`
executed before the `limiter.Wait()` so that limiter tokens are consumed
only for email addresses that actually need sending. Skip the
`limiter.Wait()` on cache hits, remove cache entries only when a send
fails, and increment metrics only on successful sends.
2025-06-23 14:55:53 -07:00
Jacob Hoffman-Andrews f087d280be
Add a GitHub Action that only runs on main or hotfix (#8267)
It can be used by tag protection rules to ensure that tags may only be
pushed if their corresponding commit was first pushed to main or a
hotfix branch.
2025-06-23 12:16:01 -07:00
Samantha Frank 1bfc3186c8
grpc: Enable client-side health_v1 health checking (#8254)
- Configure all gRPC clients to check the overall serving status of each
endpoint via the `grpc_health_v1` service.
- Configure all gRPC servers to expose the `grpc_health_v1` service to
any client permitted to access one of the server’s services.
- Modify long-running, deep health checks to set and transition the
overall (empty string) health status of the gRPC server in addition to
the specific service they were configured for.

Fixes #8227
2025-06-18 10:37:20 -04:00
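For context, grpc-go enables client-side health checking through the channel's service config, where an empty `serviceName` corresponds to the overall (empty string) serving status mentioned above. A minimal service-config fragment looks roughly like:

```json
{
  "healthCheckConfig": {
    "serviceName": ""
  }
}
```

In grpc-go this also assumes the client blank-imports `google.golang.org/grpc/health` to register the health-check function, and uses a load-balancing policy such as `round_robin`, since `pick_first` has historically not supported client-side health checking.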
Aaron Gable b6c5ee69ed
Make ARI error messages clearer (#8260)
Fixes https://github.com/letsencrypt/boulder/issues/8259
2025-06-17 16:55:36 -07:00
Jacob Hoffman-Andrews 5ad5f85cfb
bdns: deprecate DOH feature flag (#8234)
Since the bdns unittests used a local DNS server via TCP, modify that
server to instead speak DoH.

Fixes #8120
2025-06-17 14:45:52 -07:00
Samantha Frank c97b312e65
integration: Move test_order_finalize_early to the Go tests (#8258)
Hyrum’s Law strikes again: our Python integration tests were implicitly
relying on behavior that was changed upstream in Certbot’s ACME client
(see https://github.com/certbot/certbot/pull/10239). To ensure continued
coverage, replicate this test in our Go integration test suite.
2025-06-17 17:19:34 -04:00
Aaron Gable aa3c9f0eee
Drop contact column from registrations table (#8201)
Drop the contact column from the Registrations table.

Part of https://github.com/letsencrypt/boulder/issues/8199
2025-06-16 14:58:53 -07:00
James Renken 61d2558b29
bad-key-revoker: Fix log message formatting (#8252)
Fixes #8251
2025-06-16 11:30:14 -07:00
Aaron Gable c68e27ea6f
Stop overwriting contact column upon account deactivation (#8248)
This fixes an oversight in
https://github.com/letsencrypt/boulder/pull/8200.

Part of https://github.com/letsencrypt/boulder/issues/8199
2025-06-16 09:29:57 -07:00
Aaron Gable fbf0c06427
Delete admin update-email subcommand (#8246)
Part of https://github.com/letsencrypt/boulder/issues/8199
2025-06-16 09:29:44 -07:00
Aaron Gable 24c385c1cc
Delete contact-auditor (#8244)
The contact-auditor's purpose was to scan the contact emails stored in
our database and identify invalid addresses which could be removed. As
of https://github.com/letsencrypt/boulder/pull/8201 we no longer have
any contacts in the database, so this tool no longer has a purpose.

Part of https://github.com/letsencrypt/boulder/issues/8199
2025-06-16 09:29:33 -07:00
dependabot[bot] 6872dfc63a
build(deps): bump the aws group with 4 updates (#8242)
Bumps the aws group with 4 updates:
[github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2),
[github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2),
[github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2)
and [github.com/aws/smithy-go](https://github.com/aws/smithy-go).

Updates `github.com/aws/aws-sdk-go-v2` from 1.32.2 to 1.36.4
Updates `github.com/aws/aws-sdk-go-v2/config` from 1.27.43 to 1.29.16
Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.65.3 to 1.80.2
Updates `github.com/aws/smithy-go` from 1.22.0 to 1.22.2
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-13 22:40:08 -07:00
Aaron Gable 1ffa95d53d
Stop interacting with registration.contact column (#8200)
Deprecate the IgnoreAccountContacts feature flag. This causes the SA to
never query the contact column when reading registrations from the
database, and to never write a value for the contact column when
creating a new registration.

This requires updating or disabling several tests. These tests could be
deleted now, but I felt it was more appropriate for them to be fully
deleted when their corresponding services (e.g. expiration-mailer) are
also deleted.

Fixes https://github.com/letsencrypt/boulder/issues/8176
2025-06-13 14:40:19 -07:00
James Renken 7214b285e4
identifier: Remove helper funcs from PB identifiers migration (#8236)
Remove `ToDNSSlice`, `FromProtoWithDefault`, and
`FromProtoSliceWithDefault` now that all their callers are gone. All
protobufs but one have migrated from DnsNames to Identifiers.

Remove TODOs for the exception, `ValidationRecord`, where an identifier
type isn't appropriate and it really only needs a string.

Rename `corepb.ValidationRecord.DnsName` to `Hostname` for clarity, to
match the corresponding PB's field name.

Improve various comments and docs re: IP address identifiers.

Depends on #8221 (which removes the last callers)
Fixes #8023
2025-06-13 12:55:32 -07:00
Aaron Gable b9a681dbcc
Delete notify-mailer, expiration-mailer, and id-exporter (#8230)
These services existed solely for the purpose of sending emails, which
we no longer do.

Part of https://github.com/letsencrypt/boulder/issues/8199
2025-06-12 15:45:04 -07:00
James Renken 0a095e2f6b
policy, ra: Remove default allows for DNS identifiers (#8233)
Fixes #8184
2025-06-12 15:25:23 -07:00
James Renken 48d5ad3c19
ratelimits: Add IP address identifier support (#8221)
Change most functions in `ratelimits` to use full ACMEIdentifier(s) as
arguments, instead of using their values as strings. This makes the
plumbing from other packages more consistent, and allows us to:

Rename `FQDNsToETLDsPlusOne` to `coveringIdentifiers` and handle IP
identifiers, parsing IPv6 addresses into their covering /64 prefixes for
CertificatesPerDomain[PerAccount] bucket keys.

Port improved IP/CIDR validation logic to NewRegistrationsPerIPAddress &
PerIPv6Range.

Rename `domain` parts of bucket keys to either `identValue` or
`domainOrCIDR`.

Rename other internal functions to clarify that they now handle
identifier values, not just domains.

Add the new reserved IPv6 address range from RFC 9780.

For deployability, don't (yet) rename rate limits themselves; and
because it remains the name of the database table, preserve the term
`fqdnSets`.

Fixes #8223
Part of #7311
2025-06-12 11:47:32 -07:00
Aaron Gable 1f36d654ba
Update CI to mariadb 10.6.22 (#8239)
Fixes https://github.com/letsencrypt/boulder/issues/8238
2025-06-11 15:19:09 -07:00
Aaron Gable 44f75d6abd
Remove mail functionality from bad-key-revoker (#8229)
Simplify the main logic loop to revoke certs as soon as they're
identified, rather than jumping through hoops to identify and
deduplicate the associated accounts and emails. Make the Mailer portion
of the config optional for deployability.

Part of https://github.com/letsencrypt/boulder/issues/8199
2025-06-09 14:36:19 -07:00
Aaron Gable d4e706eeb8
Update CI to go1.24.4 (#8232)
Go 1.24.4 is a security release containing fixes to net/http,
os.OpenFile, and x509.Certificate.Verify, all of which we use. We appear
to be unaffected by the specific vulnerabilities described, however. See
the announcement here:
https://groups.google.com/g/golang-announce/c/ufZ8WpEsA3A
2025-06-09 09:30:33 -07:00
dependabot[bot] 426482781c
build(deps): bump the otel group (#7968)
Update:
- https://github.com/open-telemetry/opentelemetry-go-contrib from 0.55.0 to 0.61.0
- https://github.com/open-telemetry/opentelemetry-go from 1.30.0 to 1.36.0
- several golang.org/x/ packages
- their transitive dependencies
2025-06-06 17:22:48 -07:00
541 changed files with 34502 additions and 28243 deletions

View File

@@ -36,7 +36,7 @@ jobs:
matrix:
# Add additional docker image tags here and all tests will be run with the additional image.
BOULDER_TOOLS_TAG:
- go1.24.1_2025-06-03
- go1.24.4_2025-06-06
# Tests command definitions. Use the entire "docker compose" command you want to run.
tests:
# Run ./test.sh --help for a description of each of the flags.

View File

@@ -0,0 +1,53 @@
name: Check for IANA special-purpose address registry updates
on:
schedule:
- cron: "20 16 * * *"
workflow_dispatch:
jobs:
check-iana-registries:
runs-on: ubuntu-latest
permissions:
contents: write
pull-requests: write
steps:
- name: Checkout iana/data from main branch
uses: actions/checkout@v4
with:
sparse-checkout: iana/data
# If the branch already exists, this will fail, which will remind us about
# the outstanding PR.
- name: Create an iana-registries-gha branch
run: |
git checkout --track origin/main -b iana-registries-gha
- name: Retrieve the IANA special-purpose address registries
run: |
IANA_IPV4="https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry-1.csv"
IANA_IPV6="https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry-1.csv"
REPO_IPV4="iana/data/iana-ipv4-special-registry-1.csv"
REPO_IPV6="iana/data/iana-ipv6-special-registry-1.csv"
curl --fail --location --show-error --silent --output "${REPO_IPV4}" "${IANA_IPV4}"
curl --fail --location --show-error --silent --output "${REPO_IPV6}" "${IANA_IPV6}"
- name: Create a commit and pull request
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
shell: bash
# `git diff --exit-code` returns an error code if there are any changes.
run: |
if ! git diff --exit-code; then
git add iana/data/
git config user.name "Irwin the IANA Bot"
git commit \
--message "Update IANA special-purpose address registries"
git push origin HEAD
gh pr create --fill
fi

View File

@@ -0,0 +1,17 @@
# This GitHub Action runs only on pushes to main or a hotfix branch. It can
# be used by tag protection rules to ensure that tags may only be pushed if
# their corresponding commit was first pushed to one of those branches.
name: Merged to main (or hotfix)
on:
push:
branches:
- main
- release-branch-*
jobs:
merged-to-main:
name: Merged to main (or hotfix)
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false

View File

@@ -15,7 +15,7 @@ jobs:
fail-fast: false
matrix:
GO_VERSION:
- "1.24.1"
- "1.24.4"
runs-on: ubuntu-24.04
permissions:
contents: write
@@ -24,6 +24,10 @@ jobs:
- uses: actions/checkout@v4
with:
persist-credentials: false
fetch-depth: '0' # Needed for verify-release-ancestry.sh to see origin/main
- name: Verify release ancestry
run: ./tools/verify-release-ancestry.sh "$GITHUB_SHA"
- name: Build .deb
id: build

View File

@@ -16,7 +16,7 @@ jobs:
fail-fast: false
matrix:
GO_VERSION:
- "1.24.1"
- "1.24.4"
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4

View File

@@ -3,10 +3,10 @@
[![Build Status](https://github.com/letsencrypt/boulder/actions/workflows/boulder-ci.yml/badge.svg?branch=main)](https://github.com/letsencrypt/boulder/actions/workflows/boulder-ci.yml?query=branch%3Amain)
This is an implementation of an ACME-based CA. The [ACME
protocol](https://github.com/ietf-wg-acme/acme/) allows the CA to
automatically verify that an applicant for a certificate actually controls an
identifier, and allows domain holders to issue and revoke certificates for
their domains. Boulder is the software that runs [Let's
protocol](https://github.com/ietf-wg-acme/acme/) allows the CA to automatically
verify that an applicant for a certificate actually controls an identifier, and
allows subscribers to issue and revoke certificates for the identifiers they
control. Boulder is the software that runs [Let's
Encrypt](https://letsencrypt.org).
## Contents

View File

@@ -21,10 +21,9 @@ import (
"github.com/miekg/dns"
"github.com/prometheus/client_golang/prometheus"
"github.com/letsencrypt/boulder/features"
"github.com/letsencrypt/boulder/iana"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/metrics"
"github.com/letsencrypt/boulder/policy"
)
// ResolverAddrs contains DNS resolver(s) that were chosen to perform a
@@ -77,30 +76,23 @@ func New(
tlsConfig *tls.Config,
) Client {
var client exchanger
if features.Get().DOH {
// Clone the default transport because it comes with various settings
// that we like, which are different from the zero value of an
// `http.Transport`.
transport := http.DefaultTransport.(*http.Transport).Clone()
transport.TLSClientConfig = tlsConfig
// The default transport already sets this field, but it isn't
// documented that it will always be set. Set it again to be sure,
// because Unbound will reject non-HTTP/2 DoH requests.
transport.ForceAttemptHTTP2 = true
client = &dohExchanger{
clk: clk,
hc: http.Client{
Timeout: readTimeout,
Transport: transport,
},
userAgent: userAgent,
}
} else {
client = &dns.Client{
// Set timeout for underlying net.Conn
ReadTimeout: readTimeout,
Net: "udp",
}
// Clone the default transport because it comes with various settings
// that we like, which are different from the zero value of an
// `http.Transport`.
transport := http.DefaultTransport.(*http.Transport).Clone()
transport.TLSClientConfig = tlsConfig
// The default transport already sets this field, but it isn't
// documented that it will always be set. Set it again to be sure,
// because Unbound will reject non-HTTP/2 DoH requests.
transport.ForceAttemptHTTP2 = true
client = &dohExchanger{
clk: clk,
hc: http.Client{
Timeout: readTimeout,
Transport: transport,
},
userAgent: userAgent,
}
queryTime := prometheus.NewHistogramVec(
@@ -281,17 +273,10 @@ func (dnsClient *impl) exchangeOne(ctx context.Context, hostname string, qtype u
case r := <-ch:
if r.err != nil {
var isRetryable bool
if features.Get().DOH {
// According to the http package documentation, retryable
// errors emitted by the http package are of type *url.Error.
var urlErr *url.Error
isRetryable = errors.As(r.err, &urlErr) && urlErr.Temporary()
} else {
// According to the net package documentation, retryable
// errors emitted by the net package are of type *net.OpError.
var opErr *net.OpError
isRetryable = errors.As(r.err, &opErr) && opErr.Temporary()
}
// According to the http package documentation, retryable
// errors emitted by the http package are of type *url.Error.
var urlErr *url.Error
isRetryable = errors.As(r.err, &urlErr) && urlErr.Temporary()
hasRetriesLeft := tries < dnsClient.maxTries
if isRetryable && hasRetriesLeft {
tries++
@@ -411,7 +396,7 @@ func (dnsClient *impl) LookupHost(ctx context.Context, hostname string) ([]netip
a, ok := answer.(*dns.A)
if ok && a.A.To4() != nil {
netIP, ok := netip.AddrFromSlice(a.A)
if ok && (policy.IsReservedIP(netIP) == nil || dnsClient.allowRestrictedAddresses) {
if ok && (iana.IsReservedAddr(netIP) == nil || dnsClient.allowRestrictedAddresses) {
addrsA = append(addrsA, netIP)
}
}
@@ -429,7 +414,7 @@ func (dnsClient *impl) LookupHost(ctx context.Context, hostname string) ([]netip
aaaa, ok := answer.(*dns.AAAA)
if ok && aaaa.AAAA.To16() != nil {
netIP, ok := netip.AddrFromSlice(aaaa.AAAA)
if ok && (policy.IsReservedIP(netIP) == nil || dnsClient.allowRestrictedAddresses) {
if ok && (iana.IsReservedAddr(netIP) == nil || dnsClient.allowRestrictedAddresses) {
addrsAAAA = append(addrsAAAA, netIP)
}
}

View File

@@ -2,10 +2,14 @@ package bdns
import (
"context"
"crypto/tls"
"crypto/x509"
"errors"
"fmt"
"io"
"log"
"net"
"net/http"
"net/netip"
"net/url"
"os"
@@ -20,7 +24,6 @@ import (
"github.com/miekg/dns"
"github.com/prometheus/client_golang/prometheus"
"github.com/letsencrypt/boulder/features"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/metrics"
"github.com/letsencrypt/boulder/test"
@@ -28,7 +31,30 @@ import (
const dnsLoopbackAddr = "127.0.0.1:4053"
func mockDNSQuery(w dns.ResponseWriter, r *dns.Msg) {
func mockDNSQuery(w http.ResponseWriter, httpReq *http.Request) {
if httpReq.Header.Get("Content-Type") != "application/dns-message" {
w.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(w, "client didn't send Content-Type: application/dns-message")
}
if httpReq.Header.Get("Accept") != "application/dns-message" {
w.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(w, "client didn't accept Content-Type: application/dns-message")
}
requestBody, err := io.ReadAll(httpReq.Body)
if err != nil {
w.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(w, "reading body: %s", err)
}
httpReq.Body.Close()
r := new(dns.Msg)
err = r.Unpack(requestBody)
if err != nil {
w.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(w, "unpacking request: %s", err)
}
m := new(dns.Msg)
m.SetReply(r)
m.Compress = false
@@ -174,45 +200,37 @@ func mockDNSQuery(w dns.ResponseWriter, r *dns.Msg) {
}
}
err := w.WriteMsg(m)
body, err := m.Pack()
if err != nil {
fmt.Fprintf(os.Stderr, "packing reply: %s\n", err)
}
w.Header().Set("Content-Type", "application/dns-message")
_, err = w.Write(body)
if err != nil {
panic(err) // running tests, so panic is OK
}
}
func serveLoopResolver(stopChan chan bool) {
dns.HandleFunc(".", mockDNSQuery)
tcpServer := &dns.Server{
m := http.NewServeMux()
m.HandleFunc("/dns-query", mockDNSQuery)
httpServer := &http.Server{
Addr: dnsLoopbackAddr,
Net: "tcp",
ReadTimeout: time.Second,
WriteTimeout: time.Second,
}
udpServer := &dns.Server{
Addr: dnsLoopbackAddr,
Net: "udp",
Handler: m,
ReadTimeout: time.Second,
WriteTimeout: time.Second,
}
go func() {
err := tcpServer.ListenAndServe()
if err != nil {
fmt.Println(err)
}
}()
go func() {
err := udpServer.ListenAndServe()
cert := "../test/certs/ipki/localhost/cert.pem"
key := "../test/certs/ipki/localhost/key.pem"
err := httpServer.ListenAndServeTLS(cert, key)
if err != nil {
fmt.Println(err)
}
}()
go func() {
<-stopChan
err := tcpServer.Shutdown()
if err != nil {
log.Fatal(err)
}
err = udpServer.Shutdown()
err := httpServer.Shutdown(context.Background())
if err != nil {
log.Fatal(err)
}
@@ -240,7 +258,21 @@ func pollServer() {
}
}
// tlsConfig is used for the TLS config of client instances that talk to the
// DoH server set up in TestMain.
var tlsConfig *tls.Config
func TestMain(m *testing.M) {
root, err := os.ReadFile("../test/certs/ipki/minica.pem")
if err != nil {
log.Fatal(err)
}
pool := x509.NewCertPool()
pool.AppendCertsFromPEM(root)
tlsConfig = &tls.Config{
RootCAs: pool,
}
stop := make(chan bool, 1)
serveLoopResolver(stop)
pollServer()
@@ -253,7 +285,7 @@ func TestDNSNoServers(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{})
test.AssertNotError(t, err, "Got error creating StaticProvider")
obj := New(time.Hour, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), nil)
obj := New(time.Hour, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), tlsConfig)
_, resolvers, err := obj.LookupHost(context.Background(), "letsencrypt.org")
test.AssertEquals(t, len(resolvers), 0)
@@ -270,7 +302,7 @@ func TestDNSOneServer(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), nil)
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), tlsConfig)
_, resolvers, err := obj.LookupHost(context.Background(), "cps.letsencrypt.org")
test.AssertEquals(t, len(resolvers), 2)
@@ -283,7 +315,7 @@ func TestDNSDuplicateServers(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr, dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), nil)
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), tlsConfig)
_, resolvers, err := obj.LookupHost(context.Background(), "cps.letsencrypt.org")
test.AssertEquals(t, len(resolvers), 2)
@@ -296,7 +328,7 @@ func TestDNSServFail(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), nil)
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), tlsConfig)
bad := "servfail.com"
_, _, err = obj.LookupTXT(context.Background(), bad)
@@ -314,7 +346,7 @@ func TestDNSLookupTXT(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), nil)
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), tlsConfig)
a, _, err := obj.LookupTXT(context.Background(), "letsencrypt.org")
t.Logf("A: %v", a)
@@ -332,7 +364,7 @@ func TestDNSLookupHost(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), nil)
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), tlsConfig)
ip, resolvers, err := obj.LookupHost(context.Background(), "servfail.com")
t.Logf("servfail.com - IP: %s, Err: %s", ip, err)
@@ -418,7 +450,7 @@ func TestDNSNXDOMAIN(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), nil)
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), tlsConfig)
hostname := "nxdomain.letsencrypt.org"
_, _, err = obj.LookupHost(context.Background(), hostname)
@@ -434,7 +466,7 @@ func TestDNSLookupCAA(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), nil)
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), tlsConfig)
removeIDExp := regexp.MustCompile(" id: [[:digit:]]+")
caas, resp, resolvers, err := obj.LookupCAA(context.Background(), "bracewel.net")
@@ -513,10 +545,9 @@ func (te *testExchanger) Exchange(m *dns.Msg, a string) (*dns.Msg, time.Duration
}
func TestRetry(t *testing.T) {
isTempErr := &net.OpError{Op: "read", Err: tempError(true)}
nonTempErr := &net.OpError{Op: "read", Err: tempError(false)}
isTempErr := &url.Error{Op: "read", Err: tempError(true)}
nonTempErr := &url.Error{Op: "read", Err: tempError(false)}
servFailError := errors.New("DNS problem: server failure at resolver looking up TXT for example.com")
netError := errors.New("DNS problem: networking error looking up TXT for example.com")
type testCase struct {
name string
maxTries int
@@ -567,7 +598,7 @@ func TestRetry(t *testing.T) {
isTempErr,
},
},
expected: netError,
expected: servFailError,
expectedCount: 3,
metricsAllRetries: 1,
},
@@ -620,7 +651,7 @@ func TestRetry(t *testing.T) {
isTempErr,
},
},
expected: netError,
expected: servFailError,
expectedCount: 3,
metricsAllRetries: 1,
},
@@ -634,7 +665,7 @@ func TestRetry(t *testing.T) {
nonTempErr,
},
},
expected: netError,
expected: servFailError,
expectedCount: 2,
},
}
@@ -644,7 +675,7 @@ func TestRetry(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
testClient := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), tc.maxTries, "", blog.UseMock(), nil)
testClient := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), tc.maxTries, "", blog.UseMock(), tlsConfig)
dr := testClient.(*impl)
dr.dnsClient = tc.te
_, _, err = dr.LookupTXT(context.Background(), "example.com")
@@ -675,7 +706,7 @@ func TestRetry(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
testClient := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 3, "", blog.UseMock(), nil)
testClient := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 3, "", blog.UseMock(), tlsConfig)
dr := testClient.(*impl)
dr.dnsClient = &testExchanger{errs: []error{isTempErr, isTempErr, nil}}
ctx, cancel := context.WithCancel(context.Background())
@@ -754,7 +785,7 @@ func (e *rotateFailureExchanger) Exchange(m *dns.Msg, a string) (*dns.Msg, time.
// If it's a broken server, return a retryable error
if e.brokenAddresses[a] {
isTempErr := &net.OpError{Op: "read", Err: tempError(true)}
isTempErr := &url.Error{Op: "read", Err: tempError(true)}
return nil, 2 * time.Millisecond, isTempErr
}
@@ -776,10 +807,9 @@ func TestRotateServerOnErr(t *testing.T) {
// working server
staticProvider, err := NewStaticProvider(dnsServers)
test.AssertNotError(t, err, "Got error creating StaticProvider")
fmt.Println(staticProvider.servers)
maxTries := 5
client := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), maxTries, "", blog.UseMock(), nil)
client := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), maxTries, "", blog.UseMock(), tlsConfig)
// Configure a mock exchanger that will always return a retryable error for
// servers A and B. This will force server "[2606:4700:4700::1111]:53" to do
@@ -843,13 +873,10 @@ func (dohE *dohAlwaysRetryExchanger) Exchange(m *dns.Msg, a string) (*dns.Msg, t
}
func TestDOHMetric(t *testing.T) {
features.Set(features.Config{DOH: true})
defer features.Reset()
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
testClient := New(time.Second*11, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 0, "", blog.UseMock(), nil)
testClient := New(time.Second*11, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 0, "", blog.UseMock(), tlsConfig)
resolver := testClient.(*impl)
resolver.dnsClient = &dohAlwaysRetryExchanger{err: &url.Error{Op: "read", Err: tempError(true)}}


@@ -33,6 +33,7 @@ import (
berrors "github.com/letsencrypt/boulder/errors"
"github.com/letsencrypt/boulder/features"
"github.com/letsencrypt/boulder/goodkey"
"github.com/letsencrypt/boulder/identifier"
"github.com/letsencrypt/boulder/issuance"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/metrics"
@@ -147,7 +148,7 @@ func setup(t *testing.T) *testCtx {
fc := clock.NewFake()
fc.Add(1 * time.Hour)
pa, err := policy.New(nil, nil, blog.NewMock())
pa, err := policy.New(map[identifier.IdentifierType]bool{"dns": true}, nil, blog.NewMock())
test.AssertNotError(t, err, "Couldn't create PA")
err = pa.LoadHostnamePolicyFile("../test/hostname-policy.yaml")
test.AssertNotError(t, err, "Couldn't set hostname policy")


@@ -32,10 +32,6 @@ type dryRunSAC struct {
}
func (d dryRunSAC) AddBlockedKey(_ context.Context, req *sapb.AddBlockedKeyRequest, _ ...grpc.CallOption) (*emptypb.Empty, error) {
b, err := prototext.Marshal(req)
if err != nil {
return nil, err
}
d.log.Infof("dry-run: %#v", string(b))
d.log.Infof("dry-run: Block SPKI hash %x by %s %s", req.KeyHash, req.Comment, req.Source)
return &emptypb.Empty{}, nil
}


@@ -1,84 +0,0 @@
package main
import (
"context"
"errors"
"flag"
"fmt"
"github.com/letsencrypt/boulder/sa"
)
// subcommandUpdateEmail encapsulates the "admin update-email" command.
//
// Note that this command may be very slow, as the initial query to find the set
// of accounts which have a matching contact email address does not use a
// database index. Therefore, when updating the found accounts, it does not exit
// on failure, preferring to continue and make as much progress as possible.
type subcommandUpdateEmail struct {
address string
clear bool
}
var _ subcommand = (*subcommandUpdateEmail)(nil)
func (s *subcommandUpdateEmail) Desc() string {
return "Change or remove an email address across all accounts"
}
func (s *subcommandUpdateEmail) Flags(flag *flag.FlagSet) {
flag.StringVar(&s.address, "address", "", "Email address to update")
flag.BoolVar(&s.clear, "clear", false, "If set, remove the address")
}
func (s *subcommandUpdateEmail) Run(ctx context.Context, a *admin) error {
if s.address == "" {
return errors.New("the -address flag is required")
}
if s.clear {
return a.clearEmail(ctx, s.address)
}
return errors.New("no action to perform on the given email was specified")
}
func (a *admin) clearEmail(ctx context.Context, address string) error {
a.log.AuditInfof("Scanning database for accounts with email addresses matching %q in order to clear the email addresses.", address)
// We use SQL `CONCAT` rather than interpolating with `+` or `%s` because we want to
// use a `?` placeholder for the email, which prevents SQL injection.
// Since this uses a substring match, it is important
// to subsequently parse the JSON list of addresses and look for exact matches.
// Because this does not use an index, it is very slow.
var regIDs []int64
_, err := a.dbMap.Select(ctx, &regIDs, "SELECT id FROM registrations WHERE contact LIKE CONCAT('%\"mailto:', ?, '\"%')", address)
if err != nil {
return fmt.Errorf("identifying matching accounts: %w", err)
}
a.log.Infof("Found %d registration IDs matching email %q.", len(regIDs), address)
failures := 0
for _, regID := range regIDs {
if a.dryRun {
a.log.Infof("dry-run: remove %q from account %d", address, regID)
continue
}
err := sa.ClearEmail(ctx, a.dbMap, regID, address)
if err != nil {
// Log, but don't fail, because it took a long time to find the relevant registration IDs
// and we don't want to have to redo that work.
a.log.AuditErrf("failed to clear email %q for registration ID %d: %s", address, regID, err)
failures++
} else {
a.log.AuditInfof("cleared email %q for registration ID %d", address, regID)
}
}
if failures > 0 {
return fmt.Errorf("failed to clear email for %d out of %d registration IDs", failures, len(regIDs))
}
return nil
}


@@ -178,6 +178,6 @@ func TestBlockSPKIHash(t *testing.T) {
err = a.blockSPKIHash(context.Background(), keyHash[:], u, "")
test.AssertNotError(t, err, "")
test.AssertEquals(t, len(log.GetAllMatching("Found 0 unexpired certificates")), 1)
test.AssertEquals(t, len(log.GetAllMatching("dry-run:")), 1)
test.AssertEquals(t, len(log.GetAllMatching("dry-run: Block SPKI hash "+hex.EncodeToString(keyHash[:]))), 1)
test.AssertEquals(t, len(msa.blockRequests), 0)
}


@@ -70,7 +70,6 @@ func main() {
subcommands := map[string]subcommand{
"revoke-cert": &subcommandRevokeCert{},
"block-key": &subcommandBlockKey{},
"update-email": &subcommandUpdateEmail{},
"pause-identifier": &subcommandPauseIdentifier{},
"unpause-account": &subcommandUnpauseAccount{},
}


@@ -1,15 +1,10 @@
package notmain
import (
"bytes"
"context"
"crypto/x509"
"flag"
"fmt"
"html/template"
netmail "net/mail"
"os"
"strings"
"time"
"github.com/jmhodges/clock"
@@ -24,7 +19,6 @@ import (
"github.com/letsencrypt/boulder/db"
bgrpc "github.com/letsencrypt/boulder/grpc"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/mail"
rapb "github.com/letsencrypt/boulder/ra/proto"
"github.com/letsencrypt/boulder/sa"
)
@@ -43,10 +37,6 @@ var certsRevoked = prometheus.NewCounter(prometheus.CounterOpts{
Name: "bad_keys_certs_revoked",
Help: "A counter of certificates associated with rows in blockedKeys that have been revoked",
})
var mailErrors = prometheus.NewCounter(prometheus.CounterOpts{
Name: "bad_keys_mail_errors",
Help: "A counter of email send errors",
})
// revoker is an interface used to reduce the scope of a RA gRPC client
// to only the single method we need to use, this makes testing significantly
@@ -60,9 +50,6 @@ type badKeyRevoker struct {
maxRevocations int
serialBatchSize int
raClient revoker
mailer mail.Mailer
emailSubject string
emailTemplate *template.Template
logger blog.Logger
clk clock.Clock
backoffIntervalBase time.Duration
@@ -190,109 +177,27 @@ func (bkr *badKeyRevoker) markRowChecked(ctx context.Context, unchecked unchecke
return err
}
// resolveContacts builds a map of id -> email addresses
func (bkr *badKeyRevoker) resolveContacts(ctx context.Context, ids []int64) (map[int64][]string, error) {
idToEmail := map[int64][]string{}
for _, id := range ids {
var emails struct {
Contact []string
}
err := bkr.dbMap.SelectOne(ctx, &emails, "SELECT contact FROM registrations WHERE id = ?", id)
// revokeCerts revokes all the provided certificates. It uses reason
// keyCompromise and includes a note indicating that they were revoked by
// bad-key-revoker.
func (bkr *badKeyRevoker) revokeCerts(certs []unrevokedCertificate) error {
for _, cert := range certs {
_, err := bkr.raClient.AdministrativelyRevokeCertificate(context.Background(), &rapb.AdministrativelyRevokeCertificateRequest{
Cert: cert.DER,
Serial: cert.Serial,
Code: int64(ocsp.KeyCompromise),
AdminName: "bad-key-revoker",
})
if err != nil {
// ErrNoRows is not acceptable here since there should always be a
// row for the registration, even if there are no contacts
return nil, err
return err
}
if len(emails.Contact) != 0 {
for _, email := range emails.Contact {
idToEmail[id] = append(idToEmail[id], strings.TrimPrefix(email, "mailto:"))
}
} else {
// if the account has no contacts add a placeholder empty contact
// so that we don't skip any certificates
idToEmail[id] = append(idToEmail[id], "")
continue
}
}
return idToEmail, nil
}
var maxSerials = 100
// sendMessage sends a single email to the provided address with the revoked
// serials
func (bkr *badKeyRevoker) sendMessage(addr string, serials []string) error {
conn, err := bkr.mailer.Connect()
if err != nil {
return err
}
defer func() {
_ = conn.Close()
}()
mutSerials := make([]string, len(serials))
copy(mutSerials, serials)
if len(mutSerials) > maxSerials {
more := len(mutSerials) - maxSerials
mutSerials = mutSerials[:maxSerials]
mutSerials = append(mutSerials, fmt.Sprintf("and %d more certificates.", more))
}
message := bytes.NewBuffer(nil)
err = bkr.emailTemplate.Execute(message, mutSerials)
if err != nil {
return err
}
err = conn.SendMail([]string{addr}, bkr.emailSubject, message.String())
if err != nil {
return err
certsRevoked.Inc()
}
return nil
}
// revokeCerts revokes all the certificates associated with a particular key hash and sends
// emails to the users that issued the certificates. Emails are not sent to the user which
// requested revocation of the original certificate which marked the key as compromised.
func (bkr *badKeyRevoker) revokeCerts(revokerEmails []string, emailToCerts map[string][]unrevokedCertificate) error {
revokerEmailsMap := map[string]bool{}
for _, email := range revokerEmails {
revokerEmailsMap[email] = true
}
alreadyRevoked := map[int]bool{}
for email, certs := range emailToCerts {
var revokedSerials []string
for _, cert := range certs {
revokedSerials = append(revokedSerials, cert.Serial)
if alreadyRevoked[cert.ID] {
continue
}
_, err := bkr.raClient.AdministrativelyRevokeCertificate(context.Background(), &rapb.AdministrativelyRevokeCertificateRequest{
Cert: cert.DER,
Serial: cert.Serial,
Code: int64(ocsp.KeyCompromise),
AdminName: "bad-key-revoker",
})
if err != nil {
return err
}
certsRevoked.Inc()
alreadyRevoked[cert.ID] = true
}
// don't send emails to the person who revoked the certificate
if revokerEmailsMap[email] || email == "" {
continue
}
err := bkr.sendMessage(email, revokedSerials)
if err != nil {
mailErrors.Inc()
bkr.logger.Errf("failed to send message: %s", err)
continue
}
}
return nil
}
// invoke processes a single key in the blockedKeys table and returns whether
// there were any rows to process or not.
// invoke exits early and returns true if there is no work to be done.
// Otherwise, it processes a single key in the blockedKeys table and returns false.
func (bkr *badKeyRevoker) invoke(ctx context.Context) (bool, error) {
// Gather a count of rows to be processed.
uncheckedCount, err := bkr.countUncheckedKeys(ctx)
@@ -337,47 +242,14 @@ func (bkr *badKeyRevoker) invoke(ctx context.Context) (bool, error) {
return false, nil
}
// build a map of registration ID -> certificates, and collect a
// list of unique registration IDs
ownedBy := map[int64][]unrevokedCertificate{}
var ids []int64
for _, cert := range unrevokedCerts {
if ownedBy[cert.RegistrationID] == nil {
ids = append(ids, cert.RegistrationID)
}
ownedBy[cert.RegistrationID] = append(ownedBy[cert.RegistrationID], cert)
}
// if the account that revoked the original certificate isn't an owner of any
// extant certificates, still add them to ids so that we can resolve their
// email and avoid sending emails later. If RevokedBy == 0 it was a row
// inserted by admin-revoker with a dummy ID, since there won't be a registration
// to look up, don't bother adding it to ids.
if _, present := ownedBy[unchecked.RevokedBy]; !present && unchecked.RevokedBy != 0 {
ids = append(ids, unchecked.RevokedBy)
}
// get contact addresses for the list of IDs
idToEmails, err := bkr.resolveContacts(ctx, ids)
if err != nil {
return false, err
}
// build a map of email -> certificates, this de-duplicates accounts with
// the same email addresses
emailsToCerts := map[string][]unrevokedCertificate{}
for id, emails := range idToEmails {
for _, email := range emails {
emailsToCerts[email] = append(emailsToCerts[email], ownedBy[id]...)
}
}
var serials []string
for _, cert := range unrevokedCerts {
serials = append(serials, cert.Serial)
}
bkr.logger.AuditInfo(fmt.Sprintf("revoking serials %v for key with hash %s", serials, unchecked.KeyHash))
bkr.logger.AuditInfo(fmt.Sprintf("revoking serials %v for key with hash %x", serials, unchecked.KeyHash))
// revoke each certificate and send emails to their owners
err = bkr.revokeCerts(idToEmails[unchecked.RevokedBy], emailsToCerts)
// revoke each certificate
err = bkr.revokeCerts(unrevokedCerts)
if err != nil {
return false, err
}
@@ -417,15 +289,14 @@ type Config struct {
// or no work to do.
BackoffIntervalMax config.Duration `validate:"-"`
// Deprecated: the bad-key-revoker no longer sends emails; we use ARI.
// TODO(#8199): Remove this config stanza entirely.
Mailer struct {
cmd.SMTPConfig
// Path to a file containing a list of trusted root certificates for use
// during the SMTP connection (as opposed to the gRPC connections).
cmd.SMTPConfig `validate:"-"`
SMTPTrustedRootFile string
From string `validate:"required"`
EmailSubject string `validate:"required"`
EmailTemplate string `validate:"required"`
From string
EmailSubject string
EmailTemplate string
}
}
@@ -457,7 +328,6 @@ func main() {
scope.MustRegister(keysProcessed)
scope.MustRegister(certsRevoked)
scope.MustRegister(mailErrors)
dbMap, err := sa.InitWrappedDb(config.BadKeyRevoker.DB, scope, logger)
cmd.FailOnError(err, "While initializing dbMap")
@@ -469,50 +339,11 @@ func main() {
cmd.FailOnError(err, "Failed to load credentials and create gRPC connection to RA")
rac := rapb.NewRegistrationAuthorityClient(conn)
var smtpRoots *x509.CertPool
if config.BadKeyRevoker.Mailer.SMTPTrustedRootFile != "" {
pem, err := os.ReadFile(config.BadKeyRevoker.Mailer.SMTPTrustedRootFile)
cmd.FailOnError(err, "Loading trusted roots file")
smtpRoots = x509.NewCertPool()
if !smtpRoots.AppendCertsFromPEM(pem) {
cmd.FailOnError(nil, "Failed to parse root certs PEM")
}
}
fromAddress, err := netmail.ParseAddress(config.BadKeyRevoker.Mailer.From)
cmd.FailOnError(err, fmt.Sprintf("Could not parse from address: %s", config.BadKeyRevoker.Mailer.From))
smtpPassword, err := config.BadKeyRevoker.Mailer.PasswordConfig.Pass()
cmd.FailOnError(err, "Failed to load SMTP password")
mailClient := mail.New(
config.BadKeyRevoker.Mailer.Server,
config.BadKeyRevoker.Mailer.Port,
config.BadKeyRevoker.Mailer.Username,
smtpPassword,
smtpRoots,
*fromAddress,
logger,
scope,
1*time.Second, // reconnection base backoff
5*60*time.Second, // reconnection maximum backoff
)
if config.BadKeyRevoker.Mailer.EmailSubject == "" {
cmd.Fail("BadKeyRevoker.Mailer.EmailSubject must be populated")
}
templateBytes, err := os.ReadFile(config.BadKeyRevoker.Mailer.EmailTemplate)
cmd.FailOnError(err, fmt.Sprintf("failed to read email template %q: %s", config.BadKeyRevoker.Mailer.EmailTemplate, err))
emailTemplate, err := template.New("email").Parse(string(templateBytes))
cmd.FailOnError(err, fmt.Sprintf("failed to parse email template %q: %s", config.BadKeyRevoker.Mailer.EmailTemplate, err))
bkr := &badKeyRevoker{
dbMap: dbMap,
maxRevocations: config.BadKeyRevoker.MaximumRevocations,
serialBatchSize: config.BadKeyRevoker.FindCertificatesBatchSize,
raClient: rac,
mailer: mailClient,
emailSubject: config.BadKeyRevoker.Mailer.EmailSubject,
emailTemplate: emailTemplate,
logger: logger,
clk: clk,
backoffIntervalMax: config.BadKeyRevoker.BackoffIntervalMax.Duration,


@@ -4,24 +4,22 @@ import (
"context"
"crypto/rand"
"fmt"
"html/template"
"strings"
"sync"
"testing"
"time"
"github.com/jmhodges/clock"
"github.com/prometheus/client_golang/prometheus"
"google.golang.org/grpc"
"google.golang.org/protobuf/types/known/emptypb"
"github.com/letsencrypt/boulder/core"
"github.com/letsencrypt/boulder/db"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/mocks"
rapb "github.com/letsencrypt/boulder/ra/proto"
"github.com/letsencrypt/boulder/sa"
"github.com/letsencrypt/boulder/test"
"github.com/letsencrypt/boulder/test/vars"
"github.com/prometheus/client_golang/prometheus"
"google.golang.org/grpc"
"google.golang.org/protobuf/types/known/emptypb"
)
func randHash(t *testing.T) []byte {
@@ -81,25 +79,16 @@ func TestSelectUncheckedRows(t *testing.T) {
test.AssertEquals(t, row.RevokedBy, int64(1))
}
func insertRegistration(t *testing.T, dbMap *db.WrappedMap, fc clock.Clock, addrs ...string) int64 {
func insertRegistration(t *testing.T, dbMap *db.WrappedMap, fc clock.Clock) int64 {
t.Helper()
jwkHash := make([]byte, 32)
_, err := rand.Read(jwkHash)
test.AssertNotError(t, err, "failed to read rand")
contactStr := "[]"
if len(addrs) > 0 {
contacts := []string{}
for _, addr := range addrs {
contacts = append(contacts, fmt.Sprintf(`"mailto:%s"`, addr))
}
contactStr = fmt.Sprintf("[%s]", strings.Join(contacts, ","))
}
res, err := dbMap.ExecContext(
context.Background(),
"INSERT INTO registrations (jwk, jwk_sha256, contact, agreement, createdAt, status, LockCol) VALUES (?, ?, ?, ?, ?, ?, ?)",
"INSERT INTO registrations (jwk, jwk_sha256, agreement, createdAt, status, LockCol) VALUES (?, ?, ?, ?, ?, ?)",
[]byte{},
fmt.Sprintf("%x", jwkHash),
contactStr,
"yes",
fc.Now(),
string(core.StatusValid),
@@ -244,47 +233,6 @@ func TestFindUnrevoked(t *testing.T) {
test.AssertEquals(t, err.Error(), fmt.Sprintf("too many certificates to revoke associated with %x: got 1, max 0", hashA))
}
func TestResolveContacts(t *testing.T) {
dbMap, err := sa.DBMapForTest(vars.DBConnSAFullPerms)
test.AssertNotError(t, err, "failed setting up db client")
defer test.ResetBoulderTestDatabase(t)()
fc := clock.NewFake()
bkr := &badKeyRevoker{dbMap: dbMap, clk: fc}
regIDA := insertRegistration(t, dbMap, fc)
regIDB := insertRegistration(t, dbMap, fc, "example.com", "example-2.com")
regIDC := insertRegistration(t, dbMap, fc, "example.com")
regIDD := insertRegistration(t, dbMap, fc, "example-2.com")
idToEmail, err := bkr.resolveContacts(context.Background(), []int64{regIDA, regIDB, regIDC, regIDD})
test.AssertNotError(t, err, "resolveContacts failed")
test.AssertDeepEquals(t, idToEmail, map[int64][]string{
regIDA: {""},
regIDB: {"example.com", "example-2.com"},
regIDC: {"example.com"},
regIDD: {"example-2.com"},
})
}
var testTemplate = template.Must(template.New("testing").Parse("{{range .}}{{.}}\n{{end}}"))
func TestSendMessage(t *testing.T) {
mm := &mocks.Mailer{}
fc := clock.NewFake()
bkr := &badKeyRevoker{mailer: mm, emailSubject: "testing", emailTemplate: testTemplate, clk: fc}
maxSerials = 2
err := bkr.sendMessage("example.com", []string{"a", "b", "c"})
test.AssertNotError(t, err, "sendMessages failed")
test.AssertEquals(t, len(mm.Messages), 1)
test.AssertEquals(t, mm.Messages[0].To, "example.com")
test.AssertEquals(t, mm.Messages[0].Subject, bkr.emailSubject)
test.AssertEquals(t, mm.Messages[0].Body, "a\nb\nand 1 more certificates.\n")
}
type mockRevoker struct {
revoked int
mu sync.Mutex
@@ -303,20 +251,15 @@ func TestRevokeCerts(t *testing.T) {
defer test.ResetBoulderTestDatabase(t)()
fc := clock.NewFake()
mm := &mocks.Mailer{}
mr := &mockRevoker{}
bkr := &badKeyRevoker{dbMap: dbMap, raClient: mr, mailer: mm, emailSubject: "testing", emailTemplate: testTemplate, clk: fc}
bkr := &badKeyRevoker{dbMap: dbMap, raClient: mr, clk: fc}
err = bkr.revokeCerts([]string{"revoker@example.com", "revoker-b@example.com"}, map[string][]unrevokedCertificate{
"revoker@example.com": {{ID: 0, Serial: "ff"}},
"revoker-b@example.com": {{ID: 0, Serial: "ff"}},
"other@example.com": {{ID: 1, Serial: "ee"}},
err = bkr.revokeCerts([]unrevokedCertificate{
{ID: 0, Serial: "ff"},
{ID: 1, Serial: "ee"},
})
test.AssertNotError(t, err, "revokeCerts failed")
test.AssertEquals(t, len(mm.Messages), 1)
test.AssertEquals(t, mm.Messages[0].To, "other@example.com")
test.AssertEquals(t, mm.Messages[0].Subject, bkr.emailSubject)
test.AssertEquals(t, mm.Messages[0].Body, "ee\n")
test.AssertEquals(t, mr.revoked, 2)
}
func TestCertificateAbsent(t *testing.T) {
@@ -329,7 +272,7 @@ func TestCertificateAbsent(t *testing.T) {
fc := clock.NewFake()
// populate DB with all the test data
regIDA := insertRegistration(t, dbMap, fc, "example.com")
regIDA := insertRegistration(t, dbMap, fc)
hashA := randHash(t)
insertBlockedRow(t, dbMap, fc, hashA, regIDA, false)
@@ -349,9 +292,6 @@ func TestCertificateAbsent(t *testing.T) {
maxRevocations: 1,
serialBatchSize: 1,
raClient: &mockRevoker{},
mailer: &mocks.Mailer{},
emailSubject: "testing",
emailTemplate: testTemplate,
logger: blog.NewMock(),
clk: fc,
}
@@ -368,24 +308,20 @@ func TestInvoke(t *testing.T) {
fc := clock.NewFake()
mm := &mocks.Mailer{}
mr := &mockRevoker{}
bkr := &badKeyRevoker{
dbMap: dbMap,
maxRevocations: 10,
serialBatchSize: 1,
raClient: mr,
mailer: mm,
emailSubject: "testing",
emailTemplate: testTemplate,
logger: blog.NewMock(),
clk: fc,
}
// populate DB with all the test data
regIDA := insertRegistration(t, dbMap, fc, "example.com")
regIDB := insertRegistration(t, dbMap, fc, "example.com")
regIDC := insertRegistration(t, dbMap, fc, "other.example.com", "uno.example.com")
regIDA := insertRegistration(t, dbMap, fc)
regIDB := insertRegistration(t, dbMap, fc)
regIDC := insertRegistration(t, dbMap, fc)
regIDD := insertRegistration(t, dbMap, fc)
hashA := randHash(t)
insertBlockedRow(t, dbMap, fc, hashA, regIDC, false)
@@ -398,8 +334,6 @@ func TestInvoke(t *testing.T) {
test.AssertNotError(t, err, "invoke failed")
test.AssertEquals(t, noWork, false)
test.AssertEquals(t, mr.revoked, 4)
test.AssertEquals(t, len(mm.Messages), 1)
test.AssertEquals(t, mm.Messages[0].To, "example.com")
test.AssertMetricWithLabelsEquals(t, keysToProcess, prometheus.Labels{}, 1)
var checked struct {
@@ -440,23 +374,19 @@ func TestInvokeRevokerHasNoExtantCerts(t *testing.T) {
fc := clock.NewFake()
mm := &mocks.Mailer{}
mr := &mockRevoker{}
bkr := &badKeyRevoker{dbMap: dbMap,
maxRevocations: 10,
serialBatchSize: 1,
raClient: mr,
mailer: mm,
emailSubject: "testing",
emailTemplate: testTemplate,
logger: blog.NewMock(),
clk: fc,
}
// populate DB with all the test data
regIDA := insertRegistration(t, dbMap, fc, "a@example.com")
regIDB := insertRegistration(t, dbMap, fc, "a@example.com")
regIDC := insertRegistration(t, dbMap, fc, "b@example.com")
regIDA := insertRegistration(t, dbMap, fc)
regIDB := insertRegistration(t, dbMap, fc)
regIDC := insertRegistration(t, dbMap, fc)
hashA := randHash(t)
@@ -471,8 +401,6 @@ func TestInvokeRevokerHasNoExtantCerts(t *testing.T) {
test.AssertNotError(t, err, "invoke failed")
test.AssertEquals(t, noWork, false)
test.AssertEquals(t, mr.revoked, 4)
test.AssertEquals(t, len(mm.Messages), 1)
test.AssertEquals(t, mm.Messages[0].To, "b@example.com")
}
func TestBackoffPolicy(t *testing.T) {


@@ -10,7 +10,7 @@ import (
"github.com/letsencrypt/boulder/cmd"
"github.com/letsencrypt/boulder/features"
bgrpc "github.com/letsencrypt/boulder/grpc"
"github.com/letsencrypt/boulder/policy"
"github.com/letsencrypt/boulder/iana"
"github.com/letsencrypt/boulder/va"
vaConfig "github.com/letsencrypt/boulder/va/config"
vapb "github.com/letsencrypt/boulder/va/proto"
@@ -82,16 +82,12 @@ func main() {
clk := cmd.Clock()
var servers bdns.ServerProvider
proto := "udp"
if features.Get().DOH {
proto = "tcp"
}
if len(c.VA.DNSStaticResolvers) != 0 {
servers, err = bdns.NewStaticProvider(c.VA.DNSStaticResolvers)
cmd.FailOnError(err, "Couldn't start static DNS server resolver")
} else {
servers, err = bdns.StartDynamicProvider(c.VA.DNSProvider, 60*time.Second, proto)
servers, err = bdns.StartDynamicProvider(c.VA.DNSProvider, 60*time.Second, "tcp")
cmd.FailOnError(err, "Couldn't start dynamic DNS server resolver")
}
defer servers.Stop()
@@ -153,7 +149,7 @@ func main() {
c.VA.AccountURIPrefixes,
va.PrimaryPerspective,
"",
policy.IsReservedIP)
iana.IsReservedAddr)
cmd.FailOnError(err, "Unable to create VA server")
start, err := bgrpc.NewServer(c.VA.GRPC, logger).Add(


@@ -127,6 +127,11 @@ type Config struct {
// Deprecated: This field no longer has any effect.
PendingAuthorizationLifetimeDays int `validate:"-"`
// MaxContactsPerRegistration limits the number of contact addresses which
// can be provided in a single NewAccount request. Requests containing more
// contacts than this are rejected. Default: 10.
MaxContactsPerRegistration int `validate:"omitempty,min=1"`
AccountCache *CacheConfig
Limiter struct {
@@ -312,6 +317,10 @@ func main() {
c.WFE.StaleTimeout.Duration = time.Minute * 10
}
if c.WFE.MaxContactsPerRegistration == 0 {
c.WFE.MaxContactsPerRegistration = 10
}
var limiter *ratelimits.Limiter
var txnBuilder *ratelimits.TransactionBuilder
var limiterRedis *bredis.Ring
@@ -346,6 +355,7 @@ func main() {
logger,
c.WFE.Timeout.Duration,
c.WFE.StaleTimeout.Duration,
c.WFE.MaxContactsPerRegistration,
rac,
sac,
eec,


@@ -15,16 +15,12 @@ import (
_ "github.com/letsencrypt/boulder/cmd/boulder-va"
_ "github.com/letsencrypt/boulder/cmd/boulder-wfe2"
_ "github.com/letsencrypt/boulder/cmd/cert-checker"
_ "github.com/letsencrypt/boulder/cmd/contact-auditor"
_ "github.com/letsencrypt/boulder/cmd/crl-checker"
_ "github.com/letsencrypt/boulder/cmd/crl-storer"
_ "github.com/letsencrypt/boulder/cmd/crl-updater"
_ "github.com/letsencrypt/boulder/cmd/email-exporter"
_ "github.com/letsencrypt/boulder/cmd/expiration-mailer"
_ "github.com/letsencrypt/boulder/cmd/id-exporter"
_ "github.com/letsencrypt/boulder/cmd/log-validator"
_ "github.com/letsencrypt/boulder/cmd/nonce-service"
_ "github.com/letsencrypt/boulder/cmd/notify-mailer"
_ "github.com/letsencrypt/boulder/cmd/ocsp-responder"
_ "github.com/letsencrypt/boulder/cmd/remoteva"
_ "github.com/letsencrypt/boulder/cmd/reversed-hostname-checker"


@@ -305,12 +305,11 @@ func makeTemplate(randReader io.Reader, profile *certProfile, pubKey []byte, tbc
case crlCert:
cert.IsCA = false
case requestCert, intermediateCert:
// id-kp-serverAuth and id-kp-clientAuth are included in intermediate
// certificates in order to technically constrain them. id-kp-serverAuth
// is required by 7.1.2.2.g of the CABF Baseline Requirements, but
// id-kp-clientAuth isn't. We include id-kp-clientAuth as we also include
// it in our end-entity certificates.
cert.ExtKeyUsage = []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth, x509.ExtKeyUsageServerAuth}
// id-kp-serverAuth is included in intermediate certificates, as required by
// Section 7.1.2.10.6 of the CA/BF Baseline Requirements.
// id-kp-clientAuth is excluded, as required by section 3.2.1 of the Chrome
// Root Program Requirements.
cert.ExtKeyUsage = []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth}
cert.MaxPathLenZero = true
case crossCert:
cert.ExtKeyUsage = tbcs.ExtKeyUsage


@@ -133,9 +133,8 @@ func TestMakeTemplateRoot(t *testing.T) {
cert, err = makeTemplate(randReader, profile, pubKey, nil, intermediateCert)
test.AssertNotError(t, err, "makeTemplate failed when everything worked as expected")
test.Assert(t, cert.MaxPathLenZero, "MaxPathLenZero not set in intermediate template")
test.AssertEquals(t, len(cert.ExtKeyUsage), 2)
test.AssertEquals(t, cert.ExtKeyUsage[0], x509.ExtKeyUsageClientAuth)
test.AssertEquals(t, cert.ExtKeyUsage[1], x509.ExtKeyUsageServerAuth)
test.AssertEquals(t, len(cert.ExtKeyUsage), 1)
test.AssertEquals(t, cert.ExtKeyUsage[0], x509.ExtKeyUsageServerAuth)
}
func TestMakeTemplateRestrictedCrossCertificate(t *testing.T) {


@@ -313,8 +313,8 @@ func (c *certChecker) checkValidations(ctx context.Context, cert *corepb.Certifi
return fmt.Errorf("no relevant authzs found valid at %s", cert.Issued)
}
// We may get multiple authorizations for the same name, but that's okay.
// Any authorization for a given name is sufficient.
// We may get multiple authorizations for the same identifier, but that's
// okay. Any authorization for a given identifier is sufficient.
identToAuthz := make(map[identifier.ACMEIdentifier]*corepb.Authorization)
for _, m := range authzs {
identToAuthz[identifier.FromProto(m.Identifier)] = m


@@ -89,6 +89,8 @@ func (d *DBConfig) URL() (string, error) {
return strings.TrimSpace(string(url)), err
}
// SMTPConfig is deprecated.
// TODO(#8199): Delete this when it is removed from bad-key-revoker's config.
type SMTPConfig struct {
PasswordConfig
Server string `validate:"required"`
@@ -463,7 +465,7 @@ type GRPCServerConfig struct {
// These service names must match the service names advertised by gRPC itself,
// which are identical to the names set in our gRPC .proto files prefixed by
// the package names set in those files (e.g. "ca.CertificateAuthority").
Services map[string]GRPCServiceConfig `json:"services" validate:"required,dive,required"`
Services map[string]*GRPCServiceConfig `json:"services" validate:"required,dive,required"`
// MaxConnectionAge specifies how long a connection may live before the server sends a GoAway to the
// client. Because gRPC connections re-resolve DNS after a connection close,
// this controls how long it takes before a client learns about changes to its
@@ -474,10 +476,10 @@ type GRPCServerConfig struct {
// GRPCServiceConfig contains the information needed to configure a gRPC service.
type GRPCServiceConfig struct {
// PerServiceClientNames is a map of gRPC service names to client certificate
// SANs. The upstream listening server will reject connections from clients
// which do not appear in this list, and the server interceptor will reject
// RPC calls for this service from clients which are not listed here.
// ClientNames is the list of accepted gRPC client certificate SANs.
// Connections from clients not in this list will be rejected by the
// upstream listener, and RPCs from unlisted clients will be denied by the
// server interceptor.
ClientNames []string `json:"clientNames" validate:"min=1,dive,hostname,required"`
}
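A config stanza matching the `Services` map and `ClientNames` field above might look like the following sketch; the service names, address, and SAN values here are purely illustrative, not taken from Boulder's shipped configs.

```json
{
  "grpc": {
    "address": ":9090",
    "maxConnectionAge": "30s",
    "services": {
      "ca.CertificateAuthority": {
        "clientNames": ["ra.boulder"]
      },
      "grpc.health.v1.Health": {
        "clientNames": ["health-checker.boulder"]
      }
    }
  }
}
```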


@@ -1,84 +0,0 @@
# Contact-Auditor
Audits subscriber registrations for e-mail addresses that
`notify-mailer` is currently configured to skip.
# Usage:
```shell
-config string
File containing a JSON config.
-to-file
Write the audit results to a file.
-to-stdout
Print the audit results to stdout.
```
## Results format:
```
<id> <createdAt> <problem type> "<contact contents or entry>" "<error msg>"
```
## Example output:
### Successful run with no violations encountered and `--to-file`:
```
I004823 contact-auditor nfWK_gM Running contact-auditor
I004823 contact-auditor qJ_zsQ4 Beginning database query
I004823 contact-auditor je7V9QM Query completed successfully
I004823 contact-auditor 7LzGvQI Audit finished successfully
I004823 contact-auditor 5Pbk_QM Audit results were written to: audit-2006-01-02T15:04.tsv
```
### Contact contains entries that violate policy and `--to-stdout`:
```
I004823 contact-auditor nfWK_gM Running contact-auditor
I004823 contact-auditor qJ_zsQ4 Beginning database query
I004823 contact-auditor je7V9QM Query completed successfully
1 2006-01-02 15:04:05 validation "<contact entry>" "<error msg>"
...
I004823 contact-auditor 2fv7-QY Audit finished successfully
```
### Contact is not valid JSON and `--to-stdout`:
```
I004823 contact-auditor nfWK_gM Running contact-auditor
I004823 contact-auditor qJ_zsQ4 Beginning database query
I004823 contact-auditor je7V9QM Query completed successfully
3 2006-01-02 15:04:05 unmarshal "<contact contents>" "<error msg>"
...
I004823 contact-auditor 2fv7-QY Audit finished successfully
```
### Audit incomplete, query ended prematurely:
```
I004823 contact-auditor nfWK_gM Running contact-auditor
I004823 contact-auditor qJ_zsQ4 Beginning database query
...
E004823 contact-auditor 8LmTgww [AUDIT] Audit was interrupted, results may be incomplete: <error msg>
exit status 1
```
# Configuration file:
The path to a database config file like the one below must be provided
following the `-config` flag.
```json
{
"contactAuditor": {
"db": {
"dbConnectFile": <string>,
"maxOpenConns": <int>,
"maxIdleConns": <int>,
"connMaxLifetime": <int>,
"connMaxIdleTime": <int>
}
}
}
```


@@ -1,212 +0,0 @@
package notmain
import (
"context"
"database/sql"
"encoding/json"
"errors"
"flag"
"fmt"
"os"
"strings"
"time"
"github.com/letsencrypt/boulder/cmd"
"github.com/letsencrypt/boulder/db"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/policy"
"github.com/letsencrypt/boulder/sa"
)
type contactAuditor struct {
db *db.WrappedMap
resultsFile *os.File
writeToStdout bool
logger blog.Logger
}
type result struct {
id int64
contacts []string
createdAt string
}
func unmarshalContact(contact []byte) ([]string, error) {
var contacts []string
err := json.Unmarshal(contact, &contacts)
if err != nil {
return nil, err
}
return contacts, nil
}
func validateContacts(id int64, createdAt string, contacts []string) error {
// Setup a buffer to store any validation problems we encounter.
var probsBuff strings.Builder
// Helper to write validation problems to our buffer.
writeProb := func(contact string, prob string) {
// Add validation problem to buffer.
fmt.Fprintf(&probsBuff, "%d\t%s\tvalidation\t%q\t%q\t%q\n", id, createdAt, contact, prob, contacts)
}
for _, contact := range contacts {
if strings.HasPrefix(contact, "mailto:") {
err := policy.ValidEmail(strings.TrimPrefix(contact, "mailto:"))
if err != nil {
writeProb(contact, err.Error())
}
} else {
writeProb(contact, "missing 'mailto:' prefix")
}
}
if probsBuff.Len() != 0 {
return errors.New(probsBuff.String())
}
return nil
}
// beginAuditQuery executes the audit query and returns a cursor used to
// stream the results.
func (c contactAuditor) beginAuditQuery(ctx context.Context) (*sql.Rows, error) {
rows, err := c.db.QueryContext(ctx, `
SELECT DISTINCT id, contact, createdAt
FROM registrations
WHERE contact NOT IN ('[]', 'null');`)
if err != nil {
return nil, err
}
return rows, nil
}
func (c contactAuditor) writeResults(result string) {
if c.writeToStdout {
_, err := fmt.Print(result)
if err != nil {
c.logger.Errf("Error while writing result to stdout: %s", err)
}
}
if c.resultsFile != nil {
_, err := c.resultsFile.WriteString(result)
if err != nil {
c.logger.Errf("Error while writing result to file: %s", err)
}
}
}
// run retrieves a cursor from `beginAuditQuery` and then audits the
// `contact` column of all returned rows for abnormalities or policy
// violations.
func (c contactAuditor) run(ctx context.Context, resChan chan *result) error {
c.logger.Infof("Beginning database query")
rows, err := c.beginAuditQuery(ctx)
if err != nil {
return err
}
for rows.Next() {
var id int64
var contact []byte
var createdAt string
err := rows.Scan(&id, &contact, &createdAt)
if err != nil {
return err
}
contacts, err := unmarshalContact(contact)
if err != nil {
c.writeResults(fmt.Sprintf("%d\t%s\tunmarshal\t%q\t%q\n", id, createdAt, contact, err))
}
err = validateContacts(id, createdAt, contacts)
if err != nil {
c.writeResults(err.Error())
}
// Only used for testing.
if resChan != nil {
resChan <- &result{id, contacts, createdAt}
}
}
// Ensure the query wasn't interrupted before it could complete.
err = rows.Close() //nolint:sqlclosecheck // the lint wants us to do this in a defer instead, but we want to return the error
if err != nil {
return err
} else {
c.logger.Info("Query completed successfully")
}
// Only used for testing.
if resChan != nil {
close(resChan)
}
return nil
}
type Config struct {
ContactAuditor struct {
DB cmd.DBConfig
}
}
func main() {
configFile := flag.String("config", "", "File containing a JSON config.")
writeToStdout := flag.Bool("to-stdout", false, "Print the audit results to stdout.")
writeToFile := flag.Bool("to-file", false, "Write the audit results to a file.")
flag.Parse()
logger := cmd.NewLogger(cmd.SyslogConfig{StdoutLevel: 7})
logger.Info(cmd.VersionString())
if *configFile == "" {
flag.Usage()
os.Exit(1)
}
// Load config from JSON.
configData, err := os.ReadFile(*configFile)
cmd.FailOnError(err, fmt.Sprintf("Error reading config file: %q", *configFile))
var cfg Config
err = json.Unmarshal(configData, &cfg)
cmd.FailOnError(err, "Couldn't unmarshal config")
db, err := sa.InitWrappedDb(cfg.ContactAuditor.DB, nil, logger)
cmd.FailOnError(err, "Couldn't setup database client")
var resultsFile *os.File
if *writeToFile {
resultsFile, err = os.Create(
fmt.Sprintf("contact-audit-%s.tsv", time.Now().Format("2006-01-02T15:04")),
)
cmd.FailOnError(err, "Failed to create results file")
}
// Setup and run contact-auditor.
auditor := contactAuditor{
db: db,
resultsFile: resultsFile,
writeToStdout: *writeToStdout,
logger: logger,
}
logger.Info("Running contact-auditor")
err = auditor.run(context.TODO(), nil)
cmd.FailOnError(err, "Audit was interrupted, results may be incomplete")
logger.Info("Audit finished successfully")
if *writeToFile {
logger.Infof("Audit results were written to: %s", resultsFile.Name())
resultsFile.Close()
}
}
func init() {
cmd.RegisterCommand("contact-auditor", main, &cmd.ConfigValidator{Config: &Config{}})
}


@@ -1,212 +0,0 @@
package notmain
import (
"context"
"fmt"
"os"
"strings"
"testing"
"time"
"github.com/jmhodges/clock"
corepb "github.com/letsencrypt/boulder/core/proto"
"github.com/letsencrypt/boulder/db"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/metrics"
"github.com/letsencrypt/boulder/sa"
"github.com/letsencrypt/boulder/test"
"github.com/letsencrypt/boulder/test/vars"
)
var (
regA *corepb.Registration
regB *corepb.Registration
regC *corepb.Registration
regD *corepb.Registration
)
const (
emailARaw = "test@example.com"
emailBRaw = "example@notexample.com"
emailCRaw = "test-example@notexample.com"
telNum = "666-666-7777"
)
func TestContactAuditor(t *testing.T) {
testCtx := setup(t)
defer testCtx.cleanUp()
// Add some test registrations.
testCtx.addRegistrations(t)
resChan := make(chan *result, 10)
err := testCtx.c.run(context.Background(), resChan)
test.AssertNotError(t, err, "received error")
// We should get back A, B, C, and D
test.AssertEquals(t, len(resChan), 4)
for entry := range resChan {
err := validateContacts(entry.id, entry.createdAt, entry.contacts)
switch entry.id {
case regA.Id:
// Contact validation policy sad path.
test.AssertDeepEquals(t, entry.contacts, []string{"mailto:test@example.com"})
test.AssertError(t, err, "failed to error on a contact that violates our e-mail policy")
case regB.Id:
// Ensure grace period was respected.
test.AssertDeepEquals(t, entry.contacts, []string{"mailto:example@notexample.com"})
test.AssertNotError(t, err, "received error for a valid contact entry")
case regC.Id:
// Contact validation happy path.
test.AssertDeepEquals(t, entry.contacts, []string{"mailto:test-example@notexample.com"})
test.AssertNotError(t, err, "received error for a valid contact entry")
// Unmarshal Contact sad path.
_, err := unmarshalContact([]byte("[ mailto:test@example.com ]"))
test.AssertError(t, err, "failed to error while unmarshaling invalid Contact JSON")
// Fix our JSON and ensure that the contact field returns
// errors for our 2 additional contacts
contacts, err := unmarshalContact([]byte(`[ "mailto:test@example.com", "tel:666-666-7777" ]`))
test.AssertNotError(t, err, "received error while unmarshaling valid Contact JSON")
// Ensure Contact validation now fails.
err = validateContacts(entry.id, entry.createdAt, contacts)
test.AssertError(t, err, "failed to error on 2 invalid Contact entries")
case regD.Id:
test.AssertDeepEquals(t, entry.contacts, []string{"tel:666-666-7777"})
test.AssertError(t, err, "failed to error on an invalid contact entry")
default:
t.Errorf("ID: %d was not expected", entry.id)
}
}
// Load results file.
data, err := os.ReadFile(testCtx.c.resultsFile.Name())
if err != nil {
t.Error(err)
}
// Results file should contain 2 newlines, 1 for each result.
contentLines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
test.AssertEquals(t, len(contentLines), 2)
// Each result entry should contain six tab separated columns.
for _, line := range contentLines {
test.AssertEquals(t, len(strings.Split(line, "\t")), 6)
}
}
type testCtx struct {
c contactAuditor
dbMap *db.WrappedMap
ssa *sa.SQLStorageAuthority
cleanUp func()
}
func (tc testCtx) addRegistrations(t *testing.T) {
emailA := "mailto:" + emailARaw
emailB := "mailto:" + emailBRaw
emailC := "mailto:" + emailCRaw
tel := "tel:" + telNum
// Every registration needs a unique JOSE key
jsonKeyA := []byte(`{
"kty":"RSA",
"n":"0vx7agoebGcQSuuPiLJXZptN9nndrQmbXEps2aiAFbWhM78LhWx4cbbfAAtVT86zwu1RK7aPFFxuhDR1L6tSoc_BJECPebWKRXjBZCiFV4n3oknjhMstn64tZ_2W-5JsGY4Hc5n9yBXArwl93lqt7_RN5w6Cf0h4QyQ5v-65YGjQR0_FDW2QvzqY368QQMicAtaSqzs8KJZgnYb9c7d0zgdAZHzu6qMQvRL5hajrn1n91CbOpbISD08qNLyrdkt-bFTWhAI4vMQFh6WeZu0fM4lFd2NcRwr3XPksINHaQ-G_xBniIqbw0Ls1jF44-csFCur-kEgU8awapJzKnqDKgw",
"e":"AQAB"
}`)
jsonKeyB := []byte(`{
"kty":"RSA",
"n":"z8bp-jPtHt4lKBqepeKF28g_QAEOuEsCIou6sZ9ndsQsEjxEOQxQ0xNOQezsKa63eogw8YS3vzjUcPP5BJuVzfPfGd5NVUdT-vSSwxk3wvk_jtNqhrpcoG0elRPQfMVsQWmxCAXCVRz3xbcFI8GTe-syynG3l-g1IzYIIZVNI6jdljCZML1HOMTTW4f7uJJ8mM-08oQCeHbr5ejK7O2yMSSYxW03zY-Tj1iVEebROeMv6IEEJNFSS4yM-hLpNAqVuQxFGetwtwjDMC1Drs1dTWrPuUAAjKGrP151z1_dE74M5evpAhZUmpKv1hY-x85DC6N0hFPgowsanmTNNiV75w",
"e":"AAEAAQ"
}`)
jsonKeyC := []byte(`{
"kty":"RSA",
"n":"rFH5kUBZrlPj73epjJjyCxzVzZuV--JjKgapoqm9pOuOt20BUTdHqVfC2oDclqM7HFhkkX9OSJMTHgZ7WaVqZv9u1X2yjdx9oVmMLuspX7EytW_ZKDZSzL-sCOFCuQAuYKkLbsdcA3eHBK_lwc4zwdeHFMKIulNvLqckkqYB9s8GpgNXBDIQ8GjR5HuJke_WUNjYHSd8jY1LU9swKWsLQe2YoQUz_ekQvBvBCoaFEtrtRaSJKNLIVDObXFr2TLIiFiM0Em90kK01-eQ7ZiruZTKomll64bRFPoNo4_uwubddg3xTqur2vdF3NyhTrYdvAgTem4uC0PFjEQ1bK_djBQ",
"e":"AQAB"
}`)
jsonKeyD := []byte(`{
"kty":"RSA",
"n":"rFH5kUBZrlPj73epjJjyCxzVzZuV--JjKgapoqm9pOuOt20BUTdHqVfC2oDclqM7HFhkkX9OSJMTHgZ7WaVqZv9u1X2yjdx9oVmMLuspX7EytW_ZKDZSzL-FCOFCuQAuYKkLbsdcA3eHBK_lwc4zwdeHFMKIulNvLqckkqYB9s8GpgNXBDIQ8GjR5HuJke_WUNjYHSd8jY1LU9swKWsLQe2YoQUz_ekQvBvBCoaFEtrtRaSJKNLIVDObXFr2TLIiFiM0Em90kK01-eQ7ZiruZTKomll64bRFPoNo4_uwubddg3xTqur2vdF3NyhTrYdvAgTem4uC0PFjEQ1bK_djBQ",
"e":"AQAB"
}`)
regA = &corepb.Registration{
Id: 1,
Contact: []string{emailA},
Key: jsonKeyA,
}
regB = &corepb.Registration{
Id: 2,
Contact: []string{emailB},
Key: jsonKeyB,
}
regC = &corepb.Registration{
Id: 3,
Contact: []string{emailC},
Key: jsonKeyC,
}
// Reg D has a `tel:` contact ACME URL
regD = &corepb.Registration{
Id: 4,
Contact: []string{tel},
Key: jsonKeyD,
}
// Add the four test registrations
ctx := context.Background()
var err error
regA, err = tc.ssa.NewRegistration(ctx, regA)
test.AssertNotError(t, err, "Couldn't store regA")
regB, err = tc.ssa.NewRegistration(ctx, regB)
test.AssertNotError(t, err, "Couldn't store regB")
regC, err = tc.ssa.NewRegistration(ctx, regC)
test.AssertNotError(t, err, "Couldn't store regC")
regD, err = tc.ssa.NewRegistration(ctx, regD)
test.AssertNotError(t, err, "Couldn't store regD")
}
func setup(t *testing.T) testCtx {
log := blog.UseMock()
// Using DBConnSAFullPerms to be able to insert registrations and
// certificates
dbMap, err := sa.DBMapForTest(vars.DBConnSAFullPerms)
if err != nil {
t.Fatalf("Couldn't connect to the database: %s", err)
}
// Make temp results file
file, err := os.CreateTemp("", fmt.Sprintf("audit-%s", time.Now().Format("2006-01-02T15:04")))
if err != nil {
t.Fatal(err)
}
cleanUp := func() {
test.ResetBoulderTestDatabase(t)
file.Close()
os.Remove(file.Name())
}
db, err := sa.DBMapForTest(vars.DBConnSAMailer)
if err != nil {
t.Fatalf("Couldn't connect to the database: %s", err)
}
ssa, err := sa.NewSQLStorageAuthority(dbMap, dbMap, nil, 1, 0, clock.New(), log, metrics.NoopRegisterer)
if err != nil {
t.Fatalf("unable to create SQLStorageAuthority: %s", err)
}
return testCtx{
c: contactAuditor{
db: db,
resultsFile: file,
logger: blog.NewMock(),
},
dbMap: dbMap,
ssa: ssa,
cleanUp: cleanUp,
}
}


@@ -105,10 +105,9 @@ func main() {
clientSecret,
c.EmailExporter.SalesforceBaseURL,
c.EmailExporter.PardotBaseURL,
cache,
)
cmd.FailOnError(err, "Creating Pardot API client")
exporterServer := email.NewExporterImpl(pardotClient, c.EmailExporter.PerDayLimit, c.EmailExporter.MaxConcurrentRequests, scope, logger)
exporterServer := email.NewExporterImpl(pardotClient, cache, c.EmailExporter.PerDayLimit, c.EmailExporter.MaxConcurrentRequests, scope, logger)
tlsConfig, err := c.EmailExporter.TLS.Load(scope)
cmd.FailOnError(err, "Loading email-exporter TLS config")


@@ -1,964 +0,0 @@
package notmain
import (
"bytes"
"context"
"crypto/x509"
"encoding/json"
"errors"
"flag"
"fmt"
"math"
netmail "net/mail"
"net/url"
"os"
"sort"
"strings"
"sync"
"text/template"
"time"
"github.com/jmhodges/clock"
"google.golang.org/grpc"
"github.com/prometheus/client_golang/prometheus"
"github.com/letsencrypt/boulder/cmd"
"github.com/letsencrypt/boulder/config"
"github.com/letsencrypt/boulder/core"
corepb "github.com/letsencrypt/boulder/core/proto"
"github.com/letsencrypt/boulder/db"
"github.com/letsencrypt/boulder/features"
bgrpc "github.com/letsencrypt/boulder/grpc"
"github.com/letsencrypt/boulder/identifier"
blog "github.com/letsencrypt/boulder/log"
bmail "github.com/letsencrypt/boulder/mail"
"github.com/letsencrypt/boulder/metrics"
"github.com/letsencrypt/boulder/policy"
"github.com/letsencrypt/boulder/sa"
sapb "github.com/letsencrypt/boulder/sa/proto"
)
const (
defaultExpirationSubject = "Let's Encrypt certificate expiration notice for domain {{.ExpirationSubject}}"
)
var (
errNoValidEmail = errors.New("no usable contact address")
)
type regStore interface {
GetRegistration(ctx context.Context, req *sapb.RegistrationID, _ ...grpc.CallOption) (*corepb.Registration, error)
}
// limiter tracks how many mails we've sent to a given address in a given day.
// Note that this does not track mails across restarts of the process.
// Modifications to `counts` and `currentDay` are protected by a mutex.
type limiter struct {
sync.RWMutex
// currentDay is a day in UTC, truncated to 24 hours. When the current
// time is more than 24 hours past this date, all counts reset and this
// date is updated.
currentDay time.Time
// counts is a map from address to number of mails we have attempted to
// send during `currentDay`.
counts map[string]int
// limit is the number of sends after which we'll return an error from
// check()
limit int
clk clock.Clock
}
const oneDay = 24 * time.Hour
// maybeBumpDay updates lim.currentDay if its current value is more than 24
// hours ago, and resets the counts map. Expects limiter is locked.
func (lim *limiter) maybeBumpDay() {
today := lim.clk.Now().Truncate(oneDay)
if (today.Sub(lim.currentDay) >= oneDay && len(lim.counts) > 0) ||
lim.counts == nil {
// Throw away counts so far and switch to a new day.
// This also does the initialization of counts and currentDay the first
// time inc() is called.
lim.counts = make(map[string]int)
lim.currentDay = today
}
}
// inc increments the count for the current day, and cleans up previous days
// if needed.
func (lim *limiter) inc(address string) {
lim.Lock()
defer lim.Unlock()
lim.maybeBumpDay()
lim.counts[address] += 1
}
// check checks whether the count for the given address is at the limit,
// and returns an error if so.
func (lim *limiter) check(address string) error {
lim.RLock()
defer lim.RUnlock()
lim.maybeBumpDay()
if lim.counts[address] >= lim.limit {
return errors.New("daily mail limit exceeded for this email address")
}
return nil
}
type mailer struct {
log blog.Logger
dbMap *db.WrappedMap
rs regStore
mailer bmail.Mailer
emailTemplate *template.Template
subjectTemplate *template.Template
nagTimes []time.Duration
parallelSends uint
certificatesPerTick int
// addressLimiter limits how many mails we'll send to a single address in
// a single day.
addressLimiter *limiter
// Maximum number of rows to update in a single SQL UPDATE statement.
updateChunkSize int
clk clock.Clock
stats mailerStats
}
type certDERWithRegID struct {
DER core.CertDER
RegID int64
}
type mailerStats struct {
sendDelay *prometheus.GaugeVec
sendDelayHistogram *prometheus.HistogramVec
nagsAtCapacity *prometheus.GaugeVec
errorCount *prometheus.CounterVec
sendLatency prometheus.Histogram
processingLatency prometheus.Histogram
certificatesExamined prometheus.Counter
certificatesAlreadyRenewed prometheus.Counter
certificatesPerAccountNeedingMail prometheus.Histogram
}
func (m *mailer) sendNags(conn bmail.Conn, contacts []string, certs []*x509.Certificate) error {
if len(certs) == 0 {
return errors.New("no certs given to send nags for")
}
emails := []string{}
for _, contact := range contacts {
parsed, err := url.Parse(contact)
if err != nil {
m.log.Errf("parsing contact email: %s", err)
continue
}
if parsed.Scheme != "mailto" {
continue
}
address := parsed.Opaque
err = policy.ValidEmail(address)
if err != nil {
m.log.Debugf("skipping invalid email: %s", err)
continue
}
err = m.addressLimiter.check(address)
if err != nil {
m.log.Infof("not sending mail: %s", err)
continue
}
m.addressLimiter.inc(address)
emails = append(emails, parsed.Opaque)
}
if len(emails) == 0 {
return errNoValidEmail
}
expiresIn := time.Duration(math.MaxInt64)
expDate := m.clk.Now()
domains := []string{}
serials := []string{}
// Pick out the expiration date that is closest to being hit.
for _, cert := range certs {
domains = append(domains, cert.DNSNames...)
serials = append(serials, core.SerialToString(cert.SerialNumber))
possible := cert.NotAfter.Sub(m.clk.Now())
if possible < expiresIn {
expiresIn = possible
expDate = cert.NotAfter
}
}
domains = core.UniqueLowerNames(domains)
sort.Strings(domains)
const maxSerials = 100
truncatedSerials := serials
if len(truncatedSerials) > maxSerials {
truncatedSerials = serials[0:maxSerials]
}
const maxDomains = 100
truncatedDomains := domains
if len(truncatedDomains) > maxDomains {
truncatedDomains = domains[0:maxDomains]
}
// Construct the information about the expiring certificates for use in the
// subject template
expiringSubject := fmt.Sprintf("%q", domains[0])
if len(domains) > 1 {
expiringSubject += fmt.Sprintf(" (and %d more)", len(domains)-1)
}
// Execute the subjectTemplate by filling in the ExpirationSubject
subjBuf := new(bytes.Buffer)
err := m.subjectTemplate.Execute(subjBuf, struct {
ExpirationSubject string
}{
ExpirationSubject: expiringSubject,
})
if err != nil {
m.stats.errorCount.With(prometheus.Labels{"type": "SubjectTemplateFailure"}).Inc()
return err
}
email := struct {
ExpirationDate string
DaysToExpiration int
DNSNames string
TruncatedDNSNames string
NumDNSNamesOmitted int
}{
ExpirationDate: expDate.UTC().Format(time.DateOnly),
DaysToExpiration: int(expiresIn.Hours() / 24),
DNSNames: strings.Join(domains, "\n"),
TruncatedDNSNames: strings.Join(truncatedDomains, "\n"),
NumDNSNamesOmitted: len(domains) - len(truncatedDomains),
}
msgBuf := new(bytes.Buffer)
err = m.emailTemplate.Execute(msgBuf, email)
if err != nil {
m.stats.errorCount.With(prometheus.Labels{"type": "TemplateFailure"}).Inc()
return err
}
logItem := struct {
DaysToExpiration int
TruncatedDNSNames []string
TruncatedSerials []string
}{
DaysToExpiration: email.DaysToExpiration,
TruncatedDNSNames: truncatedDomains,
TruncatedSerials: truncatedSerials,
}
logStr, err := json.Marshal(logItem)
if err != nil {
return fmt.Errorf("failed to serialize log line: %w", err)
}
m.log.Infof("attempting send for JSON=%s", string(logStr))
startSending := m.clk.Now()
err = conn.SendMail(emails, subjBuf.String(), msgBuf.String())
if err != nil {
return fmt.Errorf("failed send for %s: %w", string(logStr), err)
}
finishSending := m.clk.Now()
elapsed := finishSending.Sub(startSending)
m.stats.sendLatency.Observe(elapsed.Seconds())
return nil
}
// updateLastNagTimestamps updates the lastExpirationNagSent column for every cert in
// the given list. Even though it can encounter errors, it only logs them and
// does not return them, because we always prefer to simply continue.
func (m *mailer) updateLastNagTimestamps(ctx context.Context, certs []*x509.Certificate) {
for len(certs) > 0 {
size := len(certs)
if m.updateChunkSize > 0 && size > m.updateChunkSize {
size = m.updateChunkSize
}
chunk := certs[0:size]
certs = certs[size:]
m.updateLastNagTimestampsChunk(ctx, chunk)
}
}
// updateLastNagTimestampsChunk processes a single chunk (up to 65k) of certificates.
func (m *mailer) updateLastNagTimestampsChunk(ctx context.Context, certs []*x509.Certificate) {
params := make([]interface{}, len(certs)+1)
for i, cert := range certs {
params[i+1] = core.SerialToString(cert.SerialNumber)
}
query := fmt.Sprintf(
"UPDATE certificateStatus SET lastExpirationNagSent = ? WHERE serial IN (%s)",
db.QuestionMarks(len(certs)),
)
params[0] = m.clk.Now()
_, err := m.dbMap.ExecContext(ctx, query, params...)
if err != nil {
m.log.AuditErrf("Error updating certificate status for %d certs: %s", len(certs), err)
m.stats.errorCount.With(prometheus.Labels{"type": "UpdateCertificateStatus"}).Inc()
}
}
func (m *mailer) certIsRenewed(ctx context.Context, cert *x509.Certificate) (bool, error) {
idents := identifier.FromCert(cert)
var present bool
err := m.dbMap.SelectOne(
ctx,
&present,
`SELECT EXISTS (SELECT id FROM fqdnSets WHERE setHash = ? AND issued > ? LIMIT 1)`,
core.HashIdentifiers(idents),
cert.NotBefore,
)
return present, err
}
type work struct {
regID int64
certDERs []core.CertDER
}
func (m *mailer) processCerts(
ctx context.Context,
allCerts []certDERWithRegID,
expiresIn time.Duration,
) error {
regIDToCertDERs := make(map[int64][]core.CertDER)
for _, cert := range allCerts {
cs := regIDToCertDERs[cert.RegID]
cs = append(cs, cert.DER)
regIDToCertDERs[cert.RegID] = cs
}
parallelSends := m.parallelSends
if parallelSends == 0 {
parallelSends = 1
}
var wg sync.WaitGroup
workChan := make(chan work, len(regIDToCertDERs))
// Populate the work chan on a goroutine so work is available as soon
// as one of the sender routines starts.
go func(ch chan<- work) {
for regID, certs := range regIDToCertDERs {
ch <- work{regID, certs}
}
close(workChan)
}(workChan)
for senderNum := uint(0); senderNum < parallelSends; senderNum++ {
// For politeness' sake, don't open more than 1 new connection per
// second.
if senderNum > 0 {
time.Sleep(time.Second)
}
if ctx.Err() != nil {
return ctx.Err()
}
conn, err := m.mailer.Connect()
if err != nil {
m.log.AuditErrf("connecting parallel sender %d: %s", senderNum, err)
return err
}
wg.Add(1)
go func(conn bmail.Conn, ch <-chan work) {
defer wg.Done()
for w := range ch {
err := m.sendToOneRegID(ctx, conn, w.regID, w.certDERs, expiresIn)
if err != nil {
m.log.AuditErr(err.Error())
}
}
conn.Close()
}(conn, workChan)
}
wg.Wait()
return nil
}
func (m *mailer) sendToOneRegID(ctx context.Context, conn bmail.Conn, regID int64, certDERs []core.CertDER, expiresIn time.Duration) error {
if ctx.Err() != nil {
return ctx.Err()
}
if len(certDERs) == 0 {
return errors.New("shouldn't happen: empty certificate list in sendToOneRegID")
}
reg, err := m.rs.GetRegistration(ctx, &sapb.RegistrationID{Id: regID})
if err != nil {
m.stats.errorCount.With(prometheus.Labels{"type": "GetRegistration"}).Inc()
return fmt.Errorf("Error fetching registration %d: %s", regID, err)
}
parsedCerts := []*x509.Certificate{}
for i, certDER := range certDERs {
if ctx.Err() != nil {
return ctx.Err()
}
parsedCert, err := x509.ParseCertificate(certDER)
if err != nil {
// TODO(#1420): tell registration about this error
m.log.AuditErrf("Error parsing certificate: %s. Body: %x", err, certDER)
m.stats.errorCount.With(prometheus.Labels{"type": "ParseCertificate"}).Inc()
continue
}
// The histogram version of send delay reports the worst case send delay for
// a single regID in this cycle.
if i == 0 {
sendDelay := expiresIn - parsedCert.NotAfter.Sub(m.clk.Now())
m.stats.sendDelayHistogram.With(prometheus.Labels{"nag_group": expiresIn.String()}).Observe(
sendDelay.Truncate(time.Second).Seconds())
}
renewed, err := m.certIsRenewed(ctx, parsedCert)
if err != nil {
m.log.AuditErrf("expiration-mailer: error fetching renewal state: %v", err)
// assume not renewed
} else if renewed {
m.log.Debugf("Cert %s is already renewed", core.SerialToString(parsedCert.SerialNumber))
m.stats.certificatesAlreadyRenewed.Add(1)
m.updateLastNagTimestamps(ctx, []*x509.Certificate{parsedCert})
continue
}
parsedCerts = append(parsedCerts, parsedCert)
}
m.stats.certificatesPerAccountNeedingMail.Observe(float64(len(parsedCerts)))
if len(parsedCerts) == 0 {
// all certificates are renewed
return nil
}
err = m.sendNags(conn, reg.Contact, parsedCerts)
if err != nil {
// If the error was due to the address(es) being unusable or the mail being
// undeliverable, we don't want to try again later.
var badAddrErr *bmail.BadAddressSMTPError
if errors.Is(err, errNoValidEmail) || errors.As(err, &badAddrErr) {
m.updateLastNagTimestamps(ctx, parsedCerts)
// Some accounts have no email; some accounts have an invalid email.
// Treat those as non-error cases.
return nil
}
m.stats.errorCount.With(prometheus.Labels{"type": "SendNags"}).Inc()
return fmt.Errorf("sending nag emails: %s", err)
}
m.updateLastNagTimestamps(ctx, parsedCerts)
return nil
}
// findExpiringCertificates finds certificates that might need an expiration mail, filters them,
// groups by account, sends mail, and updates their status in the DB so we don't examine them again.
//
// Invariant: findExpiringCertificates should examine each certificate at most N times, where
// N is the number of reminders. For every certificate examined (barring errors), this function
// should update the lastExpirationNagSent field of certificateStatus, so it does not need to
// examine the same certificate again on the next go-round. This ensures we make forward progress
// and don't clog up the window of certificates to be examined.
func (m *mailer) findExpiringCertificates(ctx context.Context) error {
now := m.clk.Now()
// E.g. m.nagTimes = [2, 4, 8, 15] days from expiration
for i, expiresIn := range m.nagTimes {
left := now
if i > 0 {
left = left.Add(m.nagTimes[i-1])
}
right := now.Add(expiresIn)
m.log.Infof("expiration-mailer: Searching for certificates that expire between %s and %s and had last nag >%s before expiry",
left.UTC(), right.UTC(), expiresIn)
var certs []certDERWithRegID
var err error
if features.Get().ExpirationMailerUsesJoin {
certs, err = m.getCertsWithJoin(ctx, left, right, expiresIn)
} else {
certs, err = m.getCerts(ctx, left, right, expiresIn)
}
if err != nil {
return err
}
m.stats.certificatesExamined.Add(float64(len(certs)))
// If the query returned exactly `m.certificatesPerTick` rows, we need to
// increment a stat indicating that this nag group is at capacity. If this condition
// continually occurs across mailer runs then we will not catch up,
// resulting in under-sending expiration mails. The effects of this
// were initially described in issue #2002[0].
//
// 0: https://github.com/letsencrypt/boulder/issues/2002
atCapacity := float64(0)
if len(certs) == m.certificatesPerTick {
m.log.Infof("nag group %s expiring certificates at configured capacity (select limit %d)",
expiresIn.String(), m.certificatesPerTick)
atCapacity = float64(1)
}
m.stats.nagsAtCapacity.With(prometheus.Labels{"nag_group": expiresIn.String()}).Set(atCapacity)
m.log.Infof("Found %d certificates expiring between %s and %s", len(certs),
left.Format(time.DateTime), right.Format(time.DateTime))
if len(certs) == 0 {
continue // nothing to do
}
processingStarted := m.clk.Now()
err = m.processCerts(ctx, certs, expiresIn)
if err != nil {
m.log.AuditErr(err.Error())
}
processingEnded := m.clk.Now()
elapsed := processingEnded.Sub(processingStarted)
m.stats.processingLatency.Observe(elapsed.Seconds())
}
return nil
}
func (m *mailer) getCertsWithJoin(ctx context.Context, left, right time.Time, expiresIn time.Duration) ([]certDERWithRegID, error) {
// Do a single JOIN query against the certificateStatus and certificates
// tables to find certificates nearing expiry that meet our criteria for
// email notification, fetching the certificate DER and registration ID
// directly.
var certs []certDERWithRegID
_, err := m.dbMap.Select(
ctx,
&certs,
`SELECT
cert.der as der, cert.registrationID as regID
FROM certificateStatus AS cs
JOIN certificates as cert
ON cs.serial = cert.serial
AND cs.notAfter > :cutoffA
AND cs.notAfter <= :cutoffB
AND cs.status != "revoked"
AND COALESCE(TIMESTAMPDIFF(SECOND, cs.lastExpirationNagSent, cs.notAfter) > :nagCutoff, 1)
ORDER BY cs.notAfter ASC
LIMIT :certificatesPerTick`,
map[string]interface{}{
"cutoffA": left,
"cutoffB": right,
"nagCutoff": expiresIn.Seconds(),
"certificatesPerTick": m.certificatesPerTick,
},
)
if err != nil {
m.log.AuditErrf("expiration-mailer: Error loading certificate serials: %s", err)
return nil, err
}
m.log.Debugf("found %d certificates", len(certs))
return certs, nil
}
func (m *mailer) getCerts(ctx context.Context, left, right time.Time, expiresIn time.Duration) ([]certDERWithRegID, error) {
// First we do a query on the certificateStatus table to find certificates
// nearing expiry that meet our criteria for email notification. We later
// sequentially fetch the certificate details. This avoids an expensive
// JOIN.
var serials []string
_, err := m.dbMap.Select(
ctx,
&serials,
`SELECT
cs.serial
FROM certificateStatus AS cs
WHERE cs.notAfter > :cutoffA
AND cs.notAfter <= :cutoffB
AND cs.status != "revoked"
AND COALESCE(TIMESTAMPDIFF(SECOND, cs.lastExpirationNagSent, cs.notAfter) > :nagCutoff, 1)
ORDER BY cs.notAfter ASC
LIMIT :certificatesPerTick`,
map[string]interface{}{
"cutoffA": left,
"cutoffB": right,
"nagCutoff": expiresIn.Seconds(),
"certificatesPerTick": m.certificatesPerTick,
},
)
if err != nil {
m.log.AuditErrf("expiration-mailer: Error loading certificate serials: %s", err)
return nil, err
}
m.log.Debugf("found %d certificates", len(serials))
// Now we can sequentially retrieve the certificate details for each of the
// certificate status rows
var certs []certDERWithRegID
for i, serial := range serials {
if ctx.Err() != nil {
return nil, ctx.Err()
}
cert, err := sa.SelectCertificate(ctx, m.dbMap, serial)
if err != nil {
// We can get a NoRowsErr when processing a serial number corresponding
// to a precertificate with no final certificate. Since this certificate
// is not being used by a subscriber, we don't send expiration email about
// it.
if db.IsNoRows(err) {
m.log.Infof("no rows for serial %q", serial)
continue
}
m.log.AuditErrf("expiration-mailer: Error loading cert %q: %s", serial, err)
continue
}
certs = append(certs, certDERWithRegID{
DER: cert.Der,
RegID: cert.RegistrationID,
})
if i == 0 {
// Report the send delay metric. Note: since results are ordered by
// notAfter ascending, the first certificate expires soonest, so this is
// the worst-case send delay of any certificate in this batch.
sendDelay := expiresIn - cert.Expires.AsTime().Sub(m.clk.Now())
m.stats.sendDelay.With(prometheus.Labels{"nag_group": expiresIn.String()}).Set(
sendDelay.Truncate(time.Second).Seconds())
}
}
return certs, nil
}
type durationSlice []time.Duration
func (ds durationSlice) Len() int {
return len(ds)
}
func (ds durationSlice) Less(a, b int) bool {
return ds[a] < ds[b]
}
func (ds durationSlice) Swap(a, b int) {
ds[a], ds[b] = ds[b], ds[a]
}
type Config struct {
Mailer struct {
DebugAddr string `validate:"omitempty,hostname_port"`
DB cmd.DBConfig
cmd.SMTPConfig
// From is an RFC 5322 formatted "From" address for reminder messages,
// e.g. "Example <example@test.org>"
From string `validate:"required"`
// Subject is the Subject line of reminder messages. This is a Go
// template with a single variable: ExpirationSubject, which contains
// a list of affected hostnames, possibly truncated.
Subject string
// CertLimit is the maximum number of certificates to investigate in a
// single batch. Defaults to 100.
CertLimit int `validate:"min=0"`
// MailsPerAddressPerDay is the maximum number of emails we'll send to
// a single address in a single day. Defaults to math.MaxInt (i.e.
// unlimited). Note that this does not track sends across restarts of
// the process, so we may send more than this when we restart
// expiration-mailer. This is a best-effort limitation.
MailsPerAddressPerDay int `validate:"min=0"`
// UpdateChunkSize is the maximum number of rows to update in a single
// SQL UPDATE statement.
UpdateChunkSize int `validate:"min=0,max=65535"`
NagTimes []string `validate:"min=1,dive,required"`
// Path to a text/template email template with a .gotmpl or .txt file
// extension.
EmailTemplate string `validate:"required"`
// How often to process a batch of certificates
Frequency config.Duration
// ParallelSends is the number of parallel goroutines used to process
// each batch of emails. Defaults to 1.
ParallelSends uint
TLS cmd.TLSConfig
SAService *cmd.GRPCClientConfig
// Path to a file containing a list of trusted root certificates for use
// during the SMTP connection (as opposed to the gRPC connections).
SMTPTrustedRootFile string
Features features.Config
}
Syslog cmd.SyslogConfig
OpenTelemetry cmd.OpenTelemetryConfig
}
func initStats(stats prometheus.Registerer) mailerStats {
sendDelay := prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "send_delay",
Help:    "For the last batch of certificates, difference between the idealized send time and actual send time. Will always be nonzero; bigger numbers are worse",
},
[]string{"nag_group"})
stats.MustRegister(sendDelay)
sendDelayHistogram := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "send_delay_histogram",
Help:    "For each mail sent, difference between the idealized send time and actual send time. Will always be nonzero; bigger numbers are worse",
Buckets: prometheus.LinearBuckets(86400, 86400, 10),
},
[]string{"nag_group"})
stats.MustRegister(sendDelayHistogram)
nagsAtCapacity := prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "nags_at_capacity",
Help: "Count of nag groups at capacity",
},
[]string{"nag_group"})
stats.MustRegister(nagsAtCapacity)
errorCount := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "errors",
Help: "Number of errors",
},
[]string{"type"})
stats.MustRegister(errorCount)
sendLatency := prometheus.NewHistogram(
prometheus.HistogramOpts{
Name: "send_latency",
Help: "Time the mailer takes sending messages in seconds",
Buckets: metrics.InternetFacingBuckets,
})
stats.MustRegister(sendLatency)
processingLatency := prometheus.NewHistogram(
prometheus.HistogramOpts{
Name: "processing_latency",
Help: "Time the mailer takes processing certificates in seconds",
Buckets: []float64{30, 60, 75, 90, 120, 600, 3600},
})
stats.MustRegister(processingLatency)
certificatesExamined := prometheus.NewCounter(
prometheus.CounterOpts{
Name: "certificates_examined",
Help: "Number of certificates looked at that are potentially due for an expiration mail",
})
stats.MustRegister(certificatesExamined)
certificatesAlreadyRenewed := prometheus.NewCounter(
prometheus.CounterOpts{
Name: "certificates_already_renewed",
Help: "Number of certificates from certificates_examined that were ignored because they were already renewed",
})
stats.MustRegister(certificatesAlreadyRenewed)
accountsNeedingMail := prometheus.NewHistogram(
prometheus.HistogramOpts{
Name: "certificates_per_account_needing_mail",
Help: "After ignoring certificates_already_renewed and grouping the remaining certificates by account, how many accounts needed to get an email; grouped by how many certificates each account needed",
Buckets: []float64{0, 1, 2, 100, 1000, 10000, 100000},
})
stats.MustRegister(accountsNeedingMail)
return mailerStats{
sendDelay: sendDelay,
sendDelayHistogram: sendDelayHistogram,
nagsAtCapacity: nagsAtCapacity,
errorCount: errorCount,
sendLatency: sendLatency,
processingLatency: processingLatency,
certificatesExamined: certificatesExamined,
certificatesAlreadyRenewed: certificatesAlreadyRenewed,
certificatesPerAccountNeedingMail: accountsNeedingMail,
}
}
func main() {
debugAddr := flag.String("debug-addr", "", "Debug server address override")
configFile := flag.String("config", "", "File path to the configuration file for this service")
certLimit := flag.Int("cert_limit", 0, "Count of certificates to process per expiration period")
reconnBase := flag.Duration("reconnectBase", 1*time.Second, "Base sleep duration between reconnect attempts")
reconnMax := flag.Duration("reconnectMax", 5*60*time.Second, "Max sleep duration between reconnect attempts after exponential backoff")
daemon := flag.Bool("daemon", false, "Run in daemon mode")
flag.Parse()
if *configFile == "" {
flag.Usage()
os.Exit(1)
}
var c Config
err := cmd.ReadConfigFile(*configFile, &c)
cmd.FailOnError(err, "Reading JSON config file into config structure")
features.Set(c.Mailer.Features)
if *debugAddr != "" {
c.Mailer.DebugAddr = *debugAddr
}
scope, logger, oTelShutdown := cmd.StatsAndLogging(c.Syslog, c.OpenTelemetry, c.Mailer.DebugAddr)
defer oTelShutdown(context.Background())
logger.Info(cmd.VersionString())
if *daemon && c.Mailer.Frequency.Duration == 0 {
fmt.Fprintln(os.Stderr, "mailer.frequency is not set in the JSON config")
os.Exit(1)
}
if *certLimit > 0 {
c.Mailer.CertLimit = *certLimit
}
// Default to 100 if no certLimit is set
if c.Mailer.CertLimit == 0 {
c.Mailer.CertLimit = 100
}
if c.Mailer.MailsPerAddressPerDay == 0 {
c.Mailer.MailsPerAddressPerDay = math.MaxInt
}
dbMap, err := sa.InitWrappedDb(c.Mailer.DB, scope, logger)
cmd.FailOnError(err, "While initializing dbMap")
tlsConfig, err := c.Mailer.TLS.Load(scope)
cmd.FailOnError(err, "TLS config")
clk := cmd.Clock()
conn, err := bgrpc.ClientSetup(c.Mailer.SAService, tlsConfig, scope, clk)
cmd.FailOnError(err, "Failed to load credentials and create gRPC connection to SA")
sac := sapb.NewStorageAuthorityClient(conn)
var smtpRoots *x509.CertPool
if c.Mailer.SMTPTrustedRootFile != "" {
pem, err := os.ReadFile(c.Mailer.SMTPTrustedRootFile)
cmd.FailOnError(err, "Loading trusted roots file")
smtpRoots = x509.NewCertPool()
if !smtpRoots.AppendCertsFromPEM(pem) {
cmd.Fail("Failed to parse root certs PEM")
}
}
// Load email template
emailTmpl, err := os.ReadFile(c.Mailer.EmailTemplate)
cmd.FailOnError(err, fmt.Sprintf("Could not read email template file [%s]", c.Mailer.EmailTemplate))
tmpl, err := template.New("expiry-email").Parse(string(emailTmpl))
cmd.FailOnError(err, "Could not parse email template")
// If there is no configured subject template, use a default
if c.Mailer.Subject == "" {
c.Mailer.Subject = defaultExpirationSubject
}
// Load subject template
subjTmpl, err := template.New("expiry-email-subject").Parse(c.Mailer.Subject)
cmd.FailOnError(err, "Could not parse email subject template")
fromAddress, err := netmail.ParseAddress(c.Mailer.From)
cmd.FailOnError(err, fmt.Sprintf("Could not parse from address: %s", c.Mailer.From))
smtpPassword, err := c.Mailer.PasswordConfig.Pass()
cmd.FailOnError(err, "Failed to load SMTP password")
mailClient := bmail.New(
c.Mailer.Server,
c.Mailer.Port,
c.Mailer.Username,
smtpPassword,
smtpRoots,
*fromAddress,
logger,
scope,
*reconnBase,
*reconnMax)
var nags durationSlice
for _, nagDuration := range c.Mailer.NagTimes {
dur, err := time.ParseDuration(nagDuration)
if err != nil {
logger.AuditErrf("Failed to parse nag duration string [%s]: %s", nagDuration, err)
return
}
// Add some padding to the nag times so we send _before_ the configured
// time rather than after. See https://github.com/letsencrypt/boulder/pull/1029
adjustedInterval := dur + c.Mailer.Frequency.Duration
nags = append(nags, adjustedInterval)
}
// Make sure durations are sorted in increasing order
sort.Sort(nags)
if c.Mailer.UpdateChunkSize > 65535 {
// MariaDB limits the number of placeholder parameters to max_uint16:
// https://github.com/MariaDB/server/blob/10.5/sql/sql_prepare.cc#L2629-L2635
cmd.Fail(fmt.Sprintf("UpdateChunkSize of %d is too big", c.Mailer.UpdateChunkSize))
}
m := mailer{
log: logger,
dbMap: dbMap,
rs: sac,
mailer: mailClient,
subjectTemplate: subjTmpl,
emailTemplate: tmpl,
nagTimes: nags,
certificatesPerTick: c.Mailer.CertLimit,
addressLimiter: &limiter{clk: cmd.Clock(), limit: c.Mailer.MailsPerAddressPerDay},
updateChunkSize: c.Mailer.UpdateChunkSize,
parallelSends: c.Mailer.ParallelSends,
clk: clk,
stats: initStats(scope),
}
// Prefill this labelled stat with the possible label values, so each value is
// set to 0 on startup, rather than being missing from stats collection until
// the first mail run.
for _, expiresIn := range nags {
m.stats.nagsAtCapacity.With(prometheus.Labels{"nag_group": expiresIn.String()}).Set(0)
}
ctx, cancel := context.WithCancel(context.Background())
go cmd.CatchSignals(cancel)
if *daemon {
t := time.NewTicker(c.Mailer.Frequency.Duration)
for {
select {
case <-t.C:
err = m.findExpiringCertificates(ctx)
if err != nil && !errors.Is(err, context.Canceled) {
cmd.FailOnError(err, "expiration-mailer has failed")
}
case <-ctx.Done():
return
}
}
} else {
err = m.findExpiringCertificates(ctx)
if err != nil && !errors.Is(err, context.Canceled) {
cmd.FailOnError(err, "expiration-mailer has failed")
}
}
}
func init() {
cmd.RegisterCommand("expiration-mailer", main, &cmd.ConfigValidator{Config: &Config{}})
}
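The UpdateChunkSize check in main exists because MariaDB caps prepared-statement placeholders at max_uint16 (65535), so timestamp updates must be issued in bounded batches. A generic sketch of the kind of chunking helper such an update loop relies on (the `chunk` function is illustrative, not Boulder's actual updateLastNagTimestamps):

```go
package main

import "fmt"

// chunk splits serials into batches of at most `size` elements, so each
// `UPDATE ... WHERE serial IN (?, ?, ...)` statement stays under MariaDB's
// 65535 placeholder limit.
func chunk(serials []string, size int) [][]string {
	var batches [][]string
	for len(serials) > size {
		batches = append(batches, serials[:size])
		serials = serials[size:]
	}
	if len(serials) > 0 {
		batches = append(batches, serials)
	}
	return batches
}

func main() {
	batches := chunk([]string{"a", "b", "c", "d", "e"}, 2)
	fmt.Println(len(batches)) // 3
	fmt.Println(batches)      // [[a b] [c d] [e]]
}
```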



@@ -1,71 +0,0 @@
package notmain
import (
"crypto/x509"
"crypto/x509/pkix"
"fmt"
"math/big"
"testing"
"time"
"github.com/letsencrypt/boulder/mocks"
"github.com/letsencrypt/boulder/test"
)
var (
email1 = "mailto:one@shared-example.com"
email2 = "mailto:two@shared-example.com"
)
func TestSendEarliestCertInfo(t *testing.T) {
expiresIn := 24 * time.Hour
ctx := setup(t, []time.Duration{expiresIn})
defer ctx.cleanUp()
rawCertA := newX509Cert("happy A",
ctx.fc.Now().AddDate(0, 0, 5),
[]string{"example-A.com", "SHARED-example.com"},
serial1,
)
rawCertB := newX509Cert("happy B",
ctx.fc.Now().AddDate(0, 0, 2),
[]string{"shared-example.com", "example-b.com"},
serial2,
)
conn, err := ctx.m.mailer.Connect()
test.AssertNotError(t, err, "connecting SMTP")
err = ctx.m.sendNags(conn, []string{email1, email2}, []*x509.Certificate{rawCertA, rawCertB})
if err != nil {
t.Fatal(err)
}
if len(ctx.mc.Messages) != 2 {
t.Errorf("num of messages, want %d, got %d", 2, len(ctx.mc.Messages))
}
if len(ctx.mc.Messages) == 0 {
t.Fatalf("no message sent")
}
domains := "example-a.com\nexample-b.com\nshared-example.com"
expected := mocks.MailerMessage{
Subject: "Testing: Let's Encrypt certificate expiration notice for domain \"example-a.com\" (and 2 more)",
Body: fmt.Sprintf(`hi, cert for DNS names %s is going to expire in 2 days (%s)`,
domains,
rawCertB.NotAfter.Format(time.DateOnly)),
}
expected.To = "one@shared-example.com"
test.AssertEquals(t, expected, ctx.mc.Messages[0])
expected.To = "two@shared-example.com"
test.AssertEquals(t, expected, ctx.mc.Messages[1])
}
func newX509Cert(commonName string, notAfter time.Time, dnsNames []string, serial *big.Int) *x509.Certificate {
return &x509.Certificate{
Subject: pkix.Name{
CommonName: commonName,
},
NotAfter: notAfter,
DNSNames: dnsNames,
SerialNumber: serial,
}
}


@@ -1,304 +0,0 @@
package notmain
import (
"bufio"
"context"
"encoding/json"
"errors"
"flag"
"fmt"
"os"
"strings"
"time"
"github.com/jmhodges/clock"
"github.com/letsencrypt/boulder/cmd"
"github.com/letsencrypt/boulder/db"
"github.com/letsencrypt/boulder/features"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/sa"
)
type idExporter struct {
log blog.Logger
dbMap *db.WrappedMap
clk clock.Clock
grace time.Duration
}
// resultEntry is a JSON marshalable exporter result entry.
type resultEntry struct {
// ID is exported to support marshaling to JSON.
ID int64 `json:"id"`
// Hostname is exported to support marshaling to JSON. Not all queries
// will fill this field, so its JSON field tag marks it as
// omittable.
Hostname string `json:"hostname,omitempty"`
}
// encodeIssuedName converts the entry's hostname (an FQDN or IP address)
// to its encoded format in the issuedNames table.
func (r *resultEntry) encodeIssuedName() {
r.Hostname = sa.EncodeIssuedName(r.Hostname)
}
// idExporterResults is passed as a selectable 'holder' for the results
// of id-exporter database queries
type idExporterResults []*resultEntry
// marshalToJSON returns JSON as bytes for all elements of the inner `id`
// slice.
func (i *idExporterResults) marshalToJSON() ([]byte, error) {
data, err := json.Marshal(i)
if err != nil {
return nil, err
}
data = append(data, '\n')
return data, nil
}
// writeToFile writes the contents of the inner `ids` slice, as JSON, to
// a file
func (i *idExporterResults) writeToFile(outfile string) error {
data, err := i.marshalToJSON()
if err != nil {
return err
}
return os.WriteFile(outfile, data, 0644)
}
// findIDs gathers all registration IDs with unexpired certificates.
func (c idExporter) findIDs(ctx context.Context) (idExporterResults, error) {
var holder idExporterResults
_, err := c.dbMap.Select(
ctx,
&holder,
`SELECT DISTINCT r.id
FROM registrations AS r
INNER JOIN certificates AS c on c.registrationID = r.id
WHERE c.expires >= :expireCutoff;`,
map[string]interface{}{
"expireCutoff": c.clk.Now().Add(-c.grace),
})
if err != nil {
c.log.AuditErrf("Error finding IDs: %s", err)
return nil, err
}
return holder, nil
}
// findIDsWithExampleHostnames gathers all registration IDs with
// unexpired certificates and a corresponding example hostname.
func (c idExporter) findIDsWithExampleHostnames(ctx context.Context) (idExporterResults, error) {
var holder idExporterResults
_, err := c.dbMap.Select(
ctx,
&holder,
`SELECT SQL_BIG_RESULT
cert.registrationID AS id,
name.reversedName AS hostname
FROM certificates AS cert
INNER JOIN issuedNames AS name ON name.serial = cert.serial
WHERE cert.expires >= :expireCutoff
GROUP BY cert.registrationID;`,
map[string]interface{}{
"expireCutoff": c.clk.Now().Add(-c.grace),
})
if err != nil {
c.log.AuditErrf("Error finding IDs and example hostnames: %s", err)
return nil, err
}
for _, result := range holder {
result.encodeIssuedName()
}
return holder, nil
}
// findIDsForHostnames gathers all registration IDs with unexpired
// certificates for each `hostnames` entry.
func (c idExporter) findIDsForHostnames(ctx context.Context, hostnames []string) (idExporterResults, error) {
var holder idExporterResults
for _, hostname := range hostnames {
// Pass the same list in each time; borp will happily append to the slice
// instead of overwriting it each time:
// https://github.com/letsencrypt/borp/blob/c87bd6443d59746a33aca77db34a60cfc344adb2/select.go#L349-L353
_, err := c.dbMap.Select(
ctx,
&holder,
`SELECT DISTINCT c.registrationID AS id
FROM certificates AS c
INNER JOIN issuedNames AS n ON c.serial = n.serial
WHERE c.expires >= :expireCutoff
AND n.reversedName = :reversedName;`,
map[string]interface{}{
"expireCutoff": c.clk.Now().Add(-c.grace),
"reversedName": sa.EncodeIssuedName(hostname),
},
)
if err != nil {
if db.IsNoRows(err) {
continue
}
return nil, err
}
}
return holder, nil
}
const usageIntro = `
Introduction:
The ID exporter exists to retrieve the IDs of all registered
users with currently unexpired certificates. This list of registration IDs can
then be given as input to the notification mailer to send bulk notifications.
The -grace parameter can be used to allow registrations with certificates that
have already expired to be included in the export. The argument is a Go duration
obeying the usual suffix rules (e.g. 24h).
Registration IDs are favoured over email addresses as the intermediate format in
order to ensure the most up-to-date contact information is used at the time of
notification. The notification mailer will resolve the ID to email(s) when the
mailing is underway, ensuring we use the correct address if a user has updated
their contact information between the time of export and the time of
notification.
By default, the ID exporter's output will be JSON of the form:
[
{ "id": 1 },
...
{ "id": n }
]
Operations that return a hostname will be JSON of the form:
[
{ "id": 1, "hostname": "example-1.com" },
...
{ "id": n, "hostname": "example-n.com" }
]
Examples:
Export all registration IDs with unexpired certificates to "regs.json":
id-exporter -config test/config/id-exporter.json -outfile regs.json
Export all registration IDs with certificates that are unexpired or expired
within the last two days to "regs.json":
id-exporter -config test/config/id-exporter.json -grace 48h -outfile
"regs.json"
Required arguments:
- config
- outfile`
// unmarshalHostnames unmarshals a hostnames file and ensures that the file
// contained at least one entry.
func unmarshalHostnames(filePath string) ([]string, error) {
file, err := os.Open(filePath)
if err != nil {
return nil, err
}
defer file.Close()
scanner := bufio.NewScanner(file)
scanner.Split(bufio.ScanLines)
var hostnames []string
for scanner.Scan() {
line := scanner.Text()
if strings.Contains(line, " ") {
return nil, fmt.Errorf(
"line: %q contains more than one entry, entries must be separated by newlines", line)
}
hostnames = append(hostnames, line)
}
if len(hostnames) == 0 {
return nil, errors.New("provided file contains 0 hostnames")
}
return hostnames, nil
}
type Config struct {
ContactExporter struct {
DB cmd.DBConfig
cmd.PasswordConfig
Features features.Config
}
}
func main() {
outFile := flag.String("outfile", "", "File to output results JSON to.")
grace := flag.Duration("grace", 2*24*time.Hour, "Include results with certificates that expired in < grace ago.")
hostnamesFile := flag.String(
"hostnames", "", "Only include results with unexpired certificates that contain hostnames\nlisted (newline separated) in this file.")
withExampleHostnames := flag.Bool(
"with-example-hostnames", false, "Include an example hostname for each registration ID with an unexpired certificate.")
configFile := flag.String("config", "", "File containing a JSON config.")
flag.Usage = func() {
fmt.Fprintf(os.Stderr, "%s\n\n", usageIntro)
fmt.Fprintf(os.Stderr, "Usage of %s:\n", os.Args[0])
flag.PrintDefaults()
}
// Parse flags and check required.
flag.Parse()
if *outFile == "" || *configFile == "" {
flag.Usage()
os.Exit(1)
}
log := cmd.NewLogger(cmd.SyslogConfig{StdoutLevel: 7})
log.Info(cmd.VersionString())
// Load configuration file.
configData, err := os.ReadFile(*configFile)
cmd.FailOnError(err, fmt.Sprintf("Reading %q", *configFile))
// Unmarshal JSON config file.
var cfg Config
err = json.Unmarshal(configData, &cfg)
cmd.FailOnError(err, "Unmarshaling config")
features.Set(cfg.ContactExporter.Features)
dbMap, err := sa.InitWrappedDb(cfg.ContactExporter.DB, nil, log)
cmd.FailOnError(err, "While initializing dbMap")
exporter := idExporter{
log: log,
dbMap: dbMap,
clk: cmd.Clock(),
grace: *grace,
}
var results idExporterResults
if *hostnamesFile != "" {
hostnames, err := unmarshalHostnames(*hostnamesFile)
cmd.FailOnError(err, "Problem unmarshalling hostnames")
results, err = exporter.findIDsForHostnames(context.TODO(), hostnames)
cmd.FailOnError(err, "Could not find IDs for hostnames")
} else if *withExampleHostnames {
results, err = exporter.findIDsWithExampleHostnames(context.TODO())
cmd.FailOnError(err, "Could not find IDs with hostnames")
} else {
results, err = exporter.findIDs(context.TODO())
cmd.FailOnError(err, "Could not find IDs")
}
err = results.writeToFile(*outFile)
cmd.FailOnError(err, fmt.Sprintf("Could not write result to outfile %q", *outFile))
}
func init() {
cmd.RegisterCommand("id-exporter", main, &cmd.ConfigValidator{Config: &Config{}})
}


@@ -1,461 +0,0 @@
package notmain
import (
"context"
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"crypto/x509"
"crypto/x509/pkix"
"fmt"
"math/big"
"os"
"testing"
"time"
"github.com/jmhodges/clock"
"github.com/letsencrypt/boulder/core"
corepb "github.com/letsencrypt/boulder/core/proto"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/metrics"
"github.com/letsencrypt/boulder/sa"
sapb "github.com/letsencrypt/boulder/sa/proto"
"github.com/letsencrypt/boulder/test"
isa "github.com/letsencrypt/boulder/test/inmem/sa"
"github.com/letsencrypt/boulder/test/vars"
)
var (
regA *corepb.Registration
regB *corepb.Registration
regC *corepb.Registration
regD *corepb.Registration
)
const (
emailARaw = "test@example.com"
emailBRaw = "example@example.com"
emailCRaw = "test-example@example.com"
telNum = "666-666-7777"
)
func TestFindIDs(t *testing.T) {
ctx := context.Background()
testCtx := setup(t)
defer testCtx.cleanUp()
// Add some test registrations
testCtx.addRegistrations(t)
// Run findIDs - since no certificates have been added corresponding to
// the above registrations, no IDs should be found.
results, err := testCtx.c.findIDs(ctx)
test.AssertNotError(t, err, "findIDs() produced error")
test.AssertEquals(t, len(results), 0)
// Now add some certificates
testCtx.addCertificates(t)
// Run findIDs - since there are three registrations with unexpired certs
// we should get exactly three IDs back: RegA, RegC and RegD. RegB should
// *not* be present since their certificate has already expired. Unlike
// previous versions of this test, RegD is no longer filtered out for having
// a `tel:` contact field; that is now the duty of the notify-mailer.
results, err = testCtx.c.findIDs(ctx)
test.AssertNotError(t, err, "findIDs() produced error")
test.AssertEquals(t, len(results), 3)
for _, entry := range results {
switch entry.ID {
case regA.Id:
case regC.Id:
case regD.Id:
default:
t.Errorf("ID: %d not expected", entry.ID)
}
}
// Allow a 1 year grace period
testCtx.c.grace = 360 * 24 * time.Hour
results, err = testCtx.c.findIDs(ctx)
test.AssertNotError(t, err, "findIDs() produced error")
// Now all four registration should be returned, including RegB since its
// certificate expired within the grace period
for _, entry := range results {
switch entry.ID {
case regA.Id:
case regB.Id:
case regC.Id:
case regD.Id:
default:
t.Errorf("ID: %d not expected", entry.ID)
}
}
}
func TestFindIDsWithExampleHostnames(t *testing.T) {
ctx := context.Background()
testCtx := setup(t)
defer testCtx.cleanUp()
// Add some test registrations
testCtx.addRegistrations(t)
// Run findIDsWithExampleHostnames - since no certificates have been
// added corresponding to the above registrations, no IDs should be
// found.
results, err := testCtx.c.findIDsWithExampleHostnames(ctx)
test.AssertNotError(t, err, "findIDs() produced error")
test.AssertEquals(t, len(results), 0)
// Now add some certificates
testCtx.addCertificates(t)
// Run findIDsWithExampleHostnames - since there are three
// registrations with unexpired certs we should get exactly three
// IDs back: RegA, RegC and RegD. RegB should *not* be present since
// their certificate has already expired.
results, err = testCtx.c.findIDsWithExampleHostnames(ctx)
test.AssertNotError(t, err, "findIDs() produced error")
test.AssertEquals(t, len(results), 3)
for _, entry := range results {
switch entry.ID {
case regA.Id:
test.AssertEquals(t, entry.Hostname, "example-a.com")
case regC.Id:
test.AssertEquals(t, entry.Hostname, "example-c.com")
case regD.Id:
test.AssertEquals(t, entry.Hostname, "example-d.com")
default:
t.Errorf("ID: %d not expected", entry.ID)
}
}
// Allow a 1 year grace period
testCtx.c.grace = 360 * 24 * time.Hour
results, err = testCtx.c.findIDsWithExampleHostnames(ctx)
test.AssertNotError(t, err, "findIDs() produced error")
// Now all four registrations should be returned, including RegB
// since it expired within the grace period
test.AssertEquals(t, len(results), 4)
for _, entry := range results {
switch entry.ID {
case regA.Id:
test.AssertEquals(t, entry.Hostname, "example-a.com")
case regB.Id:
test.AssertEquals(t, entry.Hostname, "example-b.com")
case regC.Id:
test.AssertEquals(t, entry.Hostname, "example-c.com")
case regD.Id:
test.AssertEquals(t, entry.Hostname, "example-d.com")
default:
t.Errorf("ID: %d not expected", entry.ID)
}
}
}
func TestFindIDsForHostnames(t *testing.T) {
ctx := context.Background()
testCtx := setup(t)
defer testCtx.cleanUp()
// Add some test registrations
testCtx.addRegistrations(t)
// Run findIDsForHostnames - since no certificates have been added corresponding to
// the above registrations, no IDs should be found.
results, err := testCtx.c.findIDsForHostnames(ctx, []string{"example-a.com", "example-b.com", "example-c.com", "example-d.com"})
test.AssertNotError(t, err, "findIDs() produced error")
test.AssertEquals(t, len(results), 0)
// Now add some certificates
testCtx.addCertificates(t)
results, err = testCtx.c.findIDsForHostnames(ctx, []string{"example-a.com", "example-b.com", "example-c.com", "example-d.com"})
test.AssertNotError(t, err, "findIDsForHostnames() failed")
test.AssertEquals(t, len(results), 3)
for _, entry := range results {
switch entry.ID {
case regA.Id:
case regC.Id:
case regD.Id:
default:
t.Errorf("ID: %d not expected", entry.ID)
}
}
}
func TestWriteToFile(t *testing.T) {
expected := `[{"id":1},{"id":2},{"id":3}]`
mockResults := idExporterResults{{ID: 1}, {ID: 2}, {ID: 3}}
dir := os.TempDir()
f, err := os.CreateTemp(dir, "ids_test")
test.AssertNotError(t, err, "os.CreateTemp produced an error")
// Writing the result to an outFile should produce the correct results
err = mockResults.writeToFile(f.Name())
test.AssertNotError(t, err, fmt.Sprintf("writeIDs produced an error writing to %s", f.Name()))
contents, err := os.ReadFile(f.Name())
test.AssertNotError(t, err, fmt.Sprintf("os.ReadFile produced an error reading from %s", f.Name()))
test.AssertEquals(t, string(contents), expected+"\n")
}
func Test_unmarshalHostnames(t *testing.T) {
testDir := os.TempDir()
testFile, err := os.CreateTemp(testDir, "ids_test")
test.AssertNotError(t, err, "os.CreateTemp produced an error")
// Non-existent hostnamesFile
_, err = unmarshalHostnames("file_does_not_exist")
test.AssertError(t, err, "expected error for non-existent file")
// Empty hostnamesFile
err = os.WriteFile(testFile.Name(), []byte(""), 0644)
test.AssertNotError(t, err, "os.WriteFile produced an error")
_, err = unmarshalHostnames(testFile.Name())
test.AssertError(t, err, "expected error for file containing 0 entries")
// One hostname present in the hostnamesFile
err = os.WriteFile(testFile.Name(), []byte("example-a.com"), 0644)
test.AssertNotError(t, err, "os.WriteFile produced an error")
results, err := unmarshalHostnames(testFile.Name())
test.AssertNotError(t, err, "error when unmarshalling hostnamesFile with a single hostname")
test.AssertEquals(t, len(results), 1)
// Two hostnames present in the hostnamesFile
err = os.WriteFile(testFile.Name(), []byte("example-a.com\nexample-b.com"), 0644)
test.AssertNotError(t, err, "os.WriteFile produced an error")
results, err = unmarshalHostnames(testFile.Name())
test.AssertNotError(t, err, "error when unmarshalling hostnamesFile with two hostnames")
test.AssertEquals(t, len(results), 2)
// Three hostnames present in the hostnamesFile but two are separated only by a space
err = os.WriteFile(testFile.Name(), []byte("example-a.com\nexample-b.com example-c.com"), 0644)
test.AssertNotError(t, err, "os.WriteFile produced an error")
_, err = unmarshalHostnames(testFile.Name())
test.AssertError(t, err, "expected error for hostnamesFile with three space-separated domains")
}
type testCtx struct {
c idExporter
ssa sapb.StorageAuthorityClient
cleanUp func()
}
func (tc testCtx) addRegistrations(t *testing.T) {
emailA := "mailto:" + emailARaw
emailB := "mailto:" + emailBRaw
emailC := "mailto:" + emailCRaw
tel := "tel:" + telNum
// Every registration needs a unique JOSE key
jsonKeyA := []byte(`{
"kty":"RSA",
"n":"0vx7agoebGcQSuuPiLJXZptN9nndrQmbXEps2aiAFbWhM78LhWx4cbbfAAtVT86zwu1RK7aPFFxuhDR1L6tSoc_BJECPebWKRXjBZCiFV4n3oknjhMstn64tZ_2W-5JsGY4Hc5n9yBXArwl93lqt7_RN5w6Cf0h4QyQ5v-65YGjQR0_FDW2QvzqY368QQMicAtaSqzs8KJZgnYb9c7d0zgdAZHzu6qMQvRL5hajrn1n91CbOpbISD08qNLyrdkt-bFTWhAI4vMQFh6WeZu0fM4lFd2NcRwr3XPksINHaQ-G_xBniIqbw0Ls1jF44-csFCur-kEgU8awapJzKnqDKgw",
"e":"AQAB"
}`)
jsonKeyB := []byte(`{
"kty":"RSA",
"n":"z8bp-jPtHt4lKBqepeKF28g_QAEOuEsCIou6sZ9ndsQsEjxEOQxQ0xNOQezsKa63eogw8YS3vzjUcPP5BJuVzfPfGd5NVUdT-vSSwxk3wvk_jtNqhrpcoG0elRPQfMVsQWmxCAXCVRz3xbcFI8GTe-syynG3l-g1IzYIIZVNI6jdljCZML1HOMTTW4f7uJJ8mM-08oQCeHbr5ejK7O2yMSSYxW03zY-Tj1iVEebROeMv6IEEJNFSS4yM-hLpNAqVuQxFGetwtwjDMC1Drs1dTWrPuUAAjKGrP151z1_dE74M5evpAhZUmpKv1hY-x85DC6N0hFPgowsanmTNNiV75w",
"e":"AAEAAQ"
}`)
jsonKeyC := []byte(`{
"kty":"RSA",
"n":"rFH5kUBZrlPj73epjJjyCxzVzZuV--JjKgapoqm9pOuOt20BUTdHqVfC2oDclqM7HFhkkX9OSJMTHgZ7WaVqZv9u1X2yjdx9oVmMLuspX7EytW_ZKDZSzL-sCOFCuQAuYKkLbsdcA3eHBK_lwc4zwdeHFMKIulNvLqckkqYB9s8GpgNXBDIQ8GjR5HuJke_WUNjYHSd8jY1LU9swKWsLQe2YoQUz_ekQvBvBCoaFEtrtRaSJKNLIVDObXFr2TLIiFiM0Em90kK01-eQ7ZiruZTKomll64bRFPoNo4_uwubddg3xTqur2vdF3NyhTrYdvAgTem4uC0PFjEQ1bK_djBQ",
"e":"AQAB"
}`)
jsonKeyD := []byte(`{
"kty":"RSA",
"n":"rFH5kUBZrlPj73epjJjyCxzVzZuV--JjKgapoqm9pOuOt20BUTdHqVfC2oDclqM7HFhkkX9OSJMTHgZ7WaVqZv9u1X2yjdx9oVmMLuspX7EytW_ZKDZSzL-FCOFCuQAuYKkLbsdcA3eHBK_lwc4zwdeHFMKIulNvLqckkqYB9s8GpgNXBDIQ8GjR5HuJke_WUNjYHSd8jY1LU9swKWsLQe2YoQUz_ekQvBvBCoaFEtrtRaSJKNLIVDObXFr2TLIiFiM0Em90kK01-eQ7ZiruZTKomll64bRFPoNo4_uwubddg3xTqur2vdF3NyhTrYdvAgTem4uC0PFjEQ1bK_djBQ",
"e":"AQAB"
}`)
// Regs A through C have `mailto:` contact ACME URLs
regA = &corepb.Registration{
Id: 1,
Contact: []string{emailA},
Key: jsonKeyA,
}
regB = &corepb.Registration{
Id: 2,
Contact: []string{emailB},
Key: jsonKeyB,
}
regC = &corepb.Registration{
Id: 3,
Contact: []string{emailC},
Key: jsonKeyC,
}
// Reg D has a `tel:` contact ACME URL
regD = &corepb.Registration{
Id: 4,
Contact: []string{tel},
Key: jsonKeyD,
}
// Add the four test registrations
ctx := context.Background()
var err error
regA, err = tc.ssa.NewRegistration(ctx, regA)
test.AssertNotError(t, err, "Couldn't store regA")
regB, err = tc.ssa.NewRegistration(ctx, regB)
test.AssertNotError(t, err, "Couldn't store regB")
regC, err = tc.ssa.NewRegistration(ctx, regC)
test.AssertNotError(t, err, "Couldn't store regC")
regD, err = tc.ssa.NewRegistration(ctx, regD)
test.AssertNotError(t, err, "Couldn't store regD")
}
func (tc testCtx) addCertificates(t *testing.T) {
ctx := context.Background()
serial1 := big.NewInt(1336)
serial1String := core.SerialToString(serial1)
serial2 := big.NewInt(1337)
serial2String := core.SerialToString(serial2)
serial3 := big.NewInt(1338)
serial3String := core.SerialToString(serial3)
serial4 := big.NewInt(1339)
serial4String := core.SerialToString(serial4)
key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
test.AssertNotError(t, err, "creating test key")
fc := clock.NewFake()
// Add one cert for RegA that expires in 30 days
rawCertA := x509.Certificate{
Subject: pkix.Name{
CommonName: "happy A",
},
NotAfter: fc.Now().Add(30 * 24 * time.Hour),
DNSNames: []string{"example-a.com"},
SerialNumber: serial1,
}
certDerA, _ := x509.CreateCertificate(rand.Reader, &rawCertA, &rawCertA, key.Public(), key)
certA := &core.Certificate{
RegistrationID: regA.Id,
Serial: serial1String,
Expires: rawCertA.NotAfter,
DER: certDerA,
}
err = tc.c.dbMap.Insert(ctx, certA)
test.AssertNotError(t, err, "Couldn't add certA")
_, err = tc.c.dbMap.ExecContext(
ctx,
"INSERT INTO issuedNames (reversedName, serial, notBefore) VALUES (?,?,0)",
"com.example-a",
serial1String,
)
test.AssertNotError(t, err, "Couldn't add issued name for certA")
// Add one cert for RegB that already expired 30 days ago
rawCertB := x509.Certificate{
Subject: pkix.Name{
CommonName: "happy B",
},
NotAfter: fc.Now().Add(-30 * 24 * time.Hour),
DNSNames: []string{"example-b.com"},
SerialNumber: serial2,
}
certDerB, _ := x509.CreateCertificate(rand.Reader, &rawCertB, &rawCertB, key.Public(), key)
certB := &core.Certificate{
RegistrationID: regB.Id,
Serial: serial2String,
Expires: rawCertB.NotAfter,
DER: certDerB,
}
err = tc.c.dbMap.Insert(ctx, certB)
test.AssertNotError(t, err, "Couldn't add certB")
_, err = tc.c.dbMap.ExecContext(
ctx,
"INSERT INTO issuedNames (reversedName, serial, notBefore) VALUES (?,?,0)",
"com.example-b",
serial2String,
)
test.AssertNotError(t, err, "Couldn't add issued name for certB")
// Add one cert for RegC that expires in 30 days
rawCertC := x509.Certificate{
Subject: pkix.Name{
CommonName: "happy C",
},
NotAfter: fc.Now().Add(30 * 24 * time.Hour),
DNSNames: []string{"example-c.com"},
SerialNumber: serial3,
}
certDerC, _ := x509.CreateCertificate(rand.Reader, &rawCertC, &rawCertC, key.Public(), key)
certC := &core.Certificate{
RegistrationID: regC.Id,
Serial: serial3String,
Expires: rawCertC.NotAfter,
DER: certDerC,
}
err = tc.c.dbMap.Insert(ctx, certC)
test.AssertNotError(t, err, "Couldn't add certC")
_, err = tc.c.dbMap.ExecContext(
ctx,
"INSERT INTO issuedNames (reversedName, serial, notBefore) VALUES (?,?,0)",
"com.example-c",
serial3String,
)
test.AssertNotError(t, err, "Couldn't add issued name for certC")
// Add one cert for RegD that expires in 30 days
rawCertD := x509.Certificate{
Subject: pkix.Name{
CommonName: "happy D",
},
NotAfter: fc.Now().Add(30 * 24 * time.Hour),
DNSNames: []string{"example-d.com"},
SerialNumber: serial4,
}
certDerD, _ := x509.CreateCertificate(rand.Reader, &rawCertD, &rawCertD, key.Public(), key)
certD := &core.Certificate{
RegistrationID: regD.Id,
Serial: serial4String,
Expires: rawCertD.NotAfter,
DER: certDerD,
}
err = tc.c.dbMap.Insert(ctx, certD)
test.AssertNotError(t, err, "Couldn't add certD")
_, err = tc.c.dbMap.ExecContext(
ctx,
"INSERT INTO issuedNames (reversedName, serial, notBefore) VALUES (?,?,0)",
"com.example-d",
serial4String,
)
test.AssertNotError(t, err, "Couldn't add issued name for certD")
}
func setup(t *testing.T) testCtx {
log := blog.UseMock()
fc := clock.NewFake()
// Using DBConnSAFullPerms to be able to insert registrations and certificates
dbMap, err := sa.DBMapForTest(vars.DBConnSAFullPerms)
if err != nil {
t.Fatalf("Couldn't connect the database: %s", err)
}
cleanUp := test.ResetBoulderTestDatabase(t)
ssa, err := sa.NewSQLStorageAuthority(dbMap, dbMap, nil, 1, 0, fc, log, metrics.NoopRegisterer)
if err != nil {
t.Fatalf("unable to create SQLStorageAuthority: %s", err)
}
return testCtx{
c: idExporter{
dbMap: dbMap,
log: log,
clk: fc,
},
ssa: isa.SA{Impl: ssa},
cleanUp: cleanUp,
}
}

@@ -1,619 +0,0 @@
package notmain
import (
"context"
"encoding/csv"
"encoding/json"
"errors"
"flag"
"fmt"
"io"
"net/mail"
"os"
"sort"
"strconv"
"strings"
"sync"
"text/template"
"time"
"github.com/jmhodges/clock"
"github.com/letsencrypt/boulder/cmd"
"github.com/letsencrypt/boulder/db"
blog "github.com/letsencrypt/boulder/log"
bmail "github.com/letsencrypt/boulder/mail"
"github.com/letsencrypt/boulder/metrics"
"github.com/letsencrypt/boulder/policy"
"github.com/letsencrypt/boulder/sa"
)
type mailer struct {
clk clock.Clock
log blog.Logger
dbMap dbSelector
mailer bmail.Mailer
subject string
emailTemplate *template.Template
recipients []recipient
targetRange interval
sleepInterval time.Duration
parallelSends uint
}
// interval defines a range of email addresses to send to in alphabetical order.
// The `start` field is inclusive and the `end` field is exclusive. To include
// everything, set `end` to \xFF.
type interval struct {
start string
end string
}
// contactQueryResult is a receiver for queries to the `registrations` table.
type contactQueryResult struct {
// ID is exported to receive the value of `id`.
ID int64
// Contact is exported to receive the value of `contact`.
Contact []byte
}
func (i *interval) ok() error {
if i.start > i.end {
return fmt.Errorf("interval start value (%s) is greater than end value (%s)",
i.start, i.end)
}
return nil
}
func (i *interval) includes(s string) bool {
return s >= i.start && s < i.end
}
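The interval's half-open, lexicographic semantics ([start, end), compared as strings) can be shown with a minimal standalone sketch; the type here mirrors the `interval` above but is a self-contained illustration, not part of the diff:

```go
package main

import "fmt"

// interval mirrors the half-open [start, end) address range used by the
// mailer: start is inclusive, end is exclusive, comparison is lexicographic.
type interval struct {
	start string
	end   string
}

func (i interval) includes(s string) bool {
	return s >= i.start && s < i.end
}

func main() {
	r := interval{start: "b@example.com", end: "m@example.com"}
	fmt.Println(r.includes("b@example.com")) // true: start is inclusive
	fmt.Println(r.includes("m@example.com")) // false: end is exclusive
	fmt.Println(r.includes("z@example.com")) // false: past the range
}
```

This is why the flag default for `-end` is `"\xFF"`: it sorts above any printable address, so the default range includes everything.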
// ok ensures that both the `targetRange` and `sleepInterval` are valid.
func (m *mailer) ok() error {
err := m.targetRange.ok()
if err != nil {
return err
}
if m.sleepInterval < 0 {
return fmt.Errorf(
"sleep interval (%d) is < 0", m.sleepInterval)
}
return nil
}
func (m *mailer) logStatus(to string, current, total int, start time.Time) {
// Should never happen.
if total <= 0 || current < 1 || current > total {
m.log.AuditErrf("Invalid current (%d) or total (%d)", current, total)
}
completion := (float32(current) / float32(total)) * 100
now := m.clk.Now()
elapsed := now.Sub(start)
m.log.Infof("Sending message (%d) of (%d) to address (%s) [%.2f%%] time elapsed (%s)",
current, total, to, completion, elapsed)
}
func sortAddresses(input addressToRecipientMap) []string {
var addresses []string
for address := range input {
addresses = append(addresses, address)
}
sort.Strings(addresses)
return addresses
}
// makeMessageBody is a helper for mailer.run() that's split out for the
// purposes of testing.
func (m *mailer) makeMessageBody(recipients []recipient) (string, error) {
var messageBody strings.Builder
err := m.emailTemplate.Execute(&messageBody, recipients)
if err != nil {
return "", err
}
if messageBody.Len() == 0 {
return "", errors.New("templating resulted in an empty message body")
}
return messageBody.String(), nil
}
func (m *mailer) run(ctx context.Context) error {
err := m.ok()
if err != nil {
return err
}
totalRecipients := len(m.recipients)
m.log.Infof("Resolving addresses for (%d) recipients", totalRecipients)
addressToRecipient, err := m.resolveAddresses(ctx)
if err != nil {
return err
}
totalAddresses := len(addressToRecipient)
if totalAddresses == 0 {
return errors.New("0 recipients remained after resolving addresses")
}
m.log.Infof("%d recipients were resolved to %d addresses", totalRecipients, totalAddresses)
var mostRecipients string
var mostRecipientsLen int
for k, v := range addressToRecipient {
if len(v) > mostRecipientsLen {
mostRecipientsLen = len(v)
mostRecipients = k
}
}
m.log.Infof("Address %q was associated with the most recipients (%d)",
mostRecipients, mostRecipientsLen)
type work struct {
index int
address string
}
var wg sync.WaitGroup
workChan := make(chan work, totalAddresses)
startTime := m.clk.Now()
sortedAddresses := sortAddresses(addressToRecipient)
if (m.targetRange.start != "" && m.targetRange.start > sortedAddresses[totalAddresses-1]) ||
(m.targetRange.end != "" && m.targetRange.end < sortedAddresses[0]) {
return errors.New("no addresses fall inside the target range")
}
go func(ch chan<- work) {
for i, address := range sortedAddresses {
ch <- work{i, address}
}
close(workChan)
}(workChan)
if m.parallelSends < 1 {
m.parallelSends = 1
}
for senderNum := uint(0); senderNum < m.parallelSends; senderNum++ {
// For politeness' sake, don't open more than 1 new connection per
// second.
if senderNum > 0 {
m.clk.Sleep(time.Second)
}
conn, err := m.mailer.Connect()
if err != nil {
return fmt.Errorf("connecting parallel sender %d: %w", senderNum, err)
}
wg.Add(1)
go func(conn bmail.Conn, ch <-chan work) {
defer wg.Done()
for w := range ch {
if !m.targetRange.includes(w.address) {
m.log.Debugf("Address %q is outside of target range, skipping", w.address)
continue
}
err := policy.ValidEmail(w.address)
if err != nil {
m.log.Infof("Skipping %q due to policy violation: %s", w.address, err)
continue
}
recipients := addressToRecipient[w.address]
m.logStatus(w.address, w.index+1, totalAddresses, startTime)
messageBody, err := m.makeMessageBody(recipients)
if err != nil {
m.log.Errf("Skipping %q due to templating error: %s", w.address, err)
continue
}
err = conn.SendMail([]string{w.address}, m.subject, messageBody)
if err != nil {
var badAddrErr bmail.BadAddressSMTPError
if errors.As(err, &badAddrErr) {
m.log.Errf("address %q was rejected by server: %s", w.address, err)
continue
}
m.log.AuditErrf("while sending mail (%d) of (%d) to address %q: %s",
w.index, len(sortedAddresses), w.address, err)
}
m.clk.Sleep(m.sleepInterval)
}
conn.Close()
}(conn, workChan)
}
wg.Wait()
return nil
}
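The fan-out shape of run() above (one producer goroutine filling a batch-sized buffered channel, N sender goroutines draining it, joined by a WaitGroup) can be reduced to a minimal sketch; the `fanOut` helper and its string results are illustrative, not part of the diff:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// fanOut mirrors run()'s concurrency shape: the producer fills a channel
// buffered to the whole batch (so it never blocks before close), several
// sender goroutines drain it, and a WaitGroup joins them.
func fanOut(addresses []string, senders int) []string {
	workChan := make(chan string, len(addresses))
	go func() {
		for _, addr := range addresses {
			workChan <- addr
		}
		close(workChan)
	}()

	results := make(chan string, len(addresses))
	var wg sync.WaitGroup
	for i := 0; i < senders; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for addr := range workChan {
				results <- "sent to " + addr
			}
		}()
	}
	wg.Wait()
	close(results)

	var out []string
	for r := range results {
		out = append(out, r)
	}
	sort.Strings(out) // completion order across senders is nondeterministic
	return out
}

func main() {
	fmt.Println(fanOut([]string{"a@example.com", "b@example.com"}, 2))
}
```

The real code adds per-sender connection setup, a one-second stagger between connection opens, and per-message policy checks inside the worker loop, but the channel-and-WaitGroup skeleton is the same.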
// resolveAddresses creates a mapping of email addresses to (a list of)
// `recipient`s that resolve to that email address.
func (m *mailer) resolveAddresses(ctx context.Context) (addressToRecipientMap, error) {
result := make(addressToRecipientMap, len(m.recipients))
for _, recipient := range m.recipients {
addresses, err := getAddressForID(ctx, recipient.id, m.dbMap)
if err != nil {
return nil, err
}
for _, address := range addresses {
parsed, err := mail.ParseAddress(address)
if err != nil {
m.log.Errf("Unparsable address %q, skipping ID (%d)", address, recipient.id)
continue
}
result[parsed.Address] = append(result[parsed.Address], recipient)
}
}
return result, nil
}
// dbSelector abstracts over a subset of methods from `borp.DbMap` objects to
// facilitate mocking in unit tests.
type dbSelector interface {
SelectOne(ctx context.Context, holder interface{}, query string, args ...interface{}) error
}
// getAddressForID queries the database for the email address associated with
// the provided registration ID.
func getAddressForID(ctx context.Context, id int64, dbMap dbSelector) ([]string, error) {
var result contactQueryResult
err := dbMap.SelectOne(ctx, &result,
`SELECT id,
contact
FROM registrations
WHERE contact NOT IN ('[]', 'null')
AND id = :id;`,
map[string]interface{}{"id": id})
if err != nil {
if db.IsNoRows(err) {
return []string{}, nil
}
return nil, err
}
var contacts []string
err = json.Unmarshal(result.Contact, &contacts)
if err != nil {
return nil, err
}
var addresses []string
for _, contact := range contacts {
if strings.HasPrefix(contact, "mailto:") {
addresses = append(addresses, strings.TrimPrefix(contact, "mailto:"))
}
}
return addresses, nil
}
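The contact-column handling in getAddressForID (a JSON array of ACME contact URLs, of which only `mailto:` entries yield deliverable addresses) can be exercised in isolation; the `mailtoAddresses` helper name is illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// mailtoAddresses keeps only mailto: contact URLs, stripped of the scheme,
// mirroring the filtering loop in getAddressForID.
func mailtoAddresses(contacts []string) []string {
	var addresses []string
	for _, c := range contacts {
		if strings.HasPrefix(c, "mailto:") {
			addresses = append(addresses, strings.TrimPrefix(c, "mailto:"))
		}
	}
	return addresses
}

func main() {
	// A registrations.contact column stores a JSON array of ACME contact URLs.
	raw := []byte(`["mailto:admin@example.com","tel:+12025551234"]`)

	var contacts []string
	if err := json.Unmarshal(raw, &contacts); err != nil {
		panic(err)
	}
	fmt.Println(mailtoAddresses(contacts)) // [admin@example.com]
}
```

A `tel:` contact (like regD's in the tests above) therefore resolves to zero addresses, which is why such registrations are silently skipped by the mailer.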
// recipient represents a single record from the recipient list file. The 'id'
// column is parsed into the 'id' field; all additional data is parsed into a
// mapping of column name to value in the 'Data' field. Please inform SRE if you
// make any changes to the exported fields of this struct. These fields are
// referenced in operationally critical e-mail templates used to notify
// subscribers during incident response.
type recipient struct {
// id is the subscriber's ID.
id int64
// Data is a mapping of column name to value parsed from a single record in
// the provided recipient list file. It's exported so the contents can be
// accessed by the template package. Please inform SRE if you make any
// changes to this field.
Data map[string]string
}
// addressToRecipientMap maps email addresses to a list of `recipient`s that
// resolve to that email address.
type addressToRecipientMap map[string][]recipient
// readRecipientsList parses the contents of a recipient list file into a list
// of `recipient` objects.
func readRecipientsList(filename string, delimiter rune) ([]recipient, string, error) {
f, err := os.Open(filename)
if err != nil {
return nil, "", err
}
reader := csv.NewReader(f)
reader.Comma = delimiter
// Parse header.
record, err := reader.Read()
if err != nil {
return nil, "", fmt.Errorf("failed to parse header: %w", err)
}
if record[0] != "id" {
return nil, "", errors.New("header must begin with \"id\"")
}
// Collect the names of each header column after `id`.
var dataColumns []string
for _, v := range record[1:] {
dataColumns = append(dataColumns, strings.TrimSpace(v))
if len(v) == 0 {
return nil, "", errors.New("header contains an empty column")
}
}
var recordsWithEmptyColumns []int64
var recordsWithDuplicateIDs []int64
var probsBuff strings.Builder
stringProbs := func() string {
if len(recordsWithEmptyColumns) != 0 {
fmt.Fprintf(&probsBuff, "ID(s) %v contained empty columns and ",
recordsWithEmptyColumns)
}
if len(recordsWithDuplicateIDs) != 0 {
fmt.Fprintf(&probsBuff, "ID(s) %v were skipped as duplicates",
recordsWithDuplicateIDs)
}
if probsBuff.Len() == 0 {
return ""
}
return strings.TrimSuffix(probsBuff.String(), " and ")
}
// Parse records.
recipientIDs := make(map[int64]bool)
var recipients []recipient
for {
record, err := reader.Read()
if errors.Is(err, io.EOF) {
// Finished parsing the file.
if len(recipients) == 0 {
return nil, stringProbs(), errors.New("no records after header")
}
return recipients, stringProbs(), nil
} else if err != nil {
return nil, "", err
}
// Ensure the first column of each record can be parsed as a valid
// registration ID.
recordID := record[0]
id, err := strconv.ParseInt(recordID, 10, 64)
if err != nil {
return nil, "", fmt.Errorf(
"%q couldn't be parsed as a registration ID due to: %s", recordID, err)
}
// Skip records that have the same ID as those read previously.
if recipientIDs[id] {
recordsWithDuplicateIDs = append(recordsWithDuplicateIDs, id)
continue
}
recipientIDs[id] = true
// Collect the columns of data after `id` into a map.
var emptyColumn bool
data := make(map[string]string)
for i, v := range record[1:] {
if len(v) == 0 {
emptyColumn = true
}
data[dataColumns[i]] = v
}
// Only used for logging.
if emptyColumn {
recordsWithEmptyColumns = append(recordsWithEmptyColumns, id)
}
recipients = append(recipients, recipient{id, data})
}
}
const usageIntro = `
Introduction:
The notification mailer exists to send a message to the contact associated
with a list of registration IDs. The attributes of the message (from address,
subject, and message content) are provided by the command line arguments. The
message content is provided as a path to a template file via the -body argument.
Provide a list of recipient user ids in a CSV file passed with the -recipientList
flag. The CSV file must have "id" as the first column and may have additional
fields to be interpolated into the email template:
id, lastIssuance
1234, "from example.com 2018-12-01"
5678, "from example.net 2018-12-13"
The additional fields will be interpolated with Golang templating, e.g.:
Your last issuance on each account was:
{{ range . }} {{ .Data.lastIssuance }}
{{ end }}
To help the operator gain confidence in the mailing run before committing fully,
three safety features are supported: dry runs, intervals, and a sleep between emails.
The -dryRun=true flag will use a mock mailer that prints message content to
stdout instead of performing an SMTP transaction with a real mailserver. This
can be used when the initial parameters are being tweaked to ensure no real
emails are sent. Using -dryRun=false will send real email.
Intervals are supported via the -start and -end arguments. Only email addresses that
are alphabetically between the -start and -end strings will be sent. This can be used
to break up sending into batches, or more likely to resume sending if a batch is killed,
without resending messages that have already been sent. The -start flag is inclusive and
the -end flag is exclusive.
Notify-mailer de-duplicates email addresses and groups together the resulting recipient
structs, so a person who has multiple accounts using the same address will only receive
one email.
During mailing the -sleep argument is used to space out individual messages.
This can be used to ensure that the mailing happens at a steady pace with ample
opportunity for the operator to terminate early in the event of error. The
-sleep flag honours durations with a unit suffix (e.g. 1m for 1 minute, 10s for
10 seconds, etc). Using -sleep=0 will disable the sleep and send at full speed.
Examples:
Send an email with subject "Hello!" from the email "hello@goodbye.com" with
the contents read from "test_msg_body.txt" to every email associated with the
registration IDs listed in "test_reg_recipients.json", sleeping 10 seconds
between each message:
notify-mailer -config test/config/notify-mailer.json -body
cmd/notify-mailer/testdata/test_msg_body.txt -from hello@goodbye.com
-recipientList cmd/notify-mailer/testdata/test_msg_recipients.csv -subject "Hello!"
-sleep 10s -dryRun=false
Do the same, but only to example@example.com:
notify-mailer -config test/config/notify-mailer.json
-body cmd/notify-mailer/testdata/test_msg_body.txt -from hello@goodbye.com
-recipientList cmd/notify-mailer/testdata/test_msg_recipients.csv -subject "Hello!"
-start example@example.com -end example@example.comX
Send the message starting with example@example.com and emailing every address that's
alphabetically higher:
notify-mailer -config test/config/notify-mailer.json
-body cmd/notify-mailer/testdata/test_msg_body.txt -from hello@goodbye.com
-recipientList cmd/notify-mailer/testdata/test_msg_recipients.csv -subject "Hello!"
-start example@example.com
Required arguments:
- body
- config
- from
- subject
- recipientList`
type Config struct {
NotifyMailer struct {
DB cmd.DBConfig
cmd.SMTPConfig
}
Syslog cmd.SyslogConfig
}
func main() {
from := flag.String("from", "", "From header for emails. Must be a bare email address.")
subject := flag.String("subject", "", "Subject of emails")
recipientListFile := flag.String("recipientList", "", "File containing a CSV list of registration IDs and extra info.")
parseAsTSV := flag.Bool("tsv", false, "Parse the recipient list file as a TSV.")
bodyFile := flag.String("body", "", "File containing the email body in Golang template format.")
dryRun := flag.Bool("dryRun", true, "Whether to do a dry run.")
sleep := flag.Duration("sleep", 500*time.Millisecond, "How long to sleep between emails.")
parallelSends := flag.Uint("parallelSends", 1, "How many parallel goroutines should process emails")
start := flag.String("start", "", "Alphabetically lowest email address to include.")
end := flag.String("end", "\xFF", "Alphabetically highest email address (exclusive).")
reconnBase := flag.Duration("reconnectBase", 1*time.Second, "Base sleep duration between reconnect attempts")
reconnMax := flag.Duration("reconnectMax", 5*60*time.Second, "Max sleep duration between reconnect attempts after exponential backoff")
configFile := flag.String("config", "", "File containing a JSON config.")
flag.Usage = func() {
fmt.Fprintf(os.Stderr, "%s\n\n", usageIntro)
fmt.Fprintf(os.Stderr, "Usage of %s:\n", os.Args[0])
flag.PrintDefaults()
}
// Validate required args.
flag.Parse()
if *from == "" || *subject == "" || *bodyFile == "" || *configFile == "" || *recipientListFile == "" {
flag.Usage()
os.Exit(1)
}
configData, err := os.ReadFile(*configFile)
cmd.FailOnError(err, "Couldn't load JSON config file")
// Parse JSON config.
var cfg Config
err = json.Unmarshal(configData, &cfg)
cmd.FailOnError(err, "Couldn't unmarshal JSON config file")
log := cmd.NewLogger(cfg.Syslog)
log.Info(cmd.VersionString())
dbMap, err := sa.InitWrappedDb(cfg.NotifyMailer.DB, nil, log)
cmd.FailOnError(err, "While initializing dbMap")
// Load and parse message body.
template, err := template.ParseFiles(*bodyFile)
cmd.FailOnError(err, "Couldn't parse message template")
// Ensure that in the event of a missing key, an informative error is
// returned.
template.Option("missingkey=error")
address, err := mail.ParseAddress(*from)
cmd.FailOnError(err, fmt.Sprintf("Couldn't parse %q to address", *from))
recipientListDelimiter := ','
if *parseAsTSV {
recipientListDelimiter = '\t'
}
recipients, probs, err := readRecipientsList(*recipientListFile, recipientListDelimiter)
cmd.FailOnError(err, "Couldn't populate recipients")
if probs != "" {
log.Infof("While reading the recipient list file %s", probs)
}
var mailClient bmail.Mailer
if *dryRun {
log.Infof("Starting %s in dry-run mode", cmd.VersionString())
mailClient = bmail.NewDryRun(*address, log)
} else {
log.Infof("Starting %s", cmd.VersionString())
smtpPassword, err := cfg.NotifyMailer.PasswordConfig.Pass()
cmd.FailOnError(err, "Couldn't load SMTP password from file")
mailClient = bmail.New(
cfg.NotifyMailer.Server,
cfg.NotifyMailer.Port,
cfg.NotifyMailer.Username,
smtpPassword,
nil,
*address,
log,
metrics.NoopRegisterer,
*reconnBase,
*reconnMax)
}
m := mailer{
clk: cmd.Clock(),
log: log,
dbMap: dbMap,
mailer: mailClient,
subject: *subject,
recipients: recipients,
emailTemplate: template,
targetRange: interval{
start: *start,
end: *end,
},
sleepInterval: *sleep,
parallelSends: *parallelSends,
}
err = m.run(context.TODO())
cmd.FailOnError(err, "Couldn't complete")
log.Info("Completed successfully")
}
func init() {
cmd.RegisterCommand("notify-mailer", main, &cmd.ConfigValidator{Config: &Config{}})
}

@@ -1,782 +0,0 @@
package notmain
import (
"context"
"database/sql"
"errors"
"fmt"
"io"
"os"
"testing"
"text/template"
"time"
"github.com/jmhodges/clock"
"github.com/letsencrypt/boulder/db"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/mocks"
"github.com/letsencrypt/boulder/test"
)
func TestIntervalOK(t *testing.T) {
// Test a number of intervals known to be OK, ensuring that no error is
// produced when calling `ok()`.
okCases := []struct {
testInterval interval
}{
{interval{}},
{interval{start: "aa", end: "\xFF"}},
{interval{end: "aa"}},
{interval{start: "aa", end: "bb"}},
}
for _, testcase := range okCases {
err := testcase.testInterval.ok()
test.AssertNotError(t, err, "valid interval produced ok() error")
}
badInterval := interval{start: "bb", end: "aa"}
err := badInterval.ok()
test.AssertError(t, err, "bad interval was considered ok")
}
func setupMakeRecipientList(t *testing.T, contents string) string {
entryFile, err := os.CreateTemp("", "")
test.AssertNotError(t, err, "couldn't create temp file")
_, err = entryFile.WriteString(contents)
test.AssertNotError(t, err, "couldn't write contents to temp file")
err = entryFile.Close()
test.AssertNotError(t, err, "couldn't close temp file")
return entryFile.Name()
}
func TestReadRecipientList(t *testing.T) {
contents := `id, domainName, date
10,example.com,2018-11-21
23,example.net,2018-11-22`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
list, _, err := readRecipientsList(entryFile, ',')
test.AssertNotError(t, err, "received an error for a valid CSV file")
expected := []recipient{
{id: 10, Data: map[string]string{"date": "2018-11-21", "domainName": "example.com"}},
{id: 23, Data: map[string]string{"date": "2018-11-22", "domainName": "example.net"}},
}
test.AssertDeepEquals(t, list, expected)
contents = `id domainName date
10 example.com 2018-11-21
23 example.net 2018-11-22`
entryFile = setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
list, _, err = readRecipientsList(entryFile, '\t')
test.AssertNotError(t, err, "received an error for a valid TSV file")
test.AssertDeepEquals(t, list, expected)
}
func TestReadRecipientListNoExtraColumns(t *testing.T) {
contents := `id
10
23`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, _, err := readRecipientsList(entryFile, ',')
test.AssertNotError(t, err, "received an error for a valid CSV file")
}
func TestReadRecipientsListFileNoExist(t *testing.T) {
_, _, err := readRecipientsList("doesNotExist", ',')
test.AssertError(t, err, "expected error for a file that doesn't exist")
}
func TestReadRecipientListWithEmptyColumnInHeader(t *testing.T) {
contents := `id, domainName,,date
10,example.com,2018-11-21
23,example.net`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, _, err := readRecipientsList(entryFile, ',')
test.AssertError(t, err, "failed to error on CSV file with trailing delimiter in header")
test.AssertDeepEquals(t, err, errors.New("header contains an empty column"))
}
func TestReadRecipientListWithProblems(t *testing.T) {
contents := `id, domainName, date
10,example.com,2018-11-21
23,example.net,
10,example.com,2018-11-22
42,example.net,
24,example.com,2018-11-21
24,example.com,2018-11-21
`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
recipients, probs, err := readRecipientsList(entryFile, ',')
test.AssertNotError(t, err, "received an error for a valid CSV file")
test.AssertEquals(t, probs, "ID(s) [23 42] contained empty columns and ID(s) [10 24] were skipped as duplicates")
test.AssertEquals(t, len(recipients), 4)
// Ensure trailing " and " is trimmed from single problem.
contents = `id, domainName, date
23,example.net,
10,example.com,2018-11-21
42,example.net,
`
entryFile = setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, probs, err = readRecipientsList(entryFile, ',')
test.AssertNotError(t, err, "received an error for a valid CSV file")
test.AssertEquals(t, probs, "ID(s) [23 42] contained empty columns")
}
func TestReadRecipientListWithEmptyLine(t *testing.T) {
contents := `id, domainName, date
10,example.com,2018-11-21

23,example.net,2018-11-22`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, _, err := readRecipientsList(entryFile, ',')
test.AssertNotError(t, err, "received an error for a valid CSV file")
}
func TestReadRecipientListWithMismatchedColumns(t *testing.T) {
contents := `id, domainName, date
10,example.com,2018-11-21
23,example.net`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, _, err := readRecipientsList(entryFile, ',')
test.AssertError(t, err, "failed to error on CSV file with mismatched columns")
}
func TestReadRecipientListWithDuplicateIDs(t *testing.T) {
contents := `id, domainName, date
10,example.com,2018-11-21
10,example.net,2018-11-22`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, _, err := readRecipientsList(entryFile, ',')
test.AssertNotError(t, err, "received an error for a valid CSV file")
}
func TestReadRecipientListWithUnparsableID(t *testing.T) {
contents := `id, domainName, date
10,example.com,2018-11-21
twenty,example.net,2018-11-22`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, _, err := readRecipientsList(entryFile, ',')
test.AssertError(t, err, "expected error for CSV file that contains an unparsable registration ID")
}
func TestReadRecipientListWithoutIDHeader(t *testing.T) {
contents := `notId, domainName, date
10,example.com,2018-11-21
twenty,example.net,2018-11-22`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, _, err := readRecipientsList(entryFile, ',')
test.AssertError(t, err, "expected error for CSV file missing header field `id`")
}
func TestReadRecipientListWithNoRecords(t *testing.T) {
contents := `id, domainName, date
`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, _, err := readRecipientsList(entryFile, ',')
test.AssertError(t, err, "expected error for CSV file containing only a header")
}
func TestReadRecipientListWithNoHeaderOrRecords(t *testing.T) {
contents := ``
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, _, err := readRecipientsList(entryFile, ',')
test.AssertError(t, err, "expected error for empty CSV file")
test.AssertErrorIs(t, err, io.EOF)
}
func TestMakeMessageBody(t *testing.T) {
emailTemplate := `{{range . }}
{{ .Data.date }}
{{ .Data.domainName }}
{{end}}`
m := &mailer{
log: blog.UseMock(),
mailer: &mocks.Mailer{},
emailTemplate: template.Must(template.New("email").Parse(emailTemplate)).Option("missingkey=error"),
sleepInterval: 0,
targetRange: interval{end: "\xFF"},
clk: clock.NewFake(),
recipients: nil,
dbMap: mockEmailResolver{},
}
recipients := []recipient{
{id: 10, Data: map[string]string{"date": "2018-11-21", "domainName": "example.com"}},
{id: 23, Data: map[string]string{"date": "2018-11-22", "domainName": "example.net"}},
}
expectedMessageBody := `
2018-11-21
example.com
2018-11-22
example.net
`
// Ensure that a very basic template with 2 recipients can be successfully
// executed.
messageBody, err := m.makeMessageBody(recipients)
test.AssertNotError(t, err, "failed to execute a valid template")
test.AssertEquals(t, messageBody, expectedMessageBody)
// With no recipients we should get an empty body error.
recipients = []recipient{}
_, err = m.makeMessageBody(recipients)
test.AssertError(t, err, "should have errored on empty body")
// With a missing key we should get an informative templating error.
recipients = []recipient{{id: 10, Data: map[string]string{"domainName": "example.com"}}}
_, err = m.makeMessageBody(recipients)
test.AssertEquals(t, err.Error(), "template: email:2:8: executing \"email\" at <.Data.date>: map has no entry for key \"date\"")
}
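The informative "map has no entry for key" error asserted above comes from `text/template`'s `Option("missingkey=error")`, which the mailer sets when parsing its template. A minimal standalone sketch of that behavior (the `render` helper is hypothetical, not Boulder code):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// render executes a tiny template with missingkey=error, so a missing map
// key becomes an execution error instead of silently printing "<no value>".
func render(data map[string]string) (string, error) {
	tmpl := template.Must(
		template.New("email").Parse("{{ .date }} {{ .domainName }}"),
	).Option("missingkey=error")
	var sb strings.Builder
	err := tmpl.Execute(&sb, data)
	return sb.String(), err
}

func main() {
	out, err := render(map[string]string{"date": "2018-11-21", "domainName": "example.com"})
	fmt.Println(out, err) // 2018-11-21 example.com <nil>

	// Missing "date": Execute returns an error naming the absent key.
	_, err = render(map[string]string{"domainName": "example.com"})
	fmt.Println(err != nil) // true
}
```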
func TestSleepInterval(t *testing.T) {
const sleepLen = 10
mc := &mocks.Mailer{}
dbMap := mockEmailResolver{}
tmpl := template.Must(template.New("letter").Parse("an email body"))
recipients := []recipient{{id: 1}, {id: 2}, {id: 3}}
// Set up a mailer that sleeps for `sleepLen` seconds between sends and only
// has one goroutine to process results.
m := &mailer{
log: blog.UseMock(),
mailer: mc,
emailTemplate: tmpl,
sleepInterval: sleepLen * time.Second,
parallelSends: 1,
targetRange: interval{start: "", end: "\xFF"},
clk: clock.NewFake(),
recipients: recipients,
dbMap: dbMap,
}
// Call run() - this should sleep `sleepLen` seconds per destination address.
// After it returns, we expect (sleepLen * number of destinations) seconds to
// have elapsed.
err := m.run(context.Background())
test.AssertNotError(t, err, "error calling mailer run()")
expectedEnd := clock.NewFake()
expectedEnd.Add(time.Second * time.Duration(sleepLen*len(recipients)))
test.AssertEquals(t, m.clk.Now(), expectedEnd.Now())
// Set up a mock mailer that doesn't sleep at all
m = &mailer{
log: blog.UseMock(),
mailer: mc,
emailTemplate: tmpl,
sleepInterval: 0,
targetRange: interval{end: "\xFF"},
clk: clock.NewFake(),
recipients: recipients,
dbMap: dbMap,
}
// Call run() - this should blast through all destinations without sleep
// After it returns, we expect no clock time to have elapsed on the fake clock
err = m.run(context.Background())
test.AssertNotError(t, err, "error calling mailer run()")
expectedEnd = clock.NewFake()
test.AssertEquals(t, m.clk.Now(), expectedEnd.Now())
}
func TestMailIntervals(t *testing.T) {
const testSubject = "Test Subject"
dbMap := mockEmailResolver{}
tmpl := template.Must(template.New("letter").Parse("an email body"))
recipients := []recipient{{id: 1}, {id: 2}, {id: 3}}
mc := &mocks.Mailer{}
// Create a mailer with a checkpoint interval larger than any of the
// destination email addresses.
m := &mailer{
log: blog.UseMock(),
mailer: mc,
dbMap: dbMap,
subject: testSubject,
recipients: recipients,
emailTemplate: tmpl,
targetRange: interval{start: "\xFF", end: "\xFF\xFF"},
sleepInterval: 0,
clk: clock.NewFake(),
}
// Run the mailer. It should produce an error about the interval start
mc.Clear()
err := m.run(context.Background())
test.AssertError(t, err, "expected error")
test.AssertEquals(t, len(mc.Messages), 0)
// Create a mailer with a negative sleep interval
m = &mailer{
log: blog.UseMock(),
mailer: mc,
dbMap: dbMap,
subject: testSubject,
recipients: recipients,
emailTemplate: tmpl,
targetRange: interval{},
sleepInterval: -10,
clk: clock.NewFake(),
}
// Run the mailer. It should produce an error about the sleep interval
mc.Clear()
err = m.run(context.Background())
test.AssertEquals(t, len(mc.Messages), 0)
test.AssertEquals(t, err.Error(), "sleep interval (-10) is < 0")
// Create a mailer with an interval starting with a specific email address.
// It should send email to that address and others alphabetically higher.
m = &mailer{
log: blog.UseMock(),
mailer: mc,
dbMap: dbMap,
subject: testSubject,
recipients: []recipient{{id: 1}, {id: 2}, {id: 3}, {id: 4}},
emailTemplate: tmpl,
targetRange: interval{start: "test-example-updated@letsencrypt.org", end: "\xFF"},
sleepInterval: 0,
clk: clock.NewFake(),
}
// Run the mailer. Two messages should have been produced, one to
// test-example-updated@letsencrypt.org (beginning of the range),
// and one to test-test-test@letsencrypt.org.
mc.Clear()
err = m.run(context.Background())
test.AssertNotError(t, err, "run() produced an error")
test.AssertEquals(t, len(mc.Messages), 2)
test.AssertEquals(t, mocks.MailerMessage{
To: "test-example-updated@letsencrypt.org",
Subject: testSubject,
Body: "an email body",
}, mc.Messages[0])
test.AssertEquals(t, mocks.MailerMessage{
To: "test-test-test@letsencrypt.org",
Subject: testSubject,
Body: "an email body",
}, mc.Messages[1])
// Create a mailer with a checkpoint interval ending before
// "test-example-updated@letsencrypt.org"
m = &mailer{
log: blog.UseMock(),
mailer: mc,
dbMap: dbMap,
subject: testSubject,
recipients: []recipient{{id: 1}, {id: 2}, {id: 3}, {id: 4}},
emailTemplate: tmpl,
targetRange: interval{end: "test-example-updated@letsencrypt.org"},
sleepInterval: 0,
clk: clock.NewFake(),
}
// Run the mailer. Two messages should have been produced: one to
// example@letsencrypt.org (ID 1) and one to
// example-example-example@letsencrypt.org (ID 4).
mc.Clear()
err = m.run(context.Background())
test.AssertNotError(t, err, "run() produced an error")
test.AssertEquals(t, len(mc.Messages), 2)
test.AssertEquals(t, mocks.MailerMessage{
To: "example-example-example@letsencrypt.org",
Subject: testSubject,
Body: "an email body",
}, mc.Messages[0])
test.AssertEquals(t, mocks.MailerMessage{
To: "example@letsencrypt.org",
Subject: testSubject,
Body: "an email body",
}, mc.Messages[1])
}
func TestParallelism(t *testing.T) {
const testSubject = "Test Subject"
dbMap := mockEmailResolver{}
tmpl := template.Must(template.New("letter").Parse("an email body"))
recipients := []recipient{{id: 1}, {id: 2}, {id: 3}, {id: 4}}
mc := &mocks.Mailer{}
// Create a mailer with 10 parallel workers.
m := &mailer{
log: blog.UseMock(),
mailer: mc,
dbMap: dbMap,
subject: testSubject,
recipients: recipients,
emailTemplate: tmpl,
targetRange: interval{end: "\xFF"},
sleepInterval: 0,
parallelSends: 10,
clk: clock.NewFake(),
}
mc.Clear()
err := m.run(context.Background())
test.AssertNotError(t, err, "run() produced an error")
// The fake clock should have advanced 9 seconds: one second for each parallel
// goroutine after the first, due to each goroutine's polite 1-second sleep at
// startup.
expectedEnd := clock.NewFake()
expectedEnd.Add(9 * time.Second)
test.AssertEquals(t, m.clk.Now(), expectedEnd.Now())
// A message should have been sent to all four addresses.
test.AssertEquals(t, len(mc.Messages), 4)
expectedAddresses := []string{
"example@letsencrypt.org",
"test-example-updated@letsencrypt.org",
"test-test-test@letsencrypt.org",
"example-example-example@letsencrypt.org",
}
for _, msg := range mc.Messages {
test.AssertSliceContains(t, expectedAddresses, msg.To)
}
}
func TestMessageContentStatic(t *testing.T) {
// Create a mailer with fixed content
const (
testSubject = "Test Subject"
)
dbMap := mockEmailResolver{}
mc := &mocks.Mailer{}
m := &mailer{
log: blog.UseMock(),
mailer: mc,
dbMap: dbMap,
subject: testSubject,
recipients: []recipient{{id: 1}},
emailTemplate: template.Must(template.New("letter").Parse("an email body")),
targetRange: interval{end: "\xFF"},
sleepInterval: 0,
clk: clock.NewFake(),
}
// Run the mailer; one message should have been created with the expected
// content.
err := m.run(context.Background())
test.AssertNotError(t, err, "error calling mailer run()")
test.AssertEquals(t, len(mc.Messages), 1)
test.AssertEquals(t, mocks.MailerMessage{
To: "example@letsencrypt.org",
Subject: testSubject,
Body: "an email body",
}, mc.Messages[0])
}
// Send mail with a variable interpolated.
func TestMessageContentInterpolated(t *testing.T) {
recipients := []recipient{
{
id: 1,
Data: map[string]string{
"validationMethod": "eyeballing it",
},
},
}
dbMap := mockEmailResolver{}
mc := &mocks.Mailer{}
m := &mailer{
log: blog.UseMock(),
mailer: mc,
dbMap: dbMap,
subject: "Test Subject",
recipients: recipients,
emailTemplate: template.Must(template.New("letter").Parse(
`issued by {{range .}}{{ .Data.validationMethod }}{{end}}`)),
targetRange: interval{end: "\xFF"},
sleepInterval: 0,
clk: clock.NewFake(),
}
// Run the mailer; one message should have been created with the expected
// content.
err := m.run(context.Background())
test.AssertNotError(t, err, "error calling mailer run()")
test.AssertEquals(t, len(mc.Messages), 1)
test.AssertEquals(t, mocks.MailerMessage{
To: "example@letsencrypt.org",
Subject: "Test Subject",
Body: "issued by eyeballing it",
}, mc.Messages[0])
}
// Send mail with a variable interpolated multiple times for accounts that share
// an email address.
func TestMessageContentInterpolatedMultiple(t *testing.T) {
recipients := []recipient{
{
id: 200,
Data: map[string]string{
"domain": "blog.example.com",
},
},
{
id: 201,
Data: map[string]string{
"domain": "nas.example.net",
},
},
{
id: 202,
Data: map[string]string{
"domain": "mail.example.org",
},
},
{
id: 203,
Data: map[string]string{
"domain": "panel.example.net",
},
},
}
dbMap := mockEmailResolver{}
mc := &mocks.Mailer{}
m := &mailer{
log: blog.UseMock(),
mailer: mc,
dbMap: dbMap,
subject: "Test Subject",
recipients: recipients,
emailTemplate: template.Must(template.New("letter").Parse(
`issued for:
{{range .}}{{ .Data.domain }}
{{end}}Thanks`)),
targetRange: interval{end: "\xFF"},
sleepInterval: 0,
clk: clock.NewFake(),
}
// Run the mailer; one message should have been created with the expected
// content.
err := m.run(context.Background())
test.AssertNotError(t, err, "error calling mailer run()")
test.AssertEquals(t, len(mc.Messages), 1)
test.AssertEquals(t, mocks.MailerMessage{
To: "gotta.lotta.accounts@letsencrypt.org",
Subject: "Test Subject",
Body: `issued for:
blog.example.com
nas.example.net
mail.example.org
panel.example.net
Thanks`,
}, mc.Messages[0])
}
// mockEmailResolver implements the `dbSelector` interface from
// `notify-mailer/main.go` to allow unit testing without a backing
// database.
type mockEmailResolver struct{}
// mockEmailResolver's SelectOne method matches the requested reg ID against a
// list of in-memory contactQueryResult records.
func (bs mockEmailResolver) SelectOne(ctx context.Context, output interface{}, _ string, args ...interface{}) error {
// The "dbList" is just a list of contact records in memory
dbList := []contactQueryResult{
{
ID: 1,
Contact: []byte(`["mailto:example@letsencrypt.org"]`),
},
{
ID: 2,
Contact: []byte(`["mailto:test-example-updated@letsencrypt.org"]`),
},
{
ID: 3,
Contact: []byte(`["mailto:test-test-test@letsencrypt.org"]`),
},
{
ID: 4,
Contact: []byte(`["mailto:example-example-example@letsencrypt.org"]`),
},
{
ID: 5,
Contact: []byte(`["mailto:youve.got.mail@letsencrypt.org"]`),
},
{
ID: 6,
Contact: []byte(`["mailto:mail@letsencrypt.org"]`),
},
{
ID: 7,
Contact: []byte(`["mailto:***********"]`),
},
{
ID: 200,
Contact: []byte(`["mailto:gotta.lotta.accounts@letsencrypt.org"]`),
},
{
ID: 201,
Contact: []byte(`["mailto:gotta.lotta.accounts@letsencrypt.org"]`),
},
{
ID: 202,
Contact: []byte(`["mailto:gotta.lotta.accounts@letsencrypt.org"]`),
},
{
ID: 203,
Contact: []byte(`["mailto:gotta.lotta.accounts@letsencrypt.org"]`),
},
{
ID: 204,
Contact: []byte(`["mailto:gotta.lotta.accounts@letsencrypt.org"]`),
},
}
// Play the type cast game so that we can dig into the arguments map and get
// out an int64 `id` parameter.
argsRaw := args[0]
argsMap, ok := argsRaw.(map[string]interface{})
if !ok {
return fmt.Errorf("incorrect args type %T", argsRaw)
}
idRaw := argsMap["id"]
id, ok := idRaw.(int64)
if !ok {
return fmt.Errorf("incorrect args ID type %T", idRaw)
}
// Play the type cast game to get a `*contactQueryResult` so we can write
// the result from the db list.
outputPtr, ok := output.(*contactQueryResult)
if !ok {
return fmt.Errorf("incorrect output type %T", output)
}
for _, v := range dbList {
if v.ID == id {
*outputPtr = v
}
}
if outputPtr.ID == 0 {
return db.ErrDatabaseOp{
Op: "select one",
Table: "registrations",
Err: sql.ErrNoRows,
}
}
return nil
}
func TestResolveEmails(t *testing.T) {
// Start with a list of reg IDs. Note: the IDs have been matched with fake
// results in the `dbList` slice in `mockEmailResolver`'s `SelectOne`. If you
// add more test cases here you must also add the corresponding DB result in
// the mock.
recipients := []recipient{
{
id: 1,
},
{
id: 2,
},
{
id: 3,
},
// This registration ID deliberately doesn't exist in the mock data to make
// sure this case is handled gracefully
{
id: 999,
},
// This registration ID deliberately returns an invalid email to make sure any
// invalid contact info that slipped into the DB once upon a time will be ignored
{
id: 7,
},
{
id: 200,
},
{
id: 201,
},
{
id: 202,
},
{
id: 203,
},
{
id: 204,
},
}
tmpl := template.Must(template.New("letter").Parse("an email body"))
dbMap := mockEmailResolver{}
mc := &mocks.Mailer{}
m := &mailer{
log: blog.UseMock(),
mailer: mc,
dbMap: dbMap,
subject: "Test",
recipients: recipients,
emailTemplate: tmpl,
targetRange: interval{end: "\xFF"},
sleepInterval: 0,
clk: clock.NewFake(),
}
addressesToRecipients, err := m.resolveAddresses(context.Background())
test.AssertNotError(t, err, "failed to resolveAddresses")
expected := []string{
"example@letsencrypt.org",
"test-example-updated@letsencrypt.org",
"test-test-test@letsencrypt.org",
"gotta.lotta.accounts@letsencrypt.org",
}
test.AssertEquals(t, len(addressesToRecipients), len(expected))
for _, address := range expected {
if _, ok := addressesToRecipients[address]; !ok {
t.Errorf("missing entry in addressesToRecipients: %q", address)
}
}
}


@ -1,3 +0,0 @@
This is a test message body regarding these domains:
{{ range . }} {{ .Extra.domainName }}
{{ end }}


@ -1,4 +0,0 @@
id,domainName
1,one.example.com
2,two.example.net
3,three.example.org


@ -11,7 +11,7 @@ import (
"github.com/letsencrypt/boulder/cmd"
"github.com/letsencrypt/boulder/features"
bgrpc "github.com/letsencrypt/boulder/grpc"
"github.com/letsencrypt/boulder/policy"
"github.com/letsencrypt/boulder/iana"
"github.com/letsencrypt/boulder/va"
vaConfig "github.com/letsencrypt/boulder/va/config"
vapb "github.com/letsencrypt/boulder/va/proto"
@ -87,16 +87,12 @@ func main() {
clk := cmd.Clock()
var servers bdns.ServerProvider
proto := "udp"
if features.Get().DOH {
proto = "tcp"
}
if len(c.RVA.DNSStaticResolvers) != 0 {
servers, err = bdns.NewStaticProvider(c.RVA.DNSStaticResolvers)
cmd.FailOnError(err, "Couldn't start static DNS server resolver")
} else {
servers, err = bdns.StartDynamicProvider(c.RVA.DNSProvider, 60*time.Second, proto)
servers, err = bdns.StartDynamicProvider(c.RVA.DNSProvider, 60*time.Second, "tcp")
cmd.FailOnError(err, "Couldn't start dynamic DNS server resolver")
}
defer servers.Stop()
@ -142,7 +138,7 @@ func main() {
c.RVA.AccountURIPrefixes,
c.RVA.Perspective,
c.RVA.RIR,
policy.IsReservedIP)
iana.IsReservedAddr)
cmd.FailOnError(err, "Unable to create Remote-VA server")
start, err := bgrpc.NewServer(c.RVA.GRPC, logger).Add(


@ -31,7 +31,7 @@ import (
"go.opentelemetry.io/otel/propagation"
"go.opentelemetry.io/otel/sdk/resource"
"go.opentelemetry.io/otel/sdk/trace"
semconv "go.opentelemetry.io/otel/semconv/v1.25.0"
semconv "go.opentelemetry.io/otel/semconv/v1.30.0"
"google.golang.org/grpc/grpclog"
"github.com/letsencrypt/boulder/config"


@ -133,16 +133,13 @@ func TestReadConfigFile(t *testing.T) {
test.AssertError(t, err, "ReadConfigFile('') did not error")
type config struct {
NotifyMailer struct {
DB DBConfig
SMTPConfig
}
Syslog SyslogConfig
GRPC *GRPCClientConfig
TLS *TLSConfig
}
var c config
err = ReadConfigFile("../test/config/notify-mailer.json", &c)
test.AssertNotError(t, err, "ReadConfigFile(../test/config/notify-mailer.json) errored")
test.AssertEquals(t, c.NotifyMailer.SMTPConfig.Server, "localhost")
err = ReadConfigFile("../test/config/health-checker.json", &c)
test.AssertNotError(t, err, "ReadConfigFile(../test/config/health-checker.json) errored")
test.AssertEquals(t, c.GRPC.Timeout.Duration, 1*time.Second)
}
func TestLogWriter(t *testing.T) {


@ -68,7 +68,7 @@ func (c AcmeChallenge) IsValid() bool {
}
}
// OCSPStatus defines the state of OCSP for a domain
// OCSPStatus defines the state of OCSP for a certificate
type OCSPStatus string
// These status are the states of OCSP
@ -123,8 +123,8 @@ type ValidationRecord struct {
// Shared
//
// TODO(#7311): Replace DnsName with Identifier.
DnsName string `json:"hostname,omitempty"`
// Hostname can hold either a DNS name or an IP address.
Hostname string `json:"hostname,omitempty"`
Port string `json:"port,omitempty"`
AddressesResolved []netip.Addr `json:"addressesResolved,omitempty"`
AddressUsed netip.Addr `json:"addressUsed,omitempty"`
@ -210,7 +210,7 @@ func (ch Challenge) RecordsSane() bool {
for _, rec := range ch.ValidationRecord {
// TODO(#7140): Add a check for ResolverAddress == "" only after the
// core.proto change has been deployed.
if rec.URL == "" || rec.DnsName == "" || rec.Port == "" || (rec.AddressUsed == netip.Addr{}) ||
if rec.URL == "" || rec.Hostname == "" || rec.Port == "" || (rec.AddressUsed == netip.Addr{}) ||
len(rec.AddressesResolved) == 0 {
return false
}
@ -224,7 +224,7 @@ func (ch Challenge) RecordsSane() bool {
}
// TODO(#7140): Add a check for ResolverAddress == "" only after the
// core.proto change has been deployed.
if ch.ValidationRecord[0].DnsName == "" || ch.ValidationRecord[0].Port == "" ||
if ch.ValidationRecord[0].Hostname == "" || ch.ValidationRecord[0].Port == "" ||
(ch.ValidationRecord[0].AddressUsed == netip.Addr{}) || len(ch.ValidationRecord[0].AddressesResolved) == 0 {
return false
}
@ -234,7 +234,7 @@ func (ch Challenge) RecordsSane() bool {
}
// TODO(#7140): Add a check for ResolverAddress == "" only after the
// core.proto change has been deployed.
if ch.ValidationRecord[0].DnsName == "" {
if ch.ValidationRecord[0].Hostname == "" {
return false
}
return true
@ -271,10 +271,10 @@ func (ch Challenge) StringID() string {
return base64.RawURLEncoding.EncodeToString(h.Sum(nil)[0:4])
}
// Authorization represents the authorization of an account key holder
// to act on behalf of a domain. This struct is intended to be used both
// internally and for JSON marshaling on the wire. Any fields that should be
// suppressed on the wire (e.g., ID, regID) must be made empty before marshaling.
// Authorization represents the authorization of an account key holder to act on
// behalf of an identifier. This struct is intended to be used both internally
// and for JSON marshaling on the wire. Any fields that should be suppressed on
// the wire (e.g., ID, regID) must be made empty before marshaling.
type Authorization struct {
// An identifier for this authorization, unique across
// authorizations and certificates within this instance.


@ -37,7 +37,7 @@ func TestRecordSanityCheckOnUnsupportedChallengeType(t *testing.T) {
rec := []ValidationRecord{
{
URL: "http://localhost/test",
DnsName: "localhost",
Hostname: "localhost",
Port: "80",
AddressesResolved: []netip.Addr{netip.MustParseAddr("127.0.0.1")},
AddressUsed: netip.MustParseAddr("127.0.0.1"),


@ -180,7 +180,6 @@ func (x *Challenge) GetValidationrecords() []*ValidationRecord {
type ValidationRecord struct {
state protoimpl.MessageState `protogen:"open.v1"`
// Next unused field number: 9
// TODO(#7311): Replace hostname with Identifier.
Hostname string `protobuf:"bytes,1,opt,name=hostname,proto3" json:"hostname,omitempty"`
Port string `protobuf:"bytes,2,opt,name=port,proto3" json:"port,omitempty"`
AddressesResolved [][]byte `protobuf:"bytes,3,rep,name=addressesResolved,proto3" json:"addressesResolved,omitempty"` // netip.Addr.MarshalText()


@ -28,7 +28,6 @@ message Challenge {
message ValidationRecord {
// Next unused field number: 9
// TODO(#7311): Replace hostname with Identifier.
string hostname = 1;
string port = 2;
repeated bytes addressesResolved = 3; // netip.Addr.MarshalText()


@ -79,7 +79,7 @@ services:
- setup
bmysql:
image: mariadb:10.5
image: mariadb:10.6.22
networks:
bouldernet:
aliases:


@ -236,7 +236,7 @@ order finalization and does not offer the new-cert endpoint.
* 3-4: RA does the following:
* Verify the PKCS#10 CSR in the certificate request object
* Verify that the CSR has a non-zero number of domain names
* Verify that the CSR has a non-zero number of identifiers
* Verify that the public key in the CSR is different from the account key
* For each authorization referenced in the certificate request
* Retrieve the authorization from the database
@ -303,7 +303,7 @@ ACME v2:
* 2-4: RA does the following:
* Verify the PKCS#10 CSR in the certificate request object
* Verify that the CSR has a non-zero number of domain names
* Verify that the CSR has a non-zero number of identifiers
* Verify that the public key in the CSR is different from the account key
* Retrieve and verify the status and expiry of the order object
* For each identifier referenced in the order request


@ -53,7 +53,7 @@ func (c *EmailCache) Seen(email string) bool {
return true
}
func (c *EmailCache) Store(email string) {
func (c *EmailCache) Remove(email string) {
if c == nil {
// If the cache is nil we assume it was not configured.
return
@ -64,5 +64,29 @@ func (c *EmailCache) Store(email string) {
c.Lock()
defer c.Unlock()
c.cache.Add(hash, nil)
c.cache.Remove(hash)
}
// StoreIfAbsent stores the email in the cache if it is not already present, as
// a single atomic operation. It returns true if the email was stored and false
// if it was already in the cache. If the cache is nil, true is always returned.
func (c *EmailCache) StoreIfAbsent(email string) bool {
if c == nil {
// If the cache is nil we assume it was not configured.
return true
}
hash := hashEmail(email)
c.Lock()
defer c.Unlock()
_, ok := c.cache.Get(hash)
if ok {
c.requests.WithLabelValues("hit").Inc()
return false
}
c.cache.Add(hash, nil)
c.requests.WithLabelValues("miss").Inc()
return true
}
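The reason StoreIfAbsent is a single atomic operation (rather than separate Seen/Store calls) can be sketched with a toy map-backed set: the membership check and the insert happen under one lock, so two workers racing on the same key can never both observe "absent". `dedupSet` and its names below are illustrative only, not Boulder code:

```go
package main

import (
	"fmt"
	"sync"
)

// dedupSet is a toy stand-in for the email cache.
type dedupSet struct {
	mu   sync.Mutex
	seen map[string]struct{}
}

func newDedupSet() *dedupSet {
	return &dedupSet{seen: make(map[string]struct{})}
}

// StoreIfAbsent returns true exactly once per key, no matter how many
// goroutines call it concurrently, because check and insert share one lock.
func (d *dedupSet) StoreIfAbsent(key string) bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	if _, ok := d.seen[key]; ok {
		return false
	}
	d.seen[key] = struct{}{}
	return true
}

func main() {
	d := newDedupSet()
	var wg sync.WaitGroup
	var mu sync.Mutex
	sends := 0
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			if d.StoreIfAbsent("duplicate@example.com") {
				mu.Lock()
				sends++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println(sends) // always 1: only one worker wins the insert
}
```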


@ -40,6 +40,7 @@ type ExporterImpl struct {
maxConcurrentRequests int
limiter *rate.Limiter
client PardotClient
emailCache *EmailCache
emailsHandledCounter prometheus.Counter
pardotErrorCounter prometheus.Counter
log blog.Logger
@ -54,7 +55,7 @@ var _ emailpb.ExporterServer = (*ExporterImpl)(nil)
// is assigned 40% (20,000 requests), it should also receive 40% of the max
// concurrent requests (e.g., 2 out of 5). For more details, see:
// https://developer.salesforce.com/docs/marketing/pardot/guide/overview.html?q=rate%20limits
func NewExporterImpl(client PardotClient, perDayLimit float64, maxConcurrentRequests int, scope prometheus.Registerer, logger blog.Logger) *ExporterImpl {
func NewExporterImpl(client PardotClient, cache *EmailCache, perDayLimit float64, maxConcurrentRequests int, scope prometheus.Registerer, logger blog.Logger) *ExporterImpl {
limiter := rate.NewLimiter(rate.Limit(perDayLimit/86400.0), maxConcurrentRequests)
emailsHandledCounter := prometheus.NewCounter(prometheus.CounterOpts{
@ -74,6 +75,7 @@ func NewExporterImpl(client PardotClient, perDayLimit float64, maxConcurrentRequ
limiter: limiter,
toSend: make([]string, 0, contactsQueueCap),
client: client,
emailCache: cache,
emailsHandledCounter: emailsHandledCounter,
pardotErrorCounter: pardotErrorCounter,
log: logger,
@ -145,6 +147,11 @@ func (impl *ExporterImpl) Start(daemonCtx context.Context) {
impl.toSend = impl.toSend[:last]
impl.Unlock()
if !impl.emailCache.StoreIfAbsent(email) {
// Another worker has already processed this email.
continue
}
err := impl.limiter.Wait(daemonCtx)
if err != nil && !errors.Is(err, context.Canceled) {
impl.log.Errf("Unexpected limiter.Wait() error: %s", err)
@ -153,10 +160,12 @@ func (impl *ExporterImpl) Start(daemonCtx context.Context) {
err = impl.client.SendContact(email)
if err != nil {
impl.emailCache.Remove(email)
impl.pardotErrorCounter.Inc()
impl.log.Errf("Sending Contact to Pardot: %s", err)
} else {
impl.emailsHandledCounter.Inc()
}
impl.emailsHandledCounter.Inc()
}
}


@ -22,16 +22,14 @@ var ctx = context.Background()
type mockPardotClientImpl struct {
sync.Mutex
CreatedContacts []string
cache *EmailCache
}
// newMockPardotClientImpl returns a MockPardotClientImpl, implementing the
// PardotClient interface. Both refer to the same instance, with the interface
// for mock interaction and the struct for state inspection and modification.
func newMockPardotClientImpl(cache *EmailCache) (PardotClient, *mockPardotClientImpl) {
func newMockPardotClientImpl() (PardotClient, *mockPardotClientImpl) {
mockImpl := &mockPardotClientImpl{
CreatedContacts: []string{},
cache: cache,
}
return mockImpl, mockImpl
}
@ -41,8 +39,6 @@ func (m *mockPardotClientImpl) SendContact(email string) error {
m.Lock()
m.CreatedContacts = append(m.CreatedContacts, email)
m.Unlock()
m.cache.Store(email)
return nil
}
@ -59,8 +55,8 @@ func (m *mockPardotClientImpl) getCreatedContacts() []string {
// ExporterImpl queue and cleanup() to drain and shutdown. If start() is called,
// cleanup() must be called.
func setup() (*ExporterImpl, *mockPardotClientImpl, func(), func()) {
mockClient, clientImpl := newMockPardotClientImpl(nil)
exporter := NewExporterImpl(mockClient, 1000000, 5, metrics.NoopRegisterer, blog.NewMock())
mockClient, clientImpl := newMockPardotClientImpl()
exporter := NewExporterImpl(mockClient, nil, 1000000, 5, metrics.NoopRegisterer, blog.NewMock())
daemonCtx, cancel := context.WithCancel(context.Background())
return exporter, clientImpl,
func() { exporter.Start(daemonCtx) },
@ -149,7 +145,7 @@ func TestSendContactsErrorMetrics(t *testing.T) {
t.Parallel()
mockClient := &mockAlwaysFailClient{}
exporter := NewExporterImpl(mockClient, 1000000, 5, metrics.NoopRegisterer, blog.NewMock())
exporter := NewExporterImpl(mockClient, nil, 1000000, 5, metrics.NoopRegisterer, blog.NewMock())
daemonCtx, cancel := context.WithCancel(context.Background())
exporter.Start(daemonCtx)
@ -166,3 +162,64 @@ func TestSendContactsErrorMetrics(t *testing.T) {
// Check that the error counter was incremented.
test.AssertMetricWithLabelsEquals(t, exporter.pardotErrorCounter, prometheus.Labels{}, 1)
}
func TestSendContactDeduplication(t *testing.T) {
t.Parallel()
cache := NewHashedEmailCache(1000, metrics.NoopRegisterer)
mockClient, clientImpl := newMockPardotClientImpl()
exporter := NewExporterImpl(mockClient, cache, 1000000, 5, metrics.NoopRegisterer, blog.NewMock())
daemonCtx, cancel := context.WithCancel(context.Background())
exporter.Start(daemonCtx)
_, err := exporter.SendContacts(ctx, &emailpb.SendContactsRequest{
Emails: []string{"duplicate@example.com", "duplicate@example.com"},
})
test.AssertNotError(t, err, "Error enqueuing contacts")
// Drain the queue.
cancel()
exporter.Drain()
contacts := clientImpl.getCreatedContacts()
test.AssertEquals(t, 1, len(contacts))
test.AssertEquals(t, "duplicate@example.com", contacts[0])
// Only one successful send should be recorded.
test.AssertMetricWithLabelsEquals(t, exporter.emailsHandledCounter, prometheus.Labels{}, 1)
if !cache.Seen("duplicate@example.com") {
t.Errorf("duplicate@example.com should have been cached after send")
}
}
func TestSendContactErrorRemovesFromCache(t *testing.T) {
t.Parallel()
cache := NewHashedEmailCache(1000, metrics.NoopRegisterer)
fc := &mockAlwaysFailClient{}
exporter := NewExporterImpl(fc, cache, 1000000, 1, metrics.NoopRegisterer, blog.NewMock())
daemonCtx, cancel := context.WithCancel(context.Background())
exporter.Start(daemonCtx)
_, err := exporter.SendContacts(ctx, &emailpb.SendContactsRequest{
Emails: []string{"error@example.com"},
})
test.AssertNotError(t, err, "enqueue failed")
// Drain the queue.
cancel()
exporter.Drain()
// The email should have been evicted from the cache after send encountered
// an error.
if cache.Seen("error@example.com") {
t.Errorf("error@example.com should have been evicted from cache after send errors")
}
// Check that the error counter was incremented.
test.AssertMetricWithLabelsEquals(t, exporter.pardotErrorCounter, prometheus.Labels{}, 1)
}


@ -63,14 +63,13 @@ type PardotClientImpl struct {
contactsURL string
tokenURL string
token *oAuthToken
emailCache *EmailCache
clk clock.Clock
}
var _ PardotClient = &PardotClientImpl{}
// NewPardotClientImpl creates a new PardotClientImpl.
func NewPardotClientImpl(clk clock.Clock, businessUnit, clientId, clientSecret, oauthbaseURL, pardotBaseURL string, cache *EmailCache) (*PardotClientImpl, error) {
func NewPardotClientImpl(clk clock.Clock, businessUnit, clientId, clientSecret, oauthbaseURL, pardotBaseURL string) (*PardotClientImpl, error) {
contactsURL, err := url.JoinPath(pardotBaseURL, contactsPath)
if err != nil {
return nil, fmt.Errorf("failed to join contacts path: %w", err)
@ -87,7 +86,6 @@ func NewPardotClientImpl(clk clock.Clock, businessUnit, clientId, clientSecret,
contactsURL: contactsURL,
tokenURL: tokenURL,
token: &oAuthToken{},
emailCache: cache,
clk: clk,
}, nil
}
@ -145,15 +143,6 @@ func redactEmail(body []byte, email string) string {
// SendContact submits an email to the Pardot Contacts endpoint, retrying up
// to 3 times with exponential backoff.
func (pc *PardotClientImpl) SendContact(email string) error {
if pc.emailCache.Seen(email) {
// Another goroutine has already sent this email address.
return nil
}
// There is a possible race here where two goroutines could enqueue and send
// the same email address between this check and the actual HTTP request.
// However, at an average rate of ~1 email every 2 seconds, this is unlikely
// to happen in practice.
var err error
for attempt := range maxAttempts {
time.Sleep(core.RetryBackoff(attempt, retryBackoffMin, retryBackoffMax, retryBackoffBase))
@ -193,7 +182,6 @@ func (pc *PardotClientImpl) SendContact(email string) error {
defer resp.Body.Close()
if resp.StatusCode >= 200 && resp.StatusCode < 300 {
pc.emailCache.Store(email)
return nil
}


@ -6,14 +6,11 @@ import (
"io"
"net/http"
"net/http/httptest"
"sync/atomic"
"testing"
"time"
"github.com/jmhodges/clock"
"github.com/letsencrypt/boulder/metrics"
"github.com/letsencrypt/boulder/test"
"github.com/prometheus/client_golang/prometheus"
)
func defaultTokenHandler(w http.ResponseWriter, r *http.Request) {
@ -47,7 +44,7 @@ func TestSendContactSuccess(t *testing.T) {
defer contactSrv.Close()
clk := clock.NewFake()
client, err := NewPardotClientImpl(clk, "biz-unit", "cid", "csec", tokenSrv.URL, contactSrv.URL, nil)
client, err := NewPardotClientImpl(clk, "biz-unit", "cid", "csec", tokenSrv.URL, contactSrv.URL)
test.AssertNotError(t, err, "failed to create client")
err = client.SendContact("test@example.com")
@ -73,7 +70,7 @@ func TestSendContactUpdateTokenFails(t *testing.T) {
defer contactSrv.Close()
clk := clock.NewFake()
client, err := NewPardotClientImpl(clk, "biz-unit", "cid", "csec", tokenSrv.URL, contactSrv.URL, nil)
client, err := NewPardotClientImpl(clk, "biz-unit", "cid", "csec", tokenSrv.URL, contactSrv.URL)
test.AssertNotError(t, err, "Failed to create client")
err = client.SendContact("test@example.com")
@ -97,7 +94,7 @@ func TestSendContact4xx(t *testing.T) {
defer contactSrv.Close()
clk := clock.NewFake()
client, err := NewPardotClientImpl(clk, "biz-unit", "cid", "csec", tokenSrv.URL, contactSrv.URL, nil)
client, err := NewPardotClientImpl(clk, "biz-unit", "cid", "csec", tokenSrv.URL, contactSrv.URL)
test.AssertNotError(t, err, "Failed to create client")
err = client.SendContact("test@example.com")
@ -145,7 +142,7 @@ func TestSendContactTokenExpiry(t *testing.T) {
defer contactSrv.Close()
clk := clock.NewFake()
client, err := NewPardotClientImpl(clk, "biz-unit", "cid", "csec", tokenSrv.URL, contactSrv.URL, nil)
client, err := NewPardotClientImpl(clk, "biz-unit", "cid", "csec", tokenSrv.URL, contactSrv.URL)
test.AssertNotError(t, err, "Failed to create client")
// First call uses the initial token ("old_token").
@ -175,7 +172,7 @@ func TestSendContactServerErrorsAfterMaxAttempts(t *testing.T) {
contactSrv := httptest.NewServer(http.HandlerFunc(contactHandler))
defer contactSrv.Close()
client, _ := NewPardotClientImpl(clock.NewFake(), "biz-unit", "cid", "csec", tokenSrv.URL, contactSrv.URL, nil)
client, _ := NewPardotClientImpl(clock.NewFake(), "biz-unit", "cid", "csec", tokenSrv.URL, contactSrv.URL)
err := client.SendContact("test@example.com")
test.AssertError(t, err, "Should fail after retrying all attempts")
@ -203,7 +200,7 @@ func TestSendContactRedactsEmail(t *testing.T) {
defer contactSrv.Close()
clk := clock.NewFake()
client, err := NewPardotClientImpl(clk, "biz-unit", "cid", "csec", tokenSrv.URL, contactSrv.URL, nil)
client, err := NewPardotClientImpl(clk, "biz-unit", "cid", "csec", tokenSrv.URL, contactSrv.URL)
test.AssertNotError(t, err, "failed to create client")
err = client.SendContact(emailToTest)
@ -211,30 +208,3 @@ func TestSendContactRedactsEmail(t *testing.T) {
test.AssertNotContains(t, err.Error(), emailToTest)
test.AssertContains(t, err.Error(), "[REDACTED]")
}
func TestSendContactDeduplication(t *testing.T) {
t.Parallel()
tokenSrv := httptest.NewServer(http.HandlerFunc(defaultTokenHandler))
defer tokenSrv.Close()
var contactHits int32
contactSrv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
atomic.AddInt32(&contactHits, 1)
w.WriteHeader(http.StatusOK)
}))
defer contactSrv.Close()
cache := NewHashedEmailCache(1000, metrics.NoopRegisterer)
client, _ := NewPardotClientImpl(clock.New(), "biz", "cid", "csec", tokenSrv.URL, contactSrv.URL, cache)
err := client.SendContact("test@example.com")
test.AssertNotError(t, err, "SendContact should succeed on first call")
test.AssertMetricWithLabelsEquals(t, client.emailCache.requests, prometheus.Labels{"status": "miss"}, 1)
err = client.SendContact("test@example.com")
test.AssertNotError(t, err, "SendContact should succeed on second call")
test.AssertEquals(t, int32(1), atomic.LoadInt32(&contactHits))
test.AssertMetricWithLabelsEquals(t, client.emailCache.requests, prometheus.Labels{"status": "hit"}, 1)
}


@ -25,16 +25,14 @@ type Config struct {
EnforceMPIC bool
MPICFullResults bool
UnsplitIssuance bool
ExpirationMailerUsesJoin bool
DOH bool
IgnoreAccountContacts bool
// ServeRenewalInfo exposes the renewalInfo endpoint in the directory and for
// GET requests. WARNING: This feature is a draft and highly unstable.
ServeRenewalInfo bool
// ExpirationMailerUsesJoin enables using a JOIN query in expiration-mailer
// rather than a SELECT from certificateStatus followed by thousands of
// one-row SELECTs from certificates.
ExpirationMailerUsesJoin bool
// CertCheckerChecksValidations enables an extra query for each certificate
// checked, to find the relevant authzs. Since this query might be
// expensive, we gate it behind a feature flag.
@ -53,9 +51,6 @@ type Config struct {
// for the cert URL to appear.
AsyncFinalize bool
// DOH enables DNS-over-HTTPS queries for validation
DOH bool
// CheckIdentifiersPaused checks if any of the identifiers in the order are
// currently paused at NewOrder time. If any are paused, an error is
// returned to the Subscriber indicating that the order cannot be processed
@ -85,10 +80,6 @@ type Config struct {
// StoreARIReplacesInOrders causes the SA to store and retrieve the optional
// ARI replaces field in the orders table.
StoreARIReplacesInOrders bool
// IgnoreAccountContacts causes the SA to omit the contacts column when
// creating new account rows, and when retrieving existing account rows.
IgnoreAccountContacts bool
}
var fMu = new(sync.RWMutex)

go.mod

@ -3,10 +3,10 @@ module github.com/letsencrypt/boulder
go 1.24.0
require (
github.com/aws/aws-sdk-go-v2 v1.32.2
github.com/aws/aws-sdk-go-v2/config v1.27.43
github.com/aws/aws-sdk-go-v2/service/s3 v1.65.3
github.com/aws/smithy-go v1.22.0
github.com/aws/aws-sdk-go-v2 v1.36.5
github.com/aws/aws-sdk-go-v2/config v1.29.17
github.com/aws/aws-sdk-go-v2/service/s3 v1.81.0
github.com/aws/smithy-go v1.22.4
github.com/eggsampler/acme/v3 v3.6.2-0.20250208073118-0466a0230941
github.com/go-jose/go-jose/v4 v4.1.0
github.com/go-logr/stdr v1.2.2
@ -30,41 +30,41 @@ require (
github.com/weppos/publicsuffix-go v0.40.3-0.20250307081557-c05521c3453a
github.com/zmap/zcrypto v0.0.0-20250129210703-03c45d0bae98
github.com/zmap/zlint/v3 v3.6.6
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.55.0
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.55.0
go.opentelemetry.io/otel v1.34.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.30.0
go.opentelemetry.io/otel/sdk v1.34.0
go.opentelemetry.io/otel/trace v1.34.0
golang.org/x/crypto v0.36.0
golang.org/x/net v0.38.0
golang.org/x/sync v0.12.0
golang.org/x/term v0.30.0
golang.org/x/text v0.23.0
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0
go.opentelemetry.io/otel v1.36.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.36.0
go.opentelemetry.io/otel/sdk v1.36.0
go.opentelemetry.io/otel/trace v1.36.0
golang.org/x/crypto v0.38.0
golang.org/x/net v0.40.0
golang.org/x/sync v0.14.0
golang.org/x/term v0.32.0
golang.org/x/text v0.25.0
golang.org/x/time v0.11.0
google.golang.org/grpc v1.71.1
google.golang.org/grpc v1.72.1
google.golang.org/protobuf v1.36.6
gopkg.in/yaml.v3 v3.0.1
)
require (
filippo.io/edwards25519 v1.1.0 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.6 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.17.41 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.17 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.21 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.21 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.21 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.0 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.2 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.2 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.2 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.24.2 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.2 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.32.2 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.17.70 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.32 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.36 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.36 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.17 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.17 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.25.5 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.30.3 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.34.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cenkalti/backoff/v5 v5.0.2 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
@ -74,7 +74,7 @@ require (
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.1.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.22.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/pelletier/go-toml v1.9.5 // indirect
github.com/poy/onpar v1.1.2 // indirect
@ -82,13 +82,13 @@ require (
github.com/prometheus/procfs v0.15.1 // indirect
github.com/redis/go-redis/extra/rediscmd/v9 v9.5.3 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.30.0 // indirect
go.opentelemetry.io/otel/metric v1.34.0 // indirect
go.opentelemetry.io/proto/otlp v1.3.1 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0 // indirect
go.opentelemetry.io/otel/metric v1.36.0 // indirect
go.opentelemetry.io/proto/otlp v1.6.0 // indirect
golang.org/x/mod v0.22.0 // indirect
golang.org/x/sys v0.31.0 // indirect
golang.org/x/sys v0.33.0 // indirect
golang.org/x/tools v0.29.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250106144421-5f5ef82da422 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250519155744-55703ea1f237 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250519155744-55703ea1f237 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
)

go.sum

@ -7,42 +7,42 @@ github.com/a8m/expect v1.0.0/go.mod h1:4IwSCMumY49ScypDnjNbYEjgVeqy1/U2cEs3Lat96
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/aws/aws-sdk-go-v2 v1.32.2 h1:AkNLZEyYMLnx/Q/mSKkcMqwNFXMAvFto9bNsHqcTduI=
github.com/aws/aws-sdk-go-v2 v1.32.2/go.mod h1:2SK5n0a2karNTv5tbP1SjsX0uhttou00v/HpXKM1ZUo=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.6 h1:pT3hpW0cOHRJx8Y0DfJUEQuqPild8jRGmSFmBgvydr0=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.6/go.mod h1:j/I2++U0xX+cr44QjHay4Cvxj6FUbnxrgmqN3H1jTZA=
github.com/aws/aws-sdk-go-v2/config v1.27.43 h1:p33fDDihFC390dhhuv8nOmX419wjOSDQRb+USt20RrU=
github.com/aws/aws-sdk-go-v2/config v1.27.43/go.mod h1:pYhbtvg1siOOg8h5an77rXle9tVG8T+BWLWAo7cOukc=
github.com/aws/aws-sdk-go-v2/credentials v1.17.41 h1:7gXo+Axmp+R4Z+AK8YFQO0ZV3L0gizGINCOWxSLY9W8=
github.com/aws/aws-sdk-go-v2/credentials v1.17.41/go.mod h1:u4Eb8d3394YLubphT4jLEwN1rLNq2wFOlT6OuxFwPzU=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.17 h1:TMH3f/SCAWdNtXXVPPu5D6wrr4G5hI1rAxbcocKfC7Q=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.17/go.mod h1:1ZRXLdTpzdJb9fwTMXiLipENRxkGMTn1sfKexGllQCw=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.21 h1:UAsR3xA31QGf79WzpG/ixT9FZvQlh5HY1NRqSHBNOCk=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.21/go.mod h1:JNr43NFf5L9YaG3eKTm7HQzls9J+A9YYcGI5Quh1r2Y=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.21 h1:6jZVETqmYCadGFvrYEQfC5fAQmlo80CeL5psbno6r0s=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.21/go.mod h1:1SR0GbLlnN3QUmYaflZNiH1ql+1qrSiB2vwcJ+4UM60=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 h1:VaRN3TlFdd6KxX1x3ILT5ynH6HvKgqdiXoTxAF4HQcQ=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1/go.mod h1:FbtygfRFze9usAadmnGJNc8KsP346kEe+y2/oyhGAGc=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.21 h1:7edmS3VOBDhK00b/MwGtGglCm7hhwNYnjJs/PgFdMQE=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.21/go.mod h1:Q9o5h4HoIWG8XfzxqiuK/CGUbepCJ8uTlaE3bAbxytQ=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.0 h1:TToQNkvGguu209puTojY/ozlqy2d/SFNcoLIqTFi42g=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.0/go.mod h1:0jp+ltwkf+SwG2fm/PKo8t4y8pJSgOCO4D8Lz3k0aHQ=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.2 h1:4FMHqLfk0efmTqhXVRL5xYRqlEBNBiRI7N6w4jsEdd4=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.2/go.mod h1:LWoqeWlK9OZeJxsROW2RqrSPvQHKTpp69r/iDjwsSaw=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.2 h1:s7NA1SOw8q/5c0wr8477yOPp0z+uBaXBnLE0XYb0POA=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.2/go.mod h1:fnjjWyAW/Pj5HYOxl9LJqWtEwS7W2qgcRLWP+uWbss0=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.2 h1:t7iUP9+4wdc5lt3E41huP+GvQZJD38WLsgVp4iOtAjg=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.2/go.mod h1:/niFCtmuQNxqx9v8WAPq5qh7EH25U4BF6tjoyq9bObM=
github.com/aws/aws-sdk-go-v2/service/s3 v1.65.3 h1:xxHGZ+wUgZNACQmxtdvP5tgzfsxGS3vPpTP5Hy3iToE=
github.com/aws/aws-sdk-go-v2/service/s3 v1.65.3/go.mod h1:cB6oAuus7YXRZhWCc1wIwPywwZ1XwweNp2TVAEGYeB8=
github.com/aws/aws-sdk-go-v2/service/sso v1.24.2 h1:bSYXVyUzoTHoKalBmwaZxs97HU9DWWI3ehHSAMa7xOk=
github.com/aws/aws-sdk-go-v2/service/sso v1.24.2/go.mod h1:skMqY7JElusiOUjMJMOv1jJsP7YUg7DrhgqZZWuzu1U=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.2 h1:AhmO1fHINP9vFYUE0LHzCWg/LfUWUF+zFPEcY9QXb7o=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.2/go.mod h1:o8aQygT2+MVP0NaV6kbdE1YnnIM8RRVQzoeUH45GOdI=
github.com/aws/aws-sdk-go-v2/service/sts v1.32.2 h1:CiS7i0+FUe+/YY1GvIBLLrR/XNGZ4CtM1Ll0XavNuVo=
github.com/aws/aws-sdk-go-v2/service/sts v1.32.2/go.mod h1:HtaiBI8CjYoNVde8arShXb94UbQQi9L4EMr6D+xGBwo=
github.com/aws/smithy-go v1.22.0 h1:uunKnWlcoL3zO7q+gG2Pk53joueEOsnNB28QdMsmiMM=
github.com/aws/smithy-go v1.22.0/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg=
github.com/aws/aws-sdk-go-v2 v1.36.5 h1:0OF9RiEMEdDdZEMqF9MRjevyxAQcf6gY+E7vwBILFj0=
github.com/aws/aws-sdk-go-v2 v1.36.5/go.mod h1:EYrzvCCN9CMUTa5+6lf6MM4tq3Zjp8UhSGR/cBsjai0=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11 h1:12SpdwU8Djs+YGklkinSSlcrPyj3H4VifVsKf78KbwA=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11/go.mod h1:dd+Lkp6YmMryke+qxW/VnKyhMBDTYP41Q2Bb+6gNZgY=
github.com/aws/aws-sdk-go-v2/config v1.29.17 h1:jSuiQ5jEe4SAMH6lLRMY9OVC+TqJLP5655pBGjmnjr0=
github.com/aws/aws-sdk-go-v2/config v1.29.17/go.mod h1:9P4wwACpbeXs9Pm9w1QTh6BwWwJjwYvJ1iCt5QbCXh8=
github.com/aws/aws-sdk-go-v2/credentials v1.17.70 h1:ONnH5CM16RTXRkS8Z1qg7/s2eDOhHhaXVd72mmyv4/0=
github.com/aws/aws-sdk-go-v2/credentials v1.17.70/go.mod h1:M+lWhhmomVGgtuPOhO85u4pEa3SmssPTdcYpP/5J/xc=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.32 h1:KAXP9JSHO1vKGCr5f4O6WmlVKLFFXgWYAGoJosorxzU=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.32/go.mod h1:h4Sg6FQdexC1yYG9RDnOvLbW1a/P986++/Y/a+GyEM8=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.36 h1:SsytQyTMHMDPspp+spo7XwXTP44aJZZAC7fBV2C5+5s=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.36/go.mod h1:Q1lnJArKRXkenyog6+Y+zr7WDpk4e6XlR6gs20bbeNo=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.36 h1:i2vNHQiXUvKhs3quBR6aqlgJaiaexz/aNvdCktW/kAM=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.36/go.mod h1:UdyGa7Q91id/sdyHPwth+043HhmP6yP9MBHgbZM0xo8=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 h1:bIqFDwgGXXN1Kpp99pDOdKMTTb5d2KyU5X/BZxjOkRo=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3/go.mod h1:H5O/EsxDWyU+LP/V8i5sm8cxoZgc2fdNR9bxlOFrQTo=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36 h1:GMYy2EOWfzdP3wfVAGXBNKY5vK4K8vMET4sYOYltmqs=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36/go.mod h1:gDhdAV6wL3PmPqBhiPbnlS447GoWs8HTTOYef9/9Inw=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.4 h1:CXV68E2dNqhuynZJPB80bhPQwAKqBWVer887figW6Jc=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.4/go.mod h1:/xFi9KtvBXP97ppCz1TAEvU1Uf66qvid89rbem3wCzQ=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.4 h1:nAP2GYbfh8dd2zGZqFRSMlq+/F6cMPBUuCsGAMkN074=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.4/go.mod h1:LT10DsiGjLWh4GbjInf9LQejkYEhBgBCjLG5+lvk4EE=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.17 h1:t0E6FzREdtCsiLIoLCWsYliNsRBgyGD/MCK571qk4MI=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.17/go.mod h1:ygpklyoaypuyDvOM5ujWGrYWpAK3h7ugnmKCU/76Ys4=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.17 h1:qcLWgdhq45sDM9na4cvXax9dyLitn8EYBRl8Ak4XtG4=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.17/go.mod h1:M+jkjBFZ2J6DJrjMv2+vkBbuht6kxJYtJiwoVgX4p4U=
github.com/aws/aws-sdk-go-v2/service/s3 v1.81.0 h1:1GmCadhKR3J2sMVKs2bAYq9VnwYeCqfRyZzD4RASGlA=
github.com/aws/aws-sdk-go-v2/service/s3 v1.81.0/go.mod h1:kUklwasNoCn5YpyAqC/97r6dzTA1SRKJfKq16SXeoDU=
github.com/aws/aws-sdk-go-v2/service/sso v1.25.5 h1:AIRJ3lfb2w/1/8wOOSqYb9fUKGwQbtysJ2H1MofRUPg=
github.com/aws/aws-sdk-go-v2/service/sso v1.25.5/go.mod h1:b7SiVprpU+iGazDUqvRSLf5XmCdn+JtT1on7uNL6Ipc=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.30.3 h1:BpOxT3yhLwSJ77qIY3DoHAQjZsc4HEGfMCE4NGy3uFg=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.30.3/go.mod h1:vq/GQR1gOFLquZMSrxUK/cpvKCNVYibNyJ1m7JrU88E=
github.com/aws/aws-sdk-go-v2/service/sts v1.34.0 h1:NFOJ/NXEGV4Rq//71Hs1jC/NvPs1ezajK+yQmkwnPV0=
github.com/aws/aws-sdk-go-v2/service/sts v1.34.0/go.mod h1:7ph2tGpfQvwzgistp2+zga9f+bCjlQJPkPUmMgDSD7w=
github.com/aws/smithy-go v1.22.4 h1:uqXzVZNuNexwc/xrh6Tb56u89WDlJY6HS+KC0S4QSjw=
github.com/aws/smithy-go v1.22.4/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp/HgceI=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
@ -51,8 +51,8 @@ github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cenkalti/backoff/v5 v5.0.2 h1:rIfFVxEf1QsI7E1ZHfp/B4DF/6QBAUhmgkxc0H7Zss8=
github.com/cenkalti/backoff/v5 v5.0.2/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
@ -126,8 +126,8 @@ github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.1.0 h1:pRhl55Yx1eC7BZ1N+BBWwn
github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.1.0/go.mod h1:XKMd7iuf/RGPSMJ/U4HP0zS2Z9Fh8Ps9a+6X26m/tmI=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.22.0 h1:asbCHRVmodnJTuQ3qamDwqVOIjwqUPTYmYuemVOx+Ys=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.22.0/go.mod h1:ggCgvZ2r7uOoQjOyu2Y1NhHmEPPzzuhWgcza5M1Ji1I=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 h1:5ZPtiqj0JL5oKWmcsq4VMaAW5ukBEgSGXEN89zeH1Jo=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3/go.mod h1:ndYquD05frm2vACXE1nsccT4oJzjhw2arTS2cpUD1PI=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jmhodges/clock v1.2.0 h1:eq4kys+NI0PLngzaHEe7AmPT90XMGIEySD1JfV1PDIs=
@ -274,26 +274,26 @@ github.com/zmap/zlint/v3 v3.6.6/go.mod h1:6yXG+CBOQBRpMCOnpIVPUUL296m5HYksZC9bj5
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.55.0 h1:hCq2hNMwsegUvPzI7sPOvtO9cqyy5GbWt/Ybp2xrx8Q=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.55.0/go.mod h1:LqaApwGx/oUmzsbqxkzuBvyoPpkxk3JQWnqfVrJ3wCA=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.55.0 h1:ZIg3ZT/aQ7AfKqdwp7ECpOK6vHqquXXuyTjIO8ZdmPs=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.55.0/go.mod h1:DQAwmETtZV00skUwgD6+0U89g80NKsJE3DCKeLLPQMI=
go.opentelemetry.io/otel v1.34.0 h1:zRLXxLCgL1WyKsPVrgbSdMN4c0FMkDAskSTQP+0hdUY=
go.opentelemetry.io/otel v1.34.0/go.mod h1:OWFPOQ+h4G8xpyjgqo4SxJYdDQ/qmRH+wivy7zzx9oI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.30.0 h1:lsInsfvhVIfOI6qHVyysXMNDnjO9Npvl7tlDPJFBVd4=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.30.0/go.mod h1:KQsVNh4OjgjTG0G6EiNi1jVpnaeeKsKMRwbLN+f1+8M=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.30.0 h1:m0yTiGDLUvVYaTFbAvCkVYIYcvwKt3G7OLoN77NUs/8=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.30.0/go.mod h1:wBQbT4UekBfegL2nx0Xk1vBcnzyBPsIVm9hRG4fYcr4=
go.opentelemetry.io/otel/metric v1.34.0 h1:+eTR3U0MyfWjRDhmFMxe2SsW64QrZ84AOhvqS7Y+PoQ=
go.opentelemetry.io/otel/metric v1.34.0/go.mod h1:CEDrp0fy2D0MvkXE+dPV7cMi8tWZwX3dmaIhwPOaqHE=
go.opentelemetry.io/otel/sdk v1.34.0 h1:95zS4k/2GOy069d321O8jWgYsW3MzVV+KuSPKp7Wr1A=
go.opentelemetry.io/otel/sdk v1.34.0/go.mod h1:0e/pNiaMAqaykJGKbi+tSjWfNNHMTxoC9qANsCzbyxU=
go.opentelemetry.io/otel/sdk/metric v1.34.0 h1:5CeK9ujjbFVL5c1PhLuStg1wxA7vQv7ce1EK0Gyvahk=
go.opentelemetry.io/otel/sdk/metric v1.34.0/go.mod h1:jQ/r8Ze28zRKoNRdkjCZxfs6YvBTG1+YIqyFVFYec5w=
go.opentelemetry.io/otel/trace v1.34.0 h1:+ouXS2V8Rd4hp4580a8q23bg0azF2nI8cqLYnC8mh/k=
go.opentelemetry.io/otel/trace v1.34.0/go.mod h1:Svm7lSjQD7kG7KJ/MUHPVXSDGz2OX4h0M2jHBhmSfRE=
go.opentelemetry.io/proto/otlp v1.3.1 h1:TrMUixzpM0yuc/znrFTP9MMRh8trP93mkCiDVeXrui0=
go.opentelemetry.io/proto/otlp v1.3.1/go.mod h1:0X1WI4de4ZsLrrJNLAQbFeLCm3T7yBkR0XqQ7niQU+8=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 h1:q4XOmH/0opmeuJtPsbFNivyl7bCt7yRBbeEm2sC/XtQ=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0/go.mod h1:snMWehoOh2wsEwnvvwtDyFCxVeDAODenXHtn5vzrKjo=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6hwnZ41NSFTO5q4LYDtJRXBf2PD0rNBkeB/lus=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q=
go.opentelemetry.io/otel v1.36.0 h1:UumtzIklRBY6cI/lllNZlALOF5nNIzJVb16APdvgTXg=
go.opentelemetry.io/otel v1.36.0/go.mod h1:/TcFMXYjyRNh8khOAO9ybYkqaDBb/70aVwkNML4pP8E=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0 h1:dNzwXjZKpMpE2JhmO+9HsPl42NIXFIFSUSSs0fiqra0=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0/go.mod h1:90PoxvaEB5n6AOdZvi+yWJQoE95U8Dhhw2bSyRqnTD0=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.36.0 h1:JgtbA0xkWHnTmYk7YusopJFX6uleBmAuZ8n05NEh8nQ=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.36.0/go.mod h1:179AK5aar5R3eS9FucPy6rggvU0g52cvKId8pv4+v0c=
go.opentelemetry.io/otel/metric v1.36.0 h1:MoWPKVhQvJ+eeXWHFBOPoBOi20jh6Iq2CcCREuTYufE=
go.opentelemetry.io/otel/metric v1.36.0/go.mod h1:zC7Ks+yeyJt4xig9DEw9kuUFe5C3zLbVjV2PzT6qzbs=
go.opentelemetry.io/otel/sdk v1.36.0 h1:b6SYIuLRs88ztox4EyrvRti80uXIFy+Sqzoh9kFULbs=
go.opentelemetry.io/otel/sdk v1.36.0/go.mod h1:+lC+mTgD+MUWfjJubi2vvXWcVxyr9rmlshZni72pXeY=
go.opentelemetry.io/otel/sdk/metric v1.36.0 h1:r0ntwwGosWGaa0CrSt8cuNuTcccMXERFwHX4dThiPis=
go.opentelemetry.io/otel/sdk/metric v1.36.0/go.mod h1:qTNOhFDfKRwX0yXOqJYegL5WRaW376QbB7P4Pb0qva4=
go.opentelemetry.io/otel/trace v1.36.0 h1:ahxWNuqZjpdiFAyrIoQ4GIiAIhxAunQR6MUoKrsNd4w=
go.opentelemetry.io/otel/trace v1.36.0/go.mod h1:gQ+OnDZzrybY4k4seLzPAWNwVBBVlF2szhehOBB/tGA=
go.opentelemetry.io/proto/otlp v1.6.0 h1:jQjP+AQyTf+Fe7OKj/MfkDrmK4MNVtw2NpXsf9fefDI=
go.opentelemetry.io/proto/otlp v1.6.0/go.mod h1:cicgGehlFuNdgZkcALOCh3VE6K/u2tAjzlRhDwmVpZc=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
@ -311,8 +311,9 @@ golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDf
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.32.0/go.mod h1:ZnnJkOaASj8g0AjIduWNlq2NRxL0PlBrbKVyZ6V/Ugc=
golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34=
golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc=
golang.org/x/crypto v0.38.0 h1:jt+WWG8IZlBnVbomuhg2Mdq0+BBQaHbtqHEFEigjUV8=
golang.org/x/crypto v0.38.0/go.mod h1:MvrbAqul58NNYPKnOra203SB9vpuZW0e+RRZV+Ggqjw=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
@ -343,8 +344,8 @@ golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k=
golang.org/x/net v0.37.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8=
golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
golang.org/x/net v0.40.0 h1:79Xs7wF06Gbdcg4kdCCIQArK11Z1hr5POQ6+fIYHNuY=
golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@ -358,8 +359,9 @@ golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.12.0 h1:MHc5BpPuC30uJk597Ri8TV3CNZcTLu6B6z4lJy+g6Jw=
golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.14.0 h1:woo0S4Yywslg6hp4eUFjTVOyKt0RookbpAHG4c1HmhQ=
golang.org/x/sync v0.14.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@ -384,8 +386,9 @@ golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik=
golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
@ -397,8 +400,9 @@ golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/term v0.28.0/go.mod h1:Sw/lC2IAUZ92udQNf3WodGtn4k/XoLyZoh8v/8uiwek=
golang.org/x/term v0.30.0 h1:PQ39fJZ+mfadBm0y5WlL4vlM7Sx1Hgf13sMIY2+QS9Y=
golang.org/x/term v0.30.0/go.mod h1:NYYFdzHoI5wRh/h5tDMdMqCqPJZEuNqVR5xJLd/n67g=
golang.org/x/term v0.32.0 h1:DR4lr0TjUs3epypdhTOkMmuF5CDFJ/8pOnbzMZPQ7bg=
golang.org/x/term v0.32.0/go.mod h1:uZG1FhGx848Sqfsq4/DlJr3xGGsYMu/L5GW4abiaEPQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
@ -409,8 +413,9 @@ golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY=
golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
golang.org/x/text v0.25.0 h1:qVyWApTSYLk/drJRO5mDlNYskwQznZmkpV2c8q9zls4=
golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0=
golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
@@ -431,14 +436,14 @@ golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8T
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto/googleapis/api v0.0.0-20250106144421-5f5ef82da422 h1:GVIKPyP/kLIyVOgOnTwFOrvQaQUzOzGMCxgFUOEmm24=
google.golang.org/genproto/googleapis/api v0.0.0-20250106144421-5f5ef82da422/go.mod h1:b6h1vNKhxaSoEI+5jc3PJUCustfli/mRab7295pY7rw=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f h1:OxYkA3wjPsZyBylwymxSHa7ViiW1Sml4ToBrncvFehI=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f/go.mod h1:+2Yz8+CLJbIfL9z73EW45avw8Lmge3xVElCP9zEKi50=
google.golang.org/genproto/googleapis/api v0.0.0-20250519155744-55703ea1f237 h1:Kog3KlB4xevJlAcbbbzPfRG0+X9fdoGM+UBRKVz6Wr0=
google.golang.org/genproto/googleapis/api v0.0.0-20250519155744-55703ea1f237/go.mod h1:ezi0AVyMKDWy5xAncvjLWH7UcLBB5n7y2fQ8MzjJcto=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250519155744-55703ea1f237 h1:cJfm9zPbe1e873mHJzmQ1nwVEeRDU/T1wXDK2kUSU34=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250519155744-55703ea1f237/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.71.1 h1:ffsFWr7ygTUscGPI0KKK6TLrGz0476KUvvsbqWK0rPI=
google.golang.org/grpc v1.71.1/go.mod h1:H0GRtasmQOh9LkFoCPDu3ZrwUtD1YGE+b2vYBYd/8Ec=
google.golang.org/grpc v1.72.1 h1:HR03wO6eyZ7lknl75XlxABNVLLFc2PAb6mHlYh756mA=
google.golang.org/grpc v1.72.1/go.mod h1:wH5Aktxcg25y1I3w7H69nHfXdOG3UiadoBtjh3izSDM=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=


@@ -14,11 +14,13 @@ import (
"github.com/letsencrypt/boulder/cmd"
bcreds "github.com/letsencrypt/boulder/grpc/creds"
// 'grpc/health' is imported for its init function, which causes clients to
// rely on the Health Service for load-balancing.
// 'grpc/internal/resolver/dns' is imported for its init function, which
// registers the SRV resolver.
"google.golang.org/grpc/balancer/roundrobin"
// 'grpc/health' is imported for its init function, which causes clients to
// rely on the Health Service for load-balancing as long as a
// "healthCheckConfig" is specified in the gRPC service config.
_ "google.golang.org/grpc/health"
_ "github.com/letsencrypt/boulder/grpc/internal/resolver/dns"
@@ -46,13 +48,11 @@ func ClientSetup(c *cmd.GRPCClientConfig, tlsConfig *tls.Config, statsRegistry p
unaryInterceptors := []grpc.UnaryClientInterceptor{
cmi.Unary,
cmi.metrics.grpcMetrics.UnaryClientInterceptor(),
otelgrpc.UnaryClientInterceptor(),
}
streamInterceptors := []grpc.StreamClientInterceptor{
cmi.Stream,
cmi.metrics.grpcMetrics.StreamClientInterceptor(),
otelgrpc.StreamClientInterceptor(),
}
target, hostOverride, err := c.MakeTargetAndHostOverride()
@@ -61,12 +61,27 @@ func ClientSetup(c *cmd.GRPCClientConfig, tlsConfig *tls.Config, statsRegistry p
}
creds := bcreds.NewClientCredentials(tlsConfig.RootCAs, tlsConfig.Certificates, hostOverride)
return grpc.Dial(
return grpc.NewClient(
target,
grpc.WithDefaultServiceConfig(fmt.Sprintf(`{"loadBalancingConfig": [{"%s":{}}]}`, roundrobin.Name)),
grpc.WithDefaultServiceConfig(
fmt.Sprintf(
// By setting the service name to an empty string in
// healthCheckConfig, we're instructing the gRPC client to query
// the overall health status of each server. The grpc-go health
// server, as constructed by health.NewServer(), unconditionally
// sets the overall service (e.g. "") status to SERVING. If a
// specific service name were set, the server would need to
// explicitly transition that service to SERVING; otherwise,
// clients would receive a NOT_FOUND status and the connection
// would be marked as unhealthy (TRANSIENT_FAILURE).
`{"healthCheckConfig": {"serviceName": ""},"loadBalancingConfig": [{"%s":{}}]}`,
roundrobin.Name,
),
),
grpc.WithTransportCredentials(creds),
grpc.WithChainUnaryInterceptor(unaryInterceptors...),
grpc.WithChainStreamInterceptor(streamInterceptors...),
grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
)
}
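For reference, the default service config that the `fmt.Sprintf` above produces, with `roundrobin.Name` (`"round_robin"`) interpolated, is the following JSON (pretty-printed here for readability; the code emits it on one line):

```json
{
  "healthCheckConfig": {"serviceName": ""},
  "loadBalancingConfig": [{"round_robin": {}}]
}
```

With the empty `serviceName`, each subchannel watches the server's overall health status, and the round-robin balancer only routes to addresses reporting SERVING.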


@@ -141,7 +141,7 @@ func ValidationRecordToPB(record core.ValidationRecord) (*corepb.ValidationRecor
return nil, err
}
return &corepb.ValidationRecord{
Hostname: record.DnsName,
Hostname: record.Hostname,
Port: record.Port,
AddressesResolved: addrs,
AddressUsed: addrUsed,
@@ -177,7 +177,7 @@ func PBToValidationRecord(in *corepb.ValidationRecord) (record core.ValidationRe
return
}
return core.ValidationRecord{
DnsName: in.Hostname,
Hostname: in.Hostname,
Port: in.Port,
AddressesResolved: addrs,
AddressUsed: addrUsed,
@@ -351,8 +351,8 @@ func newOrderValid(order *corepb.Order) bool {
return !(order.RegistrationID == 0 || order.Expires == nil || len(order.Identifiers) == 0)
}
// PBToAuthzMap converts a protobuf map of domains mapped to protobuf authorizations to a
// golang map[string]*core.Authorization.
// PBToAuthzMap converts a protobuf map of identifiers mapped to protobuf
// authorizations to a golang map[string]*core.Authorization.
func PBToAuthzMap(pb *sapb.Authorizations) (map[identifier.ACMEIdentifier]*core.Authorization, error) {
m := make(map[identifier.ACMEIdentifier]*core.Authorization, len(pb.Authzs))
for _, v := range pb.Authzs {


@@ -72,7 +72,7 @@ func TestChallenge(t *testing.T) {
ip := netip.MustParseAddr("1.1.1.1")
chall.ValidationRecord = []core.ValidationRecord{
{
DnsName: "example.com",
Hostname: "example.com",
Port: "2020",
AddressesResolved: []netip.Addr{ip},
AddressUsed: ip,
@@ -113,7 +113,7 @@ func TestChallenge(t *testing.T) {
func TestValidationRecord(t *testing.T) {
ip := netip.MustParseAddr("1.1.1.1")
vr := core.ValidationRecord{
DnsName: "exampleA.com",
Hostname: "exampleA.com",
Port: "80",
AddressesResolved: []netip.Addr{ip},
AddressUsed: ip,
@@ -134,7 +134,7 @@ func TestValidationRecord(t *testing.T) {
func TestValidationResult(t *testing.T) {
ip := netip.MustParseAddr("1.1.1.1")
vrA := core.ValidationRecord{
DnsName: "exampleA.com",
Hostname: "exampleA.com",
Port: "443",
AddressesResolved: []netip.Addr{ip},
AddressUsed: ip,
@@ -143,7 +143,7 @@ func TestValidationResult(t *testing.T) {
ResolverAddrs: []string{"resolver:5353"},
}
vrB := core.ValidationRecord{
DnsName: "exampleB.com",
Hostname: "exampleB.com",
Port: "443",
AddressesResolved: []netip.Addr{ip},
AddressUsed: ip,


@@ -6,6 +6,7 @@ import (
"errors"
"fmt"
"net"
"slices"
"strings"
"time"
@@ -123,12 +124,21 @@ func (sb *serverBuilder) Build(tlsConfig *tls.Config, statsRegistry prometheus.R
// These are the names which are allowlisted at the server level, plus the union
// of all names which are allowlisted for any individual service.
acceptedSANs := make(map[string]struct{})
var acceptedSANsSlice []string
for _, service := range sb.cfg.Services {
for _, name := range service.ClientNames {
acceptedSANs[name] = struct{}{}
if !slices.Contains(acceptedSANsSlice, name) {
acceptedSANsSlice = append(acceptedSANsSlice, name)
}
}
}
// Ensure that the health service has the same ClientNames as the other
// services, so that health checks can be performed by clients which are
// allowed to connect to the server.
sb.cfg.Services[healthpb.Health_ServiceDesc.ServiceName].ClientNames = acceptedSANsSlice
creds, err := bcreds.NewServerCredentials(tlsConfig, acceptedSANs)
if err != nil {
return nil, err
@@ -224,8 +234,12 @@ func (sb *serverBuilder) Build(tlsConfig *tls.Config, statsRegistry prometheus.R
// initLongRunningCheck initializes a goroutine which will periodically check
// the health of the provided service and update the health server accordingly.
//
// TODO(#8255): Remove the service parameter and instead rely on transitioning
// the overall health of the server (e.g. "") instead of individual services.
func (sb *serverBuilder) initLongRunningCheck(shutdownCtx context.Context, service string, checkImpl func(context.Context) error) {
// Set the initial health status for the service.
sb.healthSrv.SetServingStatus("", healthpb.HealthCheckResponse_NOT_SERVING)
sb.healthSrv.SetServingStatus(service, healthpb.HealthCheckResponse_NOT_SERVING)
// check is a helper function that checks the health of the service and, if
@@ -249,10 +263,13 @@ func (sb *serverBuilder) initLongRunningCheck(shutdownCtx context.Context, servi
}
if next != healthpb.HealthCheckResponse_SERVING {
sb.logger.Errf("transitioning overall health from %q to %q, due to: %s", last, next, err)
sb.logger.Errf("transitioning health of %q from %q to %q, due to: %s", service, last, next, err)
} else {
sb.logger.Infof("transitioning overall health from %q to %q", last, next)
sb.logger.Infof("transitioning health of %q from %q to %q", service, last, next)
}
sb.healthSrv.SetServingStatus("", next)
sb.healthSrv.SetServingStatus(service, next)
return next
}


@@ -11,7 +11,7 @@ import (
"google.golang.org/grpc/health"
)
func Test_serverBuilder_initLongRunningCheck(t *testing.T) {
func TestServerBuilderInitLongRunningCheck(t *testing.T) {
t.Parallel()
hs := health.NewServer()
mockLogger := blog.NewMock()
@@ -41,8 +41,8 @@ func Test_serverBuilder_initLongRunningCheck(t *testing.T) {
// - ~100ms 3rd check failed, SERVING to NOT_SERVING
serving := mockLogger.GetAllMatching(".*\"NOT_SERVING\" to \"SERVING\"")
notServing := mockLogger.GetAllMatching((".*\"SERVING\" to \"NOT_SERVING\""))
test.Assert(t, len(serving) == 1, "expected one serving log line")
test.Assert(t, len(notServing) == 1, "expected one not serving log line")
test.Assert(t, len(serving) == 2, "expected two serving log lines")
test.Assert(t, len(notServing) == 2, "expected two not serving log lines")
mockLogger.Clear()
@@ -67,6 +67,6 @@ func Test_serverBuilder_initLongRunningCheck(t *testing.T) {
// - ~100ms 3rd check passed, NOT_SERVING to SERVING
serving = mockLogger.GetAllMatching(".*\"NOT_SERVING\" to \"SERVING\"")
notServing = mockLogger.GetAllMatching((".*\"SERVING\" to \"NOT_SERVING\""))
test.Assert(t, len(serving) == 2, "expected two serving log lines")
test.Assert(t, len(notServing) == 1, "expected one not serving log line")
test.Assert(t, len(serving) == 4, "expected four serving log lines")
test.Assert(t, len(notServing) == 2, "expected two not serving log lines")
}


@@ -0,0 +1,26 @@
Address Block,Name,RFC,Allocation Date,Termination Date,Source,Destination,Forwardable,Globally Reachable,Reserved-by-Protocol
0.0.0.0/8,"""This network""","[RFC791], Section 3.2",1981-09,N/A,True,False,False,False,True
0.0.0.0/32,"""This host on this network""","[RFC1122], Section 3.2.1.3",1981-09,N/A,True,False,False,False,True
10.0.0.0/8,Private-Use,[RFC1918],1996-02,N/A,True,True,True,False,False
100.64.0.0/10,Shared Address Space,[RFC6598],2012-04,N/A,True,True,True,False,False
127.0.0.0/8,Loopback,"[RFC1122], Section 3.2.1.3",1981-09,N/A,False [1],False [1],False [1],False [1],True
169.254.0.0/16,Link Local,[RFC3927],2005-05,N/A,True,True,False,False,True
172.16.0.0/12,Private-Use,[RFC1918],1996-02,N/A,True,True,True,False,False
192.0.0.0/24 [2],IETF Protocol Assignments,"[RFC6890], Section 2.1",2010-01,N/A,False,False,False,False,False
192.0.0.0/29,IPv4 Service Continuity Prefix,[RFC7335],2011-06,N/A,True,True,True,False,False
192.0.0.8/32,IPv4 dummy address,[RFC7600],2015-03,N/A,True,False,False,False,False
192.0.0.9/32,Port Control Protocol Anycast,[RFC7723],2015-10,N/A,True,True,True,True,False
192.0.0.10/32,Traversal Using Relays around NAT Anycast,[RFC8155],2017-02,N/A,True,True,True,True,False
"192.0.0.170/32, 192.0.0.171/32",NAT64/DNS64 Discovery,"[RFC8880][RFC7050], Section 2.2",2013-02,N/A,False,False,False,False,True
192.0.2.0/24,Documentation (TEST-NET-1),[RFC5737],2010-01,N/A,False,False,False,False,False
192.31.196.0/24,AS112-v4,[RFC7535],2014-12,N/A,True,True,True,True,False
192.52.193.0/24,AMT,[RFC7450],2014-12,N/A,True,True,True,True,False
192.88.99.0/24,Deprecated (6to4 Relay Anycast),[RFC7526],2001-06,2015-03,,,,,
192.168.0.0/16,Private-Use,[RFC1918],1996-02,N/A,True,True,True,False,False
192.175.48.0/24,Direct Delegation AS112 Service,[RFC7534],1996-01,N/A,True,True,True,True,False
198.18.0.0/15,Benchmarking,[RFC2544],1999-03,N/A,True,True,True,False,False
198.51.100.0/24,Documentation (TEST-NET-2),[RFC5737],2010-01,N/A,False,False,False,False,False
203.0.113.0/24,Documentation (TEST-NET-3),[RFC5737],2010-01,N/A,False,False,False,False,False
240.0.0.0/4,Reserved,"[RFC1112], Section 4",1989-08,N/A,False,False,False,False,True
255.255.255.255/32,Limited Broadcast,"[RFC8190]
[RFC919], Section 7",1984-10,N/A,False,True,False,False,True
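Note that an "Address Block" cell in these registries can hold more than one prefix (e.g. `"192.0.0.170/32, 192.0.0.171/32"`) and can carry a trailing footnote marker (e.g. `192.0.0.0/24 [2]`). A minimal standalone sketch of the cleanup a parser has to perform on such cells, using only the Go standard library (the helper name `splitRegistryPrefixes` is illustrative, not from the repo):

```go
package main

import (
	"fmt"
	"net/netip"
	"regexp"
	"strings"
)

// footnoteRE strips trailing registry footnotes such as "[2]".
var footnoteRE = regexp.MustCompile(`\[\d+\]$`)

// splitRegistryPrefixes turns one "Address Block" cell into prefixes,
// handling comma-separated cells like "192.0.0.170/32, 192.0.0.171/32".
func splitRegistryPrefixes(raw string) ([]netip.Prefix, error) {
	var out []netip.Prefix
	for _, s := range strings.Split(footnoteRE.ReplaceAllLiteralString(raw, ""), ",") {
		p, err := netip.ParsePrefix(strings.TrimSpace(s))
		if err != nil {
			return nil, err
		}
		out = append(out, p)
	}
	return out, nil
}

func main() {
	for _, cell := range []string{"192.0.0.170/32, 192.0.0.171/32", "192.0.0.0/24 [2]"} {
		ps, err := splitRegistryPrefixes(cell)
		if err != nil {
			panic(err)
		}
		fmt.Println(ps)
	}
}
```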


@@ -0,0 +1,28 @@
Address Block,Name,RFC,Allocation Date,Termination Date,Source,Destination,Forwardable,Globally Reachable,Reserved-by-Protocol
::1/128,Loopback Address,[RFC4291],2006-02,N/A,False,False,False,False,True
::/128,Unspecified Address,[RFC4291],2006-02,N/A,True,False,False,False,True
::ffff:0:0/96,IPv4-mapped Address,[RFC4291],2006-02,N/A,False,False,False,False,True
64:ff9b::/96,IPv4-IPv6 Translat.,[RFC6052],2010-10,N/A,True,True,True,True,False
64:ff9b:1::/48,IPv4-IPv6 Translat.,[RFC8215],2017-06,N/A,True,True,True,False,False
100::/64,Discard-Only Address Block,[RFC6666],2012-06,N/A,True,True,True,False,False
100:0:0:1::/64,Dummy IPv6 Prefix,[RFC9780],2025-04,N/A,True,False,False,False,False
2001::/23,IETF Protocol Assignments,[RFC2928],2000-09,N/A,False [1],False [1],False [1],False [1],False
2001::/32,TEREDO,"[RFC4380]
[RFC8190]",2006-01,N/A,True,True,True,N/A [2],False
2001:1::1/128,Port Control Protocol Anycast,[RFC7723],2015-10,N/A,True,True,True,True,False
2001:1::2/128,Traversal Using Relays around NAT Anycast,[RFC8155],2017-02,N/A,True,True,True,True,False
2001:1::3/128,DNS-SD Service Registration Protocol Anycast,[RFC9665],2024-04,N/A,True,True,True,True,False
2001:2::/48,Benchmarking,[RFC5180][RFC Errata 1752],2008-04,N/A,True,True,True,False,False
2001:3::/32,AMT,[RFC7450],2014-12,N/A,True,True,True,True,False
2001:4:112::/48,AS112-v6,[RFC7535],2014-12,N/A,True,True,True,True,False
2001:10::/28,Deprecated (previously ORCHID),[RFC4843],2007-03,2014-03,,,,,
2001:20::/28,ORCHIDv2,[RFC7343],2014-07,N/A,True,True,True,True,False
2001:30::/28,Drone Remote ID Protocol Entity Tags (DETs) Prefix,[RFC9374],2022-12,N/A,True,True,True,True,False
2001:db8::/32,Documentation,[RFC3849],2004-07,N/A,False,False,False,False,False
2002::/16 [3],6to4,[RFC3056],2001-02,N/A,True,True,True,N/A [3],False
2620:4f:8000::/48,Direct Delegation AS112 Service,[RFC7534],2011-05,N/A,True,True,True,True,False
3fff::/20,Documentation,[RFC9637],2024-07,N/A,False,False,False,False,False
5f00::/16,Segment Routing (SRv6) SIDs,[RFC9602],2024-04,N/A,True,True,True,False,False
fc00::/7,Unique-Local,"[RFC4193]
[RFC8190]",2005-10,N/A,True,True,True,False [4],False
fe80::/10,Link-Local Unicast,[RFC4291],2006-02,N/A,True,True,False,False,True

iana/ip.go Normal file

@@ -0,0 +1,179 @@
package iana
import (
"bytes"
"encoding/csv"
"errors"
"fmt"
"io"
"net/netip"
"regexp"
"slices"
"strings"
_ "embed"
)
type reservedPrefix struct {
// addressFamily is "IPv4" or "IPv6".
addressFamily string
// The other fields are defined in:
// https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
// https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml
addressBlock netip.Prefix
name string
rfc string
// The BRs' requirement that we not issue for Reserved IP Addresses only
// cares about presence in one of these registries, not any of the other
// metadata fields tracked by the registries. Therefore, we ignore the
// Allocation Date, Termination Date, Source, Destination, Forwardable,
// Globally Reachable, and Reserved By Protocol columns.
}
var (
reservedPrefixes []reservedPrefix
// https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
//go:embed data/iana-ipv4-special-registry-1.csv
ipv4Registry []byte
// https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml
//go:embed data/iana-ipv6-special-registry-1.csv
ipv6Registry []byte
)
// init parses and loads the embedded IANA special-purpose address registry CSV
// files for all address families, panicking if any one fails.
func init() {
ipv4Prefixes, err := parseReservedPrefixFile(ipv4Registry, "IPv4")
if err != nil {
panic(err)
}
ipv6Prefixes, err := parseReservedPrefixFile(ipv6Registry, "IPv6")
if err != nil {
panic(err)
}
// Add multicast addresses, which aren't in the IANA registries.
//
// TODO(#8237): Move these entries to IP address blocklists once they're
// implemented.
additionalPrefixes := []reservedPrefix{
{
addressFamily: "IPv4",
addressBlock: netip.MustParsePrefix("224.0.0.0/4"),
name: "Multicast Addresses",
rfc: "[RFC3171]",
},
{
addressFamily: "IPv6",
addressBlock: netip.MustParsePrefix("ff00::/8"),
name: "Multicast Addresses",
rfc: "[RFC4291]",
},
}
reservedPrefixes = slices.Concat(ipv4Prefixes, ipv6Prefixes, additionalPrefixes)
// Sort the list of reserved prefixes in descending order of prefix size, so
// that checks will match the most-specific reserved prefix first.
slices.SortFunc(reservedPrefixes, func(a, b reservedPrefix) int {
if a.addressBlock.Bits() == b.addressBlock.Bits() {
return 0
}
if a.addressBlock.Bits() > b.addressBlock.Bits() {
return -1
}
return 1
})
}
// Define regexps we'll use to clean up poorly formatted registry entries.
var (
// 2+ sequential whitespace characters. The csv package takes care of
// newlines automatically.
ianaWhitespacesRE = regexp.MustCompile(`\s{2,}`)
// Footnotes at the end, like `[2]`.
ianaFootnotesRE = regexp.MustCompile(`\[\d+\]$`)
)
// parseReservedPrefixFile parses and returns the IANA special-purpose address
// registry CSV data for a single address family, or returns an error if parsing
// fails.
func parseReservedPrefixFile(registryData []byte, addressFamily string) ([]reservedPrefix, error) {
if addressFamily != "IPv4" && addressFamily != "IPv6" {
return nil, fmt.Errorf("failed to parse reserved address registry: invalid address family %q", addressFamily)
}
if registryData == nil {
return nil, fmt.Errorf("failed to parse reserved %s address registry: empty", addressFamily)
}
reader := csv.NewReader(bytes.NewReader(registryData))
// Parse the header row.
record, err := reader.Read()
if err != nil {
return nil, fmt.Errorf("failed to parse reserved %s address registry header: %w", addressFamily, err)
}
if record[0] != "Address Block" || record[1] != "Name" || record[2] != "RFC" {
return nil, fmt.Errorf("failed to parse reserved %s address registry header: must begin with \"Address Block\", \"Name\" and \"RFC\"", addressFamily)
}
// Parse the records.
var prefixes []reservedPrefix
for {
row, err := reader.Read()
if errors.Is(err, io.EOF) {
// Finished parsing the file.
if len(prefixes) < 1 {
return nil, fmt.Errorf("failed to parse reserved %s address registry: no rows after header", addressFamily)
}
break
} else if err != nil {
return nil, err
} else if len(row) < 3 {
return nil, fmt.Errorf("failed to parse reserved %s address registry: incomplete row", addressFamily)
}
// Remove any footnotes, then handle each comma-separated prefix.
for _, prefixStr := range strings.Split(ianaFootnotesRE.ReplaceAllLiteralString(row[0], ""), ",") {
prefix, err := netip.ParsePrefix(strings.TrimSpace(prefixStr))
if err != nil {
return nil, fmt.Errorf("failed to parse reserved %s address registry: couldn't parse entry %q as an IP address prefix: %s", addressFamily, prefixStr, err)
}
prefixes = append(prefixes, reservedPrefix{
addressFamily: addressFamily,
addressBlock: prefix,
name: row[1],
// Replace any whitespace sequences with a single space.
rfc: ianaWhitespacesRE.ReplaceAllLiteralString(row[2], " "),
})
}
}
return prefixes, nil
}
// IsReservedAddr returns an error if an IP address is part of a reserved range.
func IsReservedAddr(ip netip.Addr) error {
for _, rpx := range reservedPrefixes {
if rpx.addressBlock.Contains(ip) {
return fmt.Errorf("IP address is in a reserved address block: %s: %s", rpx.rfc, rpx.name)
}
}
return nil
}
// IsReservedPrefix returns an error if an IP address prefix overlaps with a
// reserved range.
func IsReservedPrefix(prefix netip.Prefix) error {
for _, rpx := range reservedPrefixes {
if rpx.addressBlock.Overlaps(prefix) {
return fmt.Errorf("IP address is in a reserved address block: %s: %s", rpx.rfc, rpx.name)
}
}
return nil
}

iana/ip_test.go Normal file

@@ -0,0 +1,96 @@
package iana
import (
"net/netip"
"strings"
"testing"
)
func TestIsReservedAddr(t *testing.T) {
t.Parallel()
cases := []struct {
ip string
want string
}{
{"127.0.0.1", "Loopback"}, // second-lowest IP in a reserved /8, common mistaken request
{"128.0.0.1", ""}, // second-lowest IP just above a reserved /8
{"192.168.254.254", "Private-Use"}, // highest IP in a reserved /16
{"192.169.255.255", ""}, // highest IP in the /16 above a reserved /16
{"::", "Unspecified Address"}, // lowest possible IPv6 address, reserved, possible parsing edge case
{"::1", "Loopback Address"}, // reserved, common mistaken request
{"::2", ""}, // surprisingly unreserved
{"fe80::1", "Link-Local Unicast"}, // second-lowest IP in a reserved /10
{"febf:ffff:ffff:ffff:ffff:ffff:ffff:ffff", "Link-Local Unicast"}, // highest IP in a reserved /10
{"fec0::1", ""}, // second-lowest IP just above a reserved /10
{"192.0.0.170", "NAT64/DNS64 Discovery"}, // first of two reserved IPs that are comma-split in IANA's CSV; also a more-specific of a larger reserved block that comes first
{"192.0.0.171", "NAT64/DNS64 Discovery"}, // second of two reserved IPs that are comma-split in IANA's CSV; also a more-specific of a larger reserved block that comes first
{"2001:1::1", "Port Control Protocol Anycast"}, // reserved IP that comes after a line with a line break in IANA's CSV; also a more-specific of a larger reserved block that comes first
{"2002::", "6to4"}, // lowest IP in a reserved /16 that has a footnote in IANA's CSV
{"2002:ffff:ffff:ffff:ffff:ffff:ffff:ffff", "6to4"}, // highest IP in a reserved /16 that has a footnote in IANA's CSV
{"0100::", "Discard-Only Address Block"}, // part of a reserved block in a non-canonical IPv6 format
{"0100::0000:ffff:ffff:ffff:ffff", "Discard-Only Address Block"}, // part of a reserved block in a non-canonical IPv6 format
{"0100::0002:0000:0000:0000:0000", ""}, // non-reserved but in a non-canonical IPv6 format
// TODO(#8237): Move these entries to IP address blocklists once they're
// implemented.
{"ff00::1", "Multicast Addresses"}, // second-lowest IP in a reserved /8 we hardcode
{"ff10::1", "Multicast Addresses"}, // in the middle of a reserved /8 we hardcode
{"ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff", "Multicast Addresses"}, // highest IP in a reserved /8 we hardcode
}
for _, tc := range cases {
t.Run(tc.ip, func(t *testing.T) {
t.Parallel()
err := IsReservedAddr(netip.MustParseAddr(tc.ip))
if err == nil && tc.want != "" {
t.Errorf("Got success, wanted error for %#v", tc.ip)
}
if err != nil && !strings.Contains(err.Error(), tc.want) {
t.Errorf("%#v: got %q, want %q", tc.ip, err.Error(), tc.want)
}
})
}
}
func TestIsReservedPrefix(t *testing.T) {
t.Parallel()
cases := []struct {
cidr string
want bool
}{
{"172.16.0.0/12", true},
{"172.16.0.0/32", true},
{"172.16.0.1/32", true},
{"172.31.255.0/24", true},
{"172.31.255.255/24", true},
{"172.31.255.255/32", true},
{"172.32.0.0/24", false},
{"172.32.0.1/32", false},
{"100::/64", true},
{"100::/128", true},
{"100::1/128", true},
{"100::1:ffff:ffff:ffff:ffff/128", true},
{"100:0:0:2::/64", false},
{"100:0:0:2::1/128", false},
}
for _, tc := range cases {
t.Run(tc.cidr, func(t *testing.T) {
t.Parallel()
err := IsReservedPrefix(netip.MustParsePrefix(tc.cidr))
if err != nil && !tc.want {
t.Error(err)
}
if err == nil && tc.want {
t.Errorf("Wanted error for %#v, got success", tc.cidr)
}
})
}
}


@@ -110,21 +110,6 @@ func NewDNSSlice(input []string) ACMEIdentifiers {
return out
}
// ToDNSSlice returns a list of DNS names from the input if the input contains
// only DNS identifiers. Otherwise, it returns an error.
//
// TODO(#8023): Remove this when we no longer have any bare dnsNames slices.
func (idents ACMEIdentifiers) ToDNSSlice() ([]string, error) {
var out []string
for _, in := range idents {
if in.Type != "dns" {
return nil, fmt.Errorf("identifier '%s' is of type '%s', not DNS", in.Value, in.Type)
}
out = append(out, in.Value)
}
return out, nil
}
// NewIP is a convenience function for creating an ACMEIdentifier with Type "ip"
// for a given IP address.
func NewIP(ip netip.Addr) ACMEIdentifier {
@@ -227,37 +212,3 @@ func (idents ACMEIdentifiers) ToValues() ([]string, []net.IP, error) {
return dnsNames, ipAddresses, nil
}
// hasIdentifier matches any protobuf struct that has both Identifier and
// DnsName fields, like Authorization, Order, or many SA requests. This lets us
// convert these to ACMEIdentifier, vice versa, etc.
type hasIdentifier interface {
GetIdentifier() *corepb.Identifier
GetDnsName() string
}
// FromProtoWithDefault can be removed after DnsNames are no longer used in RPCs.
// TODO(#8023)
func FromProtoWithDefault(input hasIdentifier) ACMEIdentifier {
if input.GetIdentifier() != nil {
return FromProto(input.GetIdentifier())
}
return NewDNS(input.GetDnsName())
}
// hasIdentifiers matches any protobuf struct that has both Identifiers and
// DnsNames fields, like NewOrderRequest or many SA requests. This lets us
// convert these to ACMEIdentifiers, vice versa, etc.
type hasIdentifiers interface {
GetIdentifiers() []*corepb.Identifier
GetDnsNames() []string
}
// FromProtoSliceWithDefault can be removed after DnsNames are no longer used in
// RPCs. TODO(#8023)
func FromProtoSliceWithDefault(input hasIdentifiers) ACMEIdentifiers {
if len(input.GetIdentifiers()) > 0 {
return FromProtoSlice(input.GetIdentifiers())
}
return NewDNSSlice(input.GetDnsNames())
}


@@ -8,125 +8,8 @@ import (
"reflect"
"slices"
"testing"
corepb "github.com/letsencrypt/boulder/core/proto"
)
type withDefaultTestCases struct {
Name string
InputIdents []*corepb.Identifier
InputNames []string
want ACMEIdentifiers
}
func (tc withDefaultTestCases) GetIdentifiers() []*corepb.Identifier {
return tc.InputIdents
}
func (tc withDefaultTestCases) GetDnsNames() []string {
return tc.InputNames
}
func TestFromProtoSliceWithDefault(t *testing.T) {
testCases := []withDefaultTestCases{
{
Name: "Populated identifiers, populated names, same values",
InputIdents: []*corepb.Identifier{
{Type: "dns", Value: "a.example.com"},
{Type: "dns", Value: "b.example.com"},
},
InputNames: []string{"a.example.com", "b.example.com"},
want: ACMEIdentifiers{
{Type: TypeDNS, Value: "a.example.com"},
{Type: TypeDNS, Value: "b.example.com"},
},
},
{
Name: "Populated identifiers, populated names, different values",
InputIdents: []*corepb.Identifier{
{Type: "dns", Value: "coffee.example.com"},
},
InputNames: []string{"tea.example.com"},
want: ACMEIdentifiers{
{Type: TypeDNS, Value: "coffee.example.com"},
},
},
{
Name: "Populated identifiers, empty names",
InputIdents: []*corepb.Identifier{
{Type: "dns", Value: "example.com"},
},
InputNames: []string{},
want: ACMEIdentifiers{
{Type: TypeDNS, Value: "example.com"},
},
},
{
Name: "Populated identifiers, nil names",
InputIdents: []*corepb.Identifier{
{Type: "dns", Value: "example.com"},
},
InputNames: nil,
want: ACMEIdentifiers{
{Type: TypeDNS, Value: "example.com"},
},
},
{
Name: "Empty identifiers, populated names",
InputIdents: []*corepb.Identifier{},
InputNames: []string{"a.example.com", "b.example.com"},
want: ACMEIdentifiers{
{Type: TypeDNS, Value: "a.example.com"},
{Type: TypeDNS, Value: "b.example.com"},
},
},
{
Name: "Empty identifiers, empty names",
InputIdents: []*corepb.Identifier{},
InputNames: []string{},
want: nil,
},
{
Name: "Empty identifiers, nil names",
InputIdents: []*corepb.Identifier{},
InputNames: nil,
want: nil,
},
{
Name: "Nil identifiers, populated names",
InputIdents: nil,
InputNames: []string{"a.example.com", "b.example.com"},
want: ACMEIdentifiers{
{Type: TypeDNS, Value: "a.example.com"},
{Type: TypeDNS, Value: "b.example.com"},
},
},
{
Name: "Nil identifiers, empty names",
InputIdents: nil,
InputNames: []string{},
want: nil,
},
{
Name: "Nil identifiers, nil names",
InputIdents: nil,
InputNames: nil,
want: nil,
},
}
for _, tc := range testCases {
t.Run(tc.Name, func(t *testing.T) {
t.Parallel()
got := FromProtoSliceWithDefault(tc)
if !slices.Equal(got, tc.want) {
t.Errorf("Got %#v, but want %#v", got, tc.want)
}
})
}
}
// TestFromX509 tests FromCert and FromCSR, which are fromX509's public
// wrappers.
func TestFromX509(t *testing.T) {


@@ -1,430 +0,0 @@
package mail
import (
"bytes"
"crypto/rand"
"crypto/tls"
"crypto/x509"
"errors"
"fmt"
"io"
"math"
"math/big"
"mime/quotedprintable"
"net"
"net/mail"
"net/smtp"
"net/textproto"
"strconv"
"strings"
"syscall"
"time"
"github.com/jmhodges/clock"
"github.com/prometheus/client_golang/prometheus"
"github.com/letsencrypt/boulder/core"
blog "github.com/letsencrypt/boulder/log"
)
type idGenerator interface {
generate() *big.Int
}
var maxBigInt = big.NewInt(math.MaxInt64)
type realSource struct{}
func (s realSource) generate() *big.Int {
randInt, err := rand.Int(rand.Reader, maxBigInt)
if err != nil {
panic(err)
}
return randInt
}
// Mailer is an interface that allows creating Conns. Implementations must
// be safe for concurrent use.
type Mailer interface {
Connect() (Conn, error)
}
// Conn is an interface that allows sending mail. When you are done with a
// Conn, call Close(). Implementations are not required to be safe for
// concurrent use.
type Conn interface {
SendMail([]string, string, string) error
Close() error
}
// connImpl represents a single connection to a mail server. It is not safe
// for concurrent use.
type connImpl struct {
config
client smtpClient
}
// mailerImpl defines a mail transfer agent to use for sending mail. It is
// safe for concurrent use.
type mailerImpl struct {
config
}
type config struct {
log blog.Logger
dialer dialer
from mail.Address
clk clock.Clock
csprgSource idGenerator
reconnectBase time.Duration
reconnectMax time.Duration
sendMailAttempts *prometheus.CounterVec
}
type dialer interface {
Dial() (smtpClient, error)
}
type smtpClient interface {
Mail(string) error
Rcpt(string) error
Data() (io.WriteCloser, error)
Reset() error
Close() error
}
type dryRunClient struct {
log blog.Logger
}
func (d dryRunClient) Dial() (smtpClient, error) {
return d, nil
}
func (d dryRunClient) Mail(from string) error {
d.log.Debugf("MAIL FROM:<%s>", from)
return nil
}
func (d dryRunClient) Rcpt(to string) error {
d.log.Debugf("RCPT TO:<%s>", to)
return nil
}
func (d dryRunClient) Close() error {
return nil
}
func (d dryRunClient) Data() (io.WriteCloser, error) {
return d, nil
}
func (d dryRunClient) Write(p []byte) (n int, err error) {
for _, line := range strings.Split(string(p), "\n") {
d.log.Debugf("data: %s", line)
}
return len(p), nil
}
func (d dryRunClient) Reset() (err error) {
d.log.Debugf("RESET")
return nil
}
// New constructs a Mailer to represent an account on a particular mail
// transfer agent.
func New(
server,
port,
username,
password string,
rootCAs *x509.CertPool,
from mail.Address,
logger blog.Logger,
stats prometheus.Registerer,
reconnectBase time.Duration,
reconnectMax time.Duration) *mailerImpl {
sendMailAttempts := prometheus.NewCounterVec(prometheus.CounterOpts{
Name: "send_mail_attempts",
Help: "A counter of send mail attempts labelled by result",
}, []string{"result", "error"})
stats.MustRegister(sendMailAttempts)
return &mailerImpl{
config: config{
dialer: &dialerImpl{
username: username,
password: password,
server: server,
port: port,
rootCAs: rootCAs,
},
log: logger,
from: from,
clk: clock.New(),
csprgSource: realSource{},
reconnectBase: reconnectBase,
reconnectMax: reconnectMax,
sendMailAttempts: sendMailAttempts,
},
}
}
// NewDryRun constructs a Mailer suitable for doing a dry run. It simply logs
// each command that would have been run, at debug level.
func NewDryRun(from mail.Address, logger blog.Logger) *mailerImpl {
return &mailerImpl{
config: config{
dialer: dryRunClient{logger},
from: from,
clk: clock.New(),
csprgSource: realSource{},
sendMailAttempts: prometheus.NewCounterVec(prometheus.CounterOpts{
Name: "send_mail_attempts",
Help: "A counter of send mail attempts labelled by result",
}, []string{"result", "error"}),
},
}
}
func (c config) generateMessage(to []string, subject, body string) ([]byte, error) {
mid := c.csprgSource.generate()
now := c.clk.Now().UTC()
addrs := []string{}
for _, a := range to {
if !core.IsASCII(a) {
return nil, fmt.Errorf("Non-ASCII email address")
}
addrs = append(addrs, strconv.Quote(a))
}
headers := []string{
fmt.Sprintf("To: %s", strings.Join(addrs, ", ")),
fmt.Sprintf("From: %s", c.from.String()),
fmt.Sprintf("Subject: %s", subject),
fmt.Sprintf("Date: %s", now.Format(time.RFC822)),
fmt.Sprintf("Message-Id: <%s.%s.%s>", now.Format("20060102T150405"), mid.String(), c.from.Address),
"MIME-Version: 1.0",
"Content-Type: text/plain; charset=UTF-8",
"Content-Transfer-Encoding: quoted-printable",
}
for i := 1; i < len(headers); i++ {
// strip LFs from every header after To:
headers[i] = strings.Replace(headers[i], "\n", "", -1)
}
bodyBuf := new(bytes.Buffer)
mimeWriter := quotedprintable.NewWriter(bodyBuf)
_, err := mimeWriter.Write([]byte(body))
if err != nil {
return nil, err
}
err = mimeWriter.Close()
if err != nil {
return nil, err
}
return []byte(fmt.Sprintf(
"%s\r\n\r\n%s\r\n",
strings.Join(headers, "\r\n"),
bodyBuf.String(),
)), nil
}
func (c *connImpl) reconnect() {
for i := 0; ; i++ {
sleepDuration := core.RetryBackoff(i, c.reconnectBase, c.reconnectMax, 2)
c.log.Infof("sleeping for %s before reconnecting mailer", sleepDuration)
c.clk.Sleep(sleepDuration)
c.log.Info("attempting to reconnect mailer")
client, err := c.dialer.Dial()
if err != nil {
c.log.Warningf("reconnect error: %s", err)
continue
}
c.client = client
break
}
c.log.Info("reconnected successfully")
}
// Connect opens a connection to the specified mail server. It must be called
// before SendMail.
func (m *mailerImpl) Connect() (Conn, error) {
client, err := m.dialer.Dial()
if err != nil {
return nil, err
}
return &connImpl{m.config, client}, nil
}
type dialerImpl struct {
username, password, server, port string
rootCAs *x509.CertPool
}
func (di *dialerImpl) Dial() (smtpClient, error) {
hostport := net.JoinHostPort(di.server, di.port)
var conn net.Conn
var err error
conn, err = tls.Dial("tcp", hostport, &tls.Config{
RootCAs: di.rootCAs,
})
if err != nil {
return nil, err
}
client, err := smtp.NewClient(conn, di.server)
if err != nil {
return nil, err
}
auth := smtp.PlainAuth("", di.username, di.password, di.server)
if err = client.Auth(auth); err != nil {
return nil, err
}
return client, nil
}
// resetAndError resets the current mail transaction and then returns its
// argument as an error. If the reset command also errors, it combines both
// errors and returns them. Without this we would get `nested MAIL command`.
// https://github.com/letsencrypt/boulder/issues/3191
func (c *connImpl) resetAndError(err error) error {
if err == io.EOF {
return err
}
if err2 := c.client.Reset(); err2 != nil {
return fmt.Errorf("%s (also, on sending RSET: %s)", err, err2)
}
return err
}
func (c *connImpl) sendOne(to []string, subject, msg string) error {
if c.client == nil {
return errors.New("call Connect before SendMail")
}
body, err := c.generateMessage(to, subject, msg)
if err != nil {
return err
}
if err = c.client.Mail(c.from.String()); err != nil {
return err
}
for _, t := range to {
if err = c.client.Rcpt(t); err != nil {
return c.resetAndError(err)
}
}
w, err := c.client.Data()
if err != nil {
return c.resetAndError(err)
}
_, err = w.Write(body)
if err != nil {
return c.resetAndError(err)
}
err = w.Close()
if err != nil {
return c.resetAndError(err)
}
return nil
}
// BadAddressSMTPError is returned by SendMail when the server rejects a message
// but for a reason that doesn't prevent us from continuing to send mail. The
// error message contains the error code and the error message returned from the
// server.
type BadAddressSMTPError struct {
Message string
}
func (e BadAddressSMTPError) Error() string {
return e.Message
}
// Based on a reading of various SMTP documents, these are a handful of
// errors we are likely to be able to continue sending mail after
// receiving. The majority of these errors boil down to 'bad address'.
var badAddressErrorCodes = map[int]bool{
401: true, // Invalid recipient
422: true, // Recipient mailbox is full
441: true, // Recipient server is not responding
450: true, // User's mailbox is not available
501: true, // Bad recipient address syntax
510: true, // Invalid recipient
511: true, // Invalid recipient
513: true, // Address type invalid
541: true, // Recipient rejected message
550: true, // Non-existent address
553: true, // Non-existent address
}
// SendMail sends an email to the provided list of recipients. The email body
// is simple text.
func (c *connImpl) SendMail(to []string, subject, msg string) error {
var protoErr *textproto.Error
for {
err := c.sendOne(to, subject, msg)
if err == nil {
// If the error is nil, we sent the mail without issue. nice!
break
} else if err == io.EOF {
c.sendMailAttempts.WithLabelValues("failure", "EOF").Inc()
// If the error is an EOF, we should try to reconnect on a backoff
// schedule, sleeping between attempts.
c.reconnect()
// After reconnecting, loop around and try `sendOne` again.
continue
} else if errors.Is(err, syscall.ECONNRESET) {
c.sendMailAttempts.WithLabelValues("failure", "TCP RST").Inc()
// If the error is `syscall.ECONNRESET`, we should try to reconnect on a backoff
// schedule, sleeping between attempts.
c.reconnect()
// After reconnecting, loop around and try `sendOne` again.
continue
} else if errors.Is(err, syscall.EPIPE) {
// EPIPE also seems to be a common way to signal TCP RST.
c.sendMailAttempts.WithLabelValues("failure", "EPIPE").Inc()
c.reconnect()
continue
} else if errors.As(err, &protoErr) && protoErr.Code == 421 {
c.sendMailAttempts.WithLabelValues("failure", "SMTP 421").Inc()
/*
* If the error is an instance of `textproto.Error` with an SMTP error code,
* and that error code is 421 then treat this as a reconnect-able event.
*
* The SMTP RFC defines this error code as:
* 421 <domain> Service not available, closing transmission channel
* (This may be a reply to any command if the service knows it
* must shut down)
*
* In practice we see this code being used by our production SMTP server
* when the connection has gone idle for too long. For more information
* see issue #2249[0].
*
* [0] - https://github.com/letsencrypt/boulder/issues/2249
*/
c.reconnect()
// After reconnecting, loop around and try `sendOne` again.
continue
} else if errors.As(err, &protoErr) && badAddressErrorCodes[protoErr.Code] {
c.sendMailAttempts.WithLabelValues("failure", fmt.Sprintf("SMTP %d", protoErr.Code)).Inc()
return BadAddressSMTPError{fmt.Sprintf("%d: %s", protoErr.Code, protoErr.Msg)}
} else {
// If it wasn't an EOF error or a recoverable SMTP error it is unexpected and we
// return from SendMail() with the error
c.sendMailAttempts.WithLabelValues("failure", "unexpected").Inc()
return err
}
}
c.sendMailAttempts.WithLabelValues("success", "").Inc()
return nil
}
// Close closes the connection.
func (c *connImpl) Close() error {
err := c.client.Close()
if err != nil {
return err
}
c.client = nil
return nil
}


@@ -1,545 +0,0 @@
package mail
import (
"bufio"
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"crypto/tls"
"crypto/x509"
"fmt"
"math/big"
"net"
"net/mail"
"net/textproto"
"strings"
"testing"
"time"
"github.com/jmhodges/clock"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/metrics"
"github.com/letsencrypt/boulder/test"
)
var (
// These variables are populated by init(), and then referenced by setup() and
// listenForever(). smtpCert is the TLS certificate which will be served by
// the fake SMTP server, and smtpRoot is the issuer of that certificate which
// will be trusted by the SMTP client under test.
smtpRoot *x509.CertPool
smtpCert *tls.Certificate
)
func init() {
// Populate the global smtpRoot and smtpCert variables. We use a single self
// signed cert for both, for ease of generation. It has to assert the name
// localhost to appease the mailer, which is connecting to localhost.
key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
if err != nil {
panic(err)
}
template := x509.Certificate{
DNSNames: []string{"localhost"},
SerialNumber: big.NewInt(123),
NotBefore: time.Now().Add(-24 * time.Hour),
NotAfter: time.Now().Add(24 * time.Hour),
}
certDER, err := x509.CreateCertificate(rand.Reader, &template, &template, key.Public(), key)
if err != nil {
panic(err)
}
cert, err := x509.ParseCertificate(certDER)
if err != nil {
panic(err)
}
smtpRoot = x509.NewCertPool()
smtpRoot.AddCert(cert)
smtpCert = &tls.Certificate{
Certificate: [][]byte{certDER},
PrivateKey: key,
Leaf: cert,
}
}
type fakeSource struct{}
func (f fakeSource) generate() *big.Int {
return big.NewInt(1991)
}
func TestGenerateMessage(t *testing.T) {
fc := clock.NewFake()
fromAddress, _ := mail.ParseAddress("happy sender <send@email.com>")
log := blog.UseMock()
m := New("", "", "", "", nil, *fromAddress, log, metrics.NoopRegisterer, 0, 0)
m.clk = fc
m.csprgSource = fakeSource{}
messageBytes, err := m.generateMessage([]string{"recv@email.com"}, "test subject", "this is the body\n")
test.AssertNotError(t, err, "Failed to generate email body")
message := string(messageBytes)
fields := strings.Split(message, "\r\n")
test.AssertEquals(t, len(fields), 12)
test.AssertEquals(t, fields[0], "To: \"recv@email.com\"")
test.AssertEquals(t, fields[1], "From: \"happy sender\" <send@email.com>")
test.AssertEquals(t, fields[2], "Subject: test subject")
test.AssertEquals(t, fields[3], "Date: 01 Jan 70 00:00 UTC")
test.AssertEquals(t, fields[4], "Message-Id: <19700101T000000.1991.send@email.com>")
test.AssertEquals(t, fields[5], "MIME-Version: 1.0")
test.AssertEquals(t, fields[6], "Content-Type: text/plain; charset=UTF-8")
test.AssertEquals(t, fields[7], "Content-Transfer-Encoding: quoted-printable")
test.AssertEquals(t, fields[8], "")
test.AssertEquals(t, fields[9], "this is the body")
}
func TestFailNonASCIIAddress(t *testing.T) {
log := blog.UseMock()
fromAddress, _ := mail.ParseAddress("send@email.com")
m := New("", "", "", "", nil, *fromAddress, log, metrics.NoopRegisterer, 0, 0)
_, err := m.generateMessage([]string{"遗憾@email.com"}, "test subject", "this is the body\n")
test.AssertError(t, err, "Allowed a non-ASCII to address incorrectly")
}
func expect(t *testing.T, buf *bufio.Reader, expected string) error {
line, _, err := buf.ReadLine()
if err != nil {
t.Errorf("readline: %s expected: %s\n", err, expected)
return err
}
if string(line) != expected {
t.Errorf("Expected %s, got %s", expected, line)
return fmt.Errorf("Expected %s, got %s", expected, line)
}
return nil
}
type connHandler func(int, *testing.T, net.Conn, *net.TCPConn)
func listenForever(l *net.TCPListener, t *testing.T, handler connHandler) {
tlsConf := &tls.Config{
Certificates: []tls.Certificate{*smtpCert},
}
connID := 0
for {
tcpConn, err := l.AcceptTCP()
if err != nil {
return
}
tlsConn := tls.Server(tcpConn, tlsConf)
connID++
go handler(connID, t, tlsConn, tcpConn)
}
}
func authenticateClient(t *testing.T, conn net.Conn) {
buf := bufio.NewReader(conn)
// we can ignore write errors because any
// failures will be caught on the connecting
// side
_, _ = conn.Write([]byte("220 smtp.example.com ESMTP\n"))
err := expect(t, buf, "EHLO localhost")
if err != nil {
return
}
_, _ = conn.Write([]byte("250-PIPELINING\n"))
_, _ = conn.Write([]byte("250-AUTH PLAIN LOGIN\n"))
_, _ = conn.Write([]byte("250 8BITMIME\n"))
// Base64 encoding of "\0user@example.com\0passwd"
err = expect(t, buf, "AUTH PLAIN AHVzZXJAZXhhbXBsZS5jb20AcGFzc3dk")
if err != nil {
return
}
_, _ = conn.Write([]byte("235 2.7.0 Authentication successful\n"))
}
// The normal handler authenticates the client and then disconnects without
// further command processing. It is sufficient for TestConnect().
func normalHandler(connID int, t *testing.T, tlsConn net.Conn, tcpConn *net.TCPConn) {
defer func() {
err := tlsConn.Close()
if err != nil {
t.Errorf("conn.Close: %s", err)
}
}()
authenticateClient(t, tlsConn)
}
// The disconnectHandler authenticates the client like the normalHandler but
// additionally processes an email flow (e.g. MAIL, RCPT and DATA commands).
// When the `connID` is <= `closeFirst` the connection is closed immediately
// after the MAIL command is received and prior to issuing a 250 response. If
// a `goodbyeMsg` is provided, it is written to the client immediately before
// closing. In this way the first `closeFirst` connections will not complete
// normally and can be tested for reconnection logic.
func disconnectHandler(closeFirst int, goodbyeMsg string) connHandler {
return func(connID int, t *testing.T, conn net.Conn, _ *net.TCPConn) {
defer func() {
err := conn.Close()
if err != nil {
t.Errorf("conn.Close: %s", err)
}
}()
authenticateClient(t, conn)
buf := bufio.NewReader(conn)
err := expect(t, buf, "MAIL FROM:<<you-are-a-winner@example.com>> BODY=8BITMIME")
if err != nil {
return
}
if connID <= closeFirst {
// If there was a `goodbyeMsg` specified, write it to the client before
// closing the connection. This is a good way to deliver a SMTP error
// before closing
if goodbyeMsg != "" {
_, _ = fmt.Fprintf(conn, "%s\r\n", goodbyeMsg)
t.Logf("Wrote goodbye msg: %s", goodbyeMsg)
}
t.Log("Cutting off client early")
return
}
_, _ = conn.Write([]byte("250 Sure. Go on. \r\n"))
err = expect(t, buf, "RCPT TO:<hi@bye.com>")
if err != nil {
return
}
_, _ = conn.Write([]byte("250 Tell Me More \r\n"))
err = expect(t, buf, "DATA")
if err != nil {
return
}
_, _ = conn.Write([]byte("354 Cool Data\r\n"))
_, _ = conn.Write([]byte("250 Peace Out\r\n"))
}
}
func badEmailHandler(messagesToProcess int) connHandler {
return func(_ int, t *testing.T, conn net.Conn, _ *net.TCPConn) {
defer func() {
err := conn.Close()
if err != nil {
t.Errorf("conn.Close: %s", err)
}
}()
authenticateClient(t, conn)
buf := bufio.NewReader(conn)
err := expect(t, buf, "MAIL FROM:<<you-are-a-winner@example.com>> BODY=8BITMIME")
if err != nil {
return
}
_, _ = conn.Write([]byte("250 Sure. Go on. \r\n"))
err = expect(t, buf, "RCPT TO:<hi@bye.com>")
if err != nil {
return
}
_, _ = conn.Write([]byte("401 4.1.3 Bad recipient address syntax\r\n"))
err = expect(t, buf, "RSET")
if err != nil {
return
}
_, _ = conn.Write([]byte("250 Ok yr rset now\r\n"))
}
}
// The rstHandler authenticates the client like the normalHandler but
// additionally processes an email flow (e.g. MAIL, RCPT and DATA
// commands). When the `connID` is <= `rstFirst`, the socket of the
// listening connection is set to close abruptly (sending a TCP RST but
// no FIN). The listening connection is closed immediately after the
// MAIL command is received and prior to issuing a 250 response. In this
// way the first `rstFirst` connections will not complete normally and
// can be tested for reconnection logic.
func rstHandler(rstFirst int) connHandler {
return func(connID int, t *testing.T, tlsConn net.Conn, tcpConn *net.TCPConn) {
defer func() {
err := tcpConn.Close()
if err != nil {
t.Errorf("conn.Close: %s", err)
}
}()
authenticateClient(t, tlsConn)
buf := bufio.NewReader(tlsConn)
err := expect(t, buf, "MAIL FROM:<<you-are-a-winner@example.com>> BODY=8BITMIME")
if err != nil {
return
}
// Set the socket of the listening connection to close
// abruptly.
if connID <= rstFirst {
err := tcpConn.SetLinger(0)
if err != nil {
t.Error(err)
return
}
t.Log("Socket set for abrupt close. Cutting off client early")
return
}
_, _ = tlsConn.Write([]byte("250 Sure. Go on. \r\n"))
err = expect(t, buf, "RCPT TO:<hi@bye.com>")
if err != nil {
return
}
_, _ = tlsConn.Write([]byte("250 Tell Me More \r\n"))
err = expect(t, buf, "DATA")
if err != nil {
return
}
_, _ = tlsConn.Write([]byte("354 Cool Data\r\n"))
_, _ = tlsConn.Write([]byte("250 Peace Out\r\n"))
}
}
func setup(t *testing.T) (*mailerImpl, *net.TCPListener, func()) {
fromAddress, _ := mail.ParseAddress("you-are-a-winner@example.com")
log := blog.UseMock()
// Listen on port 0 to get any free available port
tcpAddr, err := net.ResolveTCPAddr("tcp", ":0")
if err != nil {
t.Fatalf("resolving tcp addr: %s", err)
}
tcpl, err := net.ListenTCP("tcp", tcpAddr)
if err != nil {
t.Fatalf("listen: %s", err)
}
cleanUp := func() {
err := tcpl.Close()
if err != nil {
t.Errorf("listen.Close: %s", err)
}
}
// We can look at the listener Addr() to figure out which free port was
// assigned by the operating system
_, port, err := net.SplitHostPort(tcpl.Addr().String())
if err != nil {
t.Fatal("failed parsing port from tcp listen")
}
m := New(
"localhost",
port,
"user@example.com",
"passwd",
smtpRoot,
*fromAddress,
log,
metrics.NoopRegisterer,
time.Second*2, time.Second*10)
return m, tcpl, cleanUp
}
func TestConnect(t *testing.T) {
m, l, cleanUp := setup(t)
defer cleanUp()
go listenForever(l, t, normalHandler)
conn, err := m.Connect()
if err != nil {
t.Errorf("Failed to connect: %s", err)
}
err = conn.Close()
if err != nil {
t.Errorf("Failed to clean up: %s", err)
}
}
func TestReconnectSuccess(t *testing.T) {
m, l, cleanUp := setup(t)
defer cleanUp()
const closedConns = 5
// Configure a test server that will disconnect the first `closedConns`
// connections after the MAIL cmd
go listenForever(l, t, disconnectHandler(closedConns, ""))
// With a mailer client that has a max attempt > `closedConns` we expect no
// error. The message should be delivered after `closedConns` reconnect
// attempts.
conn, err := m.Connect()
if err != nil {
t.Errorf("Failed to connect: %s", err)
}
err = conn.SendMail([]string{"hi@bye.com"}, "You are already a winner!", "Just kidding")
if err != nil {
t.Errorf("Expected SendMail() to not fail. Got err: %s", err)
}
}
func TestBadEmailError(t *testing.T) {
m, l, cleanUp := setup(t)
defer cleanUp()
const messages = 3
go listenForever(l, t, badEmailHandler(messages))
conn, err := m.Connect()
if err != nil {
t.Errorf("Failed to connect: %s", err)
}
err = conn.SendMail([]string{"hi@bye.com"}, "You are already a winner!", "Just kidding")
// We expect there to be an error
if err == nil {
t.Errorf("Expected SendMail() to return a BadAddressSMTPError, got nil")
}
expected := "401: 4.1.3 Bad recipient address syntax"
var badAddrErr BadAddressSMTPError
test.AssertErrorWraps(t, err, &badAddrErr)
test.AssertEquals(t, badAddrErr.Message, expected)
}
func TestReconnectSMTP421(t *testing.T) {
m, l, cleanUp := setup(t)
defer cleanUp()
const closedConns = 5
// An SMTP 421 can be generated when the server times out an idle connection.
// For more information see https://github.com/letsencrypt/boulder/issues/2249
smtp421 := "421 1.2.3 green.eggs.and.spam Error: timeout exceeded"
// Configure a test server that will disconnect the first `closedConns`
// connections after the MAIL cmd with an SMTP 421 error
go listenForever(l, t, disconnectHandler(closedConns, smtp421))
// With a mailer client that has a max attempt > `closedConns` we expect no
// error. The message should be delivered after `closedConns` reconnect
// attempts.
conn, err := m.Connect()
if err != nil {
t.Errorf("Failed to connect: %s", err)
}
err = conn.SendMail([]string{"hi@bye.com"}, "You are already a winner!", "Just kidding")
if err != nil {
t.Errorf("Expected SendMail() to not fail. Got err: %s", err)
}
}
func TestOtherError(t *testing.T) {
m, l, cleanUp := setup(t)
defer cleanUp()
go listenForever(l, t, func(_ int, t *testing.T, conn net.Conn, _ *net.TCPConn) {
defer func() {
err := conn.Close()
if err != nil {
t.Errorf("conn.Close: %s", err)
}
}()
authenticateClient(t, conn)
buf := bufio.NewReader(conn)
err := expect(t, buf, "MAIL FROM:<<you-are-a-winner@example.com>> BODY=8BITMIME")
if err != nil {
return
}
_, _ = conn.Write([]byte("250 Sure. Go on. \r\n"))
err = expect(t, buf, "RCPT TO:<hi@bye.com>")
if err != nil {
return
}
_, _ = conn.Write([]byte("999 1.1.1 This would probably be bad?\r\n"))
err = expect(t, buf, "RSET")
if err != nil {
return
}
_, _ = conn.Write([]byte("250 Ok yr rset now\r\n"))
})
conn, err := m.Connect()
if err != nil {
t.Errorf("Failed to connect: %s", err)
}
err = conn.SendMail([]string{"hi@bye.com"}, "You are already a winner!", "Just kidding")
// We expect there to be an error
if err == nil {
t.Errorf("Expected SendMail() to return an error, got nil")
}
expected := "999 1.1.1 This would probably be bad?"
var rcptErr *textproto.Error
test.AssertErrorWraps(t, err, &rcptErr)
test.AssertEquals(t, rcptErr.Error(), expected)
m, l, cleanUp = setup(t)
defer cleanUp()
go listenForever(l, t, func(_ int, t *testing.T, conn net.Conn, _ *net.TCPConn) {
defer func() {
err := conn.Close()
if err != nil {
t.Errorf("conn.Close: %s", err)
}
}()
authenticateClient(t, conn)
buf := bufio.NewReader(conn)
err := expect(t, buf, "MAIL FROM:<<you-are-a-winner@example.com>> BODY=8BITMIME")
if err != nil {
return
}
_, _ = conn.Write([]byte("250 Sure. Go on. \r\n"))
err = expect(t, buf, "RCPT TO:<hi@bye.com>")
if err != nil {
return
}
_, _ = conn.Write([]byte("999 1.1.1 This would probably be bad?\r\n"))
err = expect(t, buf, "RSET")
if err != nil {
return
}
_, _ = conn.Write([]byte("nop\r\n"))
})
conn, err = m.Connect()
if err != nil {
t.Errorf("Failed to connect: %s", err)
}
err = conn.SendMail([]string{"hi@bye.com"}, "You are already a winner!", "Just kidding")
// We expect there to be an error
test.AssertError(t, err, "SendMail didn't fail as expected")
test.AssertEquals(t, err.Error(), "999 1.1.1 This would probably be bad? (also, on sending RSET: short response: nop)")
}
func TestReconnectAfterRST(t *testing.T) {
m, l, cleanUp := setup(t)
defer cleanUp()
const rstConns = 5
// Configure a test server that will RST and disconnect the first
// `rstConns` connections
go listenForever(l, t, rstHandler(rstConns))
// With a mailer client that has a max attempt > `rstConns` we expect no
// error. The message should be delivered after `rstConns` reconnect
// attempts.
conn, err := m.Connect()
if err != nil {
t.Errorf("Failed to connect: %s", err)
}
err = conn.SendMail([]string{"hi@bye.com"}, "You are already a winner!", "Just kidding")
if err != nil {
t.Errorf("Expected SendMail() to not fail. Got err: %s", err)
}
}


@@ -1,60 +0,0 @@
package mocks
import (
"sync"
"github.com/letsencrypt/boulder/mail"
)
// Mailer is a mock
type Mailer struct {
sync.Mutex
Messages []MailerMessage
}
var _ mail.Mailer = &Mailer{}
// mockMailerConn is a mock that satisfies the mail.Conn interface
type mockMailerConn struct {
parent *Mailer
}
var _ mail.Conn = &mockMailerConn{}
// MailerMessage holds the captured emails from SendMail()
type MailerMessage struct {
To string
Subject string
Body string
}
// Clear removes any previously recorded messages
func (m *Mailer) Clear() {
m.Lock()
defer m.Unlock()
m.Messages = nil
}
// SendMail is a mock
func (m *mockMailerConn) SendMail(to []string, subject, msg string) error {
m.parent.Lock()
defer m.parent.Unlock()
for _, rcpt := range to {
m.parent.Messages = append(m.parent.Messages, MailerMessage{
To: rcpt,
Subject: subject,
Body: msg,
})
}
return nil
}
// Close is a mock
func (m *mockMailerConn) Close() error {
return nil
}
// Connect is a mock
func (m *Mailer) Connect() (mail.Conn, error) {
return &mockMailerConn{parent: m}, nil
}


@@ -1,93 +0,0 @@
package policy
import (
"fmt"
"net/netip"
)
var (
// TODO(#8080): Rebuild these as structs that track the structure of IANA's
// CSV files, for better automated handling.
//
// Private CIDRs to ignore. Sourced from:
// https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
privateV4Prefixes = map[netip.Prefix]string{
netip.MustParsePrefix("0.0.0.0/8"): "RFC 791, Section 3.2: This network",
netip.MustParsePrefix("0.0.0.0/32"): "RFC 1122, Section 3.2.1.3: This host on this network",
netip.MustParsePrefix("10.0.0.0/8"): "RFC 1918: Private-Use",
netip.MustParsePrefix("100.64.0.0/10"): "RFC 6598: Shared Address Space",
netip.MustParsePrefix("127.0.0.0/8"): "RFC 1122, Section 3.2.1.3: Loopback",
netip.MustParsePrefix("169.254.0.0/16"): "RFC 3927: Link Local",
netip.MustParsePrefix("172.16.0.0/12"): "RFC 1918: Private-Use",
netip.MustParsePrefix("192.0.0.0/24"): "RFC 6890, Section 2.1: IETF Protocol Assignments",
netip.MustParsePrefix("192.0.0.0/29"): "RFC 7335: IPv4 Service Continuity Prefix",
netip.MustParsePrefix("192.0.0.8/32"): "RFC 7600: IPv4 dummy address",
netip.MustParsePrefix("192.0.0.9/32"): "RFC 7723: Port Control Protocol Anycast",
netip.MustParsePrefix("192.0.0.10/32"): "RFC 8155: Traversal Using Relays around NAT Anycast",
netip.MustParsePrefix("192.0.0.170/32"): "RFC 8880 & RFC 7050, Section 2.2: NAT64/DNS64 Discovery",
netip.MustParsePrefix("192.0.0.171/32"): "RFC 8880 & RFC 7050, Section 2.2: NAT64/DNS64 Discovery",
netip.MustParsePrefix("192.0.2.0/24"): "RFC 5737: Documentation (TEST-NET-1)",
netip.MustParsePrefix("192.31.196.0/24"): "RFC 7535: AS112-v4",
netip.MustParsePrefix("192.52.193.0/24"): "RFC 7450: AMT",
netip.MustParsePrefix("192.88.99.0/24"): "RFC 7526: Deprecated (6to4 Relay Anycast)",
netip.MustParsePrefix("192.168.0.0/16"): "RFC 1918: Private-Use",
netip.MustParsePrefix("192.175.48.0/24"): "RFC 7534: Direct Delegation AS112 Service",
netip.MustParsePrefix("198.18.0.0/15"): "RFC 2544: Benchmarking",
netip.MustParsePrefix("198.51.100.0/24"): "RFC 5737: Documentation (TEST-NET-2)",
netip.MustParsePrefix("203.0.113.0/24"): "RFC 5737: Documentation (TEST-NET-3)",
netip.MustParsePrefix("240.0.0.0/4"): "RFC 1112, Section 4: Reserved",
netip.MustParsePrefix("255.255.255.255/32"): "RFC 8190 & RFC 919, Section 7: Limited Broadcast",
// 224.0.0.0/4 are multicast addresses as per RFC 3171. They are not
// present in the IANA registry.
netip.MustParsePrefix("224.0.0.0/4"): "RFC 3171: Multicast Addresses",
}
// Sourced from:
// https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml
privateV6Prefixes = map[netip.Prefix]string{
netip.MustParsePrefix("::/128"): "RFC 4291: Unspecified Address",
netip.MustParsePrefix("::1/128"): "RFC 4291: Loopback Address",
netip.MustParsePrefix("::ffff:0:0/96"): "RFC 4291: IPv4-mapped Address",
netip.MustParsePrefix("64:ff9b::/96"): "RFC 6052: IPv4-IPv6 Translat.",
netip.MustParsePrefix("64:ff9b:1::/48"): "RFC 8215: IPv4-IPv6 Translat.",
netip.MustParsePrefix("100::/64"): "RFC 6666: Discard-Only Address Block",
netip.MustParsePrefix("2001::/23"): "RFC 2928: IETF Protocol Assignments",
netip.MustParsePrefix("2001::/32"): "RFC 4380 & RFC 8190: TEREDO",
netip.MustParsePrefix("2001:1::1/128"): "RFC 7723: Port Control Protocol Anycast",
netip.MustParsePrefix("2001:1::2/128"): "RFC 8155: Traversal Using Relays around NAT Anycast",
netip.MustParsePrefix("2001:1::3/128"): "RFC-ietf-dnssd-srp-25: DNS-SD Service Registration Protocol Anycast",
netip.MustParsePrefix("2001:2::/48"): "RFC 5180 & RFC Errata 1752: Benchmarking",
netip.MustParsePrefix("2001:3::/32"): "RFC 7450: AMT",
netip.MustParsePrefix("2001:4:112::/48"): "RFC 7535: AS112-v6",
netip.MustParsePrefix("2001:10::/28"): "RFC 4843: Deprecated (previously ORCHID)",
netip.MustParsePrefix("2001:20::/28"): "RFC 7343: ORCHIDv2",
netip.MustParsePrefix("2001:30::/28"): "RFC 9374: Drone Remote ID Protocol Entity Tags (DETs) Prefix",
netip.MustParsePrefix("2001:db8::/32"): "RFC 3849: Documentation",
netip.MustParsePrefix("2002::/16"): "RFC 3056: 6to4",
netip.MustParsePrefix("2620:4f:8000::/48"): "RFC 7534: Direct Delegation AS112 Service",
netip.MustParsePrefix("3fff::/20"): "RFC 9637: Documentation",
netip.MustParsePrefix("5f00::/16"): "RFC 9602: Segment Routing (SRv6) SIDs",
netip.MustParsePrefix("fc00::/7"): "RFC 4193 & RFC 8190: Unique-Local",
netip.MustParsePrefix("fe80::/10"): "RFC 4291: Link-Local Unicast",
// ff00::/8 are multicast addresses as per RFC 4291, Sections 2.4 & 2.7.
// They are not present in the IANA registry.
netip.MustParsePrefix("ff00::/8"): "RFC 4291: Multicast Addresses",
}
)
// IsReservedIP returns an error if an IP address is part of a reserved range.
func IsReservedIP(ip netip.Addr) error {
var reservedPrefixes map[netip.Prefix]string
if ip.Is4() {
reservedPrefixes = privateV4Prefixes
} else {
reservedPrefixes = privateV6Prefixes
}
for net, name := range reservedPrefixes {
if net.Contains(ip) {
return fmt.Errorf("%w: %s", errIPReserved, name)
}
}
return nil
}

View File

@ -1,57 +0,0 @@
package policy
import (
"net/netip"
"testing"
)
func TestIsReservedIP(t *testing.T) {
t.Parallel()
cases := []struct {
ip string
want bool
}{
{"127.0.0.1", true},
{"192.168.254.254", true},
{"10.255.0.3", true},
{"172.16.255.255", true},
{"172.31.255.255", true},
{"128.0.0.1", false},
{"192.169.255.255", false},
{"9.255.0.255", false},
{"172.32.255.255", false},
{"::0", true},
{"::1", true},
{"::2", false},
{"fe80::1", true},
{"febf::1", true},
{"fec0::1", false},
{"feff::1", false},
{"ff00::1", true},
{"ff10::1", true},
{"ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff", true},
{"2002::", true},
{"2002:ffff:ffff:ffff:ffff:ffff:ffff:ffff", true},
{"0100::", true},
{"0100::0000:ffff:ffff:ffff:ffff", true},
{"0100::0001:0000:0000:0000:0000", false},
}
for _, tc := range cases {
t.Run(tc.ip, func(t *testing.T) {
t.Parallel()
err := IsReservedIP(netip.MustParseAddr(tc.ip))
if err != nil && !tc.want {
t.Error(err)
}
if err == nil && tc.want {
t.Errorf("Wanted error for %#v, got success", tc.ip)
}
})
}
}

View File

@ -39,14 +39,6 @@ type AuthorityImpl struct {
// New constructs a Policy Authority.
func New(identifierTypes map[identifier.IdentifierType]bool, challengeTypes map[core.AcmeChallenge]bool, log blog.Logger) (*AuthorityImpl, error) {
// If identifierTypes are not configured (i.e. nil), default to allowing DNS
// identifiers. This default is temporary, to improve deployability.
//
// TODO(#8184): Remove this default.
if identifierTypes == nil {
identifierTypes = map[identifier.IdentifierType]bool{identifier.TypeDNS: true}
}
return &AuthorityImpl{
log: log,
enabledChallenges: challengeTypes,
@ -179,7 +171,6 @@ var (
errNameTooLong = berrors.MalformedError("Domain name is longer than 253 bytes")
errIPAddressInDNS = berrors.MalformedError("Identifier type is DNS but value is an IP address")
errIPInvalid = berrors.MalformedError("IP address is invalid")
errIPReserved = berrors.MalformedError("IP address is in a reserved address block")
errTooManyLabels = berrors.MalformedError("Domain name has more than 10 labels (parts)")
errEmptyIdentifier = berrors.MalformedError("Identifier value (name) is empty")
errNameEndsInDot = berrors.MalformedError("Domain name ends in a dot")
@ -332,13 +323,13 @@ func ValidDomain(domain string) error {
return validNonWildcardDomain(baseDomain)
}
// validIP checks that an IP address:
// ValidIP checks that an IP address:
// - isn't empty
// - is an IPv4 or IPv6 address
// - isn't in an IANA special-purpose address registry
//
// It does NOT ensure that the IP address is absent from any PA blocked lists.
func validIP(ip string) error {
func ValidIP(ip string) error {
if ip == "" {
return errEmptyIdentifier
}
@ -353,7 +344,7 @@ func validIP(ip string) error {
return errIPInvalid
}
return IsReservedIP(parsedIP)
return iana.IsReservedAddr(parsedIP)
}
// forbiddenMailDomains is a map of domain names we do not allow after the
@ -436,7 +427,7 @@ func (pa *AuthorityImpl) WillingToIssue(idents identifier.ACMEIdentifiers) error
// Unsupported identifier types will have been caught by
// WellFormedIdentifiers().
//
// TODO(#7311): We may want to implement IP address blocklists too.
// TODO(#8237): We may want to implement IP address blocklists too.
if ident.Type == identifier.TypeDNS {
if strings.Count(ident.Value, "*") > 0 {
// The base domain is the wildcard request with the `*.` prefix removed
@ -500,7 +491,7 @@ func WellFormedIdentifiers(idents identifier.ACMEIdentifiers) error {
subErrors = append(subErrors, subError(ident, err))
}
case identifier.TypeIP:
err := validIP(ident.Value)
err := ValidIP(ident.Value)
if err != nil {
subErrors = append(subErrors, subError(ident, err))
}

View File

@ -136,24 +136,24 @@ func TestWellFormedIdentifiers(t *testing.T) {
{identifier.ACMEIdentifier{Type: "ip", Value: `1.1.168.192.in-addr.arpa`}, errIPInvalid}, // reverse DNS
// Unexpected IPv6 variants
{identifier.ACMEIdentifier{Type: "ip", Value: `3fff:aaa:a:c0ff:ee:a:bad:deed:ffff`}, errIPInvalid}, // extra octet
{identifier.ACMEIdentifier{Type: "ip", Value: `3fff:aaa:a:c0ff:ee:a:bad:mead`}, errIPInvalid}, // character out of range
{identifier.ACMEIdentifier{Type: "ip", Value: `2001:db8::/32`}, errIPInvalid}, // with CIDR
{identifier.ACMEIdentifier{Type: "ip", Value: `[3fff:aaa:a:c0ff:ee:a:bad:deed]`}, errIPInvalid}, // in brackets
{identifier.ACMEIdentifier{Type: "ip", Value: `[3fff:aaa:a:c0ff:ee:a:bad:deed]:443`}, errIPInvalid}, // in brackets, with port
{identifier.ACMEIdentifier{Type: "ip", Value: `0x3fff0aaa000ac0ff00ee000a0baddeed`}, errIPInvalid}, // as hex
{identifier.ACMEIdentifier{Type: "ip", Value: `d.e.e.d.d.a.b.0.a.0.0.0.e.e.0.0.f.f.0.c.a.0.0.0.a.a.a.0.f.f.f.3.ip6.arpa`}, errIPInvalid}, // reverse DNS
{identifier.ACMEIdentifier{Type: "ip", Value: `3fff:0aaa:a:c0ff:ee:a:bad:deed`}, errIPInvalid}, // leading 0 in 2nd octet (RFC 5952, Sec. 4.1)
{identifier.ACMEIdentifier{Type: "ip", Value: `3fff:aaa:0:0:0:a:bad:deed`}, errIPInvalid}, // lone 0s in 3rd-5th octets, :: not used (RFC 5952, Sec. 4.2.1)
{identifier.ACMEIdentifier{Type: "ip", Value: `3fff:aaa::c0ff:ee:a:bad:deed`}, errIPInvalid}, // :: used for just one empty octet (RFC 5952, Sec. 4.2.2)
{identifier.ACMEIdentifier{Type: "ip", Value: `3fff:aaa::ee:0:0:0`}, errIPInvalid}, // :: used for the shorter of two possible collapses (RFC 5952, Sec. 4.2.3)
{identifier.ACMEIdentifier{Type: "ip", Value: `fe80:0:0:0:a::`}, errIPInvalid}, // :: used for the last of two possible equal-length collapses (RFC 5952, Sec. 4.2.3)
{identifier.ACMEIdentifier{Type: "ip", Value: `3fff:aaa:a:C0FF:EE:a:bad:deed`}, errIPInvalid}, // alpha characters capitalized (RFC 5952, Sec. 4.3)
{identifier.ACMEIdentifier{Type: "ip", Value: `::ffff:192.168.1.1`}, errIPReserved}, // IPv6-encapsulated IPv4
{identifier.ACMEIdentifier{Type: "ip", Value: `3fff:aaa:a:c0ff:ee:a:bad:deed:ffff`}, errIPInvalid}, // extra octet
{identifier.ACMEIdentifier{Type: "ip", Value: `3fff:aaa:a:c0ff:ee:a:bad:mead`}, errIPInvalid}, // character out of range
{identifier.ACMEIdentifier{Type: "ip", Value: `2001:db8::/32`}, errIPInvalid}, // with CIDR
{identifier.ACMEIdentifier{Type: "ip", Value: `[3fff:aaa:a:c0ff:ee:a:bad:deed]`}, errIPInvalid}, // in brackets
{identifier.ACMEIdentifier{Type: "ip", Value: `[3fff:aaa:a:c0ff:ee:a:bad:deed]:443`}, errIPInvalid}, // in brackets, with port
{identifier.ACMEIdentifier{Type: "ip", Value: `0x3fff0aaa000ac0ff00ee000a0baddeed`}, errIPInvalid}, // as hex
{identifier.ACMEIdentifier{Type: "ip", Value: `d.e.e.d.d.a.b.0.a.0.0.0.e.e.0.0.f.f.0.c.a.0.0.0.a.a.a.0.f.f.f.3.ip6.arpa`}, errIPInvalid}, // reverse DNS
{identifier.ACMEIdentifier{Type: "ip", Value: `3fff:0aaa:a:c0ff:ee:a:bad:deed`}, errIPInvalid}, // leading 0 in 2nd octet (RFC 5952, Sec. 4.1)
{identifier.ACMEIdentifier{Type: "ip", Value: `3fff:aaa:0:0:0:a:bad:deed`}, errIPInvalid}, // lone 0s in 3rd-5th octets, :: not used (RFC 5952, Sec. 4.2.1)
{identifier.ACMEIdentifier{Type: "ip", Value: `3fff:aaa::c0ff:ee:a:bad:deed`}, errIPInvalid}, // :: used for just one empty octet (RFC 5952, Sec. 4.2.2)
{identifier.ACMEIdentifier{Type: "ip", Value: `3fff:aaa::ee:0:0:0`}, errIPInvalid}, // :: used for the shorter of two possible collapses (RFC 5952, Sec. 4.2.3)
{identifier.ACMEIdentifier{Type: "ip", Value: `fe80:0:0:0:a::`}, errIPInvalid}, // :: used for the last of two possible equal-length collapses (RFC 5952, Sec. 4.2.3)
{identifier.ACMEIdentifier{Type: "ip", Value: `3fff:aaa:a:C0FF:EE:a:bad:deed`}, errIPInvalid}, // alpha characters capitalized (RFC 5952, Sec. 4.3)
{identifier.ACMEIdentifier{Type: "ip", Value: `::ffff:192.168.1.1`}, berrors.MalformedError("IP address is in a reserved address block")}, // IPv6-encapsulated IPv4
// IANA special-purpose address blocks
{identifier.NewIP(netip.MustParseAddr("192.0.2.129")), errIPReserved}, // Documentation (TEST-NET-1)
{identifier.NewIP(netip.MustParseAddr("2001:db8:eee:eeee:eeee:eeee:d01:f1")), errIPReserved}, // Documentation
{identifier.NewIP(netip.MustParseAddr("192.0.2.129")), berrors.MalformedError("IP address is in a reserved address block")}, // Documentation (TEST-NET-1)
{identifier.NewIP(netip.MustParseAddr("2001:db8:eee:eeee:eeee:eeee:d01:f1")), berrors.MalformedError("IP address is in a reserved address block")}, // Documentation
}
// Test syntax errors

View File

@ -303,8 +303,8 @@ type ValidationProfileConfig struct {
// exists but is empty, the profile is closed to all accounts.
AllowList string `validate:"omitempty"`
// IdentifierTypes is a list of identifier types that may be issued under
// this profile. If none are specified, it defaults to "dns".
IdentifierTypes []identifier.IdentifierType `validate:"omitempty,dive,oneof=dns ip"`
// this profile.
IdentifierTypes []identifier.IdentifierType `validate:"required,dive,oneof=dns ip"`
}
// validationProfile holds the attributes of a given validation profile.
@ -330,7 +330,7 @@ type validationProfile struct {
// nil, the profile is open to all accounts (everyone is allowed).
allowList *allowlist.List[int64]
// identifierTypes is a list of identifier types that may be issued under
// this profile. If none are specified, it defaults to "dns".
// this profile.
identifierTypes []identifier.IdentifierType
}
@ -384,22 +384,13 @@ func NewValidationProfiles(defaultName string, configs map[string]*ValidationPro
}
}
identifierTypes := config.IdentifierTypes
// If this profile has no identifier types configured, default to DNS.
// This default is temporary, to improve deployability.
//
// TODO(#8184): Remove this default and use config.IdentifierTypes below.
if len(identifierTypes) == 0 {
identifierTypes = []identifier.IdentifierType{identifier.TypeDNS}
}
profiles[name] = &validationProfile{
pendingAuthzLifetime: config.PendingAuthzLifetime.Duration,
validAuthzLifetime: config.ValidAuthzLifetime.Duration,
orderLifetime: config.OrderLifetime.Duration,
maxNames: config.MaxNames,
allowList: allowList,
identifierTypes: identifierTypes,
identifierTypes: config.IdentifierTypes,
}
}
@ -537,6 +528,7 @@ func (ra *RegistrationAuthorityImpl) NewRegistration(ctx context.Context, reques
}
// Check that contacts conform to our expectations.
// TODO(#8199): Remove this when no contacts are included in any requests.
err = ra.validateContacts(request.Contact)
if err != nil {
return nil, err
@ -594,7 +586,7 @@ func (ra *RegistrationAuthorityImpl) validateContacts(contacts []string) error {
}
parsed, err := url.Parse(contact)
if err != nil {
return berrors.InvalidEmailError("invalid contact")
return berrors.InvalidEmailError("unparsable contact")
}
if parsed.Scheme != "mailto" {
return berrors.UnsupportedContactError("only contact scheme 'mailto:' is supported")
@ -1284,26 +1276,17 @@ func (ra *RegistrationAuthorityImpl) issueCertificateOuter(
// account) and duplicate certificate rate limits. There is no reason to surface
// errors from this function to the Subscriber, spends against these limits are
// best effort.
//
// TODO(#7311): Handle IP address identifiers properly; don't just trust that
// the value will always make sense in context.
func (ra *RegistrationAuthorityImpl) countCertificateIssued(ctx context.Context, regId int64, orderIdents identifier.ACMEIdentifiers, isRenewal bool) {
names, err := orderIdents.ToDNSSlice()
if err != nil {
ra.log.Warningf("parsing identifiers at finalize: %s", err)
return
}
var transactions []ratelimits.Transaction
if !isRenewal {
txns, err := ra.txnBuilder.CertificatesPerDomainSpendOnlyTransactions(regId, names)
txns, err := ra.txnBuilder.CertificatesPerDomainSpendOnlyTransactions(regId, orderIdents)
if err != nil {
ra.log.Warningf("building rate limit transactions at finalize: %s", err)
}
transactions = append(transactions, txns...)
}
txn, err := ra.txnBuilder.CertificatesPerFQDNSetSpendOnlyTransaction(names)
txn, err := ra.txnBuilder.CertificatesPerFQDNSetSpendOnlyTransaction(orderIdents)
if err != nil {
ra.log.Warningf("building rate limit transaction at finalize: %s", err)
}
@ -1417,8 +1400,11 @@ func (ra *RegistrationAuthorityImpl) getSCTs(ctx context.Context, precertDER []b
return scts, nil
}
// UpdateRegistrationContact updates an existing Registration's contact.
// The updated contacts field may be empty.
// UpdateRegistrationContact updates an existing Registration's contact. The
// updated contacts field may be empty.
//
// Deprecated: This method has no callers. See
// https://github.com/letsencrypt/boulder/issues/8199 for removal.
func (ra *RegistrationAuthorityImpl) UpdateRegistrationContact(ctx context.Context, req *rapb.UpdateRegistrationContactRequest) (*corepb.Registration, error) {
if core.IsAnyNilOrZero(req.RegistrationID) {
return nil, errIncompleteGRPCRequest
@ -1492,11 +1478,8 @@ func (ra *RegistrationAuthorityImpl) recordValidation(ctx context.Context, authI
// countFailedValidations increments the FailedAuthorizationsPerDomainPerAccount limit.
// and the FailedAuthorizationsForPausingPerDomainPerAccountTransaction limit.
//
// TODO(#7311): Handle IP address identifiers properly; don't just trust that
// the value will always make sense in context.
func (ra *RegistrationAuthorityImpl) countFailedValidations(ctx context.Context, regId int64, ident identifier.ACMEIdentifier) error {
txn, err := ra.txnBuilder.FailedAuthorizationsPerDomainPerAccountSpendOnlyTransaction(regId, ident.Value)
txn, err := ra.txnBuilder.FailedAuthorizationsPerDomainPerAccountSpendOnlyTransaction(regId, ident)
if err != nil {
return fmt.Errorf("building rate limit transaction for the %s rate limit: %w", ratelimits.FailedAuthorizationsPerDomainPerAccount, err)
}
@ -1507,7 +1490,7 @@ func (ra *RegistrationAuthorityImpl) countFailedValidations(ctx context.Context,
}
if features.Get().AutomaticallyPauseZombieClients {
txn, err = ra.txnBuilder.FailedAuthorizationsForPausingPerDomainPerAccountTransaction(regId, ident.Value)
txn, err = ra.txnBuilder.FailedAuthorizationsForPausingPerDomainPerAccountTransaction(regId, ident)
if err != nil {
return fmt.Errorf("building rate limit transaction for the %s rate limit: %w", ratelimits.FailedAuthorizationsForPausingPerDomainPerAccount, err)
}
@ -1520,12 +1503,7 @@ func (ra *RegistrationAuthorityImpl) countFailedValidations(ctx context.Context,
if decision.Result(ra.clk.Now()) != nil {
resp, err := ra.SA.PauseIdentifiers(ctx, &sapb.PauseRequest{
RegistrationID: regId,
Identifiers: []*corepb.Identifier{
{
Type: string(ident.Type),
Value: ident.Value,
},
},
Identifiers: []*corepb.Identifier{ident.ToProto()},
})
if err != nil {
return fmt.Errorf("failed to pause %d/%q: %w", regId, ident.Value, err)
@ -1542,11 +1520,8 @@ func (ra *RegistrationAuthorityImpl) countFailedValidations(ctx context.Context,
// resetAccountPausingLimit resets bucket to maximum capacity for given account.
// There is no reason to surface errors from this function to the Subscriber.
//
// TODO(#7311): Handle IP address identifiers properly; don't just trust that
// the value will always make sense in context.
func (ra *RegistrationAuthorityImpl) resetAccountPausingLimit(ctx context.Context, regId int64, ident identifier.ACMEIdentifier) {
bucketKey := ratelimits.NewRegIdDomainBucketKey(ratelimits.FailedAuthorizationsForPausingPerDomainPerAccount, regId, ident.Value)
bucketKey := ratelimits.NewRegIdIdentValueBucketKey(ratelimits.FailedAuthorizationsForPausingPerDomainPerAccount, regId, ident.Value)
err := ra.limiter.Reset(ctx, bucketKey)
if err != nil {
ra.log.Warningf("resetting bucket for regID=[%d] identifier=[%s]: %s", regId, ident.Value, err)

View File

@ -486,8 +486,7 @@ func TestNewRegistration(t *testing.T) {
t.Fatalf("could not create new registration: %s", err)
}
test.AssertByteEquals(t, result.Key, acctKeyB)
test.Assert(t, len(result.Contact) == 1, "Wrong number of contacts")
test.Assert(t, mailto == (result.Contact)[0], "Contact didn't match")
test.Assert(t, len(result.Contact) == 0, "Wrong number of contacts")
test.Assert(t, result.Agreement == "", "Agreement didn't default empty")
reg, err := sa.GetRegistration(ctx, &sapb.RegistrationID{Id: result.Id})
@ -727,7 +726,7 @@ func TestPerformValidation_FailedValidationsTriggerPauseIdentifiersRatelimit(t *
domain := randomDomain()
ident := identifier.NewDNS(domain)
authzPB := createPendingAuthorization(t, sa, ident, fc.Now().Add(12*time.Hour))
bucketKey := ratelimits.NewRegIdDomainBucketKey(ratelimits.FailedAuthorizationsForPausingPerDomainPerAccount, authzPB.RegistrationID, domain)
bucketKey := ratelimits.NewRegIdIdentValueBucketKey(ratelimits.FailedAuthorizationsForPausingPerDomainPerAccount, authzPB.RegistrationID, ident.Value)
// Set the stored TAT to indicate that this bucket has exhausted its quota.
err = rl.BatchSet(context.Background(), map[string]time.Time{
@ -803,7 +802,7 @@ func TestPerformValidation_FailedThenSuccessfulValidationResetsPauseIdentifiersR
domain := randomDomain()
ident := identifier.NewDNS(domain)
authzPB := createPendingAuthorization(t, sa, ident, fc.Now().Add(12*time.Hour))
bucketKey := ratelimits.NewRegIdDomainBucketKey(ratelimits.FailedAuthorizationsForPausingPerDomainPerAccount, authzPB.RegistrationID, domain)
bucketKey := ratelimits.NewRegIdIdentValueBucketKey(ratelimits.FailedAuthorizationsForPausingPerDomainPerAccount, authzPB.RegistrationID, ident.Value)
// Set a stored TAT so that we can tell when it's been reset.
err = rl.BatchSet(context.Background(), map[string]time.Time{
@ -3135,14 +3134,13 @@ func TestIssueCertificateCAACheckLog(t *testing.T) {
// Make some valid authzs for four names. Half of them were validated
// recently and half were validated in excess of our CAA recheck time.
idents := identifier.ACMEIdentifiers{
identifier.NewDNS("not-example.com"),
identifier.NewDNS("www.not-example.com"),
identifier.NewDNS("still.not-example.com"),
identifier.NewDNS("definitely.not-example.com"),
names := []string{
"not-example.com",
"www.not-example.com",
"still.not-example.com",
"definitely.not-example.com",
}
names, err := idents.ToDNSSlice()
test.AssertNotError(t, err, "Converting identifiers to DNS names")
idents := identifier.NewDNSSlice(names)
var authzIDs []int64
for i, ident := range idents {
attemptedAt := older

View File

@ -91,17 +91,31 @@ An ACME account registration ID.
Example: `12345678`
#### domain
#### identValue
A valid eTLD+1 domain name.
A valid ACME identifier value, i.e. an FQDN or IP address.
Example: `example.com`
Examples:
- `www.example.com`
- `192.168.1.1`
- `2001:db8:eeee::1`
#### domainOrCIDR
A valid eTLD+1 domain name, or an IP address. IPv6 addresses must be the lowest
address in their /64, i.e. their last 64 bits must be zero; the override will
apply to the entire /64. Do not include the CIDR mask.
Examples:
- `example.com`
- `192.168.1.0`
- `2001:db8:eeee:eeee::`
#### fqdnSet
A comma-separated list of domain names.
A comma-separated list of identifier values.
Example: `example.com,example.org`
Example: `192.168.1.1,example.com,example.org`
## Bucket Key Definitions

View File

@ -3,10 +3,13 @@ package ratelimits
import (
"errors"
"fmt"
"net/netip"
"os"
"strings"
"github.com/letsencrypt/boulder/config"
"github.com/letsencrypt/boulder/core"
"github.com/letsencrypt/boulder/identifier"
"github.com/letsencrypt/boulder/strictyaml"
)
@ -192,12 +195,37 @@ func parseOverrideLimits(newOverridesYAML overridesYAML) (limits, error) {
return nil, fmt.Errorf(
"validating name %s and id %q for override limit %q: %w", name, id, k, err)
}
if name == CertificatesPerFQDNSet {
// FQDNSet hashes are not a nice thing to ask for in a
// config file, so we allow the user to specify a
// comma-separated list of FQDNs and compute the hash here.
id = fmt.Sprintf("%x", hashNames(strings.Split(id, ",")))
// We interpret and compute the override values for two rate
// limits, since they're not nice to ask for in a config file.
switch name {
case CertificatesPerDomain:
// Convert IP addresses to their covering /32 (IPv4) or /64
// (IPv6) prefixes in CIDR notation.
ip, err := netip.ParseAddr(id)
if err == nil {
prefix, err := coveringPrefix(ip)
if err != nil {
return nil, fmt.Errorf(
"computing prefix for IP address %q: %w", id, err)
}
id = prefix.String()
}
case CertificatesPerFQDNSet:
// Compute the hash of a comma-separated list of identifier
// values.
var idents identifier.ACMEIdentifiers
for _, value := range strings.Split(id, ",") {
ip, err := netip.ParseAddr(value)
if err == nil {
idents = append(idents, identifier.NewIP(ip))
} else {
idents = append(idents, identifier.NewDNS(value))
}
}
id = fmt.Sprintf("%x", core.HashIdentifiers(idents))
}
parsed[joinWithColon(name.EnumString(), id)] = lim
}
}
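The CertificatesPerFQDNSet branch above splits a comma-separated override id and treats each element as an IP if `netip.ParseAddr` accepts it, falling back to DNS otherwise. A self-contained sketch of that classification step, with a stand-in struct instead of Boulder's `identifier.ACMEIdentifier` type:

```go
package main

import (
	"fmt"
	"net/netip"
	"strings"
)

// ident is a stand-in for Boulder's identifier type in this sketch.
type ident struct {
	Type  string // "ip" or "dns"
	Value string
}

// classify splits a comma-separated override id into identifiers,
// treating anything netip can parse as an IP address and everything
// else as a DNS name — the same heuristic as the override parser.
func classify(id string) []ident {
	var idents []ident
	for _, value := range strings.Split(id, ",") {
		if _, err := netip.ParseAddr(value); err == nil {
			idents = append(idents, ident{"ip", value})
		} else {
			idents = append(idents, ident{"dns", value})
		}
	}
	return idents
}

func main() {
	fmt.Println(classify("9.9.9.9,example.com"))
}
```

In the real code the resulting identifier list is then hashed (via `core.HashIdentifiers`) to produce the fqdnSet bucket key, so operators never have to write raw hashes in config files.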

View File

@ -1,11 +1,13 @@
package ratelimits
import (
"net/netip"
"os"
"testing"
"time"
"github.com/letsencrypt/boulder/config"
"github.com/letsencrypt/boulder/identifier"
"github.com/letsencrypt/boulder/test"
)
@ -45,10 +47,10 @@ func TestParseOverrideNameId(t *testing.T) {
// 'enum:ipv6range'
// Valid IPv6 address range.
name, id, err = parseOverrideNameId(NewRegistrationsPerIPv6Range.String() + ":2001:0db8:0000::/48")
name, id, err = parseOverrideNameId(NewRegistrationsPerIPv6Range.String() + ":2602:80a:6000::/48")
test.AssertNotError(t, err, "should not error")
test.AssertEquals(t, name, NewRegistrationsPerIPv6Range)
test.AssertEquals(t, id, "2001:0db8:0000::/48")
test.AssertEquals(t, id, "2602:80a:6000::/48")
// Missing colon (this should never happen but we should avoid panicking).
_, _, err = parseOverrideNameId(NewRegistrationsPerIPAddress.String() + "10.0.0.1")
@ -86,14 +88,14 @@ func TestLoadAndParseOverrideLimits(t *testing.T) {
// Load a single valid override limit with Id formatted as 'enum:RegId'.
l, err := loadAndParseOverrideLimits("testdata/working_override.yml")
test.AssertNotError(t, err, "valid single override limit")
expectKey := joinWithColon(NewRegistrationsPerIPAddress.EnumString(), "10.0.0.2")
expectKey := joinWithColon(NewRegistrationsPerIPAddress.EnumString(), "64.112.117.1")
test.AssertEquals(t, l[expectKey].burst, int64(40))
test.AssertEquals(t, l[expectKey].count, int64(40))
test.AssertEquals(t, l[expectKey].period.Duration, time.Second)
// Load single valid override limit with a 'domain' Id.
l, err = loadAndParseOverrideLimits("testdata/working_override_regid_domain.yml")
test.AssertNotError(t, err, "valid single override limit with Id of regId:domain")
// Load single valid override limit with a 'domainOrCIDR' Id.
l, err = loadAndParseOverrideLimits("testdata/working_override_regid_domainorcidr.yml")
test.AssertNotError(t, err, "valid single override limit with Id of regId:domainOrCIDR")
expectKey = joinWithColon(CertificatesPerDomain.EnumString(), "example.com")
test.AssertEquals(t, l[expectKey].burst, int64(40))
test.AssertEquals(t, l[expectKey].count, int64(40))
@ -102,11 +104,11 @@ func TestLoadAndParseOverrideLimits(t *testing.T) {
// Load multiple valid override limits with 'regId' Ids.
l, err = loadAndParseOverrideLimits("testdata/working_overrides.yml")
test.AssertNotError(t, err, "multiple valid override limits")
expectKey1 := joinWithColon(NewRegistrationsPerIPAddress.EnumString(), "10.0.0.2")
expectKey1 := joinWithColon(NewRegistrationsPerIPAddress.EnumString(), "64.112.117.1")
test.AssertEquals(t, l[expectKey1].burst, int64(40))
test.AssertEquals(t, l[expectKey1].count, int64(40))
test.AssertEquals(t, l[expectKey1].period.Duration, time.Second)
expectKey2 := joinWithColon(NewRegistrationsPerIPv6Range.EnumString(), "2001:0db8:0000::/48")
expectKey2 := joinWithColon(NewRegistrationsPerIPv6Range.EnumString(), "2602:80a:6000::/48")
test.AssertEquals(t, l[expectKey2].burst, int64(50))
test.AssertEquals(t, l[expectKey2].count, int64(50))
test.AssertEquals(t, l[expectKey2].period.Duration, time.Second*2)
@ -115,20 +117,29 @@ func TestLoadAndParseOverrideLimits(t *testing.T) {
// - CertificatesPerFQDNSet:example.com
// - CertificatesPerFQDNSet:example.com,example.net
// - CertificatesPerFQDNSet:example.com,example.net,example.org
firstEntryKey := newFQDNSetBucketKey(CertificatesPerFQDNSet, []string{"example.com"})
secondEntryKey := newFQDNSetBucketKey(CertificatesPerFQDNSet, []string{"example.com", "example.net"})
thirdEntryKey := newFQDNSetBucketKey(CertificatesPerFQDNSet, []string{"example.com", "example.net", "example.org"})
entryKey1 := newFQDNSetBucketKey(CertificatesPerFQDNSet, identifier.NewDNSSlice([]string{"example.com"}))
entryKey2 := newFQDNSetBucketKey(CertificatesPerFQDNSet, identifier.NewDNSSlice([]string{"example.com", "example.net"}))
entryKey3 := newFQDNSetBucketKey(CertificatesPerFQDNSet, identifier.NewDNSSlice([]string{"example.com", "example.net", "example.org"}))
entryKey4 := newFQDNSetBucketKey(CertificatesPerFQDNSet, identifier.ACMEIdentifiers{
identifier.NewIP(netip.MustParseAddr("2602:80a:6000::1")),
identifier.NewIP(netip.MustParseAddr("9.9.9.9")),
identifier.NewDNS("example.com"),
})
l, err = loadAndParseOverrideLimits("testdata/working_overrides_regid_fqdnset.yml")
test.AssertNotError(t, err, "multiple valid override limits with 'fqdnSet' Ids")
test.AssertEquals(t, l[firstEntryKey].burst, int64(40))
test.AssertEquals(t, l[firstEntryKey].count, int64(40))
test.AssertEquals(t, l[firstEntryKey].period.Duration, time.Second)
test.AssertEquals(t, l[secondEntryKey].burst, int64(50))
test.AssertEquals(t, l[secondEntryKey].count, int64(50))
test.AssertEquals(t, l[secondEntryKey].period.Duration, time.Second*2)
test.AssertEquals(t, l[thirdEntryKey].burst, int64(60))
test.AssertEquals(t, l[thirdEntryKey].count, int64(60))
test.AssertEquals(t, l[thirdEntryKey].period.Duration, time.Second*3)
test.AssertEquals(t, l[entryKey1].burst, int64(40))
test.AssertEquals(t, l[entryKey1].count, int64(40))
test.AssertEquals(t, l[entryKey1].period.Duration, time.Second)
test.AssertEquals(t, l[entryKey2].burst, int64(50))
test.AssertEquals(t, l[entryKey2].count, int64(50))
test.AssertEquals(t, l[entryKey2].period.Duration, time.Second*2)
test.AssertEquals(t, l[entryKey3].burst, int64(60))
test.AssertEquals(t, l[entryKey3].count, int64(60))
test.AssertEquals(t, l[entryKey3].period.Duration, time.Second*3)
test.AssertEquals(t, l[entryKey4].burst, int64(60))
test.AssertEquals(t, l[entryKey4].count, int64(60))
test.AssertEquals(t, l[entryKey4].period.Duration, time.Second*4)
// Path is empty string.
_, err = loadAndParseOverrideLimits("")

View File

@ -132,33 +132,33 @@ func (d *Decision) Result(now time.Time) error {
)
case FailedAuthorizationsPerDomainPerAccount:
// Uses bucket key 'enum:regId:domain'.
// Uses bucket key 'enum:regId:identValue'.
idx := strings.LastIndex(d.transaction.bucketKey, ":")
if idx == -1 {
return berrors.InternalServerError("unrecognized bucket key while generating error")
}
domain := d.transaction.bucketKey[idx+1:]
identValue := d.transaction.bucketKey[idx+1:]
return berrors.FailedAuthorizationsPerDomainPerAccountError(
retryAfter,
"too many failed authorizations (%d) for %q in the last %s, retry after %s",
d.transaction.limit.burst,
domain,
identValue,
d.transaction.limit.period.Duration,
retryAfterTs,
)
case CertificatesPerDomain, CertificatesPerDomainPerAccount:
// Uses bucket key 'enum:domain' or 'enum:regId:domain' respectively.
// Uses bucket key 'enum:domainOrCIDR' or 'enum:regId:domainOrCIDR' respectively.
idx := strings.LastIndex(d.transaction.bucketKey, ":")
if idx == -1 {
return berrors.InternalServerError("unrecognized bucket key while generating error")
}
domain := d.transaction.bucketKey[idx+1:]
domainOrCIDR := d.transaction.bucketKey[idx+1:]
return berrors.CertificatesPerDomainError(
retryAfter,
"too many certificates (%d) already issued for %q in the last %s, retry after %s",
d.transaction.limit.burst,
domain,
domainOrCIDR,
d.transaction.limit.period.Duration,
retryAfterTs,
)
@ -166,7 +166,7 @@ func (d *Decision) Result(now time.Time) error {
case CertificatesPerFQDNSet:
return berrors.CertificatesPerFQDNSetError(
retryAfter,
"too many certificates (%d) already issued for this exact set of domains in the last %s, retry after %s",
"too many certificates (%d) already issued for this exact set of identifiers in the last %s, retry after %s",
d.transaction.limit.burst,
d.transaction.limit.period.Duration,
retryAfterTs,

View File

@ -16,9 +16,9 @@ import (
"github.com/letsencrypt/boulder/test"
)
// tenZeroZeroTwo is overridden in 'testdata/working_override.yml' to have
// higher burst and count values.
const tenZeroZeroTwo = "10.0.0.2"
// overriddenIP is overridden in 'testdata/working_override.yml' to have higher
// burst and count values.
const overriddenIP = "64.112.117.1"
// newTestLimiter constructs a new limiter.
func newTestLimiter(t *testing.T, s Source, clk clock.FakeClock) *Limiter {
@ -30,7 +30,7 @@ func newTestLimiter(t *testing.T, s Source, clk clock.FakeClock) *Limiter {
// newTestTransactionBuilder constructs a new *TransactionBuilder with the
// following configuration:
// - 'NewRegistrationsPerIPAddress' burst: 20 count: 20 period: 1s
// - 'NewRegistrationsPerIPAddress:10.0.0.2' burst: 40 count: 40 period: 1s
// - 'NewRegistrationsPerIPAddress:64.112.117.1' burst: 40 count: 40 period: 1s
func newTestTransactionBuilder(t *testing.T) *TransactionBuilder {
c, err := NewTransactionBuilderFromFiles("testdata/working_default.yml", "testdata/working_override.yml")
test.AssertNotError(t, err, "should not error")
@ -60,7 +60,7 @@ func TestLimiter_CheckWithLimitOverrides(t *testing.T) {
testCtx, limiters, txnBuilder, clk, testIP := setup(t)
for name, l := range limiters {
t.Run(name, func(t *testing.T) {
overriddenBucketKey := newIPAddressBucketKey(NewRegistrationsPerIPAddress, netip.MustParseAddr(tenZeroZeroTwo))
overriddenBucketKey := newIPAddressBucketKey(NewRegistrationsPerIPAddress, netip.MustParseAddr(overriddenIP))
overriddenLimit, err := txnBuilder.getLimit(NewRegistrationsPerIPAddress, overriddenBucketKey)
test.AssertNotError(t, err, "should not error")

View File

@ -2,12 +2,11 @@ package ratelimits
import (
"fmt"
"net"
"net/netip"
"strconv"
"strings"
"github.com/letsencrypt/boulder/identifier"
"github.com/letsencrypt/boulder/iana"
"github.com/letsencrypt/boulder/policy"
)
@ -47,13 +46,20 @@ const (
// depending on the context:
// - When referenced in an overrides file: uses bucket key 'enum:regId',
// where regId is the ACME registration Id of the account.
// - When referenced in a transaction: uses bucket key 'enum:regId:domain',
// where regId is the ACME registration Id of the account and domain is a
// domain name in the certificate.
// - When referenced in a transaction: uses bucket key
// 'enum:regId:identValue', where regId is the ACME registration Id of
// the account and identValue is the value of an identifier in the
// certificate.
FailedAuthorizationsPerDomainPerAccount
// CertificatesPerDomain uses bucket key 'enum:domain', where domain is a
// domain name in the certificate.
// CertificatesPerDomain uses bucket key 'enum:domainOrCIDR', where
// domainOrCIDR is a domain name or IP address in the certificate. It uses
// two different IP address formats depending on the context:
// - When referenced in an overrides file: uses a single IP address.
// - When referenced in a transaction: uses an IP address prefix in CIDR
// notation. IPv4 prefixes must be /32, and IPv6 prefixes must be /64.
// In both cases, IPv6 addresses must be the lowest address in their /64;
// i.e. their last 64 bits must be zero.
CertificatesPerDomain
// CertificatesPerDomainPerAccount is only used for per-account overrides to
@@ -62,9 +68,11 @@ const (
// keys depending on the context:
// - When referenced in an overrides file: uses bucket key 'enum:regId',
// where regId is the ACME registration Id of the account.
// - When referenced in a transaction: uses bucket key 'enum:regId:domain',
// where regId is the ACME registration Id of the account and domain is a
// domain name in the certificate.
// - When referenced in a transaction: uses bucket key
// 'enum:regId:domainOrCIDR', where regId is the ACME registration Id of
// the account and domainOrCIDR is either a domain name in the
// certificate or an IP prefix in CIDR notation.
// - IP address formats vary by context, as for CertificatesPerDomain.
//
// When overrides to the CertificatesPerDomainPerAccount are configured for a
// subscriber, the cost:
@@ -73,10 +81,10 @@ const (
CertificatesPerDomainPerAccount
// CertificatesPerFQDNSet uses bucket key 'enum:fqdnSet', where fqdnSet is a
// hashed set of unique eTLD+1 domain names in the certificate.
// hashed set of unique identifier values in the certificate.
//
// Note: When this is referenced in an overrides file, the fqdnSet MUST be
// passed as a comma-separated list of domain names.
// passed as a comma-separated list of identifier values.
CertificatesPerFQDNSet
// FailedAuthorizationsForPausingPerDomainPerAccount is similar to
@@ -84,9 +92,10 @@ const (
// bucket keys depending on the context:
// - When referenced in an overrides file: uses bucket key 'enum:regId',
// where regId is the ACME registration Id of the account.
// - When referenced in a transaction: uses bucket key 'enum:regId:domain',
// where regId is the ACME registration Id of the account and domain is a
// domain name in the certificate.
// - When referenced in a transaction: uses bucket key
// 'enum:regId:identValue', where regId is the ACME registration Id of
// the account and identValue is the value of an identifier in the
// certificate.
FailedAuthorizationsForPausingPerDomainPerAccount
)
@@ -127,29 +136,38 @@ func (n Name) EnumString() string {
// validIPAddress validates that the provided string is a valid IP address.
func validIPAddress(id string) error {
_, err := netip.ParseAddr(id)
ip, err := netip.ParseAddr(id)
if err != nil {
return fmt.Errorf("invalid IP address, %q must be an IP address", id)
}
return nil
canon := ip.String()
if canon != id {
return fmt.Errorf(
"invalid IP address, %q must be in canonical form (%q)", id, canon)
}
return iana.IsReservedAddr(ip)
}
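
The canonical-form comparison above leans on the fact that `netip.Addr.String` always emits RFC 5952 output, so re-serializing the parsed address and comparing it to the input catches any non-canonical spelling. A minimal sketch of that check, without the reserved-address lookup (`iana.IsReservedAddr` is elsewhere in Boulder and not reproduced here):

```go
package main

import (
	"fmt"
	"net/netip"
)

// canonicalCheck mirrors the canonical-form half of validIPAddress:
// parse the input, then require that re-serializing it reproduces the
// input exactly. (The real function also rejects reserved addresses.)
func canonicalCheck(id string) error {
	ip, err := netip.ParseAddr(id)
	if err != nil {
		return fmt.Errorf("%q is not an IP address: %w", id, err)
	}
	if canon := ip.String(); canon != id {
		return fmt.Errorf("%q is not canonical, want %q", id, canon)
	}
	return nil
}

func main() {
	// Canonical IPv4 passes; an IPv6 address with leading zeros and
	// uncompressed zero groups does not.
	fmt.Println(canonicalCheck("64.112.117.1"))
	fmt.Println(canonicalCheck("2001:0db8:85a3:0000:0000:8a2e:0370:7334"))
}
```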
// validIPv6RangeCIDR validates that the provided string is formatted is an IPv6
// CIDR range with a /48 mask.
// validIPv6RangeCIDR validates that the provided string is formatted as an IPv6
// prefix in CIDR notation, with a /48 mask.
func validIPv6RangeCIDR(id string) error {
_, ipNet, err := net.ParseCIDR(id)
prefix, err := netip.ParsePrefix(id)
if err != nil {
return fmt.Errorf(
"invalid CIDR, %q must be an IPv6 CIDR range", id)
}
ones, _ := ipNet.Mask.Size()
if ones != 48 {
if prefix.Bits() != 48 {
// This also catches the case where the range is an IPv4 CIDR, since an
// IPv4 CIDR can't have a /48 subnet mask - the maximum is /32.
return fmt.Errorf(
"invalid CIDR, %q must be /48", id)
}
return nil
canon := prefix.Masked().String()
if canon != id {
return fmt.Errorf(
"invalid CIDR, %q must be in canonical form (%q)", id, canon)
}
return iana.IsReservedPrefix(prefix)
}
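
The rewritten validIPv6RangeCIDR enforces three rules in order: the input parses as a prefix, the mask is exactly /48, and masking then re-serializing reproduces the input. A stdlib-only sketch of those rules (the reserved-prefix check via `iana.IsReservedPrefix` is omitted here):

```go
package main

import (
	"fmt"
	"net/netip"
)

// checkIPv6Range applies the same parse, /48, and canonical-form rules
// as validIPv6RangeCIDR, minus the reserved-prefix lookup.
func checkIPv6Range(id string) error {
	prefix, err := netip.ParsePrefix(id)
	if err != nil {
		// IPv4 inputs like "10.0.0.0/48" also land here: /48 is out
		// of range for a 32-bit address.
		return fmt.Errorf("%q must be an IPv6 CIDR range", id)
	}
	if prefix.Bits() != 48 {
		return fmt.Errorf("%q must be /48", id)
	}
	if canon := prefix.Masked().String(); canon != id {
		return fmt.Errorf("%q must be in canonical form (%q)", id, canon)
	}
	return nil
}

func main() {
	fmt.Println(checkIPv6Range("2602:80a:6000::/48"))   // passes all three rules
	fmt.Println(checkIPv6Range("2602:080a:6000::1/48")) // leading zero and host bits set
	fmt.Println(checkIPv6Range("10.0.0.0/48"))          // IPv4 can't carry a /48
}
```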
// validateRegId validates that the provided string is a valid ACME regId.
@@ -161,49 +179,100 @@ func validateRegId(id string) error {
return nil
}
// validateDomain validates that the provided string is formatted 'domain',
// where domain is a domain name.
func validateDomain(id string) error {
err := policy.ValidDomain(id)
// validateRegIdIdentValue validates that the provided string is formatted
// 'regId:identValue', where regId is an ACME registration Id and identValue is
// a valid identifier value.
func validateRegIdIdentValue(id string) error {
regIdIdentValue := strings.Split(id, ":")
if len(regIdIdentValue) != 2 {
return fmt.Errorf(
"invalid regId:identValue, %q must be formatted 'regId:identValue'", id)
}
err := validateRegId(regIdIdentValue[0])
if err != nil {
return fmt.Errorf("invalid domain, %q must be formatted 'domain': %w", id, err)
return fmt.Errorf(
"invalid regId, %q must be formatted 'regId:identValue'", id)
}
domainErr := policy.ValidDomain(regIdIdentValue[1])
if domainErr != nil {
ipErr := policy.ValidIP(regIdIdentValue[1])
if ipErr != nil {
return fmt.Errorf("invalid identValue, %q must be formatted 'regId:identValue': %w as domain, %w as IP", id, domainErr, ipErr)
}
}
return nil
}
// validateRegIdDomain validates that the provided string is formatted
// 'regId:domain', where regId is an ACME registration Id and domain is a domain
// name.
func validateRegIdDomain(id string) error {
regIdDomain := strings.Split(id, ":")
if len(regIdDomain) != 2 {
return fmt.Errorf(
"invalid regId:domain, %q must be formatted 'regId:domain'", id)
// validateDomainOrCIDR validates that the provided string is either a domain
// name or an IP address. IPv6 addresses must be the lowest address in their
// /64, i.e. their last 64 bits must be zero.
func validateDomainOrCIDR(id string) error {
domainErr := policy.ValidDomain(id)
if domainErr == nil {
// This is a valid domain.
return nil
}
err := validateRegId(regIdDomain[0])
ip, ipErr := netip.ParseAddr(id)
if ipErr != nil {
return fmt.Errorf("%q is neither a domain (%w) nor an IP address (%w)", id, domainErr, ipErr)
}
if ip.String() != id {
return fmt.Errorf("invalid IP address %q, must be in canonical form (%q)", id, ip.String())
}
prefix, prefixErr := coveringPrefix(ip)
if prefixErr != nil {
return fmt.Errorf("invalid IP address %q, couldn't determine prefix: %w", id, prefixErr)
}
if prefix.Addr() != ip {
return fmt.Errorf("invalid IP address %q, must be the lowest address in its prefix (%q)", id, prefix.Addr().String())
}
return iana.IsReservedPrefix(prefix)
}
// validateRegIdDomainOrCIDR validates that the provided string is formatted
// 'regId:domainOrCIDR', where domainOrCIDR is either a domain name or an IP
// address. IPv6 addresses must be the lowest address in their /64, i.e. their
// last 64 bits must be zero.
func validateRegIdDomainOrCIDR(id string) error {
regIdDomainOrCIDR := strings.Split(id, ":")
if len(regIdDomainOrCIDR) != 2 {
return fmt.Errorf(
"invalid regId:domainOrCIDR, %q must be formatted 'regId:domainOrCIDR'", id)
}
err := validateRegId(regIdDomainOrCIDR[0])
if err != nil {
return fmt.Errorf(
"invalid regId, %q must be formatted 'regId:domain'", id)
"invalid regId, %q must be formatted 'regId:domainOrCIDR'", id)
}
err = policy.ValidDomain(regIdDomain[1])
err = validateDomainOrCIDR(regIdDomainOrCIDR[1])
if err != nil {
return fmt.Errorf(
"invalid domain, %q must be formatted 'regId:domain': %w", id, err)
return fmt.Errorf("invalid domainOrCIDR, %q must be formatted 'regId:domainOrCIDR': %w", id, err)
}
return nil
}
// validateFQDNSet validates that the provided string is formatted 'fqdnSet',
// where fqdnSet is a comma-separated list of domain names.
//
// TODO(#7311): Support non-DNS identifiers.
// where fqdnSet is a comma-separated list of identifier values.
func validateFQDNSet(id string) error {
domains := strings.Split(id, ",")
if len(domains) == 0 {
values := strings.Split(id, ",")
if len(values) == 0 {
return fmt.Errorf(
"invalid fqdnSet, %q must be formatted 'fqdnSet'", id)
}
return policy.WellFormedIdentifiers(identifier.NewDNSSlice(domains))
for _, value := range values {
domainErr := policy.ValidDomain(value)
if domainErr != nil {
ipErr := policy.ValidIP(value)
if ipErr != nil {
return fmt.Errorf("invalid fqdnSet member %q: %w as domain, %w as IP", id, domainErr, ipErr)
}
}
}
return nil
}
func validateIdForName(name Name, id string) error {
@@ -222,8 +291,8 @@ func validateIdForName(name Name, id string) error {
case FailedAuthorizationsPerDomainPerAccount:
if strings.Contains(id, ":") {
// 'enum:regId:domain' for transaction
return validateRegIdDomain(id)
// 'enum:regId:identValue' for transaction
return validateRegIdIdentValue(id)
} else {
// 'enum:regId' for overrides
return validateRegId(id)
@@ -231,16 +300,16 @@ func validateIdForName(name Name, id string) error {
case CertificatesPerDomainPerAccount:
if strings.Contains(id, ":") {
// 'enum:regId:domain' for transaction
return validateRegIdDomain(id)
// 'enum:regId:domainOrCIDR' for transaction
return validateRegIdDomainOrCIDR(id)
} else {
// 'enum:regId' for overrides
return validateRegId(id)
}
case CertificatesPerDomain:
// 'enum:domain'
return validateDomain(id)
// 'enum:domainOrCIDR'
return validateDomainOrCIDR(id)
case CertificatesPerFQDNSet:
// 'enum:fqdnSet'
@@ -248,8 +317,8 @@ func validateIdForName(name Name, id string) error {
case FailedAuthorizationsForPausingPerDomainPerAccount:
if strings.Contains(id, ":") {
// 'enum:regId:domain' for transaction
return validateRegIdDomain(id)
// 'enum:regId:identValue' for transaction
return validateRegIdIdentValue(id)
} else {
// 'enum:regId' for overrides
return validateRegId(id)


@@ -41,12 +41,24 @@ func TestValidateIdForName(t *testing.T) {
{
limit: NewRegistrationsPerIPAddress,
desc: "valid IPv4 address",
id: "64.112.117.1",
},
{
limit: NewRegistrationsPerIPAddress,
desc: "reserved IPv4 address",
id: "10.0.0.1",
err: "in a reserved address block",
},
{
limit: NewRegistrationsPerIPAddress,
desc: "valid IPv6 address",
id: "2602:80a:6000::42:42",
},
{
limit: NewRegistrationsPerIPAddress,
desc: "IPv6 address in non-canonical form",
id: "2001:0db8:85a3:0000:0000:8a2e:0370:7334",
err: "must be in canonical form",
},
{
limit: NewRegistrationsPerIPAddress,
@@ -75,7 +87,19 @@ func TestValidateIdForName(t *testing.T) {
{
limit: NewRegistrationsPerIPv6Range,
desc: "valid IPv6 address range",
id: "2001:0db8:0000::/48",
id: "2602:80a:6000::/48",
},
{
limit: NewRegistrationsPerIPv6Range,
desc: "IPv6 address range in non-canonical form",
id: "2602:080a:6000::/48",
err: "must be in canonical form",
},
{
limit: NewRegistrationsPerIPv6Range,
desc: "IPv6 address range with low bits set",
id: "2602:080a:6000::1/48",
err: "must be in canonical form",
},
{
limit: NewRegistrationsPerIPv6Range,
@@ -95,6 +119,12 @@ func TestValidateIdForName(t *testing.T) {
id: "10.0.0.0/16",
err: "must be /48",
},
{
limit: NewRegistrationsPerIPv6Range,
desc: "IPv4 CIDR with invalid long mask",
id: "10.0.0.0/48",
err: "must be an IPv6 CIDR range",
},
{
limit: NewOrdersPerAccount,
desc: "valid regId",
@@ -195,6 +225,22 @@ func TestValidateIdForName(t *testing.T) {
desc: "valid domain",
id: "example.com",
},
{
limit: CertificatesPerDomain,
desc: "valid IPv4 address",
id: "64.112.117.1",
},
{
limit: CertificatesPerDomain,
desc: "valid IPv6 address",
id: "2602:80a:6000::",
},
{
limit: CertificatesPerDomain,
desc: "IPv6 address with subnet",
id: "2602:80a:6000::/64",
err: "nor an IP address",
},
{
limit: CertificatesPerDomain,
desc: "malformed domain",
@@ -212,11 +258,26 @@ func TestValidateIdForName(t *testing.T) {
desc: "valid fqdnSet containing a single domain",
id: "example.com",
},
{
limit: CertificatesPerFQDNSet,
desc: "valid fqdnSet containing a single IPv4 address",
id: "64.112.117.1",
},
{
limit: CertificatesPerFQDNSet,
desc: "valid fqdnSet containing a single IPv6 address",
id: "2602:80a:6000::1",
},
{
limit: CertificatesPerFQDNSet,
desc: "valid fqdnSet containing multiple domains",
id: "example.com,example.org",
},
{
limit: CertificatesPerFQDNSet,
desc: "valid fqdnSet containing multiple domains and IPs",
id: "2602:80a:6000::1,64.112.117.1,example.com,example.org",
},
}
for _, tc := range testCases {


@@ -3,5 +3,5 @@
count: 40
period: 1s
ids:
- id: 10.0.0.2
- id: 64.112.117.1
comment: Foo


@@ -3,14 +3,14 @@
count: 40
period: 1s
ids:
- id: 10.0.0.2
- id: 64.112.117.1
comment: Foo
- NewRegistrationsPerIPv6Range:
burst: 50
count: 50
period: 2s
ids:
- id: 2001:0db8:0000::/48
- id: 2602:80a:6000::/48
comment: Foo
- FailedAuthorizationsPerDomainPerAccount:
burst: 60


@@ -19,3 +19,10 @@
ids:
- id: "example.com,example.net,example.org"
comment: Foo
- CertificatesPerFQDNSet:
burst: 60
count: 60
period: 4s
ids:
- id: "2602:80a:6000::1,9.9.9.9,example.com"
comment: Foo


@@ -5,6 +5,9 @@ import (
"fmt"
"net/netip"
"strconv"
"github.com/letsencrypt/boulder/core"
"github.com/letsencrypt/boulder/identifier"
)
// ErrInvalidCost indicates that the cost specified was < 0.
@@ -38,23 +41,23 @@ func newRegIdBucketKey(name Name, regId int64) string {
return joinWithColon(name.EnumString(), strconv.FormatInt(regId, 10))
}
// newDomainBucketKey validates and returns a bucketKey for limits that use the
// 'enum:domain' bucket key format.
func newDomainBucketKey(name Name, orderName string) string {
return joinWithColon(name.EnumString(), orderName)
// newDomainOrCIDRBucketKey validates and returns a bucketKey for limits that use
// the 'enum:domainOrCIDR' bucket key formats.
func newDomainOrCIDRBucketKey(name Name, domainOrCIDR string) string {
return joinWithColon(name.EnumString(), domainOrCIDR)
}
// NewRegIdDomainBucketKey validates and returns a bucketKey for limits that use
// the 'enum:regId:domain' bucket key format. This function is exported for use
// in ra.resetAccountPausingLimit.
func NewRegIdDomainBucketKey(name Name, regId int64, orderName string) string {
return joinWithColon(name.EnumString(), strconv.FormatInt(regId, 10), orderName)
// NewRegIdIdentValueBucketKey returns a bucketKey for limits that use the
// 'enum:regId:identValue' bucket key format. This function is exported for use
// by the RA when resetting the account pausing limit.
func NewRegIdIdentValueBucketKey(name Name, regId int64, orderIdent string) string {
return joinWithColon(name.EnumString(), strconv.FormatInt(regId, 10), orderIdent)
}
// newFQDNSetBucketKey validates and returns a bucketKey for limits that use the
// 'enum:fqdnSet' bucket key format.
func newFQDNSetBucketKey(name Name, orderNames []string) string { //nolint: unparam // Only one named rate limit uses this helper
return joinWithColon(name.EnumString(), fmt.Sprintf("%x", hashNames(orderNames)))
func newFQDNSetBucketKey(name Name, orderIdents identifier.ACMEIdentifiers) string { //nolint: unparam // Only one named rate limit uses this helper
return joinWithColon(name.EnumString(), fmt.Sprintf("%x", core.HashIdentifiers(orderIdents)))
}
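
The bucket keys these helpers build are plain colon-joined strings. A sketch of the shapes described above, using a hypothetical enum string "6" and regId 1337 (the real EnumString values are defined elsewhere in the package and are not shown in this diff):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// joinWithColon mimics the package's internal helper of the same name:
// it simply joins its arguments with ":".
func joinWithColon(args ...string) string {
	return strings.Join(args, ":")
}

func main() {
	enum, regId := "6", int64(1337) // hypothetical enum string and account id

	// 'enum:regId' — used by overrides for per-account limits.
	fmt.Println(joinWithColon(enum, strconv.FormatInt(regId, 10)))

	// 'enum:regId:identValue' — used by transactions.
	fmt.Println(joinWithColon(enum, strconv.FormatInt(regId, 10), "example.com"))

	// 'enum:domainOrCIDR' — used by CertificatesPerDomain buckets.
	fmt.Println(joinWithColon(enum, "2602:80a:6000::/64"))
}
```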
// Transaction represents a single rate limit operation. It includes a
@@ -223,12 +226,12 @@ func (builder *TransactionBuilder) ordersPerAccountTransaction(regId int64) (Tra
}
// FailedAuthorizationsPerDomainPerAccountCheckOnlyTransactions returns a slice
// of Transactions for the provided order domain names. An error is returned if
// any of the order domain names are invalid. This method should be used for
// checking capacity, before allowing more authorizations to be created.
// of Transactions for the provided order identifiers. An error is returned if
// any of the order identifiers' values are invalid. This method should be used
// for checking capacity, before allowing more authorizations to be created.
//
// Precondition: len(orderDomains) < maxNames.
func (builder *TransactionBuilder) FailedAuthorizationsPerDomainPerAccountCheckOnlyTransactions(regId int64, orderDomains []string) ([]Transaction, error) {
// Precondition: len(orderIdents) < maxNames.
func (builder *TransactionBuilder) FailedAuthorizationsPerDomainPerAccountCheckOnlyTransactions(regId int64, orderIdents identifier.ACMEIdentifiers) ([]Transaction, error) {
// FailedAuthorizationsPerDomainPerAccount limit uses the 'enum:regId'
// bucket key format for overrides.
perAccountBucketKey := newRegIdBucketKey(FailedAuthorizationsPerDomainPerAccount, regId)
@@ -241,15 +244,14 @@ func (builder *TransactionBuilder) FailedAuthorizationsPerDomainPerAccountCheckO
}
var txns []Transaction
for _, name := range orderDomains {
for _, ident := range orderIdents {
// FailedAuthorizationsPerDomainPerAccount limit uses the
// 'enum:regId:domain' bucket key format for transactions.
perDomainPerAccountBucketKey := NewRegIdDomainBucketKey(FailedAuthorizationsPerDomainPerAccount, regId, name)
// 'enum:regId:identValue' bucket key format for transactions.
perIdentValuePerAccountBucketKey := NewRegIdIdentValueBucketKey(FailedAuthorizationsPerDomainPerAccount, regId, ident.Value)
// Add a check-only transaction for each per domain per account bucket.
// The cost is 0, as we are only checking that the account and domain
// pair aren't already over the limit.
txn, err := newCheckOnlyTransaction(limit, perDomainPerAccountBucketKey, 1)
// Add a check-only transaction for each per identValue per account
// bucket.
txn, err := newCheckOnlyTransaction(limit, perIdentValuePerAccountBucketKey, 1)
if err != nil {
return nil, err
}
@@ -259,10 +261,10 @@ func (builder *TransactionBuilder) FailedAuthorizationsPerDomainPerAccountCheckO
}
// FailedAuthorizationsPerDomainPerAccountSpendOnlyTransaction returns a spend-
// only Transaction for the provided order domain name. An error is returned if
// the order domain name is invalid. This method should be used for spending
// capacity, as a result of a failed authorization.
func (builder *TransactionBuilder) FailedAuthorizationsPerDomainPerAccountSpendOnlyTransaction(regId int64, orderDomain string) (Transaction, error) {
// only Transaction for the provided order identifier. An error is returned if
// the order identifier's value is invalid. This method should be used for
// spending capacity, as a result of a failed authorization.
func (builder *TransactionBuilder) FailedAuthorizationsPerDomainPerAccountSpendOnlyTransaction(regId int64, orderIdent identifier.ACMEIdentifier) (Transaction, error) {
// FailedAuthorizationsPerDomainPerAccount limit uses the 'enum:regId'
// bucket key format for overrides.
perAccountBucketKey := newRegIdBucketKey(FailedAuthorizationsPerDomainPerAccount, regId)
@@ -275,9 +277,9 @@ func (builder *TransactionBuilder) FailedAuthorizationsPerDomainPerAccountSpendO
}
// FailedAuthorizationsPerDomainPerAccount limit uses the
// 'enum:regId:domain' bucket key format for transactions.
perDomainPerAccountBucketKey := NewRegIdDomainBucketKey(FailedAuthorizationsPerDomainPerAccount, regId, orderDomain)
txn, err := newSpendOnlyTransaction(limit, perDomainPerAccountBucketKey, 1)
// 'enum:regId:identValue' bucket key format for transactions.
perIdentValuePerAccountBucketKey := NewRegIdIdentValueBucketKey(FailedAuthorizationsPerDomainPerAccount, regId, orderIdent.Value)
txn, err := newSpendOnlyTransaction(limit, perIdentValuePerAccountBucketKey, 1)
if err != nil {
return Transaction{}, err
}
@@ -286,10 +288,10 @@ func (builder *TransactionBuilder) FailedAuthorizationsPerDomainPerAccountSpendO
}
// FailedAuthorizationsForPausingPerDomainPerAccountTransaction returns a
// Transaction for the provided order domain name. An error is returned if
// the order domain name is invalid. This method should be used for spending
// Transaction for the provided order identifier. An error is returned if the
// order identifier's value is invalid. This method should be used for spending
// capacity, as a result of a failed authorization.
func (builder *TransactionBuilder) FailedAuthorizationsForPausingPerDomainPerAccountTransaction(regId int64, orderDomain string) (Transaction, error) {
func (builder *TransactionBuilder) FailedAuthorizationsForPausingPerDomainPerAccountTransaction(regId int64, orderIdent identifier.ACMEIdentifier) (Transaction, error) {
// FailedAuthorizationsForPausingPerDomainPerAccount limit uses the 'enum:regId'
// bucket key format for overrides.
perAccountBucketKey := newRegIdBucketKey(FailedAuthorizationsForPausingPerDomainPerAccount, regId)
@@ -302,9 +304,9 @@ func (builder *TransactionBuilder) FailedAuthorizationsForPausingPerDomainPerAcc
}
// FailedAuthorizationsForPausingPerDomainPerAccount limit uses the
// 'enum:regId:domain' bucket key format for transactions.
perDomainPerAccountBucketKey := NewRegIdDomainBucketKey(FailedAuthorizationsForPausingPerDomainPerAccount, regId, orderDomain)
txn, err := newTransaction(limit, perDomainPerAccountBucketKey, 1)
// 'enum:regId:identValue' bucket key format for transactions.
perIdentValuePerAccountBucketKey := NewRegIdIdentValueBucketKey(FailedAuthorizationsForPausingPerDomainPerAccount, regId, orderIdent.Value)
txn, err := newTransaction(limit, perIdentValuePerAccountBucketKey, 1)
if err != nil {
return Transaction{}, err
}
@@ -313,18 +315,19 @@ func (builder *TransactionBuilder) FailedAuthorizationsForPausingPerDomainPerAcc
}
// certificatesPerDomainCheckOnlyTransactions returns a slice of Transactions
// for the provided order domain names. An error is returned if any of the order
// domain names are invalid. This method should be used for checking capacity,
// before allowing more orders to be created. If a CertificatesPerDomainPerAccount
// override is active, a check-only Transaction is created for each per account
// per domain bucket. Otherwise, a check-only Transaction is generated for each
// global per domain bucket. This method should be used for checking capacity,
// before allowing more orders to be created.
// for the provided order identifiers. It returns an error if any of the order
// identifiers' values are invalid. This method should be used for checking
// capacity, before allowing more orders to be created. If a
// CertificatesPerDomainPerAccount override is active, a check-only Transaction
// is created for each per account per domainOrCIDR bucket. Otherwise, a
// check-only Transaction is generated for each global per domainOrCIDR bucket.
// This method should be used for checking capacity, before allowing more orders
// to be created.
//
// Precondition: All orderDomains must comply with policy.WellFormedDomainNames.
func (builder *TransactionBuilder) certificatesPerDomainCheckOnlyTransactions(regId int64, orderDomains []string) ([]Transaction, error) {
if len(orderDomains) > 100 {
return nil, fmt.Errorf("unwilling to process more than 100 rate limit transactions, got %d", len(orderDomains))
// Precondition: All orderIdents must comply with policy.WellFormedIdentifiers.
func (builder *TransactionBuilder) certificatesPerDomainCheckOnlyTransactions(regId int64, orderIdents identifier.ACMEIdentifiers) ([]Transaction, error) {
if len(orderIdents) > 100 {
return nil, fmt.Errorf("unwilling to process more than 100 rate limit transactions, got %d", len(orderIdents))
}
perAccountLimitBucketKey := newRegIdBucketKey(CertificatesPerDomainPerAccount, regId)
@@ -342,17 +345,22 @@ func (builder *TransactionBuilder) certificatesPerDomainCheckOnlyTransactions(re
}
}
coveringIdents, err := coveringIdentifiers(orderIdents)
if err != nil {
return nil, err
}
var txns []Transaction
for _, name := range FQDNsToETLDsPlusOne(orderDomains) {
perDomainBucketKey := newDomainBucketKey(CertificatesPerDomain, name)
for _, ident := range coveringIdents {
perDomainOrCIDRBucketKey := newDomainOrCIDRBucketKey(CertificatesPerDomain, ident)
if accountOverride {
if !perAccountLimit.isOverride {
return nil, fmt.Errorf("shouldn't happen: CertificatesPerDomainPerAccount limit is not an override")
}
perAccountPerDomainKey := NewRegIdDomainBucketKey(CertificatesPerDomainPerAccount, regId, name)
// Add a check-only transaction for each per account per domain
perAccountPerDomainOrCIDRBucketKey := NewRegIdIdentValueBucketKey(CertificatesPerDomainPerAccount, regId, ident)
// Add a check-only transaction for each per account per identValue
// bucket.
txn, err := newCheckOnlyTransaction(perAccountLimit, perAccountPerDomainKey, 1)
txn, err := newCheckOnlyTransaction(perAccountLimit, perAccountPerDomainOrCIDRBucketKey, 1)
if err != nil {
if errors.Is(err, errLimitDisabled) {
continue
@@ -361,17 +369,17 @@ func (builder *TransactionBuilder) certificatesPerDomainCheckOnlyTransactions(re
}
txns = append(txns, txn)
} else {
// Use the per domain bucket key when no per account per domain override
// is configured.
perDomainLimit, err := builder.getLimit(CertificatesPerDomain, perDomainBucketKey)
// Use the per domainOrCIDR bucket key when no per account per
// domainOrCIDR override is configured.
perDomainOrCIDRLimit, err := builder.getLimit(CertificatesPerDomain, perDomainOrCIDRBucketKey)
if err != nil {
if errors.Is(err, errLimitDisabled) {
continue
}
return nil, err
}
// Add a check-only transaction for each per domain bucket.
txn, err := newCheckOnlyTransaction(perDomainLimit, perDomainBucketKey, 1)
// Add a check-only transaction for each per domainOrCIDR bucket.
txn, err := newCheckOnlyTransaction(perDomainOrCIDRLimit, perDomainOrCIDRBucketKey, 1)
if err != nil {
return nil, err
}
@@ -382,22 +390,23 @@ func (builder *TransactionBuilder) certificatesPerDomainCheckOnlyTransactions(re
}
// CertificatesPerDomainSpendOnlyTransactions returns a slice of Transactions
// for the specified order domain names. It returns an error if any domain names
// are invalid. If a CertificatesPerDomainPerAccount override is configured, it
// generates two types of Transactions:
// - A spend-only Transaction for each per-account, per-domain bucket, which
// enforces the limit on certificates issued per domain for each account.
// - A spend-only Transaction for each per-domain bucket, which enforces the
// global limit on certificates issued per domain.
// for the provided order identifiers. It returns an error if any of the order
// identifiers' values are invalid. If a CertificatesPerDomainPerAccount
// override is configured, it generates two types of Transactions:
// - A spend-only Transaction for each per-account, per-domainOrCIDR bucket,
// which enforces the limit on certificates issued per domainOrCIDR for
// each account.
// - A spend-only Transaction for each per-domainOrCIDR bucket, which
// enforces the global limit on certificates issued per domainOrCIDR.
//
// If no CertificatesPerDomainPerAccount override is present, it returns a
// spend-only Transaction for each global per-domain bucket. This method should
// be used for spending capacity, when a certificate is issued.
// spend-only Transaction for each global per-domainOrCIDR bucket. This method
// should be used for spending capacity, when a certificate is issued.
//
// Precondition: orderDomains must all pass policy.WellFormedDomainNames.
func (builder *TransactionBuilder) CertificatesPerDomainSpendOnlyTransactions(regId int64, orderDomains []string) ([]Transaction, error) {
if len(orderDomains) > 100 {
return nil, fmt.Errorf("unwilling to process more than 100 rate limit transactions, got %d", len(orderDomains))
// Precondition: orderIdents must all pass policy.WellFormedIdentifiers.
func (builder *TransactionBuilder) CertificatesPerDomainSpendOnlyTransactions(regId int64, orderIdents identifier.ACMEIdentifiers) ([]Transaction, error) {
if len(orderIdents) > 100 {
return nil, fmt.Errorf("unwilling to process more than 100 rate limit transactions, got %d", len(orderIdents))
}
perAccountLimitBucketKey := newRegIdBucketKey(CertificatesPerDomainPerAccount, regId)
@@ -415,23 +424,28 @@ func (builder *TransactionBuilder) CertificatesPerDomainSpendOnlyTransactions(re
}
}
coveringIdents, err := coveringIdentifiers(orderIdents)
if err != nil {
return nil, err
}
var txns []Transaction
for _, name := range FQDNsToETLDsPlusOne(orderDomains) {
perDomainBucketKey := newDomainBucketKey(CertificatesPerDomain, name)
for _, ident := range coveringIdents {
perDomainOrCIDRBucketKey := newDomainOrCIDRBucketKey(CertificatesPerDomain, ident)
if accountOverride {
if !perAccountLimit.isOverride {
return nil, fmt.Errorf("shouldn't happen: CertificatesPerDomainPerAccount limit is not an override")
}
perAccountPerDomainKey := NewRegIdDomainBucketKey(CertificatesPerDomainPerAccount, regId, name)
// Add a spend-only transaction for each per account per domain
// bucket.
txn, err := newSpendOnlyTransaction(perAccountLimit, perAccountPerDomainKey, 1)
perAccountPerDomainOrCIDRBucketKey := NewRegIdIdentValueBucketKey(CertificatesPerDomainPerAccount, regId, ident)
// Add a spend-only transaction for each per account per
// domainOrCIDR bucket.
txn, err := newSpendOnlyTransaction(perAccountLimit, perAccountPerDomainOrCIDRBucketKey, 1)
if err != nil {
return nil, err
}
txns = append(txns, txn)
perDomainLimit, err := builder.getLimit(CertificatesPerDomain, perDomainBucketKey)
perDomainOrCIDRLimit, err := builder.getLimit(CertificatesPerDomain, perDomainOrCIDRBucketKey)
if err != nil {
if errors.Is(err, errLimitDisabled) {
continue
@@ -439,24 +453,24 @@ func (builder *TransactionBuilder) CertificatesPerDomainSpendOnlyTransactions(re
return nil, err
}
// Add a spend-only transaction for each per domain bucket.
txn, err = newSpendOnlyTransaction(perDomainLimit, perDomainBucketKey, 1)
// Add a spend-only transaction for each per domainOrCIDR bucket.
txn, err = newSpendOnlyTransaction(perDomainOrCIDRLimit, perDomainOrCIDRBucketKey, 1)
if err != nil {
return nil, err
}
txns = append(txns, txn)
} else {
// Use the per domain bucket key when no per account per domain
// override is configured.
perDomainLimit, err := builder.getLimit(CertificatesPerDomain, perDomainBucketKey)
// Use the per domainOrCIDR bucket key when no per account per
// domainOrCIDR override is configured.
perDomainOrCIDRLimit, err := builder.getLimit(CertificatesPerDomain, perDomainOrCIDRBucketKey)
if err != nil {
if errors.Is(err, errLimitDisabled) {
continue
}
return nil, err
}
// Add a spend-only transaction for each per domain bucket.
txn, err := newSpendOnlyTransaction(perDomainLimit, perDomainBucketKey, 1)
// Add a spend-only transaction for each per domainOrCIDR bucket.
txn, err := newSpendOnlyTransaction(perDomainOrCIDRLimit, perDomainOrCIDRBucketKey, 1)
if err != nil {
return nil, err
}
@@ -467,10 +481,10 @@ func (builder *TransactionBuilder) CertificatesPerDomainSpendOnlyTransactions(re
}
// certificatesPerFQDNSetCheckOnlyTransaction returns a check-only Transaction
// for the provided order domain names. This method should only be used for
// for the provided order identifiers. This method should only be used for
// checking capacity, before allowing more orders to be created.
func (builder *TransactionBuilder) certificatesPerFQDNSetCheckOnlyTransaction(orderNames []string) (Transaction, error) {
bucketKey := newFQDNSetBucketKey(CertificatesPerFQDNSet, orderNames)
func (builder *TransactionBuilder) certificatesPerFQDNSetCheckOnlyTransaction(orderIdents identifier.ACMEIdentifiers) (Transaction, error) {
bucketKey := newFQDNSetBucketKey(CertificatesPerFQDNSet, orderIdents)
limit, err := builder.getLimit(CertificatesPerFQDNSet, bucketKey)
if err != nil {
if errors.Is(err, errLimitDisabled) {
@@ -482,10 +496,10 @@ func (builder *TransactionBuilder) certificatesPerFQDNSetCheckOnlyTransaction(or
}
// CertificatesPerFQDNSetSpendOnlyTransaction returns a spend-only Transaction
// for the provided order domain names. This method should only be used for
// for the provided order identifiers. This method should only be used for
// spending capacity, when a certificate is issued.
func (builder *TransactionBuilder) CertificatesPerFQDNSetSpendOnlyTransaction(orderNames []string) (Transaction, error) {
bucketKey := newFQDNSetBucketKey(CertificatesPerFQDNSet, orderNames)
func (builder *TransactionBuilder) CertificatesPerFQDNSetSpendOnlyTransaction(orderIdents identifier.ACMEIdentifiers) (Transaction, error) {
bucketKey := newFQDNSetBucketKey(CertificatesPerFQDNSet, orderIdents)
limit, err := builder.getLimit(CertificatesPerFQDNSet, bucketKey)
if err != nil {
if errors.Is(err, errLimitDisabled) {
@ -500,9 +514,9 @@ func (builder *TransactionBuilder) CertificatesPerFQDNSetSpendOnlyTransaction(or
// returns the set of rate limit transactions that should be evaluated before
// allowing the request to proceed.
//
// Precondition: names must be a list of DNS names that all pass
// policy.WellFormedDomainNames.
func (builder *TransactionBuilder) NewOrderLimitTransactions(regId int64, names []string, isRenewal bool) ([]Transaction, error) {
// Precondition: idents must be a list of identifiers that all pass
// policy.WellFormedIdentifiers.
func (builder *TransactionBuilder) NewOrderLimitTransactions(regId int64, idents identifier.ACMEIdentifiers, isRenewal bool) ([]Transaction, error) {
makeTxnError := func(err error, limit Name) error {
return fmt.Errorf("error constructing rate limit transaction for %s rate limit: %w", limit, err)
}
@ -516,21 +530,21 @@ func (builder *TransactionBuilder) NewOrderLimitTransactions(regId int64, names
transactions = append(transactions, txn)
}
txns, err := builder.FailedAuthorizationsPerDomainPerAccountCheckOnlyTransactions(regId, names)
txns, err := builder.FailedAuthorizationsPerDomainPerAccountCheckOnlyTransactions(regId, idents)
if err != nil {
return nil, makeTxnError(err, FailedAuthorizationsPerDomainPerAccount)
}
transactions = append(transactions, txns...)
if !isRenewal {
txns, err := builder.certificatesPerDomainCheckOnlyTransactions(regId, names)
txns, err := builder.certificatesPerDomainCheckOnlyTransactions(regId, idents)
if err != nil {
return nil, makeTxnError(err, CertificatesPerDomain)
}
transactions = append(transactions, txns...)
}
txn, err := builder.certificatesPerFQDNSetCheckOnlyTransaction(names)
txn, err := builder.certificatesPerFQDNSetCheckOnlyTransaction(idents)
if err != nil {
return nil, makeTxnError(err, CertificatesPerFQDNSet)
}
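The skip-on-disabled pattern used throughout `NewOrderLimitTransactions` (treat `errLimitDisabled` as "contribute no transaction", propagate anything else) can be sketched in isolation. All names below are illustrative stand-ins, not Boulder's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

var errLimitDisabled = errors.New("limit disabled")

type Transaction struct{ bucketKey string }

// getLimit stands in for the builder's limit lookup; keys prefixed
// "disabled:" return the sentinel error, mirroring the continue logic
// in the diff above.
func getLimit(key string) (int, error) {
	if len(key) > 9 && key[:9] == "disabled:" {
		return 0, errLimitDisabled
	}
	return 100, nil
}

// buildTransactions collects one transaction per enabled bucket key,
// silently skipping disabled limits and wrapping any other error.
func buildTransactions(keys []string) ([]Transaction, error) {
	var txns []Transaction
	for _, key := range keys {
		_, err := getLimit(key)
		if err != nil {
			if errors.Is(err, errLimitDisabled) {
				continue // disabled limits contribute no transaction
			}
			return nil, fmt.Errorf("constructing rate limit transaction for %q: %w", key, err)
		}
		txns = append(txns, Transaction{bucketKey: key})
	}
	return txns, nil
}

func main() {
	txns, err := buildTransactions([]string{"disabled:example.com", "example.com"})
	if err != nil {
		panic(err)
	}
	fmt.Println(len(txns), txns[0].bucketKey)
}
```

The sentinel-error approach keeps callers from having to know which limits are configured; a disabled limit simply drops out of the transaction set.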

View File

@ -8,6 +8,8 @@ import (
"time"
"github.com/letsencrypt/boulder/config"
"github.com/letsencrypt/boulder/core"
"github.com/letsencrypt/boulder/identifier"
"github.com/letsencrypt/boulder/test"
)
@ -73,7 +75,7 @@ func TestFailedAuthorizationsPerDomainPerAccountTransactions(t *testing.T) {
test.AssertNotError(t, err, "creating TransactionBuilder")
// A check-only transaction for the default per-account limit.
txns, err := tb.FailedAuthorizationsPerDomainPerAccountCheckOnlyTransactions(123456789, []string{"so.many.labels.here.example.com"})
txns, err := tb.FailedAuthorizationsPerDomainPerAccountCheckOnlyTransactions(123456789, identifier.NewDNSSlice([]string{"so.many.labels.here.example.com"}))
test.AssertNotError(t, err, "creating transactions")
test.AssertEquals(t, len(txns), 1)
test.AssertEquals(t, txns[0].bucketKey, "4:123456789:so.many.labels.here.example.com")
@ -81,14 +83,14 @@ func TestFailedAuthorizationsPerDomainPerAccountTransactions(t *testing.T) {
test.Assert(t, !txns[0].limit.isOverride, "should not be an override")
// A spend-only transaction for the default per-account limit.
txn, err := tb.FailedAuthorizationsPerDomainPerAccountSpendOnlyTransaction(123456789, "so.many.labels.here.example.com")
txn, err := tb.FailedAuthorizationsPerDomainPerAccountSpendOnlyTransaction(123456789, identifier.NewDNS("so.many.labels.here.example.com"))
test.AssertNotError(t, err, "creating transaction")
test.AssertEquals(t, txn.bucketKey, "4:123456789:so.many.labels.here.example.com")
test.Assert(t, txn.spendOnly(), "should be spend-only")
test.Assert(t, !txn.limit.isOverride, "should not be an override")
// A check-only transaction for the per-account limit override.
txns, err = tb.FailedAuthorizationsPerDomainPerAccountCheckOnlyTransactions(13371338, []string{"so.many.labels.here.example.com"})
txns, err = tb.FailedAuthorizationsPerDomainPerAccountCheckOnlyTransactions(13371338, identifier.NewDNSSlice([]string{"so.many.labels.here.example.com"}))
test.AssertNotError(t, err, "creating transactions")
test.AssertEquals(t, len(txns), 1)
test.AssertEquals(t, txns[0].bucketKey, "4:13371338:so.many.labels.here.example.com")
@ -96,7 +98,7 @@ func TestFailedAuthorizationsPerDomainPerAccountTransactions(t *testing.T) {
test.Assert(t, txns[0].limit.isOverride, "should be an override")
// A spend-only transaction for the per-account limit override.
txn, err = tb.FailedAuthorizationsPerDomainPerAccountSpendOnlyTransaction(13371338, "so.many.labels.here.example.com")
txn, err = tb.FailedAuthorizationsPerDomainPerAccountSpendOnlyTransaction(13371338, identifier.NewDNS("so.many.labels.here.example.com"))
test.AssertNotError(t, err, "creating transaction")
test.AssertEquals(t, txn.bucketKey, "4:13371338:so.many.labels.here.example.com")
test.Assert(t, txn.spendOnly(), "should be spend-only")
@ -110,7 +112,7 @@ func TestFailedAuthorizationsForPausingPerDomainPerAccountTransactions(t *testin
test.AssertNotError(t, err, "creating TransactionBuilder")
// A transaction for the per-account limit override.
txn, err := tb.FailedAuthorizationsForPausingPerDomainPerAccountTransaction(13371338, "so.many.labels.here.example.com")
txn, err := tb.FailedAuthorizationsForPausingPerDomainPerAccountTransaction(13371338, identifier.NewDNS("so.many.labels.here.example.com"))
test.AssertNotError(t, err, "creating transaction")
test.AssertEquals(t, txn.bucketKey, "8:13371338:so.many.labels.here.example.com")
test.Assert(t, txn.check && txn.spend, "should be check and spend")
@ -124,14 +126,14 @@ func TestCertificatesPerDomainTransactions(t *testing.T) {
test.AssertNotError(t, err, "creating TransactionBuilder")
// One check-only transaction for the global limit.
txns, err := tb.certificatesPerDomainCheckOnlyTransactions(123456789, []string{"so.many.labels.here.example.com"})
txns, err := tb.certificatesPerDomainCheckOnlyTransactions(123456789, identifier.NewDNSSlice([]string{"so.many.labels.here.example.com"}))
test.AssertNotError(t, err, "creating transactions")
test.AssertEquals(t, len(txns), 1)
test.AssertEquals(t, txns[0].bucketKey, "5:example.com")
test.Assert(t, txns[0].checkOnly(), "should be check-only")
// One spend-only transaction for the global limit.
txns, err = tb.CertificatesPerDomainSpendOnlyTransactions(123456789, []string{"so.many.labels.here.example.com"})
txns, err = tb.CertificatesPerDomainSpendOnlyTransactions(123456789, identifier.NewDNSSlice([]string{"so.many.labels.here.example.com"}))
test.AssertNotError(t, err, "creating transactions")
test.AssertEquals(t, len(txns), 1)
test.AssertEquals(t, txns[0].bucketKey, "5:example.com")
@ -147,7 +149,7 @@ func TestCertificatesPerDomainPerAccountTransactions(t *testing.T) {
// We only expect a single check-only transaction for the per-account limit
// override. We can safely ignore the global limit when an override is
// present.
txns, err := tb.certificatesPerDomainCheckOnlyTransactions(13371338, []string{"so.many.labels.here.example.com"})
txns, err := tb.certificatesPerDomainCheckOnlyTransactions(13371338, identifier.NewDNSSlice([]string{"so.many.labels.here.example.com"}))
test.AssertNotError(t, err, "creating transactions")
test.AssertEquals(t, len(txns), 1)
test.AssertEquals(t, txns[0].bucketKey, "6:13371338:example.com")
@ -155,7 +157,7 @@ func TestCertificatesPerDomainPerAccountTransactions(t *testing.T) {
test.Assert(t, txns[0].limit.isOverride, "should be an override")
// Same as above, but with multiple example.com domains.
txns, err = tb.certificatesPerDomainCheckOnlyTransactions(13371338, []string{"so.many.labels.here.example.com", "z.example.com"})
txns, err = tb.certificatesPerDomainCheckOnlyTransactions(13371338, identifier.NewDNSSlice([]string{"so.many.labels.here.example.com", "z.example.com"}))
test.AssertNotError(t, err, "creating transactions")
test.AssertEquals(t, len(txns), 1)
test.AssertEquals(t, txns[0].bucketKey, "6:13371338:example.com")
@ -163,7 +165,7 @@ func TestCertificatesPerDomainPerAccountTransactions(t *testing.T) {
test.Assert(t, txns[0].limit.isOverride, "should be an override")
// Same as above, but with different domains.
txns, err = tb.certificatesPerDomainCheckOnlyTransactions(13371338, []string{"so.many.labels.here.example.com", "z.example.net"})
txns, err = tb.certificatesPerDomainCheckOnlyTransactions(13371338, identifier.NewDNSSlice([]string{"so.many.labels.here.example.com", "z.example.net"}))
test.AssertNotError(t, err, "creating transactions")
txns = sortTransactions(txns)
test.AssertEquals(t, len(txns), 2)
@ -176,7 +178,7 @@ func TestCertificatesPerDomainPerAccountTransactions(t *testing.T) {
// Two spend-only transactions, one for the global limit and one for the
// per-account limit override.
txns, err = tb.CertificatesPerDomainSpendOnlyTransactions(13371338, []string{"so.many.labels.here.example.com"})
txns, err = tb.CertificatesPerDomainSpendOnlyTransactions(13371338, identifier.NewDNSSlice([]string{"so.many.labels.here.example.com"}))
test.AssertNotError(t, err, "creating TransactionBuilder")
test.AssertEquals(t, len(txns), 2)
txns = sortTransactions(txns)
@ -196,9 +198,9 @@ func TestCertificatesPerFQDNSetTransactions(t *testing.T) {
test.AssertNotError(t, err, "creating TransactionBuilder")
// A single check-only transaction for the global limit.
txn, err := tb.certificatesPerFQDNSetCheckOnlyTransaction([]string{"example.com", "example.net", "example.org"})
txn, err := tb.certificatesPerFQDNSetCheckOnlyTransaction(identifier.NewDNSSlice([]string{"example.com", "example.net", "example.org"}))
test.AssertNotError(t, err, "creating transaction")
namesHash := fmt.Sprintf("%x", hashNames([]string{"example.com", "example.net", "example.org"}))
namesHash := fmt.Sprintf("%x", core.HashIdentifiers(identifier.NewDNSSlice([]string{"example.com", "example.net", "example.org"})))
test.AssertEquals(t, txn.bucketKey, "7:"+namesHash)
test.Assert(t, txn.checkOnly(), "should be check-only")
test.Assert(t, !txn.limit.isOverride, "should not be an override")
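The `"7:"+namesHash` bucket key above relies on the identifier hash being canonical: insensitive to order, case, and duplication. A minimal sketch of how such a key could be derived (a hypothetical helper, not Boulder's actual `core.HashIdentifiers`):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"slices"
	"strings"
)

// fqdnSetBucketKey lowercases, sorts, and de-duplicates the values
// before hashing, so the same set of names always yields the same
// bucket key, then prefixes the CertificatesPerFQDNSet enum value
// ("7", as in the test above).
func fqdnSetBucketKey(names []string) string {
	norm := make([]string, 0, len(names))
	for _, n := range names {
		norm = append(norm, strings.ToLower(n))
	}
	slices.Sort(norm)
	norm = slices.Compact(norm)
	h := sha256.Sum256([]byte(strings.Join(norm, ",")))
	return fmt.Sprintf("7:%x", h)
}

func main() {
	fmt.Println(fqdnSetBucketKey([]string{"example.com", "example.net", "example.org"}))
}
```

Because the input is normalized first, `{"a", "b"}` and `{"B", "A", "a"}` map to the same bucket, which is exactly the property the rate limit needs.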

View File

@ -1,6 +1,8 @@
package ratelimits
import (
"fmt"
"net/netip"
"strings"
"github.com/weppos/publicsuffix-go/publicsuffix"
@ -14,30 +16,57 @@ func joinWithColon(args ...string) string {
return strings.Join(args, ":")
}
// FQDNsToETLDsPlusOne transforms a list of FQDNs into a list of eTLD+1's for
// the CertificatesPerDomain limit. It also de-duplicates the output domains.
// Exact public suffix matches are included.
func FQDNsToETLDsPlusOne(names []string) []string {
var domains []string
for _, name := range names {
domain, err := publicsuffix.Domain(name)
if err != nil {
// The only possible errors are:
// (1) publicsuffix.Domain is giving garbage values
// (2) the public suffix is the domain itself
// We assume 2 and include the original name in the result.
domains = append(domains, name)
} else {
domains = append(domains, domain)
// coveringIdentifiers transforms a slice of ACMEIdentifiers into strings of
// their "covering" identifiers, for the CertificatesPerDomain limit. It also
// de-duplicates the output. For DNS identifiers, this is eTLD+1's; exact public
// suffix matches are included. For IP address identifiers, this is the address
// (/32) for IPv4, or the /64 prefix for IPv6, in CIDR notation.
func coveringIdentifiers(idents identifier.ACMEIdentifiers) ([]string, error) {
var covers []string
for _, ident := range idents {
switch ident.Type {
case identifier.TypeDNS:
domain, err := publicsuffix.Domain(ident.Value)
if err != nil {
if err.Error() == fmt.Sprintf("%s is a suffix", ident.Value) {
// If the public suffix is the domain itself, that's fine.
// Include the original name in the result.
covers = append(covers, ident.Value)
continue
} else {
return nil, err
}
}
covers = append(covers, domain)
case identifier.TypeIP:
ip, err := netip.ParseAddr(ident.Value)
if err != nil {
return nil, err
}
prefix, err := coveringPrefix(ip)
if err != nil {
return nil, err
}
covers = append(covers, prefix.String())
}
}
return core.UniqueLowerNames(domains)
return core.UniqueLowerNames(covers), nil
}
// hashNames returns a hash of the names requested. This is intended for use
// when interacting with the orderFqdnSets table and rate limiting.
//
// Deprecated: TODO(#7311): Use HashIdentifiers instead.
func hashNames(names []string) []byte {
return core.HashIdentifiers(identifier.NewDNSSlice(names))
// coveringPrefix transforms a netip.Addr into its "covering" prefix, for the
// CertificatesPerDomain limit. For IPv4, this is the IP address (/32). For
// IPv6, this is the /64 that contains the address.
func coveringPrefix(addr netip.Addr) (netip.Prefix, error) {
var bits int
if addr.Is4() {
bits = 32
} else {
bits = 64
}
prefix, err := addr.Prefix(bits)
if err != nil {
// This should be impossible because bits is hardcoded.
return netip.Prefix{}, err
}
return prefix, nil
}

View File

@ -1,55 +1,93 @@
package ratelimits
import (
"bytes"
"net/netip"
"slices"
"testing"
"github.com/letsencrypt/boulder/test"
"github.com/letsencrypt/boulder/identifier"
)
func TestFQDNsToETLDsPlusOne(t *testing.T) {
domains := FQDNsToETLDsPlusOne([]string{})
test.AssertEquals(t, len(domains), 0)
func TestCoveringIdentifiers(t *testing.T) {
cases := []struct {
name string
idents identifier.ACMEIdentifiers
wantErr string
want []string
}{
{
name: "empty string",
idents: identifier.ACMEIdentifiers{
identifier.NewDNS(""),
},
wantErr: "name is blank",
want: nil,
},
{
name: "two subdomains of same domain",
idents: identifier.NewDNSSlice([]string{"www.example.com", "example.com"}),
want: []string{"example.com"},
},
{
name: "three subdomains across two domains",
idents: identifier.NewDNSSlice([]string{"www.example.com", "example.com", "www.example.co.uk"}),
want: []string{"example.co.uk", "example.com"},
},
{
name: "three subdomains across two domains, plus a bare TLD",
idents: identifier.NewDNSSlice([]string{"www.example.com", "example.com", "www.example.co.uk", "co.uk"}),
want: []string{"co.uk", "example.co.uk", "example.com"},
},
{
name: "two subdomains of same domain, one of them long",
idents: identifier.NewDNSSlice([]string{"foo.bar.baz.www.example.com", "baz.example.com"}),
want: []string{"example.com"},
},
{
name: "a domain and two of its subdomains",
idents: identifier.NewDNSSlice([]string{"github.io", "foo.github.io", "bar.github.io"}),
want: []string{"bar.github.io", "foo.github.io", "github.io"},
},
{
name: "a domain and an IPv4 address",
idents: identifier.ACMEIdentifiers{
identifier.NewDNS("example.com"),
identifier.NewIP(netip.MustParseAddr("127.0.0.1")),
},
want: []string{"127.0.0.1/32", "example.com"},
},
{
name: "an IPv6 address",
idents: identifier.ACMEIdentifiers{
identifier.NewIP(netip.MustParseAddr("3fff:aaa:aaaa:aaaa:abad:0ff1:cec0:ffee")),
},
want: []string{"3fff:aaa:aaaa:aaaa::/64"},
},
{
name: "four IP addresses in three prefixes",
idents: identifier.ACMEIdentifiers{
identifier.NewIP(netip.MustParseAddr("127.0.0.1")),
identifier.NewIP(netip.MustParseAddr("127.0.0.254")),
identifier.NewIP(netip.MustParseAddr("3fff:aaa:aaaa:aaaa:abad:0ff1:cec0:ffee")),
identifier.NewIP(netip.MustParseAddr("3fff:aaa:aaaa:ffff:abad:0ff1:cec0:ffee")),
},
want: []string{"127.0.0.1/32", "127.0.0.254/32", "3fff:aaa:aaaa:aaaa::/64", "3fff:aaa:aaaa:ffff::/64"},
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
domains = FQDNsToETLDsPlusOne([]string{"www.example.com", "example.com"})
test.AssertDeepEquals(t, domains, []string{"example.com"})
domains = FQDNsToETLDsPlusOne([]string{"www.example.com", "example.com", "www.example.co.uk"})
test.AssertDeepEquals(t, domains, []string{"example.co.uk", "example.com"})
domains = FQDNsToETLDsPlusOne([]string{"www.example.com", "example.com", "www.example.co.uk", "co.uk"})
test.AssertDeepEquals(t, domains, []string{"co.uk", "example.co.uk", "example.com"})
domains = FQDNsToETLDsPlusOne([]string{"foo.bar.baz.www.example.com", "baz.example.com"})
test.AssertDeepEquals(t, domains, []string{"example.com"})
domains = FQDNsToETLDsPlusOne([]string{"github.io", "foo.github.io", "bar.github.io"})
test.AssertDeepEquals(t, domains, []string{"bar.github.io", "foo.github.io", "github.io"})
}
func TestHashNames(t *testing.T) {
// Test that it is deterministic
h1 := hashNames([]string{"a"})
h2 := hashNames([]string{"a"})
test.AssertByteEquals(t, h1, h2)
// Test that it differentiates
h1 = hashNames([]string{"a"})
h2 = hashNames([]string{"b"})
test.Assert(t, !bytes.Equal(h1, h2), "Should have been different")
// Test that it is not subject to ordering
h1 = hashNames([]string{"a", "b"})
h2 = hashNames([]string{"b", "a"})
test.AssertByteEquals(t, h1, h2)
// Test that it is not subject to case
h1 = hashNames([]string{"a", "b"})
h2 = hashNames([]string{"A", "B"})
test.AssertByteEquals(t, h1, h2)
// Test that it is not subject to duplication
h1 = hashNames([]string{"a", "a"})
h2 = hashNames([]string{"a"})
test.AssertByteEquals(t, h1, h2)
got, err := coveringIdentifiers(tc.idents)
if err != nil && err.Error() != tc.wantErr {
t.Errorf("Got unwanted error %#v", err.Error())
}
if err == nil && tc.wantErr != "" {
t.Errorf("Got no error, wanted %#v", tc.wantErr)
}
if !slices.Equal(got, tc.want) {
t.Errorf("Got %#v, but want %#v", got, tc.want)
}
})
}
}

View File

@ -1,9 +0,0 @@
-- +migrate Up
-- SQL in section 'Up' is executed when this migration is applied
ALTER TABLE `registrations` ALTER COLUMN `contact` SET DEFAULT '[]';
-- +migrate Down
-- SQL section 'Down' is executed when this migration is rolled back
ALTER TABLE `registrations` ALTER COLUMN `LockCol` DROP DEFAULT;

View File

@ -0,0 +1 @@
../../db/boulder_sa/20250519000000_NullRegistrationsContact.sql

View File

@ -0,0 +1,9 @@
-- +migrate Up
-- SQL in section 'Up' is executed when this migration is applied
ALTER TABLE `registrations` DROP COLUMN `contact`;
-- +migrate Down
-- SQL section 'Down' is executed when this migration is rolled back
ALTER TABLE `registrations` ADD COLUMN `contact` varchar(191) CHARACTER SET utf8mb4 DEFAULT '[]';

View File

@ -0,0 +1,9 @@
-- +migrate Up
-- SQL in section 'Up' is executed when this migration is applied
ALTER TABLE `registrations` ALTER COLUMN `contact` SET DEFAULT '[]';
-- +migrate Down
-- SQL section 'Down' is executed when this migration is rolled back
ALTER TABLE `registrations` ALTER COLUMN `contact` DROP DEFAULT;

View File

@ -25,7 +25,6 @@ import (
corepb "github.com/letsencrypt/boulder/core/proto"
"github.com/letsencrypt/boulder/db"
berrors "github.com/letsencrypt/boulder/errors"
"github.com/letsencrypt/boulder/features"
"github.com/letsencrypt/boulder/grpc"
"github.com/letsencrypt/boulder/identifier"
"github.com/letsencrypt/boulder/probs"
@ -62,7 +61,7 @@ func badJSONError(msg string, jsonData []byte, err error) error {
}
}
const regFields = "id, jwk, jwk_sha256, contact, agreement, createdAt, LockCol, status"
const regFields = "id, jwk, jwk_sha256, agreement, createdAt, LockCol, status"
// ClearEmail removes the provided email address from one specified registration. If
// there are multiple email addresses present, it does not modify other ones. If the email
@ -274,7 +273,6 @@ type regModel struct {
ID int64 `db:"id"`
Key []byte `db:"jwk"`
KeySHA256 string `db:"jwk_sha256"`
Contact string `db:"contact"`
Agreement string `db:"agreement"`
CreatedAt time.Time `db:"createdAt"`
LockCol int64
@ -295,18 +293,6 @@ func registrationPbToModel(reg *corepb.Registration) (*regModel, error) {
return nil, err
}
// We don't want to write literal JSON "null" strings into the database if the
// list of contact addresses is empty. Replace any possibly-`nil` slice with
// an empty JSON array. We don't need to check reg.ContactPresent, because
// we're going to write the whole object to the database anyway.
jsonContact := []byte("[]")
if len(reg.Contact) != 0 && !features.Get().IgnoreAccountContacts {
jsonContact, err = json.Marshal(reg.Contact)
if err != nil {
return nil, err
}
}
var createdAt time.Time
if !core.IsAnyNilOrZero(reg.CreatedAt) {
createdAt = reg.CreatedAt.AsTime()
@ -316,7 +302,6 @@ func registrationPbToModel(reg *corepb.Registration) (*regModel, error) {
ID: reg.Id,
Key: reg.Key,
KeySHA256: sha,
Contact: string(jsonContact),
Agreement: reg.Agreement,
CreatedAt: createdAt,
Status: reg.Status,
@ -328,18 +313,9 @@ func registrationModelToPb(reg *regModel) (*corepb.Registration, error) {
return nil, errors.New("incomplete Registration retrieved from DB")
}
contact := []string{}
if len(reg.Contact) > 0 && !features.Get().IgnoreAccountContacts {
err := json.Unmarshal([]byte(reg.Contact), &contact)
if err != nil {
return nil, err
}
}
return &corepb.Registration{
Id: reg.ID,
Key: reg.Key,
Contact: contact,
Agreement: reg.Agreement,
CreatedAt: timestamppb.New(reg.CreatedAt.UTC()),
Status: reg.Status,
@ -582,12 +558,12 @@ func rehydrateHostPort(vr *core.ValidationRecord) error {
return fmt.Errorf("parsing validation record URL %q: %w", vr.URL, err)
}
if vr.DnsName == "" {
if vr.Hostname == "" {
hostname := parsedUrl.Hostname()
if hostname == "" {
return fmt.Errorf("hostname missing in URL %q", vr.URL)
}
vr.DnsName = hostname
vr.Hostname = hostname
}
if vr.Port == "" {

View File

@ -53,8 +53,6 @@ func TestRegistrationModelToPb(t *testing.T) {
test.AssertNotError(t, err, "Should pass")
}
func TestRegistrationPbToModel(t *testing.T) {}
func TestAuthzModel(t *testing.T) {
// newTestAuthzPB returns a new *corepb.Authorization for `example.com` that
// is valid, and contains a single valid HTTP-01 challenge. These are the

View File

@ -21,7 +21,6 @@ import (
corepb "github.com/letsencrypt/boulder/core/proto"
"github.com/letsencrypt/boulder/db"
berrors "github.com/letsencrypt/boulder/errors"
"github.com/letsencrypt/boulder/features"
bgrpc "github.com/letsencrypt/boulder/grpc"
"github.com/letsencrypt/boulder/identifier"
blog "github.com/letsencrypt/boulder/log"
@ -126,62 +125,12 @@ func (ssa *SQLStorageAuthority) NewRegistration(ctx context.Context, req *corepb
return registrationModelToPb(reg)
}
// UpdateRegistrationContact stores an updated contact in a Registration.
// The updated contacts field may be empty.
// UpdateRegistrationContact makes no changes, and simply returns the account
// as it exists in the database.
//
// Deprecated: See https://github.com/letsencrypt/boulder/issues/8199 for removal.
func (ssa *SQLStorageAuthority) UpdateRegistrationContact(ctx context.Context, req *sapb.UpdateRegistrationContactRequest) (*corepb.Registration, error) {
if core.IsAnyNilOrZero(req.RegistrationID) {
return nil, errIncompleteRequest
}
if features.Get().IgnoreAccountContacts {
return ssa.GetRegistration(ctx, &sapb.RegistrationID{Id: req.RegistrationID})
}
// We don't want to write literal JSON "null" strings into the database if the
// list of contact addresses is empty. Replace any possibly-`nil` slice with
// an empty JSON array.
jsonContact := []byte("[]")
var err error
if len(req.Contacts) != 0 {
jsonContact, err = json.Marshal(req.Contacts)
if err != nil {
return nil, fmt.Errorf("serializing contacts: %w", err)
}
}
result, overallError := db.WithTransaction(ctx, ssa.dbMap, func(tx db.Executor) (interface{}, error) {
result, err := tx.ExecContext(ctx,
"UPDATE registrations SET contact = ? WHERE id = ? LIMIT 1",
jsonContact,
req.RegistrationID,
)
if err != nil {
return nil, err
}
rowsAffected, err := result.RowsAffected()
if err != nil || rowsAffected != 1 {
return nil, berrors.InternalServerError("no registration ID '%d' updated with new contact field", req.RegistrationID)
}
updatedRegistrationModel, err := selectRegistration(ctx, tx, "id", req.RegistrationID)
if err != nil {
if db.IsNoRows(err) {
return nil, berrors.NotFoundError("registration with ID '%d' not found", req.RegistrationID)
}
return nil, err
}
updatedRegistration, err := registrationModelToPb(updatedRegistrationModel)
if err != nil {
return nil, err
}
return updatedRegistration, nil
})
if overallError != nil {
return nil, overallError
}
return result.(*corepb.Registration), nil
return ssa.GetRegistration(ctx, &sapb.RegistrationID{Id: req.RegistrationID})
}
// UpdateRegistrationKey stores an updated key in a Registration.
@ -466,7 +415,7 @@ func (ssa *SQLStorageAuthority) DeactivateRegistration(ctx context.Context, req
result, overallError := db.WithTransaction(ctx, ssa.dbMap, func(tx db.Executor) (any, error) {
result, err := tx.ExecContext(ctx,
"UPDATE registrations SET status = ?, contact = '[]' WHERE status = ? AND id = ? LIMIT 1",
"UPDATE registrations SET status = ? WHERE status = ? AND id = ? LIMIT 1",
string(core.StatusDeactivated),
string(core.StatusValid),
req.Id,
@ -806,7 +755,7 @@ func (ssa *SQLStorageAuthority) FinalizeAuthorization2(ctx context.Context, req
if req.Attempted == string(core.ChallengeTypeHTTP01) {
// Remove these fields because they can be rehydrated later
// on from the URL field.
record.DnsName = ""
record.Hostname = ""
record.Port = ""
}
validationRecords = append(validationRecords, record)
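The strip-then-rehydrate round trip shown here (drop `Hostname`/`Port` for HTTP-01 records at finalize time, recover them from the stored URL on read, as `rehydrateHostPort` does) can be sketched with `net/url`. Names are illustrative, not Boulder's exact signatures:

```go
package main

import (
	"fmt"
	"net/url"
)

// rehydrateHostname recovers a hostname that was dropped before
// storage by re-parsing the validation record's URL.
func rehydrateHostname(rawURL, hostname string) (string, error) {
	if hostname != "" {
		return hostname, nil // already populated; nothing to do
	}
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", fmt.Errorf("parsing validation record URL %q: %w", rawURL, err)
	}
	h := u.Hostname()
	if h == "" {
		return "", fmt.Errorf("hostname missing in URL %q", rawURL)
	}
	return h, nil
}

func main() {
	h, err := rehydrateHostname("http://example.com:80/.well-known/acme-challenge/tok", "")
	if err != nil {
		panic(err)
	}
	fmt.Println(h)
}
```

Storing only the URL and rederiving the redundant fields keeps the persisted record smaller without losing information.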

View File

@ -188,23 +188,18 @@ func TestAddRegistration(t *testing.T) {
sa, clk, cleanUp := initSA(t)
defer cleanUp()
jwk := goodTestJWK()
jwkJSON, _ := jwk.MarshalJSON()
contacts := []string{"mailto:foo@example.com"}
jwkJSON, _ := goodTestJWK().MarshalJSON()
reg, err := sa.NewRegistration(ctx, &corepb.Registration{
Key: jwkJSON,
Contact: contacts,
Contact: []string{"mailto:foo@example.com"},
})
if err != nil {
t.Fatalf("Couldn't create new registration: %s", err)
}
test.Assert(t, reg.Id != 0, "ID shouldn't be 0")
test.AssertDeepEquals(t, reg.Contact, contacts)
_, err = sa.GetRegistration(ctx, &sapb.RegistrationID{Id: 0})
test.AssertError(t, err, "Registration object for ID 0 was returned")
test.AssertEquals(t, len(reg.Contact), 0)
// Confirm that the registration can be retrieved by ID.
dbReg, err := sa.GetRegistration(ctx, &sapb.RegistrationID{Id: reg.Id})
test.AssertNotError(t, err, fmt.Sprintf("Couldn't get registration with ID %v", reg.Id))
@ -212,28 +207,22 @@ func TestAddRegistration(t *testing.T) {
test.AssertEquals(t, dbReg.Id, reg.Id)
test.AssertByteEquals(t, dbReg.Key, jwkJSON)
test.AssertDeepEquals(t, dbReg.CreatedAt.AsTime(), createdAt)
test.AssertEquals(t, len(dbReg.Contact), 0)
regUpdate := &sapb.UpdateRegistrationContactRequest{
RegistrationID: reg.Id,
Contacts: []string{"test.com"},
}
newReg, err := sa.UpdateRegistrationContact(ctx, regUpdate)
test.AssertNotError(t, err, fmt.Sprintf("Couldn't update registration with ID %v", reg.Id))
test.AssertEquals(t, dbReg.Id, newReg.Id)
test.AssertEquals(t, dbReg.Agreement, newReg.Agreement)
_, err = sa.GetRegistration(ctx, &sapb.RegistrationID{Id: 0})
test.AssertError(t, err, "Registration object for ID 0 was returned")
// Reconfirm that the updated registration was persisted to the database.
newReg, err = sa.GetRegistrationByKey(ctx, &sapb.JSONWebKey{Jwk: jwkJSON})
// Confirm that the registration can be retrieved by key.
dbReg, err = sa.GetRegistrationByKey(ctx, &sapb.JSONWebKey{Jwk: jwkJSON})
test.AssertNotError(t, err, "Couldn't get registration by key")
test.AssertEquals(t, dbReg.Id, newReg.Id)
test.AssertEquals(t, dbReg.Agreement, newReg.Agreement)
test.AssertEquals(t, dbReg.Id, reg.Id)
test.AssertEquals(t, dbReg.Agreement, reg.Agreement)
anotherKey := `{
"kty":"RSA",
"n": "vd7rZIoTLEe-z1_8G1FcXSw9CQFEJgV4g9V277sER7yx5Qjz_Pkf2YVth6wwwFJEmzc0hoKY-MMYFNwBE4hQHw",
"e":"AQAB"
}`
_, err = sa.GetRegistrationByKey(ctx, &sapb.JSONWebKey{Jwk: []byte(anotherKey)})
test.AssertError(t, err, "Registration object for invalid key was returned")
}
@ -4504,6 +4493,7 @@ func newAcctKey(t *testing.T) []byte {
}
func TestUpdateRegistrationContact(t *testing.T) {
// TODO(#8199): Delete this.
sa, _, cleanUp := initSA(t)
defer cleanUp()
@ -4560,13 +4550,12 @@ func TestUpdateRegistrationContact(t *testing.T) {
})
test.AssertNotError(t, err, "unexpected error for UpdateRegistrationContact()")
test.AssertEquals(t, updatedReg.Id, reg.Id)
test.AssertDeepEquals(t, updatedReg.Contact, tt.newContacts)
test.AssertEquals(t, len(updatedReg.Contact), 0)
refetchedReg, err := sa.GetRegistration(ctx, &sapb.RegistrationID{
Id: reg.Id,
})
refetchedReg, err := sa.GetRegistration(ctx, &sapb.RegistrationID{Id: reg.Id})
test.AssertNotError(t, err, "retrieving registration")
test.AssertDeepEquals(t, refetchedReg.Contact, tt.newContacts)
test.AssertEquals(t, refetchedReg.Id, reg.Id)
test.AssertEquals(t, len(refetchedReg.Contact), 0)
})
}
}

View File

@ -275,7 +275,7 @@ func (ssa *SQLStorageAuthorityRO) GetRevocationStatus(ctx context.Context, req *
}
// FQDNSetTimestampsForWindow returns the issuance timestamps for each
// certificate, issued for a set of domains, during a given window of time,
// certificate, issued for a set of identifiers, during a given window of time,
// starting from the most recent issuance.
//
// If req.Limit is nonzero, it returns only the most recent `Limit` results
@ -529,7 +529,7 @@ func (ssa *SQLStorageAuthorityRO) GetAuthorization2(ctx context.Context, req *sa
return modelToAuthzPB(*(obj.(*authzModel)))
}
// authzModelMapToPB converts a mapping of domain name to authzModels into a
// authzModelMapToPB converts a mapping of identifiers to authzModels into a
// protobuf authorizations map
func authzModelMapToPB(m map[identifier.ACMEIdentifier]authzModel) (*sapb.Authorizations, error) {
resp := &sapb.Authorizations{}

View File

@ -12,7 +12,7 @@ DOCKER_REPO="letsencrypt/boulder-tools"
# .github/workflows/release.yml,
# .github/workflows/try-release.yml if appropriate,
# and .github/workflows/boulder-ci.yml with the new container tag.
GO_CI_VERSIONS=( "1.24.1" )
GO_CI_VERSIONS=( "1.24.4" )
echo "Please login to allow push to DockerHub"
docker login

View File

@ -52,7 +52,6 @@ role of internal authentication between Let's Encrypt components:
- The IP-address certificate used by challtestsrv (which acts as the integration
test environment's recursive resolver) for DoH handshakes.
- The certificate presented by mail-test-srv's SMTP endpoint.
- The certificate presented by the test redis cluster.
- The certificate presented by the WFE's API TLS handler (which is usually
behind some other load-balancer like nginx).

View File

@ -17,11 +17,11 @@ ipki() (
mkdir ipki
cd ipki
# Create a generic cert which can be used by our test-only services (like
# mail-test-srv) that aren't sophisticated enough to present a different name.
# This first invocation also creates the issuer key, so the loops below can
# run in the background without racing to create it.
minica -domains localhost
# Create a generic cert which can be used by our test-only services that
# aren't sophisticated enough to present a different name. This first
# invocation also creates the issuer key, so the loops below can run in the
# background without racing to create it.
minica -domains localhost --ip-addresses 127.0.0.1
# Used by challtestsrv to negotiate DoH handshakes. Even though we think of
# challtestsrv as being external to our infrastructure (because it hosts the
@ -40,7 +40,7 @@ ipki() (
minica -domains redis -ip-addresses 10.77.77.2,10.77.77.3,10.77.77.4,10.77.77.5
# Used by Boulder gRPC services as both server and client mTLS certificates.
for SERVICE in admin expiration-mailer ocsp-responder consul \
for SERVICE in admin ocsp-responder consul \
wfe akamai-purger bad-key-revoker crl-updater crl-storer \
health-checker rocsp-tool sfe email-exporter; do
minica -domains "${SERVICE}.boulder" &

View File

@ -19,16 +19,6 @@
"noWaitForReady": true,
"timeout": "15s"
},
"mailer": {
"server": "localhost",
"port": "9380",
"username": "cert-manager@example.com",
"from": "bad key revoker <bad-key-revoker@test.org>",
"passwordFile": "test/secrets/smtp_password",
"SMTPTrustedRootFile": "test/certs/ipki/minica.pem",
"emailSubject": "Certificates you've issued have been revoked due to key compromise",
"emailTemplate": "test/example-bad-key-revoker-template"
},
"maximumRevocations": 15,
"findCertificatesBatchSize": 10,
"interval": "50ms",

Some files were not shown because too many files have changed in this diff Show More