Compare commits


92 Commits

Author SHA1 Message Date
Samantha Frank 8aafb31347
ratelimits: Small cleanup in transaction.go (#8275) 2025-06-26 17:43:02 -04:00
Aaron Gable 30eac83730
RFC 9773: Update ARI URL (#8274)
https://www.rfc-editor.org/rfc/rfc9773.html is no longer a draft; it
deserves a better-looking path!
2025-06-26 08:50:44 -07:00
Aaron Gable 4e74a25582
Restore TestAccountEmailError (#8273)
This integration test was removed in the early versions of
https://github.com/letsencrypt/boulder/pull/8245, because that PR had
removed all validation of contact addresses. However, later iterations
of that PR restored (most) contact validation, so this PR restores (most
of) the TestAccountEmailError integration test.
2025-06-25 16:35:52 -07:00
James Renken 21d022840b
Really fix GHA for IANA registries (#8271) 2025-06-25 15:58:44 -07:00
Aaron Gable e110ec9a03
Confine contact addresses to the WFE (#8245)
Change the WFE to stop populating the Contact field of the
NewRegistration requests it sends to the RA. Similarly change the WFE to
ignore the Contact field of any update-account requests it receives,
thereby removing all calls to the RA's UpdateRegistrationContact method.

Hoist the RA's contact validation logic into the WFE, so that we can
still return errors to clients which are presenting grossly malformed
contact fields, and have a first layer of protection against trying to
send malformed addresses to email-exporter.

A follow-up change (after a deploy cycle) will remove the deprecated RA
and SA methods.

Part of https://github.com/letsencrypt/boulder/issues/8199
2025-06-25 15:51:44 -07:00
James Renken ea23894910
Fix GHA for IANA registries (#8270)
Add `org: read` to the IANA GHA token's scope, so it can ask
boulder-developers for review.

Add a line break formatting change from IANA.
2025-06-25 13:30:47 -07:00
James Renken 9308392adf
iana: Embed & parse reserved IP registries from primary source (#8249)
Move `policy.IsReservedIP` to `iana.IsReservedAddr`.

Move `policy.IsReservedPrefix` to `iana.IsReservedPrefix`.

Embed & parse IANA's special-purpose address registries for IPv4 and
IPv6 in their original CSV format.

Fixes #8080
2025-06-25 12:05:25 -07:00
dependabot[bot] 901f2dba7c
build(deps): bump the aws group with 4 updates (#8263)
Bumps the aws group with 4 updates:
[github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2),
[github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2),
[github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2)
and [github.com/aws/smithy-go](https://github.com/aws/smithy-go).

Updates `github.com/aws/aws-sdk-go-v2` from 1.36.4 to 1.36.5
Updates `github.com/aws/aws-sdk-go-v2/config` from 1.29.16 to 1.29.17
Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.80.2 to 1.80.3
Updates `github.com/aws/smithy-go` from 1.22.2 to 1.22.4

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-25 14:08:41 -04:00
James Renken a29f2f37d6
va: Check for reserved IP addresses at dialer creation (#8257)
Fixes #8041
2025-06-25 10:09:47 -07:00
Aaron Gable c576a200d0
Remove id-kp-clientAuth from intermediate ceremony (#8265)
Fixes https://github.com/letsencrypt/boulder/issues/8264
2025-06-24 16:19:31 -07:00
Matthew McPherrin 5ddd5acf99
Print key hash as hex in admin tool. (#8266)
The ProtoText printing of this structure prints the binary string as
escaped UTF-8 text, which is essentially gibberish for my processes.

---------

Co-authored-by: Aaron Gable <aaron@letsencrypt.org>
2025-06-23 17:36:06 -07:00
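The fix described above amounts to hex-encoding the raw hash bytes before printing, rather than letting the proto-text printer escape them. A minimal sketch of that idea — the function names here are hypothetical, not Boulder's actual admin-tool code:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
)

// hexKeyHash renders a binary key hash as lowercase hex, instead of
// letting a proto-text printer escape the raw bytes into unreadable text.
func hexKeyHash(spkiHash []byte) string {
	return hex.EncodeToString(spkiHash)
}

// demoKeyHash hashes a sample key and returns its hex form.
func demoKeyHash() string {
	h := sha256.Sum256([]byte("example SPKI"))
	return hexKeyHash(h[:])
}
```

A SHA-256 hash printed this way is always 64 hex characters, which is easy to grep and paste into other tooling.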
Jacob Hoffman-Andrews cd02caea99
Add verify-release-ancestry.sh (#8268)
And run it from the release workflow.
2025-06-23 17:22:47 -07:00
Samantha Frank ddc4c8683b
email-exporter: Don't waste limited attempts on cached entries (#8262)
Currently, we check the cache only immediately before attempting to send
an email address. However, we only reach that point if the rate limiter
(used to respect the daily API quota) permits it. As a result, around
40% of sends are wasted on email addresses that are ultimately skipped
due to cache hits.

Replace the pre-send cache `Seen` check with an atomic `StoreIfAbsent`
executed before the `limiter.Wait()` so that limiter tokens are consumed
only for email addresses that actually need sending. Skip the
`limiter.Wait()` on cache hits, remove cache entries only when a send
fails, and increment metrics only on successful sends.
2025-06-23 14:55:53 -07:00
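The reordering described above — store-if-absent first, limiter second, evict on failure — can be sketched as follows. This is a hedged illustration with hypothetical names and a plain map-backed cache, not Boulder's actual email-exporter code:

```go
package main

import "sync"

// seenCache is a minimal concurrency-safe set with an atomic
// store-if-absent operation (a sketch; the real cache is an LRU).
type seenCache struct {
	mu   sync.Mutex
	seen map[string]struct{}
}

func newSeenCache() *seenCache {
	return &seenCache{seen: make(map[string]struct{})}
}

// StoreIfAbsent records key and reports whether it was newly stored.
func (c *seenCache) StoreIfAbsent(key string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if _, ok := c.seen[key]; ok {
		return false
	}
	c.seen[key] = struct{}{}
	return true
}

// Remove deletes key so a failed send can be retried later.
func (c *seenCache) Remove(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.seen, key)
}

// sendAddr consumes a limiter token only for addresses that actually
// need sending: cache hits skip the limiter entirely, and failed sends
// evict their cache entry.
func sendAddr(c *seenCache, wait func(), send func(string) error, addr string) error {
	if !c.StoreIfAbsent(addr) {
		return nil // cache hit: skip limiter.Wait()
	}
	wait() // stands in for limiter.Wait()
	if err := send(addr); err != nil {
		c.Remove(addr)
		return err
	}
	return nil
}
```

The key property is that a duplicate address returns before `wait()` is ever called, so no daily-quota token is spent on it.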
Jacob Hoffman-Andrews f087d280be
Add a GitHub Action that only runs on main or hotfix (#8267)
It can be used by tag protection rules to ensure that tags may only be
pushed if their corresponding commit was first pushed to main or a
hotfix branch.
2025-06-23 12:16:01 -07:00
Samantha Frank 1bfc3186c8
grpc: Enable client-side health_v1 health checking (#8254)
- Configure all gRPC clients to check the overall serving status of each
endpoint via the `grpc_health_v1` service.
- Configure all gRPC servers to expose the `grpc_health_v1` service to
any client permitted to access one of the server’s services.
- Modify long-running, deep health checks to set and transition the
overall (empty string) health status of the gRPC server in addition to
the specific service they were configured for.

Fixes #8227
2025-06-18 10:37:20 -04:00
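In the `grpc_health_v1` protocol, the empty-string service name denotes the server's overall serving status, while named entries cover individual services. The bookkeeping that the deep health checks transition can be sketched as a toy, dependency-free status map — this is not the real gRPC API, just an illustration of the empty-string convention:

```go
package main

// servingStatus mirrors grpc_health_v1's HealthCheckResponse statuses.
type servingStatus int

const (
	statusUnknown servingStatus = iota
	statusServing
	statusNotServing
)

// healthServer tracks per-service status; the empty-string key is the
// overall status that long-running deep health checks also transition.
type healthServer struct {
	statuses map[string]servingStatus
}

func newHealthServer() *healthServer {
	return &healthServer{statuses: make(map[string]servingStatus)}
}

// SetServingStatus records a status for one service name.
func (h *healthServer) SetServingStatus(service string, s servingStatus) {
	h.statuses[service] = s
}

// Check returns the status for a service, defaulting to unknown.
func (h *healthServer) Check(service string) servingStatus {
	if s, ok := h.statuses[service]; ok {
		return s
	}
	return statusUnknown
}
```

A client configured for health checking polls `Check("")` (or watches it) and stops sending RPCs to an endpoint whose overall status leaves SERVING.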
Aaron Gable b6c5ee69ed
Make ARI error messages clearer (#8260)
Fixes https://github.com/letsencrypt/boulder/issues/8259
2025-06-17 16:55:36 -07:00
Jacob Hoffman-Andrews 5ad5f85cfb
bdns: deprecate DOH feature flag (#8234)
Since the bdns unittests used a local DNS server via TCP, modify that
server to instead speak DoH.

Fixes #8120
2025-06-17 14:45:52 -07:00
Samantha Frank c97b312e65
integration: Move test_order_finalize_early to the Go tests (#8258)
Hyrum’s Law strikes again: our Python integration tests were implicitly
relying on behavior that was changed upstream in Certbot’s ACME client
(see https://github.com/certbot/certbot/pull/10239). To ensure continued
coverage, replicate this test in our Go integration test suite.
2025-06-17 17:19:34 -04:00
Aaron Gable aa3c9f0eee
Drop contact column from registrations table (#8201)
Drop the contact column from the Registrations table.

Part of https://github.com/letsencrypt/boulder/issues/8199
2025-06-16 14:58:53 -07:00
James Renken 61d2558b29
bad-key-revoker: Fix log message formatting (#8252)
Fixes #8251
2025-06-16 11:30:14 -07:00
Aaron Gable c68e27ea6f
Stop overwriting contact column upon account deactivation (#8248)
This fixes an oversight in
https://github.com/letsencrypt/boulder/pull/8200.

Part of https://github.com/letsencrypt/boulder/issues/8199
2025-06-16 09:29:57 -07:00
Aaron Gable fbf0c06427
Delete admin update-email subcommand (#8246)
Part of https://github.com/letsencrypt/boulder/issues/8199
2025-06-16 09:29:44 -07:00
Aaron Gable 24c385c1cc
Delete contact-auditor (#8244)
The contact-auditor's purpose was to scan the contact emails stored in
our database and identify invalid addresses which could be removed. As
of https://github.com/letsencrypt/boulder/pull/8201 we no longer have
any contacts in the database, so this tool no longer has a purpose.

Part of https://github.com/letsencrypt/boulder/issues/8199
2025-06-16 09:29:33 -07:00
dependabot[bot] 6872dfc63a
build(deps): bump the aws group with 4 updates (#8242)
Bumps the aws group with 4 updates:
[github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2),
[github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2),
[github.com/aws/aws-sdk-go-v2/service/s3](https://github.com/aws/aws-sdk-go-v2)
and [github.com/aws/smithy-go](https://github.com/aws/smithy-go).

Updates `github.com/aws/aws-sdk-go-v2` from 1.32.2 to 1.36.4

Updates `github.com/aws/aws-sdk-go-v2/config` from 1.27.43 to 1.29.16

Updates `github.com/aws/aws-sdk-go-v2/service/s3` from 1.65.3 to 1.80.2

Updates `github.com/aws/smithy-go` from 1.22.0 to 1.22.2

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-13 22:40:08 -07:00
Aaron Gable 1ffa95d53d
Stop interacting with registration.contact column (#8200)
Deprecate the IgnoreAccountContacts feature flag. This causes the SA to
never query the contact column when reading registrations from the
database, and to never write a value for the contact column when
creating a new registration.

This requires updating or disabling several tests. These tests could be
deleted now, but I felt it was more appropriate for them to be fully
deleted when their corresponding services (e.g. expiration-mailer) are
also deleted.

Fixes https://github.com/letsencrypt/boulder/issues/8176
2025-06-13 14:40:19 -07:00
James Renken 7214b285e4
identifier: Remove helper funcs from PB identifiers migration (#8236)
Remove `ToDNSSlice`, `FromProtoWithDefault`, and
`FromProtoSliceWithDefault` now that all their callers are gone. All
protobufs but one have migrated from DnsNames to Identifiers.

Remove TODOs for the exception, `ValidationRecord`, where an identifier
type isn't appropriate and it really only needs a string.

Rename `corepb.ValidationRecord.DnsName` to `Hostname` for clarity, to
match the corresponding PB's field name.

Improve various comments and docs re: IP address identifiers.

Depends on #8221 (which removes the last callers)
Fixes #8023
2025-06-13 12:55:32 -07:00
Aaron Gable b9a681dbcc
Delete notify-mailer, expiration-mailer, and id-exporter (#8230)
These services existed solely for the purpose of sending emails, which
we no longer do.

Part of https://github.com/letsencrypt/boulder/issues/8199
2025-06-12 15:45:04 -07:00
James Renken 0a095e2f6b
policy, ra: Remove default allows for DNS identifiers (#8233)
Fixes #8184
2025-06-12 15:25:23 -07:00
James Renken 48d5ad3c19
ratelimits: Add IP address identifier support (#8221)
Change most functions in `ratelimits` to use full ACMEIdentifier(s) as
arguments, instead of using their values as strings. This makes the
plumbing from other packages more consistent, and allows us to:

Rename `FQDNsToETLDsPlusOne` to `coveringIdentifiers` and handle IP
identifiers, parsing IPv6 addresses into their covering /64 prefixes for
CertificatesPerDomain[PerAccount] bucket keys.

Port improved IP/CIDR validation logic to NewRegistrationsPerIPAddress &
PerIPv6Range.

Rename `domain` parts of bucket keys to either `identValue` or
`domainOrCIDR`.

Rename other internal functions to clarify that they now handle
identifier values, not just domains.

Add the new reserved IPv6 address range from RFC 9780.

For deployability, don't (yet) rename rate limits themselves; and
because it remains the name of the database table, preserve the term
`fqdnSets`.

Fixes #8223
Part of #7311
2025-06-12 11:47:32 -07:00
Aaron Gable 1f36d654ba
Update CI to mariadb 10.6.22 (#8239)
Fixes https://github.com/letsencrypt/boulder/issues/8238
2025-06-11 15:19:09 -07:00
Aaron Gable 44f75d6abd
Remove mail functionality from bad-key-revoker (#8229)
Simplify the main logic loop to revoke certs as soon as they're
identified, rather than jumping through hoops to identify and
deduplicate the associated accounts and emails. Make the Mailer portion
of the config optional for deployability.

Part of https://github.com/letsencrypt/boulder/issues/8199
2025-06-09 14:36:19 -07:00
Aaron Gable d4e706eeb8
Update CI to go1.24.4 (#8232)
Go 1.24.4 is a security release containing fixes to net/http,
os.OpenFile, and x509.Certificate.Verify, all of which we use. We appear
to be unaffected by the specific vulnerabilities described, however. See
the announcement here:
https://groups.google.com/g/golang-announce/c/ufZ8WpEsA3A
2025-06-09 09:30:33 -07:00
dependabot[bot] 426482781c
build(deps): bump the otel group (#7968)
Update:
- https://github.com/open-telemetry/opentelemetry-go-contrib from 0.55.0 to 0.61.0
- https://github.com/open-telemetry/opentelemetry-go from 1.30.0 to 1.36.0
- several golang.org/x/ packages
- their transitive dependencies
2025-06-06 17:22:48 -07:00
Aaron Gable 1d713ed8eb
Ignore IP CNs in CSRs (#8231)
If a finalize CSR contains a SAN which looks like an IP address, don't
actually include that CN in our IssuanceRequest, and don't promote any
other SAN to be the CN either. This is similar to how we ignore the
CSR's CN when it is too long.
2025-06-06 14:57:12 -07:00
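The check described — whether a CSR's CN looks like an IP address — can be done with stdlib parsing. A hedged sketch, not Boulder's actual finalize code; the helper name is hypothetical:

```go
package main

import "net/netip"

// cnForIssuance returns the CN to include in an issuance request:
// a CN that parses as an IP address is dropped (returned empty), and
// no other SAN is promoted in its place.
func cnForIssuance(csrCN string) string {
	if _, err := netip.ParseAddr(csrCN); err == nil {
		return "" // looks like an IP address: omit the CN entirely
	}
	return csrCN
}
```

This mirrors the existing behavior for over-long CNs: the certificate simply issues with no CN rather than with a substituted one.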
Aaron Gable 83b6b05177
Update golangci-lint to v2 (#8228)
The golangci-lint project has released a v2, which is noticeably faster,
splits linters and formatters into separate categories, has greatly
improved support for staticcheck, and has an incompatible config file
format. Update our boulder-tools version of golangci-lint to v2, remove
our standalone staticcheck, and update our config file to match.
2025-06-06 14:38:15 -07:00
Aaron Gable d951304b54
Ratelimits: don't validate our own constructed bucket keys (#8225)
All of the identifiers being passed into the bucket construction helpers
have already passed through policy.WellFormedIdentifiers in the WFE. We
can trust that function, and our own ability to construct bucket keys,
to reduce the amount of revalidation we do before sending bucket keys to
redis.

The validateIdForName function is still used to validate override bucket
keys loaded from yaml.
2025-06-03 15:43:07 -07:00
Aaron Gable 474fc7f9a7
Partially revert "bdns, va: Remove DNSAllowLoopbackAddresses" (#8226)
This partially reverts https://github.com/letsencrypt/boulder/pull/8203,
which was landed as commit dea81c7381.

It leaves all of the boulder integration test environment changes in
place, while restoring the DNSAllowLoopbackAddresses config key and its
ability to influence the VA's behavior.
2025-06-03 14:52:46 -07:00
Samantha Frank 0d7ea60b2c
email-exporter: Add an LRU cache of seen hashed email addresses (#8219) 2025-05-30 17:04:35 -04:00
Aaron Gable 23608e19c5
Simplify docker-compose network setup (#8214)
Remove static IPs from services that can be reached by their service
name. Remove consulnet and redisnet, and have the services which
connected to those network connect directly to bouldernet instead.
Instruct docker-compose to only dynamically allocate IPs from the upper
half of the bouldernet subnet, to avoid clashing with the few static IPs
we still specify.
2025-05-30 13:23:27 -07:00
Samantha Frank 69ba857d5e
ra: Allow rate limit overrides to be added/updated (#8218)
#8217
2025-05-30 14:07:58 -04:00
James Renken dea81c7381
bdns, va: Remove DNSAllowLoopbackAddresses (#8203)
We no longer need a code path to resolve reserved IP addresses during
integration tests.

Move to a public IP for the remaining tests, after #8187 did so for many
of them.

Depends on #8187
2025-05-28 10:08:03 -07:00
James Renken ac68828f43
Replace most uses of net.IP with netip.Addr (#8205)
Retain `net.IP` only where we directly work with `x509.Certificate` and
friends.

Fixes #5925
Depends on #8196
2025-05-27 15:05:35 -07:00
James Renken 9b9ed86c10
sa: Encode IP identifiers for issuedNames (#8210)
Move usage of `sa.ReverseName` to a new `sa.EncodeIssuedName`, which
detects IP addresses and exempts them from being reversed. Retain
`reverseName` as an internal helper function.

Update `id-exporter`, `reversed-hostname-checker`, and tests to use the
new function and handle IP addresses.

Part of #7311
2025-05-27 14:55:19 -07:00
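The distinction here is that DNS names are stored label-reversed (so names sharing a registered domain sort together), while IP addresses are stored verbatim. A simplified sketch consistent with that description — not the exact Boulder implementation:

```go
package main

import (
	"net/netip"
	"strings"
)

// reverseName turns "www.example.com" into "com.example.www" so that
// names under the same registered domain group together in issuedNames.
func reverseName(domain string) string {
	labels := strings.Split(domain, ".")
	for i, j := 0, len(labels)-1; i < j; i, j = i+1, j-1 {
		labels[i], labels[j] = labels[j], labels[i]
	}
	return strings.Join(labels, ".")
}

// encodeIssuedName reverses DNS names but stores IP addresses as-is,
// since reversing an IP's textual form would be meaningless.
func encodeIssuedName(name string) string {
	if _, err := netip.ParseAddr(name); err == nil {
		return name
	}
	return reverseName(name)
}
```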
James Renken b017c1b46d
bdns, policy: Move reserved IP checking from bdns to policy & refactor (#8196)
Move `IsReservedIP` and its supporting vars from `bdns` to `policy`.

Rewrite `IsReservedIP` to:
* Use `netip` because `netip.Prefix` can be used as a map key, allowing
us to define prefix lists more elegantly. This will enable future work
to import prefix lists from IANA's primary source data.
* Return an error including the reserved network's name.

Refactor `IsReservedIP` tests to be table-based.

Fixes #8040
2025-05-27 13:24:21 -07:00
James Renken 103ffb03d0
wfe, csr: Add IP address identifier support & integration test (#8187)
Permit all valid identifier types in `wfe.NewOrder` and `csr.VerifyCSR`.

Permit certs with just IP address identifiers to skip
`sa.addIssuedNames`.

Check that URI SANs are empty in `csr.VerifyCSR`, which was previously
missed.

Use a real (Let's Encrypt) IP address range in integration testing, to
let challtestsrv satisfy IP address challenges.

Fixes #8192
Depends on #8154
2025-05-27 13:17:47 -07:00
Aaron Gable 8a7c3193a9
SA: Use IgnoreAccountContacts flag to shortcut UpdateRegistrationContact (#8208)
If the IgnoreAccountContacts flag is set, don't bother writing the new
contacts to the database and instead just return the account object as
it stands. This does not require any test changes because
https://github.com/letsencrypt/boulder/pull/8198 already changed
registrationModelToPb to omit whatever contacts were retrieved from the
database before responding to the RA.

Part of https://github.com/letsencrypt/boulder/issues/8176
2025-05-23 13:01:51 -07:00
Aaron Gable 930e69b8f5
Remove expectation of contacts from id-exporter (#8209)
It appears that, in the past, we wanted id-exporter's "tell me all the
accounts with unexpired certificates" functionality to limit itself to
accounts that have contact info. The reasons for this limitation are
unclear, and are quickly becoming obsolete as we remove contact info
from the registrations table.

Remove this layer of filtering, so that id-exporter will retrieve all
accounts with active certificates, and not care whether the contact
column exists or not.

Part of https://github.com/letsencrypt/boulder/issues/8199
2025-05-23 13:01:27 -07:00
Aaron Gable d63f65c837
Give registrations.contact column a default value (#8207)
Alter the "registrations" table so that the "contact" column has a
default value of the JSON empty list "[]". This, once deployed to all
production environments, will allow Boulder to stop writing to and
reading from this column, in turn allowing it to be eventually wholly
dropped from the database.

IN-11365 tracks the corresponding production database changes
Part of https://github.com/letsencrypt/boulder/issues/8176
2025-05-22 15:04:37 -07:00
Aaron Gable 2eaa2fea64
SA: Stop storing and retrieving contacts (#8198)
Add a feature flag "IgnoreAccountContacts" which has two effects in the
SA:
- When a new account is created, don't insert any contacts provided; and
- When an account is retrieved, ignore any contacts already present.

This causes boulder to act as though all accounts have no associated
contacts, and is the first step towards being able to drop the contacts
from the database entirely.

Part of https://github.com/letsencrypt/boulder/issues/8176
2025-05-21 16:23:35 -07:00
Aaron Gable d662c0843d
Include Location: header in GET Order responses (#8202)
This causes the GET Order polling response to match the NewOrder and
FinalizeOrder responses.

Fixes https://github.com/letsencrypt/boulder/issues/8197
2025-05-21 16:21:54 -07:00
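Matching the NewOrder and FinalizeOrder responses means the GET Order handler sets a Location header pointing back at the order URL. A minimal stdlib sketch of that behavior — a hypothetical handler with a made-up order URL, not the WFE's code:

```go
package main

import (
	"net/http"
	"net/http/httptest"
)

// getOrderHandler serves a GET Order poll response and includes a
// Location header pointing back at the order URL, matching the
// NewOrder and FinalizeOrder responses.
func getOrderHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Location", "https://acme.example/acme/order/1/2")
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusOK)
	w.Write([]byte(`{"status": "valid"}`))
}

// demoLocation exercises the handler and returns the Location header.
func demoLocation() string {
	rec := httptest.NewRecorder()
	req := httptest.NewRequest("GET", "https://acme.example/acme/order/1/2", nil)
	getOrderHandler(rec, req)
	return rec.Result().Header.Get("Location")
}
```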
Phil Porada 7ea51e5f91
boulder-observer: check certificate status via CRL too (#8186)
Let's Encrypt [recently removed OCSP URLs from
certificates](https://community.letsencrypt.org/t/removing-ocsp-urls-from-certificates/236699)
which unfortunately caused the boulder-observer TLS prober to panic.
This change short circuits the OCSP checking logic if no OCSP URL exists
in the to-be-checked certificate.

Fixes https://github.com/letsencrypt/boulder/issues/8185

---------

Co-authored-by: Aaron Gable <aaron@letsencrypt.org>
2025-05-20 09:24:21 -07:00
Aaron Gable ac2dae70f2
cert-checker: add support for ipAddress SANs (#8188)
In cert-checker, inspect both the DNS Names and the IP Addresses
contained within the certificate being examined. Also add a check that
no other kinds of SANs exist in the certificate.

Fixes https://github.com/letsencrypt/boulder/issues/8183
2025-05-16 16:22:56 -07:00
James Renken aaaf623d49
va: Remove deprecated Domain from vapb.IsCAAValidRequest (#8193)
Part of #8023
2025-05-16 15:21:28 -07:00
James Renken 60033836db
ra: Add IdentifierTypes to profiles (#8154)
Add `IdentifierTypes` to validation profiles' config, defaulting to DNS
if not set.

In `NewOrder`, check that the order's profile permits each identifier's
type.

Fixes #8137
Depends on #8173
2025-05-16 13:57:02 -07:00
Aaron Gable c9e2f98b5d
Remove OCSP and MustStaple support from issuance (#8181)
Remove the ability for the issuance package to include the AIA OCSP URI
and the Must-Staple (more properly known as the tlsFeature) extension in
certificates. Deprecate the "OmitOCSP" and "AllowMustStaple" profile
config keys, as they no longer have any effect. Similarly deprecate the
"OCSPURL" issuer config key, as it is no longer included in
certificates.

Update the tests to always include the CRLDP extension instead, and
remove some OCSP- or Stapling-specific test cases.

Fixes https://github.com/letsencrypt/boulder/issues/8179
2025-05-16 11:51:02 -07:00
Matthew McPherrin caa29b2937
Update to zlint 3.6.6 (#8194)
v3.6.5 and v3.6.6 include several new lints and bugfixes.
Release notes at https://github.com/zmap/zlint/releases
2025-05-16 11:48:31 -07:00
James Renken bef73f3c8b
va: Fix deployability of CAA change in #8153 (#8190)
In #8153, we started using identifiers in `vapb.IsCAAValidRequest`, and
added logic at the top of `va.DoCAA` to populate the `ident` variable
from the deprecated `Domain` value, in order to accommodate clients that
don't yet populate the `Identifier`.

Unfortunately, we didn't use the `ident` variable throughout the entire
function. Two places refer directly to `req.Identifier` and can't handle
it being nil.

Fixes #8189
2025-05-15 12:21:30 -07:00
Aaron Gable 4d7473e5ea
Remove support for OCSP Must-Staple allowlist (#8180)
Fixes https://github.com/letsencrypt/boulder/issues/8178
2025-05-14 16:20:05 -07:00
James Renken 648ab05b37
policy: Support IP address identifiers (#8173)
Add `pa.validIP` to test IP address validity & absence from IANA
reservations.

Modify `pa.WillingToIssue` and `pa.WellFormedIdentifiers` to support IP
address identifiers.

Add a map of allowed identifier types to the `pa` config.

Part of #8137
2025-05-14 13:49:51 -07:00
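The validity check described above (parse the value as an IP address, then reject reserved ranges) can be sketched with the standard library. This is an illustrative stand-in, not `pa.validIP` itself: Boulder consults the full IANA special-purpose registries, while this sketch only uses the coarser properties `net/netip` exposes.

```go
package main

import (
	"fmt"
	"net/netip"
)

// isValidIPIdentifier sketches the shape of an IP-identifier policy check:
// the string must parse as an address, and the address must not be in a
// reserved range. Boulder's real check is stricter (full IANA registries).
func isValidIPIdentifier(s string) bool {
	addr, err := netip.ParseAddr(s)
	if err != nil {
		return false
	}
	// IsGlobalUnicast excludes loopback, multicast, link-local, and the
	// unspecified address; IsPrivate excludes RFC 1918 / ULA space.
	return addr.IsGlobalUnicast() && !addr.IsPrivate() && !addr.IsLoopback()
}

func main() {
	fmt.Println(isValidIPIdentifier("93.184.216.34")) // public address
	fmt.Println(isValidIPIdentifier("10.0.0.1"))      // RFC 1918 private
}
```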
James Renken 4d28e010f6
Add more lints: asciicheck, bidichk, spancheck (#8182)
Remove a few trivial instances of trailing whitespace.
2025-05-13 11:56:40 -07:00
Jacob Hoffman-Andrews f0dfbfdb08
deps: update certificate-transparency-go (#8171)
This allows us to drop a transitive dependency on k8s.io/klog.
2025-05-12 14:55:09 -07:00
Jacob Hoffman-Andrews 388c68cb49
sa: use internal certificateStatusModel instead of core.CertificateStatus (#8159)
Part of https://github.com/letsencrypt/boulder/issues/8112
2025-05-12 14:53:08 -07:00
Jacob Hoffman-Andrews 01a299cd0f
Deprecate MPICFullResults feature flag (#8169)
Fixes https://github.com/letsencrypt/boulder/issues/8121
2025-05-12 14:47:32 -07:00
Samantha Frank b6887a945e
email-exporter: Count Pardot API errors encountered (#8175) 2025-05-12 14:43:09 -07:00
Aaron Gable faa07f5e36
Finish cleaning up unused CT config types (#8174)
The last use of these types was removed in
https://github.com/letsencrypt/boulder/pull/8156
2025-05-10 18:37:59 -07:00
Samantha Frank e625ff3534
sa: Store and manage rate limit overrides in the database (#8142)
Add support for managing and querying rate limit overrides in the
database.
- Add `sa.AddRateLimitOverride` to insert or update a rate limit
override. This will be used by the Rate Limit Override Portal to commit
approved overrides to the database.
- Add `sa.DisableRateLimitOverride` and `sa.EnableRateLimitOverride` to
toggle override state. These will be used by the `admin` tool.
- Add `sa.GetRateLimitOverride` to retrieve a single override by limit
enum and bucket key. This will be used by the Rate Limit Portal to
prevent duplicate or downgrade requests but allow upgrade requests.
- Add `sa.GetEnabledRateLimitOverrides` to stream all currently enabled
overrides. This will be used by the rate limit consumers (`wfe` and
`ra`) to refresh the overrides in-memory.
- Implement test coverage for all new methods.
2025-05-08 14:50:30 -04:00
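The consumer-facing half of this design (wfe/ra loading only enabled overrides into memory) can be sketched as follows. The type and function names here are hypothetical; the real model in the `sa` package carries more fields (limit enum, bucket key, comment, timestamps).

```go
package main

import "fmt"

// override is a minimal stand-in for a rate limit override row.
type override struct {
	bucketKey string
	count     int64
	enabled   bool
}

// enabledOverrides mimics what GetEnabledRateLimitOverrides provides to
// consumers: only currently enabled overrides are loaded into memory.
func enabledOverrides(all []override) []override {
	var out []override
	for _, o := range all {
		if o.enabled {
			out = append(out, o)
		}
	}
	return out
}

func main() {
	rows := []override{
		{bucketKey: "newOrdersPerAccount:12345", count: 600, enabled: true},
		{bucketKey: "certificatesPerDomain:example.com", count: 300, enabled: false},
	}
	// Only the enabled override should reach the in-memory limiter.
	fmt.Println(len(enabledOverrides(rows)))
}
```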
James Renken 650c269bf6
ra, va: Bypass CAA for IP identifiers & use Identifier in IsCAAValidRequest (#8153)
In `vapb.IsCAAValidRequest`, even though CAA is only for DNS names,
deprecate `Domain` in favour of `Identifier` for consistency.

In `va.DoCAA`, reject attempts to validate CAA for non-DNS identifiers.

Rename `identifier` to `ident` inside some VA functions, also for
consistency.

In `ra.checkDCVAndCAA` & `ra.checkAuthorizationsCAA`, bypass CAA checks
for IP address identifiers.

Part of #7995
2025-05-08 11:22:06 -07:00
Aaron Gable f86f88d563
Include supported algs in badSignatureAlgorithm problem doc (#8170)
Add an "algorithms" field to all problem documents, but tag it so it
won't be included in the serialized json unless populated. Populate it
only when the problem type is "badSignatureAlgorithm", as specified in
RFC 8555 Section 6.2.

The resulting problem document looks like this:
```json
{
    "type": "urn:ietf:params:acme:error:badSignatureAlgorithm",
    "detail": "Unable to validate JWS :: JWS signature header contains unsupported algorithm
 \"RS512\", expected one of [RS256 ES256 ES384 ES512]",
    "status": 400,
    "algorithms": [
        "RS256",
        "ES256",
        "ES384",
        "ES512"
    ]
}
```

Fixes https://github.com/letsencrypt/boulder/issues/8155
2025-05-07 18:29:14 -07:00
James Renken 52615d9060
ra: Fully support identifiers in NewOrder, PerformValidation & RevokeCertByApplicant (#8139)
In `ra.NewOrder`, improve safety of authz reuse logic by making it
explicit that only DNS identifiers might be wildcards. Also, now that
the conditional statements need to be more complicated, collapse them
for brevity.

In `vapb.PerformValidationRequest`, remove `DnsName`.

In `ra.PerformValidation`, pass an `Identifier` instead of a `DnsName`.

In `ra.RevokeCertByApplicant`, check that the requester controls
identifiers of all types (not just DNS).

Fixes #7995 (the RA now fully supports IP address identifiers, except
for rate limits)
Fixes #7647 
Part of #8023
2025-05-07 15:11:41 -07:00
Matthew McPherrin b26b116861
Update certificate-transparency-go for bugfix (#8160)
This updates to current `master`,
bc7acd89f703743d050f5cd4a3b9746808e0fdae

Notably, it includes a bug-fix to error handling in the HTTP client,
which we found was hiding errors from CT logs, hindering our debugging.

That fix is
https://github.com/google/certificate-transparency-go/pull/1695

No release has been tagged since this PR merged, so using the `master`
commit.

A few mutual dependencies used by both Boulder and ct-go are updated,
including mysql, otel, and grpc.
2025-05-06 12:10:53 -07:00
Matthew McPherrin 36bb6527e5
Remove obsolete informational CT config (#8156)
This field is unused. This has been configured in the CTLogs field for
years.

The field has been a no-op since #6485 and was removed from Let's
Encrypt prod configuration in 2022.
2025-05-05 14:18:35 -04:00
Aaron Gable 9102759f4e
Make CT log selection simpler and more robust (#8152)
Simplify the way we load and handle CT logs: rather than keeping them
grouped by operator, simply keep a flat list and annotate each log with
its operator's name. At submission time, instead of shuffling operator
groups and submitting to one log from each group, shuffle the whole set
of individual logs.

Support tiled logs by similarly annotating each log with whether it is
tiled or not.

Also make the way we know when to stop getting SCTs more robust.
Previously we would stop as soon as we had two, since we knew that they
would be from different operator groups and didn't care about tiled
logs. Instead, introduce an explicit CT policy compliance evaluation
function which tells us if the set of SCTs we have so far forms a
compliant set.

This is not our desired end-state for CT log submission. Ideally we'd
like to: simplify things even further (don't race all the logs, simply
try to submit to two at a time), improve selection (intelligently pick
the next log to submit to, rather than just a random shuffle), and
fine-tune latency (tiled logs should have longer timeouts than classic
ones). Those improvements will come in future PRs.

Part of https://github.com/letsencrypt/boulder/issues/7872
2025-05-01 17:24:19 -07:00
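The "explicit CT policy compliance evaluation function" can be sketched like this. This is a simplified illustration (two SCTs from two distinct operators), not Boulder's real `ctpolicy` code, which also accounts for tiled logs and other policy details.

```go
package main

import "fmt"

// sct is a minimal stand-in for a signed certificate timestamp, annotated
// with its log's operator name as described above.
type sct struct {
	operator string
	tiled    bool
}

// compliant reports whether the SCTs gathered so far form a compliant set
// under a simplified policy: at least two SCTs from two distinct operators.
func compliant(scts []sct) bool {
	operators := make(map[string]bool)
	for _, s := range scts {
		operators[s.operator] = true
	}
	return len(scts) >= 2 && len(operators) >= 2
}

func main() {
	fmt.Println(compliant([]sct{{operator: "A"}, {operator: "A"}})) // same operator: not compliant
	fmt.Println(compliant([]sct{{operator: "A"}, {operator: "B"}})) // distinct operators: compliant
}
```

Submission can then keep racing logs and stop as soon as `compliant` returns true, rather than hard-coding "stop at two".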
Aaron Gable e01bc22984
Update protoc-gen-go to match updated grpc libraries (#8151)
https://github.com/letsencrypt/boulder/pull/8150 updated our runtime
protobuf dependency from v1.34.1 to v1.36.5. This change does the same
for our build-time dependency, to keep them in sync.
2025-05-01 17:14:57 -07:00
Aaron Gable 1c1c4dcfef
Update certificate-transparency-go to get static/tiled log support (#8150)
Update github.com/google/certificate-transparency-go from v1.1.6 to
v1.3.1. This updates the loglist file schema to recognize logs which are
tagged as being tiled logs / implementing the static CT API.

Transitively update:
- github.com/go-sql-driver/mysql from v1.7.1 to v1.8.1
- github.com/prometheus/client_golang from v1.15.1 to v1.22.0
- github.com/prometheus/client_model from v0.4.0 to v0.6.1
- go.opentelemetry.io/otel from v1.30.0 to v1.31.0
- google.golang.org/grpc from v1.66.1 to v1.69.4
- google.golang.org/protobuf from v1.34.2 to v1.36.5
- and a variety of indirect dependencies

Remove one indirect dependency:
- github.com/matttproud/golang_protobuf_extensions

Add two new indirect dependencies:
- filippo.io/edwards25519@v1.1.0 (used by go-sql-driver to handle
mariadb's custom encryption implementation)
- github.com/munnerz/goautoneg@v0.0.0-20191010083416-a7dc8b61c822
(previously inlined into prometheus/common)

Also fix two unit tests which need minor modifications to work with
updated type signatures and behavior.

Part of https://github.com/letsencrypt/boulder/issues/7872
2025-04-30 15:56:31 -07:00
Samantha Frank 1274878d5e
integration: Fix second MPIC validation flake (#8146)
Break validation of length and content of expected User-Agents out into
two assertion functions. Make it so that DOH and MPICFullResults can be
deprecated in either order.

Fixes #8145
2025-04-28 11:14:38 -04:00
Aaron Gable 0038149c79
Fix profile comparison when looking for authzs to reuse (#8144)
Previously, if the request asked for a profile, we were comparing the
address of that requested profile to the address of the profile field of
the found authz. Obviously these addresses were never the same. Instead,
compare the actual values, with an added nil check for safety.

This fixes a bug reported on the community forum. The updated test fails
without the accompanying code change.
2025-04-25 15:24:50 -07:00
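The bug class here (comparing two pointers, which are never equal for distinct allocations, instead of the values they point to) and its fix can be sketched as below. The names are illustrative, not Boulder's actual fields:

```go
package main

import "fmt"

// profilesMatch compares a requested profile name against the profile stored
// on a found authorization. Comparing the pointers themselves
// (requested == authzProfile) is always false for distinct allocations;
// dereference and compare values, with a nil check for safety.
func profilesMatch(requested string, authzProfile *string) bool {
	return authzProfile != nil && *authzProfile == requested
}

func main() {
	p := "shortlived"
	fmt.Println(profilesMatch("shortlived", &p)) // values equal: reusable
	fmt.Println(profilesMatch("shortlived", nil)) // no profile on authz
}
```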
Aaron Gable 42138ff2da
Run .deb build on ubuntu 24.04 (#8143) 2025-04-24 17:33:13 -07:00
Aaron Gable bc899ac3ef
Update go-sql-driver/mysql from v1.5.0 to v1.7.1 (#8138)
Version v1.5.0 was released in January 2020, over five years ago. We
have attempted to update this package several times since then -- first
to v1.6.0, later to v1.7.1 -- but have reverted the change due to
nigh-inexplicable performance regressions each time. Since our last
attempt, we believe we have addressed the underlying issue by truncating
timestamps when we talk to the database (see
https://github.com/letsencrypt/boulder/pull/7556) so that our indices
don't try to track nanosecond precision.

We are now ready to reattempt updating this package to v1.7.1 again. If
that goes well, we will further update it to the newest version.

Fixes https://github.com/letsencrypt/boulder/issues/5437
Part of https://github.com/letsencrypt/boulder/issues/7872
2025-04-24 17:29:41 -07:00
James Renken dc8fa5a95f
ca: Add IP address issuance (#8117)
Refactor `ca.issuePrecertificateInner` away from the old `NamesFromCSR`
logic, and to our `identifier` functions.

Add `identifier.ToValues` to provide slices of identifier values, split
up by type.

Fixes #8135 
Part of #7311
2025-04-22 16:25:22 -07:00
dependabot[bot] 1ce439bc92
build(deps): bump golang.org/x/net from 0.37.0 to 0.38.0 (#8125)
Bumps https://github.com/golang/net from 0.37.0 to 0.38.0. This
resolves a minor vulnerability that does not directly affect Boulder.

Changelog: https://github.com/golang/net/compare/v0.37.0...v0.38.0
2025-04-21 13:56:26 -07:00
Jacob Hoffman-Andrews 726b3c91e8
test: copy some config-next settings to config (#8116)
Methodology:

 - Copy test/config-next/* to test/config/.
 - Review the diff, reverting things that should stay `next`-only.
 - When in doubt, check against prod configs (e.g. for feature flags).

In the process I noticed that config for the TCP prober in `observer`
had been added to test/config but not test/config-next, so I ported it
forward (and my IDE stripped some trailing spaces in both versions).
2025-04-21 13:54:31 -07:00
Jacob Hoffman-Andrews c95ab5c75f
crl-updater: UpdatePeriod safety check (#8131)
The current requirement is that CRLs must be published within 24 hours
after revoking a certificate.

Fixes #8110
2025-04-21 13:54:14 -07:00
Jacob Hoffman-Andrews 967d722cf4
sa: use internal certificateModel (#8130)
This follows the system we've used for other types, where the SA has a
model type that is converted to a proto message for use outside the SA.

Part of #8112.
2025-04-21 13:48:29 -07:00
Jacob Hoffman-Andrews 37147d4dfa
lint: add sqlclosecheck (#8129)
Picking up from #7709
2025-04-21 11:01:37 -07:00
Jacob Hoffman-Andrews e8eddc0d50
ca: remove capb.IssueCertificateForPrecertificateRequest (#8127)
Fixes #8039
2025-04-18 12:18:31 -07:00
Samantha Frank 6021d4b47d
docker: Update image to Ubuntu 24.04 (#8128)
#8109 updated CI to use 24.04 runners, now update the Docker image to
build 24.04 and CI to use it.

Build fixes:
- Unpin mariadb-client-core, 10.3 is no longer provided in 24.04 apt
repositories
- Use new pip flag --break-system-packages to comply with PEP 668, which
is now enforced in Python 3.12+

Runtime fixes:
- Start rsyslogd directly due to missing symlink (see:
https://github.com/rsyslog/rsyslog/issues/5611)
- Fix SyntaxWarning: invalid escape sequence '\w' error.
- Replace OpenSSL.crypto.load_certificate with
x509.load_pem_x509_certificate due to
d73d0ed417
2025-04-17 13:41:20 -04:00
Jacob Hoffman-Andrews 3e8ccdb8ba
Build deb in docker (#8126)
This allows us to build on Ubuntu 20.04 a little longer.
2025-04-17 11:15:52 -04:00
Jacob Hoffman-Andrews 585319f247
issuance: remove profile hashes (#8118)
Part of #8039
2025-04-16 16:57:24 -07:00
James Renken 23e14f1149
Update CI to Ubuntu 24.04 (#8109)
Fixes #7775
2025-04-16 14:32:55 -07:00
Samantha Frank b2eaabb4e1
test: Fix integration tests sensitive to MPICFullResults (#8122) 2025-04-16 10:08:17 -04:00
Jacob Hoffman-Andrews 3ddaa6770f
ca: make orderID mandatory (#8119)
It was allowed to be empty for ACMEv1 requests, but those are long gone.

Also, move the IsAnyNilOrZero checks up to the RPC entry point.
2025-04-15 14:56:28 -07:00
Samantha Frank 7a3feb2ceb
va/rva: Validate user-agent for http-01 and DoH requests (#8114)
Plumb the userAgent field, used to set http-01 User-Agent headers, from
va/rva configuration through to where User-Agent headers can be set for
DoH queries. Use integration tests to validate that the User-Agent is
set for http-01 challenges, dns-01 challenges over DoH, and CAA checks
over DoH.

Fixes #7963.
2025-04-15 16:31:08 -04:00
1166 changed files with 71413 additions and 51354 deletions


@@ -36,7 +36,7 @@ jobs:
matrix:
# Add additional docker image tags here and all tests will be run with the additional image.
BOULDER_TOOLS_TAG:
- go1.24.1_2025-04-02
- go1.24.4_2025-06-06
# Tests command definitions. Use the entire "docker compose" command you want to run.
tests:
# Run ./test.sh --help for a description of each of the flags.


@@ -0,0 +1,53 @@
name: Check for IANA special-purpose address registry updates
on:
schedule:
- cron: "20 16 * * *"
workflow_dispatch:
jobs:
check-iana-registries:
runs-on: ubuntu-latest
permissions:
contents: write
pull-requests: write
steps:
- name: Checkout iana/data from main branch
uses: actions/checkout@v4
with:
sparse-checkout: iana/data
# If the branch already exists, this will fail, which will remind us about
# the outstanding PR.
- name: Create an iana-registries-gha branch
run: |
git checkout --track origin/main -b iana-registries-gha
- name: Retrieve the IANA special-purpose address registries
run: |
IANA_IPV4="https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry-1.csv"
IANA_IPV6="https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry-1.csv"
REPO_IPV4="iana/data/iana-ipv4-special-registry-1.csv"
REPO_IPV6="iana/data/iana-ipv6-special-registry-1.csv"
curl --fail --location --show-error --silent --output "${REPO_IPV4}" "${IANA_IPV4}"
curl --fail --location --show-error --silent --output "${REPO_IPV6}" "${IANA_IPV6}"
- name: Create a commit and pull request
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
shell:
bash
# `git diff --exit-code` returns an error code if there are any changes.
run: |
if ! git diff --exit-code; then
git add iana/data/
git config user.name "Irwin the IANA Bot"
git commit \
--message "Update IANA special-purpose address registries"
git push origin HEAD
gh pr create --fill
fi


@@ -0,0 +1,17 @@
# This GitHub Action runs only on pushes to main or a hotfix branch. It can
# be used by tag protection rules to ensure that tags may only be pushed if
# their corresponding commit was first pushed to one of those branches.
name: Merged to main (or hotfix)
on:
push:
branches:
- main
- release-branch-*
jobs:
merged-to-main:
name: Merged to main (or hotfix)
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false


@@ -15,8 +15,8 @@ jobs:
fail-fast: false
matrix:
GO_VERSION:
- "1.24.1"
runs-on: ubuntu-20.04
- "1.24.4"
runs-on: ubuntu-24.04
permissions:
contents: write
packages: write
@@ -24,12 +24,16 @@ jobs:
- uses: actions/checkout@v4
with:
persist-credentials: false
fetch-depth: '0' # Needed for verify-release-ancestry.sh to see origin/main
- name: Verify release ancestry
run: ./tools/verify-release-ancestry.sh "$GITHUB_SHA"
- name: Build .deb
id: build
env:
GO_VERSION: ${{ matrix.GO_VERSION }}
run: ./tools/make-assets.sh
run: docker run -v $PWD:/boulder -e GO_VERSION=$GO_VERSION -e COMMIT_ID="$(git rev-parse --short=8 HEAD)" ubuntu:24.04 bash -c 'apt update && apt -y install gnupg2 curl sudo git gcc && cd /boulder/ && ./tools/make-assets.sh'
- name: Compute checksums
id: checksums


@@ -16,8 +16,8 @@ jobs:
fail-fast: false
matrix:
GO_VERSION:
- "1.24.1"
runs-on: ubuntu-20.04
- "1.24.4"
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v4
with:
@@ -27,7 +27,7 @@ jobs:
id: build
env:
GO_VERSION: ${{ matrix.GO_VERSION }}
run: ./tools/make-assets.sh
run: docker run -v $PWD:/boulder -e GO_VERSION=$GO_VERSION -e COMMIT_ID="$(git rev-parse --short=8 HEAD)" ubuntu:24.04 bash -c 'apt update && apt -y install gnupg2 curl sudo git gcc && cd /boulder/ && ./tools/make-assets.sh'
- name: Compute checksums
id: checksums


@@ -1,20 +1,23 @@
version: "2"
linters:
disable-all: true
default: none
enable:
- asciicheck
- bidichk
- errcheck
- gofmt
- gosec
- gosimple
- govet
- ineffassign
- misspell
- nolintlint
- typecheck
- spancheck
- sqlclosecheck
- staticcheck
- unconvert
- unparam
- unused
- wastedassign
linters-settings:
settings:
errcheck:
exclude-functions:
- (net/http.ResponseWriter).Write
@@ -24,14 +24,11 @@ linters-settings:
- net/http.Write
- os.Remove
- github.com/miekg/dns.WriteMsg
gosimple:
# S1029: Range over the string directly
checks: ["all", "-S1029"]
govet:
enable-all: true
disable:
- fieldalignment
- shadow
enable-all: true
settings:
printf:
funcs:
@@ -48,15 +48,42 @@ linters-settings:
# TODO: Identify, fix, and remove violations of most of these rules
- G101 # Potential hardcoded credentials
- G102 # Binds to all network interfaces
- G104 # Errors unhandled
- G107 # Potential HTTP request made with variable url
- G201 # SQL string formatting
- G202 # SQL string concatenation
- G204 # Subprocess launched with variable
- G302 # Expect file permissions to be 0600 or less
- G306 # Expect WriteFile permissions to be 0600 or less
- G304 # Potential file inclusion via variable
- G401 # Use of weak cryptographic primitive
- G402 # TLS InsecureSkipVerify set true.
- G403 # RSA keys should be at least 2048 bits
- G404 # Use of weak random number generator (math/rand instead of crypto/rand)
- G404 # Use of weak random number generator
nolintlint:
allow-unused: false
require-explanation: true
require-specific: true
allow-unused: false
staticcheck:
checks:
- all
# TODO: Identify, fix, and remove violations of most of these rules
- -S1029 # Range over the string directly
- -SA1019 # Using a deprecated function, variable, constant or field
- -SA6003 # Converting a string to a slice of runes before ranging over it
- -ST1000 # Incorrect or missing package comment
- -ST1003 # Poorly chosen identifier
- -ST1005 # Incorrectly formatted error string
- -QF1001 # Could apply De Morgan's law
- -QF1003 # Could use tagged switch
- -QF1004 # Could use strings.Split instead
- -QF1007 # Could merge conditional assignment into variable declaration
- -QF1008 # Could remove embedded field from selector
- -QF1009 # Probably want to use time.Time.Equal
- -QF1012 # Use fmt.Fprintf(...) instead of Write(fmt.Sprintf(...))
exclusions:
presets:
- std-error-handling
formatters:
enable:
- gofmt


@@ -3,10 +3,10 @@
[![Build Status](https://github.com/letsencrypt/boulder/actions/workflows/boulder-ci.yml/badge.svg?branch=main)](https://github.com/letsencrypt/boulder/actions/workflows/boulder-ci.yml?query=branch%3Amain)
This is an implementation of an ACME-based CA. The [ACME
protocol](https://github.com/ietf-wg-acme/acme/) allows the CA to
automatically verify that an applicant for a certificate actually controls an
identifier, and allows domain holders to issue and revoke certificates for
their domains. Boulder is the software that runs [Let's
protocol](https://github.com/ietf-wg-acme/acme/) allows the CA to automatically
verify that an applicant for a certificate actually controls an identifier, and
allows subscribers to issue and revoke certificates for the identifiers they
control. Boulder is the software that runs [Let's
Encrypt](https://letsencrypt.org).
## Contents


@@ -1,6 +1,6 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.34.1
// protoc-gen-go v1.36.5
// protoc v3.20.1
// source: akamai.proto
@@ -12,6 +12,7 @@ import (
emptypb "google.golang.org/protobuf/types/known/emptypb"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
@@ -22,21 +22,18 @@ const (
)
type PurgeRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
Urls []string `protobuf:"bytes,1,rep,name=urls,proto3" json:"urls,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *PurgeRequest) Reset() {
*x = PurgeRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_akamai_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *PurgeRequest) String() string {
return protoimpl.X.MessageStringOf(x)
@@ -46,7 +44,7 @@ func (*PurgeRequest) ProtoMessage() {}
func (x *PurgeRequest) ProtoReflect() protoreflect.Message {
mi := &file_akamai_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -70,7 +68,7 @@ func (x *PurgeRequest) GetUrls() []string {
var File_akamai_proto protoreflect.FileDescriptor
var file_akamai_proto_rawDesc = []byte{
var file_akamai_proto_rawDesc = string([]byte{
0x0a, 0x0c, 0x61, 0x6b, 0x61, 0x6d, 0x61, 0x69, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x06,
0x61, 0x6b, 0x61, 0x6d, 0x61, 0x69, 0x1a, 0x1b, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x65, 0x6d, 0x70, 0x74, 0x79, 0x2e, 0x70, 0x72,
@@ -85,22 +83,22 @@ var file_akamai_proto_rawDesc = []byte{
0x65, 0x74, 0x73, 0x65, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x2f, 0x62, 0x6f, 0x75, 0x6c, 0x64,
0x65, 0x72, 0x2f, 0x61, 0x6b, 0x61, 0x6d, 0x61, 0x69, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62,
0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
})
var (
file_akamai_proto_rawDescOnce sync.Once
file_akamai_proto_rawDescData = file_akamai_proto_rawDesc
file_akamai_proto_rawDescData []byte
)
func file_akamai_proto_rawDescGZIP() []byte {
file_akamai_proto_rawDescOnce.Do(func() {
file_akamai_proto_rawDescData = protoimpl.X.CompressGZIP(file_akamai_proto_rawDescData)
file_akamai_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_akamai_proto_rawDesc), len(file_akamai_proto_rawDesc)))
})
return file_akamai_proto_rawDescData
}
var file_akamai_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
var file_akamai_proto_goTypes = []interface{}{
var file_akamai_proto_goTypes = []any{
(*PurgeRequest)(nil), // 0: akamai.PurgeRequest
(*emptypb.Empty)(nil), // 1: google.protobuf.Empty
}
@@ -119,25 +117,11 @@ func file_akamai_proto_init() {
if File_akamai_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_akamai_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*PurgeRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_akamai_proto_rawDesc,
RawDescriptor: unsafe.Slice(unsafe.StringData(file_akamai_proto_rawDesc), len(file_akamai_proto_rawDesc)),
NumEnums: 0,
NumMessages: 1,
NumExtensions: 0,
@@ -148,7 +132,6 @@ func file_akamai_proto_init() {
MessageInfos: file_akamai_proto_msgTypes,
}.Build()
File_akamai_proto = out.File
file_akamai_proto_rawDesc = nil
file_akamai_proto_goTypes = nil
file_akamai_proto_depIdxs = nil
}


@@ -1,6 +1,6 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.3.0
// - protoc-gen-go-grpc v1.5.1
// - protoc v3.20.1
// source: akamai.proto
@@ -50,20 +50,24 @@ func (c *akamaiPurgerClient) Purge(ctx context.Context, in *PurgeRequest, opts .
// AkamaiPurgerServer is the server API for AkamaiPurger service.
// All implementations must embed UnimplementedAkamaiPurgerServer
// for forward compatibility
// for forward compatibility.
type AkamaiPurgerServer interface {
Purge(context.Context, *PurgeRequest) (*emptypb.Empty, error)
mustEmbedUnimplementedAkamaiPurgerServer()
}
// UnimplementedAkamaiPurgerServer must be embedded to have forward compatible implementations.
type UnimplementedAkamaiPurgerServer struct {
}
// UnimplementedAkamaiPurgerServer must be embedded to have
// forward compatible implementations.
//
// NOTE: this should be embedded by value instead of pointer to avoid a nil
// pointer dereference when methods are called.
type UnimplementedAkamaiPurgerServer struct{}
func (UnimplementedAkamaiPurgerServer) Purge(context.Context, *PurgeRequest) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method Purge not implemented")
}
func (UnimplementedAkamaiPurgerServer) mustEmbedUnimplementedAkamaiPurgerServer() {}
func (UnimplementedAkamaiPurgerServer) testEmbeddedByValue() {}
// UnsafeAkamaiPurgerServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to AkamaiPurgerServer will
@@ -73,6 +77,13 @@ type UnsafeAkamaiPurgerServer interface {
}
func RegisterAkamaiPurgerServer(s grpc.ServiceRegistrar, srv AkamaiPurgerServer) {
// If the following call pancis, it indicates UnimplementedAkamaiPurgerServer was
// embedded by pointer and is nil. This will cause panics if an
// unimplemented method is ever invoked, so we test this at initialization
// time to prevent it from happening at runtime later due to I/O.
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
t.testEmbeddedByValue()
}
s.RegisterService(&AkamaiPurger_ServiceDesc, srv)
}


@@ -9,6 +9,7 @@ import (
"io"
"net"
"net/http"
"net/netip"
"net/url"
"slices"
"strconv"
@@ -20,88 +21,11 @@ import (
"github.com/miekg/dns"
"github.com/prometheus/client_golang/prometheus"
"github.com/letsencrypt/boulder/features"
"github.com/letsencrypt/boulder/iana"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/metrics"
)
func parseCidr(network string, comment string) net.IPNet {
_, net, err := net.ParseCIDR(network)
if err != nil {
panic(fmt.Sprintf("error parsing %s (%s): %s", network, comment, err))
}
return *net
}
var (
// TODO(#8040): Rebuild these as structs that track the structure of IANA's
// CSV files, for better automated handling.
//
// Private CIDRs to ignore. Sourced from:
// https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
privateV4Networks = []net.IPNet{
parseCidr("0.0.0.0/8", "RFC 791, Section 3.2: This network"),
parseCidr("0.0.0.0/32", "RFC 1122, Section 3.2.1.3: This host on this network"),
parseCidr("10.0.0.0/8", "RFC 1918: Private-Use"),
parseCidr("100.64.0.0/10", "RFC 6598: Shared Address Space"),
parseCidr("127.0.0.0/8", "RFC 1122, Section 3.2.1.3: Loopback"),
parseCidr("169.254.0.0/16", "RFC 3927: Link Local"),
parseCidr("172.16.0.0/12", "RFC 1918: Private-Use"),
parseCidr("192.0.0.0/24", "RFC 6890, Section 2.1: IETF Protocol Assignments"),
parseCidr("192.0.0.0/29", "RFC 7335: IPv4 Service Continuity Prefix"),
parseCidr("192.0.0.8/32", "RFC 7600: IPv4 dummy address"),
parseCidr("192.0.0.9/32", "RFC 7723: Port Control Protocol Anycast"),
parseCidr("192.0.0.10/32", "RFC 8155: Traversal Using Relays around NAT Anycast"),
parseCidr("192.0.0.170/32", "RFC 8880 & RFC 7050, Section 2.2: NAT64/DNS64 Discovery"),
parseCidr("192.0.0.171/32", "RFC 8880 & RFC 7050, Section 2.2: NAT64/DNS64 Discovery"),
parseCidr("192.0.2.0/24", "RFC 5737: Documentation (TEST-NET-1)"),
parseCidr("192.31.196.0/24", "RFC 7535: AS112-v4"),
parseCidr("192.52.193.0/24", "RFC 7450: AMT"),
parseCidr("192.88.99.0/24", "RFC 7526: Deprecated (6to4 Relay Anycast)"),
parseCidr("192.168.0.0/16", "RFC 1918: Private-Use"),
parseCidr("192.175.48.0/24", "RFC 7534: Direct Delegation AS112 Service"),
parseCidr("198.18.0.0/15", "RFC 2544: Benchmarking"),
parseCidr("198.51.100.0/24", "RFC 5737: Documentation (TEST-NET-2)"),
parseCidr("203.0.113.0/24", "RFC 5737: Documentation (TEST-NET-3)"),
parseCidr("240.0.0.0/4", "RFC1112, Section 4: Reserved"),
parseCidr("255.255.255.255/32", "RFC 8190 & RFC 919, Section 7: Limited Broadcast"),
// 224.0.0.0/4 are multicast addresses as per RFC 3171. They are not
// present in the IANA registry.
parseCidr("224.0.0.0/4", "RFC 3171: Multicast Addresses"),
}
// Sourced from:
// https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml
privateV6Networks = []net.IPNet{
parseCidr("::/128", "RFC 4291: Unspecified Address"),
parseCidr("::1/128", "RFC 4291: Loopback Address"),
parseCidr("::ffff:0:0/96", "RFC 4291: IPv4-mapped Address"),
parseCidr("64:ff9b::/96", "RFC 6052: IPv4-IPv6 Translat."),
parseCidr("64:ff9b:1::/48", "RFC 8215: IPv4-IPv6 Translat."),
parseCidr("100::/64", "RFC 6666: Discard-Only Address Block"),
parseCidr("2001::/23", "RFC 2928: IETF Protocol Assignments"),
parseCidr("2001::/32", "RFC 4380 & RFC 8190: TEREDO"),
parseCidr("2001:1::1/128", "RFC 7723: Port Control Protocol Anycast"),
parseCidr("2001:1::2/128", "RFC 8155: Traversal Using Relays around NAT Anycast"),
parseCidr("2001:1::3/128", "RFC-ietf-dnssd-srp-25: DNS-SD Service Registration Protocol Anycast"),
parseCidr("2001:2::/48", "RFC 5180 & RFC Errata 1752: Benchmarking"),
parseCidr("2001:3::/32", "RFC 7450: AMT"),
parseCidr("2001:4:112::/48", "RFC 7535: AS112-v6"),
parseCidr("2001:10::/28", "RFC 4843: Deprecated (previously ORCHID)"),
parseCidr("2001:20::/28", "RFC 7343: ORCHIDv2"),
parseCidr("2001:30::/28", "RFC 9374: Drone Remote ID Protocol Entity Tags (DETs) Prefix"),
parseCidr("2001:db8::/32", "RFC 3849: Documentation"),
parseCidr("2002::/16", "RFC 3056: 6to4"),
parseCidr("2620:4f:8000::/48", "RFC 7534: Direct Delegation AS112 Service"),
parseCidr("3fff::/20", "RFC 9637: Documentation"),
parseCidr("5f00::/16", "RFC 9602: Segment Routing (SRv6) SIDs"),
parseCidr("fc00::/7", "RFC 4193 & RFC 8190: Unique-Local"),
parseCidr("fe80::/10", "RFC 4291: Link-Local Unicast"),
// ff00::/8 are multicast addresses as per RFC 4291, Sections 2.4 & 2.7.
// They are not present in the IANA registry.
parseCidr("ff00::/8", "RFC 4291: Multicast Addresses"),
}
)
// ResolverAddrs contains DNS resolver(s) that were chosen to perform a
// validation request or CAA recheck. A ResolverAddr will be in the form of
// host:port, A:host:port, or AAAA:host:port depending on which type of lookup
@@ -111,7 +35,7 @@ type ResolverAddrs []string
// Client queries for DNS records
type Client interface {
LookupTXT(context.Context, string) (txts []string, resolver ResolverAddrs, err error)
LookupHost(context.Context, string) ([]net.IP, ResolverAddrs, error)
LookupHost(context.Context, string) ([]netip.Addr, ResolverAddrs, error)
LookupCAA(context.Context, string) ([]*dns.CAA, string, ResolverAddrs, error)
}
@@ -147,11 +71,12 @@ func New(
stats prometheus.Registerer,
clk clock.Clock,
maxTries int,
userAgent string,
log blog.Logger,
tlsConfig *tls.Config,
) Client {
var client exchanger
if features.Get().DOH {
// Clone the default transport because it comes with various settings
// that we like, which are different from the zero value of an
// `http.Transport`.
@@ -167,13 +92,7 @@ func New(
Timeout: readTimeout,
Transport: transport,
},
}
} else {
client = &dns.Client{
// Set timeout for underlying net.Conn
ReadTimeout: readTimeout,
Net: "udp",
}
userAgent: userAgent,
}
queryTime := prometheus.NewHistogramVec(
@@ -230,10 +149,11 @@ func NewTest(
stats prometheus.Registerer,
clk clock.Clock,
maxTries int,
userAgent string,
log blog.Logger,
tlsConfig *tls.Config,
) Client {
resolver := New(readTimeout, servers, stats, clk, maxTries, log, tlsConfig)
resolver := New(readTimeout, servers, stats, clk, maxTries, userAgent, log, tlsConfig)
resolver.(*impl).allowRestrictedAddresses = true
return resolver
}
@ -353,17 +273,10 @@ func (dnsClient *impl) exchangeOne(ctx context.Context, hostname string, qtype u
case r := <-ch:
if r.err != nil {
var isRetryable bool
if features.Get().DOH {
// According to the http package documentation, retryable
// errors emitted by the http package are of type *url.Error.
var urlErr *url.Error
isRetryable = errors.As(r.err, &urlErr) && urlErr.Temporary()
} else {
// According to the net package documentation, retryable
// errors emitted by the net package are of type *net.OpError.
var opErr *net.OpError
isRetryable = errors.As(r.err, &opErr) && opErr.Temporary()
}
hasRetriesLeft := tries < dnsClient.maxTries
if isRetryable && hasRetriesLeft {
tries++
@ -388,7 +301,6 @@ func (dnsClient *impl) exchangeOne(ctx context.Context, hostname string, qtype u
return
}
}
}
// isTLD returns a simplified view of whether something is a TLD: does it have
@ -430,24 +342,6 @@ func (dnsClient *impl) LookupTXT(ctx context.Context, hostname string) ([]string
return txt, ResolverAddrs{resolver}, err
}
func isPrivateV4(ip net.IP) bool {
for _, net := range privateV4Networks {
if net.Contains(ip) {
return true
}
}
return false
}
func isPrivateV6(ip net.IP) bool {
for _, net := range privateV6Networks {
if net.Contains(ip) {
return true
}
}
return false
}
func (dnsClient *impl) lookupIP(ctx context.Context, hostname string, ipType uint16) ([]dns.RR, string, error) {
resp, resolver, err := dnsClient.exchangeOne(ctx, hostname, ipType)
switch ipType {
@ -472,7 +366,7 @@ func (dnsClient *impl) lookupIP(ctx context.Context, hostname string, ipType uin
// chase CNAME/DNAME aliases and return relevant records. It will retry
// requests in the case of temporary network errors. It returns an error if
// both the A and AAAA lookups fail or are empty, but succeeds otherwise.
func (dnsClient *impl) LookupHost(ctx context.Context, hostname string) ([]net.IP, ResolverAddrs, error) {
func (dnsClient *impl) LookupHost(ctx context.Context, hostname string) ([]netip.Addr, ResolverAddrs, error) {
var recordsA, recordsAAAA []dns.RR
var errA, errAAAA error
var resolverA, resolverAAAA string
@ -495,13 +389,16 @@ func (dnsClient *impl) LookupHost(ctx context.Context, hostname string) ([]net.I
return a == ""
})
var addrsA []net.IP
var addrsA []netip.Addr
if errA == nil {
for _, answer := range recordsA {
if answer.Header().Rrtype == dns.TypeA {
a, ok := answer.(*dns.A)
if ok && a.A.To4() != nil && (!isPrivateV4(a.A) || dnsClient.allowRestrictedAddresses) {
addrsA = append(addrsA, a.A)
if ok && a.A.To4() != nil {
netIP, ok := netip.AddrFromSlice(a.A)
if ok && (iana.IsReservedAddr(netIP) == nil || dnsClient.allowRestrictedAddresses) {
addrsA = append(addrsA, netIP)
}
}
}
}
@ -510,13 +407,16 @@ func (dnsClient *impl) LookupHost(ctx context.Context, hostname string) ([]net.I
}
}
var addrsAAAA []net.IP
var addrsAAAA []netip.Addr
if errAAAA == nil {
for _, answer := range recordsAAAA {
if answer.Header().Rrtype == dns.TypeAAAA {
aaaa, ok := answer.(*dns.AAAA)
if ok && aaaa.AAAA.To16() != nil && (!isPrivateV6(aaaa.AAAA) || dnsClient.allowRestrictedAddresses) {
addrsAAAA = append(addrsAAAA, aaaa.AAAA)
if ok && aaaa.AAAA.To16() != nil {
netIP, ok := netip.AddrFromSlice(aaaa.AAAA)
if ok && (iana.IsReservedAddr(netIP) == nil || dnsClient.allowRestrictedAddresses) {
addrsAAAA = append(addrsAAAA, netIP)
}
}
}
}
@ -638,6 +538,7 @@ func logDNSError(
type dohExchanger struct {
clk clock.Clock
hc http.Client
userAgent string
}
// Exchange sends a DoH query to the provided DoH server and returns the response.
@ -655,6 +556,9 @@ func (d *dohExchanger) Exchange(query *dns.Msg, server string) (*dns.Msg, time.D
}
req.Header.Set("Content-Type", "application/dns-message")
req.Header.Set("Accept", "application/dns-message")
if len(d.userAgent) > 0 {
req.Header.Set("User-Agent", d.userAgent)
}
start := d.clk.Now()
resp, err := d.hc.Do(req)
@ -680,20 +584,3 @@ func (d *dohExchanger) Exchange(query *dns.Msg, server string) (*dns.Msg, time.D
return response, d.clk.Since(start), nil
}
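The request framing in `Exchange` follows RFC 8484: a POST of the packed DNS message with `application/dns-message` content negotiation, plus the new optional User-Agent. A stdlib-only sketch of just the header handling (`buildDoHRequest` and the server URL are placeholders, not names from this file):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// buildDoHRequest frames an already-packed DNS message for RFC 8484
// DNS-over-HTTPS, mirroring the header handling in Exchange above.
func buildDoHRequest(packed []byte, userAgent string) (*http.Request, error) {
	req, err := http.NewRequest("POST", "https://203.0.113.1/dns-query", bytes.NewReader(packed))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/dns-message")
	req.Header.Set("Accept", "application/dns-message")
	if userAgent != "" {
		req.Header.Set("User-Agent", userAgent)
	}
	return req, nil
}

func main() {
	req, err := buildDoHRequest([]byte{0x12, 0x34}, "boulder-test/1.0")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Header.Get("Content-Type")) // application/dns-message
	fmt.Println(req.Header.Get("User-Agent"))   // boulder-test/1.0
}
```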
// IsReservedIP reports whether an IP address is part of a reserved range.
//
// TODO(#7311): Once we're fully ready to issue for IP address identifiers, dev
// environments should have a way to bypass this check for their own Private-Use
// IP addresses. Maybe plumb the DNSAllowLoopbackAddresses feature flag through
// to here.
//
// TODO(#8040): Move this and its dependencies into the policy package. As part
// of this, consider changing it to return an error and/or the description of
// the reserved network.
func IsReservedIP(ip net.IP) bool {
if ip.To4() == nil {
return isPrivateV6(ip)
}
return isPrivateV4(ip)
}

@ -2,10 +2,15 @@ package bdns
import (
"context"
"crypto/tls"
"crypto/x509"
"errors"
"fmt"
"io"
"log"
"net"
"net/http"
"net/netip"
"net/url"
"os"
"regexp"
@ -19,7 +24,6 @@ import (
"github.com/miekg/dns"
"github.com/prometheus/client_golang/prometheus"
"github.com/letsencrypt/boulder/features"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/metrics"
"github.com/letsencrypt/boulder/test"
@ -27,7 +31,30 @@ import (
const dnsLoopbackAddr = "127.0.0.1:4053"
func mockDNSQuery(w dns.ResponseWriter, r *dns.Msg) {
func mockDNSQuery(w http.ResponseWriter, httpReq *http.Request) {
if httpReq.Header.Get("Content-Type") != "application/dns-message" {
w.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(w, "client didn't send Content-Type: application/dns-message")
}
if httpReq.Header.Get("Accept") != "application/dns-message" {
w.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(w, "client didn't accept Content-Type: application/dns-message")
}
requestBody, err := io.ReadAll(httpReq.Body)
if err != nil {
w.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(w, "reading body: %s", err)
}
httpReq.Body.Close()
r := new(dns.Msg)
err = r.Unpack(requestBody)
if err != nil {
w.WriteHeader(http.StatusBadRequest)
fmt.Fprintf(w, "unpacking request: %s", err)
}
m := new(dns.Msg)
m.SetReply(r)
m.Compress = false
@ -57,19 +84,19 @@ func mockDNSQuery(w dns.ResponseWriter, r *dns.Msg) {
if q.Name == "v6.letsencrypt.org." {
record := new(dns.AAAA)
record.Hdr = dns.RR_Header{Name: "v6.letsencrypt.org.", Rrtype: dns.TypeAAAA, Class: dns.ClassINET, Ttl: 0}
record.AAAA = net.ParseIP("::1")
record.AAAA = net.ParseIP("2602:80a:6000:abad:cafe::1")
appendAnswer(record)
}
if q.Name == "dualstack.letsencrypt.org." {
record := new(dns.AAAA)
record.Hdr = dns.RR_Header{Name: "dualstack.letsencrypt.org.", Rrtype: dns.TypeAAAA, Class: dns.ClassINET, Ttl: 0}
record.AAAA = net.ParseIP("::1")
record.AAAA = net.ParseIP("2602:80a:6000:abad:cafe::1")
appendAnswer(record)
}
if q.Name == "v4error.letsencrypt.org." {
record := new(dns.AAAA)
record.Hdr = dns.RR_Header{Name: "v4error.letsencrypt.org.", Rrtype: dns.TypeAAAA, Class: dns.ClassINET, Ttl: 0}
record.AAAA = net.ParseIP("::1")
record.AAAA = net.ParseIP("2602:80a:6000:abad:cafe::1")
appendAnswer(record)
}
if q.Name == "v6error.letsencrypt.org." {
@ -85,19 +112,19 @@ func mockDNSQuery(w dns.ResponseWriter, r *dns.Msg) {
if q.Name == "cps.letsencrypt.org." {
record := new(dns.A)
record.Hdr = dns.RR_Header{Name: "cps.letsencrypt.org.", Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 0}
record.A = net.ParseIP("127.0.0.1")
record.A = net.ParseIP("64.112.117.1")
appendAnswer(record)
}
if q.Name == "dualstack.letsencrypt.org." {
record := new(dns.A)
record.Hdr = dns.RR_Header{Name: "dualstack.letsencrypt.org.", Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 0}
record.A = net.ParseIP("127.0.0.1")
record.A = net.ParseIP("64.112.117.1")
appendAnswer(record)
}
if q.Name == "v6error.letsencrypt.org." {
record := new(dns.A)
record.Hdr = dns.RR_Header{Name: "dualstack.letsencrypt.org.", Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 0}
record.A = net.ParseIP("127.0.0.1")
record.A = net.ParseIP("64.112.117.1")
appendAnswer(record)
}
if q.Name == "v4error.letsencrypt.org." {
@ -173,45 +200,37 @@ func mockDNSQuery(w dns.ResponseWriter, r *dns.Msg) {
}
}
err := w.WriteMsg(m)
body, err := m.Pack()
if err != nil {
fmt.Fprintf(os.Stderr, "packing reply: %s\n", err)
}
w.Header().Set("Content-Type", "application/dns-message")
_, err = w.Write(body)
if err != nil {
panic(err) // running tests, so panic is OK
}
}
func serveLoopResolver(stopChan chan bool) {
dns.HandleFunc(".", mockDNSQuery)
tcpServer := &dns.Server{
m := http.NewServeMux()
m.HandleFunc("/dns-query", mockDNSQuery)
httpServer := &http.Server{
Addr: dnsLoopbackAddr,
Net: "tcp",
ReadTimeout: time.Second,
WriteTimeout: time.Second,
}
udpServer := &dns.Server{
Addr: dnsLoopbackAddr,
Net: "udp",
Handler: m,
ReadTimeout: time.Second,
WriteTimeout: time.Second,
}
go func() {
err := tcpServer.ListenAndServe()
if err != nil {
fmt.Println(err)
}
}()
go func() {
err := udpServer.ListenAndServe()
cert := "../test/certs/ipki/localhost/cert.pem"
key := "../test/certs/ipki/localhost/key.pem"
err := httpServer.ListenAndServeTLS(cert, key)
if err != nil {
fmt.Println(err)
}
}()
go func() {
<-stopChan
err := tcpServer.Shutdown()
if err != nil {
log.Fatal(err)
}
err = udpServer.Shutdown()
err := httpServer.Shutdown(context.Background())
if err != nil {
log.Fatal(err)
}
@ -239,7 +258,21 @@ func pollServer() {
}
}
// tlsConfig is used for the TLS config of client instances that talk to the
// DoH server set up in TestMain.
var tlsConfig *tls.Config
func TestMain(m *testing.M) {
root, err := os.ReadFile("../test/certs/ipki/minica.pem")
if err != nil {
log.Fatal(err)
}
pool := x509.NewCertPool()
pool.AppendCertsFromPEM(root)
tlsConfig = &tls.Config{
RootCAs: pool,
}
stop := make(chan bool, 1)
serveLoopResolver(stop)
pollServer()
@ -252,7 +285,7 @@ func TestDNSNoServers(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{})
test.AssertNotError(t, err, "Got error creating StaticProvider")
obj := NewTest(time.Hour, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, blog.UseMock(), nil)
obj := New(time.Hour, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), tlsConfig)
_, resolvers, err := obj.LookupHost(context.Background(), "letsencrypt.org")
test.AssertEquals(t, len(resolvers), 0)
@ -269,7 +302,7 @@ func TestDNSOneServer(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
obj := NewTest(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, blog.UseMock(), nil)
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), tlsConfig)
_, resolvers, err := obj.LookupHost(context.Background(), "cps.letsencrypt.org")
test.AssertEquals(t, len(resolvers), 2)
@ -282,7 +315,7 @@ func TestDNSDuplicateServers(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr, dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
obj := NewTest(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, blog.UseMock(), nil)
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), tlsConfig)
_, resolvers, err := obj.LookupHost(context.Background(), "cps.letsencrypt.org")
test.AssertEquals(t, len(resolvers), 2)
@ -295,7 +328,7 @@ func TestDNSServFail(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
obj := NewTest(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, blog.UseMock(), nil)
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), tlsConfig)
bad := "servfail.com"
_, _, err = obj.LookupTXT(context.Background(), bad)
@ -313,7 +346,7 @@ func TestDNSLookupTXT(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
obj := NewTest(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, blog.UseMock(), nil)
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), tlsConfig)
a, _, err := obj.LookupTXT(context.Background(), "letsencrypt.org")
t.Logf("A: %v", a)
@ -326,11 +359,12 @@ func TestDNSLookupTXT(t *testing.T) {
test.AssertEquals(t, a[0], "abc")
}
// TODO(#8213): Convert this to a table test.
func TestDNSLookupHost(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
obj := NewTest(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, blog.UseMock(), nil)
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), tlsConfig)
ip, resolvers, err := obj.LookupHost(context.Background(), "servfail.com")
t.Logf("servfail.com - IP: %s, Err: %s", ip, err)
@ -373,10 +407,10 @@ func TestDNSLookupHost(t *testing.T) {
t.Logf("dualstack.letsencrypt.org - IP: %s, Err: %s", ip, err)
test.AssertNotError(t, err, "Not an error to exist")
test.Assert(t, len(ip) == 2, "Should have 2 IPs")
expected := net.ParseIP("127.0.0.1")
test.Assert(t, ip[0].To4().Equal(expected), "wrong ipv4 address")
expected = net.ParseIP("::1")
test.Assert(t, ip[1].To16().Equal(expected), "wrong ipv6 address")
expected := netip.MustParseAddr("64.112.117.1")
test.Assert(t, ip[0] == expected, "wrong ipv4 address")
expected = netip.MustParseAddr("2602:80a:6000:abad:cafe::1")
test.Assert(t, ip[1] == expected, "wrong ipv6 address")
slices.Sort(resolvers)
test.AssertDeepEquals(t, resolvers, ResolverAddrs{"A:127.0.0.1:4053", "AAAA:127.0.0.1:4053"})
@ -385,8 +419,8 @@ func TestDNSLookupHost(t *testing.T) {
t.Logf("v6error.letsencrypt.org - IP: %s, Err: %s", ip, err)
test.AssertNotError(t, err, "Not an error to exist")
test.Assert(t, len(ip) == 1, "Should have 1 IP")
expected = net.ParseIP("127.0.0.1")
test.Assert(t, ip[0].To4().Equal(expected), "wrong ipv4 address")
expected = netip.MustParseAddr("64.112.117.1")
test.Assert(t, ip[0] == expected, "wrong ipv4 address")
slices.Sort(resolvers)
test.AssertDeepEquals(t, resolvers, ResolverAddrs{"A:127.0.0.1:4053", "AAAA:127.0.0.1:4053"})
@ -395,8 +429,8 @@ func TestDNSLookupHost(t *testing.T) {
t.Logf("v4error.letsencrypt.org - IP: %s, Err: %s", ip, err)
test.AssertNotError(t, err, "Not an error to exist")
test.Assert(t, len(ip) == 1, "Should have 1 IP")
expected = net.ParseIP("::1")
test.Assert(t, ip[0].To16().Equal(expected), "wrong ipv6 address")
expected = netip.MustParseAddr("2602:80a:6000:abad:cafe::1")
test.Assert(t, ip[0] == expected, "wrong ipv6 address")
slices.Sort(resolvers)
test.AssertDeepEquals(t, resolvers, ResolverAddrs{"A:127.0.0.1:4053", "AAAA:127.0.0.1:4053"})
@ -416,7 +450,7 @@ func TestDNSNXDOMAIN(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
obj := NewTest(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, blog.UseMock(), nil)
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), tlsConfig)
hostname := "nxdomain.letsencrypt.org"
_, _, err = obj.LookupHost(context.Background(), hostname)
@ -432,7 +466,7 @@ func TestDNSLookupCAA(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
obj := NewTest(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, blog.UseMock(), nil)
obj := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 1, "", blog.UseMock(), tlsConfig)
removeIDExp := regexp.MustCompile(" id: [[:digit:]]+")
caas, resp, resolvers, err := obj.LookupCAA(context.Background(), "bracewel.net")
@ -487,37 +521,6 @@ caa.example.com. 0 IN CAA 1 issue "letsencrypt.org"
test.AssertEquals(t, resolvers[0], "127.0.0.1:4053")
}
func TestIsPrivateIP(t *testing.T) {
test.Assert(t, isPrivateV4(net.ParseIP("127.0.0.1")), "should be private")
test.Assert(t, isPrivateV4(net.ParseIP("192.168.254.254")), "should be private")
test.Assert(t, isPrivateV4(net.ParseIP("10.255.0.3")), "should be private")
test.Assert(t, isPrivateV4(net.ParseIP("172.16.255.255")), "should be private")
test.Assert(t, isPrivateV4(net.ParseIP("172.31.255.255")), "should be private")
test.Assert(t, !isPrivateV4(net.ParseIP("128.0.0.1")), "should not be private")
test.Assert(t, !isPrivateV4(net.ParseIP("192.169.255.255")), "should not be private")
test.Assert(t, !isPrivateV4(net.ParseIP("9.255.0.255")), "should not be private")
test.Assert(t, !isPrivateV4(net.ParseIP("172.32.255.255")), "should not be private")
test.Assert(t, isPrivateV6(net.ParseIP("::0")), "should be private")
test.Assert(t, isPrivateV6(net.ParseIP("::1")), "should be private")
test.Assert(t, !isPrivateV6(net.ParseIP("::2")), "should not be private")
test.Assert(t, isPrivateV6(net.ParseIP("fe80::1")), "should be private")
test.Assert(t, isPrivateV6(net.ParseIP("febf::1")), "should be private")
test.Assert(t, !isPrivateV6(net.ParseIP("fec0::1")), "should not be private")
test.Assert(t, !isPrivateV6(net.ParseIP("feff::1")), "should not be private")
test.Assert(t, isPrivateV6(net.ParseIP("ff00::1")), "should be private")
test.Assert(t, isPrivateV6(net.ParseIP("ff10::1")), "should be private")
test.Assert(t, isPrivateV6(net.ParseIP("ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff")), "should be private")
test.Assert(t, isPrivateV6(net.ParseIP("2002::")), "should be private")
test.Assert(t, isPrivateV6(net.ParseIP("2002:ffff:ffff:ffff:ffff:ffff:ffff:ffff")), "should be private")
test.Assert(t, isPrivateV6(net.ParseIP("0100::")), "should be private")
test.Assert(t, isPrivateV6(net.ParseIP("0100::0000:ffff:ffff:ffff:ffff")), "should be private")
test.Assert(t, !isPrivateV6(net.ParseIP("0100::0001:0000:0000:0000:0000")), "should not be private")
}
type testExchanger struct {
sync.Mutex
count int
@ -542,10 +545,9 @@ func (te *testExchanger) Exchange(m *dns.Msg, a string) (*dns.Msg, time.Duration
}
func TestRetry(t *testing.T) {
isTempErr := &net.OpError{Op: "read", Err: tempError(true)}
nonTempErr := &net.OpError{Op: "read", Err: tempError(false)}
isTempErr := &url.Error{Op: "read", Err: tempError(true)}
nonTempErr := &url.Error{Op: "read", Err: tempError(false)}
servFailError := errors.New("DNS problem: server failure at resolver looking up TXT for example.com")
netError := errors.New("DNS problem: networking error looking up TXT for example.com")
type testCase struct {
name string
maxTries int
@ -596,7 +598,7 @@ func TestRetry(t *testing.T) {
isTempErr,
},
},
expected: netError,
expected: servFailError,
expectedCount: 3,
metricsAllRetries: 1,
},
@ -649,7 +651,7 @@ func TestRetry(t *testing.T) {
isTempErr,
},
},
expected: netError,
expected: servFailError,
expectedCount: 3,
metricsAllRetries: 1,
},
@ -663,7 +665,7 @@ func TestRetry(t *testing.T) {
nonTempErr,
},
},
expected: netError,
expected: servFailError,
expectedCount: 2,
},
}
@ -673,7 +675,7 @@ func TestRetry(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
testClient := NewTest(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), tc.maxTries, blog.UseMock(), nil)
testClient := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), tc.maxTries, "", blog.UseMock(), tlsConfig)
dr := testClient.(*impl)
dr.dnsClient = tc.te
_, _, err = dr.LookupTXT(context.Background(), "example.com")
@ -704,7 +706,7 @@ func TestRetry(t *testing.T) {
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
testClient := NewTest(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 3, blog.UseMock(), nil)
testClient := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 3, "", blog.UseMock(), tlsConfig)
dr := testClient.(*impl)
dr.dnsClient = &testExchanger{errs: []error{isTempErr, isTempErr, nil}}
ctx, cancel := context.WithCancel(context.Background())
@ -783,7 +785,7 @@ func (e *rotateFailureExchanger) Exchange(m *dns.Msg, a string) (*dns.Msg, time.
// If it's a broken server, return a retryable error
if e.brokenAddresses[a] {
isTempErr := &net.OpError{Op: "read", Err: tempError(true)}
isTempErr := &url.Error{Op: "read", Err: tempError(true)}
return nil, 2 * time.Millisecond, isTempErr
}
@ -805,10 +807,9 @@ func TestRotateServerOnErr(t *testing.T) {
// working server
staticProvider, err := NewStaticProvider(dnsServers)
test.AssertNotError(t, err, "Got error creating StaticProvider")
fmt.Println(staticProvider.servers)
maxTries := 5
client := NewTest(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), maxTries, blog.UseMock(), nil)
client := New(time.Second*10, staticProvider, metrics.NoopRegisterer, clock.NewFake(), maxTries, "", blog.UseMock(), tlsConfig)
// Configure a mock exchanger that will always return a retryable error for
// servers A and B. This will force server "[2606:4700:4700::1111]:53" to do
@ -872,13 +873,10 @@ func (dohE *dohAlwaysRetryExchanger) Exchange(m *dns.Msg, a string) (*dns.Msg, t
}
func TestDOHMetric(t *testing.T) {
features.Set(features.Config{DOH: true})
defer features.Reset()
staticProvider, err := NewStaticProvider([]string{dnsLoopbackAddr})
test.AssertNotError(t, err, "Got error creating StaticProvider")
testClient := NewTest(time.Second*11, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 0, blog.UseMock(), nil)
testClient := New(time.Second*11, staticProvider, metrics.NoopRegisterer, clock.NewFake(), 0, "", blog.UseMock(), tlsConfig)
resolver := testClient.(*impl)
resolver.dnsClient = &dohAlwaysRetryExchanger{err: &url.Error{Op: "read", Err: tempError(true)}}

@ -5,6 +5,7 @@ import (
"errors"
"fmt"
"net"
"net/netip"
"os"
"github.com/miekg/dns"
@ -67,13 +68,13 @@ func (t timeoutError) Timeout() bool {
}
// LookupHost is a mock
func (mock *MockClient) LookupHost(_ context.Context, hostname string) ([]net.IP, ResolverAddrs, error) {
func (mock *MockClient) LookupHost(_ context.Context, hostname string) ([]netip.Addr, ResolverAddrs, error) {
if hostname == "always.invalid" ||
hostname == "invalid.invalid" {
return []net.IP{}, ResolverAddrs{"MockClient"}, nil
return []netip.Addr{}, ResolverAddrs{"MockClient"}, nil
}
if hostname == "always.timeout" {
return []net.IP{}, ResolverAddrs{"MockClient"}, &Error{dns.TypeA, "always.timeout", makeTimeoutError(), -1, nil}
return []netip.Addr{}, ResolverAddrs{"MockClient"}, &Error{dns.TypeA, "always.timeout", makeTimeoutError(), -1, nil}
}
if hostname == "always.error" {
err := &net.OpError{
@ -86,7 +87,7 @@ func (mock *MockClient) LookupHost(_ context.Context, hostname string) ([]net.IP
m.AuthenticatedData = true
m.SetEdns0(4096, false)
logDNSError(mock.Log, "mock.server", hostname, m, nil, err)
return []net.IP{}, ResolverAddrs{"MockClient"}, &Error{dns.TypeA, hostname, err, -1, nil}
return []netip.Addr{}, ResolverAddrs{"MockClient"}, &Error{dns.TypeA, hostname, err, -1, nil}
}
if hostname == "id.mismatch" {
err := dns.ErrId
@ -100,22 +101,21 @@ func (mock *MockClient) LookupHost(_ context.Context, hostname string) ([]net.IP
record.A = net.ParseIP("127.0.0.1")
r.Answer = append(r.Answer, record)
logDNSError(mock.Log, "mock.server", hostname, m, r, err)
return []net.IP{}, ResolverAddrs{"MockClient"}, &Error{dns.TypeA, hostname, err, -1, nil}
return []netip.Addr{}, ResolverAddrs{"MockClient"}, &Error{dns.TypeA, hostname, err, -1, nil}
}
// dual-homed host with an IPv6 and an IPv4 address
if hostname == "ipv4.and.ipv6.localhost" {
return []net.IP{
net.ParseIP("::1"),
net.ParseIP("127.0.0.1"),
return []netip.Addr{
netip.MustParseAddr("::1"),
netip.MustParseAddr("127.0.0.1"),
}, ResolverAddrs{"MockClient"}, nil
}
if hostname == "ipv6.localhost" {
return []net.IP{
net.ParseIP("::1"),
return []netip.Addr{
netip.MustParseAddr("::1"),
}, ResolverAddrs{"MockClient"}, nil
}
ip := net.ParseIP("127.0.0.1")
return []net.IP{ip}, ResolverAddrs{"MockClient"}, nil
return []netip.Addr{netip.MustParseAddr("127.0.0.1")}, ResolverAddrs{"MockClient"}, nil
}
// LookupCAA returns mock records for use in tests.

@ -6,6 +6,7 @@ import (
"fmt"
"math/rand/v2"
"net"
"net/netip"
"strconv"
"sync"
"time"
@ -61,10 +62,9 @@ func validateServerAddress(address string) error {
}
// Ensure the `host` portion of `address` is a valid FQDN or IP address.
IPv6 := net.ParseIP(host).To16()
IPv4 := net.ParseIP(host).To4()
_, err = netip.ParseAddr(host)
FQDN := dns.IsFqdn(dns.Fqdn(host))
if IPv6 == nil && IPv4 == nil && !FQDN {
if err != nil && !FQDN {
return errors.New("host is not an FQDN or IP address")
}
return nil
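The switch to `netip.ParseAddr` above collapses the old `To4`/`To16` pair into one check that accepts both IPv4 and IPv6 literals. A sketch under that assumption (`hostIsIP` is an illustrative helper, not a name from this file):

```go
package main

import (
	"fmt"
	"net/netip"
)

// hostIsIP reports whether host is an IPv4 or IPv6 literal, mirroring
// the single netip.ParseAddr check above.
func hostIsIP(host string) bool {
	_, err := netip.ParseAddr(host)
	return err == nil
}

func main() {
	fmt.Println(hostIsIP("127.0.0.1"))   // true
	fmt.Println(hostIsIP("2001:db8::1")) // true
	fmt.Println(hostIsIP("example.com")) // false
}
```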

@ -35,6 +35,7 @@ import (
csrlib "github.com/letsencrypt/boulder/csr"
berrors "github.com/letsencrypt/boulder/errors"
"github.com/letsencrypt/boulder/goodkey"
"github.com/letsencrypt/boulder/identifier"
"github.com/letsencrypt/boulder/issuance"
"github.com/letsencrypt/boulder/linter"
blog "github.com/letsencrypt/boulder/log"
@ -60,7 +61,6 @@ type issuanceEvent struct {
Issuer string
OrderID int64
Profile string
ProfileHash string
Requester int64
Result struct {
Precertificate string `json:",omitempty"`
@ -80,19 +80,9 @@ type issuerMaps struct {
type certProfileWithID struct {
// name is a human readable name used to refer to the certificate profile.
name string
// hash is SHA256 sum over every exported field of an issuance.ProfileConfig
// used to generate the embedded *issuance.Profile.
hash [32]byte
profile *issuance.Profile
}
// certProfilesMaps allows looking up the human-readable name of a certificate
// profile to retrieve the actual profile.
type certProfilesMaps struct {
profileByHash map[[32]byte]*certProfileWithID
profileByName map[string]*certProfileWithID
}
// caMetrics holds various metrics which are shared between caImpl, ocspImpl,
// and crlImpl.
type caMetrics struct {
@ -150,7 +140,7 @@ type certificateAuthorityImpl struct {
sctClient rapb.SCTProviderClient
pa core.PolicyAuthority
issuers issuerMaps
certProfiles certProfilesMaps
certProfiles map[string]*certProfileWithID
// The prefix is prepended to the serial number.
prefix byte
@ -190,46 +180,27 @@ func makeIssuerMaps(issuers []*issuance.Issuer) (issuerMaps, error) {
}
// makeCertificateProfilesMap processes a set of named certificate issuance
// profile configs into two pre-computed maps: 1) a human-readable name to the
// profile and 2) a unique hash over contents of the profile to the profile
// itself. It returns the maps or an error if a duplicate name or hash is found.
//
// The unique hash is used in the case of
// - RA instructs CA1 to issue a precertificate
// - CA1 returns the precertificate DER bytes and profile hash to the RA
// - RA instructs CA2 to issue a final certificate, but CA2 does not contain a
// profile corresponding to that hash and an issuance is prevented.
func makeCertificateProfilesMap(profiles map[string]*issuance.ProfileConfig) (certProfilesMaps, error) {
// profile configs into a map from name to profile.
func makeCertificateProfilesMap(profiles map[string]*issuance.ProfileConfig) (map[string]*certProfileWithID, error) {
if len(profiles) <= 0 {
return certProfilesMaps{}, fmt.Errorf("must pass at least one certificate profile")
return nil, fmt.Errorf("must pass at least one certificate profile")
}
profilesByName := make(map[string]*certProfileWithID, len(profiles))
profilesByHash := make(map[[32]byte]*certProfileWithID, len(profiles))
for name, profileConfig := range profiles {
profile, err := issuance.NewProfile(profileConfig)
if err != nil {
return certProfilesMaps{}, err
return nil, err
}
hash := profile.Hash()
withID := certProfileWithID{
profilesByName[name] = &certProfileWithID{
name: name,
hash: hash,
profile: profile,
}
profilesByName[name] = &withID
_, found := profilesByHash[hash]
if found {
return certProfilesMaps{}, fmt.Errorf("duplicate certificate profile hash %d", hash)
}
profilesByHash[hash] = &withID
}
return certProfilesMaps{profilesByHash, profilesByName}, nil
return profilesByName, nil
}
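With the hash-keyed map removed, the simplified logic above reduces to an emptiness check plus one name-keyed map. A hedged sketch of that shape (`profile` and `makeProfilesMap` are illustrative stand-ins for `*issuance.Profile` and the real constructor):

```go
package main

import "fmt"

// profile stands in for *issuance.Profile.
type profile struct{ name string }

// makeProfilesMap sketches the simplified flow: reject empty input, then
// build the single name-to-profile map.
func makeProfilesMap(names []string) (map[string]*profile, error) {
	if len(names) == 0 {
		return nil, fmt.Errorf("must pass at least one certificate profile")
	}
	byName := make(map[string]*profile, len(names))
	for _, name := range names {
		byName[name] = &profile{name: name}
	}
	return byName, nil
}

func main() {
	m, err := makeProfilesMap([]string{"default", "shortlived"})
	if err != nil {
		panic(err)
	}
	fmt.Println(len(m)) // 2
}
```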
// NewCertificateAuthorityImpl creates a CA instance that can sign certificates
@ -300,18 +271,12 @@ var ocspStatusToCode = map[string]int{
// precertificate.
//
// Subsequent final issuance based on this precertificate must happen at most once, and must use the same
// certificate profile. The certificate profile is identified by a hash to ensure an exact match even if
// the configuration for a specific profile _name_ changes.
// certificate profile.
//
// Returns precertificate DER.
//
// [issuance cycle]: https://github.com/letsencrypt/boulder/blob/main/docs/ISSUANCE-CYCLE.md
func (ca *certificateAuthorityImpl) issuePrecertificate(ctx context.Context, certProfile *certProfileWithID, issueReq *capb.IssueCertificateRequest) ([]byte, error) {
// issueReq.orderID may be zero, for ACMEv1 requests.
if core.IsAnyNilOrZero(issueReq, issueReq.Csr, issueReq.RegistrationID) {
return nil, berrors.InternalServerError("Incomplete issue certificate request")
}
serialBigInt, err := ca.generateSerialNumber()
if err != nil {
return nil, err
@ -345,12 +310,16 @@ func (ca *certificateAuthorityImpl) issuePrecertificate(ctx context.Context, cer
}
func (ca *certificateAuthorityImpl) IssueCertificate(ctx context.Context, issueReq *capb.IssueCertificateRequest) (*capb.IssueCertificateResponse, error) {
if core.IsAnyNilOrZero(issueReq, issueReq.Csr, issueReq.RegistrationID, issueReq.OrderID) {
return nil, berrors.InternalServerError("Incomplete issue certificate request")
}
if ca.sctClient == nil {
return nil, errors.New("IssueCertificate called with a nil SCT service")
}
// All issuance requests must come with a profile name, and the RA handles selecting the default.
certProfile, ok := ca.certProfiles.profileByName[issueReq.CertProfileName]
certProfile, ok := ca.certProfiles[issueReq.CertProfileName]
if !ok {
return nil, fmt.Errorf("the CA is incapable of using a profile named %s", issueReq.CertProfileName)
}
@ -398,13 +367,9 @@ func (ca *certificateAuthorityImpl) issueCertificateForPrecertificate(ctx contex
certProfile *certProfileWithID,
precertDER []byte,
sctBytes [][]byte,
regID int64, //nolint: unparam // unparam says "regID` always receives `arbitraryRegID` (`1001`)", which is wrong; that's just what happens in the unittests.
orderID int64, //nolint: unparam // same as above
regID int64,
orderID int64,
) ([]byte, error) {
if core.IsAnyNilOrZero(certProfile, precertDER, sctBytes, regID) {
return nil, berrors.InternalServerError("Incomplete cert for precertificate request")
}
precert, err := x509.ParseCertificate(precertDER)
if err != nil {
return nil, err
@ -449,16 +414,21 @@ func (ca *certificateAuthorityImpl) issueCertificateForPrecertificate(ctx contex
Issuer: issuer.Name(),
OrderID: orderID,
Profile: certProfile.name,
ProfileHash: hex.EncodeToString(certProfile.hash[:]),
Requester: regID,
}
ca.log.AuditObject("Signing cert", logEvent)
var ipStrings []string
for _, ip := range issuanceReq.IPAddresses {
ipStrings = append(ipStrings, ip.String())
}
_, span := ca.tracer.Start(ctx, "signing cert", trace.WithAttributes(
attribute.String("serial", serialHex),
attribute.String("issuer", issuer.Name()),
attribute.String("certProfileName", certProfile.name),
attribute.StringSlice("names", issuanceReq.DNSNames),
attribute.StringSlice("ipAddresses", ipStrings),
))
certDER, err := issuer.Issue(issuanceToken)
if err != nil {
@ -573,15 +543,19 @@ func (ca *certificateAuthorityImpl) issuePrecertificateInner(ctx context.Context
serialHex := core.SerialToString(serialBigInt)
names := csrlib.NamesFromCSR(csr)
dnsNames, ipAddresses, err := identifier.FromCSR(csr).ToValues()
if err != nil {
return nil, nil, err
}
req := &issuance.IssuanceRequest{
PublicKey: issuance.MarshalablePublicKey{PublicKey: csr.PublicKey},
SubjectKeyId: subjectKeyId,
Serial: serialBigInt.Bytes(),
DNSNames: names.SANs,
CommonName: names.CN,
DNSNames: dnsNames,
IPAddresses: ipAddresses,
CommonName: csrlib.CNFromCSR(csr),
IncludeCTPoison: true,
IncludeMustStaple: issuance.ContainsMustStaple(csr.Extensions),
NotBefore: notBefore,
NotAfter: notAfter,
}
@ -616,17 +590,22 @@ func (ca *certificateAuthorityImpl) issuePrecertificateInner(ctx context.Context
IssuanceRequest: req,
Issuer: issuer.Name(),
Profile: certProfile.name,
ProfileHash: hex.EncodeToString(certProfile.hash[:]),
Requester: issueReq.RegistrationID,
OrderID: issueReq.OrderID,
}
ca.log.AuditObject("Signing precert", logEvent)
var ipStrings []string
for _, ip := range csr.IPAddresses {
ipStrings = append(ipStrings, ip.String())
}
_, span := ca.tracer.Start(ctx, "signing precert", trace.WithAttributes(
attribute.String("serial", serialHex),
attribute.String("issuer", issuer.Name()),
attribute.String("certProfileName", certProfile.name),
attribute.StringSlice("names", csr.DNSNames),
attribute.StringSlice("ipAddresses", ipStrings),
))
certDER, err := issuer.Issue(issuanceToken)
if err != nil {
@ -650,7 +629,7 @@ func (ca *certificateAuthorityImpl) issuePrecertificateInner(ctx context.Context
logEvent.CSR = ""
ca.log.AuditObject("Signing precert success", logEvent)
return certDER, &certProfileWithID{certProfile.name, certProfile.hash, nil}, nil
return certDER, &certProfileWithID{certProfile.name, nil}, nil
}
// verifyTBSCertIsDeterministic verifies that x509.CreateCertificate signing


@ -11,6 +11,7 @@ import (
"errors"
"fmt"
"math/big"
mrand "math/rand"
"os"
"strings"
"testing"
@ -32,6 +33,7 @@ import (
berrors "github.com/letsencrypt/boulder/errors"
"github.com/letsencrypt/boulder/features"
"github.com/letsencrypt/boulder/goodkey"
"github.com/letsencrypt/boulder/identifier"
"github.com/letsencrypt/boulder/issuance"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/metrics"
@ -92,8 +94,6 @@ var (
OIDExtensionSCTList = asn1.ObjectIdentifier{1, 3, 6, 1, 4, 1, 11129, 2, 4, 2}
)
const arbitraryRegID int64 = 1001
func mustRead(path string) []byte {
return must.Do(os.ReadFile(path))
}
@ -148,24 +148,24 @@ func setup(t *testing.T) *testCtx {
fc := clock.NewFake()
fc.Add(1 * time.Hour)
pa, err := policy.New(nil, blog.NewMock())
pa, err := policy.New(map[identifier.IdentifierType]bool{"dns": true}, nil, blog.NewMock())
test.AssertNotError(t, err, "Couldn't create PA")
err = pa.LoadHostnamePolicyFile("../test/hostname-policy.yaml")
test.AssertNotError(t, err, "Couldn't set hostname policy")
certProfiles := make(map[string]*issuance.ProfileConfig, 0)
certProfiles["legacy"] = &issuance.ProfileConfig{
AllowMustStaple: true,
IncludeCRLDistributionPoints: true,
MaxValidityPeriod: config.Duration{Duration: time.Hour * 24 * 90},
MaxValidityBackdate: config.Duration{Duration: time.Hour},
IgnoredLints: []string{"w_subject_common_name_included"},
}
certProfiles["modern"] = &issuance.ProfileConfig{
AllowMustStaple: true,
OmitCommonName: true,
OmitKeyEncipherment: true,
OmitClientAuth: true,
OmitSKID: true,
IncludeCRLDistributionPoints: true,
MaxValidityPeriod: config.Duration{Duration: time.Hour * 24 * 6},
MaxValidityBackdate: config.Duration{Duration: time.Hour},
IgnoredLints: []string{"w_ext_subject_key_identifier_missing_sub_cert"},
@ -179,6 +179,7 @@ func setup(t *testing.T) *testCtx {
IssuerURL: fmt.Sprintf("http://not-example.com/i/%s", name),
OCSPURL: "http://not-example.com/o",
CRLURLBase: fmt.Sprintf("http://not-example.com/c/%s/", name),
CRLShards: 10,
Location: issuance.IssuerLoc{
File: fmt.Sprintf("../test/hierarchy/%s.key.pem", name),
CertFile: fmt.Sprintf("../test/hierarchy/%s.cert.pem", name),
@ -315,7 +316,6 @@ func TestIssuePrecertificate(t *testing.T) {
{"IssuePrecertificate", CNandSANCSR, issueCertificateSubTestIssuePrecertificate},
{"ProfileSelectionRSA", CNandSANCSR, issueCertificateSubTestProfileSelectionRSA},
{"ProfileSelectionECDSA", ECDSACSR, issueCertificateSubTestProfileSelectionECDSA},
{"MustStaple", MustStapleCSR, issueCertificateSubTestMustStaple},
{"UnknownExtension", UnsupportedExtensionCSR, issueCertificateSubTestUnknownExtension},
{"CTPoisonExtension", CTPoisonExtensionCSR, issueCertificateSubTestCTPoisonExtension},
{"CTPoisonExtensionEmpty", CTPoisonExtensionEmptyCSR, issueCertificateSubTestCTPoisonExtension},
@ -332,9 +332,9 @@ func TestIssuePrecertificate(t *testing.T) {
t.Parallel()
req, err := x509.ParseCertificateRequest(testCase.csr)
test.AssertNotError(t, err, "Certificate request failed to parse")
issueReq := &capb.IssueCertificateRequest{Csr: testCase.csr, RegistrationID: arbitraryRegID}
issueReq := &capb.IssueCertificateRequest{Csr: testCase.csr, RegistrationID: mrand.Int63(), OrderID: mrand.Int63()}
profile := ca.certProfiles.profileByName["legacy"]
profile := ca.certProfiles["legacy"]
certDER, err := ca.issuePrecertificate(ctx, profile, issueReq)
test.AssertNotError(t, err, "Failed to issue precertificate")
@ -445,8 +445,8 @@ func TestMultipleIssuers(t *testing.T) {
test.AssertNotError(t, err, "Failed to remake CA")
// Test that an RSA CSR gets issuance from an RSA issuer.
profile := ca.certProfiles.profileByName["legacy"]
issuedCertDER, err := ca.issuePrecertificate(ctx, profile, &capb.IssueCertificateRequest{Csr: CNandSANCSR, RegistrationID: arbitraryRegID})
profile := ca.certProfiles["legacy"]
issuedCertDER, err := ca.issuePrecertificate(ctx, profile, &capb.IssueCertificateRequest{Csr: CNandSANCSR, RegistrationID: mrand.Int63(), OrderID: mrand.Int63()})
test.AssertNotError(t, err, "Failed to issue certificate")
cert, err := x509.ParseCertificate(issuedCertDER)
test.AssertNotError(t, err, "Certificate failed to parse")
@ -462,7 +462,7 @@ func TestMultipleIssuers(t *testing.T) {
test.AssertMetricWithLabelsEquals(t, ca.metrics.signatureCount, prometheus.Labels{"purpose": "precertificate", "status": "success"}, 1)
// Test that an ECDSA CSR gets issuance from an ECDSA issuer.
issuedCertDER, err = ca.issuePrecertificate(ctx, profile, &capb.IssueCertificateRequest{Csr: ECDSACSR, RegistrationID: arbitraryRegID, CertProfileName: "legacy"})
issuedCertDER, err = ca.issuePrecertificate(ctx, profile, &capb.IssueCertificateRequest{Csr: ECDSACSR, RegistrationID: mrand.Int63(), OrderID: mrand.Int63(), CertProfileName: "legacy"})
test.AssertNotError(t, err, "Failed to issue certificate")
cert, err = x509.ParseCertificate(issuedCertDER)
test.AssertNotError(t, err, "Certificate failed to parse")
@ -493,6 +493,7 @@ func TestUnpredictableIssuance(t *testing.T) {
IssuerURL: fmt.Sprintf("http://not-example.com/i/%s", name),
OCSPURL: "http://not-example.com/o",
CRLURLBase: fmt.Sprintf("http://not-example.com/c/%s/", name),
CRLShards: 10,
Location: issuance.IssuerLoc{
File: fmt.Sprintf("../test/hierarchy/%s.key.pem", name),
CertFile: fmt.Sprintf("../test/hierarchy/%s.cert.pem", name),
@ -527,10 +528,10 @@ func TestUnpredictableIssuance(t *testing.T) {
// trials, the probability that all 20 issuances come from the same issuer is
// 0.5 ^ 20 = 9.5e-7 ~= 1e-6 = 1 in a million, so we do not consider this test
// to be flaky.
req := &capb.IssueCertificateRequest{Csr: ECDSACSR, RegistrationID: arbitraryRegID, CertProfileName: "legacy"}
req := &capb.IssueCertificateRequest{Csr: ECDSACSR, RegistrationID: mrand.Int63(), OrderID: mrand.Int63()}
seenE2 := false
seenR3 := false
profile := ca.certProfiles.profileByName["legacy"]
profile := ca.certProfiles["legacy"]
for i := 0; i < 20; i++ {
precertDER, err := ca.issuePrecertificate(ctx, profile, req)
test.AssertNotError(t, err, "Failed to issue test certificate")
@ -553,22 +554,11 @@ func TestMakeCertificateProfilesMap(t *testing.T) {
testCtx := setup(t)
test.AssertEquals(t, len(testCtx.certProfiles), 2)
testProfile := issuance.ProfileConfig{
AllowMustStaple: false,
MaxValidityPeriod: config.Duration{Duration: time.Hour * 24 * 90},
MaxValidityBackdate: config.Duration{Duration: time.Hour},
}
type nameToHash struct {
name string
hash [32]byte
}
testCases := []struct {
name string
profileConfigs map[string]*issuance.ProfileConfig
expectedErrSubstr string
expectedProfiles []nameToHash
expectedProfiles []string
}{
{
name: "nil profile map",
@ -580,39 +570,24 @@ func TestMakeCertificateProfilesMap(t *testing.T) {
profileConfigs: map[string]*issuance.ProfileConfig{},
expectedErrSubstr: "at least one certificate profile",
},
{
name: "duplicate hash",
profileConfigs: map[string]*issuance.ProfileConfig{
"default": &testProfile,
"default2": &testProfile,
},
expectedErrSubstr: "duplicate certificate profile hash",
},
{
name: "empty profile config",
profileConfigs: map[string]*issuance.ProfileConfig{
"empty": {},
},
expectedProfiles: []nameToHash{
expectedErrSubstr: "at least one revocation mechanism must be included",
},
{
name: "empty",
hash: [32]byte{0xe4, 0xf6, 0xd, 0xa, 0xa6, 0xd7, 0xf3, 0xd3, 0xb6, 0xa6, 0x49, 0x4b, 0x1c, 0x86, 0x1b, 0x99, 0xf6, 0x49, 0xc6, 0xf9, 0xec, 0x51, 0xab, 0xaf, 0x20, 0x1b, 0x20, 0xf2, 0x97, 0x32, 0x7c, 0x95},
},
name: "minimal profile config",
profileConfigs: map[string]*issuance.ProfileConfig{
"empty": {IncludeCRLDistributionPoints: true},
},
expectedProfiles: []string{"empty"},
},
{
name: "default profiles from setup func",
profileConfigs: testCtx.certProfiles,
expectedProfiles: []nameToHash{
{
name: "legacy",
hash: [32]byte{0xb7, 0xd9, 0x7e, 0xfc, 0x5a, 0xdd, 0xc7, 0xfe, 0xc, 0xea, 0xed, 0x7b, 0x8c, 0xf5, 0x4, 0x57, 0x71, 0x97, 0x42, 0x80, 0xbe, 0x4d, 0x14, 0xa, 0x35, 0x9a, 0x89, 0xc3, 0x7a, 0x57, 0x41, 0xb7},
},
{
name: "modern",
hash: [32]byte{0x2e, 0x82, 0x9b, 0xe4, 0x4d, 0xac, 0xfc, 0x2d, 0x83, 0xbf, 0x62, 0xe5, 0xe1, 0x50, 0xe8, 0xba, 0xd2, 0x66, 0x1a, 0xb3, 0xf2, 0xe7, 0xb5, 0xf2, 0x24, 0x94, 0x1f, 0x83, 0xc6, 0x57, 0xe, 0x58},
},
},
expectedProfiles: []string{"legacy", "modern"},
},
}
@ -629,17 +604,14 @@ func TestMakeCertificateProfilesMap(t *testing.T) {
}
if tc.expectedProfiles != nil {
test.AssertEquals(t, len(profiles.profileByName), len(tc.expectedProfiles))
test.AssertEquals(t, len(profiles), len(tc.expectedProfiles))
}
for _, expected := range tc.expectedProfiles {
cpwid, ok := profiles.profileByName[expected.name]
test.Assert(t, ok, fmt.Sprintf("expected profile %q not found", expected.name))
test.AssertEquals(t, cpwid.hash, expected.hash)
cpwid, ok := profiles[expected]
test.Assert(t, ok, fmt.Sprintf("expected profile %q not found", expected))
cpwid, ok = profiles.profileByHash[expected.hash]
test.Assert(t, ok, fmt.Sprintf("expected profile %q not found", expected.hash))
test.AssertEquals(t, cpwid.name, expected.name)
test.AssertEquals(t, cpwid.name, expected)
}
})
}
@ -712,8 +684,8 @@ func TestInvalidCSRs(t *testing.T) {
t.Run(testCase.name, func(t *testing.T) {
t.Parallel()
serializedCSR := mustRead(testCase.csrPath)
profile := ca.certProfiles.profileByName["legacy"]
issueReq := &capb.IssueCertificateRequest{Csr: serializedCSR, RegistrationID: arbitraryRegID, CertProfileName: "legacy"}
profile := ca.certProfiles["legacy"]
issueReq := &capb.IssueCertificateRequest{Csr: serializedCSR, RegistrationID: mrand.Int63(), OrderID: mrand.Int63(), CertProfileName: "legacy"}
_, err = ca.issuePrecertificate(ctx, profile, issueReq)
test.AssertErrorIs(t, err, testCase.errorType)
@ -750,8 +722,8 @@ func TestRejectValidityTooLong(t *testing.T) {
test.AssertNotError(t, err, "Failed to create CA")
// Test that the CA rejects CSRs that would expire after the intermediate cert
profile := ca.certProfiles.profileByName["legacy"]
_, err = ca.issuePrecertificate(ctx, profile, &capb.IssueCertificateRequest{Csr: CNandSANCSR, RegistrationID: arbitraryRegID, CertProfileName: "legacy"})
profile := ca.certProfiles["legacy"]
_, err = ca.issuePrecertificate(ctx, profile, &capb.IssueCertificateRequest{Csr: CNandSANCSR, RegistrationID: mrand.Int63(), OrderID: mrand.Int63(), CertProfileName: "legacy"})
test.AssertError(t, err, "Cannot issue a certificate that expires after the intermediate certificate")
test.AssertErrorIs(t, err, berrors.InternalServer)
}
@ -770,30 +742,12 @@ func issueCertificateSubTestProfileSelectionECDSA(t *testing.T, i *TestCertifica
test.AssertEquals(t, i.cert.KeyUsage, expectedKeyUsage)
}
func countMustStaple(t *testing.T, cert *x509.Certificate) (count int) {
oidTLSFeature := asn1.ObjectIdentifier{1, 3, 6, 1, 5, 5, 7, 1, 24}
mustStapleFeatureValue := []byte{0x30, 0x03, 0x02, 0x01, 0x05}
for _, ext := range cert.Extensions {
if ext.Id.Equal(oidTLSFeature) {
test.Assert(t, !ext.Critical, "Extension was marked critical")
test.AssertByteEquals(t, ext.Value, mustStapleFeatureValue)
count++
}
}
return count
}
func issueCertificateSubTestMustStaple(t *testing.T, i *TestCertificateIssuance) {
test.AssertMetricWithLabelsEquals(t, i.ca.metrics.signatureCount, prometheus.Labels{"purpose": "precertificate"}, 1)
test.AssertEquals(t, countMustStaple(t, i.cert), 1)
}
func issueCertificateSubTestUnknownExtension(t *testing.T, i *TestCertificateIssuance) {
test.AssertMetricWithLabelsEquals(t, i.ca.metrics.signatureCount, prometheus.Labels{"purpose": "precertificate"}, 1)
// NOTE: The hard-coded value here will have to change over time as Boulder
// adds or removes (unrequested/default) extensions in certificates.
expectedExtensionCount := 9
expectedExtensionCount := 10
test.AssertEquals(t, len(i.cert.Extensions), expectedExtensionCount)
}
@ -843,8 +797,8 @@ func TestIssueCertificateForPrecertificate(t *testing.T) {
testCtx.fc)
test.AssertNotError(t, err, "Failed to create CA")
profile := ca.certProfiles.profileByName["legacy"]
issueReq := capb.IssueCertificateRequest{Csr: CNandSANCSR, RegistrationID: arbitraryRegID, OrderID: 0, CertProfileName: "legacy"}
profile := ca.certProfiles["legacy"]
issueReq := capb.IssueCertificateRequest{Csr: CNandSANCSR, RegistrationID: mrand.Int63(), OrderID: mrand.Int63(), CertProfileName: "legacy"}
precertDER, err := ca.issuePrecertificate(ctx, profile, &issueReq)
test.AssertNotError(t, err, "Failed to issue precert")
parsedPrecert, err := x509.ParseCertificate(precertDER)
@ -868,8 +822,8 @@ func TestIssueCertificateForPrecertificate(t *testing.T) {
profile,
precertDER,
sctBytes,
arbitraryRegID,
0)
mrand.Int63(),
mrand.Int63())
test.AssertNotError(t, err, "Failed to issue cert from precert")
parsedCert, err := x509.ParseCertificate(certDER)
test.AssertNotError(t, err, "Failed to parse cert")
@ -906,13 +860,13 @@ func TestIssueCertificateForPrecertificateWithSpecificCertificateProfile(t *test
test.AssertNotError(t, err, "Failed to create CA")
selectedProfile := "modern"
certProfile, ok := ca.certProfiles.profileByName[selectedProfile]
certProfile, ok := ca.certProfiles[selectedProfile]
test.Assert(t, ok, "Certificate profile was expected to exist")
issueReq := capb.IssueCertificateRequest{
Csr: CNandSANCSR,
RegistrationID: arbitraryRegID,
OrderID: 0,
RegistrationID: mrand.Int63(),
OrderID: mrand.Int63(),
CertProfileName: selectedProfile,
}
precertDER, err := ca.issuePrecertificate(ctx, certProfile, &issueReq)
@ -938,8 +892,8 @@ func TestIssueCertificateForPrecertificateWithSpecificCertificateProfile(t *test
certProfile,
precertDER,
sctBytes,
arbitraryRegID,
0)
mrand.Int63(),
mrand.Int63())
test.AssertNotError(t, err, "Failed to issue cert from precert")
parsedCert, err := x509.ParseCertificate(certDER)
test.AssertNotError(t, err, "Failed to parse cert")
@ -1025,8 +979,8 @@ func TestIssueCertificateForPrecertificateDuplicateSerial(t *testing.T) {
t.Fatal(err)
}
profile := ca.certProfiles.profileByName["legacy"]
issueReq := capb.IssueCertificateRequest{Csr: CNandSANCSR, RegistrationID: arbitraryRegID, OrderID: 0, CertProfileName: "legacy"}
profile := ca.certProfiles["legacy"]
issueReq := capb.IssueCertificateRequest{Csr: CNandSANCSR, RegistrationID: mrand.Int63(), OrderID: mrand.Int63(), CertProfileName: "legacy"}
precertDER, err := ca.issuePrecertificate(ctx, profile, &issueReq)
test.AssertNotError(t, err, "Failed to issue precert")
test.AssertMetricWithLabelsEquals(t, ca.metrics.signatureCount, prometheus.Labels{"purpose": "precertificate", "status": "success"}, 1)
@ -1034,9 +988,8 @@ func TestIssueCertificateForPrecertificateDuplicateSerial(t *testing.T) {
profile,
precertDER,
sctBytes,
arbitraryRegID,
0,
)
mrand.Int63(),
mrand.Int63())
if err == nil {
t.Error("Expected error issuing duplicate serial but got none.")
}
@ -1068,8 +1021,8 @@ func TestIssueCertificateForPrecertificateDuplicateSerial(t *testing.T) {
profile,
precertDER,
sctBytes,
arbitraryRegID,
0)
mrand.Int63(),
mrand.Int63())
if err == nil {
t.Fatal("Expected error issuing duplicate serial but got none.")
}


@ -4,6 +4,7 @@ import (
"context"
"crypto/x509"
"encoding/hex"
mrand "math/rand"
"testing"
"time"
@ -44,10 +45,10 @@ func TestOCSP(t *testing.T) {
test.AssertNotError(t, err, "Failed to create CA")
ocspi := testCtx.ocsp
profile := ca.certProfiles.profileByName["legacy"]
profile := ca.certProfiles["legacy"]
// Issue a certificate from an RSA issuer, request OCSP from the same issuer,
// and make sure it works.
rsaCertDER, err := ca.issuePrecertificate(ctx, profile, &capb.IssueCertificateRequest{Csr: CNandSANCSR, RegistrationID: arbitraryRegID, CertProfileName: "legacy"})
rsaCertDER, err := ca.issuePrecertificate(ctx, profile, &capb.IssueCertificateRequest{Csr: CNandSANCSR, RegistrationID: mrand.Int63(), OrderID: mrand.Int63(), CertProfileName: "legacy"})
test.AssertNotError(t, err, "Failed to issue certificate")
rsaCert, err := x509.ParseCertificate(rsaCertDER)
test.AssertNotError(t, err, "Failed to parse rsaCert")
@ -70,7 +71,7 @@ func TestOCSP(t *testing.T) {
// Issue a certificate from an ECDSA issuer, request OCSP from the same issuer,
// and make sure it works.
ecdsaCertDER, err := ca.issuePrecertificate(ctx, profile, &capb.IssueCertificateRequest{Csr: ECDSACSR, RegistrationID: arbitraryRegID, CertProfileName: "legacy"})
ecdsaCertDER, err := ca.issuePrecertificate(ctx, profile, &capb.IssueCertificateRequest{Csr: ECDSACSR, RegistrationID: mrand.Int63(), OrderID: mrand.Int63(), CertProfileName: "legacy"})
test.AssertNotError(t, err, "Failed to issue certificate")
ecdsaCert, err := x509.ParseCertificate(ecdsaCertDER)
test.AssertNotError(t, err, "Failed to parse ecdsaCert")


@ -1,6 +1,6 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.34.1
// protoc-gen-go v1.36.5
// protoc v3.20.1
// source: ca.proto
@ -13,6 +13,7 @@ import (
timestamppb "google.golang.org/protobuf/types/known/timestamppb"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
@ -23,10 +24,7 @@ const (
)
type IssueCertificateRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
// Next unused field number: 6
Csr []byte `protobuf:"bytes,1,opt,name=csr,proto3" json:"csr,omitempty"`
RegistrationID int64 `protobuf:"varint,2,opt,name=registrationID,proto3" json:"registrationID,omitempty"`
@ -36,16 +34,16 @@ type IssueCertificateRequest struct {
// assigned inside the CA during *Profile construction if no name is provided.
// The value of this field should not be relied upon inside the RA.
CertProfileName string `protobuf:"bytes,5,opt,name=certProfileName,proto3" json:"certProfileName,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *IssueCertificateRequest) Reset() {
*x = IssueCertificateRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_ca_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *IssueCertificateRequest) String() string {
return protoimpl.X.MessageStringOf(x)
@ -55,7 +53,7 @@ func (*IssueCertificateRequest) ProtoMessage() {}
func (x *IssueCertificateRequest) ProtoReflect() protoreflect.Message {
mi := &file_ca_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -99,21 +97,18 @@ func (x *IssueCertificateRequest) GetCertProfileName() string {
}
type IssueCertificateResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
DER []byte `protobuf:"bytes,1,opt,name=DER,proto3" json:"DER,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *IssueCertificateResponse) Reset() {
*x = IssueCertificateResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_ca_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *IssueCertificateResponse) String() string {
return protoimpl.X.MessageStringOf(x)
@ -123,7 +118,7 @@ func (*IssueCertificateResponse) ProtoMessage() {}
func (x *IssueCertificateResponse) ProtoReflect() protoreflect.Message {
mi := &file_ca_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -145,111 +140,25 @@ func (x *IssueCertificateResponse) GetDER() []byte {
return nil
}
type IssueCertificateForPrecertificateRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// Next unused field number: 6
DER []byte `protobuf:"bytes,1,opt,name=DER,proto3" json:"DER,omitempty"`
SCTs [][]byte `protobuf:"bytes,2,rep,name=SCTs,proto3" json:"SCTs,omitempty"`
RegistrationID int64 `protobuf:"varint,3,opt,name=registrationID,proto3" json:"registrationID,omitempty"`
OrderID int64 `protobuf:"varint,4,opt,name=orderID,proto3" json:"orderID,omitempty"`
// certProfileHash is a hash over the exported fields of a certificate profile
// to ensure that the profile remains unchanged after multiple roundtrips
// through the RA and CA.
CertProfileHash []byte `protobuf:"bytes,5,opt,name=certProfileHash,proto3" json:"certProfileHash,omitempty"`
}
func (x *IssueCertificateForPrecertificateRequest) Reset() {
*x = IssueCertificateForPrecertificateRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_ca_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *IssueCertificateForPrecertificateRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*IssueCertificateForPrecertificateRequest) ProtoMessage() {}
func (x *IssueCertificateForPrecertificateRequest) ProtoReflect() protoreflect.Message {
mi := &file_ca_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use IssueCertificateForPrecertificateRequest.ProtoReflect.Descriptor instead.
func (*IssueCertificateForPrecertificateRequest) Descriptor() ([]byte, []int) {
return file_ca_proto_rawDescGZIP(), []int{2}
}
func (x *IssueCertificateForPrecertificateRequest) GetDER() []byte {
if x != nil {
return x.DER
}
return nil
}
func (x *IssueCertificateForPrecertificateRequest) GetSCTs() [][]byte {
if x != nil {
return x.SCTs
}
return nil
}
func (x *IssueCertificateForPrecertificateRequest) GetRegistrationID() int64 {
if x != nil {
return x.RegistrationID
}
return 0
}
func (x *IssueCertificateForPrecertificateRequest) GetOrderID() int64 {
if x != nil {
return x.OrderID
}
return 0
}
func (x *IssueCertificateForPrecertificateRequest) GetCertProfileHash() []byte {
if x != nil {
return x.CertProfileHash
}
return nil
}
// Exactly one of certDER or [serial and issuerID] must be set.
type GenerateOCSPRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
// Next unused field number: 8
Status string `protobuf:"bytes,2,opt,name=status,proto3" json:"status,omitempty"`
Reason int32 `protobuf:"varint,3,opt,name=reason,proto3" json:"reason,omitempty"`
RevokedAt *timestamppb.Timestamp `protobuf:"bytes,7,opt,name=revokedAt,proto3" json:"revokedAt,omitempty"`
Serial string `protobuf:"bytes,5,opt,name=serial,proto3" json:"serial,omitempty"`
IssuerID int64 `protobuf:"varint,6,opt,name=issuerID,proto3" json:"issuerID,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *GenerateOCSPRequest) Reset() {
*x = GenerateOCSPRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_ca_proto_msgTypes[3]
mi := &file_ca_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *GenerateOCSPRequest) String() string {
return protoimpl.X.MessageStringOf(x)
@ -258,8 +167,8 @@ func (x *GenerateOCSPRequest) String() string {
func (*GenerateOCSPRequest) ProtoMessage() {}
func (x *GenerateOCSPRequest) ProtoReflect() protoreflect.Message {
mi := &file_ca_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
mi := &file_ca_proto_msgTypes[2]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -271,7 +180,7 @@ func (x *GenerateOCSPRequest) ProtoReflect() protoreflect.Message {
// Deprecated: Use GenerateOCSPRequest.ProtoReflect.Descriptor instead.
func (*GenerateOCSPRequest) Descriptor() ([]byte, []int) {
return file_ca_proto_rawDescGZIP(), []int{3}
return file_ca_proto_rawDescGZIP(), []int{2}
}
func (x *GenerateOCSPRequest) GetStatus() string {
@ -310,21 +219,18 @@ func (x *GenerateOCSPRequest) GetIssuerID() int64 {
}
type OCSPResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
Response []byte `protobuf:"bytes,1,opt,name=response,proto3" json:"response,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *OCSPResponse) Reset() {
*x = OCSPResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_ca_proto_msgTypes[4]
mi := &file_ca_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *OCSPResponse) String() string {
return protoimpl.X.MessageStringOf(x)
@ -333,8 +239,8 @@ func (x *OCSPResponse) String() string {
func (*OCSPResponse) ProtoMessage() {}
func (x *OCSPResponse) ProtoReflect() protoreflect.Message {
mi := &file_ca_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
mi := &file_ca_proto_msgTypes[3]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -346,7 +252,7 @@ func (x *OCSPResponse) ProtoReflect() protoreflect.Message {
// Deprecated: Use OCSPResponse.ProtoReflect.Descriptor instead.
func (*OCSPResponse) Descriptor() ([]byte, []int) {
return file_ca_proto_rawDescGZIP(), []int{4}
return file_ca_proto_rawDescGZIP(), []int{3}
}
func (x *OCSPResponse) GetResponse() []byte {
@ -357,25 +263,22 @@ func (x *OCSPResponse) GetResponse() []byte {
}
type GenerateCRLRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// Types that are assignable to Payload:
state protoimpl.MessageState `protogen:"open.v1"`
// Types that are valid to be assigned to Payload:
//
// *GenerateCRLRequest_Metadata
// *GenerateCRLRequest_Entry
Payload isGenerateCRLRequest_Payload `protobuf_oneof:"payload"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *GenerateCRLRequest) Reset() {
*x = GenerateCRLRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_ca_proto_msgTypes[5]
mi := &file_ca_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *GenerateCRLRequest) String() string {
return protoimpl.X.MessageStringOf(x)
@ -384,8 +287,8 @@ func (x *GenerateCRLRequest) String() string {
func (*GenerateCRLRequest) ProtoMessage() {}
func (x *GenerateCRLRequest) ProtoReflect() protoreflect.Message {
mi := &file_ca_proto_msgTypes[5]
if protoimpl.UnsafeEnabled && x != nil {
mi := &file_ca_proto_msgTypes[4]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -397,27 +300,31 @@ func (x *GenerateCRLRequest) ProtoReflect() protoreflect.Message {
// Deprecated: Use GenerateCRLRequest.ProtoReflect.Descriptor instead.
func (*GenerateCRLRequest) Descriptor() ([]byte, []int) {
return file_ca_proto_rawDescGZIP(), []int{5}
return file_ca_proto_rawDescGZIP(), []int{4}
}
func (m *GenerateCRLRequest) GetPayload() isGenerateCRLRequest_Payload {
if m != nil {
return m.Payload
func (x *GenerateCRLRequest) GetPayload() isGenerateCRLRequest_Payload {
if x != nil {
return x.Payload
}
return nil
}
func (x *GenerateCRLRequest) GetMetadata() *CRLMetadata {
if x, ok := x.GetPayload().(*GenerateCRLRequest_Metadata); ok {
if x != nil {
if x, ok := x.Payload.(*GenerateCRLRequest_Metadata); ok {
return x.Metadata
}
}
return nil
}
func (x *GenerateCRLRequest) GetEntry() *proto.CRLEntry {
if x, ok := x.GetPayload().(*GenerateCRLRequest_Entry); ok {
if x != nil {
if x, ok := x.Payload.(*GenerateCRLRequest_Entry); ok {
return x.Entry
}
}
return nil
}
@ -438,24 +345,21 @@ func (*GenerateCRLRequest_Metadata) isGenerateCRLRequest_Payload() {}
func (*GenerateCRLRequest_Entry) isGenerateCRLRequest_Payload() {}
type CRLMetadata struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
// Next unused field number: 5
IssuerNameID int64 `protobuf:"varint,1,opt,name=issuerNameID,proto3" json:"issuerNameID,omitempty"`
ThisUpdate *timestamppb.Timestamp `protobuf:"bytes,4,opt,name=thisUpdate,proto3" json:"thisUpdate,omitempty"`
ShardIdx int64 `protobuf:"varint,3,opt,name=shardIdx,proto3" json:"shardIdx,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *CRLMetadata) Reset() {
*x = CRLMetadata{}
if protoimpl.UnsafeEnabled {
mi := &file_ca_proto_msgTypes[6]
mi := &file_ca_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *CRLMetadata) String() string {
return protoimpl.X.MessageStringOf(x)
@ -464,8 +368,8 @@ func (x *CRLMetadata) String() string {
func (*CRLMetadata) ProtoMessage() {}
func (x *CRLMetadata) ProtoReflect() protoreflect.Message {
mi := &file_ca_proto_msgTypes[6]
if protoimpl.UnsafeEnabled && x != nil {
mi := &file_ca_proto_msgTypes[5]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -477,7 +381,7 @@ func (x *CRLMetadata) ProtoReflect() protoreflect.Message {
// Deprecated: Use CRLMetadata.ProtoReflect.Descriptor instead.
func (*CRLMetadata) Descriptor() ([]byte, []int) {
return file_ca_proto_rawDescGZIP(), []int{6}
return file_ca_proto_rawDescGZIP(), []int{5}
}
func (x *CRLMetadata) GetIssuerNameID() int64 {
@ -502,21 +406,18 @@ func (x *CRLMetadata) GetShardIdx() int64 {
}
type GenerateCRLResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
Chunk []byte `protobuf:"bytes,1,opt,name=chunk,proto3" json:"chunk,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *GenerateCRLResponse) Reset() {
*x = GenerateCRLResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_ca_proto_msgTypes[7]
mi := &file_ca_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *GenerateCRLResponse) String() string {
return protoimpl.X.MessageStringOf(x)
@ -525,8 +426,8 @@ func (x *GenerateCRLResponse) String() string {
func (*GenerateCRLResponse) ProtoMessage() {}
func (x *GenerateCRLResponse) ProtoReflect() protoreflect.Message {
mi := &file_ca_proto_msgTypes[7]
if protoimpl.UnsafeEnabled && x != nil {
mi := &file_ca_proto_msgTypes[6]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -538,7 +439,7 @@ func (x *GenerateCRLResponse) ProtoReflect() protoreflect.Message {
// Deprecated: Use GenerateCRLResponse.ProtoReflect.Descriptor instead.
func (*GenerateCRLResponse) Descriptor() ([]byte, []int) {
return file_ca_proto_rawDescGZIP(), []int{7}
return file_ca_proto_rawDescGZIP(), []int{6}
}
func (x *GenerateCRLResponse) GetChunk() []byte {
@@ -550,7 +451,7 @@ func (x *GenerateCRLResponse) GetChunk() []byte {
var File_ca_proto protoreflect.FileDescriptor
var file_ca_proto_rawDesc = []byte{
var file_ca_proto_rawDesc = string([]byte{
0x0a, 0x08, 0x63, 0x61, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x02, 0x63, 0x61, 0x1a, 0x15,
0x63, 0x6f, 0x72, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x63, 0x6f, 0x72, 0x65, 0x2e,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72,
@@ -568,111 +469,98 @@ var file_ca_proto_rawDesc = []byte{
0x4a, 0x04, 0x08, 0x04, 0x10, 0x05, 0x22, 0x2c, 0x0a, 0x18, 0x49, 0x73, 0x73, 0x75, 0x65, 0x43,
0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
0x73, 0x65, 0x12, 0x10, 0x0a, 0x03, 0x44, 0x45, 0x52, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52,
0x03, 0x44, 0x45, 0x52, 0x22, 0xbc, 0x01, 0x0a, 0x28, 0x49, 0x73, 0x73, 0x75, 0x65, 0x43, 0x65,
0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x46, 0x6f, 0x72, 0x50, 0x72, 0x65, 0x63,
0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73,
0x74, 0x12, 0x10, 0x0a, 0x03, 0x44, 0x45, 0x52, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x03,
0x44, 0x45, 0x52, 0x12, 0x12, 0x0a, 0x04, 0x53, 0x43, 0x54, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28,
0x0c, 0x52, 0x04, 0x53, 0x43, 0x54, 0x73, 0x12, 0x26, 0x0a, 0x0e, 0x72, 0x65, 0x67, 0x69, 0x73,
0x74, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x18, 0x03, 0x20, 0x01, 0x28, 0x03, 0x52,
0x0e, 0x72, 0x65, 0x67, 0x69, 0x73, 0x74, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x44, 0x12,
0x18, 0x0a, 0x07, 0x6f, 0x72, 0x64, 0x65, 0x72, 0x49, 0x44, 0x18, 0x04, 0x20, 0x01, 0x28, 0x03,
0x52, 0x07, 0x6f, 0x72, 0x64, 0x65, 0x72, 0x49, 0x44, 0x12, 0x28, 0x0a, 0x0f, 0x63, 0x65, 0x72,
0x74, 0x50, 0x72, 0x6f, 0x66, 0x69, 0x6c, 0x65, 0x48, 0x61, 0x73, 0x68, 0x18, 0x05, 0x20, 0x01,
0x28, 0x0c, 0x52, 0x0f, 0x63, 0x65, 0x72, 0x74, 0x50, 0x72, 0x6f, 0x66, 0x69, 0x6c, 0x65, 0x48,
0x61, 0x73, 0x68, 0x22, 0xb9, 0x01, 0x0a, 0x13, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65,
0x4f, 0x43, 0x53, 0x50, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x16, 0x0a, 0x06, 0x73,
0x74, 0x61, 0x74, 0x75, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x73, 0x74, 0x61,
0x74, 0x75, 0x73, 0x12, 0x16, 0x0a, 0x06, 0x72, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x18, 0x03, 0x20,
0x01, 0x28, 0x05, 0x52, 0x06, 0x72, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x12, 0x38, 0x0a, 0x09, 0x72,
0x65, 0x76, 0x6f, 0x6b, 0x65, 0x64, 0x41, 0x74, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a,
0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66,
0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x09, 0x72, 0x65, 0x76, 0x6f,
0x6b, 0x65, 0x64, 0x41, 0x74, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x18,
0x05, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x12, 0x1a, 0x0a,
0x08, 0x69, 0x73, 0x73, 0x75, 0x65, 0x72, 0x49, 0x44, 0x18, 0x06, 0x20, 0x01, 0x28, 0x03, 0x52,
0x08, 0x69, 0x73, 0x73, 0x75, 0x65, 0x72, 0x49, 0x44, 0x4a, 0x04, 0x08, 0x04, 0x10, 0x05, 0x22,
0x2a, 0x0a, 0x0c, 0x4f, 0x43, 0x53, 0x50, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12,
0x1a, 0x0a, 0x08, 0x72, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28,
0x0c, 0x52, 0x08, 0x72, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x76, 0x0a, 0x12, 0x47,
0x03, 0x44, 0x45, 0x52, 0x22, 0xb9, 0x01, 0x0a, 0x13, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74,
0x65, 0x4f, 0x43, 0x53, 0x50, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x16, 0x0a, 0x06,
0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x73, 0x74,
0x61, 0x74, 0x75, 0x73, 0x12, 0x16, 0x0a, 0x06, 0x72, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x18, 0x03,
0x20, 0x01, 0x28, 0x05, 0x52, 0x06, 0x72, 0x65, 0x61, 0x73, 0x6f, 0x6e, 0x12, 0x38, 0x0a, 0x09,
0x72, 0x65, 0x76, 0x6f, 0x6b, 0x65, 0x64, 0x41, 0x74, 0x18, 0x07, 0x20, 0x01, 0x28, 0x0b, 0x32,
0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75,
0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x09, 0x72, 0x65, 0x76,
0x6f, 0x6b, 0x65, 0x64, 0x41, 0x74, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c,
0x18, 0x05, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x12, 0x1a,
0x0a, 0x08, 0x69, 0x73, 0x73, 0x75, 0x65, 0x72, 0x49, 0x44, 0x18, 0x06, 0x20, 0x01, 0x28, 0x03,
0x52, 0x08, 0x69, 0x73, 0x73, 0x75, 0x65, 0x72, 0x49, 0x44, 0x4a, 0x04, 0x08, 0x04, 0x10, 0x05,
0x22, 0x2a, 0x0a, 0x0c, 0x4f, 0x43, 0x53, 0x50, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
0x12, 0x1a, 0x0a, 0x08, 0x72, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x18, 0x01, 0x20, 0x01,
0x28, 0x0c, 0x52, 0x08, 0x72, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x76, 0x0a, 0x12,
0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x43, 0x52, 0x4c, 0x52, 0x65, 0x71, 0x75, 0x65,
0x73, 0x74, 0x12, 0x2d, 0x0a, 0x08, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x18, 0x01,
0x20, 0x01, 0x28, 0x0b, 0x32, 0x0f, 0x2e, 0x63, 0x61, 0x2e, 0x43, 0x52, 0x4c, 0x4d, 0x65, 0x74,
0x61, 0x64, 0x61, 0x74, 0x61, 0x48, 0x00, 0x52, 0x08, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74,
0x61, 0x12, 0x26, 0x0a, 0x05, 0x65, 0x6e, 0x74, 0x72, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b,
0x32, 0x0e, 0x2e, 0x63, 0x6f, 0x72, 0x65, 0x2e, 0x43, 0x52, 0x4c, 0x45, 0x6e, 0x74, 0x72, 0x79,
0x48, 0x00, 0x52, 0x05, 0x65, 0x6e, 0x74, 0x72, 0x79, 0x42, 0x09, 0x0a, 0x07, 0x70, 0x61, 0x79,
0x6c, 0x6f, 0x61, 0x64, 0x22, 0x8f, 0x01, 0x0a, 0x0b, 0x43, 0x52, 0x4c, 0x4d, 0x65, 0x74, 0x61,
0x64, 0x61, 0x74, 0x61, 0x12, 0x22, 0x0a, 0x0c, 0x69, 0x73, 0x73, 0x75, 0x65, 0x72, 0x4e, 0x61,
0x6d, 0x65, 0x49, 0x44, 0x18, 0x01, 0x20, 0x01, 0x28, 0x03, 0x52, 0x0c, 0x69, 0x73, 0x73, 0x75,
0x65, 0x72, 0x4e, 0x61, 0x6d, 0x65, 0x49, 0x44, 0x12, 0x3a, 0x0a, 0x0a, 0x74, 0x68, 0x69, 0x73,
0x55, 0x70, 0x64, 0x61, 0x74, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67,
0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54,
0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x0a, 0x74, 0x68, 0x69, 0x73, 0x55, 0x70,
0x64, 0x61, 0x74, 0x65, 0x12, 0x1a, 0x0a, 0x08, 0x73, 0x68, 0x61, 0x72, 0x64, 0x49, 0x64, 0x78,
0x18, 0x03, 0x20, 0x01, 0x28, 0x03, 0x52, 0x08, 0x73, 0x68, 0x61, 0x72, 0x64, 0x49, 0x64, 0x78,
0x4a, 0x04, 0x08, 0x02, 0x10, 0x03, 0x22, 0x2b, 0x0a, 0x13, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61,
0x74, 0x65, 0x43, 0x52, 0x4c, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x14, 0x0a,
0x05, 0x63, 0x68, 0x75, 0x6e, 0x6b, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x05, 0x63, 0x68,
0x75, 0x6e, 0x6b, 0x32, 0x67, 0x0a, 0x14, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61,
0x74, 0x65, 0x41, 0x75, 0x74, 0x68, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x12, 0x4f, 0x0a, 0x10, 0x49,
0x73, 0x73, 0x75, 0x65, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x12,
0x1b, 0x2e, 0x63, 0x61, 0x2e, 0x49, 0x73, 0x73, 0x75, 0x65, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66,
0x69, 0x63, 0x61, 0x74, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x1c, 0x2e, 0x63,
0x61, 0x2e, 0x49, 0x73, 0x73, 0x75, 0x65, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61,
0x74, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x32, 0x4c, 0x0a, 0x0d,
0x4f, 0x43, 0x53, 0x50, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x6f, 0x72, 0x12, 0x3b, 0x0a,
0x0c, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x4f, 0x43, 0x53, 0x50, 0x12, 0x17, 0x2e,
0x63, 0x61, 0x2e, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x4f, 0x43, 0x53, 0x50, 0x52,
0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x10, 0x2e, 0x63, 0x61, 0x2e, 0x4f, 0x43, 0x53, 0x50,
0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x32, 0x54, 0x0a, 0x0c, 0x43, 0x52,
0x4c, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x6f, 0x72, 0x12, 0x44, 0x0a, 0x0b, 0x47, 0x65,
0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x43, 0x52, 0x4c, 0x12, 0x16, 0x2e, 0x63, 0x61, 0x2e, 0x47,
0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x43, 0x52, 0x4c, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73,
0x74, 0x12, 0x2d, 0x0a, 0x08, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0x18, 0x01, 0x20,
0x01, 0x28, 0x0b, 0x32, 0x0f, 0x2e, 0x63, 0x61, 0x2e, 0x43, 0x52, 0x4c, 0x4d, 0x65, 0x74, 0x61,
0x64, 0x61, 0x74, 0x61, 0x48, 0x00, 0x52, 0x08, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61,
0x12, 0x26, 0x0a, 0x05, 0x65, 0x6e, 0x74, 0x72, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32,
0x0e, 0x2e, 0x63, 0x6f, 0x72, 0x65, 0x2e, 0x43, 0x52, 0x4c, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x48,
0x00, 0x52, 0x05, 0x65, 0x6e, 0x74, 0x72, 0x79, 0x42, 0x09, 0x0a, 0x07, 0x70, 0x61, 0x79, 0x6c,
0x6f, 0x61, 0x64, 0x22, 0x8f, 0x01, 0x0a, 0x0b, 0x43, 0x52, 0x4c, 0x4d, 0x65, 0x74, 0x61, 0x64,
0x61, 0x74, 0x61, 0x12, 0x22, 0x0a, 0x0c, 0x69, 0x73, 0x73, 0x75, 0x65, 0x72, 0x4e, 0x61, 0x6d,
0x65, 0x49, 0x44, 0x18, 0x01, 0x20, 0x01, 0x28, 0x03, 0x52, 0x0c, 0x69, 0x73, 0x73, 0x75, 0x65,
0x72, 0x4e, 0x61, 0x6d, 0x65, 0x49, 0x44, 0x12, 0x3a, 0x0a, 0x0a, 0x74, 0x68, 0x69, 0x73, 0x55,
0x70, 0x64, 0x61, 0x74, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f,
0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69,
0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x0a, 0x74, 0x68, 0x69, 0x73, 0x55, 0x70, 0x64,
0x61, 0x74, 0x65, 0x12, 0x1a, 0x0a, 0x08, 0x73, 0x68, 0x61, 0x72, 0x64, 0x49, 0x64, 0x78, 0x18,
0x03, 0x20, 0x01, 0x28, 0x03, 0x52, 0x08, 0x73, 0x68, 0x61, 0x72, 0x64, 0x49, 0x64, 0x78, 0x4a,
0x04, 0x08, 0x02, 0x10, 0x03, 0x22, 0x2b, 0x0a, 0x13, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74,
0x65, 0x43, 0x52, 0x4c, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x14, 0x0a, 0x05,
0x63, 0x68, 0x75, 0x6e, 0x6b, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x05, 0x63, 0x68, 0x75,
0x6e, 0x6b, 0x32, 0x67, 0x0a, 0x14, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74,
0x65, 0x41, 0x75, 0x74, 0x68, 0x6f, 0x72, 0x69, 0x74, 0x79, 0x12, 0x4f, 0x0a, 0x10, 0x49, 0x73,
0x73, 0x75, 0x65, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74, 0x65, 0x12, 0x1b,
0x2e, 0x63, 0x61, 0x2e, 0x49, 0x73, 0x73, 0x75, 0x65, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69,
0x63, 0x61, 0x74, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x1c, 0x2e, 0x63, 0x61,
0x2e, 0x49, 0x73, 0x73, 0x75, 0x65, 0x43, 0x65, 0x72, 0x74, 0x69, 0x66, 0x69, 0x63, 0x61, 0x74,
0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x32, 0x4c, 0x0a, 0x0d, 0x4f,
0x43, 0x53, 0x50, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x6f, 0x72, 0x12, 0x3b, 0x0a, 0x0c,
0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x4f, 0x43, 0x53, 0x50, 0x12, 0x17, 0x2e, 0x63,
0x61, 0x2e, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x4f, 0x43, 0x53, 0x50, 0x52, 0x65,
0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x10, 0x2e, 0x63, 0x61, 0x2e, 0x4f, 0x43, 0x53, 0x50, 0x52,
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x32, 0x54, 0x0a, 0x0c, 0x43, 0x52, 0x4c,
0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x6f, 0x72, 0x12, 0x44, 0x0a, 0x0b, 0x47, 0x65, 0x6e,
0x65, 0x72, 0x61, 0x74, 0x65, 0x43, 0x52, 0x4c, 0x12, 0x16, 0x2e, 0x63, 0x61, 0x2e, 0x47, 0x65,
0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x43, 0x52, 0x4c, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
0x1a, 0x17, 0x2e, 0x63, 0x61, 0x2e, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x43, 0x52,
0x4c, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x28, 0x01, 0x30, 0x01, 0x42,
0x29, 0x5a, 0x27, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x6c, 0x65,
0x74, 0x73, 0x65, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x2f, 0x62, 0x6f, 0x75, 0x6c, 0x64, 0x65,
0x72, 0x2f, 0x63, 0x61, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x33,
}
0x74, 0x1a, 0x17, 0x2e, 0x63, 0x61, 0x2e, 0x47, 0x65, 0x6e, 0x65, 0x72, 0x61, 0x74, 0x65, 0x43,
0x52, 0x4c, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x00, 0x28, 0x01, 0x30, 0x01,
0x42, 0x29, 0x5a, 0x27, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x6c,
0x65, 0x74, 0x73, 0x65, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x2f, 0x62, 0x6f, 0x75, 0x6c, 0x64,
0x65, 0x72, 0x2f, 0x63, 0x61, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f,
0x74, 0x6f, 0x33,
})
var (
file_ca_proto_rawDescOnce sync.Once
file_ca_proto_rawDescData = file_ca_proto_rawDesc
file_ca_proto_rawDescData []byte
)
func file_ca_proto_rawDescGZIP() []byte {
file_ca_proto_rawDescOnce.Do(func() {
file_ca_proto_rawDescData = protoimpl.X.CompressGZIP(file_ca_proto_rawDescData)
file_ca_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_ca_proto_rawDesc), len(file_ca_proto_rawDesc)))
})
return file_ca_proto_rawDescData
}
var file_ca_proto_msgTypes = make([]protoimpl.MessageInfo, 8)
var file_ca_proto_goTypes = []interface{}{
var file_ca_proto_msgTypes = make([]protoimpl.MessageInfo, 7)
var file_ca_proto_goTypes = []any{
(*IssueCertificateRequest)(nil), // 0: ca.IssueCertificateRequest
(*IssueCertificateResponse)(nil), // 1: ca.IssueCertificateResponse
(*IssueCertificateForPrecertificateRequest)(nil), // 2: ca.IssueCertificateForPrecertificateRequest
(*GenerateOCSPRequest)(nil), // 3: ca.GenerateOCSPRequest
(*OCSPResponse)(nil), // 4: ca.OCSPResponse
(*GenerateCRLRequest)(nil), // 5: ca.GenerateCRLRequest
(*CRLMetadata)(nil), // 6: ca.CRLMetadata
(*GenerateCRLResponse)(nil), // 7: ca.GenerateCRLResponse
(*timestamppb.Timestamp)(nil), // 8: google.protobuf.Timestamp
(*proto.CRLEntry)(nil), // 9: core.CRLEntry
(*GenerateOCSPRequest)(nil), // 2: ca.GenerateOCSPRequest
(*OCSPResponse)(nil), // 3: ca.OCSPResponse
(*GenerateCRLRequest)(nil), // 4: ca.GenerateCRLRequest
(*CRLMetadata)(nil), // 5: ca.CRLMetadata
(*GenerateCRLResponse)(nil), // 6: ca.GenerateCRLResponse
(*timestamppb.Timestamp)(nil), // 7: google.protobuf.Timestamp
(*proto.CRLEntry)(nil), // 8: core.CRLEntry
}
var file_ca_proto_depIdxs = []int32{
8, // 0: ca.GenerateOCSPRequest.revokedAt:type_name -> google.protobuf.Timestamp
6, // 1: ca.GenerateCRLRequest.metadata:type_name -> ca.CRLMetadata
9, // 2: ca.GenerateCRLRequest.entry:type_name -> core.CRLEntry
8, // 3: ca.CRLMetadata.thisUpdate:type_name -> google.protobuf.Timestamp
7, // 0: ca.GenerateOCSPRequest.revokedAt:type_name -> google.protobuf.Timestamp
5, // 1: ca.GenerateCRLRequest.metadata:type_name -> ca.CRLMetadata
8, // 2: ca.GenerateCRLRequest.entry:type_name -> core.CRLEntry
7, // 3: ca.CRLMetadata.thisUpdate:type_name -> google.protobuf.Timestamp
0, // 4: ca.CertificateAuthority.IssueCertificate:input_type -> ca.IssueCertificateRequest
3, // 5: ca.OCSPGenerator.GenerateOCSP:input_type -> ca.GenerateOCSPRequest
5, // 6: ca.CRLGenerator.GenerateCRL:input_type -> ca.GenerateCRLRequest
2, // 5: ca.OCSPGenerator.GenerateOCSP:input_type -> ca.GenerateOCSPRequest
4, // 6: ca.CRLGenerator.GenerateCRL:input_type -> ca.GenerateCRLRequest
1, // 7: ca.CertificateAuthority.IssueCertificate:output_type -> ca.IssueCertificateResponse
4, // 8: ca.OCSPGenerator.GenerateOCSP:output_type -> ca.OCSPResponse
7, // 9: ca.CRLGenerator.GenerateCRL:output_type -> ca.GenerateCRLResponse
3, // 8: ca.OCSPGenerator.GenerateOCSP:output_type -> ca.OCSPResponse
6, // 9: ca.CRLGenerator.GenerateCRL:output_type -> ca.GenerateCRLResponse
7, // [7:10] is the sub-list for method output_type
4, // [4:7] is the sub-list for method input_type
4, // [4:4] is the sub-list for extension type_name
@@ -685,105 +573,7 @@ func file_ca_proto_init() {
if File_ca_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_ca_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*IssueCertificateRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_ca_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*IssueCertificateResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_ca_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*IssueCertificateForPrecertificateRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_ca_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*GenerateOCSPRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_ca_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*OCSPResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_ca_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*GenerateCRLRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_ca_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CRLMetadata); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_ca_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*GenerateCRLResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
file_ca_proto_msgTypes[5].OneofWrappers = []interface{}{
file_ca_proto_msgTypes[4].OneofWrappers = []any{
(*GenerateCRLRequest_Metadata)(nil),
(*GenerateCRLRequest_Entry)(nil),
}
@@ -791,9 +581,9 @@ func file_ca_proto_init() {
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_ca_proto_rawDesc,
RawDescriptor: unsafe.Slice(unsafe.StringData(file_ca_proto_rawDesc), len(file_ca_proto_rawDesc)),
NumEnums: 0,
NumMessages: 8,
NumMessages: 7,
NumExtensions: 0,
NumServices: 3,
},
@@ -802,7 +592,6 @@ func file_ca_proto_init() {
MessageInfos: file_ca_proto_msgTypes,
}.Build()
File_ca_proto = out.File
file_ca_proto_rawDesc = nil
file_ca_proto_goTypes = nil
file_ca_proto_depIdxs = nil
}


@@ -30,19 +30,6 @@ message IssueCertificateResponse {
bytes DER = 1;
}
message IssueCertificateForPrecertificateRequest {
// Next unused field number: 6
bytes DER = 1;
repeated bytes SCTs = 2;
int64 registrationID = 3;
int64 orderID = 4;
// certProfileHash is a hash over the exported fields of a certificate profile
// to ensure that the profile remains unchanged after multiple roundtrips
// through the RA and CA.
bytes certProfileHash = 5;
}
// OCSPGenerator generates OCSP. We separate this out from
// CertificateAuthority so that we can restrict access to a different subset of
// hosts, so the hosts that need to request OCSP generation don't need to be


@@ -1,6 +1,6 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.3.0
// - protoc-gen-go-grpc v1.5.1
// - protoc v3.20.1
// source: ca.proto
@@ -25,6 +25,8 @@ const (
// CertificateAuthorityClient is the client API for CertificateAuthority service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
//
// CertificateAuthority issues certificates.
type CertificateAuthorityClient interface {
// IssueCertificate issues a precertificate, gets SCTs, issues a certificate, and returns that.
IssueCertificate(ctx context.Context, in *IssueCertificateRequest, opts ...grpc.CallOption) (*IssueCertificateResponse, error)
@@ -50,21 +52,27 @@ func (c *certificateAuthorityClient) IssueCertificate(ctx context.Context, in *I
// CertificateAuthorityServer is the server API for CertificateAuthority service.
// All implementations must embed UnimplementedCertificateAuthorityServer
// for forward compatibility
// for forward compatibility.
//
// CertificateAuthority issues certificates.
type CertificateAuthorityServer interface {
// IssueCertificate issues a precertificate, gets SCTs, issues a certificate, and returns that.
IssueCertificate(context.Context, *IssueCertificateRequest) (*IssueCertificateResponse, error)
mustEmbedUnimplementedCertificateAuthorityServer()
}
// UnimplementedCertificateAuthorityServer must be embedded to have forward compatible implementations.
type UnimplementedCertificateAuthorityServer struct {
}
// UnimplementedCertificateAuthorityServer must be embedded to have
// forward compatible implementations.
//
// NOTE: this should be embedded by value instead of pointer to avoid a nil
// pointer dereference when methods are called.
type UnimplementedCertificateAuthorityServer struct{}
func (UnimplementedCertificateAuthorityServer) IssueCertificate(context.Context, *IssueCertificateRequest) (*IssueCertificateResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method IssueCertificate not implemented")
}
func (UnimplementedCertificateAuthorityServer) mustEmbedUnimplementedCertificateAuthorityServer() {}
func (UnimplementedCertificateAuthorityServer) testEmbeddedByValue() {}
// UnsafeCertificateAuthorityServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to CertificateAuthorityServer will
@@ -74,6 +82,13 @@ type UnsafeCertificateAuthorityServer interface {
}
func RegisterCertificateAuthorityServer(s grpc.ServiceRegistrar, srv CertificateAuthorityServer) {
// If the following call panics, it indicates UnimplementedCertificateAuthorityServer was
// embedded by pointer and is nil. This will cause panics if an
// unimplemented method is ever invoked, so we test this at initialization
// time to prevent it from happening at runtime later due to I/O.
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
t.testEmbeddedByValue()
}
s.RegisterService(&CertificateAuthority_ServiceDesc, srv)
}
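The generated comment above says the Unimplemented server must be embedded by value, not by pointer, and that registration now probes for this via a `testEmbeddedByValue()` type assertion. The reason can be demonstrated with a minimal, self-contained sketch; the `UnimplementedGreeter`, `byValue`, and `byPointer` names below are illustrative stand-ins, not part of the generated ca.proto API:

```go
package main

import "fmt"

// UnimplementedGreeter mirrors the generated forward-compat struct:
// its methods have value receivers.
type UnimplementedGreeter struct{}

func (UnimplementedGreeter) Greet() string       { return "unimplemented" }
func (UnimplementedGreeter) testEmbeddedByValue() {}

// byValue embeds the struct by value: promoted methods always work,
// even on the zero value.
type byValue struct{ UnimplementedGreeter }

// byPointer embeds it by pointer: if the field is left nil, calling a
// promoted value-receiver method must dereference nil to copy the
// receiver, which panics at call time.
type byPointer struct{ *UnimplementedGreeter }

func main() {
	fmt.Println(byValue{}.Greet()) // works on the zero value

	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	fmt.Println(byPointer{}.Greet()) // nil pointer dereference
}
```

This is why the generated `Register*Server` functions now call `t.testEmbeddedByValue()` up front: a nil pointer embed fails immediately at registration rather than on the first RPC.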
@@ -118,6 +133,11 @@ const (
// OCSPGeneratorClient is the client API for OCSPGenerator service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
//
// OCSPGenerator generates OCSP. We separate this out from
// CertificateAuthority so that we can restrict access to a different subset of
// hosts, so the hosts that need to request OCSP generation don't need to be
// able to request certificate issuance.
type OCSPGeneratorClient interface {
GenerateOCSP(ctx context.Context, in *GenerateOCSPRequest, opts ...grpc.CallOption) (*OCSPResponse, error)
}
@@ -142,20 +162,29 @@ func (c *oCSPGeneratorClient) GenerateOCSP(ctx context.Context, in *GenerateOCSP
// OCSPGeneratorServer is the server API for OCSPGenerator service.
// All implementations must embed UnimplementedOCSPGeneratorServer
// for forward compatibility
// for forward compatibility.
//
// OCSPGenerator generates OCSP. We separate this out from
// CertificateAuthority so that we can restrict access to a different subset of
// hosts, so the hosts that need to request OCSP generation don't need to be
// able to request certificate issuance.
type OCSPGeneratorServer interface {
GenerateOCSP(context.Context, *GenerateOCSPRequest) (*OCSPResponse, error)
mustEmbedUnimplementedOCSPGeneratorServer()
}
// UnimplementedOCSPGeneratorServer must be embedded to have forward compatible implementations.
type UnimplementedOCSPGeneratorServer struct {
}
// UnimplementedOCSPGeneratorServer must be embedded to have
// forward compatible implementations.
//
// NOTE: this should be embedded by value instead of pointer to avoid a nil
// pointer dereference when methods are called.
type UnimplementedOCSPGeneratorServer struct{}
func (UnimplementedOCSPGeneratorServer) GenerateOCSP(context.Context, *GenerateOCSPRequest) (*OCSPResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method GenerateOCSP not implemented")
}
func (UnimplementedOCSPGeneratorServer) mustEmbedUnimplementedOCSPGeneratorServer() {}
func (UnimplementedOCSPGeneratorServer) testEmbeddedByValue() {}
// UnsafeOCSPGeneratorServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to OCSPGeneratorServer will
@@ -165,6 +194,13 @@ type UnsafeOCSPGeneratorServer interface {
}
func RegisterOCSPGeneratorServer(s grpc.ServiceRegistrar, srv OCSPGeneratorServer) {
// If the following call panics, it indicates UnimplementedOCSPGeneratorServer was
// embedded by pointer and is nil. This will cause panics if an
// unimplemented method is ever invoked, so we test this at initialization
// time to prevent it from happening at runtime later due to I/O.
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
t.testEmbeddedByValue()
}
s.RegisterService(&OCSPGenerator_ServiceDesc, srv)
}
@@ -209,6 +245,8 @@ const (
// CRLGeneratorClient is the client API for CRLGenerator service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
//
// CRLGenerator signs CRLs. It is separated for the same reason as OCSPGenerator.
type CRLGeneratorClient interface {
GenerateCRL(ctx context.Context, opts ...grpc.CallOption) (grpc.BidiStreamingClient[GenerateCRLRequest, GenerateCRLResponse], error)
}
@@ -236,20 +274,26 @@ type CRLGenerator_GenerateCRLClient = grpc.BidiStreamingClient[GenerateCRLReques
// CRLGeneratorServer is the server API for CRLGenerator service.
// All implementations must embed UnimplementedCRLGeneratorServer
// for forward compatibility
// for forward compatibility.
//
// CRLGenerator signs CRLs. It is separated for the same reason as OCSPGenerator.
type CRLGeneratorServer interface {
GenerateCRL(grpc.BidiStreamingServer[GenerateCRLRequest, GenerateCRLResponse]) error
mustEmbedUnimplementedCRLGeneratorServer()
}
// UnimplementedCRLGeneratorServer must be embedded to have forward compatible implementations.
type UnimplementedCRLGeneratorServer struct {
}
// UnimplementedCRLGeneratorServer must be embedded to have
// forward compatible implementations.
//
// NOTE: this should be embedded by value instead of pointer to avoid a nil
// pointer dereference when methods are called.
type UnimplementedCRLGeneratorServer struct{}
func (UnimplementedCRLGeneratorServer) GenerateCRL(grpc.BidiStreamingServer[GenerateCRLRequest, GenerateCRLResponse]) error {
return status.Errorf(codes.Unimplemented, "method GenerateCRL not implemented")
}
func (UnimplementedCRLGeneratorServer) mustEmbedUnimplementedCRLGeneratorServer() {}
func (UnimplementedCRLGeneratorServer) testEmbeddedByValue() {}
// UnsafeCRLGeneratorServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to CRLGeneratorServer will
@@ -259,6 +303,13 @@ type UnsafeCRLGeneratorServer interface {
}
func RegisterCRLGeneratorServer(s grpc.ServiceRegistrar, srv CRLGeneratorServer) {
// If the following call panics, it indicates UnimplementedCRLGeneratorServer was
// embedded by pointer and is nil. This will cause panics if an
// unimplemented method is ever invoked, so we test this at initialization
// time to prevent it from happening at runtime later due to I/O.
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
t.testEmbeddedByValue()
}
s.RegisterService(&CRLGenerator_ServiceDesc, srv)
}


@@ -32,10 +32,6 @@ type dryRunSAC struct {
}
func (d dryRunSAC) AddBlockedKey(_ context.Context, req *sapb.AddBlockedKeyRequest, _ ...grpc.CallOption) (*emptypb.Empty, error) {
b, err := prototext.Marshal(req)
if err != nil {
return nil, err
}
d.log.Infof("dry-run: %#v", string(b))
d.log.Infof("dry-run: Block SPKI hash %x by %s %s", req.KeyHash, req.Comment, req.Source)
return &emptypb.Empty{}, nil
}


@@ -1,84 +0,0 @@
package main
import (
"context"
"errors"
"flag"
"fmt"
"github.com/letsencrypt/boulder/sa"
)
// subcommandUpdateEmail encapsulates the "admin update-email" command.
//
// Note that this command may be very slow, as the initial query to find the set
// of accounts which have a matching contact email address does not use a
// database index. Therefore, when updating the found accounts, it does not exit
// on failure, preferring to continue and make as much progress as possible.
type subcommandUpdateEmail struct {
address string
clear bool
}
var _ subcommand = (*subcommandUpdateEmail)(nil)
func (s *subcommandUpdateEmail) Desc() string {
return "Change or remove an email address across all accounts"
}
func (s *subcommandUpdateEmail) Flags(flag *flag.FlagSet) {
flag.StringVar(&s.address, "address", "", "Email address to update")
flag.BoolVar(&s.clear, "clear", false, "If set, remove the address")
}
func (s *subcommandUpdateEmail) Run(ctx context.Context, a *admin) error {
if s.address == "" {
return errors.New("the -address flag is required")
}
if s.clear {
return a.clearEmail(ctx, s.address)
}
return errors.New("no action to perform on the given email was specified")
}
func (a *admin) clearEmail(ctx context.Context, address string) error {
a.log.AuditInfof("Scanning database for accounts with email addresses matching %q in order to clear the email addresses.", address)
// We use SQL `CONCAT` rather than interpolating with `+` or `%s` because we want to
// use a `?` placeholder for the email, which prevents SQL injection.
// Since this uses a substring match, it is important
// to subsequently parse the JSON list of addresses and look for exact matches.
// Because this does not use an index, it is very slow.
var regIDs []int64
_, err := a.dbMap.Select(ctx, &regIDs, "SELECT id FROM registrations WHERE contact LIKE CONCAT('%\"mailto:', ?, '\"%')", address)
if err != nil {
return fmt.Errorf("identifying matching accounts: %w", err)
}
a.log.Infof("Found %d registration IDs matching email %q.", len(regIDs), address)
failures := 0
for _, regID := range regIDs {
if a.dryRun {
a.log.Infof("dry-run: remove %q from account %d", address, regID)
continue
}
err := sa.ClearEmail(ctx, a.dbMap, regID, address)
if err != nil {
// Log, but don't fail, because it took a long time to find the relevant registration IDs
// and we don't want to have to redo that work.
a.log.AuditErrf("failed to clear email %q for registration ID %d: %s", address, regID, err)
failures++
} else {
a.log.AuditInfof("cleared email %q for registration ID %d", address, regID)
}
}
if failures > 0 {
return fmt.Errorf("failed to clear email for %d out of %d registration IDs", failures, len(regIDs))
}
return nil
}
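The comment in the deleted `clearEmail` above notes that the `LIKE CONCAT(...)` query is only a substring match, so callers must then parse the JSON contact list and check for exact matches. A hedged sketch of what that exact-match re-check could look like (`hasExactMailto` is a hypothetical helper, not part of the admin tool):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// hasExactMailto re-checks a row returned by the substring LIKE query:
// it parses the JSON contact list and requires an exact "mailto:" match,
// so "ann@example.com" does not also match "joann@example.com".
// (Illustrative helper; not from the boulder admin tool.)
func hasExactMailto(contactJSON, address string) bool {
	var contacts []string
	if err := json.Unmarshal([]byte(contactJSON), &contacts); err != nil {
		return false
	}
	for _, c := range contacts {
		if strings.HasPrefix(c, "mailto:") && strings.TrimPrefix(c, "mailto:") == address {
			return true
		}
	}
	return false
}

func main() {
	row := `["mailto:joann@example.com"]`
	fmt.Println(hasExactMailto(row, "ann@example.com"))   // substring would match; exact check rejects
	fmt.Println(hasExactMailto(row, "joann@example.com")) // exact match
}
```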


@@ -178,6 +178,6 @@ func TestBlockSPKIHash(t *testing.T) {
err = a.blockSPKIHash(context.Background(), keyHash[:], u, "")
test.AssertNotError(t, err, "")
test.AssertEquals(t, len(log.GetAllMatching("Found 0 unexpired certificates")), 1)
test.AssertEquals(t, len(log.GetAllMatching("dry-run:")), 1)
test.AssertEquals(t, len(log.GetAllMatching("dry-run: Block SPKI hash "+hex.EncodeToString(keyHash[:]))), 1)
test.AssertEquals(t, len(msa.blockRequests), 0)
}


@@ -70,7 +70,6 @@ func main() {
subcommands := map[string]subcommand{
"revoke-cert": &subcommandRevokeCert{},
"block-key": &subcommandBlockKey{},
"update-email": &subcommandUpdateEmail{},
"pause-identifier": &subcommandPauseIdentifier{},
"unpause-account": &subcommandUnpauseAccount{},
}


@@ -1,15 +1,10 @@
package notmain
import (
"bytes"
"context"
"crypto/x509"
"flag"
"fmt"
"html/template"
netmail "net/mail"
"os"
"strings"
"time"
"github.com/jmhodges/clock"
@ -24,7 +19,6 @@ import (
"github.com/letsencrypt/boulder/db"
bgrpc "github.com/letsencrypt/boulder/grpc"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/mail"
rapb "github.com/letsencrypt/boulder/ra/proto"
"github.com/letsencrypt/boulder/sa"
)
@ -43,10 +37,6 @@ var certsRevoked = prometheus.NewCounter(prometheus.CounterOpts{
Name: "bad_keys_certs_revoked",
Help: "A counter of certificates associated with rows in blockedKeys that have been revoked",
})
var mailErrors = prometheus.NewCounter(prometheus.CounterOpts{
Name: "bad_keys_mail_errors",
Help: "A counter of email send errors",
})
// revoker is an interface used to reduce the scope of a RA gRPC client
// to only the single method we need to use, this makes testing significantly
@ -60,9 +50,6 @@ type badKeyRevoker struct {
maxRevocations int
serialBatchSize int
raClient revoker
mailer mail.Mailer
emailSubject string
emailTemplate *template.Template
logger blog.Logger
clk clock.Clock
backoffIntervalBase time.Duration
@ -190,81 +177,11 @@ func (bkr *badKeyRevoker) markRowChecked(ctx context.Context, unchecked unchecke
return err
}
// resolveContacts builds a map of id -> email addresses
func (bkr *badKeyRevoker) resolveContacts(ctx context.Context, ids []int64) (map[int64][]string, error) {
idToEmail := map[int64][]string{}
for _, id := range ids {
var emails struct {
Contact []string
}
err := bkr.dbMap.SelectOne(ctx, &emails, "SELECT contact FROM registrations WHERE id = ?", id)
if err != nil {
// ErrNoRows is not acceptable here since there should always be a
// row for the registration, even if there are no contacts
return nil, err
}
if len(emails.Contact) != 0 {
for _, email := range emails.Contact {
idToEmail[id] = append(idToEmail[id], strings.TrimPrefix(email, "mailto:"))
}
} else {
// if the account has no contacts add a placeholder empty contact
// so that we don't skip any certificates
idToEmail[id] = append(idToEmail[id], "")
continue
}
}
return idToEmail, nil
}
var maxSerials = 100
// sendMessage sends a single email to the provided address with the revoked
// serials
func (bkr *badKeyRevoker) sendMessage(addr string, serials []string) error {
conn, err := bkr.mailer.Connect()
if err != nil {
return err
}
defer func() {
_ = conn.Close()
}()
mutSerials := make([]string, len(serials))
copy(mutSerials, serials)
if len(mutSerials) > maxSerials {
more := len(mutSerials) - maxSerials
mutSerials = mutSerials[:maxSerials]
mutSerials = append(mutSerials, fmt.Sprintf("and %d more certificates.", more))
}
message := bytes.NewBuffer(nil)
err = bkr.emailTemplate.Execute(message, mutSerials)
if err != nil {
return err
}
err = conn.SendMail([]string{addr}, bkr.emailSubject, message.String())
if err != nil {
return err
}
return nil
}
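The removed sendMessage capped the listed serials at maxSerials, copying the slice first so the caller's data stayed intact and replacing the overflow with a summary line. The truncation step in isolation:

```go
package main

import "fmt"

// truncateSerials returns at most max serials, appending a summary entry
// describing how many were omitted. It copies the input so the caller's
// slice is never modified.
func truncateSerials(serials []string, max int) []string {
	out := make([]string, len(serials))
	copy(out, serials)
	if len(out) > max {
		more := len(out) - max
		out = out[:max]
		out = append(out, fmt.Sprintf("and %d more certificates.", more))
	}
	return out
}

func main() {
	fmt.Println(truncateSerials([]string{"a", "b", "c"}, 2))
	// → [a b and 1 more certificates.]
}
```

This matches the body asserted by the (also removed) TestSendMessage: `"a\nb\nand 1 more certificates.\n"`.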
// revokeCerts revokes all the certificates associated with a particular key hash and sends
// emails to the users that issued the certificates. Emails are not sent to the user which
// requested revocation of the original certificate which marked the key as compromised.
func (bkr *badKeyRevoker) revokeCerts(revokerEmails []string, emailToCerts map[string][]unrevokedCertificate) error {
revokerEmailsMap := map[string]bool{}
for _, email := range revokerEmails {
revokerEmailsMap[email] = true
}
alreadyRevoked := map[int]bool{}
for email, certs := range emailToCerts {
var revokedSerials []string
// revokeCerts revokes all the provided certificates. It uses reason
// keyCompromise and includes note indicating that they were revoked by
// bad-key-revoker.
func (bkr *badKeyRevoker) revokeCerts(certs []unrevokedCertificate) error {
for _, cert := range certs {
revokedSerials = append(revokedSerials, cert.Serial)
if alreadyRevoked[cert.ID] {
continue
}
_, err := bkr.raClient.AdministrativelyRevokeCertificate(context.Background(), &rapb.AdministrativelyRevokeCertificateRequest{
Cert: cert.DER,
Serial: cert.Serial,
@ -275,24 +192,12 @@ func (bkr *badKeyRevoker) revokeCerts(revokerEmails []string, emailToCerts map[s
return err
}
certsRevoked.Inc()
alreadyRevoked[cert.ID] = true
}
// don't send emails to the person who revoked the certificate
if revokerEmailsMap[email] || email == "" {
continue
}
err := bkr.sendMessage(email, revokedSerials)
if err != nil {
mailErrors.Inc()
bkr.logger.Errf("failed to send message: %s", err)
continue
}
}
return nil
}
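In the removed version of revokeCerts, the same certificate row could appear under several contact emails, so revocation was deduplicated by row ID via the alreadyRevoked map before calling the RA. A standalone sketch of that guard pattern (type and function names are illustrative):

```go
package main

import "fmt"

type cert struct {
	ID     int
	Serial string
}

// revokeOnce walks certificates grouped by email and "revokes" each row ID
// at most once, returning the serials actually acted upon.
func revokeOnce(byEmail map[string][]cert) []string {
	alreadyRevoked := map[int]bool{}
	var revoked []string
	for _, certs := range byEmail {
		for _, c := range certs {
			if alreadyRevoked[c.ID] {
				continue // this row was revoked under another email
			}
			alreadyRevoked[c.ID] = true
			revoked = append(revoked, c.Serial)
		}
	}
	return revoked
}

func main() {
	got := revokeOnce(map[string][]cert{
		"a@example.com": {{ID: 0, Serial: "ff"}},
		"b@example.com": {{ID: 0, Serial: "ff"}, {ID: 1, Serial: "ee"}},
	})
	fmt.Println(len(got)) // 2
}
```

The new revokeCerts no longer needs this map because it receives a flat, already-unique list of certificates.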
// invoke processes a single key in the blockedKeys table and returns whether
// there were any rows to process or not.
// invoke exits early and returns true if there is no work to be done.
// Otherwise, it processes a single key in the blockedKeys table and returns false.
func (bkr *badKeyRevoker) invoke(ctx context.Context) (bool, error) {
// Gather a count of rows to be processed.
uncheckedCount, err := bkr.countUncheckedKeys(ctx)
@ -337,47 +242,14 @@ func (bkr *badKeyRevoker) invoke(ctx context.Context) (bool, error) {
return false, nil
}
// build a map of registration ID -> certificates, and collect a
// list of unique registration IDs
ownedBy := map[int64][]unrevokedCertificate{}
var ids []int64
for _, cert := range unrevokedCerts {
if ownedBy[cert.RegistrationID] == nil {
ids = append(ids, cert.RegistrationID)
}
ownedBy[cert.RegistrationID] = append(ownedBy[cert.RegistrationID], cert)
}
// if the account that revoked the original certificate isn't an owner of any
// extant certificates, still add them to ids so that we can resolve their
// email and avoid sending emails later. If RevokedBy == 0 it was a row
// inserted by admin-revoker with a dummy ID, since there won't be a registration
// to look up, don't bother adding it to ids.
if _, present := ownedBy[unchecked.RevokedBy]; !present && unchecked.RevokedBy != 0 {
ids = append(ids, unchecked.RevokedBy)
}
// get contact addresses for the list of IDs
idToEmails, err := bkr.resolveContacts(ctx, ids)
if err != nil {
return false, err
}
// build a map of email -> certificates, this de-duplicates accounts with
// the same email addresses
emailsToCerts := map[string][]unrevokedCertificate{}
for id, emails := range idToEmails {
for _, email := range emails {
emailsToCerts[email] = append(emailsToCerts[email], ownedBy[id]...)
}
}
var serials []string
for _, cert := range unrevokedCerts {
serials = append(serials, cert.Serial)
}
bkr.logger.AuditInfo(fmt.Sprintf("revoking serials %v for key with hash %s", serials, unchecked.KeyHash))
bkr.logger.AuditInfo(fmt.Sprintf("revoking serials %v for key with hash %x", serials, unchecked.KeyHash))
// revoke each certificate and send emails to their owners
err = bkr.revokeCerts(idToEmails[unchecked.RevokedBy], emailsToCerts)
// revoke each certificate
err = bkr.revokeCerts(unrevokedCerts)
if err != nil {
return false, err
}
@ -417,15 +289,14 @@ type Config struct {
// or no work to do.
BackoffIntervalMax config.Duration `validate:"-"`
// Deprecated: the bad-key-revoker no longer sends emails; we use ARI.
// TODO(#8199): Remove this config stanza entirely.
Mailer struct {
cmd.SMTPConfig
// Path to a file containing a list of trusted root certificates for use
// during the SMTP connection (as opposed to the gRPC connections).
cmd.SMTPConfig `validate:"-"`
SMTPTrustedRootFile string
From string `validate:"required"`
EmailSubject string `validate:"required"`
EmailTemplate string `validate:"required"`
From string
EmailSubject string
EmailTemplate string
}
}
@ -457,7 +328,6 @@ func main() {
scope.MustRegister(keysProcessed)
scope.MustRegister(certsRevoked)
scope.MustRegister(mailErrors)
dbMap, err := sa.InitWrappedDb(config.BadKeyRevoker.DB, scope, logger)
cmd.FailOnError(err, "While initializing dbMap")
@ -469,50 +339,11 @@ func main() {
cmd.FailOnError(err, "Failed to load credentials and create gRPC connection to RA")
rac := rapb.NewRegistrationAuthorityClient(conn)
var smtpRoots *x509.CertPool
if config.BadKeyRevoker.Mailer.SMTPTrustedRootFile != "" {
pem, err := os.ReadFile(config.BadKeyRevoker.Mailer.SMTPTrustedRootFile)
cmd.FailOnError(err, "Loading trusted roots file")
smtpRoots = x509.NewCertPool()
if !smtpRoots.AppendCertsFromPEM(pem) {
cmd.FailOnError(nil, "Failed to parse root certs PEM")
}
}
fromAddress, err := netmail.ParseAddress(config.BadKeyRevoker.Mailer.From)
cmd.FailOnError(err, fmt.Sprintf("Could not parse from address: %s", config.BadKeyRevoker.Mailer.From))
smtpPassword, err := config.BadKeyRevoker.Mailer.PasswordConfig.Pass()
cmd.FailOnError(err, "Failed to load SMTP password")
mailClient := mail.New(
config.BadKeyRevoker.Mailer.Server,
config.BadKeyRevoker.Mailer.Port,
config.BadKeyRevoker.Mailer.Username,
smtpPassword,
smtpRoots,
*fromAddress,
logger,
scope,
1*time.Second, // reconnection base backoff
5*60*time.Second, // reconnection maximum backoff
)
if config.BadKeyRevoker.Mailer.EmailSubject == "" {
cmd.Fail("BadKeyRevoker.Mailer.EmailSubject must be populated")
}
templateBytes, err := os.ReadFile(config.BadKeyRevoker.Mailer.EmailTemplate)
cmd.FailOnError(err, fmt.Sprintf("failed to read email template %q: %s", config.BadKeyRevoker.Mailer.EmailTemplate, err))
emailTemplate, err := template.New("email").Parse(string(templateBytes))
cmd.FailOnError(err, fmt.Sprintf("failed to parse email template %q: %s", config.BadKeyRevoker.Mailer.EmailTemplate, err))
bkr := &badKeyRevoker{
dbMap: dbMap,
maxRevocations: config.BadKeyRevoker.MaximumRevocations,
serialBatchSize: config.BadKeyRevoker.FindCertificatesBatchSize,
raClient: rac,
mailer: mailClient,
emailSubject: config.BadKeyRevoker.Mailer.EmailSubject,
emailTemplate: emailTemplate,
logger: logger,
clk: clk,
backoffIntervalMax: config.BadKeyRevoker.BackoffIntervalMax.Duration,

View File

@ -4,24 +4,22 @@ import (
"context"
"crypto/rand"
"fmt"
"html/template"
"strings"
"sync"
"testing"
"time"
"github.com/jmhodges/clock"
"github.com/prometheus/client_golang/prometheus"
"google.golang.org/grpc"
"google.golang.org/protobuf/types/known/emptypb"
"github.com/letsencrypt/boulder/core"
"github.com/letsencrypt/boulder/db"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/mocks"
rapb "github.com/letsencrypt/boulder/ra/proto"
"github.com/letsencrypt/boulder/sa"
"github.com/letsencrypt/boulder/test"
"github.com/letsencrypt/boulder/test/vars"
"github.com/prometheus/client_golang/prometheus"
"google.golang.org/grpc"
"google.golang.org/protobuf/types/known/emptypb"
)
func randHash(t *testing.T) []byte {
@ -81,25 +79,16 @@ func TestSelectUncheckedRows(t *testing.T) {
test.AssertEquals(t, row.RevokedBy, int64(1))
}
func insertRegistration(t *testing.T, dbMap *db.WrappedMap, fc clock.Clock, addrs ...string) int64 {
func insertRegistration(t *testing.T, dbMap *db.WrappedMap, fc clock.Clock) int64 {
t.Helper()
jwkHash := make([]byte, 32)
_, err := rand.Read(jwkHash)
test.AssertNotError(t, err, "failed to read rand")
contactStr := "[]"
if len(addrs) > 0 {
contacts := []string{}
for _, addr := range addrs {
contacts = append(contacts, fmt.Sprintf(`"mailto:%s"`, addr))
}
contactStr = fmt.Sprintf("[%s]", strings.Join(contacts, ","))
}
res, err := dbMap.ExecContext(
context.Background(),
"INSERT INTO registrations (jwk, jwk_sha256, contact, agreement, createdAt, status, LockCol) VALUES (?, ?, ?, ?, ?, ?, ?)",
"INSERT INTO registrations (jwk, jwk_sha256, agreement, createdAt, status, LockCol) VALUES (?, ?, ?, ?, ?, ?)",
[]byte{},
fmt.Sprintf("%x", jwkHash),
contactStr,
"yes",
fc.Now(),
string(core.StatusValid),
@ -244,47 +233,6 @@ func TestFindUnrevoked(t *testing.T) {
test.AssertEquals(t, err.Error(), fmt.Sprintf("too many certificates to revoke associated with %x: got 1, max 0", hashA))
}
func TestResolveContacts(t *testing.T) {
dbMap, err := sa.DBMapForTest(vars.DBConnSAFullPerms)
test.AssertNotError(t, err, "failed setting up db client")
defer test.ResetBoulderTestDatabase(t)()
fc := clock.NewFake()
bkr := &badKeyRevoker{dbMap: dbMap, clk: fc}
regIDA := insertRegistration(t, dbMap, fc)
regIDB := insertRegistration(t, dbMap, fc, "example.com", "example-2.com")
regIDC := insertRegistration(t, dbMap, fc, "example.com")
regIDD := insertRegistration(t, dbMap, fc, "example-2.com")
idToEmail, err := bkr.resolveContacts(context.Background(), []int64{regIDA, regIDB, regIDC, regIDD})
test.AssertNotError(t, err, "resolveContacts failed")
test.AssertDeepEquals(t, idToEmail, map[int64][]string{
regIDA: {""},
regIDB: {"example.com", "example-2.com"},
regIDC: {"example.com"},
regIDD: {"example-2.com"},
})
}
var testTemplate = template.Must(template.New("testing").Parse("{{range .}}{{.}}\n{{end}}"))
func TestSendMessage(t *testing.T) {
mm := &mocks.Mailer{}
fc := clock.NewFake()
bkr := &badKeyRevoker{mailer: mm, emailSubject: "testing", emailTemplate: testTemplate, clk: fc}
maxSerials = 2
err := bkr.sendMessage("example.com", []string{"a", "b", "c"})
test.AssertNotError(t, err, "sendMessages failed")
test.AssertEquals(t, len(mm.Messages), 1)
test.AssertEquals(t, mm.Messages[0].To, "example.com")
test.AssertEquals(t, mm.Messages[0].Subject, bkr.emailSubject)
test.AssertEquals(t, mm.Messages[0].Body, "a\nb\nand 1 more certificates.\n")
}
type mockRevoker struct {
revoked int
mu sync.Mutex
@ -303,20 +251,15 @@ func TestRevokeCerts(t *testing.T) {
defer test.ResetBoulderTestDatabase(t)()
fc := clock.NewFake()
mm := &mocks.Mailer{}
mr := &mockRevoker{}
bkr := &badKeyRevoker{dbMap: dbMap, raClient: mr, mailer: mm, emailSubject: "testing", emailTemplate: testTemplate, clk: fc}
bkr := &badKeyRevoker{dbMap: dbMap, raClient: mr, clk: fc}
err = bkr.revokeCerts([]string{"revoker@example.com", "revoker-b@example.com"}, map[string][]unrevokedCertificate{
"revoker@example.com": {{ID: 0, Serial: "ff"}},
"revoker-b@example.com": {{ID: 0, Serial: "ff"}},
"other@example.com": {{ID: 1, Serial: "ee"}},
err = bkr.revokeCerts([]unrevokedCertificate{
{ID: 0, Serial: "ff"},
{ID: 1, Serial: "ee"},
})
test.AssertNotError(t, err, "revokeCerts failed")
test.AssertEquals(t, len(mm.Messages), 1)
test.AssertEquals(t, mm.Messages[0].To, "other@example.com")
test.AssertEquals(t, mm.Messages[0].Subject, bkr.emailSubject)
test.AssertEquals(t, mm.Messages[0].Body, "ee\n")
test.AssertEquals(t, mr.revoked, 2)
}
func TestCertificateAbsent(t *testing.T) {
@ -329,7 +272,7 @@ func TestCertificateAbsent(t *testing.T) {
fc := clock.NewFake()
// populate DB with all the test data
regIDA := insertRegistration(t, dbMap, fc, "example.com")
regIDA := insertRegistration(t, dbMap, fc)
hashA := randHash(t)
insertBlockedRow(t, dbMap, fc, hashA, regIDA, false)
@ -349,9 +292,6 @@ func TestCertificateAbsent(t *testing.T) {
maxRevocations: 1,
serialBatchSize: 1,
raClient: &mockRevoker{},
mailer: &mocks.Mailer{},
emailSubject: "testing",
emailTemplate: testTemplate,
logger: blog.NewMock(),
clk: fc,
}
@ -368,24 +308,20 @@ func TestInvoke(t *testing.T) {
fc := clock.NewFake()
mm := &mocks.Mailer{}
mr := &mockRevoker{}
bkr := &badKeyRevoker{
dbMap: dbMap,
maxRevocations: 10,
serialBatchSize: 1,
raClient: mr,
mailer: mm,
emailSubject: "testing",
emailTemplate: testTemplate,
logger: blog.NewMock(),
clk: fc,
}
// populate DB with all the test data
regIDA := insertRegistration(t, dbMap, fc, "example.com")
regIDB := insertRegistration(t, dbMap, fc, "example.com")
regIDC := insertRegistration(t, dbMap, fc, "other.example.com", "uno.example.com")
regIDA := insertRegistration(t, dbMap, fc)
regIDB := insertRegistration(t, dbMap, fc)
regIDC := insertRegistration(t, dbMap, fc)
regIDD := insertRegistration(t, dbMap, fc)
hashA := randHash(t)
insertBlockedRow(t, dbMap, fc, hashA, regIDC, false)
@ -398,8 +334,6 @@ func TestInvoke(t *testing.T) {
test.AssertNotError(t, err, "invoke failed")
test.AssertEquals(t, noWork, false)
test.AssertEquals(t, mr.revoked, 4)
test.AssertEquals(t, len(mm.Messages), 1)
test.AssertEquals(t, mm.Messages[0].To, "example.com")
test.AssertMetricWithLabelsEquals(t, keysToProcess, prometheus.Labels{}, 1)
var checked struct {
@ -440,23 +374,19 @@ func TestInvokeRevokerHasNoExtantCerts(t *testing.T) {
fc := clock.NewFake()
mm := &mocks.Mailer{}
mr := &mockRevoker{}
bkr := &badKeyRevoker{dbMap: dbMap,
maxRevocations: 10,
serialBatchSize: 1,
raClient: mr,
mailer: mm,
emailSubject: "testing",
emailTemplate: testTemplate,
logger: blog.NewMock(),
clk: fc,
}
// populate DB with all the test data
regIDA := insertRegistration(t, dbMap, fc, "a@example.com")
regIDB := insertRegistration(t, dbMap, fc, "a@example.com")
regIDC := insertRegistration(t, dbMap, fc, "b@example.com")
regIDA := insertRegistration(t, dbMap, fc)
regIDB := insertRegistration(t, dbMap, fc)
regIDC := insertRegistration(t, dbMap, fc)
hashA := randHash(t)
@ -471,8 +401,6 @@ func TestInvokeRevokerHasNoExtantCerts(t *testing.T) {
test.AssertNotError(t, err, "invoke failed")
test.AssertEquals(t, noWork, false)
test.AssertEquals(t, mr.revoked, 4)
test.AssertEquals(t, len(mm.Messages), 1)
test.AssertEquals(t, mm.Messages[0].To, "b@example.com")
}
func TestBackoffPolicy(t *testing.T) {

View File

@ -164,8 +164,9 @@ func main() {
metrics := ca.NewCAMetrics(scope)
cmd.FailOnError(c.PA.CheckChallenges(), "Invalid PA configuration")
cmd.FailOnError(c.PA.CheckIdentifiers(), "Invalid PA configuration")
pa, err := policy.New(c.PA.Challenges, logger)
pa, err := policy.New(c.PA.Identifiers, c.PA.Challenges, logger)
cmd.FailOnError(err, "Couldn't create PA")
if c.CA.HostnamePolicyFile == "" {

View File

@ -6,7 +6,6 @@ import (
"os"
akamaipb "github.com/letsencrypt/boulder/akamai/proto"
"github.com/letsencrypt/boulder/allowlist"
capb "github.com/letsencrypt/boulder/ca/proto"
"github.com/letsencrypt/boulder/cmd"
"github.com/letsencrypt/boulder/config"
@ -95,11 +94,13 @@ type Config struct {
// default.
DefaultProfileName string `validate:"required"`
// MustStapleAllowList specifies the path to a YAML file containing a
// MustStapleAllowList specified the path to a YAML file containing a
// list of account IDs permitted to request certificates with the OCSP
// Must-Staple extension. If no path is specified, the extension is
// permitted for all accounts. If the file exists but is empty, the
// extension is disabled for all accounts.
// Must-Staple extension.
//
// Deprecated: This field no longer has any effect, all Must-Staple requests
// are rejected.
// TODO(#8177): Remove this field.
MustStapleAllowList string `validate:"omitempty"`
// GoodKey is an embedded config stanza for the goodkey library.
@ -122,11 +123,6 @@ type Config struct {
// a `Stagger` value controlling how long we wait for one operator group
// to respond before trying a different one.
CTLogs ctconfig.CTConfig
// InformationalCTLogs are a set of CT logs we will always submit to
// but won't ever use the SCTs from. This may be because we want to
// test them or because they are not yet approved by a browser/root
// program but we still want our certs to end up there.
InformationalCTLogs []ctconfig.LogDescription
// IssuerCerts are paths to all intermediate certificates which may have
// been used to issue certificates in the last 90 days. These are used to
@ -171,8 +167,9 @@ func main() {
// Validate PA config and set defaults if needed
cmd.FailOnError(c.PA.CheckChallenges(), "Invalid PA configuration")
cmd.FailOnError(c.PA.CheckIdentifiers(), "Invalid PA configuration")
pa, err := policy.New(c.PA.Challenges, logger)
pa, err := policy.New(c.PA.Identifiers, c.PA.Challenges, logger)
cmd.FailOnError(err, "Couldn't create PA")
if c.RA.HostnamePolicyFile == "" {
@ -258,14 +255,6 @@ func main() {
validationProfiles, err := ra.NewValidationProfiles(c.RA.DefaultProfileName, c.RA.ValidationProfiles)
cmd.FailOnError(err, "Failed to load validation profiles")
var mustStapleAllowList *allowlist.List[int64]
if c.RA.MustStapleAllowList != "" {
data, err := os.ReadFile(c.RA.MustStapleAllowList)
cmd.FailOnError(err, "Failed to read allow list for Must-Staple extension")
mustStapleAllowList, err = allowlist.NewFromYAML[int64](data)
cmd.FailOnError(err, "Failed to parse allow list for Must-Staple extension")
}
if features.Get().AsyncFinalize && c.RA.FinalizeTimeout.Duration == 0 {
cmd.Fail("finalizeTimeout must be supplied when AsyncFinalize feature is enabled")
}
@ -298,7 +287,6 @@ func main() {
txnBuilder,
c.RA.MaxNames,
validationProfiles,
mustStapleAllowList,
pubc,
c.RA.FinalizeTimeout.Duration,
ctp,

View File

@ -10,6 +10,7 @@ import (
"github.com/letsencrypt/boulder/cmd"
"github.com/letsencrypt/boulder/features"
bgrpc "github.com/letsencrypt/boulder/grpc"
"github.com/letsencrypt/boulder/iana"
"github.com/letsencrypt/boulder/va"
vaConfig "github.com/letsencrypt/boulder/va/config"
vapb "github.com/letsencrypt/boulder/va/proto"
@ -81,16 +82,12 @@ func main() {
clk := cmd.Clock()
var servers bdns.ServerProvider
proto := "udp"
if features.Get().DOH {
proto = "tcp"
}
if len(c.VA.DNSStaticResolvers) != 0 {
servers, err = bdns.NewStaticProvider(c.VA.DNSStaticResolvers)
cmd.FailOnError(err, "Couldn't start static DNS server resolver")
} else {
servers, err = bdns.StartDynamicProvider(c.VA.DNSProvider, 60*time.Second, proto)
servers, err = bdns.StartDynamicProvider(c.VA.DNSProvider, 60*time.Second, "tcp")
cmd.FailOnError(err, "Couldn't start dynamic DNS server resolver")
}
defer servers.Stop()
@ -106,6 +103,7 @@ func main() {
scope,
clk,
c.VA.DNSTries,
c.VA.UserAgent,
logger,
tlsConfig)
} else {
@ -115,6 +113,7 @@ func main() {
scope,
clk,
c.VA.DNSTries,
c.VA.UserAgent,
logger,
tlsConfig)
}
@ -150,7 +149,7 @@ func main() {
c.VA.AccountURIPrefixes,
va.PrimaryPerspective,
"",
bdns.IsReservedIP)
iana.IsReservedAddr)
cmd.FailOnError(err, "Unable to create VA server")
start, err := bgrpc.NewServer(c.VA.GRPC, logger).Add(

View File

@ -127,6 +127,11 @@ type Config struct {
// Deprecated: This field no longer has any effect.
PendingAuthorizationLifetimeDays int `validate:"-"`
// MaxContactsPerRegistration limits the number of contact addresses which
// can be provided in a single NewAccount request. Requests containing more
// contacts than this are rejected. Default: 10.
MaxContactsPerRegistration int `validate:"omitempty,min=1"`
AccountCache *CacheConfig
Limiter struct {
@ -312,6 +317,10 @@ func main() {
c.WFE.StaleTimeout.Duration = time.Minute * 10
}
if c.WFE.MaxContactsPerRegistration == 0 {
c.WFE.MaxContactsPerRegistration = 10
}
var limiter *ratelimits.Limiter
var txnBuilder *ratelimits.TransactionBuilder
var limiterRedis *bredis.Ring
@ -346,6 +355,7 @@ func main() {
logger,
c.WFE.Timeout.Duration,
c.WFE.StaleTimeout.Duration,
c.WFE.MaxContactsPerRegistration,
rac,
sac,
eec,

View File

@ -15,16 +15,12 @@ import (
_ "github.com/letsencrypt/boulder/cmd/boulder-va"
_ "github.com/letsencrypt/boulder/cmd/boulder-wfe2"
_ "github.com/letsencrypt/boulder/cmd/cert-checker"
_ "github.com/letsencrypt/boulder/cmd/contact-auditor"
_ "github.com/letsencrypt/boulder/cmd/crl-checker"
_ "github.com/letsencrypt/boulder/cmd/crl-storer"
_ "github.com/letsencrypt/boulder/cmd/crl-updater"
_ "github.com/letsencrypt/boulder/cmd/email-exporter"
_ "github.com/letsencrypt/boulder/cmd/expiration-mailer"
_ "github.com/letsencrypt/boulder/cmd/id-exporter"
_ "github.com/letsencrypt/boulder/cmd/log-validator"
_ "github.com/letsencrypt/boulder/cmd/nonce-service"
_ "github.com/letsencrypt/boulder/cmd/notify-mailer"
_ "github.com/letsencrypt/boulder/cmd/ocsp-responder"
_ "github.com/letsencrypt/boulder/cmd/remoteva"
_ "github.com/letsencrypt/boulder/cmd/reversed-hostname-checker"

View File

@ -305,12 +305,11 @@ func makeTemplate(randReader io.Reader, profile *certProfile, pubKey []byte, tbc
case crlCert:
cert.IsCA = false
case requestCert, intermediateCert:
// id-kp-serverAuth and id-kp-clientAuth are included in intermediate
// certificates in order to technically constrain them. id-kp-serverAuth
// is required by 7.1.2.2.g of the CABF Baseline Requirements, but
// id-kp-clientAuth isn't. We include id-kp-clientAuth as we also include
// it in our end-entity certificates.
cert.ExtKeyUsage = []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth, x509.ExtKeyUsageServerAuth}
// id-kp-serverAuth is included in intermediate certificates, as required by
// Section 7.1.2.10.6 of the CA/BF Baseline Requirements.
// id-kp-clientAuth is excluded, as required by section 3.2.1 of the Chrome
// Root Program Requirements.
cert.ExtKeyUsage = []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth}
cert.MaxPathLenZero = true
case crossCert:
cert.ExtKeyUsage = tbcs.ExtKeyUsage

View File

@ -133,9 +133,8 @@ func TestMakeTemplateRoot(t *testing.T) {
cert, err = makeTemplate(randReader, profile, pubKey, nil, intermediateCert)
test.AssertNotError(t, err, "makeTemplate failed when everything worked as expected")
test.Assert(t, cert.MaxPathLenZero, "MaxPathLenZero not set in intermediate template")
test.AssertEquals(t, len(cert.ExtKeyUsage), 2)
test.AssertEquals(t, cert.ExtKeyUsage[0], x509.ExtKeyUsageClientAuth)
test.AssertEquals(t, cert.ExtKeyUsage[1], x509.ExtKeyUsageServerAuth)
test.AssertEquals(t, len(cert.ExtKeyUsage), 1)
test.AssertEquals(t, cert.ExtKeyUsage[0], x509.ExtKeyUsageServerAuth)
}
func TestMakeTemplateRestrictedCrossCertificate(t *testing.T) {

View File

@ -8,6 +8,7 @@ import (
"encoding/json"
"flag"
"fmt"
"net/netip"
"os"
"regexp"
"slices"
@ -78,7 +79,7 @@ func (r *report) dump() error {
type reportEntry struct {
Valid bool `json:"valid"`
DNSNames []string `json:"dnsNames"`
SANs []string `json:"sans"`
Problems []string `json:"problems,omitempty"`
}
@ -100,7 +101,7 @@ type certChecker struct {
kp goodkey.KeyPolicy
dbMap certDB
getPrecert precertGetter
certs chan core.Certificate
certs chan *corepb.Certificate
clock clock.Clock
rMu *sync.Mutex
issuedReport report
@ -124,14 +125,14 @@ func newChecker(saDbMap certDB,
if err != nil {
return nil, err
}
return precertPb.DER, nil
return precertPb.Der, nil
}
return certChecker{
pa: pa,
kp: kp,
dbMap: saDbMap,
getPrecert: precertGetter,
certs: make(chan core.Certificate, batchSize),
certs: make(chan *corepb.Certificate, batchSize),
rMu: new(sync.Mutex),
clock: clk,
issuedReport: report{Entries: make(map[string]reportEntry)},
@ -214,7 +215,7 @@ func (c *certChecker) getCerts(ctx context.Context) error {
batchStartID := initialID
var retries int
for {
certs, err := sa.SelectCertificates(
certs, highestID, err := sa.SelectCertificates(
ctx,
c.dbMap,
`WHERE id > :id AND
@ -239,16 +240,16 @@ func (c *certChecker) getCerts(ctx context.Context) error {
}
retries = 0
for _, cert := range certs {
c.certs <- cert.Certificate
c.certs <- cert
}
if len(certs) == 0 {
break
}
lastCert := certs[len(certs)-1]
batchStartID = lastCert.ID
if lastCert.Issued.After(c.issuedReport.end) {
if lastCert.Issued.AsTime().After(c.issuedReport.end) {
break
}
batchStartID = highestID
}
// Close channel so range operations won't block once the channel empties out
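The rewritten getCerts loop is keyset pagination: each batch selects rows with `id > :id`, and the cursor advances to the highest id seen (now returned by SelectCertificates) rather than trusting the last row's ordering or re-scanning with OFFSET. A generic sketch of the pattern, with illustrative names standing in for the SQL query:

```go
package main

import "fmt"

// fetchBatch stands in for a "WHERE id > cursor ... LIMIT n" query; it
// returns the matching rows plus the highest id seen, which becomes the
// next cursor.
func fetchBatch(rows []int64, cursor int64, n int) ([]int64, int64) {
	var out []int64
	highest := cursor
	for _, id := range rows {
		if id > cursor && len(out) < n {
			out = append(out, id)
			highest = id
		}
	}
	return out, highest
}

func main() {
	all := []int64{3, 5, 9, 12, 20} // pretend table, ordered by id
	cursor := int64(0)
	for {
		batch, next := fetchBatch(all, cursor, 2)
		if len(batch) == 0 {
			break // no more work, mirroring the empty-batch break above
		}
		fmt.Println(batch)
		cursor = next
	}
	// → [3 5], [9 12], [20]
}
```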
@ -258,13 +259,13 @@ func (c *certChecker) getCerts(ctx context.Context) error {
func (c *certChecker) processCerts(ctx context.Context, wg *sync.WaitGroup, badResultsOnly bool) {
for cert := range c.certs {
dnsNames, problems := c.checkCert(ctx, cert)
sans, problems := c.checkCert(ctx, cert)
valid := len(problems) == 0
c.rMu.Lock()
if !badResultsOnly || (badResultsOnly && !valid) {
c.issuedReport.Entries[cert.Serial] = reportEntry{
Valid: valid,
DNSNames: dnsNames,
SANs: sans,
Problems: problems,
}
}
@ -302,8 +303,8 @@ var expectedExtensionContent = map[string][]byte{
// likely valid at the time the certificate was issued. Authorizations with
// status = "deactivated" are counted for this, so long as their validatedAt
// is before the issuance and expiration is after.
func (c *certChecker) checkValidations(ctx context.Context, cert core.Certificate, idents identifier.ACMEIdentifiers) error {
authzs, err := sa.SelectAuthzsMatchingIssuance(ctx, c.dbMap, cert.RegistrationID, cert.Issued, idents)
func (c *certChecker) checkValidations(ctx context.Context, cert *corepb.Certificate, idents identifier.ACMEIdentifiers) error {
authzs, err := sa.SelectAuthzsMatchingIssuance(ctx, c.dbMap, cert.RegistrationID, cert.Issued.AsTime(), idents)
if err != nil {
return fmt.Errorf("error checking authzs for certificate %s: %w", cert.Serial, err)
}
@ -312,8 +313,8 @@ func (c *certChecker) checkValidations(ctx context.Context, cert core.Certificat
return fmt.Errorf("no relevant authzs found valid at %s", cert.Issued)
}
// We may get multiple authorizations for the same name, but that's okay.
// Any authorization for a given name is sufficient.
// We may get multiple authorizations for the same identifier, but that's
// okay. Any authorization for a given identifier is sufficient.
identToAuthz := make(map[identifier.ACMEIdentifier]*corepb.Authorization)
for _, m := range authzs {
identToAuthz[identifier.FromProto(m.Identifier)] = m
@ -333,21 +334,29 @@ func (c *certChecker) checkValidations(ctx context.Context, cert core.Certificat
return nil
}
// checkCert returns a list of DNS names in the certificate and a list of problems with the certificate.
func (c *certChecker) checkCert(ctx context.Context, cert core.Certificate) ([]string, []string) {
var dnsNames []string
// checkCert returns a list of Subject Alternative Names in the certificate and a list of problems with the certificate.
func (c *certChecker) checkCert(ctx context.Context, cert *corepb.Certificate) ([]string, []string) {
var problems []string
// Check that the digests match.
if cert.Digest != core.Fingerprint256(cert.DER) {
if cert.Digest != core.Fingerprint256(cert.Der) {
problems = append(problems, "Stored digest doesn't match certificate digest")
}
// Parse the certificate.
parsedCert, err := zX509.ParseCertificate(cert.DER)
parsedCert, err := zX509.ParseCertificate(cert.Der)
if err != nil {
problems = append(problems, fmt.Sprintf("Couldn't parse stored certificate: %s", err))
} else {
dnsNames = parsedCert.DNSNames
// This is a fatal error, we can't do any further processing.
return nil, problems
}
// Now that it's parsed, we can extract the SANs.
sans := slices.Clone(parsedCert.DNSNames)
for _, ip := range parsedCert.IPAddresses {
sans = append(sans, ip.String())
}
// Run zlint checks.
results := zlint.LintCertificateEx(parsedCert, c.lints)
for name, res := range results.Results {
@ -360,6 +369,7 @@ func (c *certChecker) checkCert(ctx context.Context, cert core.Certificate) ([]s
}
problems = append(problems, prob)
}
// Check if stored serial is correct.
storedSerial, err := core.StringToSerial(cert.Serial)
if err != nil {
@ -367,18 +377,22 @@ func (c *certChecker) checkCert(ctx context.Context, cert core.Certificate) ([]s
} else if parsedCert.SerialNumber.Cmp(storedSerial) != 0 {
problems = append(problems, "Stored serial doesn't match certificate serial")
}
// Check that we have the correct expiration time.
if !parsedCert.NotAfter.Equal(cert.Expires) {
if !parsedCert.NotAfter.Equal(cert.Expires.AsTime()) {
problems = append(problems, "Stored expiration doesn't match certificate NotAfter")
}
// Check if basic constraints are set.
if !parsedCert.BasicConstraintsValid {
problems = append(problems, "Certificate doesn't have basic constraints set")
}
// Check that the cert isn't able to sign other certificates.
if parsedCert.IsCA {
problems = append(problems, "Certificate can sign other certificates")
}
// Check that the cert has a valid validity period. The validity
// period is computed inclusive of the whole final second indicated by
// notAfter.
@ -387,10 +401,17 @@ func (c *certChecker) checkCert(ctx context.Context, cert core.Certificate) ([]s
if !ok {
problems = append(problems, "Certificate has unacceptable validity period")
}
// Check that the stored issuance time isn't too far back/forward dated.
if parsedCert.NotBefore.Before(cert.Issued.Add(-6*time.Hour)) || parsedCert.NotBefore.After(cert.Issued.Add(6*time.Hour)) {
if parsedCert.NotBefore.Before(cert.Issued.AsTime().Add(-6*time.Hour)) || parsedCert.NotBefore.After(cert.Issued.AsTime().Add(6*time.Hour)) {
problems = append(problems, "Stored issuance date is outside of 6 hour window of certificate NotBefore")
}
// Check that the cert doesn't contain any SANs of unexpected types.
if len(parsedCert.EmailAddresses) != 0 || len(parsedCert.URIs) != 0 {
problems = append(problems, "Certificate contains SAN of unacceptable type (email or URI)")
}
if parsedCert.Subject.CommonName != "" {
// Check if the CommonName is <= 64 characters.
if len(parsedCert.Subject.CommonName) > 64 {
@@ -401,21 +422,21 @@ func (c *certChecker) checkCert(ctx context.Context, cert core.Certificate) ([]s
}
// Check that the CommonName is included in the SANs.
if !slices.Contains(parsedCert.DNSNames, parsedCert.Subject.CommonName) {
if !slices.Contains(sans, parsedCert.Subject.CommonName) {
problems = append(problems, fmt.Sprintf("Certificate Common Name does not appear in Subject Alternative Names: %q !< %v",
parsedCert.Subject.CommonName, parsedCert.DNSNames))
}
}
// Check that the PA is still willing to issue for each name in DNSNames.
// We do not check the CommonName here, as (if it exists) we already checked
// that it is identical to one of the DNSNames in the SAN.
//
// TODO(#7311): We'll need to iterate over IP address identifiers too.
// Check that the PA is still willing to issue for each DNS name and IP
// address in the SANs. We do not check the CommonName here, as (if it exists)
// we already checked that it is identical to one of the DNSNames in the SAN.
for _, name := range parsedCert.DNSNames {
err = c.pa.WillingToIssue(identifier.ACMEIdentifiers{identifier.NewDNS(name)})
if err != nil {
problems = append(problems, fmt.Sprintf("Policy Authority isn't willing to issue for '%s': %s", name, err))
} else {
continue
}
// For defense-in-depth, even if the PA was willing to issue for a name
// we double check it against a list of forbidden domains. This way even
// if the hostnamePolicyFile malfunctions we will flag the forbidden
@@ -426,7 +447,19 @@ func (c *certChecker) checkCert(ctx context.Context, cert core.Certificate) ([]s
"forbiddenDomains entry %q", name, pattern))
}
}
for _, name := range parsedCert.IPAddresses {
ip, ok := netip.AddrFromSlice(name)
if !ok {
problems = append(problems, fmt.Sprintf("SANs contain malformed IP %q", name))
continue
}
err = c.pa.WillingToIssue(identifier.ACMEIdentifiers{identifier.NewIP(ip)})
if err != nil {
problems = append(problems, fmt.Sprintf("Policy Authority isn't willing to issue for '%s': %s", name, err))
continue
}
}
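The netip.AddrFromSlice conversion in the new loop only succeeds for 4-byte (IPv4) and 16-byte (IPv6) slices, which is why a malformed IP SAN is reported as a problem rather than silently skipped. A standalone illustration of that behavior:

```go
package main

import (
	"fmt"
	"net/netip"
)

// ipFromSANBytes converts the raw bytes of an IP-address SAN into a
// netip.Addr, as the loop above does; only 4- or 16-byte slices succeed.
func ipFromSANBytes(b []byte) (netip.Addr, bool) {
	return netip.AddrFromSlice(b)
}

func main() {
	// A well-formed IPv4 SAN arrives as a 4-byte net.IP slice.
	ip, ok := ipFromSANBytes([]byte{127, 0, 0, 1})
	fmt.Println(ok, ip) // true 127.0.0.1

	// Anything that isn't 4 or 16 bytes long is malformed and rejected.
	_, ok = ipFromSANBytes([]byte{1, 2, 3})
	fmt.Println(ok) // false
}
```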
// Check the cert has the correct key usage extensions
serverAndClient := slices.Equal(parsedCert.ExtKeyUsage, []zX509.ExtKeyUsage{zX509.ExtKeyUsageServerAuth, zX509.ExtKeyUsageClientAuth})
serverOnly := slices.Equal(parsedCert.ExtKeyUsage, []zX509.ExtKeyUsage{zX509.ExtKeyUsageServerAuth})
@@ -451,14 +484,15 @@ func (c *certChecker) checkCert(ctx context.Context, cert core.Certificate) ([]s
// checks which rely on external resources such as weak or blocked key
// lists, or the list of blocked keys in the database. This only performs
// static checks, such as against the RSA key size and the ECDSA curve.
p, err := x509.ParseCertificate(cert.DER)
p, err := x509.ParseCertificate(cert.Der)
if err != nil {
problems = append(problems, fmt.Sprintf("Couldn't parse stored certificate: %s", err))
}
} else {
err = c.kp.GoodKey(ctx, p.PublicKey)
if err != nil {
problems = append(problems, fmt.Sprintf("Key Policy isn't willing to issue for public key: %s", err))
}
}
precertDER, err := c.getPrecert(ctx, cert.Serial)
if err != nil {
@@ -467,10 +501,9 @@ func (c *certChecker) checkCert(ctx context.Context, cert core.Certificate) ([]s
c.logger.Errf("fetching linting precertificate for %s: %s", cert.Serial, err)
atomic.AddInt64(&c.issuedReport.DbErrs, 1)
} else {
err = precert.Correspond(precertDER, cert.DER)
err = precert.Correspond(precertDER, cert.Der)
if err != nil {
problems = append(problems,
fmt.Sprintf("Certificate does not correspond to precert for %s: %s", cert.Serial, err))
problems = append(problems, fmt.Sprintf("Certificate does not correspond to precert for %s: %s", cert.Serial, err))
}
}
@@ -489,8 +522,8 @@ func (c *certChecker) checkCert(ctx context.Context, cert core.Certificate) ([]s
}
}
}
}
return dnsNames, problems
return sans, problems
}
type Config struct {
@@ -562,6 +595,7 @@ func main() {
// Validate PA config and set defaults if needed.
cmd.FailOnError(config.PA.CheckChallenges(), "Invalid PA configuration")
cmd.FailOnError(config.PA.CheckIdentifiers(), "Invalid PA configuration")
kp, err := sagoodkey.NewPolicy(&config.CertChecker.GoodKey, nil)
cmd.FailOnError(err, "Unable to create key policy")
@ -575,7 +609,7 @@ func main() {
})
prometheus.DefaultRegisterer.MustRegister(checkerLatency)
pa, err := policy.New(config.PA.Challenges, logger)
pa, err := policy.New(config.PA.Identifiers, config.PA.Challenges, logger)
cmd.FailOnError(err, "Failed to create PA")
err = pa.LoadHostnamePolicyFile(config.CertChecker.HostnamePolicyFile)


@@ -27,9 +27,11 @@ import (
"google.golang.org/protobuf/types/known/timestamppb"
"github.com/letsencrypt/boulder/core"
corepb "github.com/letsencrypt/boulder/core/proto"
"github.com/letsencrypt/boulder/ctpolicy/loglist"
"github.com/letsencrypt/boulder/goodkey"
"github.com/letsencrypt/boulder/goodkey/sagoodkey"
"github.com/letsencrypt/boulder/identifier"
"github.com/letsencrypt/boulder/linter"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/metrics"
@@ -51,7 +53,10 @@ var (
func init() {
var err error
pa, err = policy.New(map[core.AcmeChallenge]bool{}, blog.NewMock())
pa, err = policy.New(
map[identifier.IdentifierType]bool{identifier.TypeDNS: true, identifier.TypeIP: true},
map[core.AcmeChallenge]bool{},
blog.NewMock())
if err != nil {
log.Fatal(err)
}
@@ -79,12 +84,12 @@ func BenchmarkCheckCert(b *testing.B) {
SerialNumber: serial,
}
certDer, _ := x509.CreateCertificate(rand.Reader, &rawCert, &rawCert, &testKey.PublicKey, testKey)
cert := core.Certificate{
cert := &corepb.Certificate{
Serial: core.SerialToString(serial),
Digest: core.Fingerprint256(certDer),
DER: certDer,
Issued: time.Now(),
Expires: expiry,
Der: certDer,
Issued: timestamppb.New(time.Now()),
Expires: timestamppb.New(expiry),
}
b.ResetTimer()
for range b.N {
@@ -125,12 +130,12 @@ func TestCheckWildcardCert(t *testing.T) {
test.AssertNotError(t, err, "Couldn't create certificate")
parsed, err := x509.ParseCertificate(wildcardCertDer)
test.AssertNotError(t, err, "Couldn't parse created certificate")
cert := core.Certificate{
cert := &corepb.Certificate{
Serial: core.SerialToString(serial),
Digest: core.Fingerprint256(wildcardCertDer),
Expires: parsed.NotAfter,
Issued: parsed.NotBefore,
DER: wildcardCertDer,
Expires: timestamppb.New(parsed.NotAfter),
Issued: timestamppb.New(parsed.NotBefore),
Der: wildcardCertDer,
}
_, problems := checker.checkCert(context.Background(), cert)
for _, p := range problems {
@@ -138,7 +143,7 @@ func TestCheckWildcardCert(t *testing.T) {
}
}
func TestCheckCertReturnsDNSNames(t *testing.T) {
func TestCheckCertReturnsSANs(t *testing.T) {
saDbMap, err := sa.DBMapForTest(vars.DBConnSA)
test.AssertNotError(t, err, "Couldn't connect to database")
saCleanup := test.ResetBoulderTestDatabase(t)
@@ -157,16 +162,16 @@ func TestCheckCertReturnsDNSNames(t *testing.T) {
t.Fatal("failed to parse cert PEM")
}
cert := core.Certificate{
cert := &corepb.Certificate{
Serial: "00000000000",
Digest: core.Fingerprint256(block.Bytes),
Expires: time.Now().Add(time.Hour),
Issued: time.Now(),
DER: block.Bytes,
Expires: timestamppb.New(time.Now().Add(time.Hour)),
Issued: timestamppb.New(time.Now()),
Der: block.Bytes,
}
names, problems := checker.checkCert(context.Background(), cert)
if !slices.Equal(names, []string{"quite_invalid.com", "al--so--wr--ong.com"}) {
if !slices.Equal(names, []string{"quite_invalid.com", "al--so--wr--ong.com", "127.0.0.1"}) {
t.Errorf("didn't get expected SANs. other problems: %s", strings.Join(problems, "\n"))
}
}
@@ -262,11 +267,11 @@ func TestCheckCert(t *testing.T) {
// Serial doesn't match
// Expiry doesn't match
// Issued doesn't match
cert := core.Certificate{
cert := &corepb.Certificate{
Serial: "8485f2687eba29ad455ae4e31c8679206fec",
DER: brokenCertDer,
Issued: issued.Add(12 * time.Hour),
Expires: goodExpiry.AddDate(0, 0, 2), // Expiration doesn't match
Der: brokenCertDer,
Issued: timestamppb.New(issued.Add(12 * time.Hour)),
Expires: timestamppb.New(goodExpiry.AddDate(0, 0, 2)), // Expiration doesn't match
}
_, problems := checker.checkCert(context.Background(), cert)
@@ -318,9 +323,9 @@ func TestCheckCert(t *testing.T) {
test.AssertNotError(t, err, "Couldn't parse created certificate")
cert.Serial = core.SerialToString(serial)
cert.Digest = core.Fingerprint256(goodCertDer)
cert.DER = goodCertDer
cert.Expires = parsed.NotAfter
cert.Issued = parsed.NotBefore
cert.Der = goodCertDer
cert.Expires = timestamppb.New(parsed.NotAfter)
cert.Issued = timestamppb.New(parsed.NotBefore)
_, problems = checker.checkCert(context.Background(), cert)
test.AssertEquals(t, len(problems), 0)
})
@@ -396,9 +401,6 @@ func (db mismatchedCountDB) SelectNullInt(_ context.Context, _ string, _ ...inte
// `getCerts` then calls `Select` to retrieve the Certificate rows. We pull
// a dastardly switch-a-roo here and return an empty set
func (db mismatchedCountDB) Select(_ context.Context, output interface{}, _ string, _ ...interface{}) ([]interface{}, error) {
// But actually return nothing
outputPtr, _ := output.(*[]sa.CertWithID)
*outputPtr = []sa.CertWithID{}
return nil, nil
}
@@ -624,12 +626,12 @@ func TestIgnoredLint(t *testing.T) {
subjectCert, err := x509.ParseCertificate(subjectCertDer)
test.AssertNotError(t, err, "failed to parse EE cert")
cert := core.Certificate{
cert := &corepb.Certificate{
Serial: core.SerialToString(serial),
DER: subjectCertDer,
Der: subjectCertDer,
Digest: core.Fingerprint256(subjectCertDer),
Issued: subjectCert.NotBefore,
Expires: subjectCert.NotAfter,
Issued: timestamppb.New(subjectCert.NotBefore),
Expires: timestamppb.New(subjectCert.NotAfter),
}
// Without any ignored lints we expect several errors and warnings about SCTs,
@@ -679,12 +681,12 @@ func TestPrecertCorrespond(t *testing.T) {
SerialNumber: serial,
}
certDer, _ := x509.CreateCertificate(rand.Reader, &rawCert, &rawCert, &testKey.PublicKey, testKey)
cert := core.Certificate{
cert := &corepb.Certificate{
Serial: core.SerialToString(serial),
Digest: core.Fingerprint256(certDer),
DER: certDer,
Issued: time.Now(),
Expires: expiry,
Der: certDer,
Issued: timestamppb.New(time.Now()),
Expires: timestamppb.New(expiry),
}
_, problems := checker.checkCert(context.Background(), cert)
if len(problems) == 0 {


@@ -1,5 +1,5 @@
-----BEGIN CERTIFICATE-----
MIIDUzCCAjugAwIBAgIILgLqdMwyzT4wDQYJKoZIhvcNAQELBQAwIDEeMBwGA1UE
MIIDWTCCAkGgAwIBAgIILgLqdMwyzT4wDQYJKoZIhvcNAQELBQAwIDEeMBwGA1UE
AxMVbWluaWNhIHJvb3QgY2EgOTMzZTM5MB4XDTIxMTExMTIwMjMzMloXDTIzMTIx
MTIwMjMzMlowHDEaMBgGA1UEAwwRcXVpdGVfaW52YWxpZC5jb20wggEiMA0GCSqG
SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDi4jBbqMyvhMonDngNsvie9SHPB16mdpiy
@@ -7,14 +7,14 @@ Y/agreU84xUz/roKK07TpVmeqvwWvDkvHTFov7ytKdnCY+z/NXKJ3hNqflWCwU7h
Uk9TmpBp0vg+5NvalYul/+bq/B4qDhEvTBzAX3k/UYzd0GQdMyAbwXtG41f5cSK6
cWTQYfJL3gGR5/KLoTz3/VemLgEgAP/CvgcUJPbQceQViiZ4opi9hFIfUqxX2NsD
49klw8cDFu/BG2LEC+XtbdT8XevD0aGIOuYVr+Pa2mxb2QCDXu4tXOsDXH9Y/Cmk
8103QbdB8Y+usOiHG/IXxK2q4J7QNPal4ER4/PGA06V0gwrjNH8BAgMBAAGjgZQw
gZEwDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcD
8103QbdB8Y+usOiHG/IXxK2q4J7QNPal4ER4/PGA06V0gwrjNH8BAgMBAAGjgZow
gZcwDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcD
AjAMBgNVHRMBAf8EAjAAMB8GA1UdIwQYMBaAFNIcaCjv32YRafE065dZO57ONWuk
MDEGA1UdEQQqMCiCEXF1aXRlX2ludmFsaWQuY29tghNhbC0tc28tLXdyLS1vbmcu
Y29tMA0GCSqGSIb3DQEBCwUAA4IBAQAjSv0o5G4VuLnnwHON4P53bLvGnYqaqYju
TEafi3hSgHAfBuhOQUVgwujoYpPp1w1fm5spfcbSwNNRte79HgV97kAuZ4R4RHk1
5Xux1ITLalaHR/ilu002N0eJ7dFYawBgV2xMudULzohwmW2RjPJ5811iWwtiVf1b
A3V5SZJWSJll1BhANBs7R0pBbyTSNHR470N8TGG0jfXqgTKd0xZaH91HrwEMo+96
llbfp90Y5OfHIfym/N1sH2hVgd+ZAkhiVEiNBWZlbSyOgbZ1cCBvBXg6TuwpQMZK
9RWjlpni8yuzLGduPl8qHG1dqsUvbVqcG+WhHLbaZMNhiMfiWInL
MDcGA1UdEQQwMC6CEXF1aXRlX2ludmFsaWQuY29tghNhbC0tc28tLXdyLS1vbmcu
Y29thwR/AAABMA0GCSqGSIb3DQEBCwUAA4IBAQAjSv0o5G4VuLnnwHON4P53bLvG
nYqaqYjuTEafi3hSgHAfBuhOQUVgwujoYpPp1w1fm5spfcbSwNNRte79HgV97kAu
Z4R4RHk15Xux1ITLalaHR/ilu002N0eJ7dFYawBgV2xMudULzohwmW2RjPJ5811i
WwtiVf1bA3V5SZJWSJll1BhANBs7R0pBbyTSNHR470N8TGG0jfXqgTKd0xZaH91H
rwEMo+96llbfp90Y5OfHIfym/N1sH2hVgd+ZAkhiVEiNBWZlbSyOgbZ1cCBvBXg6
TuwpQMZK9RWjlpni8yuzLGduPl8qHG1dqsUvbVqcG+WhHLbaZMNhiMfiWInL
-----END CERTIFICATE-----


@@ -16,6 +16,7 @@ import (
"github.com/letsencrypt/boulder/config"
"github.com/letsencrypt/boulder/core"
"github.com/letsencrypt/boulder/identifier"
)
// PasswordConfig contains a path to a file containing a password.
@@ -88,6 +89,8 @@ func (d *DBConfig) URL() (string, error) {
return strings.TrimSpace(string(url)), err
}
// SMTPConfig is deprecated.
// TODO(#8199): Delete this when it is removed from bad-key-revoker's config.
type SMTPConfig struct {
PasswordConfig
Server string `validate:"required"`
@@ -101,6 +104,7 @@ type SMTPConfig struct {
type PAConfig struct {
DBConfig `validate:"-"`
Challenges map[core.AcmeChallenge]bool `validate:"omitempty,dive,keys,oneof=http-01 dns-01 tls-alpn-01,endkeys"`
Identifiers map[identifier.IdentifierType]bool `validate:"omitempty,dive,keys,oneof=dns ip,endkeys"`
}
// CheckChallenges checks whether the list of challenges in the PA config
@@ -117,6 +121,17 @@ func (pc PAConfig) CheckChallenges() error {
return nil
}
// CheckIdentifiers checks whether the list of identifiers in the PA config
// actually contains valid identifier type names
func (pc PAConfig) CheckIdentifiers() error {
for i := range pc.Identifiers {
if !i.IsValid() {
return fmt.Errorf("invalid identifier type in PA config: %s", i)
}
}
return nil
}
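The new CheckIdentifiers follows the same shape as CheckChallenges: iterate the config map's keys and reject any unknown name, treating an empty map as valid. A self-contained sketch of the pattern, using a stand-in IdentifierType instead of Boulder's identifier package:

```go
package main

import "fmt"

// IdentifierType stands in for Boulder's identifier.IdentifierType here.
type IdentifierType string

// IsValid reports whether the type is one the policy layer understands,
// mirroring the "dns ip" set allowed by the struct tag above.
func (t IdentifierType) IsValid() bool {
	return t == "dns" || t == "ip"
}

// checkIdentifiers mirrors PAConfig.CheckIdentifiers: an empty map is fine,
// but any key that isn't a valid identifier type is a config error.
func checkIdentifiers(ids map[IdentifierType]bool) error {
	for i := range ids {
		if !i.IsValid() {
			return fmt.Errorf("invalid identifier type in PA config: %s", i)
		}
	}
	return nil
}

func main() {
	fmt.Println(checkIdentifiers(map[IdentifierType]bool{"dns": true, "ip": true}))
	fmt.Println(checkIdentifiers(map[IdentifierType]bool{"email": true}))
}
```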
// HostnamePolicyConfig specifies a file from which to load a policy regarding
// what hostnames to issue for.
type HostnamePolicyConfig struct {
@@ -284,7 +299,7 @@ type GRPCClientConfig struct {
// If you've added the above to your Consul configuration file (and reloaded
// Consul) then you should be able to resolve the following dig query:
//
// $ dig @10.55.55.10 -t SRV _foo._tcp.service.consul +short
// $ dig @10.77.77.10 -t SRV _foo._tcp.service.consul +short
// 1 1 8080 0a585858.addr.dc1.consul.
// 1 1 8080 0a4d4d4d.addr.dc1.consul.
SRVLookup *ServiceDomain `validate:"required_without_all=SRVLookups ServerAddress ServerIPAddresses"`
@@ -324,7 +339,7 @@ type GRPCClientConfig struct {
// If you've added the above to your Consul configuration file (and reloaded
// Consul) then you should be able to resolve the following dig query:
//
// $ dig A @10.55.55.10 foo.service.consul +short
// $ dig A @10.77.77.10 foo.service.consul +short
// 10.77.77.77
// 10.88.88.88
ServerAddress string `validate:"required_without_all=ServerIPAddresses SRVLookup SRVLookups,omitempty,hostname_port"`
@@ -450,7 +465,7 @@ type GRPCServerConfig struct {
// These service names must match the service names advertised by gRPC itself,
// which are identical to the names set in our gRPC .proto files prefixed by
// the package names set in those files (e.g. "ca.CertificateAuthority").
Services map[string]GRPCServiceConfig `json:"services" validate:"required,dive,required"`
Services map[string]*GRPCServiceConfig `json:"services" validate:"required,dive,required"`
// MaxConnectionAge specifies how long a connection may live before the server sends a GoAway to the
// client. Because gRPC connections re-resolve DNS after a connection close,
// this controls how long it takes before a client learns about changes to its
@@ -461,10 +476,10 @@ type GRPCServerConfig struct {
// GRPCServiceConfig contains the information needed to configure a gRPC service.
type GRPCServiceConfig struct {
// PerServiceClientNames is a map of gRPC service names to client certificate
// SANs. The upstream listening server will reject connections from clients
// which do not appear in this list, and the server interceptor will reject
// RPC calls for this service from clients which are not listed here.
// ClientNames is the list of accepted gRPC client certificate SANs.
// Connections from clients not in this list will be rejected by the
// upstream listener, and RPCs from unlisted clients will be denied by the
// server interceptor.
ClientNames []string `json:"clientNames" validate:"min=1,dive,hostname,required"`
}
@@ -549,7 +564,7 @@ type DNSProvider struct {
// If you've added the above to your Consul configuration file (and reloaded
// Consul) then you should be able to resolve the following dig query:
//
// $ dig @10.55.55.10 -t SRV _unbound._udp.service.consul +short
// $ dig @10.77.77.10 -t SRV _unbound._udp.service.consul +short
// 1 1 8053 0a4d4d4d.addr.dc1.consul.
// 1 1 8153 0a4d4d4d.addr.dc1.consul.
SRVLookup ServiceDomain `validate:"required"`


@@ -1,84 +0,0 @@
# Contact-Auditor
Audits subscriber registrations for e-mail addresses that
`notify-mailer` is currently configured to skip.
# Usage:
```shell
-config string
File containing a JSON config.
-to-file
Write the audit results to a file.
-to-stdout
Print the audit results to stdout.
```
## Results format:
```
<id> <createdAt> <problem type> "<contact contents or entry>" "<error msg>"
```
## Example output:
### Successful run with no violations encountered and `--to-file`:
```
I004823 contact-auditor nfWK_gM Running contact-auditor
I004823 contact-auditor qJ_zsQ4 Beginning database query
I004823 contact-auditor je7V9QM Query completed successfully
I004823 contact-auditor 7LzGvQI Audit finished successfully
I004823 contact-auditor 5Pbk_QM Audit results were written to: audit-2006-01-02T15:04.tsv
```
### Contact contains entries that violate policy and `--to-stdout`:
```
I004823 contact-auditor nfWK_gM Running contact-auditor
I004823 contact-auditor qJ_zsQ4 Beginning database query
I004823 contact-auditor je7V9QM Query completed successfully
1 2006-01-02 15:04:05 validation "<contact entry>" "<error msg>"
...
I004823 contact-auditor 2fv7-QY Audit finished successfully
```
### Contact is not valid JSON and `--to-stdout`:
```
I004823 contact-auditor nfWK_gM Running contact-auditor
I004823 contact-auditor qJ_zsQ4 Beginning database query
I004823 contact-auditor je7V9QM Query completed successfully
3 2006-01-02 15:04:05 unmarshal "<contact contents>" "<error msg>"
...
I004823 contact-auditor 2fv7-QY Audit finished successfully
```
### Audit incomplete, query ended prematurely:
```
I004823 contact-auditor nfWK_gM Running contact-auditor
I004823 contact-auditor qJ_zsQ4 Beginning database query
...
E004823 contact-auditor 8LmTgww [AUDIT] Audit was interrupted, results may be incomplete: <error msg>
exit status 1
```
# Configuration file:
The path to a database config file like the one below must be provided
following the `-config` flag.
```json
{
"contactAuditor": {
"db": {
"dbConnectFile": <string>,
"maxOpenConns": <int>,
"maxIdleConns": <int>,
"connMaxLifetime": <int>,
"connMaxIdleTime": <int>
}
}
}
```


@@ -1,212 +0,0 @@
package notmain
import (
"context"
"database/sql"
"encoding/json"
"errors"
"flag"
"fmt"
"os"
"strings"
"time"
"github.com/letsencrypt/boulder/cmd"
"github.com/letsencrypt/boulder/db"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/policy"
"github.com/letsencrypt/boulder/sa"
)
type contactAuditor struct {
db *db.WrappedMap
resultsFile *os.File
writeToStdout bool
logger blog.Logger
}
type result struct {
id int64
contacts []string
createdAt string
}
func unmarshalContact(contact []byte) ([]string, error) {
var contacts []string
err := json.Unmarshal(contact, &contacts)
if err != nil {
return nil, err
}
return contacts, nil
}
func validateContacts(id int64, createdAt string, contacts []string) error {
// Setup a buffer to store any validation problems we encounter.
var probsBuff strings.Builder
// Helper to write validation problems to our buffer.
writeProb := func(contact string, prob string) {
// Add validation problem to buffer.
fmt.Fprintf(&probsBuff, "%d\t%s\tvalidation\t%q\t%q\t%q\n", id, createdAt, contact, prob, contacts)
}
for _, contact := range contacts {
if strings.HasPrefix(contact, "mailto:") {
err := policy.ValidEmail(strings.TrimPrefix(contact, "mailto:"))
if err != nil {
writeProb(contact, err.Error())
}
} else {
writeProb(contact, "missing 'mailto:' prefix")
}
}
if probsBuff.Len() != 0 {
return errors.New(probsBuff.String())
}
return nil
}
// beginAuditQuery executes the audit query and returns a cursor used to
// stream the results.
func (c contactAuditor) beginAuditQuery(ctx context.Context) (*sql.Rows, error) {
rows, err := c.db.QueryContext(ctx, `
SELECT DISTINCT id, contact, createdAt
FROM registrations
WHERE contact NOT IN ('[]', 'null');`)
if err != nil {
return nil, err
}
return rows, nil
}
func (c contactAuditor) writeResults(result string) {
if c.writeToStdout {
_, err := fmt.Print(result)
if err != nil {
c.logger.Errf("Error while writing result to stdout: %s", err)
}
}
if c.resultsFile != nil {
_, err := c.resultsFile.WriteString(result)
if err != nil {
c.logger.Errf("Error while writing result to file: %s", err)
}
}
}
// run retrieves a cursor from `beginAuditQuery` and then audits the
// `contact` column of all returned rows for abnormalities or policy
// violations.
func (c contactAuditor) run(ctx context.Context, resChan chan *result) error {
c.logger.Infof("Beginning database query")
rows, err := c.beginAuditQuery(ctx)
if err != nil {
return err
}
for rows.Next() {
var id int64
var contact []byte
var createdAt string
err := rows.Scan(&id, &contact, &createdAt)
if err != nil {
return err
}
contacts, err := unmarshalContact(contact)
if err != nil {
c.writeResults(fmt.Sprintf("%d\t%s\tunmarshal\t%q\t%q\n", id, createdAt, contact, err))
}
err = validateContacts(id, createdAt, contacts)
if err != nil {
c.writeResults(err.Error())
}
// Only used for testing.
if resChan != nil {
resChan <- &result{id, contacts, createdAt}
}
}
// Ensure the query wasn't interrupted before it could complete.
err = rows.Close()
if err != nil {
return err
} else {
c.logger.Info("Query completed successfully")
}
// Only used for testing.
if resChan != nil {
close(resChan)
}
return nil
}
type Config struct {
ContactAuditor struct {
DB cmd.DBConfig
}
}
func main() {
configFile := flag.String("config", "", "File containing a JSON config.")
writeToStdout := flag.Bool("to-stdout", false, "Print the audit results to stdout.")
writeToFile := flag.Bool("to-file", false, "Write the audit results to a file.")
flag.Parse()
logger := cmd.NewLogger(cmd.SyslogConfig{StdoutLevel: 7})
logger.Info(cmd.VersionString())
if *configFile == "" {
flag.Usage()
os.Exit(1)
}
// Load config from JSON.
configData, err := os.ReadFile(*configFile)
cmd.FailOnError(err, fmt.Sprintf("Error reading config file: %q", *configFile))
var cfg Config
err = json.Unmarshal(configData, &cfg)
cmd.FailOnError(err, "Couldn't unmarshal config")
db, err := sa.InitWrappedDb(cfg.ContactAuditor.DB, nil, logger)
cmd.FailOnError(err, "Couldn't setup database client")
var resultsFile *os.File
if *writeToFile {
resultsFile, err = os.Create(
fmt.Sprintf("contact-audit-%s.tsv", time.Now().Format("2006-01-02T15:04")),
)
cmd.FailOnError(err, "Failed to create results file")
}
// Setup and run contact-auditor.
auditor := contactAuditor{
db: db,
resultsFile: resultsFile,
writeToStdout: *writeToStdout,
logger: logger,
}
logger.Info("Running contact-auditor")
err = auditor.run(context.TODO(), nil)
cmd.FailOnError(err, "Audit was interrupted, results may be incomplete")
logger.Info("Audit finished successfully")
if *writeToFile {
logger.Infof("Audit results were written to: %s", resultsFile.Name())
resultsFile.Close()
}
}
func init() {
cmd.RegisterCommand("contact-auditor", main, &cmd.ConfigValidator{Config: &Config{}})
}


@@ -1,212 +0,0 @@
package notmain
import (
"context"
"fmt"
"os"
"strings"
"testing"
"time"
"github.com/jmhodges/clock"
corepb "github.com/letsencrypt/boulder/core/proto"
"github.com/letsencrypt/boulder/db"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/metrics"
"github.com/letsencrypt/boulder/sa"
"github.com/letsencrypt/boulder/test"
"github.com/letsencrypt/boulder/test/vars"
)
var (
regA *corepb.Registration
regB *corepb.Registration
regC *corepb.Registration
regD *corepb.Registration
)
const (
emailARaw = "test@example.com"
emailBRaw = "example@notexample.com"
emailCRaw = "test-example@notexample.com"
telNum = "666-666-7777"
)
func TestContactAuditor(t *testing.T) {
testCtx := setup(t)
defer testCtx.cleanUp()
// Add some test registrations.
testCtx.addRegistrations(t)
resChan := make(chan *result, 10)
err := testCtx.c.run(context.Background(), resChan)
test.AssertNotError(t, err, "received error")
// We should get back A, B, C, and D
test.AssertEquals(t, len(resChan), 4)
for entry := range resChan {
err := validateContacts(entry.id, entry.createdAt, entry.contacts)
switch entry.id {
case regA.Id:
// Contact validation policy sad path.
test.AssertDeepEquals(t, entry.contacts, []string{"mailto:test@example.com"})
test.AssertError(t, err, "failed to error on a contact that violates our e-mail policy")
case regB.Id:
// Ensure grace period was respected.
test.AssertDeepEquals(t, entry.contacts, []string{"mailto:example@notexample.com"})
test.AssertNotError(t, err, "received error for a valid contact entry")
case regC.Id:
// Contact validation happy path.
test.AssertDeepEquals(t, entry.contacts, []string{"mailto:test-example@notexample.com"})
test.AssertNotError(t, err, "received error for a valid contact entry")
// Unmarshal Contact sad path.
_, err := unmarshalContact([]byte("[ mailto:test@example.com ]"))
test.AssertError(t, err, "failed to error while unmarshaling invalid Contact JSON")
// Fix our JSON and ensure that the contact field returns
// errors for our 2 additional contacts
contacts, err := unmarshalContact([]byte(`[ "mailto:test@example.com", "tel:666-666-7777" ]`))
test.AssertNotError(t, err, "received error while unmarshaling valid Contact JSON")
// Ensure Contact validation now fails.
err = validateContacts(entry.id, entry.createdAt, contacts)
test.AssertError(t, err, "failed to error on 2 invalid Contact entries")
case regD.Id:
test.AssertDeepEquals(t, entry.contacts, []string{"tel:666-666-7777"})
test.AssertError(t, err, "failed to error on an invalid contact entry")
default:
t.Errorf("ID: %d was not expected", entry.id)
}
}
// Load results file.
data, err := os.ReadFile(testCtx.c.resultsFile.Name())
if err != nil {
t.Error(err)
}
// Results file should contain 2 newlines, 1 for each result.
contentLines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
test.AssertEquals(t, len(contentLines), 2)
// Each result entry should contain six tab separated columns.
for _, line := range contentLines {
test.AssertEquals(t, len(strings.Split(line, "\t")), 6)
}
}
type testCtx struct {
c contactAuditor
dbMap *db.WrappedMap
ssa *sa.SQLStorageAuthority
cleanUp func()
}
func (tc testCtx) addRegistrations(t *testing.T) {
emailA := "mailto:" + emailARaw
emailB := "mailto:" + emailBRaw
emailC := "mailto:" + emailCRaw
tel := "tel:" + telNum
// Every registration needs a unique JOSE key
jsonKeyA := []byte(`{
"kty":"RSA",
"n":"0vx7agoebGcQSuuPiLJXZptN9nndrQmbXEps2aiAFbWhM78LhWx4cbbfAAtVT86zwu1RK7aPFFxuhDR1L6tSoc_BJECPebWKRXjBZCiFV4n3oknjhMstn64tZ_2W-5JsGY4Hc5n9yBXArwl93lqt7_RN5w6Cf0h4QyQ5v-65YGjQR0_FDW2QvzqY368QQMicAtaSqzs8KJZgnYb9c7d0zgdAZHzu6qMQvRL5hajrn1n91CbOpbISD08qNLyrdkt-bFTWhAI4vMQFh6WeZu0fM4lFd2NcRwr3XPksINHaQ-G_xBniIqbw0Ls1jF44-csFCur-kEgU8awapJzKnqDKgw",
"e":"AQAB"
}`)
jsonKeyB := []byte(`{
"kty":"RSA",
"n":"z8bp-jPtHt4lKBqepeKF28g_QAEOuEsCIou6sZ9ndsQsEjxEOQxQ0xNOQezsKa63eogw8YS3vzjUcPP5BJuVzfPfGd5NVUdT-vSSwxk3wvk_jtNqhrpcoG0elRPQfMVsQWmxCAXCVRz3xbcFI8GTe-syynG3l-g1IzYIIZVNI6jdljCZML1HOMTTW4f7uJJ8mM-08oQCeHbr5ejK7O2yMSSYxW03zY-Tj1iVEebROeMv6IEEJNFSS4yM-hLpNAqVuQxFGetwtwjDMC1Drs1dTWrPuUAAjKGrP151z1_dE74M5evpAhZUmpKv1hY-x85DC6N0hFPgowsanmTNNiV75w",
"e":"AAEAAQ"
}`)
jsonKeyC := []byte(`{
"kty":"RSA",
"n":"rFH5kUBZrlPj73epjJjyCxzVzZuV--JjKgapoqm9pOuOt20BUTdHqVfC2oDclqM7HFhkkX9OSJMTHgZ7WaVqZv9u1X2yjdx9oVmMLuspX7EytW_ZKDZSzL-sCOFCuQAuYKkLbsdcA3eHBK_lwc4zwdeHFMKIulNvLqckkqYB9s8GpgNXBDIQ8GjR5HuJke_WUNjYHSd8jY1LU9swKWsLQe2YoQUz_ekQvBvBCoaFEtrtRaSJKNLIVDObXFr2TLIiFiM0Em90kK01-eQ7ZiruZTKomll64bRFPoNo4_uwubddg3xTqur2vdF3NyhTrYdvAgTem4uC0PFjEQ1bK_djBQ",
"e":"AQAB"
}`)
jsonKeyD := []byte(`{
"kty":"RSA",
"n":"rFH5kUBZrlPj73epjJjyCxzVzZuV--JjKgapoqm9pOuOt20BUTdHqVfC2oDclqM7HFhkkX9OSJMTHgZ7WaVqZv9u1X2yjdx9oVmMLuspX7EytW_ZKDZSzL-FCOFCuQAuYKkLbsdcA3eHBK_lwc4zwdeHFMKIulNvLqckkqYB9s8GpgNXBDIQ8GjR5HuJke_WUNjYHSd8jY1LU9swKWsLQe2YoQUz_ekQvBvBCoaFEtrtRaSJKNLIVDObXFr2TLIiFiM0Em90kK01-eQ7ZiruZTKomll64bRFPoNo4_uwubddg3xTqur2vdF3NyhTrYdvAgTem4uC0PFjEQ1bK_djBQ",
"e":"AQAB"
}`)
regA = &corepb.Registration{
Id: 1,
Contact: []string{emailA},
Key: jsonKeyA,
}
regB = &corepb.Registration{
Id: 2,
Contact: []string{emailB},
Key: jsonKeyB,
}
regC = &corepb.Registration{
Id: 3,
Contact: []string{emailC},
Key: jsonKeyC,
}
// Reg D has a `tel:` contact ACME URL
regD = &corepb.Registration{
Id: 4,
Contact: []string{tel},
Key: jsonKeyD,
}
// Add the four test registrations
ctx := context.Background()
var err error
regA, err = tc.ssa.NewRegistration(ctx, regA)
test.AssertNotError(t, err, "Couldn't store regA")
regB, err = tc.ssa.NewRegistration(ctx, regB)
test.AssertNotError(t, err, "Couldn't store regB")
regC, err = tc.ssa.NewRegistration(ctx, regC)
test.AssertNotError(t, err, "Couldn't store regC")
regD, err = tc.ssa.NewRegistration(ctx, regD)
test.AssertNotError(t, err, "Couldn't store regD")
}
func setup(t *testing.T) testCtx {
log := blog.UseMock()
// Using DBConnSAFullPerms to be able to insert registrations and
// certificates
dbMap, err := sa.DBMapForTest(vars.DBConnSAFullPerms)
if err != nil {
t.Fatalf("Couldn't connect to the database: %s", err)
}
// Make temp results file
file, err := os.CreateTemp("", fmt.Sprintf("audit-%s", time.Now().Format("2006-01-02T15:04")))
if err != nil {
t.Fatal(err)
}
cleanUp := func() {
test.ResetBoulderTestDatabase(t)
file.Close()
os.Remove(file.Name())
}
db, err := sa.DBMapForTest(vars.DBConnSAMailer)
if err != nil {
t.Fatalf("Couldn't connect to the database: %s", err)
}
ssa, err := sa.NewSQLStorageAuthority(dbMap, dbMap, nil, 1, 0, clock.New(), log, metrics.NoopRegisterer)
if err != nil {
t.Fatalf("unable to create SQLStorageAuthority: %s", err)
}
return testCtx{
c: contactAuditor{
db: db,
resultsFile: file,
logger: blog.NewMock(),
},
dbMap: dbMap,
ssa: ssa,
cleanUp: cleanUp,
}
}


@@ -57,10 +57,9 @@ type Config struct {
LookbackPeriod config.Duration `validate:"-"`
// UpdatePeriod controls how frequently the crl-updater runs and publishes
// new versions of every CRL shard. The Baseline Requirements, Section 4.9.7
// state that this MUST NOT be more than 7 days. We believe that future
// updates may require that this not be more than 24 hours, and currently
// recommend an UpdatePeriod of 6 hours.
// new versions of every CRL shard. The Baseline Requirements, Section 4.9.7:
// "MUST update and publish a new CRL within twenty-four (24) hours after
// recording a Certificate as revoked."
UpdatePeriod config.Duration
// UpdateTimeout controls how long a single CRL shard is allowed to attempt


@@ -49,6 +49,12 @@ type Config struct {
// PardotBaseURL is the base URL for the Pardot API. (e.g.,
// "https://pi.pardot.com")
PardotBaseURL string `validate:"required"`
// EmailCacheSize controls how many hashed email addresses are retained
// in memory to prevent duplicates from being sent to the Pardot API.
// Each entry consumes ~120 bytes, so 100,000 entries uses around 12MB
// of memory. If left unset, no caching is performed.
EmailCacheSize int `validate:"omitempty,min=1"`
}
Syslog cmd.SyslogConfig
OpenTelemetry cmd.OpenTelemetryConfig
@@ -87,6 +93,11 @@ func main() {
clientSecret, err := c.EmailExporter.ClientSecret.Pass()
cmd.FailOnError(err, "Loading clientSecret")
var cache *email.EmailCache
if c.EmailExporter.EmailCacheSize > 0 {
cache = email.NewHashedEmailCache(c.EmailExporter.EmailCacheSize, scope)
}
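The EmailCacheSize comment above sizes the cache from a rough ~120 bytes per entry, so 100,000 entries comes to "around 12MB". A quick worked check of that arithmetic (the bytes-per-entry figure is the config comment's estimate, not a measured constant):

```go
package main

import "fmt"

// approxCacheMB estimates the memory the hashed-email cache would use,
// applying the ~120 bytes/entry figure from the config comment above.
func approxCacheMB(entries int) float64 {
	const bytesPerEntry = 120 // estimate from the EmailCacheSize doc comment
	return float64(entries*bytesPerEntry) / (1 << 20)
}

func main() {
	// 100,000 entries * 120 bytes = 12,000,000 bytes, i.e. "around 12MB".
	fmt.Printf("%.1f MiB\n", approxCacheMB(100000))
}
```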
pardotClient, err := email.NewPardotClientImpl(
clk,
c.EmailExporter.PardotBusinessUnit,
@ -96,7 +107,7 @@ func main() {
c.EmailExporter.PardotBaseURL,
)
cmd.FailOnError(err, "Creating Pardot API client")
exporterServer := email.NewExporterImpl(pardotClient, c.EmailExporter.PerDayLimit, c.EmailExporter.MaxConcurrentRequests, scope, logger)
exporterServer := email.NewExporterImpl(pardotClient, cache, c.EmailExporter.PerDayLimit, c.EmailExporter.MaxConcurrentRequests, scope, logger)
tlsConfig, err := c.EmailExporter.TLS.Load(scope)
cmd.FailOnError(err, "Loading email-exporter TLS config")

View File

@@ -1,965 +0,0 @@
package notmain
import (
"bytes"
"context"
"crypto/x509"
"encoding/json"
"errors"
"flag"
"fmt"
"math"
netmail "net/mail"
"net/url"
"os"
"sort"
"strings"
"sync"
"text/template"
"time"
"github.com/jmhodges/clock"
"google.golang.org/grpc"
"github.com/prometheus/client_golang/prometheus"
"github.com/letsencrypt/boulder/cmd"
"github.com/letsencrypt/boulder/config"
"github.com/letsencrypt/boulder/core"
corepb "github.com/letsencrypt/boulder/core/proto"
"github.com/letsencrypt/boulder/db"
"github.com/letsencrypt/boulder/features"
bgrpc "github.com/letsencrypt/boulder/grpc"
"github.com/letsencrypt/boulder/identifier"
blog "github.com/letsencrypt/boulder/log"
bmail "github.com/letsencrypt/boulder/mail"
"github.com/letsencrypt/boulder/metrics"
"github.com/letsencrypt/boulder/policy"
"github.com/letsencrypt/boulder/sa"
sapb "github.com/letsencrypt/boulder/sa/proto"
)
const (
defaultExpirationSubject = "Let's Encrypt certificate expiration notice for domain {{.ExpirationSubject}}"
)
var (
errNoValidEmail = errors.New("no usable contact address")
)
type regStore interface {
GetRegistration(ctx context.Context, req *sapb.RegistrationID, _ ...grpc.CallOption) (*corepb.Registration, error)
}
// limiter tracks how many mails we've sent to a given address in a given day.
// Note that this does not track mails across restarts of the process.
// Modifications to `counts` and `currentDay` are protected by a mutex.
type limiter struct {
sync.RWMutex
// currentDay is a day in UTC, truncated to 24 hours. When the current
// time is more than 24 hours past this date, all counts reset and this
// date is updated.
currentDay time.Time
// counts is a map from address to number of mails we have attempted to
// send during `currentDay`.
counts map[string]int
// limit is the number of sends after which we'll return an error from
// check()
limit int
clk clock.Clock
}
const oneDay = 24 * time.Hour
// maybeBumpDay updates lim.currentDay if its current value is more than 24
// hours ago, and resets the counts map. Expects limiter is locked.
func (lim *limiter) maybeBumpDay() {
today := lim.clk.Now().Truncate(oneDay)
if (today.Sub(lim.currentDay) >= oneDay && len(lim.counts) > 0) ||
lim.counts == nil {
// Throw away counts so far and switch to a new day.
// This also does the initialization of counts and currentDay the first
// time inc() is called.
lim.counts = make(map[string]int)
lim.currentDay = today
}
}
// inc increments the count for the current day, and cleans up previous days
// if needed.
func (lim *limiter) inc(address string) {
lim.Lock()
defer lim.Unlock()
lim.maybeBumpDay()
lim.counts[address] += 1
}
// check checks whether the count for the given address is at the limit,
// and returns an error if so.
func (lim *limiter) check(address string) error {
lim.RLock()
defer lim.RUnlock()
lim.maybeBumpDay()
if lim.counts[address] >= lim.limit {
return errors.New("daily mail limit exceeded for this email address")
}
return nil
}
type mailer struct {
log blog.Logger
dbMap *db.WrappedMap
rs regStore
mailer bmail.Mailer
emailTemplate *template.Template
subjectTemplate *template.Template
nagTimes []time.Duration
parallelSends uint
certificatesPerTick int
// addressLimiter limits how many mails we'll send to a single address in
// a single day.
addressLimiter *limiter
// Maximum number of rows to update in a single SQL UPDATE statement.
updateChunkSize int
clk clock.Clock
stats mailerStats
}
type certDERWithRegID struct {
DER core.CertDER
RegID int64
}
type mailerStats struct {
sendDelay *prometheus.GaugeVec
sendDelayHistogram *prometheus.HistogramVec
nagsAtCapacity *prometheus.GaugeVec
errorCount *prometheus.CounterVec
sendLatency prometheus.Histogram
processingLatency prometheus.Histogram
certificatesExamined prometheus.Counter
certificatesAlreadyRenewed prometheus.Counter
certificatesPerAccountNeedingMail prometheus.Histogram
}
func (m *mailer) sendNags(conn bmail.Conn, contacts []string, certs []*x509.Certificate) error {
if len(certs) == 0 {
return errors.New("no certs given to send nags for")
}
emails := []string{}
for _, contact := range contacts {
parsed, err := url.Parse(contact)
if err != nil {
m.log.Errf("parsing contact email: %s", err)
continue
}
if parsed.Scheme != "mailto" {
continue
}
address := parsed.Opaque
err = policy.ValidEmail(address)
if err != nil {
m.log.Debugf("skipping invalid email: %s", err)
continue
}
err = m.addressLimiter.check(address)
if err != nil {
m.log.Infof("not sending mail: %s", err)
continue
}
m.addressLimiter.inc(address)
emails = append(emails, parsed.Opaque)
}
if len(emails) == 0 {
return errNoValidEmail
}
expiresIn := time.Duration(math.MaxInt64)
expDate := m.clk.Now()
domains := []string{}
serials := []string{}
// Pick out the expiration date that is closest to being hit.
for _, cert := range certs {
domains = append(domains, cert.DNSNames...)
serials = append(serials, core.SerialToString(cert.SerialNumber))
possible := cert.NotAfter.Sub(m.clk.Now())
if possible < expiresIn {
expiresIn = possible
expDate = cert.NotAfter
}
}
domains = core.UniqueLowerNames(domains)
sort.Strings(domains)
const maxSerials = 100
truncatedSerials := serials
if len(truncatedSerials) > maxSerials {
truncatedSerials = serials[0:maxSerials]
}
const maxDomains = 100
truncatedDomains := domains
if len(truncatedDomains) > maxDomains {
truncatedDomains = domains[0:maxDomains]
}
// Construct the information about the expiring certificates for use in the
// subject template
expiringSubject := fmt.Sprintf("%q", domains[0])
if len(domains) > 1 {
expiringSubject += fmt.Sprintf(" (and %d more)", len(domains)-1)
}
// Execute the subjectTemplate by filling in the ExpirationSubject
subjBuf := new(bytes.Buffer)
err := m.subjectTemplate.Execute(subjBuf, struct {
ExpirationSubject string
}{
ExpirationSubject: expiringSubject,
})
if err != nil {
m.stats.errorCount.With(prometheus.Labels{"type": "SubjectTemplateFailure"}).Inc()
return err
}
email := struct {
ExpirationDate string
DaysToExpiration int
DNSNames string
TruncatedDNSNames string
NumDNSNamesOmitted int
}{
ExpirationDate: expDate.UTC().Format(time.DateOnly),
DaysToExpiration: int(expiresIn.Hours() / 24),
DNSNames: strings.Join(domains, "\n"),
TruncatedDNSNames: strings.Join(truncatedDomains, "\n"),
NumDNSNamesOmitted: len(domains) - len(truncatedDomains),
}
msgBuf := new(bytes.Buffer)
err = m.emailTemplate.Execute(msgBuf, email)
if err != nil {
m.stats.errorCount.With(prometheus.Labels{"type": "TemplateFailure"}).Inc()
return err
}
logItem := struct {
DaysToExpiration int
TruncatedDNSNames []string
TruncatedSerials []string
}{
DaysToExpiration: email.DaysToExpiration,
TruncatedDNSNames: truncatedDomains,
TruncatedSerials: truncatedSerials,
}
logStr, err := json.Marshal(logItem)
if err != nil {
return fmt.Errorf("failed to serialize log line: %w", err)
}
m.log.Infof("attempting send for JSON=%s", string(logStr))
startSending := m.clk.Now()
err = conn.SendMail(emails, subjBuf.String(), msgBuf.String())
if err != nil {
return fmt.Errorf("failed send for %s: %w", string(logStr), err)
}
finishSending := m.clk.Now()
elapsed := finishSending.Sub(startSending)
m.stats.sendLatency.Observe(elapsed.Seconds())
return nil
}
// updateLastNagTimestamps updates the lastExpirationNagSent column for every cert in
// the given list. Even though it can encounter errors, it only logs them and
// does not return them, because we always prefer to simply continue.
func (m *mailer) updateLastNagTimestamps(ctx context.Context, certs []*x509.Certificate) {
for len(certs) > 0 {
size := len(certs)
if m.updateChunkSize > 0 && size > m.updateChunkSize {
size = m.updateChunkSize
}
chunk := certs[0:size]
certs = certs[size:]
m.updateLastNagTimestampsChunk(ctx, chunk)
}
}
// updateLastNagTimestampsChunk processes a single chunk (up to 65k) of certificates.
func (m *mailer) updateLastNagTimestampsChunk(ctx context.Context, certs []*x509.Certificate) {
params := make([]interface{}, len(certs)+1)
for i, cert := range certs {
params[i+1] = core.SerialToString(cert.SerialNumber)
}
query := fmt.Sprintf(
"UPDATE certificateStatus SET lastExpirationNagSent = ? WHERE serial IN (%s)",
db.QuestionMarks(len(certs)),
)
params[0] = m.clk.Now()
_, err := m.dbMap.ExecContext(ctx, query, params...)
if err != nil {
m.log.AuditErrf("Error updating certificate status for %d certs: %s", len(certs), err)
m.stats.errorCount.With(prometheus.Labels{"type": "UpdateCertificateStatus"}).Inc()
}
}
func (m *mailer) certIsRenewed(ctx context.Context, cert *x509.Certificate) (bool, error) {
idents := identifier.FromCert(cert)
var present bool
err := m.dbMap.SelectOne(
ctx,
&present,
`SELECT EXISTS (SELECT id FROM fqdnSets WHERE setHash = ? AND issued > ? LIMIT 1)`,
core.HashIdentifiers(idents),
cert.NotBefore,
)
return present, err
}
type work struct {
regID int64
certDERs []core.CertDER
}
func (m *mailer) processCerts(
ctx context.Context,
allCerts []certDERWithRegID,
expiresIn time.Duration,
) error {
regIDToCertDERs := make(map[int64][]core.CertDER)
for _, cert := range allCerts {
cs := regIDToCertDERs[cert.RegID]
cs = append(cs, cert.DER)
regIDToCertDERs[cert.RegID] = cs
}
parallelSends := m.parallelSends
if parallelSends == 0 {
parallelSends = 1
}
var wg sync.WaitGroup
workChan := make(chan work, len(regIDToCertDERs))
// Populate the work chan on a goroutine so work is available as soon
// as one of the sender routines starts.
go func(ch chan<- work) {
for regID, certs := range regIDToCertDERs {
ch <- work{regID, certs}
}
close(workChan)
}(workChan)
for senderNum := uint(0); senderNum < parallelSends; senderNum++ {
// For politeness' sake, don't open more than 1 new connection per
// second.
if senderNum > 0 {
time.Sleep(time.Second)
}
if ctx.Err() != nil {
return ctx.Err()
}
conn, err := m.mailer.Connect()
if err != nil {
m.log.AuditErrf("connecting parallel sender %d: %s", senderNum, err)
return err
}
wg.Add(1)
go func(conn bmail.Conn, ch <-chan work) {
defer wg.Done()
for w := range ch {
err := m.sendToOneRegID(ctx, conn, w.regID, w.certDERs, expiresIn)
if err != nil {
m.log.AuditErr(err.Error())
}
}
conn.Close()
}(conn, workChan)
}
wg.Wait()
return nil
}
func (m *mailer) sendToOneRegID(ctx context.Context, conn bmail.Conn, regID int64, certDERs []core.CertDER, expiresIn time.Duration) error {
if ctx.Err() != nil {
return ctx.Err()
}
if len(certDERs) == 0 {
return errors.New("shouldn't happen: empty certificate list in sendToOneRegID")
}
reg, err := m.rs.GetRegistration(ctx, &sapb.RegistrationID{Id: regID})
if err != nil {
m.stats.errorCount.With(prometheus.Labels{"type": "GetRegistration"}).Inc()
return fmt.Errorf("Error fetching registration %d: %s", regID, err)
}
parsedCerts := []*x509.Certificate{}
for i, certDER := range certDERs {
if ctx.Err() != nil {
return ctx.Err()
}
parsedCert, err := x509.ParseCertificate(certDER)
if err != nil {
// TODO(#1420): tell registration about this error
m.log.AuditErrf("Error parsing certificate: %s. Body: %x", err, certDER)
m.stats.errorCount.With(prometheus.Labels{"type": "ParseCertificate"}).Inc()
continue
}
// The histogram version of send delay reports the worst case send delay for
// a single regID in this cycle.
if i == 0 {
sendDelay := expiresIn - parsedCert.NotAfter.Sub(m.clk.Now())
m.stats.sendDelayHistogram.With(prometheus.Labels{"nag_group": expiresIn.String()}).Observe(
sendDelay.Truncate(time.Second).Seconds())
}
renewed, err := m.certIsRenewed(ctx, parsedCert)
if err != nil {
m.log.AuditErrf("expiration-mailer: error fetching renewal state: %v", err)
// assume not renewed
} else if renewed {
m.log.Debugf("Cert %s is already renewed", core.SerialToString(parsedCert.SerialNumber))
m.stats.certificatesAlreadyRenewed.Add(1)
m.updateLastNagTimestamps(ctx, []*x509.Certificate{parsedCert})
continue
}
parsedCerts = append(parsedCerts, parsedCert)
}
m.stats.certificatesPerAccountNeedingMail.Observe(float64(len(parsedCerts)))
if len(parsedCerts) == 0 {
// all certificates are renewed
return nil
}
err = m.sendNags(conn, reg.Contact, parsedCerts)
if err != nil {
// If the error was due to the address(es) being unusable or the mail being
// undeliverable, we don't want to try again later.
var badAddrErr *bmail.BadAddressSMTPError
if errors.Is(err, errNoValidEmail) || errors.As(err, &badAddrErr) {
m.updateLastNagTimestamps(ctx, parsedCerts)
// Some accounts have no email; some accounts have an invalid email.
// Treat those as non-error cases.
return nil
}
m.stats.errorCount.With(prometheus.Labels{"type": "SendNags"}).Inc()
return fmt.Errorf("sending nag emails: %s", err)
}
m.updateLastNagTimestamps(ctx, parsedCerts)
return nil
}
// findExpiringCertificates finds certificates that might need an expiration mail, filters them,
// groups by account, sends mail, and updates their status in the DB so we don't examine them again.
//
// Invariant: findExpiringCertificates should examine each certificate at most N times, where
// N is the number of reminders. For every certificate examined (barring errors), this function
// should update the lastExpirationNagSent field of certificateStatus, so it does not need to
// examine the same certificate again on the next go-round. This ensures we make forward progress
// and don't clog up the window of certificates to be examined.
func (m *mailer) findExpiringCertificates(ctx context.Context) error {
now := m.clk.Now()
// E.g. m.nagTimes = [2, 4, 8, 15] days from expiration
for i, expiresIn := range m.nagTimes {
left := now
if i > 0 {
left = left.Add(m.nagTimes[i-1])
}
right := now.Add(expiresIn)
m.log.Infof("expiration-mailer: Searching for certificates that expire between %s and %s and had last nag >%s before expiry",
left.UTC(), right.UTC(), expiresIn)
var certs []certDERWithRegID
var err error
if features.Get().ExpirationMailerUsesJoin {
certs, err = m.getCertsWithJoin(ctx, left, right, expiresIn)
} else {
certs, err = m.getCerts(ctx, left, right, expiresIn)
}
if err != nil {
return err
}
m.stats.certificatesExamined.Add(float64(len(certs)))
// If the number of rows was exactly `m.certificatesPerTick` rows we need to increment
// a stat indicating that this nag group is at capacity. If this condition
// continually occurs across mailer runs then we will not catch up,
// resulting in under-sending expiration mails. The effects of this
// were initially described in issue #2002[0].
//
// 0: https://github.com/letsencrypt/boulder/issues/2002
atCapacity := float64(0)
if len(certs) == m.certificatesPerTick {
m.log.Infof("nag group %s expiring certificates at configured capacity (select limit %d)",
expiresIn.String(), m.certificatesPerTick)
atCapacity = float64(1)
}
m.stats.nagsAtCapacity.With(prometheus.Labels{"nag_group": expiresIn.String()}).Set(atCapacity)
m.log.Infof("Found %d certificates expiring between %s and %s", len(certs),
left.Format(time.DateTime), right.Format(time.DateTime))
if len(certs) == 0 {
continue // nothing to do
}
processingStarted := m.clk.Now()
err = m.processCerts(ctx, certs, expiresIn)
if err != nil {
m.log.AuditErr(err.Error())
}
processingEnded := m.clk.Now()
elapsed := processingEnded.Sub(processingStarted)
m.stats.processingLatency.Observe(elapsed.Seconds())
}
return nil
}
func (m *mailer) getCertsWithJoin(ctx context.Context, left, right time.Time, expiresIn time.Duration) ([]certDERWithRegID, error) {
// Do a single query that JOINs certificateStatus against certificates to
// find certificates nearing expiry that meet our criteria for email
// notification, returning the DER and registration ID directly.
var certs []certDERWithRegID
_, err := m.dbMap.Select(
ctx,
&certs,
`SELECT
cert.der as der, cert.registrationID as regID
FROM certificateStatus AS cs
JOIN certificates as cert
ON cs.serial = cert.serial
AND cs.notAfter > :cutoffA
AND cs.notAfter <= :cutoffB
AND cs.status != "revoked"
AND COALESCE(TIMESTAMPDIFF(SECOND, cs.lastExpirationNagSent, cs.notAfter) > :nagCutoff, 1)
ORDER BY cs.notAfter ASC
LIMIT :certificatesPerTick`,
map[string]interface{}{
"cutoffA": left,
"cutoffB": right,
"nagCutoff": expiresIn.Seconds(),
"certificatesPerTick": m.certificatesPerTick,
},
)
if err != nil {
m.log.AuditErrf("expiration-mailer: Error loading certificate serials: %s", err)
return nil, err
}
m.log.Debugf("found %d certificates", len(certs))
return certs, nil
}
func (m *mailer) getCerts(ctx context.Context, left, right time.Time, expiresIn time.Duration) ([]certDERWithRegID, error) {
// First we do a query on the certificateStatus table to find certificates
// nearing expiry meeting our criteria for email notification. We later
// sequentially fetch the certificate details. This avoids an expensive
// JOIN.
var serials []string
_, err := m.dbMap.Select(
ctx,
&serials,
`SELECT
cs.serial
FROM certificateStatus AS cs
WHERE cs.notAfter > :cutoffA
AND cs.notAfter <= :cutoffB
AND cs.status != "revoked"
AND COALESCE(TIMESTAMPDIFF(SECOND, cs.lastExpirationNagSent, cs.notAfter) > :nagCutoff, 1)
ORDER BY cs.notAfter ASC
LIMIT :certificatesPerTick`,
map[string]interface{}{
"cutoffA": left,
"cutoffB": right,
"nagCutoff": expiresIn.Seconds(),
"certificatesPerTick": m.certificatesPerTick,
},
)
if err != nil {
m.log.AuditErrf("expiration-mailer: Error loading certificate serials: %s", err)
return nil, err
}
m.log.Debugf("found %d certificates", len(serials))
// Now we can sequentially retrieve the certificate details for each of the
// certificate status rows
var certs []certDERWithRegID
for i, serial := range serials {
if ctx.Err() != nil {
return nil, ctx.Err()
}
var cert core.Certificate
cert, err := sa.SelectCertificate(ctx, m.dbMap, serial)
if err != nil {
// We can get a NoRowsErr when processing a serial number corresponding
// to a precertificate with no final certificate. Since this certificate
// is not being used by a subscriber, we don't send expiration email about
// it.
if db.IsNoRows(err) {
m.log.Infof("no rows for serial %q", serial)
continue
}
m.log.AuditErrf("expiration-mailer: Error loading cert %q: %s", cert.Serial, err)
continue
}
certs = append(certs, certDERWithRegID{
DER: cert.DER,
RegID: cert.RegistrationID,
})
if i == 0 {
// Report the send delay metric. Note: this is the worst-case send delay
// of any certificate in this batch, because the rows are ordered by
// notAfter ascending, so the first is the one closest to expiry.
sendDelay := expiresIn - cert.Expires.Sub(m.clk.Now())
m.stats.sendDelay.With(prometheus.Labels{"nag_group": expiresIn.String()}).Set(
sendDelay.Truncate(time.Second).Seconds())
}
}
return certs, nil
}
type durationSlice []time.Duration
func (ds durationSlice) Len() int {
return len(ds)
}
func (ds durationSlice) Less(a, b int) bool {
return ds[a] < ds[b]
}
func (ds durationSlice) Swap(a, b int) {
ds[a], ds[b] = ds[b], ds[a]
}
type Config struct {
Mailer struct {
DebugAddr string `validate:"omitempty,hostname_port"`
DB cmd.DBConfig
cmd.SMTPConfig
// From is an RFC 5322 formatted "From" address for reminder messages,
// e.g. "Example <example@test.org>"
From string `validate:"required"`
// Subject is the Subject line of reminder messages. This is a Go
// template with a single variable: ExpirationSubject, which contains
// a list of affected hostnames, possibly truncated.
Subject string
// CertLimit is the maximum number of certificates to investigate in a
// single batch. Defaults to 100.
CertLimit int `validate:"min=0"`
// MailsPerAddressPerDay is the maximum number of emails we'll send to
// a single address in a single day. Defaults to math.MaxInt
// (effectively unlimited). Note that this does not track sends across
// restarts of the process, so we may send more than this when we
// restart expiration-mailer. This is a best-effort limitation.
MailsPerAddressPerDay int `validate:"min=0"`
// UpdateChunkSize is the maximum number of rows to update in a single
// SQL UPDATE statement.
UpdateChunkSize int `validate:"min=0,max=65535"`
NagTimes []string `validate:"min=1,dive,required"`
// Path to a text/template email template with a .gotmpl or .txt file
// extension.
EmailTemplate string `validate:"required"`
// How often to process a batch of certificates
Frequency config.Duration
// ParallelSends is the number of parallel goroutines used to process
// each batch of emails. Defaults to 1.
ParallelSends uint
TLS cmd.TLSConfig
SAService *cmd.GRPCClientConfig
// Path to a file containing a list of trusted root certificates for use
// during the SMTP connection (as opposed to the gRPC connections).
SMTPTrustedRootFile string
Features features.Config
}
Syslog cmd.SyslogConfig
OpenTelemetry cmd.OpenTelemetryConfig
}
func initStats(stats prometheus.Registerer) mailerStats {
sendDelay := prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "send_delay",
Help: "For the last batch of certificates, difference between the idealized send time and actual send time. Will always be nonzero, bigger numbers are worse",
},
[]string{"nag_group"})
stats.MustRegister(sendDelay)
sendDelayHistogram := prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "send_delay_histogram",
Help: "For each mail sent, difference between the idealized send time and actual send time. Will always be nonzero, bigger numbers are worse",
Buckets: prometheus.LinearBuckets(86400, 86400, 10),
},
[]string{"nag_group"})
stats.MustRegister(sendDelayHistogram)
nagsAtCapacity := prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "nags_at_capacity",
Help: "Count of nag groups at capacity",
},
[]string{"nag_group"})
stats.MustRegister(nagsAtCapacity)
errorCount := prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "errors",
Help: "Number of errors",
},
[]string{"type"})
stats.MustRegister(errorCount)
sendLatency := prometheus.NewHistogram(
prometheus.HistogramOpts{
Name: "send_latency",
Help: "Time the mailer takes sending messages in seconds",
Buckets: metrics.InternetFacingBuckets,
})
stats.MustRegister(sendLatency)
processingLatency := prometheus.NewHistogram(
prometheus.HistogramOpts{
Name: "processing_latency",
Help: "Time the mailer takes processing certificates in seconds",
Buckets: []float64{30, 60, 75, 90, 120, 600, 3600},
})
stats.MustRegister(processingLatency)
certificatesExamined := prometheus.NewCounter(
prometheus.CounterOpts{
Name: "certificates_examined",
Help: "Number of certificates looked at that are potentially due for an expiration mail",
})
stats.MustRegister(certificatesExamined)
certificatesAlreadyRenewed := prometheus.NewCounter(
prometheus.CounterOpts{
Name: "certificates_already_renewed",
Help: "Number of certificates from certificates_examined that were ignored because they were already renewed",
})
stats.MustRegister(certificatesAlreadyRenewed)
accountsNeedingMail := prometheus.NewHistogram(
prometheus.HistogramOpts{
Name: "certificates_per_account_needing_mail",
Help: "After ignoring certificates_already_renewed and grouping the remaining certificates by account, how many accounts needed to get an email; grouped by how many certificates each account needed",
Buckets: []float64{0, 1, 2, 100, 1000, 10000, 100000},
})
stats.MustRegister(accountsNeedingMail)
return mailerStats{
sendDelay: sendDelay,
sendDelayHistogram: sendDelayHistogram,
nagsAtCapacity: nagsAtCapacity,
errorCount: errorCount,
sendLatency: sendLatency,
processingLatency: processingLatency,
certificatesExamined: certificatesExamined,
certificatesAlreadyRenewed: certificatesAlreadyRenewed,
certificatesPerAccountNeedingMail: accountsNeedingMail,
}
}
func main() {
debugAddr := flag.String("debug-addr", "", "Debug server address override")
configFile := flag.String("config", "", "File path to the configuration file for this service")
certLimit := flag.Int("cert_limit", 0, "Count of certificates to process per expiration period")
reconnBase := flag.Duration("reconnectBase", 1*time.Second, "Base sleep duration between reconnect attempts")
reconnMax := flag.Duration("reconnectMax", 5*60*time.Second, "Max sleep duration between reconnect attempts after exponential backoff")
daemon := flag.Bool("daemon", false, "Run in daemon mode")
flag.Parse()
if *configFile == "" {
flag.Usage()
os.Exit(1)
}
var c Config
err := cmd.ReadConfigFile(*configFile, &c)
cmd.FailOnError(err, "Reading JSON config file into config structure")
features.Set(c.Mailer.Features)
if *debugAddr != "" {
c.Mailer.DebugAddr = *debugAddr
}
scope, logger, oTelShutdown := cmd.StatsAndLogging(c.Syslog, c.OpenTelemetry, c.Mailer.DebugAddr)
defer oTelShutdown(context.Background())
logger.Info(cmd.VersionString())
if *daemon && c.Mailer.Frequency.Duration == 0 {
fmt.Fprintln(os.Stderr, "mailer.frequency is not set in the JSON config")
os.Exit(1)
}
if *certLimit > 0 {
c.Mailer.CertLimit = *certLimit
}
// Default to 100 if no certLimit is set
if c.Mailer.CertLimit == 0 {
c.Mailer.CertLimit = 100
}
if c.Mailer.MailsPerAddressPerDay == 0 {
c.Mailer.MailsPerAddressPerDay = math.MaxInt
}
dbMap, err := sa.InitWrappedDb(c.Mailer.DB, scope, logger)
cmd.FailOnError(err, "While initializing dbMap")
tlsConfig, err := c.Mailer.TLS.Load(scope)
cmd.FailOnError(err, "TLS config")
clk := cmd.Clock()
conn, err := bgrpc.ClientSetup(c.Mailer.SAService, tlsConfig, scope, clk)
cmd.FailOnError(err, "Failed to load credentials and create gRPC connection to SA")
sac := sapb.NewStorageAuthorityClient(conn)
var smtpRoots *x509.CertPool
if c.Mailer.SMTPTrustedRootFile != "" {
pem, err := os.ReadFile(c.Mailer.SMTPTrustedRootFile)
cmd.FailOnError(err, "Loading trusted roots file")
smtpRoots = x509.NewCertPool()
if !smtpRoots.AppendCertsFromPEM(pem) {
cmd.Fail("Failed to parse root certs PEM")
}
}
// Load email template
emailTmpl, err := os.ReadFile(c.Mailer.EmailTemplate)
cmd.FailOnError(err, fmt.Sprintf("Could not read email template file [%s]", c.Mailer.EmailTemplate))
tmpl, err := template.New("expiry-email").Parse(string(emailTmpl))
cmd.FailOnError(err, "Could not parse email template")
// If there is no configured subject template, use a default
if c.Mailer.Subject == "" {
c.Mailer.Subject = defaultExpirationSubject
}
// Load subject template
subjTmpl, err := template.New("expiry-email-subject").Parse(c.Mailer.Subject)
cmd.FailOnError(err, "Could not parse email subject template")
fromAddress, err := netmail.ParseAddress(c.Mailer.From)
cmd.FailOnError(err, fmt.Sprintf("Could not parse from address: %s", c.Mailer.From))
smtpPassword, err := c.Mailer.PasswordConfig.Pass()
cmd.FailOnError(err, "Failed to load SMTP password")
mailClient := bmail.New(
c.Mailer.Server,
c.Mailer.Port,
c.Mailer.Username,
smtpPassword,
smtpRoots,
*fromAddress,
logger,
scope,
*reconnBase,
*reconnMax)
var nags durationSlice
for _, nagDuration := range c.Mailer.NagTimes {
dur, err := time.ParseDuration(nagDuration)
if err != nil {
logger.AuditErrf("Failed to parse nag duration string [%s]: %s", nagDuration, err)
return
}
// Add some padding to the nag times so we send _before_ the configured
// time rather than after. See https://github.com/letsencrypt/boulder/pull/1029
adjustedInterval := dur + c.Mailer.Frequency.Duration
nags = append(nags, adjustedInterval)
}
// Make sure durations are sorted in increasing order
sort.Sort(nags)
if c.Mailer.UpdateChunkSize > 65535 {
// MariaDB limits the number of placeholders parameters to max_uint16:
// https://github.com/MariaDB/server/blob/10.5/sql/sql_prepare.cc#L2629-L2635
cmd.Fail(fmt.Sprintf("UpdateChunkSize of %d is too big", c.Mailer.UpdateChunkSize))
}
m := mailer{
log: logger,
dbMap: dbMap,
rs: sac,
mailer: mailClient,
subjectTemplate: subjTmpl,
emailTemplate: tmpl,
nagTimes: nags,
certificatesPerTick: c.Mailer.CertLimit,
addressLimiter: &limiter{clk: cmd.Clock(), limit: c.Mailer.MailsPerAddressPerDay},
updateChunkSize: c.Mailer.UpdateChunkSize,
parallelSends: c.Mailer.ParallelSends,
clk: clk,
stats: initStats(scope),
}
// Prefill this labelled stat with the possible label values, so each value is
// set to 0 on startup, rather than being missing from stats collection until
// the first mail run.
for _, expiresIn := range nags {
m.stats.nagsAtCapacity.With(prometheus.Labels{"nag_group": expiresIn.String()}).Set(0)
}
ctx, cancel := context.WithCancel(context.Background())
go cmd.CatchSignals(cancel)
if *daemon {
t := time.NewTicker(c.Mailer.Frequency.Duration)
for {
select {
case <-t.C:
err = m.findExpiringCertificates(ctx)
if err != nil && !errors.Is(err, context.Canceled) {
cmd.FailOnError(err, "expiration-mailer has failed")
}
case <-ctx.Done():
return
}
}
} else {
err = m.findExpiringCertificates(ctx)
if err != nil && !errors.Is(err, context.Canceled) {
cmd.FailOnError(err, "expiration-mailer has failed")
}
}
}
func init() {
cmd.RegisterCommand("expiration-mailer", main, &cmd.ConfigValidator{Config: &Config{}})
}

File diff suppressed because it is too large

View File

@@ -1,71 +0,0 @@
package notmain
import (
"crypto/x509"
"crypto/x509/pkix"
"fmt"
"math/big"
"testing"
"time"
"github.com/letsencrypt/boulder/mocks"
"github.com/letsencrypt/boulder/test"
)
var (
email1 = "mailto:one@shared-example.com"
email2 = "mailto:two@shared-example.com"
)
func TestSendEarliestCertInfo(t *testing.T) {
expiresIn := 24 * time.Hour
ctx := setup(t, []time.Duration{expiresIn})
defer ctx.cleanUp()
rawCertA := newX509Cert("happy A",
ctx.fc.Now().AddDate(0, 0, 5),
[]string{"example-A.com", "SHARED-example.com"},
serial1,
)
rawCertB := newX509Cert("happy B",
ctx.fc.Now().AddDate(0, 0, 2),
[]string{"shared-example.com", "example-b.com"},
serial2,
)
conn, err := ctx.m.mailer.Connect()
test.AssertNotError(t, err, "connecting SMTP")
err = ctx.m.sendNags(conn, []string{email1, email2}, []*x509.Certificate{rawCertA, rawCertB})
if err != nil {
t.Fatal(err)
}
if len(ctx.mc.Messages) != 2 {
t.Errorf("num of messages, want %d, got %d", 2, len(ctx.mc.Messages))
}
if len(ctx.mc.Messages) == 0 {
t.Fatalf("no message sent")
}
domains := "example-a.com\nexample-b.com\nshared-example.com"
expected := mocks.MailerMessage{
Subject: "Testing: Let's Encrypt certificate expiration notice for domain \"example-a.com\" (and 2 more)",
Body: fmt.Sprintf(`hi, cert for DNS names %s is going to expire in 2 days (%s)`,
domains,
rawCertB.NotAfter.Format(time.DateOnly)),
}
expected.To = "one@shared-example.com"
test.AssertEquals(t, expected, ctx.mc.Messages[0])
expected.To = "two@shared-example.com"
test.AssertEquals(t, expected, ctx.mc.Messages[1])
}
func newX509Cert(commonName string, notAfter time.Time, dnsNames []string, serial *big.Int) *x509.Certificate {
return &x509.Certificate{
Subject: pkix.Name{
CommonName: commonName,
},
NotAfter: notAfter,
DNSNames: dnsNames,
SerialNumber: serial,
}
}

View File

@@ -1,304 +0,0 @@
package notmain
import (
"bufio"
"context"
"encoding/json"
"errors"
"flag"
"fmt"
"os"
"strings"
"time"
"github.com/jmhodges/clock"
"github.com/letsencrypt/boulder/cmd"
"github.com/letsencrypt/boulder/db"
"github.com/letsencrypt/boulder/features"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/sa"
)
type idExporter struct {
log blog.Logger
dbMap *db.WrappedMap
clk clock.Clock
grace time.Duration
}
// resultEntry is a JSON marshalable exporter result entry.
type resultEntry struct {
// ID is exported to support marshaling to JSON.
ID int64 `json:"id"`
// Hostname is exported to support marshaling to JSON. Not all queries
// will fill this field, so its JSON field tag marks it as omitempty.
Hostname string `json:"hostname,omitempty"`
}
// reverseHostname converts (reversed) names sourced from the
// registrations table to standard hostnames.
func (r *resultEntry) reverseHostname() {
r.Hostname = sa.ReverseName(r.Hostname)
}
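Boulder stores names in the database with their labels reversed ("com.example.www") so that rows sort and scan by registered domain; sa.ReverseName undoes that. A self-contained sketch of the transformation (the `reverseName` helper here is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// reverseName flips the dot-separated labels of a name, converting the
// stored reversed form ("com.example.www") back to an ordinary hostname
// ("www.example.com"), as sa.ReverseName does for resultEntry.Hostname.
func reverseName(reversed string) string {
	labels := strings.Split(reversed, ".")
	for i, j := 0, len(labels)-1; i < j; i, j = i+1, j-1 {
		labels[i], labels[j] = labels[j], labels[i]
	}
	return strings.Join(labels, ".")
}

func main() {
	fmt.Println(reverseName("com.example.www")) // www.example.com
}
```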
// idExporterResults is passed as a selectable 'holder' for the results
// of id-exporter database queries
type idExporterResults []*resultEntry
// marshalToJSON returns JSON as bytes for all elements of the inner `id`
// slice.
func (i *idExporterResults) marshalToJSON() ([]byte, error) {
data, err := json.Marshal(i)
if err != nil {
return nil, err
}
data = append(data, '\n')
return data, nil
}
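As a quick illustration of the two output shapes described in the usage text below, here is a standalone sketch that mirrors `resultEntry` (not the production code): entries without a hostname omit the field entirely thanks to the `omitempty` tag.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// resultEntry mirrors the exporter's result shape for illustration.
type resultEntry struct {
	ID       int64  `json:"id"`
	Hostname string `json:"hostname,omitempty"`
}

// marshalResults renders a slice of entries as a JSON array.
func marshalResults(results []resultEntry) string {
	data, err := json.Marshal(results)
	if err != nil {
		panic(err)
	}
	return string(data)
}

func main() {
	fmt.Println(marshalResults([]resultEntry{
		{ID: 1},
		{ID: 2, Hostname: "example-2.com"},
	}))
	// [{"id":1},{"id":2,"hostname":"example-2.com"}]
}
```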
// writeToFile writes the contents of the inner `ids` slice, as JSON, to
// a file.
func (i *idExporterResults) writeToFile(outfile string) error {
data, err := i.marshalToJSON()
if err != nil {
return err
}
return os.WriteFile(outfile, data, 0644)
}
// findIDs gathers all registration IDs with unexpired certificates.
func (c idExporter) findIDs(ctx context.Context) (idExporterResults, error) {
var holder idExporterResults
_, err := c.dbMap.Select(
ctx,
&holder,
`SELECT DISTINCT r.id
FROM registrations AS r
INNER JOIN certificates AS c on c.registrationID = r.id
WHERE r.contact NOT IN ('[]', 'null')
AND c.expires >= :expireCutoff;`,
map[string]interface{}{
"expireCutoff": c.clk.Now().Add(-c.grace),
})
if err != nil {
c.log.AuditErrf("Error finding IDs: %s", err)
return nil, err
}
return holder, nil
}
// findIDsWithExampleHostnames gathers all registration IDs with
// unexpired certificates and a corresponding example hostname.
func (c idExporter) findIDsWithExampleHostnames(ctx context.Context) (idExporterResults, error) {
var holder idExporterResults
_, err := c.dbMap.Select(
ctx,
&holder,
`SELECT SQL_BIG_RESULT
cert.registrationID AS id,
name.reversedName AS hostname
FROM certificates AS cert
INNER JOIN issuedNames AS name ON name.serial = cert.serial
WHERE cert.expires >= :expireCutoff
GROUP BY cert.registrationID;`,
map[string]interface{}{
"expireCutoff": c.clk.Now().Add(-c.grace),
})
if err != nil {
c.log.AuditErrf("Error finding IDs and example hostnames: %s", err)
return nil, err
}
for _, result := range holder {
result.reverseHostname()
}
return holder, nil
}
// findIDsForHostnames gathers all registration IDs with unexpired
// certificates for each `hostnames` entry.
func (c idExporter) findIDsForHostnames(ctx context.Context, hostnames []string) (idExporterResults, error) {
var holder idExporterResults
for _, hostname := range hostnames {
// Pass the same list in each time; borp will happily append to the slice
// instead of overwriting it on each call.
// https://github.com/letsencrypt/borp/blob/c87bd6443d59746a33aca77db34a60cfc344adb2/select.go#L349-L353
_, err := c.dbMap.Select(
ctx,
&holder,
`SELECT DISTINCT c.registrationID AS id
FROM certificates AS c
INNER JOIN issuedNames AS n ON c.serial = n.serial
WHERE c.expires >= :expireCutoff
AND n.reversedName = :reversedName;`,
map[string]interface{}{
"expireCutoff": c.clk.Now().Add(-c.grace),
"reversedName": sa.ReverseName(hostname),
},
)
if err != nil {
if db.IsNoRows(err) {
continue
}
return nil, err
}
}
return holder, nil
}
const usageIntro = `
Introduction:
The ID exporter exists to retrieve the IDs of all registered
users with currently unexpired certificates. This list of registration IDs can
then be given as input to the notification mailer to send bulk notifications.
The -grace parameter can be used to allow registrations with certificates that
have already expired to be included in the export. The argument is a Go duration
obeying the usual suffix rules (e.g. 24h).
Registration IDs are favored over email addresses as the intermediate format in
order to ensure the most up-to-date contact information is used at the time of
notification. The notification mailer resolves each ID to its email address(es)
when the mailing is underway, ensuring the correct address is used if a user has
updated their contact information between the time of export and the time of
notification.
By default, the ID exporter's output will be JSON of the form:
[
{ "id": 1 },
...
{ "id": n }
]
Operations that return a hostname will be JSON of the form:
[
{ "id": 1, "hostname": "example-1.com" },
...
{ "id": n, "hostname": "example-n.com" }
]
Examples:
Export all registration IDs with unexpired certificates to "regs.json":
id-exporter -config test/config/id-exporter.json -outfile regs.json
Export all registration IDs with certificates that are unexpired or expired
within the last two days to "regs.json":
id-exporter -config test/config/id-exporter.json -grace 48h -outfile
"regs.json"
Required arguments:
- config
- outfile`
// unmarshalHostnames unmarshals a hostnames file and ensures that the file
// contained at least one entry.
func unmarshalHostnames(filePath string) ([]string, error) {
file, err := os.Open(filePath)
if err != nil {
return nil, err
}
defer file.Close()
scanner := bufio.NewScanner(file)
scanner.Split(bufio.ScanLines)
var hostnames []string
for scanner.Scan() {
line := scanner.Text()
if strings.Contains(line, " ") {
return nil, fmt.Errorf(
"line %q contains more than one entry; entries must be separated by newlines", line)
}
hostnames = append(hostnames, line)
}
if len(hostnames) == 0 {
return nil, errors.New("provided file contains 0 hostnames")
}
return hostnames, nil
}
type Config struct {
ContactExporter struct {
DB cmd.DBConfig
cmd.PasswordConfig
Features features.Config
}
}
func main() {
outFile := flag.String("outfile", "", "File to output results JSON to.")
grace := flag.Duration("grace", 2*24*time.Hour, "Include results whose certificates expired less than this duration ago.")
hostnamesFile := flag.String(
"hostnames", "", "Only include results with unexpired certificates that contain hostnames\nlisted (newline separated) in this file.")
withExampleHostnames := flag.Bool(
"with-example-hostnames", false, "Include an example hostname for each registration ID with an unexpired certificate.")
configFile := flag.String("config", "", "File containing a JSON config.")
flag.Usage = func() {
fmt.Fprintf(os.Stderr, "%s\n\n", usageIntro)
fmt.Fprintf(os.Stderr, "Usage of %s:\n", os.Args[0])
flag.PrintDefaults()
}
// Parse flags and check required.
flag.Parse()
if *outFile == "" || *configFile == "" {
flag.Usage()
os.Exit(1)
}
log := cmd.NewLogger(cmd.SyslogConfig{StdoutLevel: 7})
log.Info(cmd.VersionString())
// Load configuration file.
configData, err := os.ReadFile(*configFile)
cmd.FailOnError(err, fmt.Sprintf("Reading %q", *configFile))
// Unmarshal JSON config file.
var cfg Config
err = json.Unmarshal(configData, &cfg)
cmd.FailOnError(err, "Unmarshaling config")
features.Set(cfg.ContactExporter.Features)
dbMap, err := sa.InitWrappedDb(cfg.ContactExporter.DB, nil, log)
cmd.FailOnError(err, "While initializing dbMap")
exporter := idExporter{
log: log,
dbMap: dbMap,
clk: cmd.Clock(),
grace: *grace,
}
var results idExporterResults
if *hostnamesFile != "" {
hostnames, err := unmarshalHostnames(*hostnamesFile)
cmd.FailOnError(err, "Problem unmarshalling hostnames")
results, err = exporter.findIDsForHostnames(context.TODO(), hostnames)
cmd.FailOnError(err, "Could not find IDs for hostnames")
} else if *withExampleHostnames {
results, err = exporter.findIDsWithExampleHostnames(context.TODO())
cmd.FailOnError(err, "Could not find IDs with hostnames")
} else {
results, err = exporter.findIDs(context.TODO())
cmd.FailOnError(err, "Could not find IDs")
}
err = results.writeToFile(*outFile)
cmd.FailOnError(err, fmt.Sprintf("Could not write result to outfile %q", *outFile))
}
func init() {
cmd.RegisterCommand("id-exporter", main, &cmd.ConfigValidator{Config: &Config{}})
}


@ -1,461 +0,0 @@
package notmain
import (
"context"
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"crypto/x509"
"crypto/x509/pkix"
"fmt"
"math/big"
"os"
"testing"
"time"
"github.com/jmhodges/clock"
"github.com/letsencrypt/boulder/core"
corepb "github.com/letsencrypt/boulder/core/proto"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/metrics"
"github.com/letsencrypt/boulder/sa"
sapb "github.com/letsencrypt/boulder/sa/proto"
"github.com/letsencrypt/boulder/test"
isa "github.com/letsencrypt/boulder/test/inmem/sa"
"github.com/letsencrypt/boulder/test/vars"
)
var (
regA *corepb.Registration
regB *corepb.Registration
regC *corepb.Registration
regD *corepb.Registration
)
const (
emailARaw = "test@example.com"
emailBRaw = "example@example.com"
emailCRaw = "test-example@example.com"
telNum = "666-666-7777"
)
func TestFindIDs(t *testing.T) {
ctx := context.Background()
testCtx := setup(t)
defer testCtx.cleanUp()
// Add some test registrations
testCtx.addRegistrations(t)
// Run findIDs - since no certificates have been added corresponding to
// the above registrations, no IDs should be found.
results, err := testCtx.c.findIDs(ctx)
test.AssertNotError(t, err, "findIDs() produced error")
test.AssertEquals(t, len(results), 0)
// Now add some certificates
testCtx.addCertificates(t)
// Run findIDs - since there are three registrations with unexpired certs
// we should get exactly three IDs back: RegA, RegC and RegD. RegB should
// *not* be present since its certificate has already expired. Unlike
// previous versions of this test, RegD is no longer filtered out for having
// a `tel:` contact field - that filtering is now the duty of the notify-mailer.
results, err = testCtx.c.findIDs(ctx)
test.AssertNotError(t, err, "findIDs() produced error")
test.AssertEquals(t, len(results), 3)
for _, entry := range results {
switch entry.ID {
case regA.Id:
case regC.Id:
case regD.Id:
default:
t.Errorf("ID: %d not expected", entry.ID)
}
}
// Allow a 1 year grace period
testCtx.c.grace = 360 * 24 * time.Hour
results, err = testCtx.c.findIDs(ctx)
test.AssertNotError(t, err, "findIDs() produced error")
// Now all four registrations should be returned, including RegB since its
// certificate expired within the grace period.
test.AssertEquals(t, len(results), 4)
for _, entry := range results {
switch entry.ID {
case regA.Id:
case regB.Id:
case regC.Id:
case regD.Id:
default:
t.Errorf("ID: %d not expected", entry.ID)
}
}
}
func TestFindIDsWithExampleHostnames(t *testing.T) {
ctx := context.Background()
testCtx := setup(t)
defer testCtx.cleanUp()
// Add some test registrations
testCtx.addRegistrations(t)
// Run findIDsWithExampleHostnames - since no certificates have been
// added corresponding to the above registrations, no IDs should be
// found.
results, err := testCtx.c.findIDsWithExampleHostnames(ctx)
test.AssertNotError(t, err, "findIDsWithExampleHostnames() produced error")
test.AssertEquals(t, len(results), 0)
// Now add some certificates
testCtx.addCertificates(t)
// Run findIDsWithExampleHostnames - since there are three
// registrations with unexpired certs we should get exactly three
// IDs back: RegA, RegC and RegD. RegB should *not* be present since
// their certificate has already expired.
results, err = testCtx.c.findIDsWithExampleHostnames(ctx)
test.AssertNotError(t, err, "findIDsWithExampleHostnames() produced error")
test.AssertEquals(t, len(results), 3)
for _, entry := range results {
switch entry.ID {
case regA.Id:
test.AssertEquals(t, entry.Hostname, "example-a.com")
case regC.Id:
test.AssertEquals(t, entry.Hostname, "example-c.com")
case regD.Id:
test.AssertEquals(t, entry.Hostname, "example-d.com")
default:
t.Errorf("ID: %d not expected", entry.ID)
}
}
// Allow a 1 year grace period
testCtx.c.grace = 360 * 24 * time.Hour
results, err = testCtx.c.findIDsWithExampleHostnames(ctx)
test.AssertNotError(t, err, "findIDsWithExampleHostnames() produced error")
// Now all four registrations should be returned, including RegB
// since it expired within the grace period
test.AssertEquals(t, len(results), 4)
for _, entry := range results {
switch entry.ID {
case regA.Id:
test.AssertEquals(t, entry.Hostname, "example-a.com")
case regB.Id:
test.AssertEquals(t, entry.Hostname, "example-b.com")
case regC.Id:
test.AssertEquals(t, entry.Hostname, "example-c.com")
case regD.Id:
test.AssertEquals(t, entry.Hostname, "example-d.com")
default:
t.Errorf("ID: %d not expected", entry.ID)
}
}
}
func TestFindIDsForHostnames(t *testing.T) {
ctx := context.Background()
testCtx := setup(t)
defer testCtx.cleanUp()
// Add some test registrations
testCtx.addRegistrations(t)
// Run findIDsForHostnames - since no certificates have been added corresponding to
// the above registrations, no IDs should be found.
results, err := testCtx.c.findIDsForHostnames(ctx, []string{"example-a.com", "example-b.com", "example-c.com", "example-d.com"})
test.AssertNotError(t, err, "findIDsForHostnames() produced error")
test.AssertEquals(t, len(results), 0)
// Now add some certificates
testCtx.addCertificates(t)
results, err = testCtx.c.findIDsForHostnames(ctx, []string{"example-a.com", "example-b.com", "example-c.com", "example-d.com"})
test.AssertNotError(t, err, "findIDsForHostnames() failed")
test.AssertEquals(t, len(results), 3)
for _, entry := range results {
switch entry.ID {
case regA.Id:
case regC.Id:
case regD.Id:
default:
t.Errorf("ID: %d not expected", entry.ID)
}
}
}
func TestWriteToFile(t *testing.T) {
expected := `[{"id":1},{"id":2},{"id":3}]`
mockResults := idExporterResults{{ID: 1}, {ID: 2}, {ID: 3}}
dir := os.TempDir()
f, err := os.CreateTemp(dir, "ids_test")
test.AssertNotError(t, err, "os.CreateTemp produced an error")
// Writing the result to an outFile should produce the correct results
err = mockResults.writeToFile(f.Name())
test.AssertNotError(t, err, fmt.Sprintf("writeIDs produced an error writing to %s", f.Name()))
contents, err := os.ReadFile(f.Name())
test.AssertNotError(t, err, fmt.Sprintf("os.ReadFile produced an error reading from %s", f.Name()))
test.AssertEquals(t, string(contents), expected+"\n")
}
func Test_unmarshalHostnames(t *testing.T) {
testDir := os.TempDir()
testFile, err := os.CreateTemp(testDir, "ids_test")
test.AssertNotError(t, err, "os.CreateTemp produced an error")
// Non-existent hostnamesFile
_, err = unmarshalHostnames("file_does_not_exist")
test.AssertError(t, err, "expected error for non-existent file")
// Empty hostnamesFile
err = os.WriteFile(testFile.Name(), []byte(""), 0644)
test.AssertNotError(t, err, "os.WriteFile produced an error")
_, err = unmarshalHostnames(testFile.Name())
test.AssertError(t, err, "expected error for file containing 0 entries")
// One hostname present in the hostnamesFile
err = os.WriteFile(testFile.Name(), []byte("example-a.com"), 0644)
test.AssertNotError(t, err, "os.WriteFile produced an error")
results, err := unmarshalHostnames(testFile.Name())
test.AssertNotError(t, err, "error when unmarshalling hostnamesFile with a single hostname")
test.AssertEquals(t, len(results), 1)
// Two hostnames present in the hostnamesFile
err = os.WriteFile(testFile.Name(), []byte("example-a.com\nexample-b.com"), 0644)
test.AssertNotError(t, err, "os.WriteFile produced an error")
results, err = unmarshalHostnames(testFile.Name())
test.AssertNotError(t, err, "error when unmarshalling hostnamesFile with two hostnames")
test.AssertEquals(t, len(results), 2)
// Three hostnames present in the hostnamesFile but two are separated only by a space
err = os.WriteFile(testFile.Name(), []byte("example-a.com\nexample-b.com example-c.com"), 0644)
test.AssertNotError(t, err, "os.WriteFile produced an error")
_, err = unmarshalHostnames(testFile.Name())
test.AssertError(t, err, "expected error when unmarshalling hostnamesFile with space-separated entries")
}
type testCtx struct {
c idExporter
ssa sapb.StorageAuthorityClient
cleanUp func()
}
func (tc testCtx) addRegistrations(t *testing.T) {
emailA := "mailto:" + emailARaw
emailB := "mailto:" + emailBRaw
emailC := "mailto:" + emailCRaw
tel := "tel:" + telNum
// Every registration needs a unique JOSE key
jsonKeyA := []byte(`{
"kty":"RSA",
"n":"0vx7agoebGcQSuuPiLJXZptN9nndrQmbXEps2aiAFbWhM78LhWx4cbbfAAtVT86zwu1RK7aPFFxuhDR1L6tSoc_BJECPebWKRXjBZCiFV4n3oknjhMstn64tZ_2W-5JsGY4Hc5n9yBXArwl93lqt7_RN5w6Cf0h4QyQ5v-65YGjQR0_FDW2QvzqY368QQMicAtaSqzs8KJZgnYb9c7d0zgdAZHzu6qMQvRL5hajrn1n91CbOpbISD08qNLyrdkt-bFTWhAI4vMQFh6WeZu0fM4lFd2NcRwr3XPksINHaQ-G_xBniIqbw0Ls1jF44-csFCur-kEgU8awapJzKnqDKgw",
"e":"AQAB"
}`)
jsonKeyB := []byte(`{
"kty":"RSA",
"n":"z8bp-jPtHt4lKBqepeKF28g_QAEOuEsCIou6sZ9ndsQsEjxEOQxQ0xNOQezsKa63eogw8YS3vzjUcPP5BJuVzfPfGd5NVUdT-vSSwxk3wvk_jtNqhrpcoG0elRPQfMVsQWmxCAXCVRz3xbcFI8GTe-syynG3l-g1IzYIIZVNI6jdljCZML1HOMTTW4f7uJJ8mM-08oQCeHbr5ejK7O2yMSSYxW03zY-Tj1iVEebROeMv6IEEJNFSS4yM-hLpNAqVuQxFGetwtwjDMC1Drs1dTWrPuUAAjKGrP151z1_dE74M5evpAhZUmpKv1hY-x85DC6N0hFPgowsanmTNNiV75w",
"e":"AAEAAQ"
}`)
jsonKeyC := []byte(`{
"kty":"RSA",
"n":"rFH5kUBZrlPj73epjJjyCxzVzZuV--JjKgapoqm9pOuOt20BUTdHqVfC2oDclqM7HFhkkX9OSJMTHgZ7WaVqZv9u1X2yjdx9oVmMLuspX7EytW_ZKDZSzL-sCOFCuQAuYKkLbsdcA3eHBK_lwc4zwdeHFMKIulNvLqckkqYB9s8GpgNXBDIQ8GjR5HuJke_WUNjYHSd8jY1LU9swKWsLQe2YoQUz_ekQvBvBCoaFEtrtRaSJKNLIVDObXFr2TLIiFiM0Em90kK01-eQ7ZiruZTKomll64bRFPoNo4_uwubddg3xTqur2vdF3NyhTrYdvAgTem4uC0PFjEQ1bK_djBQ",
"e":"AQAB"
}`)
jsonKeyD := []byte(`{
"kty":"RSA",
"n":"rFH5kUBZrlPj73epjJjyCxzVzZuV--JjKgapoqm9pOuOt20BUTdHqVfC2oDclqM7HFhkkX9OSJMTHgZ7WaVqZv9u1X2yjdx9oVmMLuspX7EytW_ZKDZSzL-FCOFCuQAuYKkLbsdcA3eHBK_lwc4zwdeHFMKIulNvLqckkqYB9s8GpgNXBDIQ8GjR5HuJke_WUNjYHSd8jY1LU9swKWsLQe2YoQUz_ekQvBvBCoaFEtrtRaSJKNLIVDObXFr2TLIiFiM0Em90kK01-eQ7ZiruZTKomll64bRFPoNo4_uwubddg3xTqur2vdF3NyhTrYdvAgTem4uC0PFjEQ1bK_djBQ",
"e":"AQAB"
}`)
// Regs A through C have `mailto:` contact ACME URLs
regA = &corepb.Registration{
Id: 1,
Contact: []string{emailA},
Key: jsonKeyA,
}
regB = &corepb.Registration{
Id: 2,
Contact: []string{emailB},
Key: jsonKeyB,
}
regC = &corepb.Registration{
Id: 3,
Contact: []string{emailC},
Key: jsonKeyC,
}
// Reg D has a `tel:` contact ACME URL
regD = &corepb.Registration{
Id: 4,
Contact: []string{tel},
Key: jsonKeyD,
}
// Add the four test registrations
ctx := context.Background()
var err error
regA, err = tc.ssa.NewRegistration(ctx, regA)
test.AssertNotError(t, err, "Couldn't store regA")
regB, err = tc.ssa.NewRegistration(ctx, regB)
test.AssertNotError(t, err, "Couldn't store regB")
regC, err = tc.ssa.NewRegistration(ctx, regC)
test.AssertNotError(t, err, "Couldn't store regC")
regD, err = tc.ssa.NewRegistration(ctx, regD)
test.AssertNotError(t, err, "Couldn't store regD")
}
func (tc testCtx) addCertificates(t *testing.T) {
ctx := context.Background()
serial1 := big.NewInt(1336)
serial1String := core.SerialToString(serial1)
serial2 := big.NewInt(1337)
serial2String := core.SerialToString(serial2)
serial3 := big.NewInt(1338)
serial3String := core.SerialToString(serial3)
serial4 := big.NewInt(1339)
serial4String := core.SerialToString(serial4)
key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
test.AssertNotError(t, err, "creating test key")
fc := clock.NewFake()
// Add one cert for RegA that expires in 30 days
rawCertA := x509.Certificate{
Subject: pkix.Name{
CommonName: "happy A",
},
NotAfter: fc.Now().Add(30 * 24 * time.Hour),
DNSNames: []string{"example-a.com"},
SerialNumber: serial1,
}
certDerA, _ := x509.CreateCertificate(rand.Reader, &rawCertA, &rawCertA, key.Public(), key)
certA := &core.Certificate{
RegistrationID: regA.Id,
Serial: serial1String,
Expires: rawCertA.NotAfter,
DER: certDerA,
}
err = tc.c.dbMap.Insert(ctx, certA)
test.AssertNotError(t, err, "Couldn't add certA")
_, err = tc.c.dbMap.ExecContext(
ctx,
"INSERT INTO issuedNames (reversedName, serial, notBefore) VALUES (?,?,0)",
"com.example-a",
serial1String,
)
test.AssertNotError(t, err, "Couldn't add issued name for certA")
// Add one cert for RegB that already expired 30 days ago
rawCertB := x509.Certificate{
Subject: pkix.Name{
CommonName: "happy B",
},
NotAfter: fc.Now().Add(-30 * 24 * time.Hour),
DNSNames: []string{"example-b.com"},
SerialNumber: serial2,
}
certDerB, _ := x509.CreateCertificate(rand.Reader, &rawCertB, &rawCertB, key.Public(), key)
certB := &core.Certificate{
RegistrationID: regB.Id,
Serial: serial2String,
Expires: rawCertB.NotAfter,
DER: certDerB,
}
err = tc.c.dbMap.Insert(ctx, certB)
test.AssertNotError(t, err, "Couldn't add certB")
_, err = tc.c.dbMap.ExecContext(
ctx,
"INSERT INTO issuedNames (reversedName, serial, notBefore) VALUES (?,?,0)",
"com.example-b",
serial2String,
)
test.AssertNotError(t, err, "Couldn't add issued name for certB")
// Add one cert for RegC that expires in 30 days
rawCertC := x509.Certificate{
Subject: pkix.Name{
CommonName: "happy C",
},
NotAfter: fc.Now().Add(30 * 24 * time.Hour),
DNSNames: []string{"example-c.com"},
SerialNumber: serial3,
}
certDerC, _ := x509.CreateCertificate(rand.Reader, &rawCertC, &rawCertC, key.Public(), key)
certC := &core.Certificate{
RegistrationID: regC.Id,
Serial: serial3String,
Expires: rawCertC.NotAfter,
DER: certDerC,
}
err = tc.c.dbMap.Insert(ctx, certC)
test.AssertNotError(t, err, "Couldn't add certC")
_, err = tc.c.dbMap.ExecContext(
ctx,
"INSERT INTO issuedNames (reversedName, serial, notBefore) VALUES (?,?,0)",
"com.example-c",
serial3String,
)
test.AssertNotError(t, err, "Couldn't add issued name for certC")
// Add one cert for RegD that expires in 30 days
rawCertD := x509.Certificate{
Subject: pkix.Name{
CommonName: "happy D",
},
NotAfter: fc.Now().Add(30 * 24 * time.Hour),
DNSNames: []string{"example-d.com"},
SerialNumber: serial4,
}
certDerD, _ := x509.CreateCertificate(rand.Reader, &rawCertD, &rawCertD, key.Public(), key)
certD := &core.Certificate{
RegistrationID: regD.Id,
Serial: serial4String,
Expires: rawCertD.NotAfter,
DER: certDerD,
}
err = tc.c.dbMap.Insert(ctx, certD)
test.AssertNotError(t, err, "Couldn't add certD")
_, err = tc.c.dbMap.ExecContext(
ctx,
"INSERT INTO issuedNames (reversedName, serial, notBefore) VALUES (?,?,0)",
"com.example-d",
serial4String,
)
test.AssertNotError(t, err, "Couldn't add issued name for certD")
}
func setup(t *testing.T) testCtx {
log := blog.UseMock()
fc := clock.NewFake()
// Using DBConnSAFullPerms to be able to insert registrations and certificates
dbMap, err := sa.DBMapForTest(vars.DBConnSAFullPerms)
if err != nil {
t.Fatalf("Couldn't connect the database: %s", err)
}
cleanUp := test.ResetBoulderTestDatabase(t)
ssa, err := sa.NewSQLStorageAuthority(dbMap, dbMap, nil, 1, 0, fc, log, metrics.NoopRegisterer)
if err != nil {
t.Fatalf("unable to create SQLStorageAuthority: %s", err)
}
return testCtx{
c: idExporter{
dbMap: dbMap,
log: log,
clk: fc,
},
ssa: isa.SA{Impl: ssa},
cleanUp: cleanUp,
}
}


@ -5,6 +5,7 @@ import (
"flag"
"fmt"
"net"
"net/netip"
"os"
"github.com/letsencrypt/boulder/cmd"
@ -41,8 +42,8 @@ func derivePrefix(key []byte, grpcAddr string) (string, error) {
return "", fmt.Errorf("nonce service gRPC address must include an IP address: got %q", grpcAddr)
}
if host != "" && port != "" {
hostIP := net.ParseIP(host)
if hostIP == nil {
hostIP, err := netip.ParseAddr(host)
if err != nil {
return "", fmt.Errorf("gRPC address host part was not an IP address")
}
if hostIP.IsUnspecified() {


@ -1,619 +0,0 @@
package notmain
import (
"context"
"encoding/csv"
"encoding/json"
"errors"
"flag"
"fmt"
"io"
"net/mail"
"os"
"sort"
"strconv"
"strings"
"sync"
"text/template"
"time"
"github.com/jmhodges/clock"
"github.com/letsencrypt/boulder/cmd"
"github.com/letsencrypt/boulder/db"
blog "github.com/letsencrypt/boulder/log"
bmail "github.com/letsencrypt/boulder/mail"
"github.com/letsencrypt/boulder/metrics"
"github.com/letsencrypt/boulder/policy"
"github.com/letsencrypt/boulder/sa"
)
type mailer struct {
clk clock.Clock
log blog.Logger
dbMap dbSelector
mailer bmail.Mailer
subject string
emailTemplate *template.Template
recipients []recipient
targetRange interval
sleepInterval time.Duration
parallelSends uint
}
// interval defines a range of email addresses to send to in alphabetical order.
// The `start` field is inclusive and the `end` field is exclusive. To include
// everything, set `end` to \xFF.
type interval struct {
start string
end string
}
// contactQueryResult is a receiver for queries to the `registrations` table.
type contactQueryResult struct {
// ID is exported to receive the value of `id`.
ID int64
// Contact is exported to receive the value of `contact`.
Contact []byte
}
func (i *interval) ok() error {
if i.start > i.end {
return fmt.Errorf("interval start value (%s) is greater than end value (%s)",
i.start, i.end)
}
return nil
}
func (i *interval) includes(s string) bool {
return s >= i.start && s < i.end
}
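The interval is half-open: `start` is inclusive, `end` is exclusive, and comparison is plain lexicographic ordering of strings, which is why setting `end` to `\xFF` includes everything. A small self-contained sketch of the same semantics:

```go
package main

import "fmt"

// includes reports whether s falls in the half-open range [start, end),
// compared lexicographically, matching the mailer's interval semantics.
func includes(start, end, s string) bool {
	return s >= start && s < end
}

func main() {
	fmt.Println(includes("a", "n", "alice@example.com")) // true
	fmt.Println(includes("a", "n", "nancy@example.com")) // false: "nancy..." sorts after "n"
	fmt.Println(includes("", "\xFF", "zoe@example.com")) // true: full range
}
```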
// ok ensures that both the `targetRange` and `sleepInterval` are valid.
func (m *mailer) ok() error {
err := m.targetRange.ok()
if err != nil {
return err
}
if m.sleepInterval < 0 {
return fmt.Errorf(
"sleep interval (%d) is < 0", m.sleepInterval)
}
return nil
}
func (m *mailer) logStatus(to string, current, total int, start time.Time) {
// Should never happen.
if total <= 0 || current < 1 || current > total {
m.log.AuditErrf("Invalid current (%d) or total (%d)", current, total)
}
completion := (float32(current) / float32(total)) * 100
now := m.clk.Now()
elapsed := now.Sub(start)
m.log.Infof("Sending message (%d) of (%d) to address (%s) [%.2f%%] time elapsed (%s)",
current, total, to, completion, elapsed)
}
func sortAddresses(input addressToRecipientMap) []string {
var addresses []string
for address := range input {
addresses = append(addresses, address)
}
sort.Strings(addresses)
return addresses
}
// makeMessageBody is a helper for mailer.run() that's split out for the
// purposes of testing.
func (m *mailer) makeMessageBody(recipients []recipient) (string, error) {
var messageBody strings.Builder
err := m.emailTemplate.Execute(&messageBody, recipients)
if err != nil {
return "", err
}
if messageBody.Len() == 0 {
return "", errors.New("templating resulted in an empty message body")
}
return messageBody.String(), nil
}
func (m *mailer) run(ctx context.Context) error {
err := m.ok()
if err != nil {
return err
}
totalRecipients := len(m.recipients)
m.log.Infof("Resolving addresses for (%d) recipients", totalRecipients)
addressToRecipient, err := m.resolveAddresses(ctx)
if err != nil {
return err
}
totalAddresses := len(addressToRecipient)
if totalAddresses == 0 {
return errors.New("0 recipients remained after resolving addresses")
}
m.log.Infof("%d recipients were resolved to %d addresses", totalRecipients, totalAddresses)
var mostRecipients string
var mostRecipientsLen int
for k, v := range addressToRecipient {
if len(v) > mostRecipientsLen {
mostRecipientsLen = len(v)
mostRecipients = k
}
}
m.log.Infof("Address %q was associated with the most recipients (%d)",
mostRecipients, mostRecipientsLen)
type work struct {
index int
address string
}
var wg sync.WaitGroup
workChan := make(chan work, totalAddresses)
startTime := m.clk.Now()
sortedAddresses := sortAddresses(addressToRecipient)
if (m.targetRange.start != "" && m.targetRange.start > sortedAddresses[totalAddresses-1]) ||
(m.targetRange.end != "" && m.targetRange.end < sortedAddresses[0]) {
return errors.New("no addresses fall inside the target range")
}
}
go func(ch chan<- work) {
for i, address := range sortedAddresses {
ch <- work{i, address}
}
close(workChan)
}(workChan)
if m.parallelSends < 1 {
m.parallelSends = 1
}
for senderNum := uint(0); senderNum < m.parallelSends; senderNum++ {
// For politeness' sake, don't open more than 1 new connection per
// second.
if senderNum > 0 {
m.clk.Sleep(time.Second)
}
conn, err := m.mailer.Connect()
if err != nil {
return fmt.Errorf("connecting parallel sender %d: %w", senderNum, err)
}
wg.Add(1)
go func(conn bmail.Conn, ch <-chan work) {
defer wg.Done()
for w := range ch {
if !m.targetRange.includes(w.address) {
m.log.Debugf("Address %q is outside of target range, skipping", w.address)
continue
}
err := policy.ValidEmail(w.address)
if err != nil {
m.log.Infof("Skipping %q due to policy violation: %s", w.address, err)
continue
}
recipients := addressToRecipient[w.address]
m.logStatus(w.address, w.index+1, totalAddresses, startTime)
messageBody, err := m.makeMessageBody(recipients)
if err != nil {
m.log.Errf("Skipping %q due to templating error: %s", w.address, err)
continue
}
err = conn.SendMail([]string{w.address}, m.subject, messageBody)
if err != nil {
var badAddrErr bmail.BadAddressSMTPError
if errors.As(err, &badAddrErr) {
m.log.Errf("address %q was rejected by server: %s", w.address, err)
continue
}
m.log.AuditErrf("while sending mail (%d) of (%d) to address %q: %s",
w.index+1, len(sortedAddresses), w.address, err)
}
m.clk.Sleep(m.sleepInterval)
}
conn.Close()
}(conn, workChan)
}
wg.Wait()
return nil
}
// resolveAddresses creates a mapping of email addresses to (a list of)
// `recipient`s that resolve to that email address.
func (m *mailer) resolveAddresses(ctx context.Context) (addressToRecipientMap, error) {
result := make(addressToRecipientMap, len(m.recipients))
for _, recipient := range m.recipients {
addresses, err := getAddressForID(ctx, recipient.id, m.dbMap)
if err != nil {
return nil, err
}
for _, address := range addresses {
parsed, err := mail.ParseAddress(address)
if err != nil {
m.log.Errf("Unparsable address %q, skipping ID (%d)", address, recipient.id)
continue
}
result[parsed.Address] = append(result[parsed.Address], recipient)
}
}
return result, nil
}
// dbSelector abstracts over a subset of methods from `borp.DbMap` objects to
// facilitate mocking in unit tests.
type dbSelector interface {
SelectOne(ctx context.Context, holder interface{}, query string, args ...interface{}) error
}
// getAddressForID queries the database for the email address associated with
// the provided registration ID.
func getAddressForID(ctx context.Context, id int64, dbMap dbSelector) ([]string, error) {
var result contactQueryResult
err := dbMap.SelectOne(ctx, &result,
`SELECT id,
contact
FROM registrations
WHERE contact NOT IN ('[]', 'null')
AND id = :id;`,
map[string]interface{}{"id": id})
if err != nil {
if db.IsNoRows(err) {
return []string{}, nil
}
return nil, err
}
var contacts []string
err = json.Unmarshal(result.Contact, &contacts)
if err != nil {
return nil, err
}
var addresses []string
for _, contact := range contacts {
if strings.HasPrefix(contact, "mailto:") {
addresses = append(addresses, strings.TrimPrefix(contact, "mailto:"))
}
}
return addresses, nil
}
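Contacts are stored as a JSON array of ACME contact URLs, and only `mailto:` entries yield deliverable addresses (which is why `tel:` registrations survive the id-exporter but are dropped here). A standalone sketch of that filtering step:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// addressesFromContact extracts bare email addresses from a JSON array
// of ACME contact URLs, keeping only mailto: entries.
func addressesFromContact(contact []byte) ([]string, error) {
	var contacts []string
	err := json.Unmarshal(contact, &contacts)
	if err != nil {
		return nil, err
	}
	var addresses []string
	for _, c := range contacts {
		if strings.HasPrefix(c, "mailto:") {
			addresses = append(addresses, strings.TrimPrefix(c, "mailto:"))
		}
	}
	return addresses, nil
}

func main() {
	out, err := addressesFromContact([]byte(`["mailto:admin@example.com","tel:666-666-7777"]`))
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // [admin@example.com]
}
```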
// recipient represents a single record from the recipient list file. The 'id'
// column is parsed into the 'id' field; all additional data is parsed into a
// mapping of column name to value in the 'Data' field. Please inform SRE if you
// make any changes to the exported fields of this struct. These fields are
// referenced in operationally critical e-mail templates used to notify
// subscribers during incident response.
type recipient struct {
// id is the subscriber's ID.
id int64
// Data is a mapping of column name to value parsed from a single record in
// the provided recipient list file. It's exported so the contents can be
// accessed by the template package. Please inform SRE if you make any
// changes to this field.
Data map[string]string
}
// addressToRecipientMap maps email addresses to a list of `recipient`s that
// resolve to that email address.
type addressToRecipientMap map[string][]recipient
// readRecipientsList parses the contents of a recipient list file into a list
// of `recipient` objects.
func readRecipientsList(filename string, delimiter rune) ([]recipient, string, error) {
f, err := os.Open(filename)
if err != nil {
return nil, "", err
}
reader := csv.NewReader(f)
reader.Comma = delimiter
// Parse header.
record, err := reader.Read()
if err != nil {
return nil, "", fmt.Errorf("failed to parse header: %w", err)
}
if record[0] != "id" {
return nil, "", errors.New("header must begin with \"id\"")
}
// Collect the names of each header column after `id`.
var dataColumns []string
for _, v := range record[1:] {
v = strings.TrimSpace(v)
if len(v) == 0 {
return nil, "", errors.New("header contains an empty column")
}
dataColumns = append(dataColumns, v)
}
var recordsWithEmptyColumns []int64
var recordsWithDuplicateIDs []int64
var probsBuff strings.Builder
stringProbs := func() string {
if len(recordsWithEmptyColumns) != 0 {
fmt.Fprintf(&probsBuff, "ID(s) %v contained empty columns and ",
recordsWithEmptyColumns)
}
if len(recordsWithDuplicateIDs) != 0 {
fmt.Fprintf(&probsBuff, "ID(s) %v were skipped as duplicates",
recordsWithDuplicateIDs)
}
if probsBuff.Len() == 0 {
return ""
}
return strings.TrimSuffix(probsBuff.String(), " and ")
}
// Parse records.
recipientIDs := make(map[int64]bool)
var recipients []recipient
for {
record, err := reader.Read()
if errors.Is(err, io.EOF) {
// Finished parsing the file.
if len(recipients) == 0 {
return nil, stringProbs(), errors.New("no records after header")
}
return recipients, stringProbs(), nil
} else if err != nil {
return nil, "", err
}
// Ensure the first column of each record can be parsed as a valid
// registration ID.
recordID := record[0]
id, err := strconv.ParseInt(recordID, 10, 64)
if err != nil {
return nil, "", fmt.Errorf(
"%q couldn't be parsed as a registration ID: %w", recordID, err)
}
// Skip records that have the same ID as those read previously.
if recipientIDs[id] {
recordsWithDuplicateIDs = append(recordsWithDuplicateIDs, id)
continue
}
recipientIDs[id] = true
// Collect the columns of data after `id` into a map.
var emptyColumn bool
data := make(map[string]string)
for i, v := range record[1:] {
if len(v) == 0 {
emptyColumn = true
}
data[dataColumns[i]] = v
}
// Only used for logging.
if emptyColumn {
recordsWithEmptyColumns = append(recordsWithEmptyColumns, id)
}
recipients = append(recipients, recipient{id, data})
}
}
const usageIntro = `
Introduction:
The notification mailer exists to send a message to the contact associated
with a list of registration IDs. The attributes of the message (from address,
subject, and message content) are provided by the command line arguments. The
message content is provided as a path to a template file via the -body argument.
Provide a list of recipient user IDs in a CSV file passed with the -recipientList
flag. The CSV file must have "id" as the first column and may have additional
fields to be interpolated into the email template:
id, lastIssuance
1234, "from example.com 2018-12-01"
5678, "from example.net 2018-12-13"
The additional fields will be interpolated with Golang templating, e.g.:
Your last issuance on each account was:
{{ range . }} {{ .Data.lastIssuance }}
{{ end }}
To help the operator gain confidence in a mailing run before committing fully,
three safety features are supported: dry runs, intervals, and a sleep between emails.
The -dryRun=true flag will use a mock mailer that prints message content to
stdout instead of performing an SMTP transaction with a real mailserver. This
can be used when the initial parameters are being tweaked to ensure no real
emails are sent. Using -dryRun=false will send real email.
Intervals are supported via the -start and -end arguments. Only email addresses that
fall alphabetically between the -start and -end strings will be sent to. This can be
used to break up sending into batches, or, more likely, to resume sending if a batch
is killed, without resending messages that have already been sent. The -start flag is
inclusive and the -end flag is exclusive.
Notify-mailer de-duplicates email addresses and groups together the resulting recipient
structs, so a person who has multiple accounts using the same address will only receive
one email.
During mailing the -sleep argument is used to space out individual messages.
This can be used to ensure that the mailing happens at a steady pace with ample
opportunity for the operator to terminate early in the event of error. The
-sleep flag honours durations with a unit suffix (e.g. 1m for 1 minute, 10s for
10 seconds, etc). Using -sleep=0 will disable the sleep and send at full speed.
Examples:
Send an email with subject "Hello!" from the email "hello@goodbye.com" with
the contents read from "test_msg_body.txt" to every email associated with the
registration IDs listed in "test_msg_recipients.csv", sleeping 10 seconds
between each message:
notify-mailer -config test/config/notify-mailer.json -body
cmd/notify-mailer/testdata/test_msg_body.txt -from hello@goodbye.com
-recipientList cmd/notify-mailer/testdata/test_msg_recipients.csv -subject "Hello!"
-sleep 10s -dryRun=false
Do the same, but only to example@example.com:
notify-mailer -config test/config/notify-mailer.json
-body cmd/notify-mailer/testdata/test_msg_body.txt -from hello@goodbye.com
-recipientList cmd/notify-mailer/testdata/test_msg_recipients.csv -subject "Hello!"
-start example@example.com -end example@example.comX
Send the message starting with example@example.com and emailing every address that's
alphabetically higher:
notify-mailer -config test/config/notify-mailer.json
-body cmd/notify-mailer/testdata/test_msg_body.txt -from hello@goodbye.com
-recipientList cmd/notify-mailer/testdata/test_msg_recipients.csv -subject "Hello!"
-start example@example.com
Required arguments:
- body
- config
- from
- subject
- recipientList`
type Config struct {
NotifyMailer struct {
DB cmd.DBConfig
cmd.SMTPConfig
}
Syslog cmd.SyslogConfig
}
func main() {
from := flag.String("from", "", "From header for emails. Must be a bare email address.")
subject := flag.String("subject", "", "Subject of emails")
recipientListFile := flag.String("recipientList", "", "File containing a CSV list of registration IDs and extra info.")
parseAsTSV := flag.Bool("tsv", false, "Parse the recipient list file as a TSV.")
bodyFile := flag.String("body", "", "File containing the email body in Golang template format.")
dryRun := flag.Bool("dryRun", true, "Whether to do a dry run.")
sleep := flag.Duration("sleep", 500*time.Millisecond, "How long to sleep between emails.")
parallelSends := flag.Uint("parallelSends", 1, "How many parallel goroutines should process emails")
start := flag.String("start", "", "Alphabetically lowest email address to include.")
end := flag.String("end", "\xFF", "Alphabetically highest email address (exclusive).")
reconnBase := flag.Duration("reconnectBase", 1*time.Second, "Base sleep duration between reconnect attempts")
reconnMax := flag.Duration("reconnectMax", 5*time.Minute, "Max sleep duration between reconnect attempts after exponential backoff")
configFile := flag.String("config", "", "File containing a JSON config.")
flag.Usage = func() {
fmt.Fprintf(os.Stderr, "%s\n\n", usageIntro)
fmt.Fprintf(os.Stderr, "Usage of %s:\n", os.Args[0])
flag.PrintDefaults()
}
// Validate required args.
flag.Parse()
if *from == "" || *subject == "" || *bodyFile == "" || *configFile == "" || *recipientListFile == "" {
flag.Usage()
os.Exit(1)
}
configData, err := os.ReadFile(*configFile)
cmd.FailOnError(err, "Couldn't load JSON config file")
// Parse JSON config.
var cfg Config
err = json.Unmarshal(configData, &cfg)
cmd.FailOnError(err, "Couldn't unmarshal JSON config file")
log := cmd.NewLogger(cfg.Syslog)
log.Info(cmd.VersionString())
dbMap, err := sa.InitWrappedDb(cfg.NotifyMailer.DB, nil, log)
cmd.FailOnError(err, "While initializing dbMap")
// Load and parse message body.
template, err := template.ParseFiles(*bodyFile)
cmd.FailOnError(err, "Couldn't parse message template")
// Ensure that in the event of a missing key, an informative error is
// returned.
template.Option("missingkey=error")
address, err := mail.ParseAddress(*from)
cmd.FailOnError(err, fmt.Sprintf("Couldn't parse %q to address", *from))
recipientListDelimiter := ','
if *parseAsTSV {
recipientListDelimiter = '\t'
}
recipients, probs, err := readRecipientsList(*recipientListFile, recipientListDelimiter)
cmd.FailOnError(err, "Couldn't populate recipients")
if probs != "" {
log.Infof("While reading the recipient list file %s", probs)
}
var mailClient bmail.Mailer
if *dryRun {
log.Infof("Starting %s in dry-run mode", cmd.VersionString())
mailClient = bmail.NewDryRun(*address, log)
} else {
log.Infof("Starting %s", cmd.VersionString())
smtpPassword, err := cfg.NotifyMailer.PasswordConfig.Pass()
cmd.FailOnError(err, "Couldn't load SMTP password from file")
mailClient = bmail.New(
cfg.NotifyMailer.Server,
cfg.NotifyMailer.Port,
cfg.NotifyMailer.Username,
smtpPassword,
nil,
*address,
log,
metrics.NoopRegisterer,
*reconnBase,
*reconnMax)
}
m := mailer{
clk: cmd.Clock(),
log: log,
dbMap: dbMap,
mailer: mailClient,
subject: *subject,
recipients: recipients,
emailTemplate: template,
targetRange: interval{
start: *start,
end: *end,
},
sleepInterval: *sleep,
parallelSends: *parallelSends,
}
err = m.run(context.TODO())
cmd.FailOnError(err, "Couldn't complete")
log.Info("Completed successfully")
}
func init() {
cmd.RegisterCommand("notify-mailer", main, &cmd.ConfigValidator{Config: &Config{}})
}

@@ -1,782 +0,0 @@
package notmain
import (
"context"
"database/sql"
"errors"
"fmt"
"io"
"os"
"testing"
"text/template"
"time"
"github.com/jmhodges/clock"
"github.com/letsencrypt/boulder/db"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/mocks"
"github.com/letsencrypt/boulder/test"
)
func TestIntervalOK(t *testing.T) {
// Test a number of intervals known to be OK, ensuring that no error is
// produced when calling `ok()`.
okCases := []struct {
testInterval interval
}{
{interval{}},
{interval{start: "aa", end: "\xFF"}},
{interval{end: "aa"}},
{interval{start: "aa", end: "bb"}},
}
for _, testcase := range okCases {
err := testcase.testInterval.ok()
test.AssertNotError(t, err, "valid interval produced ok() error")
}
badInterval := interval{start: "bb", end: "aa"}
err := badInterval.ok()
test.AssertError(t, err, "bad interval was considered ok")
}
func setupMakeRecipientList(t *testing.T, contents string) string {
entryFile, err := os.CreateTemp("", "")
test.AssertNotError(t, err, "couldn't create temp file")
_, err = entryFile.WriteString(contents)
test.AssertNotError(t, err, "couldn't write contents to temp file")
err = entryFile.Close()
test.AssertNotError(t, err, "couldn't close temp file")
return entryFile.Name()
}
func TestReadRecipientList(t *testing.T) {
contents := `id, domainName, date
10,example.com,2018-11-21
23,example.net,2018-11-22`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
list, _, err := readRecipientsList(entryFile, ',')
test.AssertNotError(t, err, "received an error for a valid CSV file")
expected := []recipient{
{id: 10, Data: map[string]string{"date": "2018-11-21", "domainName": "example.com"}},
{id: 23, Data: map[string]string{"date": "2018-11-22", "domainName": "example.net"}},
}
test.AssertDeepEquals(t, list, expected)
contents = `id domainName date
10 example.com 2018-11-21
23 example.net 2018-11-22`
entryFile = setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
list, _, err = readRecipientsList(entryFile, '\t')
test.AssertNotError(t, err, "received an error for a valid TSV file")
test.AssertDeepEquals(t, list, expected)
}
func TestReadRecipientListNoExtraColumns(t *testing.T) {
contents := `id
10
23`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, _, err := readRecipientsList(entryFile, ',')
test.AssertNotError(t, err, "received an error for a valid CSV file")
}
func TestReadRecipientsListFileNoExist(t *testing.T) {
_, _, err := readRecipientsList("doesNotExist", ',')
test.AssertError(t, err, "expected error for a file that doesn't exist")
}
func TestReadRecipientListWithEmptyColumnInHeader(t *testing.T) {
contents := `id, domainName,,date
10,example.com,2018-11-21
23,example.net`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, _, err := readRecipientsList(entryFile, ',')
test.AssertError(t, err, "failed to error on CSV file with trailing delimiter in header")
test.AssertDeepEquals(t, err, errors.New("header contains an empty column"))
}
func TestReadRecipientListWithProblems(t *testing.T) {
contents := `id, domainName, date
10,example.com,2018-11-21
23,example.net,
10,example.com,2018-11-22
42,example.net,
24,example.com,2018-11-21
24,example.com,2018-11-21
`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
recipients, probs, err := readRecipientsList(entryFile, ',')
test.AssertNotError(t, err, "received an error for a valid CSV file")
test.AssertEquals(t, probs, "ID(s) [23 42] contained empty columns and ID(s) [10 24] were skipped as duplicates")
test.AssertEquals(t, len(recipients), 4)
// Ensure trailing " and " is trimmed from single problem.
contents = `id, domainName, date
23,example.net,
10,example.com,2018-11-21
42,example.net,
`
entryFile = setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, probs, err = readRecipientsList(entryFile, ',')
test.AssertNotError(t, err, "received an error for a valid CSV file")
test.AssertEquals(t, probs, "ID(s) [23 42] contained empty columns")
}
func TestReadRecipientListWithEmptyLine(t *testing.T) {
contents := `id, domainName, date
10,example.com,2018-11-21
23,example.net,2018-11-22`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, _, err := readRecipientsList(entryFile, ',')
test.AssertNotError(t, err, "received an error for a valid CSV file")
}
func TestReadRecipientListWithMismatchedColumns(t *testing.T) {
contents := `id, domainName, date
10,example.com,2018-11-21
23,example.net`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, _, err := readRecipientsList(entryFile, ',')
test.AssertError(t, err, "failed to error on CSV file with mismatched columns")
}
func TestReadRecipientListWithDuplicateIDs(t *testing.T) {
contents := `id, domainName, date
10,example.com,2018-11-21
10,example.net,2018-11-22`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, _, err := readRecipientsList(entryFile, ',')
test.AssertNotError(t, err, "received an error for a valid CSV file")
}
func TestReadRecipientListWithUnparsableID(t *testing.T) {
contents := `id, domainName, date
10,example.com,2018-11-21
twenty,example.net,2018-11-22`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, _, err := readRecipientsList(entryFile, ',')
test.AssertError(t, err, "expected error for CSV file that contains an unparsable registration ID")
}
func TestReadRecipientListWithoutIDHeader(t *testing.T) {
contents := `notId, domainName, date
10,example.com,2018-11-21
twenty,example.net,2018-11-22`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, _, err := readRecipientsList(entryFile, ',')
test.AssertError(t, err, "expected error for CSV file missing header field `id`")
}
func TestReadRecipientListWithNoRecords(t *testing.T) {
contents := `id, domainName, date
`
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, _, err := readRecipientsList(entryFile, ',')
test.AssertError(t, err, "expected error for CSV file containing only a header")
}
func TestReadRecipientListWithNoHeaderOrRecords(t *testing.T) {
contents := ``
entryFile := setupMakeRecipientList(t, contents)
defer os.Remove(entryFile)
_, _, err := readRecipientsList(entryFile, ',')
test.AssertError(t, err, "expected error for CSV file containing only a header")
test.AssertErrorIs(t, err, io.EOF)
}
func TestMakeMessageBody(t *testing.T) {
emailTemplate := `{{range . }}
{{ .Data.date }}
{{ .Data.domainName }}
{{end}}`
m := &mailer{
log: blog.UseMock(),
mailer: &mocks.Mailer{},
emailTemplate: template.Must(template.New("email").Parse(emailTemplate)).Option("missingkey=error"),
sleepInterval: 0,
targetRange: interval{end: "\xFF"},
clk: clock.NewFake(),
recipients: nil,
dbMap: mockEmailResolver{},
}
recipients := []recipient{
{id: 10, Data: map[string]string{"date": "2018-11-21", "domainName": "example.com"}},
{id: 23, Data: map[string]string{"date": "2018-11-22", "domainName": "example.net"}},
}
expectedMessageBody := `
2018-11-21
example.com
2018-11-22
example.net
`
// Ensure that a very basic template with 2 recipients can be successfully
// executed.
messageBody, err := m.makeMessageBody(recipients)
test.AssertNotError(t, err, "failed to execute a valid template")
test.AssertEquals(t, messageBody, expectedMessageBody)
// With no recipients we should get an empty body error.
recipients = []recipient{}
_, err = m.makeMessageBody(recipients)
test.AssertError(t, err, "should have errored on empty body")
// With a missing key we should get an informative templating error.
recipients = []recipient{{id: 10, Data: map[string]string{"domainName": "example.com"}}}
_, err = m.makeMessageBody(recipients)
test.AssertEquals(t, err.Error(), "template: email:2:8: executing \"email\" at <.Data.date>: map has no entry for key \"date\"")
}
func TestSleepInterval(t *testing.T) {
const sleepLen = 10
mc := &mocks.Mailer{}
dbMap := mockEmailResolver{}
tmpl := template.Must(template.New("letter").Parse("an email body"))
recipients := []recipient{{id: 1}, {id: 2}, {id: 3}}
// Set up a mailer that sleeps for `sleepLen` seconds between messages and
// has only one goroutine to process results.
m := &mailer{
log: blog.UseMock(),
mailer: mc,
emailTemplate: tmpl,
sleepInterval: sleepLen * time.Second,
parallelSends: 1,
targetRange: interval{start: "", end: "\xFF"},
clk: clock.NewFake(),
recipients: recipients,
dbMap: dbMap,
}
// Call run() - this should sleep `sleepLen` per destination address
// After it returns, we expect (sleepLen * number of destinations) seconds has
// elapsed
err := m.run(context.Background())
test.AssertNotError(t, err, "error calling mailer run()")
expectedEnd := clock.NewFake()
expectedEnd.Add(time.Second * time.Duration(sleepLen*len(recipients)))
test.AssertEquals(t, m.clk.Now(), expectedEnd.Now())
// Set up a mock mailer that doesn't sleep at all
m = &mailer{
log: blog.UseMock(),
mailer: mc,
emailTemplate: tmpl,
sleepInterval: 0,
targetRange: interval{end: "\xFF"},
clk: clock.NewFake(),
recipients: recipients,
dbMap: dbMap,
}
// Call run() - this should blast through all destinations without sleep
// After it returns, we expect no clock time to have elapsed on the fake clock
err = m.run(context.Background())
test.AssertNotError(t, err, "error calling mailer run()")
expectedEnd = clock.NewFake()
test.AssertEquals(t, m.clk.Now(), expectedEnd.Now())
}
func TestMailIntervals(t *testing.T) {
const testSubject = "Test Subject"
dbMap := mockEmailResolver{}
tmpl := template.Must(template.New("letter").Parse("an email body"))
recipients := []recipient{{id: 1}, {id: 2}, {id: 3}}
mc := &mocks.Mailer{}
// Create a mailer with a checkpoint interval larger than any of the
// destination email addresses.
m := &mailer{
log: blog.UseMock(),
mailer: mc,
dbMap: dbMap,
subject: testSubject,
recipients: recipients,
emailTemplate: tmpl,
targetRange: interval{start: "\xFF", end: "\xFF\xFF"},
sleepInterval: 0,
clk: clock.NewFake(),
}
// Run the mailer. It should produce an error about the interval start
mc.Clear()
err := m.run(context.Background())
test.AssertError(t, err, "expected error")
test.AssertEquals(t, len(mc.Messages), 0)
// Create a mailer with a negative sleep interval
m = &mailer{
log: blog.UseMock(),
mailer: mc,
dbMap: dbMap,
subject: testSubject,
recipients: recipients,
emailTemplate: tmpl,
targetRange: interval{},
sleepInterval: -10,
clk: clock.NewFake(),
}
// Run the mailer. It should produce an error about the sleep interval
mc.Clear()
err = m.run(context.Background())
test.AssertEquals(t, len(mc.Messages), 0)
test.AssertEquals(t, err.Error(), "sleep interval (-10) is < 0")
// Create a mailer with an interval starting with a specific email address.
// It should send email to that address and others alphabetically higher.
m = &mailer{
log: blog.UseMock(),
mailer: mc,
dbMap: dbMap,
subject: testSubject,
recipients: []recipient{{id: 1}, {id: 2}, {id: 3}, {id: 4}},
emailTemplate: tmpl,
targetRange: interval{start: "test-example-updated@letsencrypt.org", end: "\xFF"},
sleepInterval: 0,
clk: clock.NewFake(),
}
// Run the mailer. Two messages should have been produced, one to
// test-example-updated@letsencrypt.org (beginning of the range),
// and one to test-test-test@letsencrypt.org.
mc.Clear()
err = m.run(context.Background())
test.AssertNotError(t, err, "run() produced an error")
test.AssertEquals(t, len(mc.Messages), 2)
test.AssertEquals(t, mocks.MailerMessage{
To: "test-example-updated@letsencrypt.org",
Subject: testSubject,
Body: "an email body",
}, mc.Messages[0])
test.AssertEquals(t, mocks.MailerMessage{
To: "test-test-test@letsencrypt.org",
Subject: testSubject,
Body: "an email body",
}, mc.Messages[1])
// Create a mailer with a checkpoint interval ending before
// "test-example-updated@letsencrypt.org"
m = &mailer{
log: blog.UseMock(),
mailer: mc,
dbMap: dbMap,
subject: testSubject,
recipients: []recipient{{id: 1}, {id: 2}, {id: 3}, {id: 4}},
emailTemplate: tmpl,
targetRange: interval{end: "test-example-updated@letsencrypt.org"},
sleepInterval: 0,
clk: clock.NewFake(),
}
// Run the mailer. Two messages should have been produced, one to
// example@letsencrypt.org (ID 1), one to example-example-example@example.com (ID 2)
mc.Clear()
err = m.run(context.Background())
test.AssertNotError(t, err, "run() produced an error")
test.AssertEquals(t, len(mc.Messages), 2)
test.AssertEquals(t, mocks.MailerMessage{
To: "example-example-example@letsencrypt.org",
Subject: testSubject,
Body: "an email body",
}, mc.Messages[0])
test.AssertEquals(t, mocks.MailerMessage{
To: "example@letsencrypt.org",
Subject: testSubject,
Body: "an email body",
}, mc.Messages[1])
}
func TestParallelism(t *testing.T) {
const testSubject = "Test Subject"
dbMap := mockEmailResolver{}
tmpl := template.Must(template.New("letter").Parse("an email body"))
recipients := []recipient{{id: 1}, {id: 2}, {id: 3}, {id: 4}}
mc := &mocks.Mailer{}
// Create a mailer with 10 parallel workers.
m := &mailer{
log: blog.UseMock(),
mailer: mc,
dbMap: dbMap,
subject: testSubject,
recipients: recipients,
emailTemplate: tmpl,
targetRange: interval{end: "\xFF"},
sleepInterval: 0,
parallelSends: 10,
clk: clock.NewFake(),
}
mc.Clear()
err := m.run(context.Background())
test.AssertNotError(t, err, "run() produced an error")
// The fake clock should have advanced 9 seconds, one for each parallel
// goroutine after the first doing its polite 1-second sleep at startup.
expectedEnd := clock.NewFake()
expectedEnd.Add(9 * time.Second)
test.AssertEquals(t, m.clk.Now(), expectedEnd.Now())
// A message should have been sent to all four addresses.
test.AssertEquals(t, len(mc.Messages), 4)
expectedAddresses := []string{
"example@letsencrypt.org",
"test-example-updated@letsencrypt.org",
"test-test-test@letsencrypt.org",
"example-example-example@letsencrypt.org",
}
for _, msg := range mc.Messages {
test.AssertSliceContains(t, expectedAddresses, msg.To)
}
}
func TestMessageContentStatic(t *testing.T) {
// Create a mailer with fixed content
const (
testSubject = "Test Subject"
)
dbMap := mockEmailResolver{}
mc := &mocks.Mailer{}
m := &mailer{
log: blog.UseMock(),
mailer: mc,
dbMap: dbMap,
subject: testSubject,
recipients: []recipient{{id: 1}},
emailTemplate: template.Must(template.New("letter").Parse("an email body")),
targetRange: interval{end: "\xFF"},
sleepInterval: 0,
clk: clock.NewFake(),
}
// Run the mailer, one message should have been created with the content
// expected
err := m.run(context.Background())
test.AssertNotError(t, err, "error calling mailer run()")
test.AssertEquals(t, len(mc.Messages), 1)
test.AssertEquals(t, mocks.MailerMessage{
To: "example@letsencrypt.org",
Subject: testSubject,
Body: "an email body",
}, mc.Messages[0])
}
// Send mail with a variable interpolated.
func TestMessageContentInterpolated(t *testing.T) {
recipients := []recipient{
{
id: 1,
Data: map[string]string{
"validationMethod": "eyeballing it",
},
},
}
dbMap := mockEmailResolver{}
mc := &mocks.Mailer{}
m := &mailer{
log: blog.UseMock(),
mailer: mc,
dbMap: dbMap,
subject: "Test Subject",
recipients: recipients,
emailTemplate: template.Must(template.New("letter").Parse(
`issued by {{range .}}{{ .Data.validationMethod }}{{end}}`)),
targetRange: interval{end: "\xFF"},
sleepInterval: 0,
clk: clock.NewFake(),
}
// Run the mailer, one message should have been created with the content
// expected
err := m.run(context.Background())
test.AssertNotError(t, err, "error calling mailer run()")
test.AssertEquals(t, len(mc.Messages), 1)
test.AssertEquals(t, mocks.MailerMessage{
To: "example@letsencrypt.org",
Subject: "Test Subject",
Body: "issued by eyeballing it",
}, mc.Messages[0])
}
// Send mail with a variable interpolated multiple times for accounts that share
// an email address.
func TestMessageContentInterpolatedMultiple(t *testing.T) {
recipients := []recipient{
{
id: 200,
Data: map[string]string{
"domain": "blog.example.com",
},
},
{
id: 201,
Data: map[string]string{
"domain": "nas.example.net",
},
},
{
id: 202,
Data: map[string]string{
"domain": "mail.example.org",
},
},
{
id: 203,
Data: map[string]string{
"domain": "panel.example.net",
},
},
}
dbMap := mockEmailResolver{}
mc := &mocks.Mailer{}
m := &mailer{
log: blog.UseMock(),
mailer: mc,
dbMap: dbMap,
subject: "Test Subject",
recipients: recipients,
emailTemplate: template.Must(template.New("letter").Parse(
`issued for:
{{range .}}{{ .Data.domain }}
{{end}}Thanks`)),
targetRange: interval{end: "\xFF"},
sleepInterval: 0,
clk: clock.NewFake(),
}
// Run the mailer, one message should have been created with the content
// expected
err := m.run(context.Background())
test.AssertNotError(t, err, "error calling mailer run()")
test.AssertEquals(t, len(mc.Messages), 1)
test.AssertEquals(t, mocks.MailerMessage{
To: "gotta.lotta.accounts@letsencrypt.org",
Subject: "Test Subject",
Body: `issued for:
blog.example.com
nas.example.net
mail.example.org
panel.example.net
Thanks`,
}, mc.Messages[0])
}
// the `mockEmailResolver` implements the `dbSelector` interface from
// `notify-mailer/main.go` to allow unit testing without using a backing
// database
type mockEmailResolver struct{}
// mockEmailResolver's SelectOne method looks up the requested registration ID
// in an in-memory list of contact records.
func (bs mockEmailResolver) SelectOne(ctx context.Context, output interface{}, _ string, args ...interface{}) error {
// The "dbList" is just a list of contact records in memory
dbList := []contactQueryResult{
{
ID: 1,
Contact: []byte(`["mailto:example@letsencrypt.org"]`),
},
{
ID: 2,
Contact: []byte(`["mailto:test-example-updated@letsencrypt.org"]`),
},
{
ID: 3,
Contact: []byte(`["mailto:test-test-test@letsencrypt.org"]`),
},
{
ID: 4,
Contact: []byte(`["mailto:example-example-example@letsencrypt.org"]`),
},
{
ID: 5,
Contact: []byte(`["mailto:youve.got.mail@letsencrypt.org"]`),
},
{
ID: 6,
Contact: []byte(`["mailto:mail@letsencrypt.org"]`),
},
{
ID: 7,
Contact: []byte(`["mailto:***********"]`),
},
{
ID: 200,
Contact: []byte(`["mailto:gotta.lotta.accounts@letsencrypt.org"]`),
},
{
ID: 201,
Contact: []byte(`["mailto:gotta.lotta.accounts@letsencrypt.org"]`),
},
{
ID: 202,
Contact: []byte(`["mailto:gotta.lotta.accounts@letsencrypt.org"]`),
},
{
ID: 203,
Contact: []byte(`["mailto:gotta.lotta.accounts@letsencrypt.org"]`),
},
{
ID: 204,
Contact: []byte(`["mailto:gotta.lotta.accounts@letsencrypt.org"]`),
},
}
// Play the type cast game so that we can dig into the arguments map and get
// out an int64 `id` parameter.
argsRaw := args[0]
argsMap, ok := argsRaw.(map[string]interface{})
if !ok {
return fmt.Errorf("incorrect args type %T", args)
}
idRaw := argsMap["id"]
id, ok := idRaw.(int64)
if !ok {
return fmt.Errorf("incorrect args ID type %T", idRaw)
}
// Play the type cast game to get a `*contactQueryResult` so we can write
// the result from the db list.
outputPtr, ok := output.(*contactQueryResult)
if !ok {
return fmt.Errorf("incorrect output type %T", output)
}
for _, v := range dbList {
if v.ID == id {
*outputPtr = v
break
}
}
if outputPtr.ID == 0 {
return db.ErrDatabaseOp{
Op: "select one",
Table: "registrations",
Err: sql.ErrNoRows,
}
}
return nil
}
func TestResolveEmails(t *testing.T) {
// Start with a set of registration IDs. Note: the IDs have been matched with
// fake results in the `dbList` slice in `mockEmailResolver`'s `SelectOne`. If
// you add more test cases here you must also add the corresponding DB result
// in the mock.
recipients := []recipient{
{
id: 1,
},
{
id: 2,
},
{
id: 3,
},
// This registration ID deliberately doesn't exist in the mock data to make
// sure this case is handled gracefully
{
id: 999,
},
// This registration ID deliberately returns an invalid email to make sure any
// invalid contact info that slipped into the DB once upon a time will be ignored
{
id: 7,
},
{
id: 200,
},
{
id: 201,
},
{
id: 202,
},
{
id: 203,
},
{
id: 204,
},
}
tmpl := template.Must(template.New("letter").Parse("an email body"))
dbMap := mockEmailResolver{}
mc := &mocks.Mailer{}
m := &mailer{
log: blog.UseMock(),
mailer: mc,
dbMap: dbMap,
subject: "Test",
recipients: recipients,
emailTemplate: tmpl,
targetRange: interval{end: "\xFF"},
sleepInterval: 0,
clk: clock.NewFake(),
}
addressesToRecipients, err := m.resolveAddresses(context.Background())
test.AssertNotError(t, err, "failed to resolveAddresses")
expected := []string{
"example@letsencrypt.org",
"test-example-updated@letsencrypt.org",
"test-test-test@letsencrypt.org",
"gotta.lotta.accounts@letsencrypt.org",
}
test.AssertEquals(t, len(addressesToRecipients), len(expected))
for _, address := range expected {
if _, ok := addressesToRecipients[address]; !ok {
t.Errorf("missing entry in addressesToRecipients: %q", address)
}
}
}
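The -start/-end range semantics exercised by these tests (start inclusive, end exclusive, lexicographic comparison, with "\xFF" as the default open-ended upper bound) reduce to a plain string comparison. A minimal sketch; the helper name `inRange` is illustrative, not Boulder's API:

```go
package main

import "fmt"

// inRange mirrors the targetRange semantics used by the mailer: an address
// is included when it is >= start (inclusive) and < end (exclusive),
// compared lexicographically as raw strings.
func inRange(addr, start, end string) bool {
	return addr >= start && addr < end
}

func main() {
	fmt.Println(inRange("example@example.com", "example@example.com", "example@example.comX")) // true: start is inclusive
	fmt.Println(inRange("example@example.comX", "example@example.com", "example@example.comX")) // false: end is exclusive
	fmt.Println(inRange("zzz@example.org", "example@example.com", "\xFF"))                      // true: "\xFF" is effectively open-ended
}
```

This is why the usage text suggests appending a character (e.g. `-end example@example.comX`) to target a single address: the exclusive end must sort just above the one address you want.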

@@ -1,3 +0,0 @@
This is a test message body regarding these domains:
{{ range . }} {{ .Data.domainName }}
{{ end }}

@@ -1,4 +0,0 @@
id,domainName
1,one.example.com
2,two.example.net
3,three.example.org

@@ -11,6 +11,7 @@ import (
"github.com/letsencrypt/boulder/cmd"
"github.com/letsencrypt/boulder/features"
bgrpc "github.com/letsencrypt/boulder/grpc"
"github.com/letsencrypt/boulder/iana"
"github.com/letsencrypt/boulder/va"
vaConfig "github.com/letsencrypt/boulder/va/config"
vapb "github.com/letsencrypt/boulder/va/proto"
@@ -86,16 +87,12 @@ func main() {
clk := cmd.Clock()
var servers bdns.ServerProvider
proto := "udp"
if features.Get().DOH {
proto = "tcp"
}
if len(c.RVA.DNSStaticResolvers) != 0 {
servers, err = bdns.NewStaticProvider(c.RVA.DNSStaticResolvers)
cmd.FailOnError(err, "Couldn't start static DNS server resolver")
} else {
servers, err = bdns.StartDynamicProvider(c.RVA.DNSProvider, 60*time.Second, proto)
servers, err = bdns.StartDynamicProvider(c.RVA.DNSProvider, 60*time.Second, "tcp")
cmd.FailOnError(err, "Couldn't start dynamic DNS server resolver")
}
defer servers.Stop()
@@ -115,6 +112,7 @@ func main() {
scope,
clk,
c.RVA.DNSTries,
c.RVA.UserAgent,
logger,
tlsConfig)
} else {
@@ -124,6 +122,7 @@ func main() {
scope,
clk,
c.RVA.DNSTries,
c.RVA.UserAgent,
logger,
tlsConfig)
}
@@ -139,7 +138,7 @@ func main() {
c.RVA.AccountURIPrefixes,
c.RVA.Perspective,
c.RVA.RIR,
bdns.IsReservedIP)
iana.IsReservedAddr)
cmd.FailOnError(err, "Unable to create Remote-VA server")
start, err := bgrpc.NewServer(c.RVA.GRPC, logger).Add(

@@ -1,5 +1,5 @@
// Read a list of reversed hostnames, separated by newlines. Print only those
// that are rejected by the current policy.
// Read a list of reversed FQDNs and/or normal IP addresses, separated by
// newlines. Print only those that are rejected by the current policy.
package notmain
@@ -9,6 +9,7 @@ import (
"fmt"
"io"
"log"
"net/netip"
"os"
"github.com/letsencrypt/boulder/cmd"
@@ -40,7 +41,7 @@ func main() {
scanner := bufio.NewScanner(input)
logger := cmd.NewLogger(cmd.SyslogConfig{StdoutLevel: 7})
logger.Info(cmd.VersionString())
pa, err := policy.New(nil, logger)
pa, err := policy.New(nil, nil, logger)
if err != nil {
log.Fatal(err)
}
@@ -50,8 +51,15 @@ func main() {
}
var errors bool
for scanner.Scan() {
n := sa.ReverseName(scanner.Text())
err := pa.WillingToIssue(identifier.ACMEIdentifiers{identifier.NewDNS(n)})
n := sa.EncodeIssuedName(scanner.Text())
var ident identifier.ACMEIdentifier
ip, err := netip.ParseAddr(n)
if err == nil {
ident = identifier.NewIP(ip)
} else {
ident = identifier.NewDNS(n)
}
err = pa.WillingToIssue(identifier.ACMEIdentifiers{ident})
if err != nil {
errors = true
fmt.Printf("%s: %s\n", n, err)

@@ -15,6 +15,7 @@ import (
capb "github.com/letsencrypt/boulder/ca/proto"
"github.com/letsencrypt/boulder/cmd"
"github.com/letsencrypt/boulder/core"
"github.com/letsencrypt/boulder/db"
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/metrics"
"github.com/letsencrypt/boulder/rocsp"
@@ -39,8 +40,8 @@ func makeClient() (*rocsp.RWClient, clock.Clock) {
rdb := redis.NewRing(&redis.RingOptions{
Addrs: map[string]string{
"shard1": "10.33.33.2:4218",
"shard2": "10.33.33.3:4218",
"shard1": "10.77.77.2:4218",
"shard2": "10.77.77.3:4218",
},
Username: "unittest-rw",
Password: "824968fa490f4ecec1e52d5e34916bdb60d45f8d",
@ -50,29 +51,34 @@ func makeClient() (*rocsp.RWClient, clock.Clock) {
return rocsp.NewWritingClient(rdb, 500*time.Millisecond, clk, metrics.NoopRegisterer), clk
}
func TestGetStartingID(t *testing.T) {
ctx := context.Background()
func insertCertificateStatus(t *testing.T, dbMap db.Executor, serial string, notAfter, ocspLastUpdated time.Time) int64 {
result, err := dbMap.ExecContext(context.Background(),
`INSERT INTO certificateStatus
(serial, notAfter, status, ocspLastUpdated, revokedDate, revokedReason, lastExpirationNagSent, issuerID)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)`,
serial,
notAfter,
core.OCSPStatusGood,
ocspLastUpdated,
time.Time{},
0,
time.Time{},
99)
test.AssertNotError(t, err, "inserting certificate status")
id, err := result.LastInsertId()
test.AssertNotError(t, err, "getting last insert ID")
return id
}
func TestGetStartingID(t *testing.T) {
clk := clock.NewFake()
dbMap, err := sa.DBMapForTest(vars.DBConnSAFullPerms)
test.AssertNotError(t, err, "failed setting up db client")
defer test.ResetBoulderTestDatabase(t)()
cs := core.CertificateStatus{
Serial: "1337",
NotAfter: clk.Now().Add(12 * time.Hour),
}
err = dbMap.Insert(ctx, &cs)
test.AssertNotError(t, err, "inserting certificate status")
firstID := cs.ID
firstID := insertCertificateStatus(t, dbMap, "1337", clk.Now().Add(12*time.Hour), time.Time{})
secondID := insertCertificateStatus(t, dbMap, "1338", clk.Now().Add(36*time.Hour), time.Time{})
cs = core.CertificateStatus{
Serial: "1338",
NotAfter: clk.Now().Add(36 * time.Hour),
}
err = dbMap.Insert(ctx, &cs)
test.AssertNotError(t, err, "inserting certificate status")
secondID := cs.ID
t.Logf("first ID %d, second ID %d", firstID, secondID)
clk.Sleep(48 * time.Hour)
@ -131,11 +137,7 @@ func TestLoadFromDB(t *testing.T) {
defer test.ResetBoulderTestDatabase(t)
for i := range 100 {
err = dbMap.Insert(context.Background(), &core.CertificateStatus{
Serial: fmt.Sprintf("%036x", i),
NotAfter: clk.Now().Add(200 * time.Hour),
OCSPLastUpdated: clk.Now(),
})
insertCertificateStatus(t, dbMap, fmt.Sprintf("%036x", i), clk.Now().Add(200*time.Hour), clk.Now())
if err != nil {
t.Fatalf("Failed to insert certificateStatus: %s", err)
}


@ -31,7 +31,7 @@ import (
"go.opentelemetry.io/otel/propagation"
"go.opentelemetry.io/otel/sdk/resource"
"go.opentelemetry.io/otel/sdk/trace"
semconv "go.opentelemetry.io/otel/semconv/v1.25.0"
semconv "go.opentelemetry.io/otel/semconv/v1.30.0"
"google.golang.org/grpc/grpclog"
"github.com/letsencrypt/boulder/config"


@ -23,22 +23,24 @@ var (
validPAConfig = []byte(`{
"dbConnect": "dummyDBConnect",
"enforcePolicyWhitelist": false,
"challenges": { "http-01": true }
"challenges": { "http-01": true },
"identifiers": { "dns": true, "ip": true }
}`)
invalidPAConfig = []byte(`{
"dbConnect": "dummyDBConnect",
"enforcePolicyWhitelist": false,
"challenges": { "nonsense": true }
"challenges": { "nonsense": true },
"identifiers": { "openpgp": true }
}`)
noChallengesPAConfig = []byte(`{
noChallengesIdentsPAConfig = []byte(`{
"dbConnect": "dummyDBConnect",
"enforcePolicyWhitelist": false
}`)
emptyChallengesPAConfig = []byte(`{
emptyChallengesIdentsPAConfig = []byte(`{
"dbConnect": "dummyDBConnect",
"enforcePolicyWhitelist": false,
"challenges": {}
"challenges": {},
"identifiers": {}
}`)
)
@ -47,21 +49,25 @@ func TestPAConfigUnmarshal(t *testing.T) {
err := json.Unmarshal(validPAConfig, &pc1)
test.AssertNotError(t, err, "Failed to unmarshal PAConfig")
test.AssertNotError(t, pc1.CheckChallenges(), "Flagged valid challenges as bad")
test.AssertNotError(t, pc1.CheckIdentifiers(), "Flagged valid identifiers as bad")
var pc2 PAConfig
err = json.Unmarshal(invalidPAConfig, &pc2)
test.AssertNotError(t, err, "Failed to unmarshal PAConfig")
test.AssertError(t, pc2.CheckChallenges(), "Considered invalid challenges as good")
test.AssertError(t, pc2.CheckIdentifiers(), "Considered invalid identifiers as good")
var pc3 PAConfig
err = json.Unmarshal(noChallengesPAConfig, &pc3)
err = json.Unmarshal(noChallengesIdentsPAConfig, &pc3)
test.AssertNotError(t, err, "Failed to unmarshal PAConfig")
test.AssertError(t, pc3.CheckChallenges(), "Disallow empty challenges map")
test.AssertNotError(t, pc3.CheckIdentifiers(), "Disallowed empty identifiers map")
var pc4 PAConfig
err = json.Unmarshal(emptyChallengesPAConfig, &pc4)
err = json.Unmarshal(emptyChallengesIdentsPAConfig, &pc4)
test.AssertNotError(t, err, "Failed to unmarshal PAConfig")
test.AssertError(t, pc4.CheckChallenges(), "Disallow empty challenges map")
test.AssertNotError(t, pc4.CheckIdentifiers(), "Disallowed empty identifiers map")
}
func TestMysqlLogger(t *testing.T) {
@ -127,16 +133,13 @@ func TestReadConfigFile(t *testing.T) {
test.AssertError(t, err, "ReadConfigFile('') did not error")
type config struct {
NotifyMailer struct {
DB DBConfig
SMTPConfig
}
Syslog SyslogConfig
GRPC *GRPCClientConfig
TLS *TLSConfig
}
var c config
err = ReadConfigFile("../test/config/notify-mailer.json", &c)
test.AssertNotError(t, err, "ReadConfigFile(../test/config/notify-mailer.json) errored")
test.AssertEquals(t, c.NotifyMailer.SMTPConfig.Server, "localhost")
err = ReadConfigFile("../test/config/health-checker.json", &c)
test.AssertNotError(t, err, "ReadConfigFile(../test/config/health-checker.json) errored")
test.AssertEquals(t, c.GRPC.Timeout.Duration, 1*time.Second)
}
func TestLogWriter(t *testing.T) {
@ -273,7 +276,6 @@ func TestFailExit(t *testing.T) {
return
}
//nolint: gosec // Test-only code is not concerned about untrusted values in os.Args[0]
cmd := exec.Command(os.Args[0], "-test.run=TestFailExit")
cmd.Env = append(os.Environ(), "TIME_TO_DIE=1")
output, err := cmd.CombinedOutput()
@ -300,7 +302,6 @@ func TestPanicStackTrace(t *testing.T) {
return
}
//nolint: gosec // Test-only code is not concerned about untrusted values in os.Args[0]
cmd := exec.Command(os.Args[0], "-test.run=TestPanicStackTrace")
cmd.Env = append(os.Environ(), "AT_THE_DISCO=1")
output, err := cmd.CombinedOutput()


@ -6,7 +6,7 @@ import (
"encoding/json"
"fmt"
"hash/fnv"
"net"
"net/netip"
"strings"
"time"
@ -68,7 +68,7 @@ func (c AcmeChallenge) IsValid() bool {
}
}
// OCSPStatus defines the state of OCSP for a domain
// OCSPStatus defines the state of OCSP for a certificate
type OCSPStatus string
// These status are the states of OCSP
@ -123,11 +123,11 @@ type ValidationRecord struct {
// Shared
//
// TODO(#7311): Replace DnsName with Identifier.
DnsName string `json:"hostname,omitempty"`
// Hostname can hold either a DNS name or an IP address.
Hostname string `json:"hostname,omitempty"`
Port string `json:"port,omitempty"`
AddressesResolved []net.IP `json:"addressesResolved,omitempty"`
AddressUsed net.IP `json:"addressUsed,omitempty"`
AddressesResolved []netip.Addr `json:"addressesResolved,omitempty"`
AddressUsed netip.Addr `json:"addressUsed,omitempty"`
// AddressesTried contains a list of addresses tried before the `AddressUsed`.
// Presently this will only ever be one IP from `AddressesResolved` since the
@ -143,7 +143,7 @@ type ValidationRecord struct {
// AddressesTried: [ ::1 ],
// ...
// }
AddressesTried []net.IP `json:"addressesTried,omitempty"`
AddressesTried []netip.Addr `json:"addressesTried,omitempty"`
// ResolverAddrs is the host:port of the DNS resolver(s) that fulfilled the
// lookup for AddressUsed. During recursive A and AAAA lookups, a record may
@ -210,7 +210,7 @@ func (ch Challenge) RecordsSane() bool {
for _, rec := range ch.ValidationRecord {
// TODO(#7140): Add a check for ResolverAddress == "" only after the
// core.proto change has been deployed.
if rec.URL == "" || rec.DnsName == "" || rec.Port == "" || rec.AddressUsed == nil ||
if rec.URL == "" || rec.Hostname == "" || rec.Port == "" || (rec.AddressUsed == netip.Addr{}) ||
len(rec.AddressesResolved) == 0 {
return false
}
@ -224,8 +224,8 @@ func (ch Challenge) RecordsSane() bool {
}
// TODO(#7140): Add a check for ResolverAddress == "" only after the
// core.proto change has been deployed.
if ch.ValidationRecord[0].DnsName == "" || ch.ValidationRecord[0].Port == "" ||
ch.ValidationRecord[0].AddressUsed == nil || len(ch.ValidationRecord[0].AddressesResolved) == 0 {
if ch.ValidationRecord[0].Hostname == "" || ch.ValidationRecord[0].Port == "" ||
(ch.ValidationRecord[0].AddressUsed == netip.Addr{}) || len(ch.ValidationRecord[0].AddressesResolved) == 0 {
return false
}
case ChallengeTypeDNS01:
@ -234,7 +234,7 @@ func (ch Challenge) RecordsSane() bool {
}
// TODO(#7140): Add a check for ResolverAddress == "" only after the
// core.proto change has been deployed.
if ch.ValidationRecord[0].DnsName == "" {
if ch.ValidationRecord[0].Hostname == "" {
return false
}
return true
@ -271,10 +271,10 @@ func (ch Challenge) StringID() string {
return base64.RawURLEncoding.EncodeToString(h.Sum(nil)[0:4])
}
// Authorization represents the authorization of an account key holder
// to act on behalf of a domain. This struct is intended to be used both
// internally and for JSON marshaling on the wire. Any fields that should be
// suppressed on the wire (e.g., ID, regID) must be made empty before marshaling.
// Authorization represents the authorization of an account key holder to act on
// behalf of an identifier. This struct is intended to be used both internally
// and for JSON marshaling on the wire. Any fields that should be suppressed on
// the wire (e.g., ID, regID) must be made empty before marshaling.
type Authorization struct {
// An identifier for this authorization, unique across
// authorizations and certificates within this instance.


@ -4,7 +4,7 @@ import (
"crypto/rsa"
"encoding/json"
"math/big"
"net"
"net/netip"
"testing"
"time"
@ -37,10 +37,10 @@ func TestRecordSanityCheckOnUnsupportedChallengeType(t *testing.T) {
rec := []ValidationRecord{
{
URL: "http://localhost/test",
DnsName: "localhost",
Hostname: "localhost",
Port: "80",
AddressesResolved: []net.IP{{127, 0, 0, 1}},
AddressUsed: net.IP{127, 0, 0, 1},
AddressesResolved: []netip.Addr{netip.MustParseAddr("127.0.0.1")},
AddressUsed: netip.MustParseAddr("127.0.0.1"),
ResolverAddrs: []string{"eastUnboundAndDown"},
},
}


@ -1,6 +1,6 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.34.1
// protoc-gen-go v1.36.5
// protoc v3.20.1
// source: core.proto
@ -12,6 +12,7 @@ import (
timestamppb "google.golang.org/protobuf/types/known/timestamppb"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
@ -22,22 +23,19 @@ const (
)
type Identifier struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"`
Value string `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *Identifier) Reset() {
*x = Identifier{}
if protoimpl.UnsafeEnabled {
mi := &file_core_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Identifier) String() string {
return protoimpl.X.MessageStringOf(x)
@ -47,7 +45,7 @@ func (*Identifier) ProtoMessage() {}
func (x *Identifier) ProtoReflect() protoreflect.Message {
mi := &file_core_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -77,10 +75,7 @@ func (x *Identifier) GetValue() string {
}
type Challenge struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
Id int64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
// Fields specified by RFC 8555, Section 8.
Type string `protobuf:"bytes,2,opt,name=type,proto3" json:"type,omitempty"`
@ -92,16 +87,16 @@ type Challenge struct {
Token string `protobuf:"bytes,3,opt,name=token,proto3" json:"token,omitempty"`
// Additional fields for our own record keeping.
Validationrecords []*ValidationRecord `protobuf:"bytes,10,rep,name=validationrecords,proto3" json:"validationrecords,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *Challenge) Reset() {
*x = Challenge{}
if protoimpl.UnsafeEnabled {
mi := &file_core_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Challenge) String() string {
return protoimpl.X.MessageStringOf(x)
@ -111,7 +106,7 @@ func (*Challenge) ProtoMessage() {}
func (x *Challenge) ProtoReflect() protoreflect.Message {
mi := &file_core_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -183,32 +178,29 @@ func (x *Challenge) GetValidationrecords() []*ValidationRecord {
}
type ValidationRecord struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
// Next unused field number: 9
Hostname string `protobuf:"bytes,1,opt,name=hostname,proto3" json:"hostname,omitempty"`
Port string `protobuf:"bytes,2,opt,name=port,proto3" json:"port,omitempty"`
AddressesResolved [][]byte `protobuf:"bytes,3,rep,name=addressesResolved,proto3" json:"addressesResolved,omitempty"` // net.IP.MarshalText()
AddressUsed []byte `protobuf:"bytes,4,opt,name=addressUsed,proto3" json:"addressUsed,omitempty"` // net.IP.MarshalText()
AddressesResolved [][]byte `protobuf:"bytes,3,rep,name=addressesResolved,proto3" json:"addressesResolved,omitempty"` // netip.Addr.MarshalText()
AddressUsed []byte `protobuf:"bytes,4,opt,name=addressUsed,proto3" json:"addressUsed,omitempty"` // netip.Addr.MarshalText()
Authorities []string `protobuf:"bytes,5,rep,name=authorities,proto3" json:"authorities,omitempty"`
Url string `protobuf:"bytes,6,opt,name=url,proto3" json:"url,omitempty"`
// A list of addresses tried before the address used (see
// core/objects.go and the comment on the ValidationRecord structure
// definition for more information.
AddressesTried [][]byte `protobuf:"bytes,7,rep,name=addressesTried,proto3" json:"addressesTried,omitempty"` // net.IP.MarshalText()
AddressesTried [][]byte `protobuf:"bytes,7,rep,name=addressesTried,proto3" json:"addressesTried,omitempty"` // netip.Addr.MarshalText()
ResolverAddrs []string `protobuf:"bytes,8,rep,name=resolverAddrs,proto3" json:"resolverAddrs,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ValidationRecord) Reset() {
*x = ValidationRecord{}
if protoimpl.UnsafeEnabled {
mi := &file_core_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ValidationRecord) String() string {
return protoimpl.X.MessageStringOf(x)
@ -218,7 +210,7 @@ func (*ValidationRecord) ProtoMessage() {}
func (x *ValidationRecord) ProtoReflect() protoreflect.Message {
mi := &file_core_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -290,23 +282,20 @@ func (x *ValidationRecord) GetResolverAddrs() []string {
}
type ProblemDetails struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
ProblemType string `protobuf:"bytes,1,opt,name=problemType,proto3" json:"problemType,omitempty"`
Detail string `protobuf:"bytes,2,opt,name=detail,proto3" json:"detail,omitempty"`
HttpStatus int32 `protobuf:"varint,3,opt,name=httpStatus,proto3" json:"httpStatus,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *ProblemDetails) Reset() {
*x = ProblemDetails{}
if protoimpl.UnsafeEnabled {
mi := &file_core_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ProblemDetails) String() string {
return protoimpl.X.MessageStringOf(x)
@ -316,7 +305,7 @@ func (*ProblemDetails) ProtoMessage() {}
func (x *ProblemDetails) ProtoReflect() protoreflect.Message {
mi := &file_core_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -353,10 +342,7 @@ func (x *ProblemDetails) GetHttpStatus() int32 {
}
type Certificate struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
// Next unused field number: 9
RegistrationID int64 `protobuf:"varint,1,opt,name=registrationID,proto3" json:"registrationID,omitempty"`
Serial string `protobuf:"bytes,2,opt,name=serial,proto3" json:"serial,omitempty"`
@ -364,16 +350,16 @@ type Certificate struct {
Der []byte `protobuf:"bytes,4,opt,name=der,proto3" json:"der,omitempty"`
Issued *timestamppb.Timestamp `protobuf:"bytes,7,opt,name=issued,proto3" json:"issued,omitempty"`
Expires *timestamppb.Timestamp `protobuf:"bytes,8,opt,name=expires,proto3" json:"expires,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *Certificate) Reset() {
*x = Certificate{}
if protoimpl.UnsafeEnabled {
mi := &file_core_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Certificate) String() string {
return protoimpl.X.MessageStringOf(x)
@ -383,7 +369,7 @@ func (*Certificate) ProtoMessage() {}
func (x *Certificate) ProtoReflect() protoreflect.Message {
mi := &file_core_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -441,10 +427,7 @@ func (x *Certificate) GetExpires() *timestamppb.Timestamp {
}
type CertificateStatus struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
// Next unused field number: 16
Serial string `protobuf:"bytes,1,opt,name=serial,proto3" json:"serial,omitempty"`
Status string `protobuf:"bytes,3,opt,name=status,proto3" json:"status,omitempty"`
@ -455,16 +438,16 @@ type CertificateStatus struct {
NotAfter *timestamppb.Timestamp `protobuf:"bytes,14,opt,name=notAfter,proto3" json:"notAfter,omitempty"`
IsExpired bool `protobuf:"varint,10,opt,name=isExpired,proto3" json:"isExpired,omitempty"`
IssuerID int64 `protobuf:"varint,11,opt,name=issuerID,proto3" json:"issuerID,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *CertificateStatus) Reset() {
*x = CertificateStatus{}
if protoimpl.UnsafeEnabled {
mi := &file_core_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *CertificateStatus) String() string {
return protoimpl.X.MessageStringOf(x)
@ -474,7 +457,7 @@ func (*CertificateStatus) ProtoMessage() {}
func (x *CertificateStatus) ProtoReflect() protoreflect.Message {
mi := &file_core_proto_msgTypes[5]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -553,10 +536,7 @@ func (x *CertificateStatus) GetIssuerID() int64 {
}
type Registration struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
// Next unused field number: 10
Id int64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
Key []byte `protobuf:"bytes,2,opt,name=key,proto3" json:"key,omitempty"`
@ -564,16 +544,16 @@ type Registration struct {
Agreement string `protobuf:"bytes,5,opt,name=agreement,proto3" json:"agreement,omitempty"`
CreatedAt *timestamppb.Timestamp `protobuf:"bytes,9,opt,name=createdAt,proto3" json:"createdAt,omitempty"`
Status string `protobuf:"bytes,8,opt,name=status,proto3" json:"status,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *Registration) Reset() {
*x = Registration{}
if protoimpl.UnsafeEnabled {
mi := &file_core_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Registration) String() string {
return protoimpl.X.MessageStringOf(x)
@ -583,7 +563,7 @@ func (*Registration) ProtoMessage() {}
func (x *Registration) ProtoReflect() protoreflect.Message {
mi := &file_core_proto_msgTypes[6]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -641,10 +621,7 @@ func (x *Registration) GetStatus() string {
}
type Authorization struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
RegistrationID int64 `protobuf:"varint,3,opt,name=registrationID,proto3" json:"registrationID,omitempty"`
Identifier *Identifier `protobuf:"bytes,11,opt,name=identifier,proto3" json:"identifier,omitempty"`
@ -652,16 +629,16 @@ type Authorization struct {
Expires *timestamppb.Timestamp `protobuf:"bytes,9,opt,name=expires,proto3" json:"expires,omitempty"`
Challenges []*Challenge `protobuf:"bytes,6,rep,name=challenges,proto3" json:"challenges,omitempty"`
CertificateProfileName string `protobuf:"bytes,10,opt,name=certificateProfileName,proto3" json:"certificateProfileName,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *Authorization) Reset() {
*x = Authorization{}
if protoimpl.UnsafeEnabled {
mi := &file_core_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Authorization) String() string {
return protoimpl.X.MessageStringOf(x)
@ -671,7 +648,7 @@ func (*Authorization) ProtoMessage() {}
func (x *Authorization) ProtoReflect() protoreflect.Message {
mi := &file_core_proto_msgTypes[7]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -736,10 +713,7 @@ func (x *Authorization) GetCertificateProfileName() string {
}
type Order struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
Id int64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
RegistrationID int64 `protobuf:"varint,2,opt,name=registrationID,proto3" json:"registrationID,omitempty"`
// Fields specified by RFC 8555, Section 7.1.3
@ -756,16 +730,16 @@ type Order struct {
CertificateProfileName string `protobuf:"bytes,14,opt,name=certificateProfileName,proto3" json:"certificateProfileName,omitempty"`
Replaces string `protobuf:"bytes,15,opt,name=replaces,proto3" json:"replaces,omitempty"`
BeganProcessing bool `protobuf:"varint,9,opt,name=beganProcessing,proto3" json:"beganProcessing,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *Order) Reset() {
*x = Order{}
if protoimpl.UnsafeEnabled {
mi := &file_core_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Order) String() string {
return protoimpl.X.MessageStringOf(x)
@ -775,7 +749,7 @@ func (*Order) ProtoMessage() {}
func (x *Order) ProtoReflect() protoreflect.Message {
mi := &file_core_proto_msgTypes[8]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -875,24 +849,21 @@ func (x *Order) GetBeganProcessing() bool {
}
type CRLEntry struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
// Next unused field number: 5
Serial string `protobuf:"bytes,1,opt,name=serial,proto3" json:"serial,omitempty"`
Reason int32 `protobuf:"varint,2,opt,name=reason,proto3" json:"reason,omitempty"`
RevokedAt *timestamppb.Timestamp `protobuf:"bytes,4,opt,name=revokedAt,proto3" json:"revokedAt,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *CRLEntry) Reset() {
*x = CRLEntry{}
if protoimpl.UnsafeEnabled {
mi := &file_core_proto_msgTypes[9]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *CRLEntry) String() string {
return protoimpl.X.MessageStringOf(x)
@ -902,7 +873,7 @@ func (*CRLEntry) ProtoMessage() {}
func (x *CRLEntry) ProtoReflect() protoreflect.Message {
mi := &file_core_proto_msgTypes[9]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -940,7 +911,7 @@ func (x *CRLEntry) GetRevokedAt() *timestamppb.Timestamp {
var File_core_proto protoreflect.FileDescriptor
var file_core_proto_rawDesc = []byte{
var file_core_proto_rawDesc = string([]byte{
0x0a, 0x0a, 0x63, 0x6f, 0x72, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x04, 0x63, 0x6f,
0x72, 0x65, 0x1a, 0x1f, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x62, 0x75, 0x66, 0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x2e, 0x70, 0x72,
@ -1114,22 +1085,22 @@ var file_core_proto_rawDesc = []byte{
0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x6c, 0x65, 0x74, 0x73, 0x65, 0x6e, 0x63, 0x72, 0x79,
0x70, 0x74, 0x2f, 0x62, 0x6f, 0x75, 0x6c, 0x64, 0x65, 0x72, 0x2f, 0x63, 0x6f, 0x72, 0x65, 0x2f,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
})
var (
file_core_proto_rawDescOnce sync.Once
file_core_proto_rawDescData = file_core_proto_rawDesc
file_core_proto_rawDescData []byte
)
func file_core_proto_rawDescGZIP() []byte {
file_core_proto_rawDescOnce.Do(func() {
file_core_proto_rawDescData = protoimpl.X.CompressGZIP(file_core_proto_rawDescData)
file_core_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_core_proto_rawDesc), len(file_core_proto_rawDesc)))
})
return file_core_proto_rawDescData
}
var file_core_proto_msgTypes = make([]protoimpl.MessageInfo, 10)
var file_core_proto_goTypes = []interface{}{
var file_core_proto_goTypes = []any{
(*Identifier)(nil), // 0: core.Identifier
(*Challenge)(nil), // 1: core.Challenge
(*ValidationRecord)(nil), // 2: core.ValidationRecord
@ -1173,133 +1144,11 @@ func file_core_proto_init() {
if File_core_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_core_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Identifier); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_core_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Challenge); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_core_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ValidationRecord); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_core_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ProblemDetails); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_core_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Certificate); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_core_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CertificateStatus); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_core_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Registration); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_core_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Authorization); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_core_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Order); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_core_proto_msgTypes[9].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CRLEntry); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_core_proto_rawDesc,
RawDescriptor: unsafe.Slice(unsafe.StringData(file_core_proto_rawDesc), len(file_core_proto_rawDesc)),
NumEnums: 0,
NumMessages: 10,
NumExtensions: 0,
@ -1310,7 +1159,6 @@ func file_core_proto_init() {
MessageInfos: file_core_proto_msgTypes,
}.Build()
File_core_proto = out.File
file_core_proto_rawDesc = nil
file_core_proto_goTypes = nil
file_core_proto_depIdxs = nil
}
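The generated-code churn above reflects protoc-gen-go v1.36 storing the raw descriptor as an immutable `string` rather than a `[]byte`, then viewing it as bytes without a copy via `unsafe.Slice(unsafe.StringData(...), len(...))`. A minimal sketch of that zero-copy conversion (the `rawDesc` constant here is a stand-in, not the real descriptor):

```go
package main

import (
	"fmt"
	"unsafe"
)

// bytesOf returns a []byte view of s's backing array without copying.
// The caller must treat the result as read-only; mutating it is
// undefined behavior, which is why the generator keeps the descriptor
// as a string in the first place.
func bytesOf(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}

func main() {
	rawDesc := "\x0a\x0acore.proto" // illustrative stand-in for file_core_proto_rawDesc
	b := bytesOf(rawDesc)
	fmt.Println(len(b), string(b) == rawDesc)
}
```

This also explains why the new code no longer nils out `file_core_proto_rawDesc` at the end of `init`: a string constant costs nothing to retain, unlike the old mutable byte slice.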


@ -30,15 +30,15 @@ message ValidationRecord {
// Next unused field number: 9
string hostname = 1;
string port = 2;
repeated bytes addressesResolved = 3; // net.IP.MarshalText()
bytes addressUsed = 4; // net.IP.MarshalText()
repeated bytes addressesResolved = 3; // netip.Addr.MarshalText()
bytes addressUsed = 4; // netip.Addr.MarshalText()
repeated string authorities = 5;
string url = 6;
// A list of addresses tried before the address used (see
// core/objects.go and the comment on the ValidationRecord structure
// definition for more information.
repeated bytes addressesTried = 7; // net.IP.MarshalText()
repeated bytes addressesTried = 7; // netip.Addr.MarshalText()
repeated string resolverAddrs = 8;
}


@ -1,6 +1,6 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.34.1
// protoc-gen-go v1.36.5
// protoc v3.20.1
// source: storer.proto
@ -13,6 +13,7 @@ import (
timestamppb "google.golang.org/protobuf/types/known/timestamppb"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
@ -23,25 +24,22 @@ const (
)
type UploadCRLRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
// Types that are assignable to Payload:
state protoimpl.MessageState `protogen:"open.v1"`
// Types that are valid to be assigned to Payload:
//
// *UploadCRLRequest_Metadata
// *UploadCRLRequest_CrlChunk
Payload isUploadCRLRequest_Payload `protobuf_oneof:"payload"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *UploadCRLRequest) Reset() {
*x = UploadCRLRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_storer_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *UploadCRLRequest) String() string {
return protoimpl.X.MessageStringOf(x)
@ -51,7 +49,7 @@ func (*UploadCRLRequest) ProtoMessage() {}
func (x *UploadCRLRequest) ProtoReflect() protoreflect.Message {
mi := &file_storer_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -66,24 +64,28 @@ func (*UploadCRLRequest) Descriptor() ([]byte, []int) {
return file_storer_proto_rawDescGZIP(), []int{0}
}
func (m *UploadCRLRequest) GetPayload() isUploadCRLRequest_Payload {
if m != nil {
return m.Payload
func (x *UploadCRLRequest) GetPayload() isUploadCRLRequest_Payload {
if x != nil {
return x.Payload
}
return nil
}
func (x *UploadCRLRequest) GetMetadata() *CRLMetadata {
if x, ok := x.GetPayload().(*UploadCRLRequest_Metadata); ok {
if x != nil {
if x, ok := x.Payload.(*UploadCRLRequest_Metadata); ok {
return x.Metadata
}
}
return nil
}
func (x *UploadCRLRequest) GetCrlChunk() []byte {
if x, ok := x.GetPayload().(*UploadCRLRequest_CrlChunk); ok {
if x != nil {
if x, ok := x.Payload.(*UploadCRLRequest_CrlChunk); ok {
return x.CrlChunk
}
}
return nil
}
@ -104,25 +106,22 @@ func (*UploadCRLRequest_Metadata) isUploadCRLRequest_Payload() {}
func (*UploadCRLRequest_CrlChunk) isUploadCRLRequest_Payload() {}
type CRLMetadata struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
IssuerNameID int64 `protobuf:"varint,1,opt,name=issuerNameID,proto3" json:"issuerNameID,omitempty"`
Number int64 `protobuf:"varint,2,opt,name=number,proto3" json:"number,omitempty"`
ShardIdx int64 `protobuf:"varint,3,opt,name=shardIdx,proto3" json:"shardIdx,omitempty"`
Expires *timestamppb.Timestamp `protobuf:"bytes,4,opt,name=expires,proto3" json:"expires,omitempty"`
CacheControl string `protobuf:"bytes,5,opt,name=cacheControl,proto3" json:"cacheControl,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *CRLMetadata) Reset() {
*x = CRLMetadata{}
if protoimpl.UnsafeEnabled {
mi := &file_storer_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *CRLMetadata) String() string {
return protoimpl.X.MessageStringOf(x)
@ -132,7 +131,7 @@ func (*CRLMetadata) ProtoMessage() {}
func (x *CRLMetadata) ProtoReflect() protoreflect.Message {
mi := &file_storer_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -184,7 +183,7 @@ func (x *CRLMetadata) GetCacheControl() string {
var File_storer_proto protoreflect.FileDescriptor
var file_storer_proto_rawDesc = []byte{
var file_storer_proto_rawDesc = string([]byte{
0x0a, 0x0c, 0x73, 0x74, 0x6f, 0x72, 0x65, 0x72, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x06,
0x73, 0x74, 0x6f, 0x72, 0x65, 0x72, 0x1a, 0x1b, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x65, 0x6d, 0x70, 0x74, 0x79, 0x2e, 0x70, 0x72,
@ -219,22 +218,22 @@ var file_storer_proto_rawDesc = []byte{
0x2f, 0x62, 0x6f, 0x75, 0x6c, 0x64, 0x65, 0x72, 0x2f, 0x63, 0x72, 0x6c, 0x2f, 0x73, 0x74, 0x6f,
0x72, 0x65, 0x72, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x33,
}
})
var (
file_storer_proto_rawDescOnce sync.Once
file_storer_proto_rawDescData = file_storer_proto_rawDesc
file_storer_proto_rawDescData []byte
)
func file_storer_proto_rawDescGZIP() []byte {
file_storer_proto_rawDescOnce.Do(func() {
file_storer_proto_rawDescData = protoimpl.X.CompressGZIP(file_storer_proto_rawDescData)
file_storer_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_storer_proto_rawDesc), len(file_storer_proto_rawDesc)))
})
return file_storer_proto_rawDescData
}
var file_storer_proto_msgTypes = make([]protoimpl.MessageInfo, 2)
var file_storer_proto_goTypes = []interface{}{
var file_storer_proto_goTypes = []any{
(*UploadCRLRequest)(nil), // 0: storer.UploadCRLRequest
(*CRLMetadata)(nil), // 1: storer.CRLMetadata
(*timestamppb.Timestamp)(nil), // 2: google.protobuf.Timestamp
@ -257,33 +256,7 @@ func file_storer_proto_init() {
if File_storer_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_storer_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*UploadCRLRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_storer_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CRLMetadata); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
file_storer_proto_msgTypes[0].OneofWrappers = []interface{}{
file_storer_proto_msgTypes[0].OneofWrappers = []any{
(*UploadCRLRequest_Metadata)(nil),
(*UploadCRLRequest_CrlChunk)(nil),
}
@ -291,7 +264,7 @@ func file_storer_proto_init() {
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_storer_proto_rawDesc,
RawDescriptor: unsafe.Slice(unsafe.StringData(file_storer_proto_rawDesc), len(file_storer_proto_rawDesc)),
NumEnums: 0,
NumMessages: 2,
NumExtensions: 0,
@ -302,7 +275,6 @@ func file_storer_proto_init() {
MessageInfos: file_storer_proto_msgTypes,
}.Build()
File_storer_proto = out.File
file_storer_proto_rawDesc = nil
file_storer_proto_goTypes = nil
file_storer_proto_depIdxs = nil
}

View File

@ -1,6 +1,6 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.3.0
// - protoc-gen-go-grpc v1.5.1
// - protoc v3.20.1
// source: storer.proto
@ -53,20 +53,24 @@ type CRLStorer_UploadCRLClient = grpc.ClientStreamingClient[UploadCRLRequest, em
// CRLStorerServer is the server API for CRLStorer service.
// All implementations must embed UnimplementedCRLStorerServer
// for forward compatibility
// for forward compatibility.
type CRLStorerServer interface {
UploadCRL(grpc.ClientStreamingServer[UploadCRLRequest, emptypb.Empty]) error
mustEmbedUnimplementedCRLStorerServer()
}
// UnimplementedCRLStorerServer must be embedded to have forward compatible implementations.
type UnimplementedCRLStorerServer struct {
}
// UnimplementedCRLStorerServer must be embedded to have
// forward compatible implementations.
//
// NOTE: this should be embedded by value instead of pointer to avoid a nil
// pointer dereference when methods are called.
type UnimplementedCRLStorerServer struct{}
func (UnimplementedCRLStorerServer) UploadCRL(grpc.ClientStreamingServer[UploadCRLRequest, emptypb.Empty]) error {
return status.Errorf(codes.Unimplemented, "method UploadCRL not implemented")
}
func (UnimplementedCRLStorerServer) mustEmbedUnimplementedCRLStorerServer() {}
func (UnimplementedCRLStorerServer) testEmbeddedByValue() {}
// UnsafeCRLStorerServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to CRLStorerServer will
@ -76,6 +80,13 @@ type UnsafeCRLStorerServer interface {
}
func RegisterCRLStorerServer(s grpc.ServiceRegistrar, srv CRLStorerServer) {
// If the following call panics, it indicates UnimplementedCRLStorerServer was
// embedded by pointer and is nil. This will cause panics if an
// unimplemented method is ever invoked, so we test this at initialization
// time to prevent it from happening at runtime later due to I/O.
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
t.testEmbeddedByValue()
}
s.RegisterService(&CRLStorer_ServiceDesc, srv)
}

View File

@ -80,8 +80,8 @@ func NewUpdater(
return nil, fmt.Errorf("must have positive number of shards, got: %d", numShards)
}
if updatePeriod >= 7*24*time.Hour {
return nil, fmt.Errorf("must update CRLs at least every 7 days, got: %s", updatePeriod)
if updatePeriod >= 24*time.Hour {
return nil, fmt.Errorf("must update CRLs at least every 24 hours, got: %s", updatePeriod)
}
if updateTimeout >= updatePeriod {

View File

@ -5,6 +5,7 @@ import (
"crypto"
"crypto/x509"
"errors"
"net/netip"
"strings"
"github.com/letsencrypt/boulder/core"
@ -34,13 +35,13 @@ var (
unsupportedSigAlg = berrors.BadCSRError("signature algorithm not supported")
invalidSig = berrors.BadCSRError("invalid signature on CSR")
invalidEmailPresent = berrors.BadCSRError("CSR contains one or more email address fields")
invalidIPPresent = berrors.BadCSRError("CSR contains one or more IP address fields")
invalidURIPresent = berrors.BadCSRError("CSR contains one or more URI fields")
invalidNoIdent = berrors.BadCSRError("at least one identifier is required")
)
// VerifyCSR checks the validity of a x509.CertificateRequest. It uses
// NamesFromCSR to normalize the DNS names before checking whether we'll issue
// for them.
// identifier.FromCSR to normalize the DNS names before checking whether we'll
// issue for them.
func VerifyCSR(ctx context.Context, csr *x509.CertificateRequest, maxNames int, keyPolicy *goodkey.KeyPolicy, pa core.PolicyAuthority) error {
key, ok := csr.PublicKey.(crypto.PublicKey)
if !ok {
@ -64,71 +65,54 @@ func VerifyCSR(ctx context.Context, csr *x509.CertificateRequest, maxNames int,
if len(csr.EmailAddresses) > 0 {
return invalidEmailPresent
}
if len(csr.IPAddresses) > 0 {
return invalidIPPresent
if len(csr.URIs) > 0 {
return invalidURIPresent
}
// NamesFromCSR also performs normalization, returning values that may not
// match the literal CSR contents.
names := NamesFromCSR(csr)
if len(names.SANs) == 0 && names.CN == "" {
// FromCSR also performs normalization, returning values that may not match
// the literal CSR contents.
idents := identifier.FromCSR(csr)
if len(idents) == 0 {
return invalidNoIdent
}
if len(names.CN) > maxCNLength {
return berrors.BadCSRError("CN was longer than %d bytes", maxCNLength)
}
if len(names.SANs) > maxNames {
return berrors.BadCSRError("CSR contains more than %d DNS names", maxNames)
if len(idents) > maxNames {
return berrors.BadCSRError("CSR contains more than %d identifiers", maxNames)
}
err = pa.WillingToIssue(identifier.NewDNSSlice(names.SANs))
err = pa.WillingToIssue(idents)
if err != nil {
return err
}
return nil
}
type names struct {
SANs []string
CN string
}
// NamesFromCSR deduplicates and lower-cases the Subject Common Name and Subject
// Alternative Names from the CSR. If a CN was provided, it will be used if it
// is short enough, otherwise there will be no CN. If no CN was provided, the CN
// will be the first SAN that is short enough, which is done only for backwards
// compatibility with prior Let's Encrypt behaviour. The resulting SANs will
// always include the original CN, if any.
//
// TODO(#7311): For callers that don't care about CNs, use identifier.FromCSR.
// For the rest, either revise the names struct to hold identifiers instead of
// strings, or add an ipSANs field (and rename SANs to dnsSANs).
func NamesFromCSR(csr *x509.CertificateRequest) names {
// Produce a new "sans" slice with the same memory address as csr.DNSNames
// but force a new allocation if an append happens so that we don't
// accidentally mutate the underlying csr.DNSNames array.
sans := csr.DNSNames[0:len(csr.DNSNames):len(csr.DNSNames)]
if csr.Subject.CommonName != "" {
sans = append(sans, csr.Subject.CommonName)
}
// CNFromCSR returns the lower-cased Subject Common Name from the CSR, if a
// short enough CN was provided. If it was too long or appears to be an IP,
// there will be no CN. If none was provided, the CN will be the first SAN that
// is short enough, which is done only for backwards compatibility with prior
// Let's Encrypt behaviour.
func CNFromCSR(csr *x509.CertificateRequest) string {
if len(csr.Subject.CommonName) > maxCNLength {
return names{SANs: core.UniqueLowerNames(sans)}
return ""
}
if csr.Subject.CommonName != "" {
return names{SANs: core.UniqueLowerNames(sans), CN: strings.ToLower(csr.Subject.CommonName)}
_, err := netip.ParseAddr(csr.Subject.CommonName)
if err == nil { // inverted; we're looking for successful parsing here
return ""
}
// If there's no CN already, but we want to set one, promote the first SAN
// which is shorter than the maximum acceptable CN length (if any).
for _, name := range sans {
return strings.ToLower(csr.Subject.CommonName)
}
// If there's no CN already, but we want to set one, promote the first dnsName
// SAN which is shorter than the maximum acceptable CN length (if any). We
// will never promote an ipAddress SAN to the CN.
for _, name := range csr.DNSNames {
if len(name) <= maxCNLength {
return names{SANs: core.UniqueLowerNames(sans), CN: strings.ToLower(name)}
return strings.ToLower(name)
}
}
return names{SANs: core.UniqueLowerNames(sans)}
return ""
}

View File

@ -9,6 +9,8 @@ import (
"encoding/asn1"
"errors"
"net"
"net/netip"
"net/url"
"strings"
"testing"
@ -68,6 +70,10 @@ func TestVerifyCSR(t *testing.T) {
signedReqWithIPAddress := new(x509.CertificateRequest)
*signedReqWithIPAddress = *signedReq
signedReqWithIPAddress.IPAddresses = []net.IP{net.IPv4(1, 2, 3, 4)}
signedReqWithURI := new(x509.CertificateRequest)
*signedReqWithURI = *signedReq
testURI, _ := url.ParseRequestURI("https://example.com/")
signedReqWithURI.URIs = []*url.URL{testURI}
signedReqWithAllLongSANs := new(x509.CertificateRequest)
*signedReqWithAllLongSANs = *signedReq
signedReqWithAllLongSANs.DNSNames = []string{"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa.com"}
@ -115,7 +121,7 @@ func TestVerifyCSR(t *testing.T) {
signedReqWithHosts,
1,
&mockPA{},
berrors.BadCSRError("CSR contains more than 1 DNS names"),
berrors.BadCSRError("CSR contains more than 1 identifiers"),
},
{
signedReqWithBadNames,
@ -133,7 +139,13 @@ func TestVerifyCSR(t *testing.T) {
signedReqWithIPAddress,
100,
&mockPA{},
invalidIPPresent,
nil,
},
{
signedReqWithURI,
100,
&mockPA{},
invalidURIPresent,
},
{
signedReqWithAllLongSANs,
@ -149,44 +161,38 @@ func TestVerifyCSR(t *testing.T) {
}
}
func TestNamesFromCSR(t *testing.T) {
func TestCNFromCSR(t *testing.T) {
tooLongString := strings.Repeat("a", maxCNLength+1)
cases := []struct {
name string
csr *x509.CertificateRequest
expectedCN string
expectedNames []string
}{
{
"no explicit CN",
&x509.CertificateRequest{DNSNames: []string{"a.com"}},
"a.com",
[]string{"a.com"},
},
{
"explicit uppercase CN",
&x509.CertificateRequest{Subject: pkix.Name{CommonName: "A.com"}, DNSNames: []string{"a.com"}},
"a.com",
[]string{"a.com"},
},
{
"no explicit CN, uppercase SAN",
&x509.CertificateRequest{DNSNames: []string{"A.com"}},
"a.com",
[]string{"a.com"},
},
{
"duplicate SANs",
&x509.CertificateRequest{DNSNames: []string{"b.com", "b.com", "a.com", "a.com"}},
"b.com",
[]string{"a.com", "b.com"},
},
{
"explicit CN not found in SANs",
&x509.CertificateRequest{Subject: pkix.Name{CommonName: "a.com"}, DNSNames: []string{"b.com"}},
"a.com",
[]string{"a.com", "b.com"},
},
{
"no explicit CN, all SANs too long to be the CN",
@ -195,7 +201,6 @@ func TestNamesFromCSR(t *testing.T) {
tooLongString + ".b.com",
}},
"",
[]string{tooLongString + ".a.com", tooLongString + ".b.com"},
},
{
"no explicit CN, leading SANs too long to be the CN",
@ -206,7 +211,6 @@ func TestNamesFromCSR(t *testing.T) {
"b.com",
}},
"a.com",
[]string{"a.com", tooLongString + ".a.com", tooLongString + ".b.com", "b.com"},
},
{
"explicit CN, leading SANs too long to be the CN",
@ -219,7 +223,6 @@ func TestNamesFromCSR(t *testing.T) {
"b.com",
}},
"a.com",
[]string{"a.com", tooLongString + ".a.com", tooLongString + ".b.com", "b.com"},
},
{
"explicit CN that's too long to be the CN",
@ -227,7 +230,6 @@ func TestNamesFromCSR(t *testing.T) {
Subject: pkix.Name{CommonName: tooLongString + ".a.com"},
},
"",
[]string{tooLongString + ".a.com"},
},
{
"explicit CN that's too long to be the CN, with a SAN",
@ -237,14 +239,27 @@ func TestNamesFromCSR(t *testing.T) {
"b.com",
}},
"",
[]string{tooLongString + ".a.com", "b.com"},
},
{
"explicit CN that's an IP",
&x509.CertificateRequest{
Subject: pkix.Name{CommonName: "127.0.0.1"},
},
"",
},
{
"no CN, only IP SANs",
&x509.CertificateRequest{
IPAddresses: []net.IP{
netip.MustParseAddr("127.0.0.1").AsSlice(),
},
},
"",
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
names := NamesFromCSR(tc.csr)
test.AssertEquals(t, names.CN, tc.expectedCN)
test.AssertDeepEquals(t, names.SANs, tc.expectedNames)
test.AssertEquals(t, CNFromCSR(tc.csr), tc.expectedCN)
})
}
}

View File

@ -1,93 +1,9 @@
package ctconfig
import (
"errors"
"fmt"
"time"
"github.com/letsencrypt/boulder/config"
)
// LogShard describes a single shard of a temporally sharded
// CT log
type LogShard struct {
URI string
Key string
WindowStart time.Time
WindowEnd time.Time
}
// TemporalSet contains a set of temporal shards of a single log
type TemporalSet struct {
Name string
Shards []LogShard
}
// Setup initializes the TemporalSet by parsing the start and end dates
// and verifying WindowEnd > WindowStart
func (ts *TemporalSet) Setup() error {
if ts.Name == "" {
return errors.New("Name cannot be empty")
}
if len(ts.Shards) == 0 {
return errors.New("temporal set contains no shards")
}
for i := range ts.Shards {
if !ts.Shards[i].WindowEnd.After(ts.Shards[i].WindowStart) {
return errors.New("WindowStart must be before WindowEnd")
}
}
return nil
}
// pick chooses the correct shard from a TemporalSet to use for the given
// expiration time. In the case where two shards have overlapping windows
// the earlier of the two shards will be chosen.
func (ts *TemporalSet) pick(exp time.Time) (*LogShard, error) {
for _, shard := range ts.Shards {
if exp.Before(shard.WindowStart) {
continue
}
if !exp.Before(shard.WindowEnd) {
continue
}
return &shard, nil
}
return nil, fmt.Errorf("no valid shard available for temporal set %q for expiration date %q", ts.Name, exp)
}
// LogDescription contains the information needed to submit certificates
// to a CT log and verify returned receipts. If TemporalSet is non-nil then
// URI and Key should be empty.
type LogDescription struct {
URI string
Key string
SubmitFinalCert bool
*TemporalSet
}
// Info returns the URI and key of the log, either from a plain log description
// or from the earliest valid shard from a temporal log set
func (ld LogDescription) Info(exp time.Time) (string, string, error) {
if ld.TemporalSet == nil {
return ld.URI, ld.Key, nil
}
shard, err := ld.TemporalSet.pick(exp)
if err != nil {
return "", "", err
}
return shard.URI, shard.Key, nil
}
// CTGroup represents a group of CT Logs. Although capable of holding logs
// grouped by any arbitrary feature, it is today primarily used to hold logs
// which are all operated by the same legal entity.
type CTGroup struct {
Name string
Logs []LogDescription
}
// CTConfig is the top-level config object expected to be embedded in an
// executable's JSON config struct.
type CTConfig struct {
@ -109,13 +25,3 @@ type CTConfig struct {
// and final certs to the same log.
FinalLogs []string
}
// LogID holds enough information to uniquely identify a CT Log: its log_id
// (the base64-encoding of the SHA-256 hash of its public key) and its human-
// readable name/description. This is used to extract other log parameters
// (such as its URL and public key) from the Chrome Log List.
type LogID struct {
Name string
ID string
SubmitFinal bool
}

View File

@ -1,116 +0,0 @@
package ctconfig
import (
"testing"
"time"
"github.com/jmhodges/clock"
"github.com/letsencrypt/boulder/test"
)
func TestTemporalSetup(t *testing.T) {
for _, tc := range []struct {
ts TemporalSet
err string
}{
{
ts: TemporalSet{},
err: "Name cannot be empty",
},
{
ts: TemporalSet{
Name: "temporal set",
},
err: "temporal set contains no shards",
},
{
ts: TemporalSet{
Name: "temporal set",
Shards: []LogShard{
{
WindowStart: time.Time{},
WindowEnd: time.Time{},
},
},
},
err: "WindowStart must be before WindowEnd",
},
{
ts: TemporalSet{
Name: "temporal set",
Shards: []LogShard{
{
WindowStart: time.Time{}.Add(time.Hour),
WindowEnd: time.Time{},
},
},
},
err: "WindowStart must be before WindowEnd",
},
{
ts: TemporalSet{
Name: "temporal set",
Shards: []LogShard{
{
WindowStart: time.Time{},
WindowEnd: time.Time{}.Add(time.Hour),
},
},
},
err: "",
},
} {
err := tc.ts.Setup()
if err != nil && tc.err != err.Error() {
t.Errorf("got error %q, wanted %q", err, tc.err)
} else if err == nil && tc.err != "" {
t.Errorf("unexpected error %q", err)
}
}
}
func TestLogInfo(t *testing.T) {
ld := LogDescription{
URI: "basic-uri",
Key: "basic-key",
}
uri, key, err := ld.Info(time.Time{})
test.AssertNotError(t, err, "Info failed")
test.AssertEquals(t, uri, ld.URI)
test.AssertEquals(t, key, ld.Key)
fc := clock.NewFake()
ld.TemporalSet = &TemporalSet{}
_, _, err = ld.Info(fc.Now())
test.AssertError(t, err, "Info should fail with a TemporalSet with no viable shards")
ld.TemporalSet.Shards = []LogShard{{WindowStart: fc.Now().Add(time.Hour), WindowEnd: fc.Now().Add(time.Hour * 2)}}
_, _, err = ld.Info(fc.Now())
test.AssertError(t, err, "Info should fail with a TemporalSet with no viable shards")
fc.Add(time.Hour * 4)
now := fc.Now()
ld.TemporalSet.Shards = []LogShard{
{
WindowStart: now.Add(time.Hour * -4),
WindowEnd: now.Add(time.Hour * -2),
URI: "a",
Key: "a",
},
{
WindowStart: now.Add(time.Hour * -2),
WindowEnd: now.Add(time.Hour * 2),
URI: "b",
Key: "b",
},
{
WindowStart: now.Add(time.Hour * 2),
WindowEnd: now.Add(time.Hour * 4),
URI: "c",
Key: "c",
},
}
uri, key, err = ld.Info(now)
test.AssertNotError(t, err, "Info failed")
test.AssertEquals(t, uri, "b")
test.AssertEquals(t, key, "b")
}

View File

@ -2,6 +2,7 @@ package ctpolicy
import (
"context"
"encoding/base64"
"fmt"
"strings"
"time"
@ -30,7 +31,6 @@ type CTPolicy struct {
stagger time.Duration
log blog.Logger
winnerCounter *prometheus.CounterVec
operatorGroupsGauge *prometheus.GaugeVec
shardExpiryGauge *prometheus.GaugeVec
}
@ -45,15 +45,6 @@ func New(pub pubpb.PublisherClient, sctLogs loglist.List, infoLogs loglist.List,
)
stats.MustRegister(winnerCounter)
operatorGroupsGauge := prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "ct_operator_group_size_gauge",
Help: "Gauge for CT operators group size, by operator and log source (capable of providing SCT, informational logs, logs we submit final certs to).",
},
[]string{"operator", "source"},
)
stats.MustRegister(operatorGroupsGauge)
shardExpiryGauge := prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "ct_shard_expiration_seconds",
@ -63,26 +54,14 @@ func New(pub pubpb.PublisherClient, sctLogs loglist.List, infoLogs loglist.List,
)
stats.MustRegister(shardExpiryGauge)
for op, group := range sctLogs {
operatorGroupsGauge.WithLabelValues(op, "sctLogs").Set(float64(len(group)))
for _, log := range group {
for _, log := range sctLogs {
if log.EndExclusive.IsZero() {
// Handles the case for non-temporally sharded logs too.
shardExpiryGauge.WithLabelValues(op, log.Name).Set(float64(0))
shardExpiryGauge.WithLabelValues(log.Operator, log.Name).Set(float64(0))
} else {
shardExpiryGauge.WithLabelValues(op, log.Name).Set(float64(log.EndExclusive.Unix()))
shardExpiryGauge.WithLabelValues(log.Operator, log.Name).Set(float64(log.EndExclusive.Unix()))
}
}
}
for op, group := range infoLogs {
operatorGroupsGauge.WithLabelValues(op, "infoLogs").Set(float64(len(group)))
}
for op, group := range finalLogs {
operatorGroupsGauge.WithLabelValues(op, "finalLogs").Set(float64(len(group)))
}
return &CTPolicy{
pub: pub,
@ -92,14 +71,13 @@ func New(pub pubpb.PublisherClient, sctLogs loglist.List, infoLogs loglist.List,
stagger: stagger,
log: log,
winnerCounter: winnerCounter,
operatorGroupsGauge: operatorGroupsGauge,
shardExpiryGauge: shardExpiryGauge,
}
}
type result struct {
log loglist.Log
sct []byte
url string
err error
}
@ -115,73 +93,68 @@ func (ctp *CTPolicy) GetSCTs(ctx context.Context, cert core.CertDER, expiration
subCtx, cancel := context.WithCancel(ctx)
defer cancel()
// This closure will be called in parallel once for each operator group.
getOne := func(i int, g string) ([]byte, string, error) {
// Sleep a little bit to stagger our requests to the later groups. Use `i-1`
// to compute the stagger duration so that the first two groups (indices 0
// This closure will be called in parallel once for each log.
getOne := func(i int, l loglist.Log) ([]byte, error) {
// Sleep a little bit to stagger our requests to the later logs. Use `i-1`
// to compute the stagger duration so that the first two logs (indices 0
// and 1) get negative or zero (i.e. instant) sleep durations. If the
// context gets cancelled (most likely because two logs from other operator
// groups returned SCTs already) before the sleep is complete, quit instead.
// context gets cancelled (most likely because we got enough SCTs from other
// logs already) before the sleep is complete, quit instead.
select {
case <-subCtx.Done():
return nil, "", subCtx.Err()
return nil, subCtx.Err()
case <-time.After(time.Duration(i-1) * ctp.stagger):
}
// Pick a random log from among those in the group. In practice, very few
// operator groups have more than one log, so this loses little flexibility.
url, key, err := ctp.sctLogs.PickOne(g, expiration)
if err != nil {
return nil, "", fmt.Errorf("unable to get log info: %w", err)
}
sct, err := ctp.pub.SubmitToSingleCTWithResult(ctx, &pubpb.Request{
LogURL: url,
LogPublicKey: key,
LogURL: l.Url,
LogPublicKey: base64.StdEncoding.EncodeToString(l.Key),
Der: cert,
Kind: pubpb.SubmissionType_sct,
})
if err != nil {
return nil, url, fmt.Errorf("ct submission to %q (%q) failed: %w", g, url, err)
return nil, fmt.Errorf("ct submission to %q (%q) failed: %w", l.Name, l.Url, err)
}
return sct.Sct, url, nil
return sct.Sct, nil
}
// Ensure that this channel has a buffer equal to the number of goroutines
// we're kicking off, so that they're all guaranteed to be able to write to
// it and exit without blocking and leaking.
results := make(chan result, len(ctp.sctLogs))
// Identify the set of candidate logs whose temporal interval includes this
// cert's expiry. Randomize the order of the logs so that we're not always
// trying to submit to the same two.
logs := ctp.sctLogs.ForTime(expiration).Permute()
// Kick off a collection of goroutines to try to submit the precert to each
// log operator group. Randomize the order of the groups so that we're not
// always trying to submit to the same two operators.
for i, group := range ctp.sctLogs.Permute() {
go func(i int, g string) {
sctDER, url, err := getOne(i, g)
results <- result{sct: sctDER, url: url, err: err}
}(i, group)
// log. Ensure that the results channel has a buffer equal to the number of
// goroutines we're kicking off, so that they're all guaranteed to be able to
// write to it and exit without blocking and leaking.
resChan := make(chan result, len(logs))
for i, log := range logs {
go func(i int, l loglist.Log) {
sctDER, err := getOne(i, l)
resChan <- result{log: l, sct: sctDER, err: err}
}(i, log)
}
go ctp.submitPrecertInformational(cert, expiration)
// Finally, collect SCTs and/or errors from our results channel. We know that
// we will collect len(ctp.sctLogs) results from the channel because every
// goroutine is guaranteed to write one result to the channel.
scts := make(core.SCTDERs, 0)
// we can collect len(logs) results from the channel because every goroutine
// is guaranteed to write one result (either sct or error) to the channel.
results := make([]result, 0)
errs := make([]string, 0)
for range len(ctp.sctLogs) {
res := <-results
for range len(logs) {
res := <-resChan
if res.err != nil {
errs = append(errs, res.err.Error())
if res.url != "" {
ctp.winnerCounter.WithLabelValues(res.url, failed).Inc()
}
ctp.winnerCounter.WithLabelValues(res.log.Url, failed).Inc()
continue
}
scts = append(scts, res.sct)
ctp.winnerCounter.WithLabelValues(res.url, succeeded).Inc()
if len(scts) >= 2 {
results = append(results, res)
ctp.winnerCounter.WithLabelValues(res.log.Url, succeeded).Inc()
scts := compliantSet(results)
if scts != nil {
return scts, nil
}
}
@ -196,6 +169,36 @@ func (ctp *CTPolicy) GetSCTs(ctx context.Context, cert core.CertDER, expiration
return nil, berrors.MissingSCTsError("failed to get 2 SCTs, got %d error(s): %s", len(errs), strings.Join(errs, "; "))
}
// compliantSet returns a slice of SCTs which complies with all relevant CT Log
// Policy requirements, namely that the set of SCTs:
// - contain at least two SCTs, which
// - come from logs run by at least two different operators, and
// - contain at least one RFC6962-compliant (i.e. non-static/tiled) log.
//
// If no such set of SCTs exists, returns nil.
func compliantSet(results []result) core.SCTDERs {
for _, first := range results {
if first.err != nil {
continue
}
for _, second := range results {
if second.err != nil {
continue
}
if first.log.Operator == second.log.Operator {
// The two SCTs must come from different operators.
continue
}
if first.log.Tiled && second.log.Tiled {
// At least one must come from a non-tiled log.
continue
}
return core.SCTDERs{first.sct, second.sct}
}
}
return nil
}
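The pairing rule in `compliantSet` can be illustrated with a self-contained sketch. The `sctResult` struct and field names below are trimmed-down assumptions for illustration, not the real `loglist.Log` or `result` types:

```go
package main

import "fmt"

// sctResult models one log submission outcome: which operator ran the log,
// whether the log is static/tiled, and the SCT (or error) we got back.
type sctResult struct {
	operator string
	tiled    bool
	sct      []byte
	err      error
}

// compliantPair mirrors the compliantSet rules: two SCTs from logs run by
// distinct operators, at least one from a non-tiled (RFC 6962) log.
func compliantPair(results []sctResult) [][]byte {
	for _, first := range results {
		if first.err != nil {
			continue
		}
		for _, second := range results {
			if second.err != nil {
				continue
			}
			if first.operator == second.operator {
				continue
			}
			if first.tiled && second.tiled {
				continue
			}
			return [][]byte{first.sct, second.sct}
		}
	}
	return nil
}

func main() {
	// Two tiled logs from different operators: not yet a compliant set.
	res := []sctResult{
		{operator: "OperA", tiled: true, sct: []byte{1}},
		{operator: "OperB", tiled: true, sct: []byte{2}},
	}
	fmt.Println(compliantPair(res) == nil) // true

	// A non-tiled log from a second operator completes a compliant pair.
	res = append(res, sctResult{operator: "OperB", tiled: false, sct: []byte{3}})
	fmt.Println(compliantPair(res) != nil) // true
}
```

Because `GetSCTs` re-runs this check after every result arrives, it can return as soon as any compliant pair exists rather than waiting for all goroutines.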
// submitAllBestEffort submits the given certificate or precertificate to every
// log ("informational" for precerts, "final" for certs) configured in the policy.
// It neither waits for these submissions to complete, nor tracks their success.
@ -205,8 +208,7 @@ func (ctp *CTPolicy) submitAllBestEffort(blob core.CertDER, kind pubpb.Submissio
logs = ctp.infoLogs
}
for _, group := range logs {
for _, log := range group {
for _, log := range logs {
if log.StartInclusive.After(expiry) || log.EndExclusive.Equal(expiry) || log.EndExclusive.Before(expiry) {
continue
}
@ -216,7 +218,7 @@ func (ctp *CTPolicy) submitAllBestEffort(blob core.CertDER, kind pubpb.Submissio
context.Background(),
&pubpb.Request{
LogURL: log.Url,
LogPublicKey: log.Key,
LogPublicKey: base64.StdEncoding.EncodeToString(log.Key),
Der: blob,
Kind: kind,
},
@ -228,8 +230,6 @@ func (ctp *CTPolicy) submitAllBestEffort(blob core.CertDER, kind pubpb.Submissio
}
}
}
// submitPrecertInformational submits precertificates to any configured
// "informational" logs, but does not care about success or returned SCTs.
func (ctp *CTPolicy) submitPrecertInformational(cert core.CertDER, expiration time.Time) {

View File

@ -1,6 +1,7 @@
package ctpolicy
import (
"bytes"
"context"
"errors"
"strings"
@ -8,6 +9,9 @@ import (
"time"
"github.com/jmhodges/clock"
"github.com/prometheus/client_golang/prometheus"
"google.golang.org/grpc"
"github.com/letsencrypt/boulder/core"
"github.com/letsencrypt/boulder/ctpolicy/loglist"
berrors "github.com/letsencrypt/boulder/errors"
@ -15,8 +19,6 @@ import (
"github.com/letsencrypt/boulder/metrics"
pubpb "github.com/letsencrypt/boulder/publisher/proto"
"github.com/letsencrypt/boulder/test"
"github.com/prometheus/client_golang/prometheus"
"google.golang.org/grpc"
)
type mockPub struct{}
@ -45,7 +47,7 @@ func TestGetSCTs(t *testing.T) {
testCases := []struct {
name string
mock pubpb.PublisherClient
groups loglist.List
logs loglist.List
ctx context.Context
result core.SCTDERs
expectErr string
@ -54,17 +56,11 @@ func TestGetSCTs(t *testing.T) {
{
name: "basic success case",
mock: &mockPub{},
groups: loglist.List{
"OperA": {
"LogA1": {Url: "UrlA1", Key: "KeyA1"},
"LogA2": {Url: "UrlA2", Key: "KeyA2"},
},
"OperB": {
"LogB1": {Url: "UrlB1", Key: "KeyB1"},
},
"OperC": {
"LogC1": {Url: "UrlC1", Key: "KeyC1"},
},
logs: loglist.List{
{Name: "LogA1", Operator: "OperA", Url: "UrlA1", Key: []byte("KeyA1")},
{Name: "LogA2", Operator: "OperA", Url: "UrlA2", Key: []byte("KeyA2")},
{Name: "LogB1", Operator: "OperB", Url: "UrlB1", Key: []byte("KeyB1")},
{Name: "LogC1", Operator: "OperC", Url: "UrlC1", Key: []byte("KeyC1")},
},
ctx: context.Background(),
result: core.SCTDERs{[]byte{0}, []byte{0}},
@ -72,36 +68,24 @@ func TestGetSCTs(t *testing.T) {
{
name: "basic failure case",
mock: &mockFailPub{},
groups: loglist.List{
"OperA": {
"LogA1": {Url: "UrlA1", Key: "KeyA1"},
"LogA2": {Url: "UrlA2", Key: "KeyA2"},
},
"OperB": {
"LogB1": {Url: "UrlB1", Key: "KeyB1"},
},
"OperC": {
"LogC1": {Url: "UrlC1", Key: "KeyC1"},
},
logs: loglist.List{
{Name: "LogA1", Operator: "OperA", Url: "UrlA1", Key: []byte("KeyA1")},
{Name: "LogA2", Operator: "OperA", Url: "UrlA2", Key: []byte("KeyA2")},
{Name: "LogB1", Operator: "OperB", Url: "UrlB1", Key: []byte("KeyB1")},
{Name: "LogC1", Operator: "OperC", Url: "UrlC1", Key: []byte("KeyC1")},
},
ctx: context.Background(),
expectErr: "failed to get 2 SCTs, got 3 error(s)",
expectErr: "failed to get 2 SCTs, got 4 error(s)",
berrorType: &missingSCTErr,
},
{
name: "parent context timeout failure case",
mock: &mockSlowPub{},
groups: loglist.List{
"OperA": {
"LogA1": {Url: "UrlA1", Key: "KeyA1"},
"LogA2": {Url: "UrlA2", Key: "KeyA2"},
},
"OperB": {
"LogB1": {Url: "UrlB1", Key: "KeyB1"},
},
"OperC": {
"LogC1": {Url: "UrlC1", Key: "KeyC1"},
},
logs: loglist.List{
{Name: "LogA1", Operator: "OperA", Url: "UrlA1", Key: []byte("KeyA1")},
{Name: "LogA2", Operator: "OperA", Url: "UrlA2", Key: []byte("KeyA2")},
{Name: "LogB1", Operator: "OperB", Url: "UrlB1", Key: []byte("KeyB1")},
{Name: "LogC1", Operator: "OperC", Url: "UrlC1", Key: []byte("KeyC1")},
},
ctx: expired,
expectErr: "failed to get 2 SCTs before ctx finished",
@ -111,7 +95,7 @@ func TestGetSCTs(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
ctp := New(tc.mock, tc.groups, nil, nil, 0, blog.NewMock(), metrics.NoopRegisterer)
ctp := New(tc.mock, tc.logs, nil, nil, 0, blog.NewMock(), metrics.NoopRegisterer)
ret, err := ctp.GetSCTs(tc.ctx, []byte{0}, time.Time{})
if tc.result != nil {
test.AssertDeepEquals(t, ret, tc.result)
@ -140,15 +124,9 @@ func (mp *mockFailOnePub) SubmitToSingleCTWithResult(_ context.Context, req *pub
func TestGetSCTsMetrics(t *testing.T) {
ctp := New(&mockFailOnePub{badURL: "UrlA1"}, loglist.List{
"OperA": {
"LogA1": {Url: "UrlA1", Key: "KeyA1"},
},
"OperB": {
"LogB1": {Url: "UrlB1", Key: "KeyB1"},
},
"OperC": {
"LogC1": {Url: "UrlC1", Key: "KeyC1"},
},
{Name: "LogA1", Operator: "OperA", Url: "UrlA1", Key: []byte("KeyA1")},
{Name: "LogB1", Operator: "OperB", Url: "UrlB1", Key: []byte("KeyB1")},
{Name: "LogC1", Operator: "OperC", Url: "UrlC1", Key: []byte("KeyC1")},
}, nil, nil, 0, blog.NewMock(), metrics.NoopRegisterer)
_, err := ctp.GetSCTs(context.Background(), []byte{0}, time.Time{})
test.AssertNotError(t, err, "GetSCTs failed")
@ -159,9 +137,7 @@ func TestGetSCTsMetrics(t *testing.T) {
func TestGetSCTsFailMetrics(t *testing.T) {
// Ensure the proper metrics are incremented when GetSCTs fails.
ctp := New(&mockFailOnePub{badURL: "UrlA1"}, loglist.List{
"OperA": {
"LogA1": {Url: "UrlA1", Key: "KeyA1"},
},
{Name: "LogA1", Operator: "OperA", Url: "UrlA1", Key: []byte("KeyA1")},
}, nil, nil, 0, blog.NewMock(), metrics.NoopRegisterer)
_, err := ctp.GetSCTs(context.Background(), []byte{0}, time.Time{})
test.AssertError(t, err, "GetSCTs should have failed")
@ -173,9 +149,7 @@ func TestGetSCTsFailMetrics(t *testing.T) {
defer cancel()
ctp = New(&mockSlowPub{}, loglist.List{
"OperA": {
"LogA1": {Url: "UrlA1", Key: "KeyA1"},
},
{Name: "LogA1", Operator: "OperA", Url: "UrlA1", Key: []byte("KeyA1")},
}, nil, nil, 0, blog.NewMock(), metrics.NoopRegisterer)
_, err = ctp.GetSCTs(ctx, []byte{0}, time.Time{})
test.AssertError(t, err, "GetSCTs should have timed out")
@ -185,78 +159,96 @@ func TestGetSCTsFailMetrics(t *testing.T) {
}
func TestLogListMetrics(t *testing.T) {
// Multiple operator groups with configured logs.
ctp := New(&mockPub{}, loglist.List{
"OperA": {
"LogA1": {Url: "UrlA1", Key: "KeyA1"},
"LogA2": {Url: "UrlA2", Key: "KeyA2"},
},
"OperB": {
"LogB1": {Url: "UrlB1", Key: "KeyB1"},
},
"OperC": {
"LogC1": {Url: "UrlC1", Key: "KeyC1"},
},
}, nil, nil, 0, blog.NewMock(), metrics.NoopRegisterer)
test.AssertMetricWithLabelsEquals(t, ctp.operatorGroupsGauge, prometheus.Labels{"operator": "OperA", "source": "sctLogs"}, 2)
test.AssertMetricWithLabelsEquals(t, ctp.operatorGroupsGauge, prometheus.Labels{"operator": "OperB", "source": "sctLogs"}, 1)
test.AssertMetricWithLabelsEquals(t, ctp.operatorGroupsGauge, prometheus.Labels{"operator": "OperC", "source": "sctLogs"}, 1)
// Multiple operator groups, no configured logs in one group
ctp = New(&mockPub{}, loglist.List{
"OperA": {
"LogA1": {Url: "UrlA1", Key: "KeyA1"},
"LogA2": {Url: "UrlA2", Key: "KeyA2"},
},
"OperB": {
"LogB1": {Url: "UrlB1", Key: "KeyB1"},
},
"OperC": {},
}, nil, loglist.List{
"OperA": {
"LogA1": {Url: "UrlA1", Key: "KeyA1"},
},
"OperB": {},
"OperC": {
"LogC1": {Url: "UrlC1", Key: "KeyC1"},
},
}, 0, blog.NewMock(), metrics.NoopRegisterer)
test.AssertMetricWithLabelsEquals(t, ctp.operatorGroupsGauge, prometheus.Labels{"operator": "OperA", "source": "sctLogs"}, 2)
test.AssertMetricWithLabelsEquals(t, ctp.operatorGroupsGauge, prometheus.Labels{"operator": "OperB", "source": "sctLogs"}, 1)
test.AssertMetricWithLabelsEquals(t, ctp.operatorGroupsGauge, prometheus.Labels{"operator": "OperC", "source": "sctLogs"}, 0)
test.AssertMetricWithLabelsEquals(t, ctp.operatorGroupsGauge, prometheus.Labels{"operator": "OperA", "source": "finalLogs"}, 1)
test.AssertMetricWithLabelsEquals(t, ctp.operatorGroupsGauge, prometheus.Labels{"operator": "OperB", "source": "finalLogs"}, 0)
test.AssertMetricWithLabelsEquals(t, ctp.operatorGroupsGauge, prometheus.Labels{"operator": "OperC", "source": "finalLogs"}, 1)
// Multiple operator groups with no configured logs.
ctp = New(&mockPub{}, loglist.List{
"OperA": {},
"OperB": {},
}, nil, nil, 0, blog.NewMock(), metrics.NoopRegisterer)
test.AssertMetricWithLabelsEquals(t, ctp.operatorGroupsGauge, prometheus.Labels{"operator": "OperA", "source": "sctLogs"}, 0)
test.AssertMetricWithLabelsEquals(t, ctp.operatorGroupsGauge, prometheus.Labels{"operator": "OperB", "source": "sctLogs"}, 0)
// Single operator group with no configured logs.
ctp = New(&mockPub{}, loglist.List{
"OperA": {},
}, nil, nil, 0, blog.NewMock(), metrics.NoopRegisterer)
test.AssertMetricWithLabelsEquals(t, ctp.operatorGroupsGauge, prometheus.Labels{"operator": "OperA", "source": "allLogs"}, 0)
fc := clock.NewFake()
Tomorrow := fc.Now().Add(24 * time.Hour)
NextWeek := fc.Now().Add(7 * 24 * time.Hour)
// Multiple operator groups with configured logs.
ctp = New(&mockPub{}, loglist.List{
"OperA": {
"LogA1": {Url: "UrlA1", Key: "KeyA1", Name: "LogA1", EndExclusive: Tomorrow},
"LogA2": {Url: "UrlA2", Key: "KeyA2", Name: "LogA2", EndExclusive: NextWeek},
},
"OperB": {
"LogB1": {Url: "UrlB1", Key: "KeyB1", Name: "LogB1", EndExclusive: Tomorrow},
},
ctp := New(&mockPub{}, loglist.List{
{Name: "LogA1", Operator: "OperA", Url: "UrlA1", Key: []byte("KeyA1"), EndExclusive: Tomorrow},
{Name: "LogA2", Operator: "OperA", Url: "UrlA2", Key: []byte("KeyA2"), EndExclusive: NextWeek},
{Name: "LogB1", Operator: "OperB", Url: "UrlB1", Key: []byte("KeyB1"), EndExclusive: Tomorrow},
}, nil, nil, 0, blog.NewMock(), metrics.NoopRegisterer)
test.AssertMetricWithLabelsEquals(t, ctp.shardExpiryGauge, prometheus.Labels{"operator": "OperA", "logID": "LogA1"}, 86400)
test.AssertMetricWithLabelsEquals(t, ctp.shardExpiryGauge, prometheus.Labels{"operator": "OperA", "logID": "LogA2"}, 604800)
test.AssertMetricWithLabelsEquals(t, ctp.shardExpiryGauge, prometheus.Labels{"operator": "OperB", "logID": "LogB1"}, 86400)
}
func TestCompliantSet(t *testing.T) {
for _, tc := range []struct {
name string
results []result
want core.SCTDERs
}{
{
name: "nil input",
results: nil,
want: nil,
},
{
name: "zero length input",
results: []result{},
want: nil,
},
{
name: "only one result",
results: []result{
{log: loglist.Log{Operator: "A", Tiled: false}, sct: []byte("sct1")},
},
want: nil,
},
{
name: "only one good result",
results: []result{
{log: loglist.Log{Operator: "A", Tiled: false}, sct: []byte("sct1")},
{log: loglist.Log{Operator: "B", Tiled: false}, err: errors.New("oops")},
},
want: nil,
},
{
name: "only one operator",
results: []result{
{log: loglist.Log{Operator: "A", Tiled: false}, sct: []byte("sct1")},
{log: loglist.Log{Operator: "A", Tiled: false}, sct: []byte("sct2")},
},
want: nil,
},
{
name: "all tiled",
results: []result{
{log: loglist.Log{Operator: "A", Tiled: true}, sct: []byte("sct1")},
{log: loglist.Log{Operator: "B", Tiled: true}, sct: []byte("sct2")},
},
want: nil,
},
{
name: "happy path",
results: []result{
{log: loglist.Log{Operator: "A", Tiled: false}, err: errors.New("oops")},
{log: loglist.Log{Operator: "A", Tiled: true}, sct: []byte("sct2")},
{log: loglist.Log{Operator: "A", Tiled: false}, sct: []byte("sct3")},
{log: loglist.Log{Operator: "B", Tiled: false}, err: errors.New("oops")},
{log: loglist.Log{Operator: "B", Tiled: true}, sct: []byte("sct4")},
{log: loglist.Log{Operator: "B", Tiled: false}, sct: []byte("sct6")},
{log: loglist.Log{Operator: "C", Tiled: false}, err: errors.New("oops")},
{log: loglist.Log{Operator: "C", Tiled: true}, sct: []byte("sct8")},
{log: loglist.Log{Operator: "C", Tiled: false}, sct: []byte("sct9")},
},
// The second and sixth results should be picked: the first and fourth
// are skipped for being errors, the third for sharing operator A with
// the second, and the fifth because a tiled log was already chosen.
want: core.SCTDERs{[]byte("sct2"), []byte("sct6")},
},
} {
t.Run(tc.name, func(t *testing.T) {
got := compliantSet(tc.results)
if len(got) != len(tc.want) {
t.Fatalf("compliantSet(%#v) returned %d SCTs, but want %d", tc.results, len(got), len(tc.want))
}
for i, sct := range tc.want {
if !bytes.Equal(got[i], sct) {
t.Errorf("compliantSet(%#v) returned unexpected SCT at index %d", tc.results, i)
}
}
})
}
}


@ -7,7 +7,7 @@ import (
"fmt"
"math/rand/v2"
"os"
"strings"
"slices"
"time"
"github.com/google/certificate-transparency-go/loglist3"
@ -31,28 +31,23 @@ const Informational purpose = "info"
// necessarily still issuing SCTs today.
const Validation purpose = "lint"
// List represents a list of logs, grouped by their operator, arranged by
// the "v3" schema as published by Chrome:
// https://www.gstatic.com/ct/log_list/v3/log_list_schema.json
// It exports no fields so that consumers don't have to deal with the terrible
// autogenerated names of the structs it wraps.
type List map[string]OperatorGroup
// OperatorGroup represents a group of logs which are all run by the same
// operator organization. It provides constant-time lookup of logs within the
// group by their unique ID.
type OperatorGroup map[string]Log
// List represents a list of logs arranged by the "v3" schema as published by
// Chrome: https://www.gstatic.com/ct/log_list/v3/log_list_schema.json
type List []Log
// Log represents a single log run by an operator. It contains just the info
// necessary to contact a log, and to determine whether that log will accept
// the submission of a certificate with a given expiration.
// necessary to determine whether we want to submit to that log, and how to
// do so.
type Log struct {
Operator string
Name string
Id string
Key []byte
Url string
Key string
StartInclusive time.Time
EndExclusive time.Time
State loglist3.LogStatus
Tiled bool
}
// usableForPurpose returns true if the log state is acceptable for the given
@ -89,15 +84,17 @@ func newHelper(file []byte) (List, error) {
return nil, fmt.Errorf("failed to parse CT Log List: %w", err)
}
result := make(List)
result := make(List, 0)
for _, op := range parsed.Operators {
group := make(OperatorGroup)
for _, log := range op.Logs {
info := Log{
Operator: op.Name,
Name: log.Description,
Id: base64.StdEncoding.EncodeToString(log.LogID),
Key: log.Key,
Url: log.URL,
Key: base64.StdEncoding.EncodeToString(log.Key),
State: log.State.LogStatus(),
Tiled: false,
}
if log.TemporalInterval != nil {
@ -105,9 +102,27 @@ func newHelper(file []byte) (List, error) {
info.EndExclusive = log.TemporalInterval.EndExclusive
}
group[base64.StdEncoding.EncodeToString(log.LogID)] = info
result = append(result, info)
}
for _, log := range op.TiledLogs {
info := Log{
Operator: op.Name,
Name: log.Description,
Id: base64.StdEncoding.EncodeToString(log.LogID),
Key: log.Key,
Url: log.SubmissionURL,
State: log.State.LogStatus(),
Tiled: true,
}
if log.TemporalInterval != nil {
info.StartInclusive = log.TemporalInterval.StartInclusive
info.EndExclusive = log.TemporalInterval.EndExclusive
}
result = append(result, info)
}
result[op.Name] = group
}
return result, nil
@ -136,45 +151,23 @@ func (ll List) SubsetForPurpose(names []string, p purpose) (List, error) {
// those in the given list. It returns an error if any of the given names are
// not found.
func (ll List) subset(names []string) (List, error) {
remaining := make(map[string]struct{}, len(names))
res := make(List, 0)
for _, name := range names {
remaining[name] = struct{}{}
found := false
for _, log := range ll {
if log.Name == name {
if found {
return nil, fmt.Errorf("found multiple logs matching name %q", name)
}
newList := make(List)
for operator, group := range ll {
newGroup := make(OperatorGroup)
for id, log := range group {
if _, found := remaining[log.Name]; !found {
continue
}
newLog := Log{
Name: log.Name,
Url: log.Url,
Key: log.Key,
State: log.State,
StartInclusive: log.StartInclusive,
EndExclusive: log.EndExclusive,
}
newGroup[id] = newLog
delete(remaining, newLog.Name)
}
if len(newGroup) > 0 {
newList[operator] = newGroup
found = true
res = append(res, log)
}
}
if len(remaining) > 0 {
missed := make([]string, 0, len(remaining))
for name := range remaining {
missed = append(missed, fmt.Sprintf("%q", name))
if !found {
return nil, fmt.Errorf("no log found matching name %q", name)
}
return nil, fmt.Errorf("failed to find logs matching name(s): %s", strings.Join(missed, ", "))
}
return newList, nil
return res, nil
}
// forPurpose returns a new log list containing only those logs whose states are
@ -182,88 +175,55 @@ func (ll List) subset(names []string) (List, error) {
// Issuance or Validation and the set of remaining logs is too small to satisfy
// the Google "two operators" log policy.
func (ll List) forPurpose(p purpose) (List, error) {
newList := make(List)
for operator, group := range ll {
newGroup := make(OperatorGroup)
for id, log := range group {
res := make(List, 0)
operators := make(map[string]struct{})
for _, log := range ll {
if !usableForPurpose(log.State, p) {
continue
}
newLog := Log{
Name: log.Name,
Url: log.Url,
Key: log.Key,
State: log.State,
StartInclusive: log.StartInclusive,
EndExclusive: log.EndExclusive,
res = append(res, log)
operators[log.Operator] = struct{}{}
}
newGroup[id] = newLog
}
if len(newGroup) > 0 {
newList[operator] = newGroup
}
}
if len(newList) < 2 && p != Informational {
if len(operators) < 2 && p != Informational {
return nil, errors.New("log list does not have enough groups to satisfy Chrome policy")
}
return newList, nil
return res, nil
}
// OperatorForLogID returns the Name of the Group containing the Log with the
// given ID, or an error if no such log/group can be found.
func (ll List) OperatorForLogID(logID string) (string, error) {
for op, group := range ll {
if _, found := group[logID]; found {
return op, nil
// ForTime returns a new log list containing only those logs whose temporal
// intervals include the given certificate expiration timestamp.
func (ll List) ForTime(expiry time.Time) List {
res := slices.Clone(ll)
res = slices.DeleteFunc(res, func(l Log) bool {
if (l.StartInclusive.IsZero() || l.StartInclusive.Equal(expiry) || l.StartInclusive.Before(expiry)) &&
(l.EndExclusive.IsZero() || l.EndExclusive.After(expiry)) {
return false
}
}
return "", fmt.Errorf("no log with ID %q found", logID)
return true
})
return res
}
// Permute returns the list of operator group names in a randomized order.
func (ll List) Permute() []string {
keys := make([]string, 0, len(ll))
for k := range ll {
keys = append(keys, k)
// Permute returns a new log list containing the exact same logs, but in a
// randomly-shuffled order.
func (ll List) Permute() List {
res := slices.Clone(ll)
rand.Shuffle(len(res), func(i int, j int) {
res[i], res[j] = res[j], res[i]
})
return res
}
result := make([]string, len(ll))
for i, j := range rand.Perm(len(ll)) {
result[i] = keys[j]
}
return result
}
// PickOne returns the URI and Public Key of a single randomly-selected log
// which is run by the given operator and whose temporal interval includes the
// given expiry time. It returns an error if no such log can be found.
func (ll List) PickOne(operator string, expiry time.Time) (string, string, error) {
group, ok := ll[operator]
if !ok {
return "", "", fmt.Errorf("no log operator group named %q", operator)
}
candidates := make([]Log, 0)
for _, log := range group {
if log.StartInclusive.IsZero() || log.EndExclusive.IsZero() {
candidates = append(candidates, log)
continue
}
if (log.StartInclusive.Equal(expiry) || log.StartInclusive.Before(expiry)) && log.EndExclusive.After(expiry) {
candidates = append(candidates, log)
// GetByID returns the Log matching the given ID, or an error if no such
// log can be found.
func (ll List) GetByID(logID string) (Log, error) {
for _, log := range ll {
if log.Id == logID {
return log, nil
}
}
// Ensure rand.Intn below won't panic.
if len(candidates) < 1 {
return "", "", fmt.Errorf("no log found for group %q and expiry %s", operator, expiry)
}
log := candidates[rand.IntN(len(candidates))]
return log.Url, log.Key, nil
return Log{}, fmt.Errorf("no log with ID %q found", logID)
}


@ -5,6 +5,7 @@ import (
"time"
"github.com/google/certificate-transparency-go/loglist3"
"github.com/jmhodges/clock"
"github.com/letsencrypt/boulder/test"
)
@ -15,18 +16,12 @@ func TestNew(t *testing.T) {
func TestSubset(t *testing.T) {
input := List{
"Operator A": {
"ID A1": Log{Name: "Log A1"},
"ID A2": Log{Name: "Log A2"},
},
"Operator B": {
"ID B1": Log{Name: "Log B1"},
"ID B2": Log{Name: "Log B2"},
},
"Operator C": {
"ID C1": Log{Name: "Log C1"},
"ID C2": Log{Name: "Log C2"},
},
Log{Name: "Log A1"},
Log{Name: "Log A2"},
Log{Name: "Log B1"},
Log{Name: "Log B2"},
Log{Name: "Log C1"},
Log{Name: "Log C2"},
}
actual, err := input.subset(nil)
@ -42,13 +37,9 @@ func TestSubset(t *testing.T) {
test.AssertEquals(t, len(actual), 0)
expected := List{
"Operator A": {
"ID A1": Log{Name: "Log A1"},
"ID A2": Log{Name: "Log A2"},
},
"Operator B": {
"ID B1": Log{Name: "Log B1"},
},
Log{Name: "Log B1"},
Log{Name: "Log A1"},
Log{Name: "Log A2"},
}
actual, err = input.subset([]string{"Log B1", "Log A1", "Log A2"})
test.AssertNotError(t, err, "normal usage should not error")
@ -57,154 +48,136 @@ func TestSubset(t *testing.T) {
func TestForPurpose(t *testing.T) {
input := List{
"Operator A": {
"ID A1": Log{Name: "Log A1", State: loglist3.UsableLogStatus},
"ID A2": Log{Name: "Log A2", State: loglist3.RejectedLogStatus},
},
"Operator B": {
"ID B1": Log{Name: "Log B1", State: loglist3.UsableLogStatus},
"ID B2": Log{Name: "Log B2", State: loglist3.RetiredLogStatus},
},
"Operator C": {
"ID C1": Log{Name: "Log C1", State: loglist3.PendingLogStatus},
"ID C2": Log{Name: "Log C2", State: loglist3.ReadOnlyLogStatus},
},
Log{Name: "Log A1", Operator: "A", State: loglist3.UsableLogStatus},
Log{Name: "Log A2", Operator: "A", State: loglist3.RejectedLogStatus},
Log{Name: "Log B1", Operator: "B", State: loglist3.UsableLogStatus},
Log{Name: "Log B2", Operator: "B", State: loglist3.RetiredLogStatus},
Log{Name: "Log C1", Operator: "C", State: loglist3.PendingLogStatus},
Log{Name: "Log C2", Operator: "C", State: loglist3.ReadOnlyLogStatus},
}
expected := List{
"Operator A": {
"ID A1": Log{Name: "Log A1", State: loglist3.UsableLogStatus},
},
"Operator B": {
"ID B1": Log{Name: "Log B1", State: loglist3.UsableLogStatus},
},
Log{Name: "Log A1", Operator: "A", State: loglist3.UsableLogStatus},
Log{Name: "Log B1", Operator: "B", State: loglist3.UsableLogStatus},
}
actual, err := input.forPurpose(Issuance)
test.AssertNotError(t, err, "should have two acceptable logs")
test.AssertDeepEquals(t, actual, expected)
input = List{
"Operator A": {
"ID A1": Log{Name: "Log A1", State: loglist3.UsableLogStatus},
"ID A2": Log{Name: "Log A2", State: loglist3.RejectedLogStatus},
},
"Operator B": {
"ID B1": Log{Name: "Log B1", State: loglist3.QualifiedLogStatus},
"ID B2": Log{Name: "Log B2", State: loglist3.RetiredLogStatus},
},
"Operator C": {
"ID C1": Log{Name: "Log C1", State: loglist3.PendingLogStatus},
"ID C2": Log{Name: "Log C2", State: loglist3.ReadOnlyLogStatus},
},
Log{Name: "Log A1", Operator: "A", State: loglist3.UsableLogStatus},
Log{Name: "Log A2", Operator: "A", State: loglist3.RejectedLogStatus},
Log{Name: "Log B1", Operator: "B", State: loglist3.QualifiedLogStatus},
Log{Name: "Log B2", Operator: "B", State: loglist3.RetiredLogStatus},
Log{Name: "Log C1", Operator: "C", State: loglist3.PendingLogStatus},
Log{Name: "Log C2", Operator: "C", State: loglist3.ReadOnlyLogStatus},
}
_, err = input.forPurpose(Issuance)
test.AssertError(t, err, "should only have one acceptable log")
expected = List{
"Operator A": {
"ID A1": Log{Name: "Log A1", State: loglist3.UsableLogStatus},
},
"Operator C": {
"ID C2": Log{Name: "Log C2", State: loglist3.ReadOnlyLogStatus},
},
Log{Name: "Log A1", Operator: "A", State: loglist3.UsableLogStatus},
Log{Name: "Log C2", Operator: "C", State: loglist3.ReadOnlyLogStatus},
}
actual, err = input.forPurpose(Validation)
test.AssertNotError(t, err, "should have two acceptable logs")
test.AssertDeepEquals(t, actual, expected)
expected = List{
"Operator A": {
"ID A1": Log{Name: "Log A1", State: loglist3.UsableLogStatus},
},
"Operator B": {
"ID B1": Log{Name: "Log B1", State: loglist3.QualifiedLogStatus},
},
"Operator C": {
"ID C1": Log{Name: "Log C1", State: loglist3.PendingLogStatus},
},
Log{Name: "Log A1", Operator: "A", State: loglist3.UsableLogStatus},
Log{Name: "Log B1", Operator: "B", State: loglist3.QualifiedLogStatus},
Log{Name: "Log C1", Operator: "C", State: loglist3.PendingLogStatus},
}
actual, err = input.forPurpose(Informational)
test.AssertNotError(t, err, "should have three acceptable logs")
test.AssertDeepEquals(t, actual, expected)
}
func TestOperatorForLogID(t *testing.T) {
func TestForTime(t *testing.T) {
fc := clock.NewFake()
fc.Set(time.Now())
input := List{
"Operator A": {
"ID A1": Log{Name: "Log A1", State: loglist3.UsableLogStatus},
},
"Operator B": {
"ID B1": Log{Name: "Log B1", State: loglist3.QualifiedLogStatus},
},
Log{Name: "Fully Bound", StartInclusive: fc.Now().Add(-time.Hour), EndExclusive: fc.Now().Add(time.Hour)},
Log{Name: "Open End", StartInclusive: fc.Now().Add(-time.Hour)},
Log{Name: "Open Start", EndExclusive: fc.Now().Add(time.Hour)},
Log{Name: "Fully Open"},
}
actual, err := input.OperatorForLogID("ID B1")
test.AssertNotError(t, err, "should have found log")
test.AssertEquals(t, actual, "Operator B")
expected := List{
Log{Name: "Fully Bound", StartInclusive: fc.Now().Add(-time.Hour), EndExclusive: fc.Now().Add(time.Hour)},
Log{Name: "Open End", StartInclusive: fc.Now().Add(-time.Hour)},
Log{Name: "Open Start", EndExclusive: fc.Now().Add(time.Hour)},
Log{Name: "Fully Open"},
}
actual := input.ForTime(fc.Now())
test.AssertDeepEquals(t, actual, expected)
_, err = input.OperatorForLogID("Other ID")
test.AssertError(t, err, "should not have found log")
expected = List{
Log{Name: "Fully Bound", StartInclusive: fc.Now().Add(-time.Hour), EndExclusive: fc.Now().Add(time.Hour)},
Log{Name: "Open End", StartInclusive: fc.Now().Add(-time.Hour)},
Log{Name: "Open Start", EndExclusive: fc.Now().Add(time.Hour)},
Log{Name: "Fully Open"},
}
actual = input.ForTime(fc.Now().Add(-time.Hour))
test.AssertDeepEquals(t, actual, expected)
expected = List{
Log{Name: "Open Start", EndExclusive: fc.Now().Add(time.Hour)},
Log{Name: "Fully Open"},
}
actual = input.ForTime(fc.Now().Add(-2 * time.Hour))
test.AssertDeepEquals(t, actual, expected)
expected = List{
Log{Name: "Open End", StartInclusive: fc.Now().Add(-time.Hour)},
Log{Name: "Fully Open"},
}
actual = input.ForTime(fc.Now().Add(time.Hour))
test.AssertDeepEquals(t, actual, expected)
}
func TestPermute(t *testing.T) {
input := List{
"Operator A": {
"ID A1": Log{Name: "Log A1", State: loglist3.UsableLogStatus},
"ID A2": Log{Name: "Log A2", State: loglist3.RejectedLogStatus},
},
"Operator B": {
"ID B1": Log{Name: "Log B1", State: loglist3.QualifiedLogStatus},
"ID B2": Log{Name: "Log B2", State: loglist3.RetiredLogStatus},
},
"Operator C": {
"ID C1": Log{Name: "Log C1", State: loglist3.PendingLogStatus},
"ID C2": Log{Name: "Log C2", State: loglist3.ReadOnlyLogStatus},
},
Log{Name: "Log A1"},
Log{Name: "Log A2"},
Log{Name: "Log B1"},
Log{Name: "Log B2"},
Log{Name: "Log C1"},
Log{Name: "Log C2"},
}
foundIndices := make(map[string]map[int]int)
for _, log := range input {
foundIndices[log.Name] = make(map[int]int)
}
for range 100 {
actual := input.Permute()
test.AssertEquals(t, len(actual), 3)
test.AssertSliceContains(t, actual, "Operator A")
test.AssertSliceContains(t, actual, "Operator B")
test.AssertSliceContains(t, actual, "Operator C")
for index, log := range actual {
foundIndices[log.Name][index]++
}
}
func TestPickOne(t *testing.T) {
date0 := time.Date(2020, 1, 1, 0, 0, 0, 0, time.UTC)
date1 := time.Date(2021, 1, 1, 0, 0, 0, 0, time.UTC)
date2 := time.Date(2022, 1, 1, 0, 0, 0, 0, time.UTC)
for name, counts := range foundIndices {
for index, count := range counts {
if count == 0 {
t.Errorf("Log %s appeared at index %d too few times", name, index)
}
}
}
}
func TestGetByID(t *testing.T) {
input := List{
"Operator A": {
"ID A1": Log{Name: "Log A1"},
},
Log{Name: "Log A1", Id: "ID A1"},
Log{Name: "Log B1", Id: "ID B1"},
}
_, _, err := input.PickOne("Operator B", date0)
test.AssertError(t, err, "should have failed to find operator")
input = List{
"Operator A": {
"ID A1": Log{Name: "Log A1", StartInclusive: date0, EndExclusive: date1},
},
}
_, _, err = input.PickOne("Operator A", date2)
test.AssertError(t, err, "should have failed to find log")
_, _, err = input.PickOne("Operator A", date1)
test.AssertError(t, err, "should have failed to find log")
_, _, err = input.PickOne("Operator A", date0)
test.AssertNotError(t, err, "should have found a log")
_, _, err = input.PickOne("Operator A", date0.Add(time.Hour))
test.AssertNotError(t, err, "should have found a log")
expected := Log{Name: "Log A1", Id: "ID A1"}
actual, err := input.GetByID("ID A1")
test.AssertNotError(t, err, "should have found log")
test.AssertDeepEquals(t, actual, expected)
input = List{
"Operator A": {
"ID A1": Log{Name: "Log A1", StartInclusive: date0, EndExclusive: date1, Key: "KA1", Url: "UA1"},
"ID A2": Log{Name: "Log A2", StartInclusive: date1, EndExclusive: date2, Key: "KA2", Url: "UA2"},
"ID B1": Log{Name: "Log B1", StartInclusive: date0, EndExclusive: date1, Key: "KB1", Url: "UB1"},
"ID B2": Log{Name: "Log B2", StartInclusive: date1, EndExclusive: date2, Key: "KB2", Url: "UB2"},
},
}
url, key, err := input.PickOne("Operator A", date0.Add(time.Hour))
test.AssertNotError(t, err, "should have found a log")
test.AssertSliceContains(t, []string{"UA1", "UB1"}, url)
test.AssertSliceContains(t, []string{"KA1", "KB1"}, key)
_, err = input.GetByID("Other ID")
test.AssertError(t, err, "should not have found log")
}


@ -129,6 +129,18 @@ func (m *WrappedMap) BeginTx(ctx context.Context) (Transaction, error) {
}, err
}
func (m *WrappedMap) ColumnsForModel(model interface{}) ([]string, error) {
tbl, err := m.dbMap.TableFor(reflect.TypeOf(model), true)
if err != nil {
return nil, err
}
var columns []string
for _, col := range tbl.Columns {
columns = append(columns, col.ColumnName)
}
return columns, nil
}
// WrappedTransaction wraps a *borp.Transaction such that its major functions
// wrap error results in ErrDatabaseOp instances before returning them to the
// caller.


@ -85,6 +85,10 @@ func (mi *MultiInserter) query() (string, []interface{}) {
// Insert inserts all the collected rows into the database represented by
// `queryer`.
func (mi *MultiInserter) Insert(ctx context.Context, db Execer) error {
if len(mi.values) == 0 {
return nil
}
query, queryArgs := mi.query()
res, err := db.ExecContext(ctx, query, queryArgs...)
if err != nil {


@ -1,7 +1,7 @@
services:
boulder:
environment:
FAKE_DNS: 10.77.77.77
FAKE_DNS: 64.112.117.122
BOULDER_CONFIG_DIR: test/config-next
GOFLAGS: -mod=vendor
GOCACHE: /boulder/.gocache/go-build-next


@ -11,9 +11,9 @@ services:
GO_VERSION: 1.24.1
environment:
# To solve HTTP-01 and TLS-ALPN-01 challenges, change the IP in FAKE_DNS
# to the IP address where your ACME client's solver is listening.
# FAKE_DNS: 172.17.0.1
FAKE_DNS: 10.77.77.77
# to the IP address where your ACME client's solver is listening. This is
# pointing at the boulder service's "public" IP, where challtestsrv is.
FAKE_DNS: 64.112.117.122
BOULDER_CONFIG_DIR: test/config
GOCACHE: /boulder/.gocache/go-build
GOFLAGS: -mod=vendor
@ -24,12 +24,10 @@ services:
networks:
bouldernet:
ipv4_address: 10.77.77.77
integrationtestnet:
ipv4_address: 10.88.88.88
redisnet:
ipv4_address: 10.33.33.33
consulnet:
ipv4_address: 10.55.55.55
publicnet:
ipv4_address: 64.112.117.122
publicnet2:
ipv4_address: 64.112.117.134
# Use consul as a backup to Docker's embedded DNS server. If there's a name
# Docker's DNS server doesn't know about, it will forward the query to this
# IP (running consul).
@ -38,12 +36,17 @@ services:
# are configured via the ServerAddress field of cmd.GRPCClientConfig.
# TODO: Remove this when ServerAddress is deprecated in favor of SRV records
# and DNSAuthority.
dns: 10.55.55.10
dns: 10.77.77.10
extra_hosts:
# Allow the boulder container to be reached as "ca.example.org", so that
# we can put that name inside our integration test certs (e.g. as a crl
# Allow the boulder container to be reached as "ca.example.org", so we
# can put that name inside our integration test certs (e.g. as a crl
# url) and have it look like a publicly-accessible name.
- "ca.example.org:10.77.77.77"
# TODO(#8215): Move s3-test-srv to a separate service.
- "ca.example.org:64.112.117.122"
# Allow the boulder container to be reached as "integration.trust", for
# similar reasons, but intended for use as a SAN rather than a CRLDP.
# TODO(#8215): Move observer's probe target to a separate service.
- "integration.trust:64.112.117.122"
ports:
- 4001:4001 # ACMEv2
- 4002:4002 # OCSP
@ -76,7 +79,7 @@ services:
- setup
bmysql:
image: mariadb:10.5
image: mariadb:10.6.22
networks:
bouldernet:
aliases:
@ -91,6 +94,7 @@ services:
command: mysqld --bind-address=0.0.0.0 --slow-query-log --log-output=TABLE --log-queries-not-using-indexes=ON
logging:
driver: none
bproxysql:
image: proxysql/proxysql:2.5.4
# The --initial flag force resets the ProxySQL database on startup. By
@ -113,8 +117,12 @@ services:
- ./test/:/test/:cached
command: redis-server /test/redis-ocsp.config
networks:
redisnet:
ipv4_address: 10.33.33.2
bouldernet:
# TODO(#8215): Remove this static IP allocation (and similar below) when
# we tear down ocsp-responder. We only have it because ocsp-responder
# requires IPs in its "ShardAddrs" config, while ratelimit redis
# supports looking up shards via hostname and SRV record.
ipv4_address: 10.77.77.2
bredis_2:
image: redis:6.2.7
@ -122,8 +130,8 @@ services:
- ./test/:/test/:cached
command: redis-server /test/redis-ocsp.config
networks:
redisnet:
ipv4_address: 10.33.33.3
bouldernet:
ipv4_address: 10.77.77.3
bredis_3:
image: redis:6.2.7
@ -131,8 +139,8 @@ services:
- ./test/:/test/:cached
command: redis-server /test/redis-ratelimits.config
networks:
redisnet:
ipv4_address: 10.33.33.4
bouldernet:
ipv4_address: 10.77.77.4
bredis_4:
image: redis:6.2.7
@ -140,16 +148,14 @@ services:
- ./test/:/test/:cached
command: redis-server /test/redis-ratelimits.config
networks:
redisnet:
ipv4_address: 10.33.33.5
bouldernet:
ipv4_address: 10.77.77.5
bconsul:
image: hashicorp/consul:1.15.4
volumes:
- ./test/:/test/:cached
networks:
consulnet:
ipv4_address: 10.55.55.10
bouldernet:
ipv4_address: 10.77.77.10
command: "consul agent -dev -config-format=hcl -config-file=/test/consul/config.hcl"
@ -157,27 +163,42 @@ services:
bjaeger:
image: jaegertracing/all-in-one:1.50
networks:
bouldernet:
ipv4_address: 10.77.77.17
- bouldernet
bpkimetal:
image: ghcr.io/pkimetal/pkimetal:v1.20.0
networks:
bouldernet:
ipv4_address: 10.77.77.9
- bouldernet
networks:
# This network is primarily used for boulder services. It is also used by
# challtestsrv, which is used in the integration tests.
# This network represents the data-center internal network. It is used for
# boulder services and their infrastructure, such as consul, mariadb, and
# redis.
bouldernet:
driver: bridge
ipam:
driver: default
config:
- subnet: 10.77.77.0/24
# Only issue DHCP addresses in the top half of the range, to avoid
# conflict with static addresses.
ip_range: 10.77.77.128/25
# This network represents the public internet. It uses a real public IP space
# (that Let's Encrypt controls) so that our integration tests are happy to
# validate and issue for it. It is used by challtestsrv, which binds to
# 64.112.117.122:80 and :443 for its HTTP-01 challenge responder.
#
# TODO(#8215): Put akamai-test-srv and s3-test-srv on this network.
publicnet:
driver: bridge
ipam:
driver: default
config:
- subnet: 64.112.117.0/25
# This network is used for two things in the integration tests:
# - challtestsrv binds to 10.88.88.88:443 for its tls-alpn-01 challenge
# - challtestsrv binds to 64.112.117.134:443 for its tls-alpn-01 challenge
# responder, to avoid interfering with the HTTPS port used for testing
# HTTP->HTTPS redirects during http-01 challenges. Note: this could
# probably be updated in the future so that challtestsrv can handle
@ -185,24 +206,13 @@ networks:
# - test/v2_integration.py has some test cases that start their own HTTP
# server instead of relying on challtestsrv, because they want very
# specific behavior. For these cases, v2_integration.py creates a Python
# HTTP server and binds it to 10.88.88.88:80.
integrationtestnet:
# HTTP server and binds it to 64.112.117.134:80.
#
# TODO(#8215): Deprecate this network, replacing it with individual IPs within
# the existing publicnet.
publicnet2:
driver: bridge
ipam:
driver: default
config:
- subnet: 10.88.88.0/24
redisnet:
driver: bridge
ipam:
driver: default
config:
- subnet: 10.33.33.0/24
consulnet:
driver: bridge
ipam:
driver: default
config:
- subnet: 10.55.55.0/24
- subnet: 64.112.117.128/25


@ -236,7 +236,7 @@ order finalization and does not offer the new-cert endpoint.
* 3-4: RA does the following:
* Verify the PKCS#10 CSR in the certificate request object
* Verify that the CSR has a non-zero number of domain names
* Verify that the CSR has a non-zero number of identifiers
* Verify that the public key in the CSR is different from the account key
* For each authorization referenced in the certificate request
* Retrieve the authorization from the database
@ -303,7 +303,7 @@ ACME v2:
* 2-4: RA does the following:
* Verify the PKCS#10 CSR in the certificate request object
* Verify that the CSR has a non-zero number of domain names
* Verify that the CSR has a non-zero number of identifiers
* Verify that the public key in the CSR is different from the account key
* Retrieve and verify the status and expiry of the order object
* For each identifier referenced in the order request


@ -23,13 +23,13 @@ docker compose up boulder
Then, in a different window, run the following to connect to `bredis_1`:
```shell
./test/redis-cli.sh -h 10.33.33.2
./test/redis-cli.sh -h 10.77.77.2
```
Similarly, to connect to `bredis_2`:
```shell
./test/redis-cli.sh -h 10.33.33.3
./test/redis-cli.sh -h 10.77.77.3
```
You can pass any IP address for the -h (host) parameter. The full list of IP
@ -40,7 +40,7 @@ You may want to go a level deeper and communicate with a Redis node using the
Redis protocol. Here's the command to do that (run from the Boulder root):
```shell
openssl s_client -connect 10.33.33.2:4218 \
openssl s_client -connect 10.77.77.2:4218 \
-CAfile test/certs/ipki/minica.pem \
-cert test/certs/ipki/localhost/cert.pem \
-key test/certs/ipki/localhost/key.pem

email/cache.go (new file, 92 lines)

@ -0,0 +1,92 @@
package email
import (
"crypto/sha256"
"encoding/hex"
"sync"
"github.com/golang/groupcache/lru"
"github.com/prometheus/client_golang/prometheus"
)
type EmailCache struct {
sync.Mutex
cache *lru.Cache
requests *prometheus.CounterVec
}
func NewHashedEmailCache(maxEntries int, stats prometheus.Registerer) *EmailCache {
requests := prometheus.NewCounterVec(prometheus.CounterOpts{
Name: "email_cache_requests",
}, []string{"status"})
stats.MustRegister(requests)
return &EmailCache{
cache: lru.New(maxEntries),
requests: requests,
}
}
func hashEmail(email string) string {
sum := sha256.Sum256([]byte(email))
return hex.EncodeToString(sum[:])
}
func (c *EmailCache) Seen(email string) bool {
if c == nil {
// If the cache is nil we assume it was not configured.
return false
}
hash := hashEmail(email)
c.Lock()
defer c.Unlock()
_, ok := c.cache.Get(hash)
if !ok {
c.requests.WithLabelValues("miss").Inc()
return false
}
c.requests.WithLabelValues("hit").Inc()
return true
}
func (c *EmailCache) Remove(email string) {
if c == nil {
// If the cache is nil we assume it was not configured.
return
}
hash := hashEmail(email)
c.Lock()
defer c.Unlock()
c.cache.Remove(hash)
}
// StoreIfAbsent stores the email in the cache if it is not already present, as
// a single atomic operation. It returns true if the email was stored and false
// if it was already in the cache. If the cache is nil, true is always returned.
func (c *EmailCache) StoreIfAbsent(email string) bool {
if c == nil {
// If the cache is nil we assume it was not configured.
return true
}
hash := hashEmail(email)
c.Lock()
defer c.Unlock()
_, ok := c.cache.Get(hash)
if ok {
c.requests.WithLabelValues("hit").Inc()
return false
}
c.cache.Add(hash, nil)
c.requests.WithLabelValues("miss").Inc()
return true
}
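The check-then-store race that `StoreIfAbsent` guards against can be illustrated with a minimal stdlib-only sketch; a plain map stands in for the LRU, and the names below are illustrative rather than Boulder's actual API:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sync"
)

// dedupCache mimics EmailCache's semantics: emails are stored as
// SHA-256 hex digests, and StoreIfAbsent is atomic under one mutex.
type dedupCache struct {
	mu   sync.Mutex
	seen map[string]struct{}
}

func hashEmail(email string) string {
	sum := sha256.Sum256([]byte(email))
	return hex.EncodeToString(sum[:])
}

// StoreIfAbsent returns true only for the first caller to claim an
// email; later callers (other workers) get false and skip the send.
func (c *dedupCache) StoreIfAbsent(email string) bool {
	h := hashEmail(email)
	c.mu.Lock()
	defer c.mu.Unlock()
	if _, ok := c.seen[h]; ok {
		return false
	}
	c.seen[h] = struct{}{}
	return true
}

// Remove un-claims an email, e.g. after a failed send, so a later
// attempt can retry it.
func (c *dedupCache) Remove(email string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.seen, hashEmail(email))
}

func main() {
	c := &dedupCache{seen: make(map[string]struct{})}
	fmt.Println(c.StoreIfAbsent("dup@example.com")) // true: first claim wins
	fmt.Println(c.StoreIfAbsent("dup@example.com")) // false: already claimed
	c.Remove("dup@example.com")
	fmt.Println(c.StoreIfAbsent("dup@example.com")) // true again after removal
}
```

Holding one mutex across both the lookup and the insert is what makes the claim atomic; a separate `Seen` check followed by `Add` would let two workers both see a miss and both send.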


@ -17,8 +17,8 @@ import (
// contactsQueueCap limits the queue size to prevent unbounded growth. This
// value is adjustable as needed. Each RFC 5321 email address, encoded in UTF-8,
// is at most 320 bytes. Storing 10,000 emails requires ~3.44 MB of memory.
const contactsQueueCap = 10000
// is at most 320 bytes. Storing 100,000 emails requires ~34.4 MB of memory.
const contactsQueueCap = 100000
var ErrQueueFull = errors.New("email-exporter queue is full")
@ -40,7 +40,9 @@ type ExporterImpl struct {
maxConcurrentRequests int
limiter *rate.Limiter
client PardotClient
emailCache *EmailCache
emailsHandledCounter prometheus.Counter
pardotErrorCounter prometheus.Counter
log blog.Logger
}
@ -53,7 +55,7 @@ var _ emailpb.ExporterServer = (*ExporterImpl)(nil)
// is assigned 40% (20,000 requests), it should also receive 40% of the max
// concurrent requests (e.g., 2 out of 5). For more details, see:
// https://developer.salesforce.com/docs/marketing/pardot/guide/overview.html?q=rate%20limits
func NewExporterImpl(client PardotClient, perDayLimit float64, maxConcurrentRequests int, scope prometheus.Registerer, logger blog.Logger) *ExporterImpl {
func NewExporterImpl(client PardotClient, cache *EmailCache, perDayLimit float64, maxConcurrentRequests int, scope prometheus.Registerer, logger blog.Logger) *ExporterImpl {
limiter := rate.NewLimiter(rate.Limit(perDayLimit/86400.0), maxConcurrentRequests)
emailsHandledCounter := prometheus.NewCounter(prometheus.CounterOpts{
@ -62,12 +64,20 @@ func NewExporterImpl(client PardotClient, perDayLimit float64, maxConcurrentRequ
})
scope.MustRegister(emailsHandledCounter)
pardotErrorCounter := prometheus.NewCounter(prometheus.CounterOpts{
Name: "email_exporter_errors",
Help: "Total number of Pardot API errors encountered by the email exporter",
})
scope.MustRegister(pardotErrorCounter)
impl := &ExporterImpl{
maxConcurrentRequests: maxConcurrentRequests,
limiter: limiter,
toSend: make([]string, 0, contactsQueueCap),
client: client,
emailCache: cache,
emailsHandledCounter: emailsHandledCounter,
pardotErrorCounter: pardotErrorCounter,
log: logger,
}
impl.wake = sync.NewCond(&impl.Mutex)
@ -137,6 +147,11 @@ func (impl *ExporterImpl) Start(daemonCtx context.Context) {
impl.toSend = impl.toSend[:last]
impl.Unlock()
if !impl.emailCache.StoreIfAbsent(email) {
// Another worker has already processed this email.
continue
}
err := impl.limiter.Wait(daemonCtx)
if err != nil && !errors.Is(err, context.Canceled) {
impl.log.Errf("Unexpected limiter.Wait() error: %s", err)
@ -145,11 +160,14 @@ func (impl *ExporterImpl) Start(daemonCtx context.Context) {
err = impl.client.SendContact(email)
if err != nil {
impl.emailCache.Remove(email)
impl.pardotErrorCounter.Inc()
impl.log.Errf("Sending Contact to Pardot: %s", err)
}
} else {
impl.emailsHandledCounter.Inc()
}
}
}
for range impl.maxConcurrentRequests {
impl.drainWG.Add(1)


@ -12,6 +12,8 @@ import (
blog "github.com/letsencrypt/boulder/log"
"github.com/letsencrypt/boulder/metrics"
"github.com/letsencrypt/boulder/test"
"github.com/prometheus/client_golang/prometheus"
)
var ctx = context.Background()
@ -35,9 +37,8 @@ func newMockPardotClientImpl() (PardotClient, *mockPardotClientImpl) {
// SendContact adds an email to CreatedContacts.
func (m *mockPardotClientImpl) SendContact(email string) error {
m.Lock()
defer m.Unlock()
m.CreatedContacts = append(m.CreatedContacts, email)
m.Unlock()
return nil
}
@ -55,7 +56,7 @@ func (m *mockPardotClientImpl) getCreatedContacts() []string {
// cleanup() must be called.
func setup() (*ExporterImpl, *mockPardotClientImpl, func(), func()) {
mockClient, clientImpl := newMockPardotClientImpl()
exporter := NewExporterImpl(mockClient, 1000000, 5, metrics.NoopRegisterer, blog.NewMock())
exporter := NewExporterImpl(mockClient, nil, 1000000, 5, metrics.NoopRegisterer, blog.NewMock())
daemonCtx, cancel := context.WithCancel(context.Background())
return exporter, clientImpl,
func() { exporter.Start(daemonCtx) },
@ -88,6 +89,9 @@ func TestSendContacts(t *testing.T) {
}
test.AssertSliceContains(t, gotContacts, wantContacts[0])
test.AssertSliceContains(t, gotContacts, wantContacts[1])
// Check that the error counter was not incremented.
test.AssertMetricWithLabelsEquals(t, exporter.pardotErrorCounter, prometheus.Labels{}, 0)
}
func TestSendContactsQueueFull(t *testing.T) {
@ -130,3 +134,92 @@ func TestSendContactsQueueDrains(t *testing.T) {
test.AssertEquals(t, 100, len(clientImpl.getCreatedContacts()))
}
type mockAlwaysFailClient struct{}
func (m *mockAlwaysFailClient) SendContact(email string) error {
return fmt.Errorf("simulated failure")
}
func TestSendContactsErrorMetrics(t *testing.T) {
t.Parallel()
mockClient := &mockAlwaysFailClient{}
exporter := NewExporterImpl(mockClient, nil, 1000000, 5, metrics.NoopRegisterer, blog.NewMock())
daemonCtx, cancel := context.WithCancel(context.Background())
exporter.Start(daemonCtx)
_, err := exporter.SendContacts(ctx, &emailpb.SendContactsRequest{
Emails: []string{"test@example.com"},
})
test.AssertNotError(t, err, "Error creating contacts")
// Drain the queue.
cancel()
exporter.Drain()
// Check that the error counter was incremented.
test.AssertMetricWithLabelsEquals(t, exporter.pardotErrorCounter, prometheus.Labels{}, 1)
}
func TestSendContactDeduplication(t *testing.T) {
t.Parallel()
cache := NewHashedEmailCache(1000, metrics.NoopRegisterer)
mockClient, clientImpl := newMockPardotClientImpl()
exporter := NewExporterImpl(mockClient, cache, 1000000, 5, metrics.NoopRegisterer, blog.NewMock())
daemonCtx, cancel := context.WithCancel(context.Background())
exporter.Start(daemonCtx)
_, err := exporter.SendContacts(ctx, &emailpb.SendContactsRequest{
Emails: []string{"duplicate@example.com", "duplicate@example.com"},
})
test.AssertNotError(t, err, "Error enqueuing contacts")
// Drain the queue.
cancel()
exporter.Drain()
contacts := clientImpl.getCreatedContacts()
test.AssertEquals(t, 1, len(contacts))
test.AssertEquals(t, "duplicate@example.com", contacts[0])
// Only one successful send should be recorded.
test.AssertMetricWithLabelsEquals(t, exporter.emailsHandledCounter, prometheus.Labels{}, 1)
if !cache.Seen("duplicate@example.com") {
t.Errorf("duplicate@example.com should have been cached after send")
}
}
func TestSendContactErrorRemovesFromCache(t *testing.T) {
t.Parallel()
cache := NewHashedEmailCache(1000, metrics.NoopRegisterer)
fc := &mockAlwaysFailClient{}
exporter := NewExporterImpl(fc, cache, 1000000, 1, metrics.NoopRegisterer, blog.NewMock())
daemonCtx, cancel := context.WithCancel(context.Background())
exporter.Start(daemonCtx)
_, err := exporter.SendContacts(ctx, &emailpb.SendContactsRequest{
Emails: []string{"error@example.com"},
})
test.AssertNotError(t, err, "enqueue failed")
// Drain the queue.
cancel()
exporter.Drain()
// The email should have been evicted from the cache after send encountered
// an error.
if cache.Seen("error@example.com") {
t.Errorf("error@example.com should have been evicted from cache after send errors")
}
// Check that the error counter was incremented.
test.AssertMetricWithLabelsEquals(t, exporter.pardotErrorCounter, prometheus.Labels{}, 1)
}


@ -85,7 +85,6 @@ func NewPardotClientImpl(clk clock.Clock, businessUnit, clientId, clientSecret,
clientSecret: clientSecret,
contactsURL: contactsURL,
tokenURL: tokenURL,
token: &oAuthToken{},
clk: clk,
}, nil


@ -1,6 +1,6 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.34.1
// protoc-gen-go v1.36.5
// protoc v3.20.1
// source: exporter.proto
@ -12,6 +12,7 @@ import (
emptypb "google.golang.org/protobuf/types/known/emptypb"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
@ -22,21 +23,18 @@ const (
)
type SendContactsRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
Emails []string `protobuf:"bytes,1,rep,name=emails,proto3" json:"emails,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *SendContactsRequest) Reset() {
*x = SendContactsRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_exporter_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SendContactsRequest) String() string {
return protoimpl.X.MessageStringOf(x)
@ -46,7 +44,7 @@ func (*SendContactsRequest) ProtoMessage() {}
func (x *SendContactsRequest) ProtoReflect() protoreflect.Message {
mi := &file_exporter_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -70,7 +68,7 @@ func (x *SendContactsRequest) GetEmails() []string {
var File_exporter_proto protoreflect.FileDescriptor
var file_exporter_proto_rawDesc = []byte{
var file_exporter_proto_rawDesc = string([]byte{
0x0a, 0x0e, 0x65, 0x78, 0x70, 0x6f, 0x72, 0x74, 0x65, 0x72, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x12, 0x05, 0x65, 0x6d, 0x61, 0x69, 0x6c, 0x1a, 0x1b, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x65, 0x6d, 0x70, 0x74, 0x79, 0x2e, 0x70,
@ -86,22 +84,22 @@ var file_exporter_proto_rawDesc = []byte{
0x6d, 0x2f, 0x6c, 0x65, 0x74, 0x73, 0x65, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x2f, 0x62, 0x6f,
0x75, 0x6c, 0x64, 0x65, 0x72, 0x2f, 0x65, 0x6d, 0x61, 0x69, 0x6c, 0x2f, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
})
var (
file_exporter_proto_rawDescOnce sync.Once
file_exporter_proto_rawDescData = file_exporter_proto_rawDesc
file_exporter_proto_rawDescData []byte
)
func file_exporter_proto_rawDescGZIP() []byte {
file_exporter_proto_rawDescOnce.Do(func() {
file_exporter_proto_rawDescData = protoimpl.X.CompressGZIP(file_exporter_proto_rawDescData)
file_exporter_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_exporter_proto_rawDesc), len(file_exporter_proto_rawDesc)))
})
return file_exporter_proto_rawDescData
}
var file_exporter_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
var file_exporter_proto_goTypes = []interface{}{
var file_exporter_proto_goTypes = []any{
(*SendContactsRequest)(nil), // 0: email.SendContactsRequest
(*emptypb.Empty)(nil), // 1: google.protobuf.Empty
}
@ -120,25 +118,11 @@ func file_exporter_proto_init() {
if File_exporter_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_exporter_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SendContactsRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_exporter_proto_rawDesc,
RawDescriptor: unsafe.Slice(unsafe.StringData(file_exporter_proto_rawDesc), len(file_exporter_proto_rawDesc)),
NumEnums: 0,
NumMessages: 1,
NumExtensions: 0,
@ -149,7 +133,6 @@ func file_exporter_proto_init() {
MessageInfos: file_exporter_proto_msgTypes,
}.Build()
File_exporter_proto = out.File
file_exporter_proto_rawDesc = nil
file_exporter_proto_goTypes = nil
file_exporter_proto_depIdxs = nil
}


@ -1,6 +1,6 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.3.0
// - protoc-gen-go-grpc v1.5.1
// - protoc v3.20.1
// source: exporter.proto
@ -50,20 +50,24 @@ func (c *exporterClient) SendContacts(ctx context.Context, in *SendContactsReque
// ExporterServer is the server API for Exporter service.
// All implementations must embed UnimplementedExporterServer
// for forward compatibility
// for forward compatibility.
type ExporterServer interface {
SendContacts(context.Context, *SendContactsRequest) (*emptypb.Empty, error)
mustEmbedUnimplementedExporterServer()
}
// UnimplementedExporterServer must be embedded to have forward compatible implementations.
type UnimplementedExporterServer struct {
}
// UnimplementedExporterServer must be embedded to have
// forward compatible implementations.
//
// NOTE: this should be embedded by value instead of pointer to avoid a nil
// pointer dereference when methods are called.
type UnimplementedExporterServer struct{}
func (UnimplementedExporterServer) SendContacts(context.Context, *SendContactsRequest) (*emptypb.Empty, error) {
return nil, status.Errorf(codes.Unimplemented, "method SendContacts not implemented")
}
func (UnimplementedExporterServer) mustEmbedUnimplementedExporterServer() {}
func (UnimplementedExporterServer) testEmbeddedByValue() {}
// UnsafeExporterServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to ExporterServer will
@ -73,6 +77,13 @@ type UnsafeExporterServer interface {
}
func RegisterExporterServer(s grpc.ServiceRegistrar, srv ExporterServer) {
// If the following call panics, it indicates UnimplementedExporterServer was
// embedded by pointer and is nil. This will cause panics if an
// unimplemented method is ever invoked, so we test this at initialization
// time to prevent it from happening at runtime later due to I/O.
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
t.testEmbeddedByValue()
}
s.RegisterService(&Exporter_ServiceDesc, srv)
}


@ -23,17 +23,16 @@ type Config struct {
InsertAuthzsIndividually bool
EnforceMultiCAA bool
EnforceMPIC bool
MPICFullResults bool
UnsplitIssuance bool
ExpirationMailerUsesJoin bool
DOH bool
IgnoreAccountContacts bool
// ServeRenewalInfo exposes the renewalInfo endpoint in the directory and for
// GET requests. WARNING: This feature is a draft and highly unstable.
ServeRenewalInfo bool
// ExpirationMailerUsesJoin enables using a JOIN query in expiration-mailer
// rather than a SELECT from certificateStatus followed by thousands of
// one-row SELECTs from certificates.
ExpirationMailerUsesJoin bool
// CertCheckerChecksValidations enables an extra query for each certificate
// checked, to find the relevant authzs. Since this query might be
// expensive, we gate it behind a feature flag.
@ -52,9 +51,6 @@ type Config struct {
// for the cert URL to appear.
AsyncFinalize bool
// DOH enables DNS-over-HTTPS queries for validation
DOH bool
// CheckIdentifiersPaused checks if any of the identifiers in the order are
// currently paused at NewOrder time. If any are paused, an error is
// returned to the Subscriber indicating that the order cannot be processed
@ -81,11 +77,6 @@ type Config struct {
// removing pending authz reuse.
NoPendingAuthzReuse bool
// MPICFullResults causes the VA to wait for all remote (MPIC) results, rather
// than cancelling outstanding requests after enough successes or failures for
// the result to be determined.
MPICFullResults bool
// StoreARIReplacesInOrders causes the SA to store and retrieve the optional
// ARI replaces field in the orders table.
StoreARIReplacesInOrders bool

go.mod (117 lines)

@ -3,66 +3,68 @@ module github.com/letsencrypt/boulder
go 1.24.0
require (
github.com/aws/aws-sdk-go-v2 v1.32.2
github.com/aws/aws-sdk-go-v2/config v1.27.43
github.com/aws/aws-sdk-go-v2/service/s3 v1.65.3
github.com/aws/smithy-go v1.22.0
github.com/aws/aws-sdk-go-v2 v1.36.5
github.com/aws/aws-sdk-go-v2/config v1.29.17
github.com/aws/aws-sdk-go-v2/service/s3 v1.81.0
github.com/aws/smithy-go v1.22.4
github.com/eggsampler/acme/v3 v3.6.2-0.20250208073118-0466a0230941
github.com/go-jose/go-jose/v4 v4.1.0
github.com/go-logr/stdr v1.2.2
github.com/go-sql-driver/mysql v1.5.0
github.com/go-sql-driver/mysql v1.9.1
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da
github.com/google/certificate-transparency-go v1.1.6
github.com/google/certificate-transparency-go v1.3.2-0.20250507091337-0eddb39e94f8
github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus v1.0.1
github.com/jmhodges/clock v1.2.0
github.com/letsencrypt/borp v0.0.0-20240620175310-a78493c6e2bd
github.com/letsencrypt/challtestsrv v1.3.2
github.com/letsencrypt/challtestsrv v1.3.3
github.com/letsencrypt/pkcs11key/v4 v4.0.0
github.com/letsencrypt/validator/v10 v10.0.0-20230215210743-a0c7dfc17158
github.com/miekg/dns v1.1.61
github.com/miekg/pkcs11 v1.1.1
github.com/nxadm/tail v1.4.11
github.com/prometheus/client_golang v1.15.1
github.com/prometheus/client_model v0.4.0
github.com/prometheus/client_golang v1.22.0
github.com/prometheus/client_model v0.6.1
github.com/redis/go-redis/extra/redisotel/v9 v9.5.3
github.com/redis/go-redis/v9 v9.7.3
github.com/titanous/rocacheck v0.0.0-20171023193734-afe73141d399
github.com/weppos/publicsuffix-go v0.40.3-0.20250307081557-c05521c3453a
github.com/zmap/zcrypto v0.0.0-20231219022726-a1f61fb1661c
github.com/zmap/zlint/v3 v3.6.4
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.55.0
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.55.0
go.opentelemetry.io/otel v1.30.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.30.0
go.opentelemetry.io/otel/sdk v1.30.0
go.opentelemetry.io/otel/trace v1.30.0
golang.org/x/crypto v0.36.0
golang.org/x/net v0.37.0
golang.org/x/sync v0.12.0
golang.org/x/term v0.30.0
golang.org/x/text v0.23.0
google.golang.org/grpc v1.66.1
google.golang.org/protobuf v1.34.2
github.com/zmap/zcrypto v0.0.0-20250129210703-03c45d0bae98
github.com/zmap/zlint/v3 v3.6.6
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0
go.opentelemetry.io/otel v1.36.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.36.0
go.opentelemetry.io/otel/sdk v1.36.0
go.opentelemetry.io/otel/trace v1.36.0
golang.org/x/crypto v0.38.0
golang.org/x/net v0.40.0
golang.org/x/sync v0.14.0
golang.org/x/term v0.32.0
golang.org/x/text v0.25.0
golang.org/x/time v0.11.0
google.golang.org/grpc v1.72.1
google.golang.org/protobuf v1.36.6
gopkg.in/yaml.v3 v3.0.1
)
require (
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.6 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.17.41 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.17 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.21 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.21 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.21 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.0 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.2 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.2 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.2 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.24.2 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.2 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.32.2 // indirect
filippo.io/edwards25519 v1.1.0 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.17.70 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.32 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.36 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.36 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.17 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.17 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.25.5 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.30.3 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.34.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cenkalti/backoff/v5 v5.0.2 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
@ -70,34 +72,23 @@ require (
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.1.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.22.0 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/pelletier/go-toml v1.9.5 // indirect
github.com/poy/onpar v1.1.2 // indirect
github.com/prometheus/common v0.42.0 // indirect
github.com/prometheus/procfs v0.9.0 // indirect
github.com/prometheus/common v0.62.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/redis/go-redis/extra/rediscmd/v9 v9.5.3 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.30.0 // indirect
go.opentelemetry.io/otel/metric v1.30.0 // indirect
go.opentelemetry.io/proto/otlp v1.3.1 // indirect
golang.org/x/mod v0.18.0 // indirect
golang.org/x/sys v0.31.0 // indirect
golang.org/x/time v0.10.0
golang.org/x/tools v0.22.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240903143218-8af14fe29dc1 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0 // indirect
go.opentelemetry.io/otel/metric v1.36.0 // indirect
go.opentelemetry.io/proto/otlp v1.6.0 // indirect
golang.org/x/mod v0.22.0 // indirect
golang.org/x/sys v0.33.0 // indirect
golang.org/x/tools v0.29.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250519155744-55703ea1f237 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250519155744-55703ea1f237 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
k8s.io/klog/v2 v2.100.1 // indirect
)
// Versions of go-sql-driver/mysql >1.5.0 introduce performance regressions for
// us, so we exclude them.
// This version is required by parts of the honeycombio/beeline-go package
exclude github.com/go-sql-driver/mysql v1.6.0
// This version is required by borp
exclude github.com/go-sql-driver/mysql v1.7.1

go.sum (275 lines)

@ -1,48 +1,48 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go/compute/metadata v0.2.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k=
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8/go.mod h1:I0gYDMZ6Z5GRU7l58bNFSkPTFN6Yl12dsUlAZ8xy98g=
github.com/a8m/expect v1.0.0/go.mod h1:4IwSCMumY49ScypDnjNbYEjgVeqy1/U2cEs3Lat96eA=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/aws/aws-sdk-go-v2 v1.32.2 h1:AkNLZEyYMLnx/Q/mSKkcMqwNFXMAvFto9bNsHqcTduI=
github.com/aws/aws-sdk-go-v2 v1.32.2/go.mod h1:2SK5n0a2karNTv5tbP1SjsX0uhttou00v/HpXKM1ZUo=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.6 h1:pT3hpW0cOHRJx8Y0DfJUEQuqPild8jRGmSFmBgvydr0=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.6/go.mod h1:j/I2++U0xX+cr44QjHay4Cvxj6FUbnxrgmqN3H1jTZA=
github.com/aws/aws-sdk-go-v2/config v1.27.43 h1:p33fDDihFC390dhhuv8nOmX419wjOSDQRb+USt20RrU=
github.com/aws/aws-sdk-go-v2/config v1.27.43/go.mod h1:pYhbtvg1siOOg8h5an77rXle9tVG8T+BWLWAo7cOukc=
github.com/aws/aws-sdk-go-v2/credentials v1.17.41 h1:7gXo+Axmp+R4Z+AK8YFQO0ZV3L0gizGINCOWxSLY9W8=
github.com/aws/aws-sdk-go-v2/credentials v1.17.41/go.mod h1:u4Eb8d3394YLubphT4jLEwN1rLNq2wFOlT6OuxFwPzU=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.17 h1:TMH3f/SCAWdNtXXVPPu5D6wrr4G5hI1rAxbcocKfC7Q=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.17/go.mod h1:1ZRXLdTpzdJb9fwTMXiLipENRxkGMTn1sfKexGllQCw=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.21 h1:UAsR3xA31QGf79WzpG/ixT9FZvQlh5HY1NRqSHBNOCk=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.21/go.mod h1:JNr43NFf5L9YaG3eKTm7HQzls9J+A9YYcGI5Quh1r2Y=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.21 h1:6jZVETqmYCadGFvrYEQfC5fAQmlo80CeL5psbno6r0s=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.21/go.mod h1:1SR0GbLlnN3QUmYaflZNiH1ql+1qrSiB2vwcJ+4UM60=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 h1:VaRN3TlFdd6KxX1x3ILT5ynH6HvKgqdiXoTxAF4HQcQ=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1/go.mod h1:FbtygfRFze9usAadmnGJNc8KsP346kEe+y2/oyhGAGc=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.21 h1:7edmS3VOBDhK00b/MwGtGglCm7hhwNYnjJs/PgFdMQE=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.21/go.mod h1:Q9o5h4HoIWG8XfzxqiuK/CGUbepCJ8uTlaE3bAbxytQ=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.0 h1:TToQNkvGguu209puTojY/ozlqy2d/SFNcoLIqTFi42g=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.0/go.mod h1:0jp+ltwkf+SwG2fm/PKo8t4y8pJSgOCO4D8Lz3k0aHQ=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.2 h1:4FMHqLfk0efmTqhXVRL5xYRqlEBNBiRI7N6w4jsEdd4=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.2/go.mod h1:LWoqeWlK9OZeJxsROW2RqrSPvQHKTpp69r/iDjwsSaw=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.2 h1:s7NA1SOw8q/5c0wr8477yOPp0z+uBaXBnLE0XYb0POA=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.2/go.mod h1:fnjjWyAW/Pj5HYOxl9LJqWtEwS7W2qgcRLWP+uWbss0=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.2 h1:t7iUP9+4wdc5lt3E41huP+GvQZJD38WLsgVp4iOtAjg=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.2/go.mod h1:/niFCtmuQNxqx9v8WAPq5qh7EH25U4BF6tjoyq9bObM=
github.com/aws/aws-sdk-go-v2/service/s3 v1.65.3 h1:xxHGZ+wUgZNACQmxtdvP5tgzfsxGS3vPpTP5Hy3iToE=
github.com/aws/aws-sdk-go-v2/service/s3 v1.65.3/go.mod h1:cB6oAuus7YXRZhWCc1wIwPywwZ1XwweNp2TVAEGYeB8=
github.com/aws/aws-sdk-go-v2/service/sso v1.24.2 h1:bSYXVyUzoTHoKalBmwaZxs97HU9DWWI3ehHSAMa7xOk=
github.com/aws/aws-sdk-go-v2/service/sso v1.24.2/go.mod h1:skMqY7JElusiOUjMJMOv1jJsP7YUg7DrhgqZZWuzu1U=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.2 h1:AhmO1fHINP9vFYUE0LHzCWg/LfUWUF+zFPEcY9QXb7o=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.2/go.mod h1:o8aQygT2+MVP0NaV6kbdE1YnnIM8RRVQzoeUH45GOdI=
github.com/aws/aws-sdk-go-v2/service/sts v1.32.2 h1:CiS7i0+FUe+/YY1GvIBLLrR/XNGZ4CtM1Ll0XavNuVo=
github.com/aws/aws-sdk-go-v2/service/sts v1.32.2/go.mod h1:HtaiBI8CjYoNVde8arShXb94UbQQi9L4EMr6D+xGBwo=
github.com/aws/smithy-go v1.22.0 h1:uunKnWlcoL3zO7q+gG2Pk53joueEOsnNB28QdMsmiMM=
github.com/aws/smithy-go v1.22.0/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg=
github.com/aws/aws-sdk-go-v2 v1.36.5 h1:0OF9RiEMEdDdZEMqF9MRjevyxAQcf6gY+E7vwBILFj0=
github.com/aws/aws-sdk-go-v2 v1.36.5/go.mod h1:EYrzvCCN9CMUTa5+6lf6MM4tq3Zjp8UhSGR/cBsjai0=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11 h1:12SpdwU8Djs+YGklkinSSlcrPyj3H4VifVsKf78KbwA=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.11/go.mod h1:dd+Lkp6YmMryke+qxW/VnKyhMBDTYP41Q2Bb+6gNZgY=
github.com/aws/aws-sdk-go-v2/config v1.29.17 h1:jSuiQ5jEe4SAMH6lLRMY9OVC+TqJLP5655pBGjmnjr0=
github.com/aws/aws-sdk-go-v2/config v1.29.17/go.mod h1:9P4wwACpbeXs9Pm9w1QTh6BwWwJjwYvJ1iCt5QbCXh8=
github.com/aws/aws-sdk-go-v2/credentials v1.17.70 h1:ONnH5CM16RTXRkS8Z1qg7/s2eDOhHhaXVd72mmyv4/0=
github.com/aws/aws-sdk-go-v2/credentials v1.17.70/go.mod h1:M+lWhhmomVGgtuPOhO85u4pEa3SmssPTdcYpP/5J/xc=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.32 h1:KAXP9JSHO1vKGCr5f4O6WmlVKLFFXgWYAGoJosorxzU=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.32/go.mod h1:h4Sg6FQdexC1yYG9RDnOvLbW1a/P986++/Y/a+GyEM8=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.36 h1:SsytQyTMHMDPspp+spo7XwXTP44aJZZAC7fBV2C5+5s=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.36/go.mod h1:Q1lnJArKRXkenyog6+Y+zr7WDpk4e6XlR6gs20bbeNo=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.36 h1:i2vNHQiXUvKhs3quBR6aqlgJaiaexz/aNvdCktW/kAM=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.36/go.mod h1:UdyGa7Q91id/sdyHPwth+043HhmP6yP9MBHgbZM0xo8=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 h1:bIqFDwgGXXN1Kpp99pDOdKMTTb5d2KyU5X/BZxjOkRo=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3/go.mod h1:H5O/EsxDWyU+LP/V8i5sm8cxoZgc2fdNR9bxlOFrQTo=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36 h1:GMYy2EOWfzdP3wfVAGXBNKY5vK4K8vMET4sYOYltmqs=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.36/go.mod h1:gDhdAV6wL3PmPqBhiPbnlS447GoWs8HTTOYef9/9Inw=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.4 h1:CXV68E2dNqhuynZJPB80bhPQwAKqBWVer887figW6Jc=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.4/go.mod h1:/xFi9KtvBXP97ppCz1TAEvU1Uf66qvid89rbem3wCzQ=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.4 h1:nAP2GYbfh8dd2zGZqFRSMlq+/F6cMPBUuCsGAMkN074=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.4/go.mod h1:LT10DsiGjLWh4GbjInf9LQejkYEhBgBCjLG5+lvk4EE=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.17 h1:t0E6FzREdtCsiLIoLCWsYliNsRBgyGD/MCK571qk4MI=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.17/go.mod h1:ygpklyoaypuyDvOM5ujWGrYWpAK3h7ugnmKCU/76Ys4=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.17 h1:qcLWgdhq45sDM9na4cvXax9dyLitn8EYBRl8Ak4XtG4=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.17/go.mod h1:M+jkjBFZ2J6DJrjMv2+vkBbuht6kxJYtJiwoVgX4p4U=
github.com/aws/aws-sdk-go-v2/service/s3 v1.81.0 h1:1GmCadhKR3J2sMVKs2bAYq9VnwYeCqfRyZzD4RASGlA=
github.com/aws/aws-sdk-go-v2/service/s3 v1.81.0/go.mod h1:kUklwasNoCn5YpyAqC/97r6dzTA1SRKJfKq16SXeoDU=
github.com/aws/aws-sdk-go-v2/service/sso v1.25.5 h1:AIRJ3lfb2w/1/8wOOSqYb9fUKGwQbtysJ2H1MofRUPg=
github.com/aws/aws-sdk-go-v2/service/sso v1.25.5/go.mod h1:b7SiVprpU+iGazDUqvRSLf5XmCdn+JtT1on7uNL6Ipc=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.30.3 h1:BpOxT3yhLwSJ77qIY3DoHAQjZsc4HEGfMCE4NGy3uFg=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.30.3/go.mod h1:vq/GQR1gOFLquZMSrxUK/cpvKCNVYibNyJ1m7JrU88E=
github.com/aws/aws-sdk-go-v2/service/sts v1.34.0 h1:NFOJ/NXEGV4Rq//71Hs1jC/NvPs1ezajK+yQmkwnPV0=
github.com/aws/aws-sdk-go-v2/service/sts v1.34.0/go.mod h1:7ph2tGpfQvwzgistp2+zga9f+bCjlQJPkPUmMgDSD7w=
github.com/aws/smithy-go v1.22.4 h1:uqXzVZNuNexwc/xrh6Tb56u89WDlJY6HS+KC0S4QSjw=
github.com/aws/smithy-go v1.22.4/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp/HgceI=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
@@ -51,14 +51,12 @@ github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
github.com/bwesterb/go-ristretto v1.2.0/go.mod h1:fUIoIZaG73pV5biE2Blr2xEzDoMj7NFEuV9ekS419A0=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cenkalti/backoff/v5 v5.0.2 h1:rIfFVxEf1QsI7E1ZHfp/B4DF/6QBAUhmgkxc0H7Zss8=
github.com/cenkalti/backoff/v5 v5.0.2/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cloudflare/circl v1.1.0/go.mod h1:prBCrKB9DV4poKZY1l9zBXg2QJY7mvgRvtMxxK7fi4I=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
@@ -86,7 +84,6 @@ github.com/go-jose/go-jose/v4 v4.1.0/go.mod h1:GG/vqmYm3Von2nYiB2vGTXzdoNKE5tix5
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
@@ -98,8 +95,8 @@ github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/o
github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
github.com/go-sql-driver/mysql v1.5.0 h1:ozyZYNQW3x3HtqT1jira07DN2PArx2v7/mN66gGcHOs=
github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
github.com/go-sql-driver/mysql v1.9.1 h1:FrjNGn/BsJQjVRuSa8CBrM5BWA9BWoXXat3KrtSb/iI=
github.com/go-sql-driver/mysql v1.9.1/go.mod h1:qn46aNg1333BRMNU69Lq93t8du/dwxI64Gl8i5p1WMU=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
@@ -110,23 +107,15 @@ github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4er
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/certificate-transparency-go v1.1.6 h1:SW5K3sr7ptST/pIvNkSVWMiJqemRmkjJPPT0jzXdOOY=
github.com/google/certificate-transparency-go v1.1.6/go.mod h1:0OJjOsOk+wj6aYQgP7FU0ioQ0AJUmnWPFMqTjQeazPQ=
github.com/google/certificate-transparency-go v1.3.2-0.20250507091337-0eddb39e94f8 h1:1RSWsOSxq2gk4pD/63bhsPwoOXgz2yXVadxXPbwZ0ec=
github.com/google/certificate-transparency-go v1.3.2-0.20250507091337-0eddb39e94f8/go.mod h1:6Rm5w0Mlv87LyBNOCgfKYjdIBBpF42XpXGsbQvQGomQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/go-github/v50 v50.2.0/go.mod h1:VBY8FB6yPIjrtKhozXv4FQupxKLS6H4m6xFZlT43q8Q=
github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
@@ -137,8 +126,8 @@ github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.1.0 h1:pRhl55Yx1eC7BZ1N+BBWwn
github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.1.0/go.mod h1:XKMd7iuf/RGPSMJ/U4HP0zS2Z9Fh8Ps9a+6X26m/tmI=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.22.0 h1:asbCHRVmodnJTuQ3qamDwqVOIjwqUPTYmYuemVOx+Ys=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.22.0/go.mod h1:ggCgvZ2r7uOoQjOyu2Y1NhHmEPPzzuhWgcza5M1Ji1I=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 h1:5ZPtiqj0JL5oKWmcsq4VMaAW5ukBEgSGXEN89zeH1Jo=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3/go.mod h1:ndYquD05frm2vACXE1nsccT4oJzjhw2arTS2cpUD1PI=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jmhodges/clock v1.2.0 h1:eq4kys+NI0PLngzaHEe7AmPT90XMGIEySD1JfV1PDIs=
@@ -147,6 +136,8 @@ github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
@@ -160,8 +151,8 @@ github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/letsencrypt/borp v0.0.0-20240620175310-a78493c6e2bd h1:3c+LdlAOEcW1qmG8gtkMCyAEoslmj6XCmniB+926kMM=
github.com/letsencrypt/borp v0.0.0-20240620175310-a78493c6e2bd/go.mod h1:gMSMCNKhxox/ccR923EJsIvHeVVYfCABGbirqa0EwuM=
github.com/letsencrypt/challtestsrv v1.3.2 h1:pIDLBCLXR3B1DLmOmkkqg29qVa7DDozBnsOpL9PxmAY=
github.com/letsencrypt/challtestsrv v1.3.2/go.mod h1:Ur4e4FvELUXLGhkMztHOsPIsvGxD/kzSJninOrkM+zc=
github.com/letsencrypt/challtestsrv v1.3.3 h1:ki02PH84fo6IOe/A+zt1/kfRBp2JrtauEaa5xwjg4/Q=
github.com/letsencrypt/challtestsrv v1.3.3/go.mod h1:Ur4e4FvELUXLGhkMztHOsPIsvGxD/kzSJninOrkM+zc=
github.com/letsencrypt/pkcs11key/v4 v4.0.0 h1:qLc/OznH7xMr5ARJgkZCCWk+EomQkiNTOoOF5LAgagc=
github.com/letsencrypt/pkcs11key/v4 v4.0.0/go.mod h1:EFUvBDay26dErnNb70Nd0/VW3tJiIbETBPTl9ATXQag=
github.com/letsencrypt/validator/v10 v10.0.0-20230215210743-a0c7dfc17158 h1:HGFsIltYMUiB5eoFSowFzSoXkocM2k9ctmJ57QMGjys=
@@ -172,11 +163,9 @@ github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czP
github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
github.com/mattn/go-sqlite3 v1.14.17 h1:mCRHCLDUBXgpKAqIKsaAaAsrAlbkeomtRFKXh2L6YIM=
github.com/mattn/go-sqlite3 v1.14.17/go.mod h1:2eHXhiwb8IkHr+BDWZGa96P6+rkvnG63S2DGjv9HUNg=
github.com/mattn/go-sqlite3 v1.14.26 h1:h72fc7d3zXGhHpwjWw+fPOBxYUupuKlbhUAQi5n6t58=
github.com/mattn/go-sqlite3 v1.14.26/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/miekg/dns v1.1.43/go.mod h1:+evo5L0630/F6ca/Z9+GAqzhjGyn8/c+TBaOyfEl0V4=
github.com/miekg/dns v1.1.61 h1:nLxbwF3XxhwVSm8g9Dghm9MHPaUZuqhPiGL+675ZmEs=
github.com/miekg/dns v1.1.61/go.mod h1:mnAarhS3nWaW+NVP2wTkYVIZyHNJ098SJZUki3eykwQ=
@@ -187,6 +176,8 @@ github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrk
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mreiferson/go-httpclient v0.0.0-20160630210159-31f0106b4474/go.mod h1:OQA4XLvDbMgS8P0CevmM4m9Q3Jq4phKUzcocxuGJ5m8=
github.com/mreiferson/go-httpclient v0.0.0-20201222173833-5e475fde3a4d/go.mod h1:OQA4XLvDbMgS8P0CevmM4m9Q3Jq4phKUzcocxuGJ5m8=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/nelsam/hel/v2 v2.3.2/go.mod h1:1ZTGfU2PFTOd5mx22i5O0Lc2GY933lQ2wb/ggy+rL3w=
github.com/nxadm/tail v1.4.11 h1:8feyoE3OzPrcshW5/MJ4sGESc5cqmGkGCWlco4l0bqY=
@@ -204,20 +195,20 @@ github.com/poy/onpar v1.1.2 h1:QaNrNiZx0+Nar5dLgTVp5mXkyoVFIbepjyEoGSnhbAY=
github.com/poy/onpar v1.1.2/go.mod h1:6X8FLNoxyr9kkmnlqpK6LSoiOtrO6MICtWwEuWkLjzg=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.15.1 h1:8tXpTmJbyH5lydzFPoxSIJ0J46jdh3tylbvM1xCv0LI=
github.com/prometheus/client_golang v1.15.1/go.mod h1:e9yaBhRPU2pPNsZwE+JdQl0KEt1N9XgF6zxWmaC0xOk=
github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
github.com/prometheus/client_golang v1.22.0/go.mod h1:R7ljNsLXhuQXYZYtw6GAE9AZg8Y7vEW5scdCXrWRXC0=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.4.0 h1:5lQXD3cAg1OXBf4Wq03gTrXHeaV0TQvGfUooCfx1yqY=
github.com/prometheus/client_model v0.4.0/go.mod h1:oMQmHW1/JoDwqLtg57MGgP/Fb1CJEYF2imWWhWtMkYU=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.42.0 h1:EKsfXEYo4JpWMHH5cg+KOUWeuJSov1Id8zGR8eeI1YM=
github.com/prometheus/common v0.42.0/go.mod h1:xBwqVerjNdUDjgODMpudtOMwlOwf2SaTr1yjz4b7Zbc=
github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
github.com/prometheus/common v0.62.0/go.mod h1:vyBcEuLSvWos9B1+CyL7JZ2up+uFzXhkqml0W5zIY1I=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.9.0 h1:wzCHvIvM5SxWqYvwgVL7yJY8Lz3PKn49KQtpgMYJfhI=
github.com/prometheus/procfs v0.9.0/go.mod h1:+pB4zwohETzFnmlpe6yd2lSc+0/46IYZRB/chUwxUZY=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/redis/go-redis/extra/rediscmd/v9 v9.5.3 h1:1/BDligzCa40GTllkDnY3Y5DTHuKCONbB2JcRyIfl20=
github.com/redis/go-redis/extra/rediscmd/v9 v9.5.3/go.mod h1:3dZmcLn3Qw6FLlWASn1g4y+YO9ycEFUOM+bhBmzLVKQ=
@@ -226,8 +217,8 @@ github.com/redis/go-redis/extra/redisotel/v9 v9.5.3/go.mod h1:7f/FMrf5RRRVHXgfk7
github.com/redis/go-redis/v9 v9.7.3 h1:YpPyAayJV+XErNsatSElgRZZVCwXX9QzkKYNvO7x0wM=
github.com/redis/go-redis/v9 v9.7.3/go.mod h1:bGUrSggJ9X9GUmZpZNEOQKaANxSGgOEBRltRTZHSvrA=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sergi/go-diff v1.3.1 h1:xkr+Oxo4BOQKmkn/B9eMK0g5Kg/983T9DqqPHwYqD+8=
github.com/sergi/go-diff v1.3.1/go.mod h1:aMJSSKb2lpPvRNec0+w3fl7LP9IOFzdc9Pa4NFbPK1I=
@@ -235,7 +226,7 @@ github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeV
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.3.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
@@ -246,9 +237,15 @@ github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnIn
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/titanous/rocacheck v0.0.0-20171023193734-afe73141d399 h1:e/5i7d4oYZ+C1wj2THlRK+oAhjeS/TRQwMfkIuet3w0=
@@ -256,7 +253,7 @@ github.com/titanous/rocacheck v0.0.0-20171023193734-afe73141d399/go.mod h1:LdwHT
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/weppos/publicsuffix-go v0.13.0/go.mod h1:z3LCPQ38eedDQSwmsSRW4Y7t2L8Ln16JPQ02lHAdn5k=
github.com/weppos/publicsuffix-go v0.30.2-0.20230730094716-a20f9abcc222/go.mod h1:s41lQh6dIsDWIC1OWh7ChWJXLH0zkJ9KHZVqA7vHyuQ=
github.com/weppos/publicsuffix-go v0.40.3-0.20250127173806-e489a31678ca/go.mod h1:43Dfyxu2dpmLg56at26Q4k9gwf3yWSUiwk8kGnwzULk=
github.com/weppos/publicsuffix-go v0.40.3-0.20250307081557-c05521c3453a h1:YTfQ27VVE3PLzEZnGeSrxSKXMOs0JM2lfK0u4qT3/Mk=
github.com/weppos/publicsuffix-go v0.40.3-0.20250307081557-c05521c3453a/go.mod h1:Uao6F2ZmUjG3hDVL4Bn43YHRLuLapqXWKOa9GWk9JC0=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
@@ -269,30 +266,34 @@ github.com/zmap/zcertificate v0.0.0-20180516150559-0e3d58b1bac4/go.mod h1:5iU54t
github.com/zmap/zcertificate v0.0.1/go.mod h1:q0dlN54Jm4NVSSuzisusQY0hqDWvu92C+TWveAxiVWk=
github.com/zmap/zcrypto v0.0.0-20201128221613-3719af1573cf/go.mod h1:aPM7r+JOkfL+9qSB4KbYjtoEzJqUK50EXkkJabeNJDQ=
github.com/zmap/zcrypto v0.0.0-20201211161100-e54a5822fb7e/go.mod h1:aPM7r+JOkfL+9qSB4KbYjtoEzJqUK50EXkkJabeNJDQ=
github.com/zmap/zcrypto v0.0.0-20231219022726-a1f61fb1661c h1:U1b4THKcgOpJ+kILupuznNwPiURtwVW3e9alJvji9+s=
github.com/zmap/zcrypto v0.0.0-20231219022726-a1f61fb1661c/go.mod h1:GSDpFDD4TASObxvfZfvpZZ3OWHIUHMlhVWlkOe4ewVk=
github.com/zmap/zcrypto v0.0.0-20250129210703-03c45d0bae98 h1:Qp98bmMm9JHPPOaLi2Nb6oWoZ+1OyOMWI7PPeJrirI0=
github.com/zmap/zcrypto v0.0.0-20250129210703-03c45d0bae98/go.mod h1:YTUyN/U1oJ7RzCEY5hUweYxbVUu7X+11wB7OXZT15oE=
github.com/zmap/zlint/v3 v3.0.0/go.mod h1:paGwFySdHIBEMJ61YjoqT4h7Ge+fdYG4sUQhnTb1lJ8=
github.com/zmap/zlint/v3 v3.6.4 h1:r2kHfRF7mIsxW0IH4Og2iZnrlpCLTZBFjnXy1x/ZnZI=
github.com/zmap/zlint/v3 v3.6.4/go.mod h1:KQLVUquVaO5YJDl5a4k/7RPIbIW2v66+sRoBPNZusI8=
github.com/zmap/zlint/v3 v3.6.6 h1:tH7RJM9bDmh7IonlLEkFIkIn8XDYDYjehhUPgpLVqYA=
github.com/zmap/zlint/v3 v3.6.6/go.mod h1:6yXG+CBOQBRpMCOnpIVPUUL296m5HYksZC9bj5LZkwE=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.55.0 h1:hCq2hNMwsegUvPzI7sPOvtO9cqyy5GbWt/Ybp2xrx8Q=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.55.0/go.mod h1:LqaApwGx/oUmzsbqxkzuBvyoPpkxk3JQWnqfVrJ3wCA=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.55.0 h1:ZIg3ZT/aQ7AfKqdwp7ECpOK6vHqquXXuyTjIO8ZdmPs=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.55.0/go.mod h1:DQAwmETtZV00skUwgD6+0U89g80NKsJE3DCKeLLPQMI=
go.opentelemetry.io/otel v1.30.0 h1:F2t8sK4qf1fAmY9ua4ohFS/K+FUuOPemHUIXHtktrts=
go.opentelemetry.io/otel v1.30.0/go.mod h1:tFw4Br9b7fOS+uEao81PJjVMjW/5fvNCbpsDIXqP0pc=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.30.0 h1:lsInsfvhVIfOI6qHVyysXMNDnjO9Npvl7tlDPJFBVd4=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.30.0/go.mod h1:KQsVNh4OjgjTG0G6EiNi1jVpnaeeKsKMRwbLN+f1+8M=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.30.0 h1:m0yTiGDLUvVYaTFbAvCkVYIYcvwKt3G7OLoN77NUs/8=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.30.0/go.mod h1:wBQbT4UekBfegL2nx0Xk1vBcnzyBPsIVm9hRG4fYcr4=
go.opentelemetry.io/otel/metric v1.30.0 h1:4xNulvn9gjzo4hjg+wzIKG7iNFEaBMX00Qd4QIZs7+w=
go.opentelemetry.io/otel/metric v1.30.0/go.mod h1:aXTfST94tswhWEb+5QjlSqG+cZlmyXy/u8jFpor3WqQ=
go.opentelemetry.io/otel/sdk v1.30.0 h1:cHdik6irO49R5IysVhdn8oaiR9m8XluDaJAs4DfOrYE=
go.opentelemetry.io/otel/sdk v1.30.0/go.mod h1:p14X4Ok8S+sygzblytT1nqG98QG2KYKv++HE0LY/mhg=
go.opentelemetry.io/otel/trace v1.30.0 h1:7UBkkYzeg3C7kQX8VAidWh2biiQbtAKjyIML8dQ9wmc=
go.opentelemetry.io/otel/trace v1.30.0/go.mod h1:5EyKqTzzmyqB9bwtCCq6pDLktPK6fmGf/Dph+8VI02o=
go.opentelemetry.io/proto/otlp v1.3.1 h1:TrMUixzpM0yuc/znrFTP9MMRh8trP93mkCiDVeXrui0=
go.opentelemetry.io/proto/otlp v1.3.1/go.mod h1:0X1WI4de4ZsLrrJNLAQbFeLCm3T7yBkR0XqQ7niQU+8=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0 h1:q4XOmH/0opmeuJtPsbFNivyl7bCt7yRBbeEm2sC/XtQ=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.61.0/go.mod h1:snMWehoOh2wsEwnvvwtDyFCxVeDAODenXHtn5vzrKjo=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6hwnZ41NSFTO5q4LYDtJRXBf2PD0rNBkeB/lus=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q=
go.opentelemetry.io/otel v1.36.0 h1:UumtzIklRBY6cI/lllNZlALOF5nNIzJVb16APdvgTXg=
go.opentelemetry.io/otel v1.36.0/go.mod h1:/TcFMXYjyRNh8khOAO9ybYkqaDBb/70aVwkNML4pP8E=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0 h1:dNzwXjZKpMpE2JhmO+9HsPl42NIXFIFSUSSs0fiqra0=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0/go.mod h1:90PoxvaEB5n6AOdZvi+yWJQoE95U8Dhhw2bSyRqnTD0=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.36.0 h1:JgtbA0xkWHnTmYk7YusopJFX6uleBmAuZ8n05NEh8nQ=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.36.0/go.mod h1:179AK5aar5R3eS9FucPy6rggvU0g52cvKId8pv4+v0c=
go.opentelemetry.io/otel/metric v1.36.0 h1:MoWPKVhQvJ+eeXWHFBOPoBOi20jh6Iq2CcCREuTYufE=
go.opentelemetry.io/otel/metric v1.36.0/go.mod h1:zC7Ks+yeyJt4xig9DEw9kuUFe5C3zLbVjV2PzT6qzbs=
go.opentelemetry.io/otel/sdk v1.36.0 h1:b6SYIuLRs88ztox4EyrvRti80uXIFy+Sqzoh9kFULbs=
go.opentelemetry.io/otel/sdk v1.36.0/go.mod h1:+lC+mTgD+MUWfjJubi2vvXWcVxyr9rmlshZni72pXeY=
go.opentelemetry.io/otel/sdk/metric v1.36.0 h1:r0ntwwGosWGaa0CrSt8cuNuTcccMXERFwHX4dThiPis=
go.opentelemetry.io/otel/sdk/metric v1.36.0/go.mod h1:qTNOhFDfKRwX0yXOqJYegL5WRaW376QbB7P4Pb0qva4=
go.opentelemetry.io/otel/trace v1.36.0 h1:ahxWNuqZjpdiFAyrIoQ4GIiAIhxAunQR6MUoKrsNd4w=
go.opentelemetry.io/otel/trace v1.36.0/go.mod h1:gQ+OnDZzrybY4k4seLzPAWNwVBBVlF2szhehOBB/tGA=
go.opentelemetry.io/proto/otlp v1.6.0 h1:jQjP+AQyTf+Fe7OKj/MfkDrmK4MNVtw2NpXsf9fefDI=
go.opentelemetry.io/proto/otlp v1.6.0/go.mod h1:cicgGehlFuNdgZkcALOCh3VE6K/u2tAjzlRhDwmVpZc=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
@@ -305,14 +306,14 @@ golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20201124201722-c8d3bf9c5392/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20201208171446-5f87f3452ae9/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU=
golang.org/x/crypto v0.11.0/go.mod h1:xgJhtzW8F9jGdVFWZESrid1U1bjeNy4zgy5cRr/CIio=
golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
golang.org/x/crypto v0.17.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.32.0/go.mod h1:ZnnJkOaASj8g0AjIduWNlq2NRxL0PlBrbKVyZ6V/Ugc=
golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc=
golang.org/x/crypto v0.38.0 h1:jt+WWG8IZlBnVbomuhg2Mdq0+BBQaHbtqHEFEigjUV8=
golang.org/x/crypto v0.38.0/go.mod h1:MvrbAqul58NNYPKnOra203SB9vpuZW0e+RRZV+Ggqjw=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
@@ -321,15 +322,14 @@ golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.15.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.18.0 h1:5+9lSbEzPSdWkH32vYPBwEpX8KwDbM52Ud9xBUvNlb0=
golang.org/x/mod v0.18.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.22.0 h1:D4nJWe9zXqHOmWqj4VMOJhvzj7bEZg4wEYa759z1pH4=
golang.org/x/mod v0.22.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
@@ -337,16 +337,16 @@ golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.12.0/go.mod h1:zEVYFnQC7m/vmpQFELhcD1EWkZlX69l4oqgmer6hfKA=
golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.37.0 h1:1zLorHbz+LYj7MQlSf1+2tPIIgibq2eL5xkrGk6f+2c=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k=
golang.org/x/net v0.37.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
golang.org/x/net v0.40.0 h1:79Xs7wF06Gbdcg4kdCCIQArK11Z1hr5POQ6+fIYHNuY=
golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.6.0/go.mod h1:ycmewcwgD4Rpr3eZJLSB4Kyyljb3qDh40vJ8STE5HKw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -358,8 +358,10 @@ golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.12.0 h1:MHc5BpPuC30uJk597Ri8TV3CNZcTLu6B6z4lJy+g6Jw=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.14.0 h1:woo0S4Yywslg6hp4eUFjTVOyKt0RookbpAHG4c1HmhQ=
golang.org/x/sync v0.14.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -373,52 +375,50 @@ golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20201126233918-771906719818/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210303074136-134d130e1a04/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.10.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.10.0/go.mod h1:lpqdcUyK/oCiQxvxVrppt5ggO2KCZ5QblwqPnfZ6d5o=
golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
golang.org/x/term v0.15.0/go.mod h1:BDl952bC7+uMoWR75FIrCDx79TPU9oHkTZ9yRbYOrX0=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
golang.org/x/term v0.30.0 h1:PQ39fJZ+mfadBm0y5WlL4vlM7Sx1Hgf13sMIY2+QS9Y=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/term v0.28.0/go.mod h1:Sw/lC2IAUZ92udQNf3WodGtn4k/XoLyZoh8v/8uiwek=
golang.org/x/term v0.30.0/go.mod h1:NYYFdzHoI5wRh/h5tDMdMqCqPJZEuNqVR5xJLd/n67g=
golang.org/x/term v0.32.0 h1:DR4lr0TjUs3epypdhTOkMmuF5CDFJ/8pOnbzMZPQ7bg=
golang.org/x/term v0.32.0/go.mod h1:uZG1FhGx848Sqfsq4/DlJr3xGGsYMu/L5GW4abiaEPQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.11.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
golang.org/x/text v0.25.0 h1:qVyWApTSYLk/drJRO5mDlNYskwQznZmkpV2c8q9zls4=
golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.10.0 h1:3usCWA8tQn0L8+hFJQNgzpWbd89begxN66o1Ojdn5L4=
golang.org/x/time v0.10.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0=
golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -429,28 +429,23 @@ golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
golang.org/x/tools v0.22.0 h1:gqSGLZqv+AI9lIQzniJ0nZDRG5GBPsSi+DRNHWNz6yA=
golang.org/x/tools v0.22.0/go.mod h1:aCwcsjqvq7Yqt6TNyX7QMU2enbQ/Gt0bo6krSeEri+c=
golang.org/x/tools v0.29.0 h1:Xx0h3TtM9rzQpQuR4dKLrdglAmCEN5Oi+P74JdhdzXE=
golang.org/x/tools v0.29.0/go.mod h1:KMQVMRsVxU6nHCFXrBPhDB8XncLNLM0lIy/F14RP588=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto/googleapis/api v0.0.0-20240903143218-8af14fe29dc1 h1:hjSy6tcFQZ171igDaN5QHOw2n6vx40juYbC/x67CEhc=
google.golang.org/genproto/googleapis/api v0.0.0-20240903143218-8af14fe29dc1/go.mod h1:qpvKtACPCQhAdu3PyQgV4l3LMXZEtft7y8QcarRsp9I=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1 h1:pPJltXNxVzT4pK9yD8vR9X75DaWYYmLGMsEvBfFQZzQ=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1/go.mod h1:UqMtugtsSgubUsoxbuAoiCXvqvErP7Gf0so0mK9tHxU=
google.golang.org/genproto/googleapis/api v0.0.0-20250519155744-55703ea1f237 h1:Kog3KlB4xevJlAcbbbzPfRG0+X9fdoGM+UBRKVz6Wr0=
google.golang.org/genproto/googleapis/api v0.0.0-20250519155744-55703ea1f237/go.mod h1:ezi0AVyMKDWy5xAncvjLWH7UcLBB5n7y2fQ8MzjJcto=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250519155744-55703ea1f237 h1:cJfm9zPbe1e873mHJzmQ1nwVEeRDU/T1wXDK2kUSU34=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250519155744-55703ea1f237/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.66.1 h1:hO5qAXR19+/Z44hmvIM4dQFMSYX9XcWsByfoxutBpAM=
google.golang.org/grpc v1.66.1/go.mod h1:s3/l6xSSCURdVfAnL+TqCNMyTDAGN6+lZeVxnZR128Y=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.34.2 h1:6xV6lTsCfpGD21XK49h7MhtcApnLqkfYgPcdHftf6hg=
google.golang.org/protobuf v1.34.2/go.mod h1:qYOHts0dSfpeUzUFpOMr/WGzszTmLH+DiWniOlNbLDw=
google.golang.org/grpc v1.72.1 h1:HR03wO6eyZ7lknl75XlxABNVLLFc2PAb6mHlYh756mA=
google.golang.org/grpc v1.72.1/go.mod h1:wH5Aktxcg25y1I3w7H69nHfXdOG3UiadoBtjh3izSDM=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -466,5 +461,3 @@ gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
k8s.io/klog/v2 v2.100.1 h1:7WCHKK6K8fNhTqfBhISHQ97KrnJNFZMcQvKp7gP/tmg=
k8s.io/klog/v2 v2.100.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=

View File

@@ -14,11 +14,13 @@ import (
"github.com/letsencrypt/boulder/cmd"
bcreds "github.com/letsencrypt/boulder/grpc/creds"
// 'grpc/health' is imported for its init function, which causes clients to
// rely on the Health Service for load-balancing.
// 'grpc/internal/resolver/dns' is imported for its init function, which
// registers the SRV resolver.
"google.golang.org/grpc/balancer/roundrobin"
// 'grpc/health' is imported for its init function, which causes clients to
// rely on the Health Service for load-balancing as long as a
// "healthCheckConfig" is specified in the gRPC service config.
_ "google.golang.org/grpc/health"
_ "github.com/letsencrypt/boulder/grpc/internal/resolver/dns"
@@ -46,13 +48,11 @@ func ClientSetup(c *cmd.GRPCClientConfig, tlsConfig *tls.Config, statsRegistry p
unaryInterceptors := []grpc.UnaryClientInterceptor{
cmi.Unary,
cmi.metrics.grpcMetrics.UnaryClientInterceptor(),
otelgrpc.UnaryClientInterceptor(),
}
streamInterceptors := []grpc.StreamClientInterceptor{
cmi.Stream,
cmi.metrics.grpcMetrics.StreamClientInterceptor(),
otelgrpc.StreamClientInterceptor(),
}
target, hostOverride, err := c.MakeTargetAndHostOverride()
@@ -61,12 +61,27 @@ func ClientSetup(c *cmd.GRPCClientConfig, tlsConfig *tls.Config, statsRegistry p
}
creds := bcreds.NewClientCredentials(tlsConfig.RootCAs, tlsConfig.Certificates, hostOverride)
return grpc.Dial(
return grpc.NewClient(
target,
grpc.WithDefaultServiceConfig(fmt.Sprintf(`{"loadBalancingConfig": [{"%s":{}}]}`, roundrobin.Name)),
grpc.WithDefaultServiceConfig(
fmt.Sprintf(
// By setting the service name to an empty string in
// healthCheckConfig, we're instructing the gRPC client to query
// the overall health status of each server. The grpc-go health
// server, as constructed by health.NewServer(), unconditionally
// sets the overall service (e.g. "") status to SERVING. If a
// specific service name were set, the server would need to
// explicitly transition that service to SERVING; otherwise,
// clients would receive a NOT_FOUND status and the connection
// would be marked as unhealthy (TRANSIENT_FAILURE).
`{"healthCheckConfig": {"serviceName": ""},"loadBalancingConfig": [{"%s":{}}]}`,
roundrobin.Name,
),
),
grpc.WithTransportCredentials(creds),
grpc.WithChainUnaryInterceptor(unaryInterceptors...),
grpc.WithChainStreamInterceptor(streamInterceptors...),
grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
)
}

View File

@@ -27,17 +27,19 @@ import (
"errors"
"fmt"
"net"
"net/netip"
"strconv"
"strings"
"sync"
"time"
"github.com/letsencrypt/boulder/bdns"
"github.com/letsencrypt/boulder/grpc/internal/backoff"
"github.com/letsencrypt/boulder/grpc/noncebalancer"
"google.golang.org/grpc/grpclog"
"google.golang.org/grpc/resolver"
"google.golang.org/grpc/serviceconfig"
"github.com/letsencrypt/boulder/bdns"
"github.com/letsencrypt/boulder/grpc/internal/backoff"
"github.com/letsencrypt/boulder/grpc/noncebalancer"
)
var logger = grpclog.Component("srv")
@@ -292,11 +294,11 @@ func (d *dnsResolver) lookup() (*resolver.State, error) {
// If addr is an IPv4 address, return the addr and ok = true.
// If addr is an IPv6 address, return the addr enclosed in square brackets and ok = true.
func formatIP(addr string) (addrIP string, ok bool) {
ip := net.ParseIP(addr)
if ip == nil {
ip, err := netip.ParseAddr(addr)
if err != nil {
return "", false
}
if ip.To4() != nil {
if ip.Is4() {
return addr, true
}
return "[" + addr + "]", true
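The updated `formatIP` can be exercised in isolation. A minimal sketch of the new `net/netip`-based version follows; note one behavioral nuance of the migration: an IPv4-mapped IPv6 literal such as `::ffff:1.2.3.4` satisfies `net.IP.To4` but not `netip.Addr.Is4`, so it now receives brackets.

```go
package main

import (
	"fmt"
	"net/netip"
)

// formatIP mirrors the resolver helper above: IPv4 addresses pass through
// unchanged, IPv6 addresses gain the square brackets required in
// host:port strings, and anything unparseable is rejected.
func formatIP(addr string) (string, bool) {
	ip, err := netip.ParseAddr(addr)
	if err != nil {
		return "", false
	}
	if ip.Is4() {
		return addr, true
	}
	return "[" + addr + "]", true
}

func main() {
	for _, addr := range []string{"10.0.0.1", "2001:db8::1", "not-an-ip"} {
		out, ok := formatIP(addr)
		fmt.Printf("%q -> %q %v\n", addr, out, ok)
	}
}
```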

View File

@@ -115,19 +115,12 @@ func setupTest(noSubConns bool) (*Balancer, balancer.Picker, []*subConn) {
return b, p, subConns
}
// subConn implements the balancer.SubConn interface.
// subConn is a test mock which implements the balancer.SubConn interface.
type subConn struct {
balancer.SubConn
addrs []resolver.Address
}
func (s *subConn) UpdateAddresses(addrs []resolver.Address) {
s.addrs = addrs
}
func (s *subConn) Connect() {}
func (s *subConn) GetOrBuildProducer(balancer.ProducerBuilder) (p balancer.Producer, close func()) {
panic("unimplemented")
}
func (s *subConn) Shutdown() {}

View File

@@ -7,7 +7,7 @@ package grpc
import (
"fmt"
"net"
"net/netip"
"time"
"github.com/go-jose/go-jose/v4"
@@ -18,12 +18,12 @@ import (
corepb "github.com/letsencrypt/boulder/core/proto"
"github.com/letsencrypt/boulder/identifier"
"github.com/letsencrypt/boulder/probs"
"github.com/letsencrypt/boulder/revocation"
sapb "github.com/letsencrypt/boulder/sa/proto"
vapb "github.com/letsencrypt/boulder/va/proto"
)
var ErrMissingParameters = CodedError(codes.FailedPrecondition, "required RPC parameter was missing")
var ErrInvalidParameters = CodedError(codes.InvalidArgument, "RPC parameter was invalid")
// This file defines functions to translate between the protobuf types and the
// code types.
@@ -131,17 +131,17 @@ func ValidationRecordToPB(record core.ValidationRecord) (*corepb.ValidationRecor
addrsTried := make([][]byte, len(record.AddressesTried))
var err error
for i, v := range record.AddressesResolved {
addrs[i] = []byte(v)
addrs[i] = v.AsSlice()
}
for i, v := range record.AddressesTried {
addrsTried[i] = []byte(v)
addrsTried[i] = v.AsSlice()
}
addrUsed, err := record.AddressUsed.MarshalText()
if err != nil {
return nil, err
}
return &corepb.ValidationRecord{
Hostname: record.DnsName,
Hostname: record.Hostname,
Port: record.Port,
AddressesResolved: addrs,
AddressUsed: addrUsed,
@@ -155,21 +155,29 @@ func PBToValidationRecord(in *corepb.ValidationRecord) (record core.ValidationRe
if in == nil {
return core.ValidationRecord{}, ErrMissingParameters
}
addrs := make([]net.IP, len(in.AddressesResolved))
addrs := make([]netip.Addr, len(in.AddressesResolved))
for i, v := range in.AddressesResolved {
addrs[i] = net.IP(v)
netIP, ok := netip.AddrFromSlice(v)
if !ok {
return core.ValidationRecord{}, ErrInvalidParameters
}
addrsTried := make([]net.IP, len(in.AddressesTried))
addrs[i] = netIP
}
addrsTried := make([]netip.Addr, len(in.AddressesTried))
for i, v := range in.AddressesTried {
addrsTried[i] = net.IP(v)
netIP, ok := netip.AddrFromSlice(v)
if !ok {
return core.ValidationRecord{}, ErrInvalidParameters
}
var addrUsed net.IP
addrsTried[i] = netIP
}
var addrUsed netip.Addr
err = addrUsed.UnmarshalText(in.AddressUsed)
if err != nil {
return
}
return core.ValidationRecord{
DnsName: in.Hostname,
Hostname: in.Hostname,
Port: in.Port,
AddressesResolved: addrs,
AddressUsed: addrUsed,
@@ -343,58 +351,8 @@ func newOrderValid(order *corepb.Order) bool {
return !(order.RegistrationID == 0 || order.Expires == nil || len(order.Identifiers) == 0)
}
func CertToPB(cert core.Certificate) *corepb.Certificate {
return &corepb.Certificate{
RegistrationID: cert.RegistrationID,
Serial: cert.Serial,
Digest: cert.Digest,
Der: cert.DER,
Issued: timestamppb.New(cert.Issued),
Expires: timestamppb.New(cert.Expires),
}
}
func PBToCert(pb *corepb.Certificate) core.Certificate {
return core.Certificate{
RegistrationID: pb.RegistrationID,
Serial: pb.Serial,
Digest: pb.Digest,
DER: pb.Der,
Issued: pb.Issued.AsTime(),
Expires: pb.Expires.AsTime(),
}
}
func CertStatusToPB(certStatus core.CertificateStatus) *corepb.CertificateStatus {
return &corepb.CertificateStatus{
Serial: certStatus.Serial,
Status: string(certStatus.Status),
OcspLastUpdated: timestamppb.New(certStatus.OCSPLastUpdated),
RevokedDate: timestamppb.New(certStatus.RevokedDate),
RevokedReason: int64(certStatus.RevokedReason),
LastExpirationNagSent: timestamppb.New(certStatus.LastExpirationNagSent),
NotAfter: timestamppb.New(certStatus.NotAfter),
IsExpired: certStatus.IsExpired,
IssuerID: certStatus.IssuerNameID,
}
}
func PBToCertStatus(pb *corepb.CertificateStatus) core.CertificateStatus {
return core.CertificateStatus{
Serial: pb.Serial,
Status: core.OCSPStatus(pb.Status),
OCSPLastUpdated: pb.OcspLastUpdated.AsTime(),
RevokedDate: pb.RevokedDate.AsTime(),
RevokedReason: revocation.Reason(pb.RevokedReason),
LastExpirationNagSent: pb.LastExpirationNagSent.AsTime(),
NotAfter: pb.NotAfter.AsTime(),
IsExpired: pb.IsExpired,
IssuerNameID: pb.IssuerID,
}
}
// PBToAuthzMap converts a protobuf map of domains mapped to protobuf authorizations to a
// golang map[string]*core.Authorization.
// PBToAuthzMap converts a protobuf map of identifiers mapped to protobuf
// authorizations to a golang map[string]*core.Authorization.
func PBToAuthzMap(pb *sapb.Authorizations) (map[identifier.ACMEIdentifier]*core.Authorization, error) {
m := make(map[identifier.ACMEIdentifier]*core.Authorization, len(pb.Authzs))
for _, v := range pb.Authzs {

View File

@@ -2,7 +2,7 @@ package grpc
import (
"encoding/json"
"net"
"net/netip"
"testing"
"time"
@@ -69,15 +69,15 @@ func TestChallenge(t *testing.T) {
test.AssertNotError(t, err, "PBToChallenge failed")
test.AssertDeepEquals(t, recon, chall)
ip := net.ParseIP("1.1.1.1")
ip := netip.MustParseAddr("1.1.1.1")
chall.ValidationRecord = []core.ValidationRecord{
{
DnsName: "example.com",
Hostname: "example.com",
Port: "2020",
AddressesResolved: []net.IP{ip},
AddressesResolved: []netip.Addr{ip},
AddressUsed: ip,
URL: "https://example.com:2020",
AddressesTried: []net.IP{ip},
AddressesTried: []netip.Addr{ip},
},
}
chall.Error = &probs.ProblemDetails{Type: probs.TLSProblem, Detail: "asd", HTTPStatus: 200}
@@ -111,14 +111,14 @@ }
}
func TestValidationRecord(t *testing.T) {
ip := net.ParseIP("1.1.1.1")
ip := netip.MustParseAddr("1.1.1.1")
vr := core.ValidationRecord{
DnsName: "exampleA.com",
Hostname: "exampleA.com",
Port: "80",
AddressesResolved: []net.IP{ip},
AddressesResolved: []netip.Addr{ip},
AddressUsed: ip,
URL: "http://exampleA.com",
AddressesTried: []net.IP{ip},
AddressesTried: []netip.Addr{ip},
ResolverAddrs: []string{"resolver:5353"},
}
@@ -132,23 +132,23 @@ }
}
func TestValidationResult(t *testing.T) {
ip := net.ParseIP("1.1.1.1")
ip := netip.MustParseAddr("1.1.1.1")
vrA := core.ValidationRecord{
DnsName: "exampleA.com",
Hostname: "exampleA.com",
Port: "443",
AddressesResolved: []net.IP{ip},
AddressesResolved: []netip.Addr{ip},
AddressUsed: ip,
URL: "https://exampleA.com",
AddressesTried: []net.IP{ip},
AddressesTried: []netip.Addr{ip},
ResolverAddrs: []string{"resolver:5353"},
}
vrB := core.ValidationRecord{
DnsName: "exampleB.com",
Hostname: "exampleB.com",
Port: "443",
AddressesResolved: []net.IP{ip},
AddressesResolved: []netip.Addr{ip},
AddressUsed: ip,
URL: "https://exampleB.com",
AddressesTried: []net.IP{ip},
AddressesTried: []netip.Addr{ip},
ResolverAddrs: []string{"resolver:5353"},
}
result := []core.ValidationRecord{vrA, vrB}
@@ -267,23 +267,6 @@ func TestAuthz(t *testing.T) {
test.AssertDeepEquals(t, inAuthzNilExpires, outAuthz2)
}
func TestCert(t *testing.T) {
now := time.Now().Round(0).UTC()
cert := core.Certificate{
RegistrationID: 1,
Serial: "serial",
Digest: "digest",
DER: []byte{255},
Issued: now,
Expires: now.Add(time.Hour),
}
certPB := CertToPB(cert)
outCert := PBToCert(certPB)
test.AssertDeepEquals(t, cert, outCert)
}
func TestOrderValid(t *testing.T) {
created := time.Now()
expires := created.Add(1 * time.Hour)

View File

@@ -3,6 +3,7 @@ package grpc
import (
"fmt"
"net"
"net/netip"
"strings"
"google.golang.org/grpc/resolver"
@@ -91,7 +92,8 @@ func parseResolverIPAddress(addr string) (*resolver.Address, error) {
// empty (e.g. :80), the local system is assumed.
host = "127.0.0.1"
}
if net.ParseIP(host) == nil {
_, err = netip.ParseAddr(host)
if err != nil {
// Host is a DNS name or an IPv6 address without brackets.
return nil, fmt.Errorf("address %q is not an IP address", addr)
}
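The host-check logic above can be sketched on its own. This is a hedged sketch, not the resolver's actual file: `isIPHost` is a hypothetical helper that condenses the `parseResolverIPAddress` steps shown in the diff. `net.SplitHostPort` strips the brackets from an IPv6 literal, which is why a bare `netip.ParseAddr` on the host suffices afterward.

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

// isIPHost mirrors the check in parseResolverIPAddress: split off the
// port, default an empty host to loopback, then require the host to be
// a literal IP address rather than a DNS name.
func isIPHost(addr string) (bool, error) {
	host, _, err := net.SplitHostPort(addr)
	if err != nil {
		return false, err
	}
	if host == "" {
		// An empty host (e.g. ":80") means the local system.
		host = "127.0.0.1"
	}
	_, err = netip.ParseAddr(host)
	return err == nil, nil
}

func main() {
	for _, addr := range []string{"127.0.0.1:53", "[2001:db8::1]:53", ":80", "example.com:53"} {
		ok, err := isIPHost(addr)
		fmt.Println(addr, ok, err)
	}
}
```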

View File

@@ -6,6 +6,7 @@ import (
"errors"
"fmt"
"net"
"slices"
"strings"
"time"
@@ -123,11 +124,20 @@ func (sb *serverBuilder) Build(tlsConfig *tls.Config, statsRegistry prometheus.R
// This is the names which are allowlisted at the server level, plus the union
// of all names which are allowlisted for any individual service.
acceptedSANs := make(map[string]struct{})
var acceptedSANsSlice []string
for _, service := range sb.cfg.Services {
for _, name := range service.ClientNames {
acceptedSANs[name] = struct{}{}
if !slices.Contains(acceptedSANsSlice, name) {
acceptedSANsSlice = append(acceptedSANsSlice, name)
}
}
}
// Ensure that the health service has the same ClientNames as the other
// services, so that health checks can be performed by clients which are
// allowed to connect to the server.
sb.cfg.Services[healthpb.Health_ServiceDesc.ServiceName].ClientNames = acceptedSANsSlice
creds, err := bcreds.NewServerCredentials(tlsConfig, acceptedSANs)
if err != nil {
@@ -224,8 +234,12 @@ func (sb *serverBuilder) Build(tlsConfig *tls.Config, statsRegistry prometheus.R
// initLongRunningCheck initializes a goroutine which will periodically check
// the health of the provided service and update the health server accordingly.
//
// TODO(#8255): Remove the service parameter and instead rely on transitioning
// the overall health of the server (e.g. "") instead of individual services.
func (sb *serverBuilder) initLongRunningCheck(shutdownCtx context.Context, service string, checkImpl func(context.Context) error) {
// Set the initial health status for the service.
sb.healthSrv.SetServingStatus("", healthpb.HealthCheckResponse_NOT_SERVING)
sb.healthSrv.SetServingStatus(service, healthpb.HealthCheckResponse_NOT_SERVING)
// check is a helper function that checks the health of the service and, if
@@ -249,10 +263,13 @@ }
}
if next != healthpb.HealthCheckResponse_SERVING {
sb.logger.Errf("transitioning overall health from %q to %q, due to: %s", last, next, err)
sb.logger.Errf("transitioning health of %q from %q to %q, due to: %s", service, last, next, err)
} else {
sb.logger.Infof("transitioning overall health from %q to %q", last, next)
sb.logger.Infof("transitioning health of %q from %q to %q", service, last, next)
}
sb.healthSrv.SetServingStatus("", next)
sb.healthSrv.SetServingStatus(service, next)
return next
}

View File

@@ -11,7 +11,7 @@ import (
"google.golang.org/grpc/health"
)
func Test_serverBuilder_initLongRunningCheck(t *testing.T) {
func TestServerBuilderInitLongRunningCheck(t *testing.T) {
t.Parallel()
hs := health.NewServer()
mockLogger := blog.NewMock()
@@ -41,8 +41,8 @@ func Test_serverBuilder_initLongRunningCheck(t *testing.T) {
// - ~100ms 3rd check failed, SERVING to NOT_SERVING
serving := mockLogger.GetAllMatching(".*\"NOT_SERVING\" to \"SERVING\"")
notServing := mockLogger.GetAllMatching((".*\"SERVING\" to \"NOT_SERVING\""))
test.Assert(t, len(serving) == 1, "expected one serving log line")
test.Assert(t, len(notServing) == 1, "expected one not serving log line")
test.Assert(t, len(serving) == 2, "expected two serving log lines")
test.Assert(t, len(notServing) == 2, "expected two not serving log lines")
mockLogger.Clear()
@@ -67,6 +67,6 @@ func Test_serverBuilder_initLongRunningCheck(t *testing.T) {
// - ~100ms 3rd check passed, NOT_SERVING to SERVING
serving = mockLogger.GetAllMatching(".*\"NOT_SERVING\" to \"SERVING\"")
notServing = mockLogger.GetAllMatching((".*\"SERVING\" to \"NOT_SERVING\""))
test.Assert(t, len(serving) == 2, "expected two serving log lines")
test.Assert(t, len(notServing) == 1, "expected one not serving log line")
test.Assert(t, len(serving) == 4, "expected four serving log lines")
test.Assert(t, len(notServing) == 2, "expected two not serving log lines")
}

View File

@@ -1,6 +1,6 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.34.1
// protoc-gen-go v1.36.5
// protoc v3.20.1
// source: interceptors_test.proto
@@ -12,6 +12,7 @@ import (
durationpb "google.golang.org/protobuf/types/known/durationpb"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
@@ -22,21 +23,18 @@ )
)
type Time struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
Duration *durationpb.Duration `protobuf:"bytes,2,opt,name=duration,proto3" json:"duration,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *Time) Reset() {
*x = Time{}
if protoimpl.UnsafeEnabled {
mi := &file_interceptors_test_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Time) String() string {
return protoimpl.X.MessageStringOf(x)
@@ -46,7 +44,7 @@ func (*Time) ProtoMessage() {}
func (x *Time) ProtoReflect() protoreflect.Message {
mi := &file_interceptors_test_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -70,7 +68,7 @@ func (x *Time) GetDuration() *durationpb.Duration {
var File_interceptors_test_proto protoreflect.FileDescriptor
var file_interceptors_test_proto_rawDesc = []byte{
var file_interceptors_test_proto_rawDesc = string([]byte{
0x0a, 0x17, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x63, 0x65, 0x70, 0x74, 0x6f, 0x72, 0x73, 0x5f, 0x74,
0x65, 0x73, 0x74, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64, 0x75, 0x72, 0x61, 0x74,
@@ -85,22 +83,22 @@ var file_interceptors_test_proto_rawDesc = []byte{
0x2f, 0x6c, 0x65, 0x74, 0x73, 0x65, 0x6e, 0x63, 0x72, 0x79, 0x70, 0x74, 0x2f, 0x62, 0x6f, 0x75,
0x6c, 0x64, 0x65, 0x72, 0x2f, 0x67, 0x72, 0x70, 0x63, 0x2f, 0x74, 0x65, 0x73, 0x74, 0x5f, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
})
var (
file_interceptors_test_proto_rawDescOnce sync.Once
file_interceptors_test_proto_rawDescData = file_interceptors_test_proto_rawDesc
file_interceptors_test_proto_rawDescData []byte
)
func file_interceptors_test_proto_rawDescGZIP() []byte {
file_interceptors_test_proto_rawDescOnce.Do(func() {
file_interceptors_test_proto_rawDescData = protoimpl.X.CompressGZIP(file_interceptors_test_proto_rawDescData)
file_interceptors_test_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_interceptors_test_proto_rawDesc), len(file_interceptors_test_proto_rawDesc)))
})
return file_interceptors_test_proto_rawDescData
}
var file_interceptors_test_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
var file_interceptors_test_proto_goTypes = []interface{}{
var file_interceptors_test_proto_goTypes = []any{
(*Time)(nil), // 0: Time
(*durationpb.Duration)(nil), // 1: google.protobuf.Duration
}
@@ -120,25 +118,11 @@ func file_interceptors_test_proto_init() {
if File_interceptors_test_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_interceptors_test_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Time); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_interceptors_test_proto_rawDesc,
RawDescriptor: unsafe.Slice(unsafe.StringData(file_interceptors_test_proto_rawDesc), len(file_interceptors_test_proto_rawDesc)),
NumEnums: 0,
NumMessages: 1,
NumExtensions: 0,
@@ -149,7 +133,6 @@ func file_interceptors_test_proto_init() {
MessageInfos: file_interceptors_test_proto_msgTypes,
}.Build()
File_interceptors_test_proto = out.File
file_interceptors_test_proto_rawDesc = nil
file_interceptors_test_proto_goTypes = nil
file_interceptors_test_proto_depIdxs = nil
}


@@ -1,6 +1,6 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.3.0
// - protoc-gen-go-grpc v1.5.1
// - protoc v3.20.1
// source: interceptors_test.proto
@@ -15,8 +15,8 @@ import (
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.62.0 or later.
const _ = grpc.SupportPackageIsVersion8
// Requires gRPC-Go v1.64.0 or later.
const _ = grpc.SupportPackageIsVersion9
const (
Chiller_Chill_FullMethodName = "/Chiller/Chill"
@@ -50,21 +50,25 @@ func (c *chillerClient) Chill(ctx context.Context, in *Time, opts ...grpc.CallOp
// ChillerServer is the server API for Chiller service.
// All implementations must embed UnimplementedChillerServer
// for forward compatibility
// for forward compatibility.
type ChillerServer interface {
// Sleep for the given amount of time, and return the amount of time slept.
Chill(context.Context, *Time) (*Time, error)
mustEmbedUnimplementedChillerServer()
}
// UnimplementedChillerServer must be embedded to have forward compatible implementations.
type UnimplementedChillerServer struct {
}
// UnimplementedChillerServer must be embedded to have
// forward compatible implementations.
//
// NOTE: this should be embedded by value instead of pointer to avoid a nil
// pointer dereference when methods are called.
type UnimplementedChillerServer struct{}
func (UnimplementedChillerServer) Chill(context.Context, *Time) (*Time, error) {
return nil, status.Errorf(codes.Unimplemented, "method Chill not implemented")
}
func (UnimplementedChillerServer) mustEmbedUnimplementedChillerServer() {}
func (UnimplementedChillerServer) testEmbeddedByValue() {}
// UnsafeChillerServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to ChillerServer will
@@ -74,6 +78,13 @@ type UnsafeChillerServer interface {
}
func RegisterChillerServer(s grpc.ServiceRegistrar, srv ChillerServer) {
// If the following call panics, it indicates UnimplementedChillerServer was
// embedded by pointer and is nil. This will cause panics if an
// unimplemented method is ever invoked, so we test this at initialization
// time to prevent it from happening at runtime later due to I/O.
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
t.testEmbeddedByValue()
}
s.RegisterService(&Chiller_ServiceDesc, srv)
}


@@ -0,0 +1,26 @@
Address Block,Name,RFC,Allocation Date,Termination Date,Source,Destination,Forwardable,Globally Reachable,Reserved-by-Protocol
0.0.0.0/8,"""This network""","[RFC791], Section 3.2",1981-09,N/A,True,False,False,False,True
0.0.0.0/32,"""This host on this network""","[RFC1122], Section 3.2.1.3",1981-09,N/A,True,False,False,False,True
10.0.0.0/8,Private-Use,[RFC1918],1996-02,N/A,True,True,True,False,False
100.64.0.0/10,Shared Address Space,[RFC6598],2012-04,N/A,True,True,True,False,False
127.0.0.0/8,Loopback,"[RFC1122], Section 3.2.1.3",1981-09,N/A,False [1],False [1],False [1],False [1],True
169.254.0.0/16,Link Local,[RFC3927],2005-05,N/A,True,True,False,False,True
172.16.0.0/12,Private-Use,[RFC1918],1996-02,N/A,True,True,True,False,False
192.0.0.0/24 [2],IETF Protocol Assignments,"[RFC6890], Section 2.1",2010-01,N/A,False,False,False,False,False
192.0.0.0/29,IPv4 Service Continuity Prefix,[RFC7335],2011-06,N/A,True,True,True,False,False
192.0.0.8/32,IPv4 dummy address,[RFC7600],2015-03,N/A,True,False,False,False,False
192.0.0.9/32,Port Control Protocol Anycast,[RFC7723],2015-10,N/A,True,True,True,True,False
192.0.0.10/32,Traversal Using Relays around NAT Anycast,[RFC8155],2017-02,N/A,True,True,True,True,False
"192.0.0.170/32, 192.0.0.171/32",NAT64/DNS64 Discovery,"[RFC8880][RFC7050], Section 2.2",2013-02,N/A,False,False,False,False,True
192.0.2.0/24,Documentation (TEST-NET-1),[RFC5737],2010-01,N/A,False,False,False,False,False
192.31.196.0/24,AS112-v4,[RFC7535],2014-12,N/A,True,True,True,True,False
192.52.193.0/24,AMT,[RFC7450],2014-12,N/A,True,True,True,True,False
192.88.99.0/24,Deprecated (6to4 Relay Anycast),[RFC7526],2001-06,2015-03,,,,,
192.168.0.0/16,Private-Use,[RFC1918],1996-02,N/A,True,True,True,False,False
192.175.48.0/24,Direct Delegation AS112 Service,[RFC7534],1996-01,N/A,True,True,True,True,False
198.18.0.0/15,Benchmarking,[RFC2544],1999-03,N/A,True,True,True,False,False
198.51.100.0/24,Documentation (TEST-NET-2),[RFC5737],2010-01,N/A,False,False,False,False,False
203.0.113.0/24,Documentation (TEST-NET-3),[RFC5737],2010-01,N/A,False,False,False,False,False
240.0.0.0/4,Reserved,"[RFC1112], Section 4",1989-08,N/A,False,False,False,False,True
255.255.255.255/32,Limited Broadcast,"[RFC8190]
[RFC919], Section 7",1984-10,N/A,False,True,False,False,True


@@ -0,0 +1,28 @@
Address Block,Name,RFC,Allocation Date,Termination Date,Source,Destination,Forwardable,Globally Reachable,Reserved-by-Protocol
::1/128,Loopback Address,[RFC4291],2006-02,N/A,False,False,False,False,True
::/128,Unspecified Address,[RFC4291],2006-02,N/A,True,False,False,False,True
::ffff:0:0/96,IPv4-mapped Address,[RFC4291],2006-02,N/A,False,False,False,False,True
64:ff9b::/96,IPv4-IPv6 Translat.,[RFC6052],2010-10,N/A,True,True,True,True,False
64:ff9b:1::/48,IPv4-IPv6 Translat.,[RFC8215],2017-06,N/A,True,True,True,False,False
100::/64,Discard-Only Address Block,[RFC6666],2012-06,N/A,True,True,True,False,False
100:0:0:1::/64,Dummy IPv6 Prefix,[RFC9780],2025-04,N/A,True,False,False,False,False
2001::/23,IETF Protocol Assignments,[RFC2928],2000-09,N/A,False [1],False [1],False [1],False [1],False
2001::/32,TEREDO,"[RFC4380]
[RFC8190]",2006-01,N/A,True,True,True,N/A [2],False
2001:1::1/128,Port Control Protocol Anycast,[RFC7723],2015-10,N/A,True,True,True,True,False
2001:1::2/128,Traversal Using Relays around NAT Anycast,[RFC8155],2017-02,N/A,True,True,True,True,False
2001:1::3/128,DNS-SD Service Registration Protocol Anycast,[RFC9665],2024-04,N/A,True,True,True,True,False
2001:2::/48,Benchmarking,[RFC5180][RFC Errata 1752],2008-04,N/A,True,True,True,False,False
2001:3::/32,AMT,[RFC7450],2014-12,N/A,True,True,True,True,False
2001:4:112::/48,AS112-v6,[RFC7535],2014-12,N/A,True,True,True,True,False
2001:10::/28,Deprecated (previously ORCHID),[RFC4843],2007-03,2014-03,,,,,
2001:20::/28,ORCHIDv2,[RFC7343],2014-07,N/A,True,True,True,True,False
2001:30::/28,Drone Remote ID Protocol Entity Tags (DETs) Prefix,[RFC9374],2022-12,N/A,True,True,True,True,False
2001:db8::/32,Documentation,[RFC3849],2004-07,N/A,False,False,False,False,False
2002::/16 [3],6to4,[RFC3056],2001-02,N/A,True,True,True,N/A [3],False
2620:4f:8000::/48,Direct Delegation AS112 Service,[RFC7534],2011-05,N/A,True,True,True,True,False
3fff::/20,Documentation,[RFC9637],2024-07,N/A,False,False,False,False,False
5f00::/16,Segment Routing (SRv6) SIDs,[RFC9602],2024-04,N/A,True,True,True,False,False
fc00::/7,Unique-Local,"[RFC4193]
[RFC8190]",2005-10,N/A,True,True,True,False [4],False
fe80::/10,Link-Local Unicast,[RFC4291],2006-02,N/A,True,True,False,False,True

iana/ip.go Normal file

@@ -0,0 +1,179 @@
package iana
import (
"bytes"
"encoding/csv"
"errors"
"fmt"
"io"
"net/netip"
"regexp"
"slices"
"strings"
_ "embed"
)
type reservedPrefix struct {
// addressFamily is "IPv4" or "IPv6".
addressFamily string
// The other fields are defined in:
// https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
// https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml
addressBlock netip.Prefix
name string
rfc string
// The BRs' requirement that we not issue for Reserved IP Addresses only
// cares about presence in one of these registries, not any of the other
// metadata fields tracked by the registries. Therefore, we ignore the
// Allocation Date, Termination Date, Source, Destination, Forwardable,
// Globally Reachable, and Reserved By Protocol columns.
}
var (
reservedPrefixes []reservedPrefix
// https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
//go:embed data/iana-ipv4-special-registry-1.csv
ipv4Registry []byte
// https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml
//go:embed data/iana-ipv6-special-registry-1.csv
ipv6Registry []byte
)
// init parses and loads the embedded IANA special-purpose address registry CSV
// files for all address families, panicking if any one fails.
func init() {
ipv4Prefixes, err := parseReservedPrefixFile(ipv4Registry, "IPv4")
if err != nil {
panic(err)
}
ipv6Prefixes, err := parseReservedPrefixFile(ipv6Registry, "IPv6")
if err != nil {
panic(err)
}
// Add multicast addresses, which aren't in the IANA registries.
//
// TODO(#8237): Move these entries to IP address blocklists once they're
// implemented.
additionalPrefixes := []reservedPrefix{
{
addressFamily: "IPv4",
addressBlock: netip.MustParsePrefix("224.0.0.0/4"),
name: "Multicast Addresses",
rfc: "[RFC3171]",
},
{
addressFamily: "IPv6",
addressBlock: netip.MustParsePrefix("ff00::/8"),
name: "Multicast Addresses",
rfc: "[RFC4291]",
},
}
reservedPrefixes = slices.Concat(ipv4Prefixes, ipv6Prefixes, additionalPrefixes)
// Sort the list of reserved prefixes in descending order of prefix size, so
// that checks will match the most-specific reserved prefix first.
slices.SortFunc(reservedPrefixes, func(a, b reservedPrefix) int {
if a.addressBlock.Bits() == b.addressBlock.Bits() {
return 0
}
if a.addressBlock.Bits() > b.addressBlock.Bits() {
return -1
}
return 1
})
}
// Define regexps we'll use to clean up poorly formatted registry entries.
var (
// 2+ sequential whitespace characters. The csv package takes care of
// newlines automatically.
ianaWhitespacesRE = regexp.MustCompile(`\s{2,}`)
// Footnotes at the end, like `[2]`.
ianaFootnotesRE = regexp.MustCompile(`\[\d+\]$`)
)
// parseReservedPrefixFile parses and returns the IANA special-purpose address
// registry CSV data for a single address family, or returns an error if parsing
// fails.
func parseReservedPrefixFile(registryData []byte, addressFamily string) ([]reservedPrefix, error) {
if addressFamily != "IPv4" && addressFamily != "IPv6" {
return nil, fmt.Errorf("failed to parse reserved address registry: invalid address family %q", addressFamily)
}
if registryData == nil {
return nil, fmt.Errorf("failed to parse reserved %s address registry: empty", addressFamily)
}
reader := csv.NewReader(bytes.NewReader(registryData))
// Parse the header row.
record, err := reader.Read()
if err != nil {
return nil, fmt.Errorf("failed to parse reserved %s address registry header: %w", addressFamily, err)
}
if record[0] != "Address Block" || record[1] != "Name" || record[2] != "RFC" {
return nil, fmt.Errorf("failed to parse reserved %s address registry header: must begin with \"Address Block\", \"Name\" and \"RFC\"", addressFamily)
}
// Parse the records.
var prefixes []reservedPrefix
for {
row, err := reader.Read()
if errors.Is(err, io.EOF) {
// Finished parsing the file.
if len(prefixes) < 1 {
return nil, fmt.Errorf("failed to parse reserved %s address registry: no rows after header", addressFamily)
}
break
} else if err != nil {
return nil, err
} else if len(row) < 3 {
return nil, fmt.Errorf("failed to parse reserved %s address registry: incomplete row", addressFamily)
}
// Remove any footnotes, then handle each comma-separated prefix.
for _, prefixStr := range strings.Split(ianaFootnotesRE.ReplaceAllLiteralString(row[0], ""), ",") {
prefix, err := netip.ParsePrefix(strings.TrimSpace(prefixStr))
if err != nil {
return nil, fmt.Errorf("failed to parse reserved %s address registry: couldn't parse entry %q as an IP address prefix: %s", addressFamily, prefixStr, err)
}
prefixes = append(prefixes, reservedPrefix{
addressFamily: addressFamily,
addressBlock: prefix,
name: row[1],
// Replace any whitespace sequences with a single space.
rfc: ianaWhitespacesRE.ReplaceAllLiteralString(row[2], " "),
})
}
}
return prefixes, nil
}
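
Two quirks of the registry CSVs handled above are quoted cells with embedded newlines (which `encoding/csv` preserves transparently) and Address Block cells listing several comma-separated prefixes. A self-contained sketch of both, assuming a hypothetical `parseRegistryCSV` helper rather than the package's actual parser:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// parseRegistryCSV reads registry-style CSV, where quoted cells may contain
// embedded newlines and the Address Block cell may list several
// comma-separated prefixes. It returns one (prefix, name) pair per prefix.
func parseRegistryCSV(data string) ([][2]string, error) {
	rows, err := csv.NewReader(strings.NewReader(data)).ReadAll()
	if err != nil {
		return nil, err
	}
	var out [][2]string
	for _, row := range rows[1:] { // skip the header row
		for _, p := range strings.Split(row[0], ",") {
			out = append(out, [2]string{strings.TrimSpace(p), row[1]})
		}
	}
	return out, nil
}

func main() {
	// Mirrors two real rows: a comma-separated Address Block, and an RFC
	// cell with an embedded newline inside quotes.
	data := "Address Block,Name,RFC\n" +
		"\"192.0.0.170/32, 192.0.0.171/32\",NAT64/DNS64 Discovery,\"[RFC8880][RFC7050], Section 2.2\"\n" +
		"255.255.255.255/32,Limited Broadcast,\"[RFC8190]\n[RFC919], Section 7\"\n"
	pairs, err := parseRegistryCSV(data)
	if err != nil {
		panic(err)
	}
	for _, pr := range pairs {
		fmt.Println(pr[0], "->", pr[1])
	}
}
```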
// IsReservedAddr returns an error if an IP address is part of a reserved range.
func IsReservedAddr(ip netip.Addr) error {
for _, rpx := range reservedPrefixes {
if rpx.addressBlock.Contains(ip) {
return fmt.Errorf("IP address is in a reserved address block: %s: %s", rpx.rfc, rpx.name)
}
}
return nil
}
// IsReservedPrefix returns an error if an IP address prefix overlaps with a
// reserved range.
func IsReservedPrefix(prefix netip.Prefix) error {
for _, rpx := range reservedPrefixes {
if rpx.addressBlock.Overlaps(prefix) {
return fmt.Errorf("IP address is in a reserved address block: %s: %s", rpx.rfc, rpx.name)
}
}
return nil
}

iana/ip_test.go Normal file

@@ -0,0 +1,96 @@
package iana
import (
"net/netip"
"strings"
"testing"
)
func TestIsReservedAddr(t *testing.T) {
t.Parallel()
cases := []struct {
ip string
want string
}{
{"127.0.0.1", "Loopback"}, // second-lowest IP in a reserved /8, common mistaken request
{"128.0.0.1", ""}, // second-lowest IP just above a reserved /8
{"192.168.254.254", "Private-Use"}, // highest IP in a reserved /16
{"192.169.255.255", ""}, // highest IP in the /16 above a reserved /16
{"::", "Unspecified Address"}, // lowest possible IPv6 address, reserved, possible parsing edge case
{"::1", "Loopback Address"}, // reserved, common mistaken request
{"::2", ""}, // surprisingly unreserved
{"fe80::1", "Link-Local Unicast"}, // second-lowest IP in a reserved /10
{"febf:ffff:ffff:ffff:ffff:ffff:ffff:ffff", "Link-Local Unicast"}, // highest IP in a reserved /10
{"fec0::1", ""}, // second-lowest IP just above a reserved /10
{"192.0.0.170", "NAT64/DNS64 Discovery"}, // first of two reserved IPs that are comma-split in IANA's CSV; also a more-specific of a larger reserved block that comes first
{"192.0.0.171", "NAT64/DNS64 Discovery"}, // second of two reserved IPs that are comma-split in IANA's CSV; also a more-specific of a larger reserved block that comes first
{"2001:1::1", "Port Control Protocol Anycast"}, // reserved IP that comes after a line with a line break in IANA's CSV; also a more-specific of a larger reserved block that comes first
{"2002::", "6to4"}, // lowest IP in a reserved /16 that has a footnote in IANA's CSV
{"2002:ffff:ffff:ffff:ffff:ffff:ffff:ffff", "6to4"}, // highest IP in a reserved /16 that has a footnote in IANA's CSV
{"0100::", "Discard-Only Address Block"}, // part of a reserved block in a non-canonical IPv6 format
{"0100::0000:ffff:ffff:ffff:ffff", "Discard-Only Address Block"}, // part of a reserved block in a non-canonical IPv6 format
{"0100::0002:0000:0000:0000:0000", ""}, // non-reserved but in a non-canonical IPv6 format
// TODO(#8237): Move these entries to IP address blocklists once they're
// implemented.
{"ff00::1", "Multicast Addresses"}, // second-lowest IP in a reserved /8 we hardcode
{"ff10::1", "Multicast Addresses"}, // in the middle of a reserved /8 we hardcode
{"ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff", "Multicast Addresses"}, // highest IP in a reserved /8 we hardcode
}
for _, tc := range cases {
t.Run(tc.ip, func(t *testing.T) {
t.Parallel()
err := IsReservedAddr(netip.MustParseAddr(tc.ip))
if err == nil && tc.want != "" {
t.Errorf("Got success, wanted error for %#v", tc.ip)
}
if err != nil && !strings.Contains(err.Error(), tc.want) {
t.Errorf("%#v: got %q, want %q", tc.ip, err.Error(), tc.want)
}
})
}
}
func TestIsReservedPrefix(t *testing.T) {
t.Parallel()
cases := []struct {
cidr string
want bool
}{
{"172.16.0.0/12", true},
{"172.16.0.0/32", true},
{"172.16.0.1/32", true},
{"172.31.255.0/24", true},
{"172.31.255.255/24", true},
{"172.31.255.255/32", true},
{"172.32.0.0/24", false},
{"172.32.0.1/32", false},
{"100::/64", true},
{"100::/128", true},
{"100::1/128", true},
{"100::1:ffff:ffff:ffff:ffff/128", true},
{"100:0:0:2::/64", false},
{"100:0:0:2::1/128", false},
}
for _, tc := range cases {
t.Run(tc.cidr, func(t *testing.T) {
t.Parallel()
err := IsReservedPrefix(netip.MustParsePrefix(tc.cidr))
if err != nil && !tc.want {
t.Error(err)
}
if err == nil && tc.want {
t.Errorf("Wanted error for %#v, got success", tc.cidr)
}
})
}
}

Some files were not shown because too many files have changed in this diff.