Compare commits

...

10 Commits

Author SHA1 Message Date
Shiloh Heurich 473b4059c4
feat: Add core definitions for dns-account-01 (#8140)
## Summary

This PR introduces the foundational components required to support
the `dns-account-01` challenge type, as specified in draft-ietf-acme-dns-account-label-00:
https://datatracker.ietf.org/doc/draft-ietf-acme-dns-account-label/.

It focuses only on core definitions and SA support. PA/VA/RA logic will be in
a follow-up change.

Core Definitions & Logic:
- //core/objects.go: Added `ChallengeTypeDNSAccount01` constant and
  updated validation methods
- //core/challenges.go: Added `DNSAccountChallenge01` constructor
  and factory support

Storage Authority (SA) Support:
- //sa/model.go: Added `dns-account-01` to challenge type mappings

Testing:
- //core/*_test.go: Basic definition and validation tests
- //sa/sa_test.go: Database round-trip tests for `dns-account-01`
  challenges

Dependencies:
- Updated github.com/eggsampler/acme/v3 to release version v3.6.2
2025-07-29 09:27:04 -07:00
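To make the new definitions concrete, here is a minimal sketch (not part of this PR) of how a caller would exercise the constant, constructor, and factory added in the diffs below; it assumes the `github.com/letsencrypt/boulder/core` import path and the names shown in those diffs.

```go
package main

import (
	"fmt"

	"github.com/letsencrypt/boulder/core"
)

func main() {
	token := core.NewToken()

	// The factory now recognizes dns-account-01 alongside the existing types.
	chall, err := core.NewChallenge(core.ChallengeTypeDNSAccount01, token)
	if err != nil {
		panic(err)
	}
	fmt.Println(chall.Type)                               // dns-account-01
	fmt.Println(core.ChallengeTypeDNSAccount01.IsValid()) // true
}
```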
Aaron Gable 440c6957f9
CA: Truncate notBefore and notAfter to second-level precision (#8319)
When generating the validity period of a to-be-issued certificate,
truncate the notBefore timestamp to second-level precision, trimming off
any nanoseconds which won't be represented in the final certificate. Do
the same for the notAfter, although this should be a no-op since only
whole numbers of seconds are used to compute it from the notBefore.

It's possible that this could cause some of the maxBackdate calculations
to fail, because truncation can cause the notBefore timestamp to move up
to (nearly) 1 second earlier. However, this only becomes a concern in
practice if maxBackdate is set to 10 seconds or less.

This results in cleaner logs, since Go only prints the fractional
seconds portion of a timestamp if it is non-zero:
https://go.dev/play/p/iAeSX3VMrJD

Fixes https://github.com/letsencrypt/boulder/issues/8318
2025-07-28 15:09:55 -07:00
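A small, self-contained Go sketch of the behavior this change relies on: `time.Truncate(time.Second)` drops the nanoseconds, and Go's default formatting then omits the fractional-seconds portion entirely (the effect shown in the playground link above).

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A timestamp with non-zero nanoseconds prints with a fractional part.
	t := time.Date(2025, 7, 28, 15, 9, 55, 123456789, time.UTC)
	fmt.Println(t) // 2025-07-28 15:09:55.123456789 +0000 UTC

	// Truncating to whole seconds, as the CA now does for notBefore and
	// notAfter, removes the fraction and keeps the logged value clean.
	fmt.Println(t.Truncate(time.Second)) // 2025-07-28 15:09:55 +0000 UTC
}
```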
Samantha Frank 80c75ab435
docker: Update CI mariadb from 10.6.22 to 10.11.13 (#8321)
Closes #8307
2025-07-28 17:48:23 -04:00
Jacob Hoffman-Andrews 85d1e3cf5e
sa: use internal fqdnSet model instead of core.FQDNSet (#8314)
Fixes https://github.com/letsencrypt/boulder/issues/8112
2025-07-23 16:49:04 -07:00
James Renken 04ae9ebcda
bad-key-revoker: Add delay to mitigate race condition (#8301)
Add a `MaxExpectedReplicationLag` parameter to `bad-key-revoker`. Wait
that interval after a key is added to the blockedKeys table before searching
for certificates to revoke.

The interval is set to only 100ms in both `test/config` and
`test/config-next` so that integration tests don't require long sleeps.
The default value within BKR is, and the production value should be,
higher.

Part of #5686
2025-07-21 14:18:19 -07:00
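The gist of the mitigation is a cutoff on the `added` timestamp of each blockedKeys row (see the SQL changes below). This illustrative Go sketch, with assumed helper names that are not part of bad-key-revoker itself, expresses the same predicate:

```go
package main

import (
	"fmt"
	"time"
)

// eligibleForCheck mirrors the new SQL predicate: a blocked key is only
// picked up once its `added` timestamp is older than now minus
// MaxExpectedReplicationLag, so replicas have had time to catch up.
func eligibleForCheck(added, now time.Time, maxExpectedReplicationLag time.Duration) bool {
	return added.Before(now.Add(-maxExpectedReplicationLag))
}

func main() {
	now := time.Now()
	lag := 22 * time.Second // default used when the config value is unset

	fmt.Println(eligibleForCheck(now.Add(-5*time.Second), now, lag))  // false: too fresh
	fmt.Println(eligibleForCheck(now.Add(-30*time.Second), now, lag)) // true: past the lag window
}
```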
dependabot[bot] a3c1e62049
build(deps): bump github.com/redis/go-redis/v9 from 9.7.3 to 9.10.0 (#8313)
Bumps github.com/redis/go-redis/v9 from 9.7.3 to 9.10.0

Commits: https://github.com/redis/go-redis/compare/v9.7.3...v9.10.0
Latest changelog: https://github.com/redis/go-redis/releases/tag/v9.10.0
2025-07-18 15:17:22 -07:00
Aaron Gable cd59eed63d
Ceremony: use pre-existing SKID during cross-signing (#8311)
When cross-signing a pre-existing root, the cross-sign's Subject Key
Identifier field needs to exactly match the existing cert's Subject Key
Identifier. Rather than recompute it, copy it directly from the
"to-be-cross-signed" cert.
2025-07-18 13:08:32 -07:00
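A schematic sketch of the change, using assumed helper names rather than the ceremony tool's actual API: when building the cross-sign template, the SKID is copied byte-for-byte from the to-be-cross-signed certificate instead of being recomputed from the public key.

```go
package main

import (
	"crypto/x509"
	"fmt"
)

// applyCrossSignSKID copies the Subject Key Identifier so the cross-sign
// matches the pre-existing certificate exactly, however that SKID was
// originally computed.
func applyCrossSignSKID(template, toBeCrossSigned *x509.Certificate) {
	template.SubjectKeyId = toBeCrossSigned.SubjectKeyId
}

func main() {
	tbcs := &x509.Certificate{SubjectKeyId: []byte{0xde, 0xad, 0xbe, 0xef}}
	tmpl := &x509.Certificate{}
	applyCrossSignSKID(tmpl, tbcs)
	fmt.Printf("%x\n", tmpl.SubjectKeyId) // deadbeef
}
```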
Aaron Gable 5a5ae229a0
Ceremony: allow shortening of Subject Organization Name (#8310)
In general, the ceremony tool requires that any Unrestricted Cross Sign
(see Baseline Requirements, Section 7.1.2.2.3) must have a Subject
Organization Name which is identical to the issuer's Organization Name.
Allow a special case whereby a cert (such as ISRG Root X1) which has
Subject Organization Name "Internet Security Research Group" can
cross-certify a cert (such as the upcoming Root YR) which has the
shorter string "ISRG" for that same field.

---

> [!WARNING]
> ~~Do not merge before
https://github.com/letsencrypt/boulder/pull/8309~~
2025-07-17 17:36:05 -07:00
Aaron Gable d5bb88b975
Ceremony: remove support for delegated CRL and OCSP signers (#8309)
Delegated CRL Signers are forbidden by the Baseline Requirements, and we
haven't used Delegated OCSP Responders since 2020. This code is dead,
and creates unnecessary complexity, so remove it.

At the same time, improve our README to reflect these changes and
resolve several formatting lint warnings.
2025-07-17 16:28:26 -07:00
Aaron Gable b9dbcdbba2
Dependabot: add a 30-day cooldown between go dependency updates (#8312)
Documentation for the "cooldown" config parameter is here:
https://docs.github.com/en/code-security/dependabot/working-with-dependabot/dependabot-options-reference#cooldown-
2025-07-17 14:57:58 -07:00
59 changed files with 2626 additions and 1126 deletions

View File

@ -14,6 +14,8 @@ updates:
schedule:
interval: "weekly"
day: "wednesday"
cooldown:
default-days: 30
- package-ecosystem: "github-actions"
directory: "/"
schedule:

View File

@ -46,16 +46,17 @@ type revoker interface {
}
type badKeyRevoker struct {
dbMap *db.WrappedMap
maxRevocations int
serialBatchSize int
raClient revoker
logger blog.Logger
clk clock.Clock
backoffIntervalBase time.Duration
backoffIntervalMax time.Duration
backoffFactor float64
backoffTicker int
dbMap *db.WrappedMap
maxRevocations int
serialBatchSize int
raClient revoker
logger blog.Logger
clk clock.Clock
backoffIntervalBase time.Duration
backoffIntervalMax time.Duration
backoffFactor float64
backoffTicker int
maxExpectedReplicationLag time.Duration
}
// uncheckedBlockedKey represents a row in the blockedKeys table
@ -76,8 +77,10 @@ func (bkr *badKeyRevoker) countUncheckedKeys(ctx context.Context) (int, error) {
&count,
`SELECT COUNT(*)
FROM (SELECT 1 FROM blockedKeys
WHERE extantCertificatesChecked = false
WHERE extantCertificatesChecked = false AND added < ? - INTERVAL ? SECOND
LIMIT ?) AS a`,
bkr.clk.Now(),
bkr.maxExpectedReplicationLag.Seconds(),
blockedKeysGaugeLimit,
)
return count, err
@ -90,8 +93,10 @@ func (bkr *badKeyRevoker) selectUncheckedKey(ctx context.Context) (uncheckedBloc
&row,
`SELECT keyHash, revokedBy
FROM blockedKeys
WHERE extantCertificatesChecked = false
WHERE extantCertificatesChecked = false AND added < ? - INTERVAL ? SECOND
LIMIT 1`,
bkr.clk.Now(),
bkr.maxExpectedReplicationLag.Seconds(),
)
return row, err
}
@ -275,6 +280,7 @@ type Config struct {
// is higher than MaximumRevocations bad-key-revoker will error out and refuse to
// progress until this is addressed.
MaximumRevocations int `validate:"gte=0"`
// FindCertificatesBatchSize specifies the maximum number of serials to select from the
// keyHashToSerial table at once
FindCertificatesBatchSize int `validate:"required"`
@ -288,6 +294,13 @@ type Config struct {
// algorithm will wait before retrying in the event of error
// or no work to do.
BackoffIntervalMax config.Duration `validate:"-"`
// MaxExpectedReplicationLag specifies the minimum duration
// bad-key-revoker should wait before searching for certificates
// matching a blockedKeys row. This should be just slightly greater than
// the database's maximum replication lag, and always well under 24
// hours.
MaxExpectedReplicationLag config.Duration `validate:"-"`
}
Syslog cmd.SyslogConfig
@ -330,15 +343,16 @@ func main() {
rac := rapb.NewRegistrationAuthorityClient(conn)
bkr := &badKeyRevoker{
dbMap: dbMap,
maxRevocations: config.BadKeyRevoker.MaximumRevocations,
serialBatchSize: config.BadKeyRevoker.FindCertificatesBatchSize,
raClient: rac,
logger: logger,
clk: clk,
backoffIntervalMax: config.BadKeyRevoker.BackoffIntervalMax.Duration,
backoffIntervalBase: config.BadKeyRevoker.Interval.Duration,
backoffFactor: 1.3,
dbMap: dbMap,
maxRevocations: config.BadKeyRevoker.MaximumRevocations,
serialBatchSize: config.BadKeyRevoker.FindCertificatesBatchSize,
raClient: rac,
logger: logger,
clk: clk,
backoffIntervalMax: config.BadKeyRevoker.BackoffIntervalMax.Duration,
backoffIntervalBase: config.BadKeyRevoker.Interval.Duration,
backoffFactor: 1.3,
maxExpectedReplicationLag: config.BadKeyRevoker.MaxExpectedReplicationLag.Duration,
}
// If `BackoffIntervalMax` was not set via the config, set it to 60
@ -354,6 +368,14 @@ func main() {
bkr.backoffIntervalBase = time.Second
}
// If `MaxExpectedReplicationLag` was not set via the config, then set
// `bkr.maxExpectedReplicationLag` to a default 22 seconds. This is based on
// ProxySQL's max_replication_lag for bad-key-revoker (10s), times two, plus
// two seconds.
if bkr.maxExpectedReplicationLag == 0 {
bkr.maxExpectedReplicationLag = time.Second * 22
}
// Run bad-key-revoker in a loop. Backoff if no work or errors.
for {
noWork, err := bkr.invoke(context.Background())

View File

@ -45,6 +45,12 @@ func insertBlockedRow(t *testing.T, dbMap *db.WrappedMap, fc clock.Clock, hash [
test.AssertNotError(t, err, "failed to add test row")
}
func fcBeforeRepLag(clk clock.Clock, bkr *badKeyRevoker) clock.FakeClock {
fc := clock.NewFake()
fc.Set(clk.Now().Add(-bkr.maxExpectedReplicationLag - time.Second))
return fc
}
func TestSelectUncheckedRows(t *testing.T) {
ctx := context.Background()
@ -55,12 +61,15 @@ func TestSelectUncheckedRows(t *testing.T) {
fc := clock.NewFake()
bkr := &badKeyRevoker{
dbMap: dbMap,
logger: blog.NewMock(),
clk: fc,
dbMap: dbMap,
logger: blog.NewMock(),
clk: fc,
maxExpectedReplicationLag: time.Second * 22,
}
hashA, hashB, hashC := randHash(t), randHash(t), randHash(t)
// insert a blocked key that's marked as already checked
insertBlockedRow(t, dbMap, fc, hashA, 1, true)
count, err := bkr.countUncheckedKeys(ctx)
test.AssertNotError(t, err, "countUncheckedKeys failed")
@ -68,11 +77,14 @@ func TestSelectUncheckedRows(t *testing.T) {
_, err = bkr.selectUncheckedKey(ctx)
test.AssertError(t, err, "selectUncheckedKey didn't fail with no rows to process")
test.Assert(t, db.IsNoRows(err), "returned error is not sql.ErrNoRows")
insertBlockedRow(t, dbMap, fc, hashB, 1, false)
// insert a blocked key that's due to be checked
insertBlockedRow(t, dbMap, fcBeforeRepLag(fc, bkr), hashB, 1, false)
// insert a freshly blocked key, so it's not yet due to be checked
insertBlockedRow(t, dbMap, fc, hashC, 1, false)
count, err = bkr.countUncheckedKeys(ctx)
test.AssertNotError(t, err, "countUncheckedKeys failed")
test.AssertEquals(t, count, 2)
test.AssertEquals(t, count, 1)
row, err := bkr.selectUncheckedKey(ctx)
test.AssertNotError(t, err, "selectUncheckKey failed")
test.AssertByteEquals(t, row.KeyHash, hashB)
@ -191,7 +203,13 @@ func TestFindUnrevokedNoRows(t *testing.T) {
)
test.AssertNotError(t, err, "failed to insert test keyHashToSerial row")
bkr := &badKeyRevoker{dbMap: dbMap, serialBatchSize: 1, maxRevocations: 10, clk: fc}
bkr := &badKeyRevoker{
dbMap: dbMap,
serialBatchSize: 1,
maxRevocations: 10,
clk: fc,
maxExpectedReplicationLag: time.Second * 22,
}
_, err = bkr.findUnrevoked(ctx, uncheckedBlockedKey{KeyHash: hashA})
test.Assert(t, db.IsNoRows(err), "expected NoRows error")
}
@ -207,7 +225,13 @@ func TestFindUnrevoked(t *testing.T) {
regID := insertRegistration(t, dbMap, fc)
bkr := &badKeyRevoker{dbMap: dbMap, serialBatchSize: 1, maxRevocations: 10, clk: fc}
bkr := &badKeyRevoker{
dbMap: dbMap,
serialBatchSize: 1,
maxRevocations: 10,
clk: fc,
maxExpectedReplicationLag: time.Second * 22,
}
hashA := randHash(t)
// insert valid, unexpired
@ -251,7 +275,11 @@ func TestRevokeCerts(t *testing.T) {
fc := clock.NewFake()
mr := &mockRevoker{}
bkr := &badKeyRevoker{dbMap: dbMap, raClient: mr, clk: fc}
bkr := &badKeyRevoker{
dbMap: dbMap,
raClient: mr,
clk: fc,
}
err = bkr.revokeCerts([]unrevokedCertificate{
{ID: 0, Serial: "ff"},
@ -269,11 +297,20 @@ func TestCertificateAbsent(t *testing.T) {
defer test.ResetBoulderTestDatabase(t)()
fc := clock.NewFake()
bkr := &badKeyRevoker{
dbMap: dbMap,
maxRevocations: 1,
serialBatchSize: 1,
raClient: &mockRevoker{},
logger: blog.NewMock(),
clk: fc,
maxExpectedReplicationLag: time.Second * 22,
}
// populate DB with all the test data
regIDA := insertRegistration(t, dbMap, fc)
hashA := randHash(t)
insertBlockedRow(t, dbMap, fc, hashA, regIDA, false)
insertBlockedRow(t, dbMap, fcBeforeRepLag(fc, bkr), hashA, regIDA, false)
// Add an entry to keyHashToSerial but not to certificateStatus or certificate
// status, and expect an error.
@ -286,14 +323,6 @@ func TestCertificateAbsent(t *testing.T) {
)
test.AssertNotError(t, err, "failed to insert test keyHashToSerial row")
bkr := &badKeyRevoker{
dbMap: dbMap,
maxRevocations: 1,
serialBatchSize: 1,
raClient: &mockRevoker{},
logger: blog.NewMock(),
clk: fc,
}
_, err = bkr.invoke(ctx)
test.AssertError(t, err, "expected error when row in keyHashToSerial didn't have a matching cert")
}
@ -309,12 +338,13 @@ func TestInvoke(t *testing.T) {
mr := &mockRevoker{}
bkr := &badKeyRevoker{
dbMap: dbMap,
maxRevocations: 10,
serialBatchSize: 1,
raClient: mr,
logger: blog.NewMock(),
clk: fc,
dbMap: dbMap,
maxRevocations: 10,
serialBatchSize: 1,
raClient: mr,
logger: blog.NewMock(),
clk: fc,
maxExpectedReplicationLag: time.Second * 22,
}
// populate DB with all the test data
@ -323,7 +353,7 @@ func TestInvoke(t *testing.T) {
regIDC := insertRegistration(t, dbMap, fc)
regIDD := insertRegistration(t, dbMap, fc)
hashA := randHash(t)
insertBlockedRow(t, dbMap, fc, hashA, regIDC, false)
insertBlockedRow(t, dbMap, fcBeforeRepLag(fc, bkr), hashA, regIDC, false)
insertGoodCert(t, dbMap, fc, hashA, "ff", regIDA)
insertGoodCert(t, dbMap, fc, hashA, "ee", regIDB)
insertGoodCert(t, dbMap, fc, hashA, "dd", regIDC)
@ -344,7 +374,7 @@ func TestInvoke(t *testing.T) {
// add a row with no associated valid certificates
hashB := randHash(t)
insertBlockedRow(t, dbMap, fc, hashB, regIDC, false)
insertBlockedRow(t, dbMap, fcBeforeRepLag(fc, bkr), hashB, regIDC, false)
insertCert(t, dbMap, fc, hashB, "bb", regIDA, Expired, Revoked)
noWork, err = bkr.invoke(ctx)
@ -375,11 +405,12 @@ func TestInvokeRevokerHasNoExtantCerts(t *testing.T) {
mr := &mockRevoker{}
bkr := &badKeyRevoker{dbMap: dbMap,
maxRevocations: 10,
serialBatchSize: 1,
raClient: mr,
logger: blog.NewMock(),
clk: fc,
maxRevocations: 10,
serialBatchSize: 1,
raClient: mr,
logger: blog.NewMock(),
clk: fc,
maxExpectedReplicationLag: time.Second * 22,
}
// populate DB with all the test data
@ -389,7 +420,7 @@ func TestInvokeRevokerHasNoExtantCerts(t *testing.T) {
hashA := randHash(t)
insertBlockedRow(t, dbMap, fc, hashA, regIDA, false)
insertBlockedRow(t, dbMap, fcBeforeRepLag(fc, bkr), hashA, regIDA, false)
insertGoodCert(t, dbMap, fc, hashA, "ee", regIDB)
insertGoodCert(t, dbMap, fc, hashA, "dd", regIDB)

View File

@ -1,21 +1,20 @@
# `ceremony`
```
```sh
ceremony --config path/to/config.yml
```
`ceremony` is a tool designed for Certificate Authority specific key and certificate ceremonies. The main design principle is that, unlike most ceremony tooling, there is a single user input, a configuration file, which is required to complete a root, intermediate, or key ceremony. The goal is to make ceremonies as simple as possible and allow for simple verification of a single file, instead of verification of a large number of independent commands.
`ceremony` has these modes:
* `root` - generates a signing key on HSM and creates a self-signed root certificate that uses the generated key, outputting a PEM public key, and a PEM certificate. After generating such a root for public trust purposes, it should be submitted to [as many root programs as is possible/practical](https://github.com/daknob/root-programs).
* `intermediate` - creates a intermediate certificate and signs it using a signing key already on a HSM, outputting a PEM certificate
* `cross-csr` - creates a CSR for signing by a third party, outputting a PEM CSR.
* `cross-certificate` - issues a certificate for one root, signed by another root. This is distinct from an intermediate because there is no path length constraint and there are no EKUs.
* `ocsp-signer` - creates a delegated OCSP signing certificate and signs it using a signing key already on a HSM, outputting a PEM certificate
* `crl-signer` - creates a delegated CRL signing certificate and signs it using a signing key already on a HSM, outputting a PEM certificate
* `key` - generates a signing key on HSM, outputting a PEM public key
* `ocsp-response` - creates a OCSP response for the provided certificate and signs it using a signing key already on a HSM, outputting a base64 encoded response
* `crl` - creates a CRL with the IDP extension and `onlyContainsCACerts = true` from the provided profile and signs it using a signing key already on a HSM, outputting a PEM CRL
- `root`: generates a signing key on HSM and creates a self-signed root certificate that uses the generated key, outputting a PEM public key, and a PEM certificate. After generating such a root for public trust purposes, it should be submitted to [as many root programs as is possible/practical](https://github.com/daknob/root-programs).
- `intermediate`: creates an intermediate certificate and signs it using a signing key already on a HSM, outputting a PEM certificate
- `cross-csr`: creates a CSR for signing by a third party, outputting a PEM CSR.
- `cross-certificate`: issues a certificate for one root, signed by another root. This is distinct from an intermediate because there is no path length constraint and there are no EKUs.
- `key`: generates a signing key on HSM, outputting a PEM public key
- `ocsp-response`: creates an OCSP response for the provided certificate and signs it using a signing key already on a HSM, outputting a base64 encoded response
- `crl`: creates a CRL with the IDP extension and `onlyContainsCACerts = true` from the provided profile and signs it using a signing key already on a HSM, outputting a PEM CRL
These modes are set in the `ceremony-type` field of the configuration file.
@ -29,23 +28,29 @@ This tool always generates key pairs such that the public and private key are bo
- `ceremony-type`: string describing the ceremony type, `root`.
- `pkcs11`: object containing PKCS#11 related fields.
| Field | Description |
| --- | --- |
| `module` | Path to the PKCS#11 module to use to communicate with a HSM. |
| `pin` | Specifies the login PIN, should only be provided if the HSM device requires one to interact with the slot. |
| `store-key-in-slot` | Specifies which HSM object slot the generated signing key should be stored in. |
| `store-key-with-label` | Specifies the HSM object label for the generated signing key. Both public and private key objects are stored with this label. |
- `key`: object containing key generation related fields.
| Field | Description |
| --- | --- |
| `type` | Specifies the type of key to be generated, either `rsa` or `ecdsa`. If `rsa` the generated key will have an exponent of 65537 and a modulus length specified by `rsa-mod-length`. If `ecdsa` the curve is specified by `ecdsa-curve`. |
| `ecdsa-curve` | Specifies the ECDSA curve to use when generating key, either `P-224`, `P-256`, `P-384`, or `P-521`. |
| `rsa-mod-length` | Specifies the length of the RSA modulus, either `2048` or `4096`.
| `rsa-mod-length` | Specifies the length of the RSA modulus, either `2048` or `4096`. |
- `outputs`: object containing paths to write outputs.
| Field | Description |
| --- | --- |
| `public-key-path` | Path to store generated PEM public key. |
| `certificate-path` | Path to store signed PEM certificate. |
- `certificate-profile`: object containing profile for certificate to generate. Fields are documented [below](#certificate-profile-format).
Example:
@ -76,25 +81,31 @@ certificate-profile:
This config generates a ECDSA P-384 key in the HSM with the object label `root signing key` and uses this key to sign a self-signed certificate. The public key for the key generated is written to `/home/user/root-signing-pub.pem` and the certificate is written to `/home/user/root-cert.pem`.
### Intermediate or Cross-Certificate ceremony
### Intermediate ceremony
- `ceremony-type`: string describing the ceremony type, `intermediate` or `cross-certificate`.
- `ceremony-type`: string describing the ceremony type, `intermediate`.
- `pkcs11`: object containing PKCS#11 related fields.
| Field | Description |
| --- | --- |
| `module` | Path to the PKCS#11 module to use to communicate with a HSM. |
| `pin` | Specifies the login PIN, should only be provided if the HSM device requires one to interact with the slot. |
| `signing-key-slot` | Specifies which HSM object slot the signing key is in. |
| `signing-key-label` | Specifies the HSM object label for the signing keypair's public key. |
- `inputs`: object containing paths for inputs
| Field | Description |
| --- | --- |
| `public-key-path` | Path to PEM subject public key for certificate. |
| `issuer-certificate-path` | Path to PEM issuer certificate. |
| `public-key-path` | Path to PEM subject public key for certificate. |
- `outputs`: object containing paths to write outputs.
| Field | Description |
| --- | --- |
| `certificate-path` | Path to store signed PEM certificate. |
- `certificate-profile`: object containing profile for certificate to generate. Fields are documented [below](#certificate-profile-format).
Example:
@ -106,8 +117,8 @@ pkcs11:
signing-key-slot: 0
signing-key-label: root signing key
inputs:
public-key-path: /home/user/intermediate-signing-pub.pem
issuer-certificate-path: /home/user/root-cert.pem
public-key-path: /home/user/intermediate-signing-pub.pem
outputs:
certificate-path: /home/user/intermediate-cert.pem
certificate-profile:
@ -131,26 +142,95 @@ certificate-profile:
This config generates an intermediate certificate signed by a key in the HSM, identified by the object label `root signing key` and the object ID `ffff`. The subject key used is taken from `/home/user/intermediate-signing-pub.pem` and the issuer is `/home/user/root-cert.pem`, the resulting certificate is written to `/home/user/intermediate-cert.pem`.
Note: Intermediate certificates always include the extended key usages id-kp-serverAuth as required by 7.1.2.2.g of the CABF Baseline Requirements. Since we also include id-kp-clientAuth in end-entity certificates in boulder we also include it in intermediates, if this changes we may remove this inclusion.
Note: Intermediate certificates always include the extended key usage id-kp-serverAuth as required by 7.1.2.2.g of the CABF Baseline Requirements.
### Cross-CSR ceremony
### Cross-Certificate ceremony
- `ceremony-type`: string describing the ceremony type, `cross-csr`.
- `ceremony-type`: string describing the ceremony type, `cross-certificate`.
- `pkcs11`: object containing PKCS#11 related fields.
| Field | Description |
| --- | --- |
| `module` | Path to the PKCS#11 module to use to communicate with a HSM. |
| `pin` | Specifies the login PIN, should only be provided if the HSM device requires one to interact with the slot. |
| `signing-key-slot` | Specifies which HSM object slot the signing key is in. |
| `signing-key-label` | Specifies the HSM object label for the signing keypair's public key. |
- `inputs`: object containing paths for inputs
| Field | Description |
| --- | --- |
| `issuer-certificate-path` | Path to PEM issuer certificate. |
| `public-key-path` | Path to PEM subject public key for certificate. |
| `certificate-to-cross-sign-path` | Path to PEM self-signed certificate that this ceremony is a cross-sign of. |
- `outputs`: object containing paths to write outputs.
| Field | Description |
| --- | --- |
| `certificate-path` | Path to store signed PEM certificate. |
- `certificate-profile`: object containing profile for certificate to generate. Fields are documented [below](#certificate-profile-format).
Example:
```yaml
ceremony-type: cross-certificate
pkcs11:
module: /usr/lib/opensc-pkcs11.so
signing-key-slot: 0
signing-key-label: root signing key
inputs:
issuer-certificate-path: /home/user/root-cert.pem
public-key-path: /home/user/root-signing-pub-2.pem
certificate-to-cross-sign-path: /home/user/root-cert-2.pem
outputs:
certificate-path: /home/user/root-cert-2-cross.pem
certificate-profile:
signature-algorithm: ECDSAWithSHA384
common-name: CA root 2
organization: good guys
country: US
not-before: 2020-01-01 12:00:00
not-after: 2040-01-01 12:00:00
ocsp-url: http://good-guys.com/ocsp
crl-url: http://good-guys.com/crl
issuer-url: http://good-guys.com/root
policies:
- oid: 1.2.3
- oid: 4.5.6
key-usages:
- Digital Signature
- Cert Sign
- CRL Sign
```
This config generates a cross-sign of the already-created "CA root 2", issued from the similarly-already-created "CA root". The subject key used is taken from `/home/user/root-signing-pub-2.pem`. The EKUs and Subject Key Identifier are taken from `/home/user/root-cert-2.pem`, the certificate being cross-signed. The issuer is `/home/user/root-cert.pem`, and the Issuer and Authority Key Identifier fields are taken from that cert. The resulting certificate is written to `/home/user/root-cert-2-cross.pem`.
### Cross-CSR ceremony
- `ceremony-type`: string describing the ceremony type, `cross-csr`.
- `pkcs11`: object containing PKCS#11 related fields.
| Field | Description |
| --- | --- |
| `module` | Path to the PKCS#11 module to use to communicate with a HSM. |
| `pin` | Specifies the login PIN, should only be provided if the HSM device requires one to interact with the slot. |
| `signing-key-slot` | Specifies which HSM object slot the signing key is in. |
| `signing-key-label` | Specifies the HSM object label for the signing keypair's public key. |
- `inputs`: object containing paths for inputs
| Field | Description |
| --- | --- |
| `public-key-path` | Path to PEM subject public key for certificate. |
- `outputs`: object containing paths to write outputs.
| Field | Description |
| --- | --- |
| `csr-path` | Path to store PEM CSR for cross-signing, optional. |
- `certificate-profile`: object containing profile for certificate to generate. Fields are documented [below](#certificate-profile-format). Should only include Subject related fields `common-name`, `organization`, `country`.
Example:
@ -173,119 +253,28 @@ certificate-profile:
This config generates a CSR signed by a key in the HSM, identified by the object label `intermediate signing key`, and writes it to `/home/user/csr.pem`.
### OCSP Signing Certificate ceremony
- `ceremony-type`: string describing the ceremony type, `ocsp-signer`.
- `pkcs11`: object containing PKCS#11 related fields.
| Field | Description |
| --- | --- |
| `module` | Path to the PKCS#11 module to use to communicate with a HSM. |
| `pin` | Specifies the login PIN, should only be provided if the HSM device requires one to interact with the slot. |
| `signing-key-slot` | Specifies which HSM object slot the signing key is in. |
| `signing-key-label` | Specifies the HSM object label for the signing keypair's public key. |
- `inputs`: object containing paths for inputs
| Field | Description |
| --- | --- |
| `public-key-path` | Path to PEM subject public key for certificate. |
| `issuer-certificate-path` | Path to PEM issuer certificate. |
- `outputs`: object containing paths to write outputs.
| Field | Description |
| --- | --- |
| `certificate-path` | Path to store signed PEM certificate. |
- `certificate-profile`: object containing profile for certificate to generate. Fields are documented [below](#certificate-profile-format). The key-usages, ocsp-url, and crl-url fields must not be set.
When generating an OCSP signing certificate the key usages field will be set to just Digital Signature and an EKU extension will be included with the id-kp-OCSPSigning usage. Additionally an id-pkix-ocsp-nocheck extension will be included in the certificate.
Example:
```yaml
ceremony-type: ocsp-signer
pkcs11:
module: /usr/lib/opensc-pkcs11.so
signing-key-slot: 0
signing-key-label: intermediate signing key
inputs:
public-key-path: /home/user/ocsp-signer-signing-pub.pem
issuer-certificate-path: /home/user/intermediate-cert.pem
outputs:
certificate-path: /home/user/ocsp-signer-cert.pem
certificate-profile:
signature-algorithm: ECDSAWithSHA384
common-name: CA OCSP signer
organization: good guys
country: US
not-before: 2020-01-01 12:00:00
not-after: 2040-01-01 12:00:00
issuer-url: http://good-guys.com/root
```
This config generates a delegated OCSP signing certificate signed by a key in the HSM, identified by the object label `intermediate signing key` and the object ID `ffff`. The subject key used is taken from `/home/user/ocsp-signer-signing-pub.pem` and the issuer is `/home/user/intermediate-cert.pem`, the resulting certificate is written to `/home/user/ocsp-signer-cert.pem`.
### CRL Signing Certificate ceremony
- `ceremony-type`: string describing the ceremony type, `crl-signer`.
- `pkcs11`: object containing PKCS#11 related fields.
| Field | Description |
| --- | --- |
| `module` | Path to the PKCS#11 module to use to communicate with a HSM. |
| `pin` | Specifies the login PIN, should only be provided if the HSM device requires one to interact with the slot. |
| `signing-key-slot` | Specifies which HSM object slot the signing key is in. |
| `signing-key-label` | Specifies the HSM object label for the signing keypair's public key. |
- `inputs`: object containing paths for inputs
| Field | Description |
| --- | --- |
| `public-key-path` | Path to PEM subject public key for certificate. |
| `issuer-certificate-path` | Path to PEM issuer certificate. |
- `outputs`: object containing paths to write outputs.
| Field | Description |
| --- | --- |
| `certificate-path` | Path to store signed PEM certificate. |
- `certificate-profile`: object containing profile for certificate to generate. Fields are documented [below](#certificate-profile-format). The key-usages, ocsp-url, and crl-url fields must not be set.
When generating a CRL signing certificate the key usages field will be set to just CRL Sign.
Example:
```yaml
ceremony-type: crl-signer
pkcs11:
module: /usr/lib/opensc-pkcs11.so
signing-key-slot: 0
signing-key-label: intermediate signing key
inputs:
public-key-path: /home/user/crl-signer-signing-pub.pem
issuer-certificate-path: /home/user/intermediate-cert.pem
outputs:
certificate-path: /home/user/crl-signer-cert.pem
certificate-profile:
signature-algorithm: ECDSAWithSHA384
common-name: CA CRL signer
organization: good guys
country: US
not-before: 2020-01-01 12:00:00
not-after: 2040-01-01 12:00:00
issuer-url: http://good-guys.com/root
```
This config generates a delegated CRL signing certificate signed by a key in the HSM, identified by the object label `intermediate signing key` and the object ID `ffff`. The subject key used is taken from `/home/user/crl-signer-signing-pub.pem` and the issuer is `/home/user/intermediate-cert.pem`, the resulting certificate is written to `/home/user/crl-signer-cert.pem`.
### Key ceremony
- `ceremony-type`: string describing the ceremony type, `key`.
- `pkcs11`: object containing PKCS#11 related fields.
| Field | Description |
| --- | --- |
| `module` | Path to the PKCS#11 module to use to communicate with a HSM. |
| `pin` | Specifies the login PIN, should only be provided if the HSM device requires one to interact with the slot. |
| `store-key-in-slot` | Specifies which HSM object slot the generated signing key should be stored in. |
| `store-key-with-label` | Specifies the HSM object label for the generated signing key. Both public and private key objects are stored with this label. |
- `key`: object containing key generation related fields.
| Field | Description |
| --- | --- |
| `type` | Specifies the type of key to be generated, either `rsa` or `ecdsa`. If `rsa` the generated key will have an exponent of 65537 and a modulus length specified by `rsa-mod-length`. If `ecdsa` the curve is specified by `ecdsa-curve`. |
| `ecdsa-curve` | Specifies the ECDSA curve to use when generating key, either `P-224`, `P-256`, `P-384`, or `P-521`. |
| `rsa-mod-length` | Specifies the length of the RSA modulus, either `2048` or `4096`.
| `rsa-mod-length` | Specifies the length of the RSA modulus, either `2048` or `4096`. |
- `outputs`: object containing paths to write outputs.
| Field | Description |
| --- | --- |
| `public-key-path` | Path to store generated PEM public key. |
@ -311,23 +300,30 @@ This config generates an ECDSA P-384 key in the HSM with the object label `inter
- `ceremony-type`: string describing the ceremony type, `ocsp-response`.
- `pkcs11`: object containing PKCS#11 related fields.
| Field | Description |
| --- | --- |
| `module` | Path to the PKCS#11 module to use to communicate with a HSM. |
| `pin` | Specifies the login PIN, should only be provided if the HSM device requires one to interact with the slot. |
| `signing-key-slot` | Specifies which HSM object slot the signing key is in. |
| `signing-key-label` | Specifies the HSM object label for the signing keypair's public key. |
- `inputs`: object containing paths for inputs
| Field | Description |
| --- | --- |
| `certificate-path` | Path to PEM certificate to create a response for. |
| `issuer-certificate-path` | Path to PEM issuer certificate. |
| `delegated-issuer-certificate-path` | Path to PEM delegated issuer certificate, if one is being used. |
- `outputs`: object containing paths to write outputs.
| Field | Description |
| --- | --- |
| `response-path` | Path to store signed base64 encoded response. |
- `ocsp-profile`: object containing profile for the OCSP response.
| Field | Description |
| --- | --- |
| `this-update` | Specifies the OCSP response thisUpdate date, in the format `2006-01-02 15:04:05`. The time will be interpreted as UTC. |
@ -359,21 +355,28 @@ This config generates a OCSP response signed by a key in the HSM, identified by
- `ceremony-type`: string describing the ceremony type, `crl`.
- `pkcs11`: object containing PKCS#11 related fields.
| Field | Description |
| --- | --- |
| `module` | Path to the PKCS#11 module to use to communicate with a HSM. |
| `pin` | Specifies the login PIN, should only be provided if the HSM device requires one to interact with the slot. |
| `signing-key-slot` | Specifies which HSM object slot the signing key is in. |
| `signing-key-label` | Specifies the HSM object label for the signing keypair's public key. |
- `inputs`: object containing paths for inputs
| Field | Description |
| --- | --- |
| `issuer-certificate-path` | Path to PEM issuer certificate. |
- `outputs`: object containing paths to write outputs.
| Field | Description |
| --- | --- |
| `crl-path` | Path to store signed PEM CRL. |
- `crl-profile`: object containing profile for the CRL.
| Field | Description |
| --- | --- |
| `this-update` | Specifies the CRL thisUpdate date, in the format `2006-01-02 15:04:05`. The time will be interpreted as UTC. |

View File

@ -76,8 +76,6 @@ type certType int
const (
rootCert certType = iota
intermediateCert
ocspCert
crlCert
crossCert
requestCert
)
@ -153,23 +151,12 @@ func (profile *certProfile) verifyProfile(ct certType) error {
}
// BR 7.1.2.10.5 CA Certificate Certificate Policies
// OID 2.23.140.1.2.1 is an anyPolicy
// OID 2.23.140.1.2.1 is CABF BRs Domain Validated
if len(profile.Policies) != 1 || profile.Policies[0].OID != "2.23.140.1.2.1" {
return errors.New("policy should be exactly BRs domain-validated for subordinate CAs")
}
}
if ct == ocspCert || ct == crlCert {
if len(profile.KeyUsages) != 0 {
return errors.New("key-usages cannot be set for a delegated signer")
}
if profile.CRLURL != "" {
return errors.New("crl-url cannot be set for a delegated signer")
}
if profile.OCSPURL != "" {
return errors.New("ocsp-url cannot be set for a delegated signer")
}
}
return nil
}
@ -194,8 +181,6 @@ var stringToKeyUsage = map[string]x509.KeyUsage{
"Cert Sign": x509.KeyUsageCertSign,
}
var oidOCSPNoCheck = asn1.ObjectIdentifier{1, 3, 6, 1, 5, 5, 7, 48, 1, 5}
func generateSKID(pk []byte) ([]byte, error) {
var pkixPublicKey struct {
Algo pkix.AlgorithmIdentifier
@ -252,11 +237,6 @@ func makeTemplate(randReader io.Reader, profile *certProfile, pubKey []byte, tbc
}
ku |= kuBit
}
if ct == ocspCert {
ku = x509.KeyUsageDigitalSignature
} else if ct == crlCert {
ku = x509.KeyUsageCRLSign
}
if ku == 0 {
return nil, errors.New("at least one key usage must be set")
}
@ -296,14 +276,6 @@ func makeTemplate(randReader io.Reader, profile *certProfile, pubKey []byte, tbc
// BR 7.1.2.1.2 Root CA Extensions
// Extension Presence Critical Description
// extKeyUsage MUST NOT N -
case ocspCert:
cert.ExtKeyUsage = []x509.ExtKeyUsage{x509.ExtKeyUsageOCSPSigning}
// ASN.1 NULL is 0x05, 0x00
ocspNoCheckExt := pkix.Extension{Id: oidOCSPNoCheck, Value: []byte{5, 0}}
cert.ExtraExtensions = append(cert.ExtraExtensions, ocspNoCheckExt)
cert.IsCA = false
case crlCert:
cert.IsCA = false
case requestCert, intermediateCert:
// id-kp-serverAuth is included in intermediate certificates, as required by
// Section 7.1.2.10.6 of the CA/BF Baseline Requirements.
@ -314,6 +286,8 @@ func makeTemplate(randReader io.Reader, profile *certProfile, pubKey []byte, tbc
case crossCert:
cert.ExtKeyUsage = tbcs.ExtKeyUsage
cert.MaxPathLenZero = tbcs.MaxPathLenZero
// The SKID needs to match the previous SKID, no matter how it was computed.
cert.SubjectKeyId = tbcs.SubjectKeyId
}
for _, policyConfig := range profile.Policies {

View File

@ -1,7 +1,6 @@
package main
import (
"bytes"
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
@ -174,73 +173,6 @@ func TestMakeTemplateRestrictedCrossCertificate(t *testing.T) {
test.AssertEquals(t, cert.ExtKeyUsage[0], x509.ExtKeyUsageServerAuth)
}
func TestMakeTemplateOCSP(t *testing.T) {
s, ctx := pkcs11helpers.NewSessionWithMock()
ctx.GenerateRandomFunc = realRand
randReader := newRandReader(s)
profile := &certProfile{
SignatureAlgorithm: "SHA256WithRSA",
CommonName: "common name",
Organization: "organization",
Country: "country",
OCSPURL: "ocsp",
CRLURL: "crl",
IssuerURL: "issuer",
NotAfter: "2018-05-18 11:31:00",
NotBefore: "2018-05-18 11:31:00",
}
pubKey := samplePubkey()
cert, err := makeTemplate(randReader, profile, pubKey, nil, ocspCert)
test.AssertNotError(t, err, "makeTemplate failed")
test.Assert(t, !cert.IsCA, "IsCA is set")
// Check KU is only KeyUsageDigitalSignature
test.AssertEquals(t, cert.KeyUsage, x509.KeyUsageDigitalSignature)
// Check there is a single EKU with id-kp-OCSPSigning
test.AssertEquals(t, len(cert.ExtKeyUsage), 1)
test.AssertEquals(t, cert.ExtKeyUsage[0], x509.ExtKeyUsageOCSPSigning)
// Check ExtraExtensions contains a single id-pkix-ocsp-nocheck
hasExt := false
asnNULL := []byte{5, 0}
for _, ext := range cert.ExtraExtensions {
if ext.Id.Equal(oidOCSPNoCheck) {
if hasExt {
t.Error("template contains multiple id-pkix-ocsp-nocheck extensions")
}
hasExt = true
if !bytes.Equal(ext.Value, asnNULL) {
t.Errorf("id-pkix-ocsp-nocheck has unexpected content: want %x, got %x", asnNULL, ext.Value)
}
}
}
test.Assert(t, hasExt, "template doesn't contain id-pkix-ocsp-nocheck extensions")
}
func TestMakeTemplateCRL(t *testing.T) {
s, ctx := pkcs11helpers.NewSessionWithMock()
ctx.GenerateRandomFunc = realRand
randReader := newRandReader(s)
profile := &certProfile{
SignatureAlgorithm: "SHA256WithRSA",
CommonName: "common name",
Organization: "organization",
Country: "country",
OCSPURL: "ocsp",
CRLURL: "crl",
IssuerURL: "issuer",
NotAfter: "2018-05-18 11:31:00",
NotBefore: "2018-05-18 11:31:00",
}
pubKey := samplePubkey()
cert, err := makeTemplate(randReader, profile, pubKey, nil, crlCert)
test.AssertNotError(t, err, "makeTemplate failed")
test.Assert(t, !cert.IsCA, "IsCA is set")
test.AssertEquals(t, cert.KeyUsage, x509.KeyUsageCRLSign)
}
func TestVerifyProfile(t *testing.T) {
for _, tc := range []struct {
profile certProfile
@ -366,114 +298,6 @@ func TestVerifyProfile(t *testing.T) {
},
certType: []certType{rootCert},
},
{
profile: certProfile{
NotBefore: "a",
NotAfter: "b",
SignatureAlgorithm: "c",
CommonName: "d",
Organization: "e",
Country: "f",
IssuerURL: "g",
KeyUsages: []string{"j"},
},
certType: []certType{ocspCert},
expectedErr: "key-usages cannot be set for a delegated signer",
},
{
profile: certProfile{
NotBefore: "a",
NotAfter: "b",
SignatureAlgorithm: "c",
CommonName: "d",
Organization: "e",
Country: "f",
IssuerURL: "g",
CRLURL: "i",
},
certType: []certType{ocspCert},
expectedErr: "crl-url cannot be set for a delegated signer",
},
{
profile: certProfile{
NotBefore: "a",
NotAfter: "b",
SignatureAlgorithm: "c",
CommonName: "d",
Organization: "e",
Country: "f",
IssuerURL: "g",
OCSPURL: "h",
},
certType: []certType{ocspCert},
expectedErr: "ocsp-url cannot be set for a delegated signer",
},
{
profile: certProfile{
NotBefore: "a",
NotAfter: "b",
SignatureAlgorithm: "c",
CommonName: "d",
Organization: "e",
Country: "f",
IssuerURL: "g",
},
certType: []certType{ocspCert},
},
{
profile: certProfile{
NotBefore: "a",
NotAfter: "b",
SignatureAlgorithm: "c",
CommonName: "d",
Organization: "e",
Country: "f",
IssuerURL: "g",
KeyUsages: []string{"j"},
},
certType: []certType{crlCert},
expectedErr: "key-usages cannot be set for a delegated signer",
},
{
profile: certProfile{
NotBefore: "a",
NotAfter: "b",
SignatureAlgorithm: "c",
CommonName: "d",
Organization: "e",
Country: "f",
IssuerURL: "g",
CRLURL: "i",
},
certType: []certType{crlCert},
expectedErr: "crl-url cannot be set for a delegated signer",
},
{
profile: certProfile{
NotBefore: "a",
NotAfter: "b",
SignatureAlgorithm: "c",
CommonName: "d",
Organization: "e",
Country: "f",
IssuerURL: "g",
OCSPURL: "h",
},
certType: []certType{crlCert},
expectedErr: "ocsp-url cannot be set for a delegated signer",
},
{
profile: certProfile{
NotBefore: "a",
NotAfter: "b",
SignatureAlgorithm: "c",
CommonName: "d",
Organization: "e",
Country: "f",
IssuerURL: "g",
},
certType: []certType{crlCert},
},
{
profile: certProfile{
NotBefore: "a",

View File

@ -7,8 +7,9 @@ import (
"fmt"
"log"
"github.com/letsencrypt/boulder/pkcs11helpers"
"github.com/miekg/pkcs11"
"github.com/letsencrypt/boulder/pkcs11helpers"
)
var stringToCurve = map[string]elliptic.Curve{
@ -70,7 +71,7 @@ func ecPub(
return nil, err
}
if pubKey.Curve != expectedCurve {
return nil, errors.New("Returned EC parameters doesn't match expected curve")
return nil, errors.New("returned EC parameters doesn't match expected curve")
}
log.Printf("\tX: %X\n", pubKey.X.Bytes())
log.Printf("\tY: %X\n", pubKey.Y.Bytes())

View File

@ -7,8 +7,9 @@ import (
"fmt"
"log"
"github.com/letsencrypt/boulder/pkcs11helpers"
"github.com/miekg/pkcs11"
"github.com/letsencrypt/boulder/pkcs11helpers"
)
type hsmRandReader struct {
@ -49,7 +50,7 @@ func generateKey(session *pkcs11helpers.Session, label string, outputPath string
{Type: pkcs11.CKA_LABEL, Value: []byte(label)},
})
if err != pkcs11helpers.ErrNoObject {
return nil, fmt.Errorf("expected no preexisting objects with label %q in slot for key storage. got error: %s", label, err)
return nil, fmt.Errorf("expected no preexisting objects with label %q in slot for key storage. got error: %w", label, err)
}
var pubKey crypto.PublicKey
@ -58,25 +59,25 @@ func generateKey(session *pkcs11helpers.Session, label string, outputPath string
case "rsa":
pubKey, keyID, err = rsaGenerate(session, label, config.RSAModLength)
if err != nil {
return nil, fmt.Errorf("failed to generate RSA key pair: %s", err)
return nil, fmt.Errorf("failed to generate RSA key pair: %w", err)
}
case "ecdsa":
pubKey, keyID, err = ecGenerate(session, label, config.ECDSACurve)
if err != nil {
return nil, fmt.Errorf("failed to generate ECDSA key pair: %s", err)
return nil, fmt.Errorf("failed to generate ECDSA key pair: %w", err)
}
}
der, err := x509.MarshalPKIXPublicKey(pubKey)
if err != nil {
return nil, fmt.Errorf("Failed to marshal public key: %s", err)
return nil, fmt.Errorf("failed to marshal public key: %w", err)
}
pemBytes := pem.EncodeToMemory(&pem.Block{Type: "PUBLIC KEY", Bytes: der})
log.Printf("Public key PEM:\n%s\n", pemBytes)
err = writeFile(outputPath, pemBytes)
if err != nil {
return nil, fmt.Errorf("Failed to write public key to %q: %s", outputPath, err)
return nil, fmt.Errorf("failed to write public key to %q: %w", outputPath, err)
}
log.Printf("Public key written to %q\n", outputPath)

View File

@ -239,7 +239,7 @@ type intermediateConfig struct {
SkipLints []string `yaml:"skip-lints"`
}
func (ic intermediateConfig) validate(ct certType) error {
func (ic intermediateConfig) validate() error {
err := ic.PKCS11.validate()
if err != nil {
return err
@ -260,7 +260,7 @@ func (ic intermediateConfig) validate(ct certType) error {
}
// Certificate profile
err = ic.CertProfile.verifyProfile(ct)
err = ic.CertProfile.verifyProfile(intermediateCert)
if err != nil {
return err
}
@ -504,7 +504,7 @@ func loadCert(filename string) (*x509.Certificate, error) {
log.Printf("Loaded certificate from %s\n", filename)
block, _ := pem.Decode(certPEM)
if block == nil {
return nil, fmt.Errorf("No data in cert PEM file %s", filename)
return nil, fmt.Errorf("no data in cert PEM file %q", filename)
}
cert, err := x509.ParseCertificate(block.Bytes)
if err != nil {
@ -599,7 +599,7 @@ func loadPubKey(filename string) (crypto.PublicKey, []byte, error) {
log.Printf("Loaded public key from %s\n", filename)
block, _ := pem.Decode(keyPEM)
if block == nil {
return nil, nil, fmt.Errorf("No data in cert PEM file %s", filename)
return nil, nil, fmt.Errorf("no data in cert PEM file %q", filename)
}
key, err := x509.ParsePKIXPublicKey(block.Bytes)
if err != nil {
@ -658,17 +658,14 @@ func rootCeremony(configBytes []byte) error {
return nil
}
func intermediateCeremony(configBytes []byte, ct certType) error {
if ct != intermediateCert && ct != ocspCert && ct != crlCert {
return fmt.Errorf("wrong certificate type provided")
}
func intermediateCeremony(configBytes []byte) error {
var config intermediateConfig
err := strictyaml.Unmarshal(configBytes, &config)
if err != nil {
return fmt.Errorf("failed to parse config: %s", err)
}
log.Printf("Preparing intermediate ceremony for %s\n", config.Outputs.CertificatePath)
err = config.validate(ct)
err = config.validate()
if err != nil {
return fmt.Errorf("failed to validate config: %s", err)
}
@ -684,7 +681,7 @@ func intermediateCeremony(configBytes []byte, ct certType) error {
if err != nil {
return err
}
template, err := makeTemplate(randReader, &config.CertProfile, pubBytes, nil, ct)
template, err := makeTemplate(randReader, &config.CertProfile, pubBytes, nil, intermediateCert)
if err != nil {
return fmt.Errorf("failed to create certificate profile: %s", err)
}
@ -713,10 +710,7 @@ func intermediateCeremony(configBytes []byte, ct certType) error {
return nil
}
func crossCertCeremony(configBytes []byte, ct certType) error {
if ct != crossCert {
return fmt.Errorf("wrong certificate type provided")
}
func crossCertCeremony(configBytes []byte) error {
var config crossCertConfig
err := strictyaml.Unmarshal(configBytes, &config)
if err != nil {
@ -743,7 +737,7 @@ func crossCertCeremony(configBytes []byte, ct certType) error {
if err != nil {
return err
}
template, err := makeTemplate(randReader, &config.CertProfile, pubBytes, toBeCrossSigned, ct)
template, err := makeTemplate(randReader, &config.CertProfile, pubBytes, toBeCrossSigned, crossCert)
if err != nil {
return fmt.Errorf("failed to create certificate profile: %s", err)
}
@ -773,12 +767,24 @@ func crossCertCeremony(configBytes []byte, ct certType) error {
return fmt.Errorf("cross-signed subordinate CA's NotBefore predates the existing CA's NotBefore")
}
// BR 7.1.2.2.3 Cross-Certified Subordinate CA Extensions
// We want the Extended Key Usages of our cross-signs to be identical to those
// in the cert being cross-signed, for the sake of consistency. However, our
// Root CA Certificates do not contain any EKUs, as required by BR 7.1.2.1.2.
// Therefore, cross-signs of our roots count as "unrestricted" cross-signs per
// the definition in BR 7.1.2.2.3, and are subject to the requirement that
// the cross-sign's Issuer and Subject fields must either:
// - have identical organizationNames; or
// - have organizationNames which are affiliates of each other.
// Therefore, we enforce that cross-signs with empty EKUs have identical
// Subject Organization Name fields... or allow one special case where the
// issuer is "Internet Security Research Group" and the subject is "ISRG" to
// allow us to migrate from the longer string to the shorter one.
if !slices.Equal(lintCert.ExtKeyUsage, toBeCrossSigned.ExtKeyUsage) {
return fmt.Errorf("lint cert and toBeCrossSigned cert EKUs differ")
}
if len(lintCert.ExtKeyUsage) == 0 {
// "Unrestricted" case, the issuer and subject need to be the same or at least affiliates.
if !slices.Equal(lintCert.Subject.Organization, issuer.Subject.Organization) {
if !slices.Equal(lintCert.Subject.Organization, issuer.Subject.Organization) &&
!(slices.Equal(issuer.Subject.Organization, []string{"Internet Security Research Group"}) && slices.Equal(lintCert.Subject.Organization, []string{"ISRG"})) {
return fmt.Errorf("attempted unrestricted cross-sign of certificate operated by a different organization")
}
}
@ -1044,12 +1050,12 @@ func main() {
log.Fatalf("root ceremony failed: %s", err)
}
case "cross-certificate":
err = crossCertCeremony(configBytes, crossCert)
err = crossCertCeremony(configBytes)
if err != nil {
log.Fatalf("cross-certificate ceremony failed: %s", err)
}
case "intermediate":
err = intermediateCeremony(configBytes, intermediateCert)
err = intermediateCeremony(configBytes)
if err != nil {
log.Fatalf("intermediate ceremony failed: %s", err)
}
@ -1058,11 +1064,6 @@ func main() {
if err != nil {
log.Fatalf("cross-csr ceremony failed: %s", err)
}
case "ocsp-signer":
err = intermediateCeremony(configBytes, ocspCert)
if err != nil {
log.Fatalf("ocsp signer ceremony failed: %s", err)
}
case "key":
err = keyCeremony(configBytes)
if err != nil {
@ -1078,12 +1079,7 @@ func main() {
if err != nil {
log.Fatalf("crl ceremony failed: %s", err)
}
case "crl-signer":
err = intermediateCeremony(configBytes, crlCert)
if err != nil {
log.Fatalf("crl signer ceremony failed: %s", err)
}
default:
log.Fatalf("unknown ceremony-type, must be one of: root, cross-certificate, intermediate, cross-csr, ocsp-signer, key, ocsp-response, crl, crl-signer")
log.Fatalf("unknown ceremony-type, must be one of: root, cross-certificate, intermediate, cross-csr, key, ocsp-response, crl")
}
}

View File

@ -484,7 +484,7 @@ func TestIntermediateConfigValidate(t *testing.T) {
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
err := tc.config.validate(intermediateCert)
err := tc.config.validate()
if err != nil && err.Error() != tc.expectedError {
t.Fatalf("Unexpected error, wanted: %q, got: %q", tc.expectedError, err)
} else if err == nil && tc.expectedError != "" {

View File

@ -25,6 +25,11 @@ func TLSALPNChallenge01(token string) Challenge {
return newChallenge(ChallengeTypeTLSALPN01, token)
}
// DNSAccountChallenge01 constructs a dns-account-01 challenge.
func DNSAccountChallenge01(token string) Challenge {
return newChallenge(ChallengeTypeDNSAccount01, token)
}
// NewChallenge constructs a challenge of the given kind. It returns an
// error if the challenge type is unrecognized.
func NewChallenge(kind AcmeChallenge, token string) (Challenge, error) {
@ -35,6 +40,8 @@ func NewChallenge(kind AcmeChallenge, token string) (Challenge, error) {
return DNSChallenge01(token), nil
case ChallengeTypeTLSALPN01:
return TLSALPNChallenge01(token), nil
case ChallengeTypeDNSAccount01:
return DNSAccountChallenge01(token), nil
default:
return Challenge{}, fmt.Errorf("unrecognized challenge type %q", kind)
}

View File

@ -32,12 +32,16 @@ func TestChallenges(t *testing.T) {
dns01 := DNSChallenge01(token)
test.AssertNotError(t, dns01.CheckPending(), "CheckConsistencyForClientOffer returned an error")
dnsAccount01 := DNSAccountChallenge01(token)
test.AssertNotError(t, dnsAccount01.CheckPending(), "CheckConsistencyForClientOffer returned an error")
tlsalpn01 := TLSALPNChallenge01(token)
test.AssertNotError(t, tlsalpn01.CheckPending(), "CheckConsistencyForClientOffer returned an error")
test.Assert(t, ChallengeTypeHTTP01.IsValid(), "Refused valid challenge")
test.Assert(t, ChallengeTypeDNS01.IsValid(), "Refused valid challenge")
test.Assert(t, ChallengeTypeTLSALPN01.IsValid(), "Refused valid challenge")
test.Assert(t, ChallengeTypeDNSAccount01.IsValid(), "Refused valid challenge")
test.Assert(t, !AcmeChallenge("nonsense-71").IsValid(), "Accepted invalid challenge")
}

View File

@ -53,15 +53,16 @@ type AcmeChallenge string
// These types are the available challenges
const (
ChallengeTypeHTTP01 = AcmeChallenge("http-01")
ChallengeTypeDNS01 = AcmeChallenge("dns-01")
ChallengeTypeTLSALPN01 = AcmeChallenge("tls-alpn-01")
ChallengeTypeHTTP01 = AcmeChallenge("http-01")
ChallengeTypeDNS01 = AcmeChallenge("dns-01")
ChallengeTypeTLSALPN01 = AcmeChallenge("tls-alpn-01")
ChallengeTypeDNSAccount01 = AcmeChallenge("dns-account-01")
)
// IsValid tests whether the challenge is a known challenge
func (c AcmeChallenge) IsValid() bool {
switch c {
case ChallengeTypeHTTP01, ChallengeTypeDNS01, ChallengeTypeTLSALPN01:
case ChallengeTypeHTTP01, ChallengeTypeDNS01, ChallengeTypeTLSALPN01, ChallengeTypeDNSAccount01:
return true
default:
return false
@ -228,7 +229,7 @@ func (ch Challenge) RecordsSane() bool {
(ch.ValidationRecord[0].AddressUsed == netip.Addr{}) || len(ch.ValidationRecord[0].AddressesResolved) == 0 {
return false
}
case ChallengeTypeDNS01:
case ChallengeTypeDNS01, ChallengeTypeDNSAccount01:
if len(ch.ValidationRecord) > 1 {
return false
}
@ -429,16 +430,6 @@ type CertificateStatus struct {
IssuerNameID int64 `db:"issuerID"`
}
// FQDNSet contains the SHA256 hash of the lowercased, comma joined dNSNames
// contained in a certificate.
type FQDNSet struct {
ID int64
SetHash []byte
Serial string
Issued time.Time
Expires time.Time
}
// SCTDERs is a convenience type
type SCTDERs [][]byte

View File

@ -59,7 +59,7 @@ func TestChallengeSanityCheck(t *testing.T) {
}`), &accountKey)
test.AssertNotError(t, err, "Error unmarshaling JWK")
types := []AcmeChallenge{ChallengeTypeHTTP01, ChallengeTypeDNS01, ChallengeTypeTLSALPN01}
types := []AcmeChallenge{ChallengeTypeHTTP01, ChallengeTypeDNS01, ChallengeTypeTLSALPN01, ChallengeTypeDNSAccount01}
for _, challengeType := range types {
chall := Challenge{
Type: challengeType,
@ -152,6 +152,8 @@ func TestChallengeStringID(t *testing.T) {
test.AssertEquals(t, ch.StringID(), "iFVMwA")
ch.Type = ChallengeTypeHTTP01
test.AssertEquals(t, ch.StringID(), "0Gexug")
ch.Type = ChallengeTypeDNSAccount01
test.AssertEquals(t, ch.StringID(), "8z2wSg")
}
func TestFindChallengeByType(t *testing.T) {

View File

@ -79,7 +79,7 @@ services:
- setup
bmysql:
image: mariadb:10.6.22
image: mariadb:10.11.13
networks:
bouldernet:
aliases:

go.mod (4 lines changed)
View File

@ -7,7 +7,7 @@ require (
github.com/aws/aws-sdk-go-v2/config v1.29.17
github.com/aws/aws-sdk-go-v2/service/s3 v1.83.0
github.com/aws/smithy-go v1.22.4
github.com/eggsampler/acme/v3 v3.6.2-0.20250208073118-0466a0230941
github.com/eggsampler/acme/v3 v3.6.2
github.com/go-jose/go-jose/v4 v4.1.0
github.com/go-logr/stdr v1.2.2
github.com/go-sql-driver/mysql v1.9.1
@ -25,7 +25,7 @@ require (
github.com/prometheus/client_golang v1.22.0
github.com/prometheus/client_model v0.6.1
github.com/redis/go-redis/extra/redisotel/v9 v9.5.3
github.com/redis/go-redis/v9 v9.7.3
github.com/redis/go-redis/v9 v9.10.0
github.com/titanous/rocacheck v0.0.0-20171023193734-afe73141d399
github.com/weppos/publicsuffix-go v0.40.3-0.20250307081557-c05521c3453a
github.com/zmap/zcrypto v0.0.0-20250129210703-03c45d0bae98

go.sum (8 lines changed)
View File

@ -70,8 +70,8 @@ github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZm
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/eggsampler/acme/v3 v3.6.2-0.20250208073118-0466a0230941 h1:CnQwymLMJ3MSfjbZQ/bpaLfuXBZuM3LUgAHJ0gO/7d8=
github.com/eggsampler/acme/v3 v3.6.2-0.20250208073118-0466a0230941/go.mod h1:/qh0rKC/Dh7Jj+p4So7DbWmFNzC4dpcpK53r226Fhuo=
github.com/eggsampler/acme/v3 v3.6.2 h1:gvyZbQ92wNQLDASVftGpHEdFwPSfg0+17P0lLt09Tp8=
github.com/eggsampler/acme/v3 v3.6.2/go.mod h1:/qh0rKC/Dh7Jj+p4So7DbWmFNzC4dpcpK53r226Fhuo=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
@ -214,8 +214,8 @@ github.com/redis/go-redis/extra/rediscmd/v9 v9.5.3 h1:1/BDligzCa40GTllkDnY3Y5DTH
github.com/redis/go-redis/extra/rediscmd/v9 v9.5.3/go.mod h1:3dZmcLn3Qw6FLlWASn1g4y+YO9ycEFUOM+bhBmzLVKQ=
github.com/redis/go-redis/extra/redisotel/v9 v9.5.3 h1:kuvuJL/+MZIEdvtb/kTBRiRgYaOmx1l+lYJyVdrRUOs=
github.com/redis/go-redis/extra/redisotel/v9 v9.5.3/go.mod h1:7f/FMrf5RRRVHXgfk7CzSVzXHiWeuOQUu2bsVqWoa+g=
github.com/redis/go-redis/v9 v9.7.3 h1:YpPyAayJV+XErNsatSElgRZZVCwXX9QzkKYNvO7x0wM=
github.com/redis/go-redis/v9 v9.7.3/go.mod h1:bGUrSggJ9X9GUmZpZNEOQKaANxSGgOEBRltRTZHSvrA=
github.com/redis/go-redis/v9 v9.10.0 h1:FxwK3eV8p/CQa0Ch276C7u2d0eNC9kCmAYQ7mCXCzVs=
github.com/redis/go-redis/v9 v9.10.0/go.mod h1:huWgSWd8mW6+m0VPhJjSSQ+d6Nh1VICQ6Q5lHuCH/Iw=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=

View File

@ -142,10 +142,10 @@ func (p *Profile) GenerateValidity(now time.Time) (time.Time, time.Time) {
// Don't use the full maxBackdate, to ensure that the actual backdate remains
// acceptable throughout the rest of the issuance process.
backdate := time.Duration(float64(p.maxBackdate.Nanoseconds()) * 0.9)
notBefore := now.Add(-1 * backdate)
notBefore := now.Add(-1 * backdate).Truncate(time.Second)
// Subtract one second, because certificate validity periods are *inclusive*
// of their final second (Baseline Requirements, Section 1.6.1).
notAfter := notBefore.Add(p.maxValidity).Add(-1 * time.Second)
notAfter := notBefore.Add(p.maxValidity).Add(-1 * time.Second).Truncate(time.Second)
return notBefore, notAfter
}
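To make the effect of the added `Truncate` calls concrete, here is a small self-contained sketch with hypothetical `maxBackdate`/`maxValidity` values (not Boulder's production profile); the sub-second part of the clock reading is trimmed from notBefore before notAfter is derived from it:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Hypothetical profile values; Boulder's production profile differs.
	maxBackdate := 10 * time.Second
	maxValidity := 90 * 24 * time.Hour
	now := time.Date(2025, 7, 28, 12, 0, 0, 123456789, time.UTC) // clock reading with nanoseconds

	backdate := time.Duration(float64(maxBackdate.Nanoseconds()) * 0.9)
	notBefore := now.Add(-1 * backdate).Truncate(time.Second)
	notAfter := notBefore.Add(maxValidity).Add(-1 * time.Second).Truncate(time.Second)

	fmt.Println(notBefore) // 2025-07-28 11:59:51 +0000 UTC (sub-second part trimmed)
	fmt.Println(notAfter)  // whole seconds only, so the timestamp logs without a fractional part
}
```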

View File

@ -271,7 +271,7 @@ func initTables(dbMap *borp.DbMap) {
dbMap.AddTableWithName(issuedNameModel{}, "issuedNames").SetKeys(true, "ID")
dbMap.AddTableWithName(core.Certificate{}, "certificates").SetKeys(true, "ID")
dbMap.AddTableWithName(certificateStatusModel{}, "certificateStatus").SetKeys(true, "ID")
dbMap.AddTableWithName(core.FQDNSet{}, "fqdnSets").SetKeys(true, "ID")
dbMap.AddTableWithName(fqdnSet{}, "fqdnSets").SetKeys(true, "ID")
tableMap := dbMap.AddTableWithName(orderModel{}, "orders").SetKeys(true, "ID")
if !features.Get().StoreARIReplacesInOrders {
tableMap.ColMap("Replaces").SetTransient(true)

View File

@ -416,15 +416,17 @@ func modelToOrder(om *orderModel) (*corepb.Order, error) {
}
var challTypeToUint = map[string]uint8{
"http-01": 0,
"dns-01": 1,
"tls-alpn-01": 2,
"http-01": 0,
"dns-01": 1,
"tls-alpn-01": 2,
"dns-account-01": 3,
}
var uintToChallType = map[uint8]string{
0: "http-01",
1: "dns-01",
2: "tls-alpn-01",
3: "dns-account-01",
}
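The small integers above also serve as bit positions: elsewhere in the SA, the set of offered challenges is stored as a bitmask (`1 << challTypeToUint[...]`, as in the test further down). A hedged sketch of round-tripping a set of challenge types through such a mask:

```go
// Sketch only: encode a set of challenge-type names as a bitmask using the
// same string-to-uint8 mapping shown above, then decode it back.
package main

import "fmt"

var challTypeToUint = map[string]uint8{
	"http-01":        0,
	"dns-01":         1,
	"tls-alpn-01":    2,
	"dns-account-01": 3,
}

var uintToChallType = map[uint8]string{
	0: "http-01",
	1: "dns-01",
	2: "tls-alpn-01",
	3: "dns-account-01",
}

func encode(types []string) uint8 {
	var mask uint8
	for _, t := range types {
		mask |= 1 << challTypeToUint[t]
	}
	return mask
}

func decode(mask uint8) []string {
	var types []string
	for bit := uint8(0); bit < 4; bit++ {
		if mask&(1<<bit) != 0 {
			types = append(types, uintToChallType[bit])
		}
	}
	return types
}

func main() {
	mask := encode([]string{"dns-01", "dns-account-01"})
	fmt.Printf("mask=%08b decoded=%v\n", mask, decode(mask)) // mask=00001010 decoded=[dns-01 dns-account-01]
}
```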
var identifierTypeToUint = map[string]uint8{
@ -899,6 +901,16 @@ type crlEntryModel struct {
RevokedDate time.Time `db:"revokedDate"`
}
// fqdnSet contains the SHA256 hash of the lowercased, comma joined dNSNames
// contained in a certificate.
type fqdnSet struct {
ID int64
SetHash []byte
Serial string
Issued time.Time
Expires time.Time
}
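A minimal sketch of the hash the comment above describes, i.e. SHA-256 over the lowercased, comma-joined names; the sorting step is an assumption, and Boulder's `core.HashIdentifiers` may canonicalize identifiers differently:

```go
// Sketch only: hash a set of names as described by the fqdnSet comment.
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
	"strings"
)

func hashNames(names []string) []byte {
	lowered := make([]string, len(names))
	for i, n := range names {
		lowered[i] = strings.ToLower(n)
	}
	sort.Strings(lowered) // assumption: order-independent set hashing
	h := sha256.Sum256([]byte(strings.Join(lowered, ",")))
	return h[:]
}

func main() {
	fmt.Printf("%x\n", hashNames([]string{"Example.COM", "www.example.com"}))
}
```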
// orderFQDNSet contains the SHA256 hash of the lowercased, comma joined names
// from a new-order request, along with the corresponding orderID, the
// registration ID, and the order expiry. This is used to find
@ -912,7 +924,7 @@ type orderFQDNSet struct {
}
func addFQDNSet(ctx context.Context, db db.Inserter, idents identifier.ACMEIdentifiers, serial string, issued time.Time, expires time.Time) error {
return db.Insert(ctx, &core.FQDNSet{
return db.Insert(ctx, &fqdnSet{
SetHash: core.HashIdentifiers(idents),
Serial: serial,
Issued: issued,

View File

@ -2632,6 +2632,36 @@ func TestGetValidAuthorizations2(t *testing.T) {
aaa = am.ID
}
var dac int64
{
tokenStr := core.NewToken()
token, err := base64.RawURLEncoding.DecodeString(tokenStr)
test.AssertNotError(t, err, "computing test authorization challenge token")
profile := "test"
attempted := challTypeToUint[string(core.ChallengeTypeDNSAccount01)]
attemptedAt := fc.Now()
vr, _ := json.Marshal([]core.ValidationRecord{})
am := authzModel{
IdentifierType: identifierTypeToUint[string(identifier.TypeDNS)],
IdentifierValue: "aaa",
RegistrationID: 3,
CertificateProfileName: &profile,
Status: statusToUint[core.StatusValid],
Expires: fc.Now().Add(24 * time.Hour),
Challenges: 1 << challTypeToUint[string(core.ChallengeTypeDNSAccount01)],
Attempted: &attempted,
AttemptedAt: &attemptedAt,
Token: token,
ValidationError: nil,
ValidationRecord: vr,
}
err = sa.dbMap.Insert(context.Background(), &am)
test.AssertNotError(t, err, "failed to insert valid authz with dns-account-01")
dac = am.ID
}
for _, tc := range []struct {
name string
regID int64
@ -2648,6 +2678,14 @@ func TestGetValidAuthorizations2(t *testing.T) {
validUntil: fc.Now().Add(time.Hour),
wantIDs: []int64{aaa},
},
{
name: "happy path, dns-account-01 challenge",
regID: 3,
identifiers: []*corepb.Identifier{identifier.NewDNS("aaa").ToProto()},
profile: "test",
validUntil: fc.Now().Add(time.Hour),
wantIDs: []int64{dac},
},
{
name: "different identifier type",
regID: 1,

View File

@ -22,7 +22,8 @@
"maximumRevocations": 15,
"findCertificatesBatchSize": 10,
"interval": "50ms",
"backoffIntervalMax": "2s"
"backoffIntervalMax": "2s",
"maxExpectedReplicationLag": "100ms"
},
"syslog": {
"stdoutlevel": 4,

View File

@ -23,7 +23,8 @@
"maximumRevocations": 15,
"findCertificatesBatchSize": 10,
"interval": "50ms",
"backoffIntervalMax": "2s"
"backoffIntervalMax": "2s",
"maxExpectedReplicationLag": "100ms"
},
"syslog": {
"stdoutlevel": 4,

View File
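The new `maxExpectedReplicationLag` value is a duration: conceptually, the revoker waits that long before searching, so that freshly written rows have had time to reach read replicas. A hedged Go sketch of parsing and applying such a delay (struct and field names here are illustrative, not Boulder's actual config types):

```go
// Illustrative only: parse a duration from JSON config and sleep for it before
// querying, so recently replicated rows are visible. Names are hypothetical.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type badKeyRevokerConfig struct {
	Interval                  string `json:"interval"`
	MaxExpectedReplicationLag string `json:"maxExpectedReplicationLag"`
}

func main() {
	raw := []byte(`{"interval": "50ms", "maxExpectedReplicationLag": "100ms"}`)

	var cfg badKeyRevokerConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}

	lag, err := time.ParseDuration(cfg.MaxExpectedReplicationLag)
	if err != nil {
		panic(err)
	}

	time.Sleep(lag) // wait out the expected replication lag before searching
	fmt.Println("searching for certificates to revoke after", lag)
}
```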

@ -60,10 +60,11 @@ boulder_setup:
-git clone --depth 1 https://github.com/letsencrypt/boulder.git $(BOULDER_PATH)
(cd $(BOULDER_PATH); git checkout -f main && git reset --hard HEAD && git pull -q)
make boulder_stop
(cd $(BOULDER_PATH); docker compose run --rm bsetup)
# runs an instance of boulder
boulder_start:
docker-compose -f $(BOULDER_PATH)/docker-compose.yml -f docker-compose.boulder-temp.yml up -d
docker-compose -f $(BOULDER_PATH)/docker-compose.yml -f $(BOULDER_PATH)/docker-compose.next.yml -f docker-compose.boulder-temp.yml up -d
# waits until boulder responds
boulder_wait:

View File

@ -3,4 +3,9 @@ testdata/*
.idea/
.DS_Store
*.tar.gz
*.dic
*.dic
redis8tests.sh
coverage.txt
**/coverage.txt
.vscode
tmp/*

View File

@ -1,3 +1,34 @@
version: "2"
run:
timeout: 5m
tests: false
linters:
settings:
staticcheck:
checks:
- all
# Incorrect or missing package comment.
# https://staticcheck.dev/docs/checks/#ST1000
- -ST1000
# Omit embedded fields from selector expression.
# https://staticcheck.dev/docs/checks/#QF1008
- -QF1008
- -ST1003
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
paths:
- third_party$
- builtin$
- examples$
formatters:
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$

View File

@ -32,20 +32,33 @@ Here's how to get started with your code contribution:
1. Create your own fork of go-redis
2. Do the changes in your fork
3. If you need a development environment, run `make test`. Note: this clones and builds the latest release of [redis](https://redis.io). You also need a redis-stack-server docker, in order to run the capabilities tests. This can be started by running:
```docker run -p 6379:6379 -it redis/redis-stack-server:edge```
4. While developing, make sure the tests pass by running `make tests`
3. If you need a development environment, run `make docker.start`.
> Note: this clones and builds the docker containers specified in `docker-compose.yml`. To understand more about
> the infrastructure that will be started, you can check the `docker-compose.yml`. You also have the possibility
> to specify the redis image that will be pulled with the env variable `CLIENT_LIBS_TEST_IMAGE`.
> By default the docker image that will be pulled and started is `redislabs/client-libs-test:rs-7.4.0-v2`.
> If you want to test with newer Redis version, using a newer version of `redislabs/client-libs-test` should work out of the box.
4. While developing, make sure the tests pass by running `make test` (if you have the docker containers running, `make test.ci` may be sufficient).
> Note: `make test` will try to start all containers, run the tests with `make test.ci` and then stop all containers.
5. If you like the change and think the project could use it, send a
pull request
To see what else is part of the automation, run `invoke -l`
## Testing
Call `make test` to run all tests, including linters.
### Setting up Docker
To run the tests, you need to have Docker installed and running. If you are using a host OS that does not support
docker host networks out of the box (e.g. Windows, OSX), you need to set up Docker Desktop and enable docker host networks.
### Running tests
Call `make test` to run all tests.
Continuous Integration uses these same wrappers to run all of these
tests against multiple versions of python. Feel free to test your
tests against multiple versions of redis. Feel free to test your
changes against all the go versions supported, as declared by the
[build.yml](./.github/workflows/build.yml) file.
@ -99,3 +112,7 @@ The core team regularly looks at pull requests. We will provide
feedback as soon as possible. After receiving our feedback, please respond
within two weeks. After that time, we may close your PR if it isn't
showing any activity.
## Support
Maintainers can provide limited support to contributors on discord: https://discord.gg/W4txy5AeKM

View File

@ -1,42 +1,59 @@
GO_MOD_DIRS := $(shell find . -type f -name 'go.mod' -exec dirname {} \; | sort)
test: testdeps
$(eval GO_VERSION := $(shell go version | cut -d " " -f 3 | cut -d. -f2))
docker.start:
docker compose --profile all up -d --quiet-pull
docker.stop:
docker compose --profile all down
test:
$(MAKE) docker.start
@if [ -z "$(REDIS_VERSION)" ]; then \
echo "REDIS_VERSION not set, running all tests"; \
$(MAKE) test.ci; \
else \
MAJOR_VERSION=$$(echo "$(REDIS_VERSION)" | cut -d. -f1); \
if [ "$$MAJOR_VERSION" -ge 8 ]; then \
echo "REDIS_VERSION $(REDIS_VERSION) >= 8, running all tests"; \
$(MAKE) test.ci; \
else \
echo "REDIS_VERSION $(REDIS_VERSION) < 8, skipping vector_sets tests"; \
$(MAKE) test.ci.skip-vectorsets; \
fi; \
fi
$(MAKE) docker.stop
test.ci:
set -e; for dir in $(GO_MOD_DIRS); do \
if echo "$${dir}" | grep -q "./example" && [ "$(GO_VERSION)" = "19" ]; then \
echo "Skipping go test in $${dir} due to Go version 1.19 and dir contains ./example"; \
continue; \
fi; \
echo "go test in $${dir}"; \
(cd "$${dir}" && \
go mod tidy -compat=1.18 && \
go test && \
go test ./... -short -race && \
go test ./... -run=NONE -bench=. -benchmem && \
env GOOS=linux GOARCH=386 go test && \
go test -coverprofile=coverage.txt -covermode=atomic ./... && \
go vet); \
go vet && \
go test -v -coverprofile=coverage.txt -covermode=atomic ./... -race -skip Example); \
done
cd internal/customvet && go build .
go vet -vettool ./internal/customvet/customvet
testdeps: testdata/redis/src/redis-server
test.ci.skip-vectorsets:
set -e; for dir in $(GO_MOD_DIRS); do \
echo "go test in $${dir} (skipping vector sets)"; \
(cd "$${dir}" && \
go mod tidy -compat=1.18 && \
go vet && \
go test -v -coverprofile=coverage.txt -covermode=atomic ./... -race \
-run '^(?!.*(?:VectorSet|vectorset|ExampleClient_vectorset)).*$$' -skip Example); \
done
cd internal/customvet && go build .
go vet -vettool ./internal/customvet/customvet
bench: testdeps
go test ./... -test.run=NONE -test.bench=. -test.benchmem
bench:
go test ./... -test.run=NONE -test.bench=. -test.benchmem -skip Example
.PHONY: all test testdeps bench fmt
.PHONY: all test test.ci test.ci.skip-vectorsets bench fmt
build:
go build .
testdata/redis:
mkdir -p $@
wget -qO- https://download.redis.io/releases/redis-7.4-rc2.tar.gz | tar xvz --strip-components=1 -C $@
testdata/redis/src/redis-server: testdata/redis
cd $< && make all
fmt:
gofumpt -w ./
goimports -w -local github.com/redis/go-redis ./

View File

@ -3,16 +3,30 @@
[![build workflow](https://github.com/redis/go-redis/actions/workflows/build.yml/badge.svg)](https://github.com/redis/go-redis/actions)
[![PkgGoDev](https://pkg.go.dev/badge/github.com/redis/go-redis/v9)](https://pkg.go.dev/github.com/redis/go-redis/v9?tab=doc)
[![Documentation](https://img.shields.io/badge/redis-documentation-informational)](https://redis.uptrace.dev/)
[![Go Report Card](https://goreportcard.com/badge/github.com/redis/go-redis/v9)](https://goreportcard.com/report/github.com/redis/go-redis/v9)
[![codecov](https://codecov.io/github/redis/go-redis/graph/badge.svg?token=tsrCZKuSSw)](https://codecov.io/github/redis/go-redis)
[![Chat](https://discordapp.com/api/guilds/752070105847955518/widget.png)](https://discord.gg/rWtp5Aj)
> go-redis is brought to you by :star: [**uptrace/uptrace**](https://github.com/uptrace/uptrace).
> Uptrace is an open-source APM tool that supports distributed tracing, metrics, and logs. You can
> use it to monitor applications and set up automatic alerts to receive notifications via email,
> Slack, Telegram, and others.
>
> See [OpenTelemetry](https://github.com/redis/go-redis/tree/master/example/otel) example which
> demonstrates how you can use Uptrace to monitor go-redis.
[![Discord](https://img.shields.io/discord/697882427875393627.svg?style=social&logo=discord)](https://discord.gg/W4txy5AeKM)
[![Twitch](https://img.shields.io/twitch/status/redisinc?style=social)](https://www.twitch.tv/redisinc)
[![YouTube](https://img.shields.io/youtube/channel/views/UCD78lHSwYqMlyetR0_P4Vig?style=social)](https://www.youtube.com/redisinc)
[![Twitter](https://img.shields.io/twitter/follow/redisinc?style=social)](https://twitter.com/redisinc)
[![Stack Exchange questions](https://img.shields.io/stackexchange/stackoverflow/t/go-redis?style=social&logo=stackoverflow&label=Stackoverflow)](https://stackoverflow.com/questions/tagged/go-redis)
> go-redis is the official Redis client library for the Go programming language. It offers a straightforward interface for interacting with Redis servers.
## Supported versions
In `go-redis` we are aiming to support the last three releases of Redis. Currently, this means we do support:
- [Redis 7.2](https://raw.githubusercontent.com/redis/redis/7.2/00-RELEASENOTES) - using Redis Stack 7.2 for modules support
- [Redis 7.4](https://raw.githubusercontent.com/redis/redis/7.4/00-RELEASENOTES) - using Redis Stack 7.4 for modules support
- [Redis 8.0](https://raw.githubusercontent.com/redis/redis/8.0/00-RELEASENOTES) - using Redis CE 8.0 where modules are included
Although the `go.mod` states it requires at minimum `go 1.18`, our CI is configured to run the tests against all three
versions of Redis and the latest two versions of Go ([1.23](https://go.dev/doc/devel/release#go1.23.0),
[1.24](https://go.dev/doc/devel/release#go1.24.0)). We observe that some module-related tests may not pass with
Redis Stack 7.2 and some commands are changed with Redis CE 8.0.
Please do refer to the documentation and the tests if you experience any issues. We do plan to update the go version
in the `go.mod` to `go 1.24` in one of the next releases.
## How do I Redis?
@ -36,7 +50,7 @@
## Resources
- [Discussions](https://github.com/redis/go-redis/discussions)
- [Chat](https://discord.gg/rWtp5Aj)
- [Chat](https://discord.gg/W4txy5AeKM)
- [Reference](https://pkg.go.dev/github.com/redis/go-redis/v9)
- [Examples](https://pkg.go.dev/github.com/redis/go-redis/v9#pkg-examples)
@ -54,6 +68,7 @@ key value NoSQL database that uses RocksDB as storage engine and is compatible w
- Redis commands except QUIT and SYNC.
- Automatic connection pooling.
- [StreamingCredentialsProvider (e.g. entra id, oauth)](#1-streaming-credentials-provider-highest-priority) (experimental)
- [Pub/Sub](https://redis.uptrace.dev/guide/go-redis-pubsub.html).
- [Pipelines and transactions](https://redis.uptrace.dev/guide/go-redis-pipelines.html).
- [Scripting](https://redis.uptrace.dev/guide/lua-scripting.html).
@ -122,17 +137,121 @@ func ExampleClient() {
}
```
The above can be modified to specify the version of the RESP protocol by adding the `protocol`
option to the `Options` struct:
### Authentication
The Redis client supports multiple ways to provide authentication credentials, with a clear priority order. Here are the available options:
#### 1. Streaming Credentials Provider (Highest Priority) - Experimental feature
The streaming credentials provider allows for dynamic credential updates during the connection lifetime. This is particularly useful for managed identity services and token-based authentication.
```go
rdb := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
Password: "", // no password set
DB: 0, // use default DB
Protocol: 3, // specify 2 for RESP 2 or 3 for RESP 3
})
type StreamingCredentialsProvider interface {
Subscribe(listener CredentialsListener) (Credentials, UnsubscribeFunc, error)
}
type CredentialsListener interface {
OnNext(credentials Credentials) // Called when credentials are updated
OnError(err error) // Called when an error occurs
}
type Credentials interface {
BasicAuth() (username string, password string)
RawCredentials() string
}
```
Example usage:
```go
rdb := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
StreamingCredentialsProvider: &MyCredentialsProvider{},
})
```
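For orientation, here is a minimal provider satisfying the interface above: it hands the listener one fixed credential and never rotates it. The `staticProvider` type is hypothetical; the `auth` package (`github.com/redis/go-redis/v9/auth`) is the one vendored later in this diff.

```go
package main

import (
	"github.com/redis/go-redis/v9"
	"github.com/redis/go-redis/v9/auth"
)

// staticProvider is a hypothetical, minimal StreamingCredentialsProvider: it
// hands the listener one fixed credential and never updates it afterwards.
type staticProvider struct {
	creds auth.Credentials
}

func (p *staticProvider) Subscribe(listener auth.CredentialsListener) (auth.Credentials, auth.UnsubscribeFunc, error) {
	listener.OnNext(p.creds) // deliver the current credentials immediately
	return p.creds, func() error { return nil }, nil
}

func main() {
	rdb := redis.NewClient(&redis.Options{
		Addr:                         "localhost:6379",
		StreamingCredentialsProvider: &staticProvider{creds: auth.NewBasicCredentials("user", "pass")},
	})
	defer rdb.Close()
}
```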
**Note:** The streaming credentials provider can be used with [go-redis-entraid](https://github.com/redis/go-redis-entraid) to enable Entra ID (formerly Azure AD) authentication. This allows for seamless integration with Azure's managed identity services and token-based authentication.
Example with Entra ID:
```go
import (
"github.com/redis/go-redis/v9"
"github.com/redis/go-redis-entraid"
)
// Create an Entra ID credentials provider
provider := entraid.NewDefaultAzureIdentityProvider()
// Configure Redis client with Entra ID authentication
rdb := redis.NewClient(&redis.Options{
Addr: "your-redis-server.redis.cache.windows.net:6380",
StreamingCredentialsProvider: provider,
TLSConfig: &tls.Config{
MinVersion: tls.VersionTLS12,
},
})
```
#### 2. Context-based Credentials Provider
The context-based provider allows credentials to be determined at the time of each operation, using the context.
```go
rdb := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
CredentialsProviderContext: func(ctx context.Context) (string, string, error) {
// Return username, password, and any error
return "user", "pass", nil
},
})
```
#### 3. Regular Credentials Provider
A simple function-based provider that returns static credentials.
```go
rdb := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
CredentialsProvider: func() (string, string) {
// Return username and password
return "user", "pass"
},
})
```
#### 4. Username/Password Fields (Lowest Priority)
The most basic way to provide credentials is through the `Username` and `Password` fields in the options.
```go
rdb := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
Username: "user",
Password: "pass",
})
```
#### Priority Order
The client will use credentials in the following priority order:
1. Streaming Credentials Provider (if set)
2. Context-based Credentials Provider (if set)
3. Regular Credentials Provider (if set)
4. Username/Password fields (if set)
If none of these are set, the client will attempt to connect without authentication.
### Protocol Version
The client supports both RESP2 and RESP3 protocols. You can specify the protocol version in the options:
```go
rdb := redis.NewClient(&redis.Options{
Addr: "localhost:6379",
Password: "", // no password set
DB: 0, // use default DB
Protocol: 3, // specify 2 for RESP 2 or 3 for RESP 3
})
```
### Connecting via a redis url
@ -159,6 +278,24 @@ func ExampleClient() *redis.Client {
```
### Instrument with OpenTelemetry
```go
import (
	"errors"
	"log"

	"github.com/redis/go-redis/v9"
	"github.com/redis/go-redis/extra/redisotel/v9"
)

func main() {
	...
	rdb := redis.NewClient(&redis.Options{...})

	if err := errors.Join(redisotel.InstrumentTracing(rdb), redisotel.InstrumentMetrics(rdb)); err != nil {
		log.Fatal(err)
	}
}
```
### Advanced Configuration
@ -203,9 +340,30 @@ res1, err := client.FTSearchWithArgs(ctx, "txt", "foo bar", &redis.FTSearchOptio
val1 := client.FTSearchWithArgs(ctx, "txt", "foo bar", &redis.FTSearchOptions{}).RawVal()
```
## Contributing
#### Redis-Search Default Dialect
Please see [our contributing guidelines](CONTRIBUTING.md) to help us improve this library!
In the Redis-Search module, **the default dialect is 2**. If needed, you can explicitly specify a different dialect using the appropriate configuration in your queries.
**Important**: Be aware that the query dialect may impact the results returned. If needed, you can revert to a different dialect version by passing the desired dialect in the arguments of the command you want to execute.
For example:
```
res2, err := rdb.FTSearchWithArgs(ctx,
"idx:bicycle",
"@pickup_zone:[CONTAINS $bike]",
&redis.FTSearchOptions{
Params: map[string]interface{}{
"bike": "POINT(-0.1278 51.5074)",
},
DialectVersion: 3,
},
).Result()
```
You can find further details in the [query dialect documentation](https://redis.io/docs/latest/develop/interact/search-and-query/advanced-concepts/dialects/).
## Contributing
We welcome contributions to the go-redis library! If you have a bug fix, feature request, or improvement, please open an issue or pull request on GitHub.
We appreciate your help in making go-redis better for everyone.
If you are interested in contributing to the go-redis library, please check out our [contributing guidelines](CONTRIBUTING.md) for more information on how to get started.
## Look and feel
@ -285,6 +443,14 @@ REDIS_PORT=9999 go test <your options>
## Contributors
> The go-redis project was originally initiated by :star: [**uptrace/uptrace**](https://github.com/uptrace/uptrace).
> Uptrace is an open-source APM tool that supports distributed tracing, metrics, and logs. You can
> use it to monitor applications and set up automatic alerts to receive notifications via email,
> Slack, Telegram, and others.
>
> See [OpenTelemetry](https://github.com/redis/go-redis/tree/master/example/otel) example which
> demonstrates how you can use Uptrace to monitor go-redis.
Thanks to all the people who already contributed!
<a href="https://github.com/redis/go-redis/graphs/contributors">

163
vendor/github.com/redis/go-redis/v9/RELEASE-NOTES.md generated vendored Normal file
View File

@ -0,0 +1,163 @@
# Release Notes
# 9.10.0 (2025-06-06)
## 🚀 Highlights
`go-redis` now supports [vector sets](https://redis.io/docs/latest/develop/data-types/vector-sets/). This data type is marked
as "in preview" in Redis and its support in `go-redis` is marked as experimental. You can find examples in the documentation and
in the `doctests` folder.
# Changes
## 🚀 New Features
- feat: support vectorset ([#3375](https://github.com/redis/go-redis/pull/3375))
## 🧰 Maintenance
- Add the missing NewFloatSliceResult for testing ([#3393](https://github.com/redis/go-redis/pull/3393))
- DOC-5078 vector set examples ([#3394](https://github.com/redis/go-redis/pull/3394))
## Contributors
We'd like to thank all the contributors who worked on this release!
[@AndBobsYourUncle](https://github.com/AndBobsYourUncle), [@andy-stark-redis](https://github.com/andy-stark-redis), [@fukua95](https://github.com/fukua95) and [@ndyakov](https://github.com/ndyakov)
# 9.9.0 (2025-05-27)
## 🚀 Highlights
- **Token-based Authentication**: Added `StreamingCredentialsProvider` for dynamic credential updates (experimental)
- Can be used with [go-redis-entraid](https://github.com/redis/go-redis-entraid) for Azure AD authentication
- **Connection Statistics**: Added connection waiting statistics for better monitoring
- **Failover Improvements**: Added `ParseFailoverURL` for easier failover configuration
- **Ring Client Enhancements**: Added shard access methods for better Pub/Sub management
## ✨ New Features
- Added `StreamingCredentialsProvider` for token-based authentication ([#3320](https://github.com/redis/go-redis/pull/3320))
- Supports dynamic credential updates
- Includes connection close hooks
- Note: Currently marked as experimental
- Added `ParseFailoverURL` for parsing failover URLs ([#3362](https://github.com/redis/go-redis/pull/3362))
- Added connection waiting statistics ([#2804](https://github.com/redis/go-redis/pull/2804))
- Added new utility functions:
- `ParseFloat` and `MustParseFloat` in public utils package ([#3371](https://github.com/redis/go-redis/pull/3371))
- Unit tests for `Atoi`, `ParseInt`, `ParseUint`, and `ParseFloat` ([#3377](https://github.com/redis/go-redis/pull/3377))
- Added Ring client shard access methods:
- `GetShardClients()` to retrieve all active shard clients
- `GetShardClientForKey(key string)` to get the shard client for a specific key ([#3388](https://github.com/redis/go-redis/pull/3388))
## 🐛 Bug Fixes
- Fixed routing reads to loading slave nodes ([#3370](https://github.com/redis/go-redis/pull/3370))
- Added support for nil lag in XINFO GROUPS ([#3369](https://github.com/redis/go-redis/pull/3369))
- Fixed pool acquisition timeout issues ([#3381](https://github.com/redis/go-redis/pull/3381))
- Optimized unnecessary copy operations ([#3376](https://github.com/redis/go-redis/pull/3376))
## 📚 Documentation
- Updated documentation for XINFO GROUPS with nil lag support ([#3369](https://github.com/redis/go-redis/pull/3369))
- Added package-level comments for new features
## ⚡ Performance and Reliability
- Optimized `ReplaceSpaces` function ([#3383](https://github.com/redis/go-redis/pull/3383))
- Set default value for `Options.Protocol` in `init()` ([#3387](https://github.com/redis/go-redis/pull/3387))
- Exported pool errors for public consumption ([#3380](https://github.com/redis/go-redis/pull/3380))
## 🔧 Dependencies and Infrastructure
- Updated Redis CI to version 8.0.1 ([#3372](https://github.com/redis/go-redis/pull/3372))
- Updated spellcheck GitHub Actions ([#3389](https://github.com/redis/go-redis/pull/3389))
- Removed unused parameters ([#3382](https://github.com/redis/go-redis/pull/3382), [#3384](https://github.com/redis/go-redis/pull/3384))
## 🧪 Testing
- Added unit tests for pool acquisition timeout ([#3381](https://github.com/redis/go-redis/pull/3381))
- Added unit tests for utility functions ([#3377](https://github.com/redis/go-redis/pull/3377))
## 👥 Contributors
We would like to thank all the contributors who made this release possible:
[@ndyakov](https://github.com/ndyakov), [@ofekshenawa](https://github.com/ofekshenawa), [@LINKIWI](https://github.com/LINKIWI), [@iamamirsalehi](https://github.com/iamamirsalehi), [@fukua95](https://github.com/fukua95), [@lzakharov](https://github.com/lzakharov), [@DengY11](https://github.com/DengY11)
## 📝 Changelog
For a complete list of changes, see the [full changelog](https://github.com/redis/go-redis/compare/v9.8.0...v9.9.0).
# 9.8.0 (2025-04-30)
## 🚀 Highlights
- **Redis 8 Support**: Full compatibility with Redis 8.0, including testing and CI integration
- **Enhanced Hash Operations**: Added support for new hash commands (`HGETDEL`, `HGETEX`, `HSETEX`) and `HSTRLEN` command
- **Search Improvements**: Enabled Search DIALECT 2 by default and added `CountOnly` argument for `FT.Search`
## ✨ New Features
- Added support for new hash commands: `HGETDEL`, `HGETEX`, `HSETEX` ([#3305](https://github.com/redis/go-redis/pull/3305))
- Added `HSTRLEN` command for hash operations ([#2843](https://github.com/redis/go-redis/pull/2843))
- Added `Do` method for raw query by single connection from `pool.Conn()` ([#3182](https://github.com/redis/go-redis/pull/3182))
- Prevent false-positive marshaling by treating zero time.Time as empty in isEmptyValue ([#3273](https://github.com/redis/go-redis/pull/3273))
- Added FailoverClusterClient support for Universal client ([#2794](https://github.com/redis/go-redis/pull/2794))
- Added support for cluster mode with `IsClusterMode` config parameter ([#3255](https://github.com/redis/go-redis/pull/3255))
- Added client name support in `HELLO` RESP handshake ([#3294](https://github.com/redis/go-redis/pull/3294))
- **Enabled Search DIALECT 2 by default** ([#3213](https://github.com/redis/go-redis/pull/3213))
- Added read-only option for failover configurations ([#3281](https://github.com/redis/go-redis/pull/3281))
- Added `CountOnly` argument for `FT.Search` to use `LIMIT 0 0` ([#3338](https://github.com/redis/go-redis/pull/3338))
- Added `DB` option support in `NewFailoverClusterClient` ([#3342](https://github.com/redis/go-redis/pull/3342))
- Added `nil` check for the options when creating a client ([#3363](https://github.com/redis/go-redis/pull/3363))
## 🐛 Bug Fixes
- Fixed `PubSub` concurrency safety issues ([#3360](https://github.com/redis/go-redis/pull/3360))
- Fixed panic caused when argument is `nil` ([#3353](https://github.com/redis/go-redis/pull/3353))
- Improved error handling when fetching master node from sentinels ([#3349](https://github.com/redis/go-redis/pull/3349))
- Fixed connection pool timeout issues and increased retries ([#3298](https://github.com/redis/go-redis/pull/3298))
- Fixed context cancellation error leading to connection spikes on Primary instances ([#3190](https://github.com/redis/go-redis/pull/3190))
- Fixed RedisCluster client to consider `MASTERDOWN` a retriable error ([#3164](https://github.com/redis/go-redis/pull/3164))
- Fixed tracing to show complete commands instead of truncated versions ([#3290](https://github.com/redis/go-redis/pull/3290))
- Fixed OpenTelemetry instrumentation to prevent multiple span reporting ([#3168](https://github.com/redis/go-redis/pull/3168))
- Fixed `FT.Search` Limit argument and added `CountOnly` argument for limit 0 0 ([#3338](https://github.com/redis/go-redis/pull/3338))
- Fixed missing command in interface ([#3344](https://github.com/redis/go-redis/pull/3344))
- Fixed slot calculation for `COUNTKEYSINSLOT` command ([#3327](https://github.com/redis/go-redis/pull/3327))
- Updated PubSub implementation with correct context ([#3329](https://github.com/redis/go-redis/pull/3329))
## 📚 Documentation
- Added hash search examples ([#3357](https://github.com/redis/go-redis/pull/3357))
- Fixed documentation comments ([#3351](https://github.com/redis/go-redis/pull/3351))
- Added `CountOnly` search example ([#3345](https://github.com/redis/go-redis/pull/3345))
- Added examples for list commands: `LLEN`, `LPOP`, `LPUSH`, `LRANGE`, `RPOP`, `RPUSH` ([#3234](https://github.com/redis/go-redis/pull/3234))
- Added `SADD` and `SMEMBERS` command examples ([#3242](https://github.com/redis/go-redis/pull/3242))
- Updated `README.md` to use Redis Discord guild ([#3331](https://github.com/redis/go-redis/pull/3331))
- Updated `HExpire` command documentation ([#3355](https://github.com/redis/go-redis/pull/3355))
- Featured OpenTelemetry instrumentation more prominently ([#3316](https://github.com/redis/go-redis/pull/3316))
- Updated `README.md` with additional information ([#310ce55](https://github.com/redis/go-redis/commit/310ce55))
## ⚡ Performance and Reliability
- Bound connection pool background dials to configured dial timeout ([#3089](https://github.com/redis/go-redis/pull/3089))
- Ensured context isn't exhausted via concurrent query ([#3334](https://github.com/redis/go-redis/pull/3334))
## 🔧 Dependencies and Infrastructure
- Updated testing image to Redis 8.0-RC2 ([#3361](https://github.com/redis/go-redis/pull/3361))
- Enabled CI for Redis CE 8.0 ([#3274](https://github.com/redis/go-redis/pull/3274))
- Updated various dependencies:
- Bumped golangci/golangci-lint-action from 6.5.0 to 7.0.0 ([#3354](https://github.com/redis/go-redis/pull/3354))
- Bumped rojopolis/spellcheck-github-actions ([#3336](https://github.com/redis/go-redis/pull/3336))
- Bumped golang.org/x/net in example/otel ([#3308](https://github.com/redis/go-redis/pull/3308))
- Migrated golangci-lint configuration to v2 format ([#3354](https://github.com/redis/go-redis/pull/3354))
## ⚠️ Breaking Changes
- **Enabled Search DIALECT 2 by default** ([#3213](https://github.com/redis/go-redis/pull/3213))
- Dropped RedisGears (Triggers and Functions) support ([#3321](https://github.com/redis/go-redis/pull/3321))
- Dropped FT.PROFILE command that was never enabled ([#3323](https://github.com/redis/go-redis/pull/3323))
## 🔒 Security
- Fixed network error handling on SETINFO (CVE-2025-29923) ([#3295](https://github.com/redis/go-redis/pull/3295))
## 🧪 Testing
- Added integration tests for Redis 8 behavior changes in Redis Search ([#3337](https://github.com/redis/go-redis/pull/3337))
- Added vector types INT8 and UINT8 tests ([#3299](https://github.com/redis/go-redis/pull/3299))
- Added test codes for search_commands.go ([#3285](https://github.com/redis/go-redis/pull/3285))
- Fixed example test sorting ([#3292](https://github.com/redis/go-redis/pull/3292))
## 👥 Contributors
We would like to thank all the contributors who made this release possible:
[@alexander-menshchikov](https://github.com/alexander-menshchikov), [@EXPEbdodla](https://github.com/EXPEbdodla), [@afti](https://github.com/afti), [@dmaier-redislabs](https://github.com/dmaier-redislabs), [@four_leaf_clover](https://github.com/four_leaf_clover), [@alohaglenn](https://github.com/alohaglenn), [@gh73962](https://github.com/gh73962), [@justinmir](https://github.com/justinmir), [@LINKIWI](https://github.com/LINKIWI), [@liushuangbill](https://github.com/liushuangbill), [@golang88](https://github.com/golang88), [@gnpaone](https://github.com/gnpaone), [@ndyakov](https://github.com/ndyakov), [@nikolaydubina](https://github.com/nikolaydubina), [@oleglacto](https://github.com/oleglacto), [@andy-stark-redis](https://github.com/andy-stark-redis), [@rodneyosodo](https://github.com/rodneyosodo), [@dependabot](https://github.com/dependabot), [@rfyiamcool](https://github.com/rfyiamcool), [@frankxjkuang](https://github.com/frankxjkuang), [@fukua95](https://github.com/fukua95), [@soleymani-milad](https://github.com/soleymani-milad), [@ofekshenawa](https://github.com/ofekshenawa), [@khasanovbi](https://github.com/khasanovbi)

View File

@ -4,8 +4,20 @@ import "context"
type ACLCmdable interface {
ACLDryRun(ctx context.Context, username string, command ...interface{}) *StringCmd
ACLLog(ctx context.Context, count int64) *ACLLogCmd
ACLLogReset(ctx context.Context) *StatusCmd
ACLSetUser(ctx context.Context, username string, rules ...string) *StatusCmd
ACLDelUser(ctx context.Context, username string) *IntCmd
ACLList(ctx context.Context) *StringSliceCmd
ACLCat(ctx context.Context) *StringSliceCmd
ACLCatArgs(ctx context.Context, options *ACLCatArgs) *StringSliceCmd
}
type ACLCatArgs struct {
Category string
}
func (c cmdable) ACLDryRun(ctx context.Context, username string, command ...interface{}) *StringCmd {
@ -33,3 +45,45 @@ func (c cmdable) ACLLogReset(ctx context.Context) *StatusCmd {
_ = c(ctx, cmd)
return cmd
}
func (c cmdable) ACLDelUser(ctx context.Context, username string) *IntCmd {
cmd := NewIntCmd(ctx, "acl", "deluser", username)
_ = c(ctx, cmd)
return cmd
}
func (c cmdable) ACLSetUser(ctx context.Context, username string, rules ...string) *StatusCmd {
args := make([]interface{}, 3+len(rules))
args[0] = "acl"
args[1] = "setuser"
args[2] = username
for i, rule := range rules {
args[i+3] = rule
}
cmd := NewStatusCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
func (c cmdable) ACLList(ctx context.Context) *StringSliceCmd {
cmd := NewStringSliceCmd(ctx, "acl", "list")
_ = c(ctx, cmd)
return cmd
}
func (c cmdable) ACLCat(ctx context.Context) *StringSliceCmd {
cmd := NewStringSliceCmd(ctx, "acl", "cat")
_ = c(ctx, cmd)
return cmd
}
func (c cmdable) ACLCatArgs(ctx context.Context, options *ACLCatArgs) *StringSliceCmd {
// If a category is passed, build a new command; otherwise fall back to the ACLCat method.
if options != nil && options.Category != "" {
cmd := NewStringSliceCmd(ctx, "acl", "cat", options.Category)
_ = c(ctx, cmd)
return cmd
}
return c.ACLCat(ctx)
}
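A short usage sketch for the ACL helpers added above; the ACL rule strings are illustrative examples of Redis ACL syntax, and the address is a placeholder:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Create (or update) a user, list users, inspect a category, then delete the user.
	if err := rdb.ACLSetUser(ctx, "app-user", "on", ">s3cret", "~app:*", "+@read").Err(); err != nil {
		panic(err)
	}
	users, _ := rdb.ACLList(ctx).Result()                                            // one entry per configured user
	readCmds, _ := rdb.ACLCatArgs(ctx, &redis.ACLCatArgs{Category: "read"}).Result() // commands in the "read" category
	deleted, _ := rdb.ACLDelUser(ctx, "app-user").Result()                           // number of users removed

	fmt.Println(len(users), len(readCmds), deleted)
}
```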

61
vendor/github.com/redis/go-redis/v9/auth/auth.go generated vendored Normal file
View File

@ -0,0 +1,61 @@
// Package auth provides authentication-related interfaces and types.
// It also includes a basic implementation of credentials using username and password.
package auth
// StreamingCredentialsProvider is an interface that defines the methods for a streaming credentials provider.
// It is used to provide credentials for authentication.
// The CredentialsListener is used to receive updates when the credentials change.
type StreamingCredentialsProvider interface {
// Subscribe subscribes to the credentials provider for updates.
// It returns the current credentials, a cancel function to unsubscribe from the provider,
// and an error if any.
// TODO(ndyakov): Should we add context to the Subscribe method?
Subscribe(listener CredentialsListener) (Credentials, UnsubscribeFunc, error)
}
// UnsubscribeFunc is a function that is used to cancel the subscription to the credentials provider.
// It is used to unsubscribe from the provider when the credentials are no longer needed.
type UnsubscribeFunc func() error
// CredentialsListener is an interface that defines the methods for a credentials listener.
// It is used to receive updates when the credentials change.
// The OnNext method is called when the credentials change.
// The OnError method is called when an error occurs while requesting the credentials.
type CredentialsListener interface {
OnNext(credentials Credentials)
OnError(err error)
}
// Credentials is an interface that defines the methods for credentials.
// It is used to provide the credentials for authentication.
type Credentials interface {
// BasicAuth returns the username and password for basic authentication.
BasicAuth() (username string, password string)
// RawCredentials returns the raw credentials as a string.
// This can be used to extract the username and password from the raw credentials or
// additional information if present in the token.
RawCredentials() string
}
type basicAuth struct {
username string
password string
}
// RawCredentials returns the raw credentials as a string.
func (b *basicAuth) RawCredentials() string {
return b.username + ":" + b.password
}
// BasicAuth returns the username and password for basic authentication.
func (b *basicAuth) BasicAuth() (username string, password string) {
return b.username, b.password
}
// NewBasicCredentials creates a new Credentials object from the given username and password.
func NewBasicCredentials(username, password string) Credentials {
return &basicAuth{
username: username,
password: password,
}
}

View File

@ -0,0 +1,47 @@
package auth
// ReAuthCredentialsListener is a struct that implements the CredentialsListener interface.
// It is used to re-authenticate the credentials when they are updated.
// It contains:
// - reAuth: a function that takes the new credentials and returns an error if any.
// - onErr: a function that takes an error and handles it.
type ReAuthCredentialsListener struct {
reAuth func(credentials Credentials) error
onErr func(err error)
}
// OnNext is called when the credentials are updated.
// It calls the reAuth function with the new credentials.
// If the reAuth function returns an error, it calls the onErr function with the error.
func (c *ReAuthCredentialsListener) OnNext(credentials Credentials) {
if c.reAuth == nil {
return
}
err := c.reAuth(credentials)
if err != nil {
c.OnError(err)
}
}
// OnError is called when an error occurs.
// It can be called from both the credentials provider and the reAuth function.
func (c *ReAuthCredentialsListener) OnError(err error) {
if c.onErr == nil {
return
}
c.onErr(err)
}
// NewReAuthCredentialsListener creates a new ReAuthCredentialsListener.
// Implements the auth.CredentialsListener interface.
func NewReAuthCredentialsListener(reAuth func(credentials Credentials) error, onErr func(err error)) *ReAuthCredentialsListener {
return &ReAuthCredentialsListener{
reAuth: reAuth,
onErr: onErr,
}
}
// Ensure ReAuthCredentialsListener implements the CredentialsListener interface.
var _ CredentialsListener = (*ReAuthCredentialsListener)(nil)
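A brief sketch of wiring the listener up; the `reAuth` and `onErr` callbacks here are placeholders (a real `reAuth` would re-issue AUTH on live connections):

```go
package main

import (
	"log"

	"github.com/redis/go-redis/v9/auth"
)

func main() {
	// Placeholder callbacks for illustration only.
	listener := auth.NewReAuthCredentialsListener(
		func(credentials auth.Credentials) error {
			user, _ := credentials.BasicAuth()
			log.Printf("re-authenticating as %q", user)
			return nil
		},
		func(err error) {
			log.Printf("credentials stream error: %v", err)
		},
	)

	// Feed it an update, as a streaming credentials provider would.
	listener.OnNext(auth.NewBasicCredentials("user", "pass"))
}
```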

View File

@ -4,6 +4,7 @@ import "context"
type ClusterCmdable interface {
ClusterMyShardID(ctx context.Context) *StringCmd
ClusterMyID(ctx context.Context) *StringCmd
ClusterSlots(ctx context.Context) *ClusterSlotsCmd
ClusterShards(ctx context.Context) *ClusterShardsCmd
ClusterLinks(ctx context.Context) *ClusterLinksCmd
@ -35,6 +36,12 @@ func (c cmdable) ClusterMyShardID(ctx context.Context) *StringCmd {
return cmd
}
func (c cmdable) ClusterMyID(ctx context.Context) *StringCmd {
cmd := NewStringCmd(ctx, "cluster", "myid")
_ = c(ctx, cmd)
return cmd
}
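A small usage sketch for the new `ClusterMyID` helper; the cluster address is a placeholder:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClusterClient(&redis.ClusterOptions{Addrs: []string{"localhost:16600"}})

	// CLUSTER MYID returns the node ID of the node this connection landed on.
	id, err := rdb.ClusterMyID(ctx).Result()
	if err != nil {
		panic(err)
	}
	fmt.Println("connected to cluster node", id)
}
```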
func (c cmdable) ClusterSlots(ctx context.Context) *ClusterSlotsCmd {
cmd := NewClusterSlotsCmd(ctx, "cluster", "slots")
_ = c(ctx, cmd)

View File

@ -1405,27 +1405,64 @@ func (cmd *MapStringSliceInterfaceCmd) Val() map[string][]interface{} {
}
func (cmd *MapStringSliceInterfaceCmd) readReply(rd *proto.Reader) (err error) {
n, err := rd.ReadMapLen()
readType, err := rd.PeekReplyType()
if err != nil {
return err
}
cmd.val = make(map[string][]interface{}, n)
for i := 0; i < n; i++ {
k, err := rd.ReadString()
cmd.val = make(map[string][]interface{})
switch readType {
case proto.RespMap:
n, err := rd.ReadMapLen()
if err != nil {
return err
}
nn, err := rd.ReadArrayLen()
if err != nil {
return err
}
cmd.val[k] = make([]interface{}, nn)
for j := 0; j < nn; j++ {
value, err := rd.ReadReply()
for i := 0; i < n; i++ {
k, err := rd.ReadString()
if err != nil {
return err
}
cmd.val[k][j] = value
nn, err := rd.ReadArrayLen()
if err != nil {
return err
}
cmd.val[k] = make([]interface{}, nn)
for j := 0; j < nn; j++ {
value, err := rd.ReadReply()
if err != nil {
return err
}
cmd.val[k][j] = value
}
}
case proto.RespArray:
// RESP2 response
n, err := rd.ReadArrayLen()
if err != nil {
return err
}
for i := 0; i < n; i++ {
// Each entry in this array is itself an array with key details
itemLen, err := rd.ReadArrayLen()
if err != nil {
return err
}
key, err := rd.ReadString()
if err != nil {
return err
}
cmd.val[key] = make([]interface{}, 0, itemLen-1)
for j := 1; j < itemLen; j++ {
// Read the inner array for timestamp-value pairs
data, err := rd.ReadReply()
if err != nil {
return err
}
cmd.val[key] = append(cmd.val[key], data)
}
}
}
@ -2067,7 +2104,9 @@ type XInfoGroup struct {
Pending int64
LastDeliveredID string
EntriesRead int64
Lag int64
// Lag represents the number of pending messages in the stream not yet
// delivered to this consumer group. Returns -1 when the lag cannot be determined.
Lag int64
}
var _ Cmder = (*XInfoGroupsCmd)(nil)
@ -2150,8 +2189,11 @@ func (cmd *XInfoGroupsCmd) readReply(rd *proto.Reader) error {
// lag: the number of entries in the stream that are still waiting to be delivered
// to the group's consumers, or a NULL(Nil) when that number can't be determined.
// In that case, we return -1.
if err != nil && err != Nil {
return err
} else if err == Nil {
group.Lag = -1
}
default:
return fmt.Errorf("redis: unexpected key %q in XINFO GROUPS reply", key)
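To make the `Lag` semantics above concrete, a hedged sketch of consuming XINFO GROUPS output and treating -1 as "lag unknown":

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	groups, err := rdb.XInfoGroups(ctx, "mystream").Result()
	if err != nil {
		panic(err)
	}
	for _, g := range groups {
		if g.Lag == -1 {
			// The server returned a NULL lag (it could not be determined).
			fmt.Printf("group %s: lag unknown\n", g.Name)
			continue
		}
		fmt.Printf("group %s: %d entries not yet delivered\n", g.Name, g.Lag)
	}
}
```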
@ -3795,7 +3837,8 @@ func (cmd *MapStringStringSliceCmd) readReply(rd *proto.Reader) error {
}
// -----------------------------------------------------------------------
// MapStringInterfaceCmd represents a command that returns a map of strings to interface{}.
// MapMapStringInterfaceCmd represents a command that returns a map of strings to interface{}.
type MapMapStringInterfaceCmd struct {
baseCmd
val map[string]interface{}
@ -3826,30 +3869,48 @@ func (cmd *MapMapStringInterfaceCmd) Val() map[string]interface{} {
return cmd.val
}
// readReply will try to parse the reply from the proto.Reader for both resp2 and resp3
func (cmd *MapMapStringInterfaceCmd) readReply(rd *proto.Reader) (err error) {
n, err := rd.ReadArrayLen()
data, err := rd.ReadReply()
if err != nil {
return err
}
resultMap := map[string]interface{}{}
data := make(map[string]interface{}, n/2)
for i := 0; i < n; i += 2 {
_, err := rd.ReadArrayLen()
if err != nil {
cmd.err = err
switch midResponse := data.(type) {
case map[interface{}]interface{}: // resp3 will return map
for k, v := range midResponse {
stringKey, ok := k.(string)
if !ok {
return fmt.Errorf("redis: invalid map key %#v", k)
}
resultMap[stringKey] = v
}
key, err := rd.ReadString()
if err != nil {
cmd.err = err
case []interface{}: // resp2 will return array of arrays
n := len(midResponse)
for i := 0; i < n; i++ {
finalArr, ok := midResponse[i].([]interface{}) // final array that we need to transform to map
if !ok {
return fmt.Errorf("redis: unexpected response %#v", data)
}
m := len(finalArr)
if m%2 != 0 { // since this should be map, keys should be even number
return fmt.Errorf("redis: unexpected response %#v", data)
}
for j := 0; j < m; j += 2 {
stringKey, ok := finalArr[j].(string) // the first one
if !ok {
return fmt.Errorf("redis: invalid map key %#v", finalArr[i])
}
resultMap[stringKey] = finalArr[j+1] // second one is value
}
}
value, err := rd.ReadString()
if err != nil {
cmd.err = err
}
data[key] = value
default:
return fmt.Errorf("redis: unexpected response %#v", data)
}
cmd.val = data
cmd.val = resultMap
return nil
}
@ -5078,6 +5139,7 @@ type ClientInfo struct {
OutputListLength int // oll, output list length (replies are queued in this list when the buffer is full)
OutputMemory int // omem, output buffer memory usage
TotalMemory int // tot-mem, total memory consumed by this client in its various buffers
IoThread int // io-thread id
Events string // file descriptor events (see below)
LastCmd string // cmd, last command played
User string // the authenticated username of the client
@ -5256,6 +5318,8 @@ func parseClientInfo(txt string) (info *ClientInfo, err error) {
info.LibName = val
case "lib-ver":
info.LibVer = val
case "io-thread":
info.IoThread, err = strconv.Atoi(val)
default:
return nil, fmt.Errorf("redis: unexpected client info key(%s)", key)
}
@ -5435,8 +5499,6 @@ func (cmd *InfoCmd) readReply(rd *proto.Reader) error {
section := ""
scanner := bufio.NewScanner(strings.NewReader(val))
moduleRe := regexp.MustCompile(`module:name=(.+?),(.+)$`)
for scanner.Scan() {
line := scanner.Text()
if strings.HasPrefix(line, "#") {
@ -5447,6 +5509,7 @@ func (cmd *InfoCmd) readReply(rd *proto.Reader) error {
cmd.val[section] = make(map[string]string)
} else if line != "" {
if section == "Modules" {
moduleRe := regexp.MustCompile(`module:name=(.+?),(.+)$`)
kv := moduleRe.FindStringSubmatch(line)
if len(kv) == 3 {
cmd.val[section][kv[1]] = kv[2]
@ -5557,3 +5620,59 @@ func (cmd *MonitorCmd) Stop() {
defer cmd.mu.Unlock()
cmd.status = monitorStatusStop
}
type VectorScoreSliceCmd struct {
baseCmd
val []VectorScore
}
var _ Cmder = (*VectorScoreSliceCmd)(nil)
func NewVectorInfoSliceCmd(ctx context.Context, args ...any) *VectorScoreSliceCmd {
return &VectorScoreSliceCmd{
baseCmd: baseCmd{
ctx: ctx,
args: args,
},
}
}
func (cmd *VectorScoreSliceCmd) SetVal(val []VectorScore) {
cmd.val = val
}
func (cmd *VectorScoreSliceCmd) Val() []VectorScore {
return cmd.val
}
func (cmd *VectorScoreSliceCmd) Result() ([]VectorScore, error) {
return cmd.val, cmd.err
}
func (cmd *VectorScoreSliceCmd) String() string {
return cmdString(cmd, cmd.val)
}
func (cmd *VectorScoreSliceCmd) readReply(rd *proto.Reader) error {
n, err := rd.ReadMapLen()
if err != nil {
return err
}
cmd.val = make([]VectorScore, n)
for i := 0; i < n; i++ {
name, err := rd.ReadString()
if err != nil {
return err
}
cmd.val[i].Name = name
score, err := rd.ReadFloat()
if err != nil {
return err
}
cmd.val[i].Score = score
}
return nil
}

View File

@ -81,6 +81,8 @@ func appendArg(dst []interface{}, arg interface{}) []interface{} {
return dst
case time.Time, time.Duration, encoding.BinaryMarshaler, net.IP:
return append(dst, arg)
case nil:
return dst
default:
// scan struct field
v := reflect.ValueOf(arg)
@ -153,6 +155,12 @@ func isEmptyValue(v reflect.Value) bool {
return v.Float() == 0
case reflect.Interface, reflect.Pointer:
return v.IsNil()
case reflect.Struct:
if v.Type() == reflect.TypeOf(time.Time{}) {
return v.IsZero()
}
// Only the time.Time struct is supported for now;
// other struct types may be handled via the Scan decoder in subsequent iterations.
}
return false
}
@ -211,7 +219,6 @@ type Cmdable interface {
ACLCmdable
BitMapCmdable
ClusterCmdable
GearsCmdable
GenericCmdable
GeoCmdable
HashCmdable
@ -227,6 +234,7 @@ type Cmdable interface {
StreamCmdable
TimeseriesCmdable
JSONCmdable
VectorSetCmdable
}
type StatefulCmdable interface {
@ -331,7 +339,7 @@ func (info LibraryInfo) Validate() error {
return nil
}
// Hello Set the resp protocol used.
// Hello sets the resp protocol used.
func (c statefulCmdable) Hello(ctx context.Context,
ver int, username, password, clientName string,
) *MapStringInterfaceCmd {
@ -423,6 +431,12 @@ func (c cmdable) Ping(ctx context.Context) *StatusCmd {
return cmd
}
func (c cmdable) Do(ctx context.Context, args ...interface{}) *Cmd {
cmd := NewCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
func (c cmdable) Quit(_ context.Context) *StatusCmd {
panic("not implemented")
}
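The `Do` method added to `cmdable` sends an arbitrary command verbatim; a small usage sketch:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Do sends the raw command words as-is; useful for commands without a typed helper.
	if err := rdb.Do(ctx, "set", "greeting", "hello").Err(); err != nil {
		panic(err)
	}
	val, err := rdb.Do(ctx, "get", "greeting").Text()
	if err != nil {
		panic(err)
	}
	fmt.Println(val) // hello
}
```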

106
vendor/github.com/redis/go-redis/v9/docker-compose.yml generated vendored Normal file
View File

@ -0,0 +1,106 @@
---
services:
redis:
image: ${CLIENT_LIBS_TEST_IMAGE:-redislabs/client-libs-test:rs-7.4.0-v2}
platform: linux/amd64
container_name: redis-standalone
environment:
- TLS_ENABLED=yes
- REDIS_CLUSTER=no
- PORT=6379
- TLS_PORT=6666
command: ${REDIS_EXTRA_ARGS:---enable-debug-command yes --enable-module-command yes --tls-auth-clients optional --save ""}
ports:
- 6379:6379
- 6666:6666 # TLS port
volumes:
- "./dockers/standalone:/redis/work"
profiles:
- standalone
- sentinel
- all-stack
- all
osscluster:
image: ${CLIENT_LIBS_TEST_IMAGE:-redislabs/client-libs-test:rs-7.4.0-v2}
platform: linux/amd64
container_name: redis-osscluster
environment:
- NODES=6
- PORT=16600
command: "--cluster-enabled yes"
ports:
- "16600-16605:16600-16605"
volumes:
- "./dockers/osscluster:/redis/work"
profiles:
- cluster
- all-stack
- all
sentinel-cluster:
image: ${CLIENT_LIBS_TEST_IMAGE:-redislabs/client-libs-test:rs-7.4.0-v2}
platform: linux/amd64
container_name: redis-sentinel-cluster
network_mode: "host"
environment:
- NODES=3
- TLS_ENABLED=yes
- REDIS_CLUSTER=no
- PORT=9121
command: ${REDIS_EXTRA_ARGS:---enable-debug-command yes --enable-module-command yes --tls-auth-clients optional --save ""}
#ports:
# - "9121-9123:9121-9123"
volumes:
- "./dockers/sentinel-cluster:/redis/work"
profiles:
- sentinel
- all-stack
- all
sentinel:
image: ${CLIENT_LIBS_TEST_IMAGE:-redislabs/client-libs-test:rs-7.4.0-v2}
platform: linux/amd64
container_name: redis-sentinel
depends_on:
- sentinel-cluster
environment:
- NODES=3
- REDIS_CLUSTER=no
- PORT=26379
command: ${REDIS_EXTRA_ARGS:---sentinel}
network_mode: "host"
#ports:
# - 26379:26379
# - 26380:26380
# - 26381:26381
volumes:
- "./dockers/sentinel.conf:/redis/config-default/redis.conf"
- "./dockers/sentinel:/redis/work"
profiles:
- sentinel
- all-stack
- all
ring-cluster:
image: ${CLIENT_LIBS_TEST_IMAGE:-redislabs/client-libs-test:rs-7.4.0-v2}
platform: linux/amd64
container_name: redis-ring-cluster
environment:
- NODES=3
- TLS_ENABLED=yes
- REDIS_CLUSTER=no
- PORT=6390
command: ${REDIS_EXTRA_ARGS:---enable-debug-command yes --enable-module-command yes --tls-auth-clients optional --save ""}
ports:
- 6390:6390
- 6391:6391
- 6392:6392
volumes:
- "./dockers/ring:/redis/work"
profiles:
- ring
- cluster
- all-stack
- all

View File

@ -15,6 +15,13 @@ import (
// ErrClosed is returned when any operation is performed on a closed client.
var ErrClosed = pool.ErrClosed
// ErrPoolExhausted is returned from a pool connection method
// when the maximum number of database connections in the pool has been reached.
var ErrPoolExhausted = pool.ErrPoolExhausted
// ErrPoolTimeout is returned when the client timed out while waiting to get a connection from the connection pool.
var ErrPoolTimeout = pool.ErrPoolTimeout
// HasErrorPrefix checks if the err is a Redis error and the message contains a prefix.
func HasErrorPrefix(err error, prefix string) bool {
var rErr Error
@ -38,12 +45,24 @@ type Error interface {
var _ Error = proto.RedisError("")
func isContextError(err error) bool {
switch err {
case context.Canceled, context.DeadlineExceeded:
return true
default:
return false
}
}
func shouldRetry(err error, retryTimeout bool) bool {
switch err {
case io.EOF, io.ErrUnexpectedEOF:
return true
case nil, context.Canceled, context.DeadlineExceeded:
return false
case pool.ErrPoolTimeout:
// connection pool timeout, increase retries. #3289
return true
}
if v, ok := err.(timeoutError); ok {
@ -63,6 +82,9 @@ func shouldRetry(err error, retryTimeout bool) bool {
if strings.HasPrefix(s, "READONLY ") {
return true
}
if strings.HasPrefix(s, "MASTERDOWN ") {
return true
}
if strings.HasPrefix(s, "CLUSTERDOWN ") {
return true
}

View File

@ -1,149 +0,0 @@
package redis
import (
"context"
"fmt"
"strings"
)
type GearsCmdable interface {
TFunctionLoad(ctx context.Context, lib string) *StatusCmd
TFunctionLoadArgs(ctx context.Context, lib string, options *TFunctionLoadOptions) *StatusCmd
TFunctionDelete(ctx context.Context, libName string) *StatusCmd
TFunctionList(ctx context.Context) *MapStringInterfaceSliceCmd
TFunctionListArgs(ctx context.Context, options *TFunctionListOptions) *MapStringInterfaceSliceCmd
TFCall(ctx context.Context, libName string, funcName string, numKeys int) *Cmd
TFCallArgs(ctx context.Context, libName string, funcName string, numKeys int, options *TFCallOptions) *Cmd
TFCallASYNC(ctx context.Context, libName string, funcName string, numKeys int) *Cmd
TFCallASYNCArgs(ctx context.Context, libName string, funcName string, numKeys int, options *TFCallOptions) *Cmd
}
type TFunctionLoadOptions struct {
Replace bool
Config string
}
type TFunctionListOptions struct {
Withcode bool
Verbose int
Library string
}
type TFCallOptions struct {
Keys []string
Arguments []string
}
// TFunctionLoad - load a new JavaScript library into Redis.
// For more information - https://redis.io/commands/tfunction-load/
func (c cmdable) TFunctionLoad(ctx context.Context, lib string) *StatusCmd {
args := []interface{}{"TFUNCTION", "LOAD", lib}
cmd := NewStatusCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
func (c cmdable) TFunctionLoadArgs(ctx context.Context, lib string, options *TFunctionLoadOptions) *StatusCmd {
args := []interface{}{"TFUNCTION", "LOAD"}
if options != nil {
if options.Replace {
args = append(args, "REPLACE")
}
if options.Config != "" {
args = append(args, "CONFIG", options.Config)
}
}
args = append(args, lib)
cmd := NewStatusCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
// TFunctionDelete - delete a JavaScript library from Redis.
// For more information - https://redis.io/commands/tfunction-delete/
func (c cmdable) TFunctionDelete(ctx context.Context, libName string) *StatusCmd {
args := []interface{}{"TFUNCTION", "DELETE", libName}
cmd := NewStatusCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
// TFunctionList - list the functions with additional information about each function.
// For more information - https://redis.io/commands/tfunction-list/
func (c cmdable) TFunctionList(ctx context.Context) *MapStringInterfaceSliceCmd {
args := []interface{}{"TFUNCTION", "LIST"}
cmd := NewMapStringInterfaceSliceCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
func (c cmdable) TFunctionListArgs(ctx context.Context, options *TFunctionListOptions) *MapStringInterfaceSliceCmd {
args := []interface{}{"TFUNCTION", "LIST"}
if options != nil {
if options.Withcode {
args = append(args, "WITHCODE")
}
if options.Verbose != 0 {
v := strings.Repeat("v", options.Verbose)
args = append(args, v)
}
if options.Library != "" {
args = append(args, "LIBRARY", options.Library)
}
}
cmd := NewMapStringInterfaceSliceCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
// TFCall - invoke a function.
// For more information - https://redis.io/commands/tfcall/
func (c cmdable) TFCall(ctx context.Context, libName string, funcName string, numKeys int) *Cmd {
lf := libName + "." + funcName
args := []interface{}{"TFCALL", lf, numKeys}
cmd := NewCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
func (c cmdable) TFCallArgs(ctx context.Context, libName string, funcName string, numKeys int, options *TFCallOptions) *Cmd {
lf := libName + "." + funcName
args := []interface{}{"TFCALL", lf, numKeys}
if options != nil {
for _, key := range options.Keys {
args = append(args, key)
}
for _, key := range options.Arguments {
args = append(args, key)
}
}
cmd := NewCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
// TFCallASYNC - invoke an asynchronous JavaScript function (coroutine).
// For more information - https://redis.io/commands/TFCallASYNC/
func (c cmdable) TFCallASYNC(ctx context.Context, libName string, funcName string, numKeys int) *Cmd {
lf := fmt.Sprintf("%s.%s", libName, funcName)
args := []interface{}{"TFCALLASYNC", lf, numKeys}
cmd := NewCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
func (c cmdable) TFCallASYNCArgs(ctx context.Context, libName string, funcName string, numKeys int, options *TFCallOptions) *Cmd {
lf := fmt.Sprintf("%s.%s", libName, funcName)
args := []interface{}{"TFCALLASYNC", lf, numKeys}
if options != nil {
for _, key := range options.Keys {
args = append(args, key)
}
for _, key := range options.Arguments {
args = append(args, key)
}
}
cmd := NewCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
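
A minimal usage sketch of the TFUNCTION/TFCALL wrappers above, assuming a Redis Stack server with the triggers-and-functions module loaded; the JavaScript library source and the mylib/hello names are illustrative only:

package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Illustrative library exposing a single function "hello".
	lib := `#!js api_version=1.0 name=mylib
redis.registerFunction('hello', () => 'Hello from JS');`

	// TFUNCTION LOAD REPLACE <lib>
	if err := rdb.TFunctionLoadArgs(ctx, lib, &redis.TFunctionLoadOptions{Replace: true}).Err(); err != nil {
		panic(err)
	}

	// TFCALL mylib.hello 0
	res, err := rdb.TFCall(ctx, "mylib", "hello", 0).Result()
	if err != nil {
		panic(err)
	}
	fmt.Println(res) // "Hello from JS"
}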

View File

@ -10,6 +10,9 @@ type HashCmdable interface {
HExists(ctx context.Context, key, field string) *BoolCmd
HGet(ctx context.Context, key, field string) *StringCmd
HGetAll(ctx context.Context, key string) *MapStringStringCmd
HGetDel(ctx context.Context, key string, fields ...string) *StringSliceCmd
HGetEX(ctx context.Context, key string, fields ...string) *StringSliceCmd
HGetEXWithArgs(ctx context.Context, key string, options *HGetEXOptions, fields ...string) *StringSliceCmd
HIncrBy(ctx context.Context, key, field string, incr int64) *IntCmd
HIncrByFloat(ctx context.Context, key, field string, incr float64) *FloatCmd
HKeys(ctx context.Context, key string) *StringSliceCmd
@ -17,12 +20,15 @@ type HashCmdable interface {
HMGet(ctx context.Context, key string, fields ...string) *SliceCmd
HSet(ctx context.Context, key string, values ...interface{}) *IntCmd
HMSet(ctx context.Context, key string, values ...interface{}) *BoolCmd
HSetEX(ctx context.Context, key string, fieldsAndValues ...string) *IntCmd
HSetEXWithArgs(ctx context.Context, key string, options *HSetEXOptions, fieldsAndValues ...string) *IntCmd
HSetNX(ctx context.Context, key, field string, value interface{}) *BoolCmd
HScan(ctx context.Context, key string, cursor uint64, match string, count int64) *ScanCmd
HScanNoValues(ctx context.Context, key string, cursor uint64, match string, count int64) *ScanCmd
HVals(ctx context.Context, key string) *StringSliceCmd
HRandField(ctx context.Context, key string, count int) *StringSliceCmd
HRandFieldWithValues(ctx context.Context, key string, count int) *KeyValueSliceCmd
HStrLen(ctx context.Context, key, field string) *IntCmd
HExpire(ctx context.Context, key string, expiration time.Duration, fields ...string) *IntSliceCmd
HExpireWithArgs(ctx context.Context, key string, expiration time.Duration, expirationArgs HExpireArgs, fields ...string) *IntSliceCmd
HPExpire(ctx context.Context, key string, expiration time.Duration, fields ...string) *IntSliceCmd
@ -190,6 +196,11 @@ func (c cmdable) HScan(ctx context.Context, key string, cursor uint64, match str
return cmd
}
func (c cmdable) HStrLen(ctx context.Context, key, field string) *IntCmd {
cmd := NewIntCmd(ctx, "hstrlen", key, field)
_ = c(ctx, cmd)
return cmd
}
func (c cmdable) HScanNoValues(ctx context.Context, key string, cursor uint64, match string, count int64) *ScanCmd {
args := []interface{}{"hscan", key, cursor}
if match != "" {
@ -213,7 +224,10 @@ type HExpireArgs struct {
// HExpire - Sets the expiration time for specified fields in a hash in seconds.
// The command constructs an argument list starting with "HEXPIRE", followed by the key, duration, any conditional flags, and the specified fields.
// For more information - https://redis.io/commands/hexpire/
// Available since Redis 7.4 CE.
// For more information refer to [HEXPIRE Documentation].
//
// [HEXPIRE Documentation]: https://redis.io/commands/hexpire/
func (c cmdable) HExpire(ctx context.Context, key string, expiration time.Duration, fields ...string) *IntSliceCmd {
args := []interface{}{"HEXPIRE", key, formatSec(ctx, expiration), "FIELDS", len(fields)}
@ -228,7 +242,10 @@ func (c cmdable) HExpire(ctx context.Context, key string, expiration time.Durati
// HExpireWithArgs - Sets the expiration time for specified fields in a hash in seconds.
// It requires a key, an expiration duration, a struct with boolean flags for conditional expiration settings (NX, XX, GT, LT), and a list of fields.
// The command constructs an argument list starting with "HEXPIRE", followed by the key, duration, any conditional flags, and the specified fields.
// For more information - https://redis.io/commands/hexpire/
// Available since Redis 7.4 CE.
// For more information refer to [HEXPIRE Documentation].
//
// [HEXPIRE Documentation]: https://redis.io/commands/hexpire/
func (c cmdable) HExpireWithArgs(ctx context.Context, key string, expiration time.Duration, expirationArgs HExpireArgs, fields ...string) *IntSliceCmd {
args := []interface{}{"HEXPIRE", key, formatSec(ctx, expiration)}
@ -257,7 +274,10 @@ func (c cmdable) HExpireWithArgs(ctx context.Context, key string, expiration tim
// HPExpire - Sets the expiration time for specified fields in a hash in milliseconds.
// Similar to HExpire, it accepts a key, an expiration duration in milliseconds, a struct with expiration condition flags, and a list of fields.
// The command modifies the standard time.Duration to milliseconds for the Redis command.
// For more information - https://redis.io/commands/hpexpire/
// Available since Redis 7.4 CE.
// For more information refer to [HPEXPIRE Documentation].
//
// [HPEXPIRE Documentation]: https://redis.io/commands/hpexpire/
func (c cmdable) HPExpire(ctx context.Context, key string, expiration time.Duration, fields ...string) *IntSliceCmd {
args := []interface{}{"HPEXPIRE", key, formatMs(ctx, expiration), "FIELDS", len(fields)}
@ -269,6 +289,13 @@ func (c cmdable) HPExpire(ctx context.Context, key string, expiration time.Durat
return cmd
}
// HPExpireWithArgs - Sets the expiration time for specified fields in a hash in milliseconds.
// It requires a key, an expiration duration, a struct with boolean flags for conditional expiration settings (NX, XX, GT, LT), and a list of fields.
// The command constructs an argument list starting with "HPEXPIRE", followed by the key, duration, any conditional flags, and the specified fields.
// Available since Redis 7.4 CE.
// For more information refer to [HPEXPIRE Documentation].
//
// [HPEXPIRE Documentation]: https://redis.io/commands/hpexpire/
func (c cmdable) HPExpireWithArgs(ctx context.Context, key string, expiration time.Duration, expirationArgs HExpireArgs, fields ...string) *IntSliceCmd {
args := []interface{}{"HPEXPIRE", key, formatMs(ctx, expiration)}
@ -297,7 +324,10 @@ func (c cmdable) HPExpireWithArgs(ctx context.Context, key string, expiration ti
// HExpireAt - Sets the expiration time for specified fields in a hash to a UNIX timestamp in seconds.
// Takes a key, a UNIX timestamp, a struct of conditional flags, and a list of fields.
// The command sets absolute expiration times based on the UNIX timestamp provided.
// For more information - https://redis.io/commands/hexpireat/
// Available since Redis 7.4 CE.
// For more information refer to [HExpireAt Documentation].
//
// [HExpireAt Documentation]: https://redis.io/commands/hexpireat/
func (c cmdable) HExpireAt(ctx context.Context, key string, tm time.Time, fields ...string) *IntSliceCmd {
args := []interface{}{"HEXPIREAT", key, tm.Unix(), "FIELDS", len(fields)}
@ -337,7 +367,10 @@ func (c cmdable) HExpireAtWithArgs(ctx context.Context, key string, tm time.Time
// HPExpireAt - Sets the expiration time for specified fields in a hash to a UNIX timestamp in milliseconds.
// Similar to HExpireAt but for timestamps in milliseconds. It accepts the same parameters and adjusts the UNIX time to milliseconds.
// For more information - https://redis.io/commands/hpexpireat/
// Available since Redis 7.4 CE.
// For more information refer to [HPExpireAt Documentation].
//
// [HPExpireAt Documentation]: https://redis.io/commands/hpexpireat/
func (c cmdable) HPExpireAt(ctx context.Context, key string, tm time.Time, fields ...string) *IntSliceCmd {
args := []interface{}{"HPEXPIREAT", key, tm.UnixNano() / int64(time.Millisecond), "FIELDS", len(fields)}
@ -377,7 +410,10 @@ func (c cmdable) HPExpireAtWithArgs(ctx context.Context, key string, tm time.Tim
// HPersist - Removes the expiration time from specified fields in a hash.
// Accepts a key and the fields themselves.
// This command ensures that each field specified will have its expiration removed if present.
// For more information - https://redis.io/commands/hpersist/
// Available since Redis 7.4 CE.
// For more information refer to [HPersist Documentation].
//
// [HPersist Documentation]: https://redis.io/commands/hpersist/
func (c cmdable) HPersist(ctx context.Context, key string, fields ...string) *IntSliceCmd {
args := []interface{}{"HPERSIST", key, "FIELDS", len(fields)}
@ -392,6 +428,10 @@ func (c cmdable) HPersist(ctx context.Context, key string, fields ...string) *In
// HExpireTime - Retrieves the expiration time for specified fields in a hash as a UNIX timestamp in seconds.
// Requires a key and the fields themselves to fetch their expiration timestamps.
// This command returns the expiration times for each field or error/status codes for each field as specified.
// Available since Redis 7.4 CE.
// For more information refer to [HExpireTime Documentation].
//
// [HExpireTime Documentation]: https://redis.io/commands/hexpiretime/
// For more information - https://redis.io/commands/hexpiretime/
func (c cmdable) HExpireTime(ctx context.Context, key string, fields ...string) *IntSliceCmd {
args := []interface{}{"HEXPIRETIME", key, "FIELDS", len(fields)}
@ -407,6 +447,10 @@ func (c cmdable) HExpireTime(ctx context.Context, key string, fields ...string)
// HPExpireTime - Retrieves the expiration time for specified fields in a hash as a UNIX timestamp in milliseconds.
// Similar to HExpireTime, adjusted for timestamps in milliseconds. It requires the same parameters.
// Provides the expiration timestamp for each field in milliseconds.
// Available since Redis 7.4 CE.
// For more information refer to [HPExpireTime Documentation].
//
// [HPExpireTime Documentation]: https://redis.io/commands/hpexpiretime/
// For more information - https://redis.io/commands/hexpiretime/
func (c cmdable) HPExpireTime(ctx context.Context, key string, fields ...string) *IntSliceCmd {
args := []interface{}{"HPEXPIRETIME", key, "FIELDS", len(fields)}
@ -422,7 +466,10 @@ func (c cmdable) HPExpireTime(ctx context.Context, key string, fields ...string)
// HTTL - Retrieves the remaining time to live for specified fields in a hash in seconds.
// Requires a key and the fields themselves. It returns the TTL for each specified field.
// This command fetches the TTL in seconds for each field or returns error/status codes as appropriate.
// For more information - https://redis.io/commands/httl/
// Available since Redis 7.4 CE.
// For more information refer to [HTTL Documentation].
//
// [HTTL Documentation]: https://redis.io/commands/httl/
func (c cmdable) HTTL(ctx context.Context, key string, fields ...string) *IntSliceCmd {
args := []interface{}{"HTTL", key, "FIELDS", len(fields)}
@ -437,6 +484,10 @@ func (c cmdable) HTTL(ctx context.Context, key string, fields ...string) *IntSli
// HPTTL - Retrieves the remaining time to live for specified fields in a hash in milliseconds.
// Similar to HTTL, but returns the TTL in milliseconds. It requires a key and the specified fields.
// This command provides the TTL in milliseconds for each field or returns error/status codes as needed.
// Available since Redis 7.4 CE.
// For more information refer to [HPTTL Documentation].
//
// [HPTTL Documentation]: https://redis.io/commands/hpttl/
// For more information - https://redis.io/commands/hpttl/
func (c cmdable) HPTTL(ctx context.Context, key string, fields ...string) *IntSliceCmd {
args := []interface{}{"HPTTL", key, "FIELDS", len(fields)}
@ -448,3 +499,113 @@ func (c cmdable) HPTTL(ctx context.Context, key string, fields ...string) *IntSl
_ = c(ctx, cmd)
return cmd
}
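
The hash-field expiration wrappers above (HExpire, HTTL and friends) all build the same FIELDS <numfields> <field...> argument shape. A minimal sketch of the flow, assuming Redis 7.4 or newer; key and field names are placeholders:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // field TTLs need Redis >= 7.4

	rdb.HSet(ctx, "session:42", "token", "abc", "user", "alice")

	// Expire only the "token" field after 30 seconds (HEXPIRE session:42 30 FIELDS 1 token).
	if err := rdb.HExpire(ctx, "session:42", 30*time.Second, "token").Err(); err != nil {
		panic(err)
	}

	// HTTL returns one entry per requested field: the TTL in seconds,
	// -1 when the field has no TTL, -2 when the field does not exist.
	ttls, err := rdb.HTTL(ctx, "session:42", "token", "user").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println(ttls) // e.g. [30 -1]
}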
func (c cmdable) HGetDel(ctx context.Context, key string, fields ...string) *StringSliceCmd {
args := []interface{}{"HGETDEL", key, "FIELDS", len(fields)}
for _, field := range fields {
args = append(args, field)
}
cmd := NewStringSliceCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
func (c cmdable) HGetEX(ctx context.Context, key string, fields ...string) *StringSliceCmd {
args := []interface{}{"HGETEX", key, "FIELDS", len(fields)}
for _, field := range fields {
args = append(args, field)
}
cmd := NewStringSliceCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
// HGetEXExpirationType represents an expiration option for the HGETEX command.
type HGetEXExpirationType string
const (
HGetEXExpirationEX HGetEXExpirationType = "EX"
HGetEXExpirationPX HGetEXExpirationType = "PX"
HGetEXExpirationEXAT HGetEXExpirationType = "EXAT"
HGetEXExpirationPXAT HGetEXExpirationType = "PXAT"
HGetEXExpirationPERSIST HGetEXExpirationType = "PERSIST"
)
type HGetEXOptions struct {
ExpirationType HGetEXExpirationType
ExpirationVal int64
}
func (c cmdable) HGetEXWithArgs(ctx context.Context, key string, options *HGetEXOptions, fields ...string) *StringSliceCmd {
args := []interface{}{"HGETEX", key}
if options.ExpirationType != "" {
args = append(args, string(options.ExpirationType))
if options.ExpirationType != HGetEXExpirationPERSIST {
args = append(args, options.ExpirationVal)
}
}
args = append(args, "FIELDS", len(fields))
for _, field := range fields {
args = append(args, field)
}
cmd := NewStringSliceCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
type HSetEXCondition string
const (
HSetEXFNX HSetEXCondition = "FNX" // Only set the fields if none of them already exist.
HSetEXFXX HSetEXCondition = "FXX" // Only set the fields if all already exist.
)
type HSetEXExpirationType string
const (
HSetEXExpirationEX HSetEXExpirationType = "EX"
HSetEXExpirationPX HSetEXExpirationType = "PX"
HSetEXExpirationEXAT HSetEXExpirationType = "EXAT"
HSetEXExpirationPXAT HSetEXExpirationType = "PXAT"
HSetEXExpirationKEEPTTL HSetEXExpirationType = "KEEPTTL"
)
type HSetEXOptions struct {
Condition HSetEXCondition
ExpirationType HSetEXExpirationType
ExpirationVal int64
}
func (c cmdable) HSetEX(ctx context.Context, key string, fieldsAndValues ...string) *IntCmd {
args := []interface{}{"HSETEX", key, "FIELDS", len(fieldsAndValues) / 2}
for _, field := range fieldsAndValues {
args = append(args, field)
}
cmd := NewIntCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
func (c cmdable) HSetEXWithArgs(ctx context.Context, key string, options *HSetEXOptions, fieldsAndValues ...string) *IntCmd {
args := []interface{}{"HSETEX", key}
if options.Condition != "" {
args = append(args, string(options.Condition))
}
if options.ExpirationType != "" {
args = append(args, string(options.ExpirationType))
if options.ExpirationType != HSetEXExpirationKEEPTTL {
args = append(args, options.ExpirationVal)
}
}
args = append(args, "FIELDS", len(fieldsAndValues)/2)
for _, field := range fieldsAndValues {
args = append(args, field)
}
cmd := NewIntCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
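
A short sketch of the new HSETEX/HGETEX wrappers with their option structs; it assumes a server recent enough to ship these commands (Redis 8 CE), and the key and field names are placeholders:

package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// HSETEX key FNX EX 60 FIELDS 1 name alice
	set := rdb.HSetEXWithArgs(ctx, "cache:user:1",
		&redis.HSetEXOptions{
			Condition:      redis.HSetEXFNX,          // only if none of the fields exist
			ExpirationType: redis.HSetEXExpirationEX, // TTL in seconds
			ExpirationVal:  60,
		},
		"name", "alice")
	fmt.Println(set.Val()) // 1 on success, 0 if the condition failed

	// HGETEX key PERSIST FIELDS 1 name  -- read the field and drop its TTL.
	vals := rdb.HGetEXWithArgs(ctx, "cache:user:1",
		&redis.HGetEXOptions{ExpirationType: redis.HGetEXExpirationPERSIST},
		"name")
	fmt.Println(vals.Val()) // ["alice"]
}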

View File

@ -23,6 +23,8 @@ type Conn struct {
Inited bool
pooled bool
createdAt time.Time
onClose func() error
}
func NewConn(netConn net.Conn) *Conn {
@ -46,6 +48,10 @@ func (cn *Conn) SetUsedAt(tm time.Time) {
atomic.StoreInt64(&cn.usedAt, tm.Unix())
}
func (cn *Conn) SetOnClose(fn func() error) {
cn.onClose = fn
}
func (cn *Conn) SetNetConn(netConn net.Conn) {
cn.netConn = netConn
cn.rd.Reset(netConn)
@ -95,6 +101,10 @@ func (cn *Conn) WithWriter(
}
func (cn *Conn) Close() error {
if cn.onClose != nil {
// ignore error
_ = cn.onClose()
}
return cn.netConn.Close()
}
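
Since internal/pool cannot be imported by applications, the sketch below only mirrors the on-close hook pattern introduced above with a local wrapper type, for illustration:

package main

import (
	"fmt"
	"net"
)

// conn mirrors the pattern above: an optional onClose hook that runs before
// the underlying net.Conn is closed, with hook errors ignored.
type conn struct {
	netConn net.Conn
	onClose func() error
}

func (c *conn) SetOnClose(fn func() error) { c.onClose = fn }

func (c *conn) Close() error {
	if c.onClose != nil {
		_ = c.onClose() // ignore error, as in the pool implementation
	}
	return c.netConn.Close()
}

func main() {
	server, client := net.Pipe()
	defer server.Close()

	c := &conn{netConn: client}
	c.SetOnClose(func() error {
		fmt.Println("cleaning up before close") // e.g. unsubscribing a credentials listener
		return nil
	})
	_ = c.Close()
}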

View File

@ -33,9 +33,11 @@ var timers = sync.Pool{
// Stats contains pool state information and accumulated stats.
type Stats struct {
Hits uint32 // number of times free connection was found in the pool
Misses uint32 // number of times free connection was NOT found in the pool
Timeouts uint32 // number of times a wait timeout occurred
Hits uint32 // number of times free connection was found in the pool
Misses uint32 // number of times free connection was NOT found in the pool
Timeouts uint32 // number of times a wait timeout occurred
WaitCount uint32 // number of times a connection was waited for
WaitDurationNs int64 // total time spent waiting for a connection, in nanoseconds
TotalConns uint32 // number of total connections in the pool
IdleConns uint32 // number of idle connections in the pool
@ -62,6 +64,7 @@ type Options struct {
PoolFIFO bool
PoolSize int
DialTimeout time.Duration
PoolTimeout time.Duration
MinIdleConns int
MaxIdleConns int
@ -89,7 +92,8 @@ type ConnPool struct {
poolSize int
idleConnsLen int
stats Stats
stats Stats
waitDurationNs atomic.Int64
_closed uint32 // atomic
}
@ -140,7 +144,10 @@ func (p *ConnPool) checkMinIdleConns() {
}
func (p *ConnPool) addIdleConn() error {
cn, err := p.dialConn(context.TODO(), true)
ctx, cancel := context.WithTimeout(context.Background(), p.cfg.DialTimeout)
defer cancel()
cn, err := p.dialConn(ctx, true)
if err != nil {
return err
}
@ -230,15 +237,19 @@ func (p *ConnPool) tryDial() {
return
}
conn, err := p.cfg.Dialer(context.Background())
ctx, cancel := context.WithTimeout(context.Background(), p.cfg.DialTimeout)
conn, err := p.cfg.Dialer(ctx)
if err != nil {
p.setLastDialError(err)
time.Sleep(time.Second)
cancel()
continue
}
atomic.StoreUint32(&p.dialErrorsNum, 0)
_ = conn.Close()
cancel()
return
}
}
@ -312,6 +323,7 @@ func (p *ConnPool) waitTurn(ctx context.Context) error {
default:
}
start := time.Now()
timer := timers.Get().(*time.Timer)
timer.Reset(p.cfg.PoolTimeout)
@ -323,6 +335,8 @@ func (p *ConnPool) waitTurn(ctx context.Context) error {
timers.Put(timer)
return ctx.Err()
case p.queue <- struct{}{}:
p.waitDurationNs.Add(time.Since(start).Nanoseconds())
atomic.AddUint32(&p.stats.WaitCount, 1)
if !timer.Stop() {
<-timer.C
}
@ -449,9 +463,11 @@ func (p *ConnPool) IdleLen() int {
func (p *ConnPool) Stats() *Stats {
return &Stats{
Hits: atomic.LoadUint32(&p.stats.Hits),
Misses: atomic.LoadUint32(&p.stats.Misses),
Timeouts: atomic.LoadUint32(&p.stats.Timeouts),
Hits: atomic.LoadUint32(&p.stats.Hits),
Misses: atomic.LoadUint32(&p.stats.Misses),
Timeouts: atomic.LoadUint32(&p.stats.Timeouts),
WaitCount: atomic.LoadUint32(&p.stats.WaitCount),
WaitDurationNs: p.waitDurationNs.Load(),
TotalConns: uint32(p.Len()),
IdleConns: uint32(p.IdleLen()),
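
A sketch of reading the new wait metrics through the public client API, assuming the exported redis.PoolStats continues to mirror pool.Stats:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{
		Addr:        "localhost:6379",
		PoolSize:    2,
		PoolTimeout: time.Second,
	})

	_ = rdb.Ping(ctx).Err()

	// Assumes redis.PoolStats carries the new WaitCount / WaitDurationNs fields.
	stats := rdb.PoolStats()
	fmt.Printf("hits=%d misses=%d waits=%d waited=%s\n",
		stats.Hits, stats.Misses, stats.WaitCount,
		time.Duration(stats.WaitDurationNs))
}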

View File

@ -66,56 +66,95 @@ func (w *Writer) WriteArg(v interface{}) error {
case string:
return w.string(v)
case *string:
if v == nil {
return w.string("")
}
return w.string(*v)
case []byte:
return w.bytes(v)
case int:
return w.int(int64(v))
case *int:
if v == nil {
return w.int(0)
}
return w.int(int64(*v))
case int8:
return w.int(int64(v))
case *int8:
if v == nil {
return w.int(0)
}
return w.int(int64(*v))
case int16:
return w.int(int64(v))
case *int16:
if v == nil {
return w.int(0)
}
return w.int(int64(*v))
case int32:
return w.int(int64(v))
case *int32:
if v == nil {
return w.int(0)
}
return w.int(int64(*v))
case int64:
return w.int(v)
case *int64:
if v == nil {
return w.int(0)
}
return w.int(*v)
case uint:
return w.uint(uint64(v))
case *uint:
if v == nil {
return w.uint(0)
}
return w.uint(uint64(*v))
case uint8:
return w.uint(uint64(v))
case *uint8:
if v == nil {
return w.string("")
}
return w.uint(uint64(*v))
case uint16:
return w.uint(uint64(v))
case *uint16:
if v == nil {
return w.uint(0)
}
return w.uint(uint64(*v))
case uint32:
return w.uint(uint64(v))
case *uint32:
if v == nil {
return w.uint(0)
}
return w.uint(uint64(*v))
case uint64:
return w.uint(v)
case *uint64:
if v == nil {
return w.uint(0)
}
return w.uint(*v)
case float32:
return w.float(float64(v))
case *float32:
if v == nil {
return w.float(0)
}
return w.float(float64(*v))
case float64:
return w.float(v)
case *float64:
if v == nil {
return w.float(0)
}
return w.float(*v)
case bool:
if v {
@ -123,6 +162,9 @@ func (w *Writer) WriteArg(v interface{}) error {
}
return w.int(0)
case *bool:
if v == nil {
return w.int(0)
}
if *v {
return w.int(1)
}
@ -130,8 +172,19 @@ func (w *Writer) WriteArg(v interface{}) error {
case time.Time:
w.numBuf = v.AppendFormat(w.numBuf[:0], time.RFC3339Nano)
return w.bytes(w.numBuf)
case *time.Time:
if v == nil {
v = &time.Time{}
}
w.numBuf = v.AppendFormat(w.numBuf[:0], time.RFC3339Nano)
return w.bytes(w.numBuf)
case time.Duration:
return w.int(v.Nanoseconds())
case *time.Duration:
if v == nil {
return w.int(0)
}
return w.int(v.Nanoseconds())
case encoding.BinaryMarshaler:
b, err := v.MarshalBinary()
if err != nil {
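
With the pointer cases above, nil typed pointers are serialized as zero values instead of falling through to the error path. A sketch of what that allows at the call site; the key and field names are placeholders:

package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Pointer args are now dereferenced by the protocol writer;
	// a nil *int64 is written as 0 rather than producing a marshalling error.
	var score *int64 // nil: serialized as 0
	n := int64(42)
	count := &n // non-nil: serialized as 42

	if err := rdb.HSet(ctx, "counters", "a", score, "b", count).Err(); err != nil {
		panic(err)
	}
	fmt.Println(rdb.HGetAll(ctx, "counters").Val()) // map[a:0 b:42]
}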

View File

@ -49,22 +49,7 @@ func isLower(s string) bool {
}
func ReplaceSpaces(s string) string {
// Pre-allocate a builder with the same length as s to minimize allocations.
// This is a basic optimization; adjust the initial size based on your use case.
var builder strings.Builder
builder.Grow(len(s))
for _, char := range s {
if char == ' ' {
// Replace space with a hyphen.
builder.WriteRune('-')
} else {
// Copy the character as-is.
builder.WriteRune(char)
}
}
return builder.String()
return strings.ReplaceAll(s, " ", "-")
}
func GetAddr(addr string) string {

View File

@ -0,0 +1,30 @@
package util
import (
"fmt"
"math"
"strconv"
)
// ParseStringToFloat parses a Redis RESP3 float reply into a Go float64,
// handling "inf", "-inf", "nan" per Redis conventions.
func ParseStringToFloat(s string) (float64, error) {
switch s {
case "inf":
return math.Inf(1), nil
case "-inf":
return math.Inf(-1), nil
case "nan", "-nan":
return math.NaN(), nil
}
return strconv.ParseFloat(s, 64)
}
// MustParseFloat is like ParseStringToFloat but panics on parse errors.
func MustParseFloat(s string) float64 {
f, err := ParseStringToFloat(s)
if err != nil {
panic(fmt.Sprintf("redis: failed to parse float %q: %v", s, err))
}
return f
}
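
internal/util is not importable from application code; the sketch below simply mirrors the helper's behavior so the special-case tokens are easy to see:

package main

import (
	"fmt"
	"math"
	"strconv"
)

// parseRESP3Float mirrors the internal helper above: it maps the RESP3
// tokens "inf", "-inf" and "nan" to Go float values before falling back
// to strconv.ParseFloat.
func parseRESP3Float(s string) (float64, error) {
	switch s {
	case "inf":
		return math.Inf(1), nil
	case "-inf":
		return math.Inf(-1), nil
	case "nan", "-nan":
		return math.NaN(), nil
	}
	return strconv.ParseFloat(s, 64)
}

func main() {
	for _, s := range []string{"3.14", "inf", "-inf", "nan"} {
		f, err := parseRESP3Float(s)
		fmt.Println(s, "->", f, err)
	}
}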

View File

@ -13,6 +13,7 @@ import (
"strings"
"time"
"github.com/redis/go-redis/v9/auth"
"github.com/redis/go-redis/v9/internal/pool"
)
@ -29,10 +30,13 @@ type Limiter interface {
// Options keeps the settings to set up redis connection.
type Options struct {
// The network type, either tcp or unix.
// Default is tcp.
// Network type, either tcp or unix.
//
// default: tcp
Network string
// host:port address.
// Addr is the address formatted as host:port
Addr string
// ClientName will execute the `CLIENT SETNAME ClientName` command for each conn.
@ -46,17 +50,21 @@ type Options struct {
OnConnect func(ctx context.Context, cn *Conn) error
// Protocol 2 or 3. Use the version to negotiate RESP version with redis-server.
// Default is 3.
//
// default: 3.
Protocol int
// Use the specified Username to authenticate the current connection
// Username is used to authenticate the current connection
// with one of the connections defined in the ACL list when connecting
// to a Redis 6.0 instance, or greater, that is using the Redis ACL system.
Username string
// Optional password. Must match the password specified in the
// requirepass server configuration option (if connecting to a Redis 5.0 instance, or lower),
// Password is an optional password. Must match the password specified in the
// `requirepass` server configuration option (if connecting to a Redis 5.0 instance, or lower),
// or the User Password when connecting to a Redis 6.0 instance, or greater,
// that is using the Redis ACL system.
Password string
// CredentialsProvider allows the username and password to be updated
// before reconnecting. It should return the current username and password.
CredentialsProvider func() (username string, password string)
@ -67,85 +75,126 @@ type Options struct {
// There will be a conflict between them; if CredentialsProviderContext exists, we will ignore CredentialsProvider.
CredentialsProviderContext func(ctx context.Context) (username string, password string, err error)
// Database to be selected after connecting to the server.
// StreamingCredentialsProvider is used to retrieve the credentials
// for the connection from an external source. Those credentials may change
// during the connection lifetime. This is useful for managed identity
// scenarios where credentials are rotated externally.
//
// Currently, this is a placeholder for the future implementation.
StreamingCredentialsProvider auth.StreamingCredentialsProvider
// DB is the database to be selected after connecting to the server.
DB int
// Maximum number of retries before giving up.
// Default is 3 retries; -1 (not 0) disables retries.
// MaxRetries is the maximum number of retries before giving up.
// -1 (not 0) disables retries.
//
// default: 3 retries
MaxRetries int
// Minimum backoff between each retry.
// Default is 8 milliseconds; -1 disables backoff.
// MinRetryBackoff is the minimum backoff between each retry.
// -1 disables backoff.
//
// default: 8 milliseconds
MinRetryBackoff time.Duration
// Maximum backoff between each retry.
// Default is 512 milliseconds; -1 disables backoff.
// MaxRetryBackoff is the maximum backoff between each retry.
// -1 disables backoff.
//
// default: 512 milliseconds
MaxRetryBackoff time.Duration
// Dial timeout for establishing new connections.
// Default is 5 seconds.
// DialTimeout for establishing new connections.
//
// default: 5 seconds
DialTimeout time.Duration
// Timeout for socket reads. If reached, commands will fail
// ReadTimeout for socket reads. If reached, commands will fail
// with a timeout instead of blocking. Supported values:
// - `0` - default timeout (3 seconds).
// - `-1` - no timeout (block indefinitely).
// - `-2` - disables SetReadDeadline calls completely.
//
// - `-1` - no timeout (block indefinitely).
// - `-2` - disables SetReadDeadline calls completely.
//
// default: 3 seconds
ReadTimeout time.Duration
// Timeout for socket writes. If reached, commands will fail
// WriteTimeout for socket writes. If reached, commands will fail
// with a timeout instead of blocking. Supported values:
// - `0` - default timeout (3 seconds).
// - `-1` - no timeout (block indefinitely).
// - `-2` - disables SetWriteDeadline calls completely.
//
// - `-1` - no timeout (block indefinitely).
// - `-2` - disables SetWriteDeadline calls completely.
//
// default: 3 seconds
WriteTimeout time.Duration
// ContextTimeoutEnabled controls whether the client respects context timeouts and deadlines.
// See https://redis.uptrace.dev/guide/go-redis-debugging.html#timeouts
ContextTimeoutEnabled bool
// Type of connection pool.
// true for FIFO pool, false for LIFO pool.
// PoolFIFO is the type of connection pool.
//
// - true for FIFO pool
// - false for LIFO pool.
//
// Note that FIFO has slightly higher overhead compared to LIFO,
// but it helps close idle connections faster, reducing the pool size.
PoolFIFO bool
// Base number of socket connections.
// PoolSize is the base number of socket connections.
// Default is 10 connections per every available CPU as reported by runtime.GOMAXPROCS.
// If there are not enough connections in the pool, new connections will be allocated beyond PoolSize;
// you can limit this through MaxActiveConns.
//
// default: 10 * runtime.GOMAXPROCS(0)
PoolSize int
// Amount of time client waits for connection if all connections
// PoolTimeout is the amount of time client waits for connection if all connections
// are busy before returning an error.
// Default is ReadTimeout + 1 second.
//
// default: ReadTimeout + 1 second
PoolTimeout time.Duration
// Minimum number of idle connections which is useful when establishing
// new connection is slow.
// Default is 0. the idle connections are not closed by default.
// MinIdleConns is the minimum number of idle connections, which is useful when establishing
// a new connection is slow. The idle connections are not closed by default.
//
// default: 0
MinIdleConns int
// Maximum number of idle connections.
// Default is 0. the idle connections are not closed by default.
// MaxIdleConns is the maximum number of idle connections.
// The idle connections are not closed by default.
//
// default: 0
MaxIdleConns int
// Maximum number of connections allocated by the pool at a given time.
// MaxActiveConns is the maximum number of connections allocated by the pool at a given time.
// When zero, there is no limit on the number of connections in the pool.
// If the pool is full, the next call to Get() will block until a connection is released.
MaxActiveConns int
// ConnMaxIdleTime is the maximum amount of time a connection may be idle.
// Should be less than server's timeout.
//
// Expired connections may be closed lazily before reuse.
// If d <= 0, connections are not closed due to a connection's idle time.
// -1 disables idle timeout check.
//
// Default is 30 minutes. -1 disables idle timeout check.
// default: 30 minutes
ConnMaxIdleTime time.Duration
// ConnMaxLifetime is the maximum amount of time a connection may be reused.
//
// Expired connections may be closed lazily before reuse.
// If <= 0, connections are not closed due to a connection's age.
//
// Default is to not close idle connections.
// default: 0
ConnMaxLifetime time.Duration
// TLS Config to use. When set, TLS will be negotiated.
// TLSConfig to use. When set, TLS will be negotiated.
TLSConfig *tls.Config
// Limiter interface used to implement circuit breaker or rate limiter.
Limiter Limiter
// Enables read only queries on slave/follower nodes.
// readOnly enables read only queries on slave/follower nodes.
readOnly bool
// DisableIndentity - Disable set-lib on connect.
@ -161,9 +210,11 @@ type Options struct {
DisableIdentity bool
// Add suffix to client name. Default is empty.
// IdentitySuffix - add suffix to client name.
IdentitySuffix string
// UnstableResp3 enables Unstable mode for Redis Search module with RESP3.
// When unstable mode is enabled, the client will use RESP3 protocol and only be able to use RawResult
UnstableResp3 bool
}
@ -178,6 +229,9 @@ func (opt *Options) init() {
opt.Network = "tcp"
}
}
if opt.Protocol < 2 {
opt.Protocol = 3
}
if opt.DialTimeout == 0 {
opt.DialTimeout = 5 * time.Second
}
@ -214,9 +268,10 @@ func (opt *Options) init() {
opt.ConnMaxIdleTime = 30 * time.Minute
}
if opt.MaxRetries == -1 {
switch opt.MaxRetries {
case -1:
opt.MaxRetries = 0
} else if opt.MaxRetries == 0 {
case 0:
opt.MaxRetries = 3
}
switch opt.MinRetryBackoff {
@ -276,6 +331,7 @@ func NewDialer(opt *Options) func(context.Context, string, string) (net.Conn, er
// URL attributes (scheme, host, userinfo, resp.), query parameters using these
// names will be treated as unknown parameters
// - unknown parameter names will result in an error
// - use "skip_verify=true" to ignore TLS certificate validation
//
// Examples:
//
@ -496,6 +552,9 @@ func setupConnParams(u *url.URL, o *Options) (*Options, error) {
if q.err != nil {
return nil, q.err
}
if o.TLSConfig != nil && q.has("skip_verify") {
o.TLSConfig.InsecureSkipVerify = q.bool("skip_verify")
}
// any parameters left?
if r := q.remaining(); len(r) > 0 {
@ -527,6 +586,7 @@ func newConnPool(
PoolFIFO: opt.PoolFIFO,
PoolSize: opt.PoolSize,
PoolTimeout: opt.PoolTimeout,
DialTimeout: opt.DialTimeout,
MinIdleConns: opt.MinIdleConns,
MaxIdleConns: opt.MaxIdleConns,
MaxActiveConns: opt.MaxActiveConns,

View File

@ -14,6 +14,7 @@ import (
"sync/atomic"
"time"
"github.com/redis/go-redis/v9/auth"
"github.com/redis/go-redis/v9/internal"
"github.com/redis/go-redis/v9/internal/hashtag"
"github.com/redis/go-redis/v9/internal/pool"
@ -21,6 +22,10 @@ import (
"github.com/redis/go-redis/v9/internal/rand"
)
const (
minLatencyMeasurementInterval = 10 * time.Second
)
var errClusterNoNodes = fmt.Errorf("redis: cluster has no nodes")
// ClusterOptions are used to configure a cluster client and should be
@ -62,11 +67,12 @@ type ClusterOptions struct {
OnConnect func(ctx context.Context, cn *Conn) error
Protocol int
Username string
Password string
CredentialsProvider func() (username string, password string)
CredentialsProviderContext func(ctx context.Context) (username string, password string, err error)
Protocol int
Username string
Password string
CredentialsProvider func() (username string, password string)
CredentialsProviderContext func(ctx context.Context) (username string, password string, err error)
StreamingCredentialsProvider auth.StreamingCredentialsProvider
MaxRetries int
MinRetryBackoff time.Duration
@ -107,9 +113,10 @@ type ClusterOptions struct {
}
func (opt *ClusterOptions) init() {
if opt.MaxRedirects == -1 {
switch opt.MaxRedirects {
case -1:
opt.MaxRedirects = 0
} else if opt.MaxRedirects == 0 {
case 0:
opt.MaxRedirects = 3
}
@ -287,11 +294,12 @@ func (opt *ClusterOptions) clientOptions() *Options {
Dialer: opt.Dialer,
OnConnect: opt.OnConnect,
Protocol: opt.Protocol,
Username: opt.Username,
Password: opt.Password,
CredentialsProvider: opt.CredentialsProvider,
CredentialsProviderContext: opt.CredentialsProviderContext,
Protocol: opt.Protocol,
Username: opt.Username,
Password: opt.Password,
CredentialsProvider: opt.CredentialsProvider,
CredentialsProviderContext: opt.CredentialsProviderContext,
StreamingCredentialsProvider: opt.StreamingCredentialsProvider,
MaxRetries: opt.MaxRetries,
MinRetryBackoff: opt.MinRetryBackoff,
@ -332,6 +340,10 @@ type clusterNode struct {
latency uint32 // atomic
generation uint32 // atomic
failing uint32 // atomic
// last time the latency measurement was performed for the node, stored in nanoseconds
// from epoch
lastLatencyMeasurement int64 // atomic
}
func newClusterNode(clOpt *ClusterOptions, addr string) *clusterNode {
@ -384,6 +396,7 @@ func (n *clusterNode) updateLatency() {
latency = float64(dur) / float64(successes)
}
atomic.StoreUint32(&n.latency, uint32(latency+0.5))
n.SetLastLatencyMeasurement(time.Now())
}
func (n *clusterNode) Latency() time.Duration {
@ -413,6 +426,10 @@ func (n *clusterNode) Generation() uint32 {
return atomic.LoadUint32(&n.generation)
}
func (n *clusterNode) LastLatencyMeasurement() int64 {
return atomic.LoadInt64(&n.lastLatencyMeasurement)
}
func (n *clusterNode) SetGeneration(gen uint32) {
for {
v := atomic.LoadUint32(&n.generation)
@ -422,6 +439,23 @@ func (n *clusterNode) SetGeneration(gen uint32) {
}
}
func (n *clusterNode) SetLastLatencyMeasurement(t time.Time) {
for {
v := atomic.LoadInt64(&n.lastLatencyMeasurement)
if t.UnixNano() < v || atomic.CompareAndSwapInt64(&n.lastLatencyMeasurement, v, t.UnixNano()) {
break
}
}
}
func (n *clusterNode) Loading() bool {
ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
defer cancel()
err := n.Client.Ping(ctx).Err()
return err != nil && isLoadingError(err)
}
//------------------------------------------------------------------------------
type clusterNodes struct {
@ -511,10 +545,11 @@ func (c *clusterNodes) GC(generation uint32) {
c.mu.Lock()
c.activeAddrs = c.activeAddrs[:0]
now := time.Now()
for addr, node := range c.nodes {
if node.Generation() >= generation {
c.activeAddrs = append(c.activeAddrs, addr)
if c.opt.RouteByLatency {
if c.opt.RouteByLatency && node.LastLatencyMeasurement() < now.Add(-minLatencyMeasurementInterval).UnixNano() {
go node.updateLatency()
}
continue
@ -730,7 +765,8 @@ func (c *clusterState) slotSlaveNode(slot int) (*clusterNode, error) {
case 1:
return nodes[0], nil
case 2:
if slave := nodes[1]; !slave.Failing() {
slave := nodes[1]
if !slave.Failing() && !slave.Loading() {
return slave, nil
}
return nodes[0], nil
@ -739,7 +775,7 @@ func (c *clusterState) slotSlaveNode(slot int) (*clusterNode, error) {
for i := 0; i < 10; i++ {
n := rand.Intn(len(nodes)-1) + 1
slave = nodes[n]
if !slave.Failing() {
if !slave.Failing() && !slave.Loading() {
return slave, nil
}
}
@ -900,6 +936,9 @@ type ClusterClient struct {
// NewClusterClient returns a Redis Cluster client as described in
// http://redis.io/topics/cluster-spec.
func NewClusterClient(opt *ClusterOptions) *ClusterClient {
if opt == nil {
panic("redis: NewClusterClient nil options")
}
opt.init()
c := &ClusterClient{
@ -954,7 +993,7 @@ func (c *ClusterClient) Process(ctx context.Context, cmd Cmder) error {
}
func (c *ClusterClient) process(ctx context.Context, cmd Cmder) error {
slot := c.cmdSlot(ctx, cmd)
slot := c.cmdSlot(cmd)
var node *clusterNode
var moved bool
var ask bool
@ -1302,7 +1341,7 @@ func (c *ClusterClient) mapCmdsByNode(ctx context.Context, cmdsMap *cmdsMap, cmd
if c.opt.ReadOnly && c.cmdsAreReadOnly(ctx, cmds) {
for _, cmd := range cmds {
slot := c.cmdSlot(ctx, cmd)
slot := c.cmdSlot(cmd)
node, err := c.slotReadOnlyNode(state, slot)
if err != nil {
return err
@ -1313,7 +1352,7 @@ func (c *ClusterClient) mapCmdsByNode(ctx context.Context, cmdsMap *cmdsMap, cmd
}
for _, cmd := range cmds {
slot := c.cmdSlot(ctx, cmd)
slot := c.cmdSlot(cmd)
node, err := state.slotMasterNode(slot)
if err != nil {
return err
@ -1339,7 +1378,9 @@ func (c *ClusterClient) processPipelineNode(
_ = node.Client.withProcessPipelineHook(ctx, cmds, func(ctx context.Context, cmds []Cmder) error {
cn, err := node.Client.getConn(ctx)
if err != nil {
node.MarkAsFailing()
if !isContextError(err) {
node.MarkAsFailing()
}
_ = c.mapCmdsByNode(ctx, failedCmds, cmds)
setCmdsErr(cmds, err)
return err
@ -1469,7 +1510,7 @@ func (c *ClusterClient) processTxPipeline(ctx context.Context, cmds []Cmder) err
return err
}
cmdsMap := c.mapCmdsBySlot(ctx, cmds)
cmdsMap := c.mapCmdsBySlot(cmds)
for slot, cmds := range cmdsMap {
node, err := state.slotMasterNode(slot)
if err != nil {
@ -1508,10 +1549,10 @@ func (c *ClusterClient) processTxPipeline(ctx context.Context, cmds []Cmder) err
return cmdsFirstErr(cmds)
}
func (c *ClusterClient) mapCmdsBySlot(ctx context.Context, cmds []Cmder) map[int][]Cmder {
func (c *ClusterClient) mapCmdsBySlot(cmds []Cmder) map[int][]Cmder {
cmdsMap := make(map[int][]Cmder)
for _, cmd := range cmds {
slot := c.cmdSlot(ctx, cmd)
slot := c.cmdSlot(cmd)
cmdsMap[slot] = append(cmdsMap[slot], cmd)
}
return cmdsMap
@ -1540,7 +1581,7 @@ func (c *ClusterClient) processTxPipelineNode(
}
func (c *ClusterClient) processTxPipelineNodeConn(
ctx context.Context, node *clusterNode, cn *pool.Conn, cmds []Cmder, failedCmds *cmdsMap,
ctx context.Context, _ *clusterNode, cn *pool.Conn, cmds []Cmder, failedCmds *cmdsMap,
) error {
if err := cn.WithWriter(c.context(ctx), c.opt.WriteTimeout, func(wr *proto.Writer) error {
return writeCmds(wr, cmds)
@ -1829,9 +1870,9 @@ func (c *ClusterClient) cmdInfo(ctx context.Context, name string) *CommandInfo {
return info
}
func (c *ClusterClient) cmdSlot(ctx context.Context, cmd Cmder) int {
func (c *ClusterClient) cmdSlot(cmd Cmder) int {
args := cmd.Args()
if args[0] == "cluster" && args[1] == "getkeysinslot" {
if args[0] == "cluster" && (args[1] == "getkeysinslot" || args[1] == "countkeysinslot") {
return args[2].(int)
}

View File

@ -319,37 +319,69 @@ func (cmd *BFInfoCmd) Result() (BFInfo, error) {
}
func (cmd *BFInfoCmd) readReply(rd *proto.Reader) (err error) {
n, err := rd.ReadMapLen()
result := BFInfo{}
// Create a mapping from key names to pointers of struct fields
respMapping := map[string]*int64{
"Capacity": &result.Capacity,
"CAPACITY": &result.Capacity,
"Size": &result.Size,
"SIZE": &result.Size,
"Number of filters": &result.Filters,
"FILTERS": &result.Filters,
"Number of items inserted": &result.ItemsInserted,
"ITEMS": &result.ItemsInserted,
"Expansion rate": &result.ExpansionRate,
"EXPANSION": &result.ExpansionRate,
}
// Helper function to read and assign a value based on the key
readAndAssignValue := func(key string) error {
fieldPtr, exists := respMapping[key]
if !exists {
return fmt.Errorf("redis: BLOOM.INFO unexpected key %s", key)
}
// Read the integer and assign to the field via pointer dereferencing
val, err := rd.ReadInt()
if err != nil {
return err
}
*fieldPtr = val
return nil
}
readType, err := rd.PeekReplyType()
if err != nil {
return err
}
var key string
var result BFInfo
for f := 0; f < n; f++ {
key, err = rd.ReadString()
if len(cmd.args) > 2 && readType == proto.RespArray {
n, err := rd.ReadArrayLen()
if err != nil {
return err
}
switch key {
case "Capacity":
result.Capacity, err = rd.ReadInt()
case "Size":
result.Size, err = rd.ReadInt()
case "Number of filters":
result.Filters, err = rd.ReadInt()
case "Number of items inserted":
result.ItemsInserted, err = rd.ReadInt()
case "Expansion rate":
result.ExpansionRate, err = rd.ReadInt()
default:
return fmt.Errorf("redis: BLOOM.INFO unexpected key %s", key)
if key, ok := cmd.args[2].(string); ok && n == 1 {
if err := readAndAssignValue(key); err != nil {
return err
}
} else {
return fmt.Errorf("redis: BLOOM.INFO invalid argument key type")
}
} else {
n, err := rd.ReadMapLen()
if err != nil {
return err
}
for i := 0; i < n; i++ {
key, err := rd.ReadString()
if err != nil {
return err
}
if err := readAndAssignValue(key); err != nil {
return err
}
}
}
cmd.val = result
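
A usage sketch for the reworked BF.INFO decoding, assuming a server with RedisBloom loaded; BFReserve, BFAdd and BFInfoCapacity are the existing wrappers in this package, and the key name is a placeholder:

package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	if err := rdb.BFReserve(ctx, "bf:users", 0.01, 1000).Err(); err != nil {
		panic(err)
	}
	rdb.BFAdd(ctx, "bf:users", "alice")

	// Full BF.INFO reply, decoded by the map branch of readReply above.
	info, err := rdb.BFInfo(ctx, "bf:users").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println(info.Capacity, info.ItemsInserted)

	// Single-attribute form (BF.INFO key CAPACITY), handled by the RespArray branch.
	capOnly, err := rdb.BFInfoCapacity(ctx, "bf:users").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println(capOnly.Capacity)
}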

View File

@ -45,6 +45,9 @@ func (c *PubSub) init() {
}
func (c *PubSub) String() string {
c.mu.Lock()
defer c.mu.Unlock()
channels := mapKeys(c.channels)
channels = append(channels, mapKeys(c.patterns)...)
channels = append(channels, mapKeys(c.schannels)...)
@ -432,7 +435,7 @@ func (c *PubSub) ReceiveTimeout(ctx context.Context, timeout time.Duration) (int
return nil, err
}
err = cn.WithReader(context.Background(), timeout, func(rd *proto.Reader) error {
err = cn.WithReader(ctx, timeout, func(rd *proto.Reader) error {
return c.cmd.readReply(rd)
})

View File

@ -9,6 +9,7 @@ import (
"sync/atomic"
"time"
"github.com/redis/go-redis/v9/auth"
"github.com/redis/go-redis/v9/internal"
"github.com/redis/go-redis/v9/internal/hscan"
"github.com/redis/go-redis/v9/internal/pool"
@ -203,6 +204,7 @@ func (hs *hooksMixin) processTxPipelineHook(ctx context.Context, cmds []Cmder) e
type baseClient struct {
opt *Options
connPool pool.Pooler
hooksMixin
onClose func() error // hook called when client is closed
}
@ -282,36 +284,107 @@ func (c *baseClient) _getConn(ctx context.Context) (*pool.Conn, error) {
return cn, nil
}
func (c *baseClient) newReAuthCredentialsListener(poolCn *pool.Conn) auth.CredentialsListener {
return auth.NewReAuthCredentialsListener(
c.reAuthConnection(poolCn),
c.onAuthenticationErr(poolCn),
)
}
func (c *baseClient) reAuthConnection(poolCn *pool.Conn) func(credentials auth.Credentials) error {
return func(credentials auth.Credentials) error {
var err error
username, password := credentials.BasicAuth()
ctx := context.Background()
connPool := pool.NewSingleConnPool(c.connPool, poolCn)
// hooksMixin are intentionally empty here
cn := newConn(c.opt, connPool, nil)
if username != "" {
err = cn.AuthACL(ctx, username, password).Err()
} else {
err = cn.Auth(ctx, password).Err()
}
return err
}
}
func (c *baseClient) onAuthenticationErr(poolCn *pool.Conn) func(err error) {
return func(err error) {
if err != nil {
if isBadConn(err, false, c.opt.Addr) {
// Close the connection to force a reconnection.
err := c.connPool.CloseConn(poolCn)
if err != nil {
internal.Logger.Printf(context.Background(), "redis: failed to close connection: %v", err)
// try to close the network connection directly
// so that no resource is leaked
err := poolCn.Close()
if err != nil {
internal.Logger.Printf(context.Background(), "redis: failed to close network connection: %v", err)
}
}
}
internal.Logger.Printf(context.Background(), "redis: re-authentication failed: %v", err)
}
}
}
func (c *baseClient) wrappedOnClose(newOnClose func() error) func() error {
onClose := c.onClose
return func() error {
var firstErr error
err := newOnClose()
// Even if we have an error we would like to execute the onClose hook
// if it exists. We will return the first error that occurred.
// This is to keep error handling consistent with the rest of the code.
if err != nil {
firstErr = err
}
if onClose != nil {
err = onClose()
if err != nil && firstErr == nil {
firstErr = err
}
}
return firstErr
}
}
func (c *baseClient) initConn(ctx context.Context, cn *pool.Conn) error {
if cn.Inited {
return nil
}
cn.Inited = true
var err error
username, password := c.opt.Username, c.opt.Password
if c.opt.CredentialsProviderContext != nil {
if username, password, err = c.opt.CredentialsProviderContext(ctx); err != nil {
return err
cn.Inited = true
connPool := pool.NewSingleConnPool(c.connPool, cn)
conn := newConn(c.opt, connPool, &c.hooksMixin)
username, password := "", ""
if c.opt.StreamingCredentialsProvider != nil {
credentials, unsubscribeFromCredentialsProvider, err := c.opt.StreamingCredentialsProvider.
Subscribe(c.newReAuthCredentialsListener(cn))
if err != nil {
return fmt.Errorf("failed to subscribe to streaming credentials: %w", err)
}
c.onClose = c.wrappedOnClose(unsubscribeFromCredentialsProvider)
cn.SetOnClose(unsubscribeFromCredentialsProvider)
username, password = credentials.BasicAuth()
} else if c.opt.CredentialsProviderContext != nil {
username, password, err = c.opt.CredentialsProviderContext(ctx)
if err != nil {
return fmt.Errorf("failed to get credentials from context provider: %w", err)
}
} else if c.opt.CredentialsProvider != nil {
username, password = c.opt.CredentialsProvider()
}
connPool := pool.NewSingleConnPool(c.connPool, cn)
conn := newConn(c.opt, connPool)
var auth bool
protocol := c.opt.Protocol
// By default, use RESP3 in current version.
if protocol < 2 {
protocol = 3
} else if c.opt.Username != "" || c.opt.Password != "" {
username, password = c.opt.Username, c.opt.Password
}
// for redis-server versions that do not support the HELLO command,
// RESP2 will continue to be used.
if err = conn.Hello(ctx, protocol, username, password, "").Err(); err == nil {
auth = true
if err = conn.Hello(ctx, c.opt.Protocol, username, password, c.opt.ClientName).Err(); err == nil {
// Authentication successful with HELLO command
} else if !isRedisError(err) {
// When the server responds with the RESP protocol and the result is not a normal
// execution result of the HELLO command, we consider it to be an indication that
@ -321,17 +394,19 @@ func (c *baseClient) initConn(ctx context.Context, cn *pool.Conn) error {
// with different error string results for unsupported commands, making it
// difficult to rely on error strings to determine all results.
return err
} else if password != "" {
// Try legacy AUTH command if HELLO failed
if username != "" {
err = conn.AuthACL(ctx, username, password).Err()
} else {
err = conn.Auth(ctx, password).Err()
}
if err != nil {
return fmt.Errorf("failed to authenticate: %w", err)
}
}
_, err = conn.Pipelined(ctx, func(pipe Pipeliner) error {
if !auth && password != "" {
if username != "" {
pipe.AuthACL(ctx, username, password)
} else {
pipe.Auth(ctx, password)
}
}
if c.opt.DB > 0 {
pipe.Select(ctx, c.opt.DB)
}
@ -347,7 +422,7 @@ func (c *baseClient) initConn(ctx context.Context, cn *pool.Conn) error {
return nil
})
if err != nil {
return err
return fmt.Errorf("failed to initialize connection options: %w", err)
}
if !c.opt.DisableIdentity && !c.opt.DisableIndentity {
@ -369,6 +444,7 @@ func (c *baseClient) initConn(ctx context.Context, cn *pool.Conn) error {
if c.opt.OnConnect != nil {
return c.opt.OnConnect(ctx, conn)
}
return nil
}
@ -487,6 +563,16 @@ func (c *baseClient) cmdTimeout(cmd Cmder) time.Duration {
return c.opt.ReadTimeout
}
// context returns the context for the current connection.
// If the context timeout is enabled, it returns the original context.
// Otherwise, it returns a new background context.
func (c *baseClient) context(ctx context.Context) context.Context {
if c.opt.ContextTimeoutEnabled {
return ctx
}
return context.Background()
}
// Close closes the client, releasing any open resources.
//
// It is rare to Close a Client, as the Client is meant to be
@ -639,13 +725,6 @@ func txPipelineReadQueued(rd *proto.Reader, statusCmd *StatusCmd, cmds []Cmder)
return nil
}
func (c *baseClient) context(ctx context.Context) context.Context {
if c.opt.ContextTimeoutEnabled {
return ctx
}
return context.Background()
}
//------------------------------------------------------------------------------
// Client is a Redis client representing a pool of zero or more underlying connections.
@ -656,11 +735,13 @@ func (c *baseClient) context(ctx context.Context) context.Context {
type Client struct {
*baseClient
cmdable
hooksMixin
}
// NewClient returns a client to the Redis Server specified by Options.
func NewClient(opt *Options) *Client {
if opt == nil {
panic("redis: NewClient nil options")
}
opt.init()
c := Client{
@ -692,7 +773,7 @@ func (c *Client) WithTimeout(timeout time.Duration) *Client {
}
func (c *Client) Conn() *Conn {
return newConn(c.opt, pool.NewStickyConnPool(c.connPool))
return newConn(c.opt, pool.NewStickyConnPool(c.connPool), &c.hooksMixin)
}
// Do create a Cmd from the args and processes the cmd.
@ -825,10 +906,12 @@ type Conn struct {
baseClient
cmdable
statefulCmdable
hooksMixin
}
func newConn(opt *Options, connPool pool.Pooler) *Conn {
// newConn is a helper func to create a new Conn instance.
// the Conn instance is not thread-safe and should not be shared between goroutines.
// the parentHooks will be cloned, no need to clone before passing it.
func newConn(opt *Options, connPool pool.Pooler, parentHooks *hooksMixin) *Conn {
c := Conn{
baseClient: baseClient{
opt: opt,
@ -836,6 +919,10 @@ func newConn(opt *Options, connPool pool.Pooler) *Conn {
},
}
if parentHooks != nil {
c.hooksMixin = parentHooks.clone()
}
c.cmdable = c.Process
c.statefulCmdable = c.Process
c.initHooks(hooks{

View File

@ -82,6 +82,14 @@ func NewBoolSliceResult(val []bool, err error) *BoolSliceCmd {
return &cmd
}
// NewFloatSliceResult returns a FloatSliceCmd initialised with val and err for testing.
func NewFloatSliceResult(val []float64, err error) *FloatSliceCmd {
var cmd FloatSliceCmd
cmd.val = val
cmd.SetErr(err)
return &cmd
}
// NewMapStringStringResult returns a MapStringStringCmd initialised with val and err for testing.
func NewMapStringStringResult(val map[string]string, err error) *MapStringStringCmd {
var cmd MapStringStringCmd

View File

@ -128,9 +128,10 @@ func (opt *RingOptions) init() {
opt.NewConsistentHash = newRendezvous
}
if opt.MaxRetries == -1 {
switch opt.MaxRetries {
case -1:
opt.MaxRetries = 0
} else if opt.MaxRetries == 0 {
case 0:
opt.MaxRetries = 3
}
switch opt.MinRetryBackoff {
@ -348,16 +349,16 @@ func (c *ringSharding) newRingShards(
return
}
// Warning: External exposure of `c.shards.list` may cause data races.
// So keep internal or implement deep copy if exposed.
func (c *ringSharding) List() []*ringShard {
var list []*ringShard
c.mu.RLock()
if !c.closed {
list = c.shards.list
}
c.mu.RUnlock()
defer c.mu.RUnlock()
return list
if c.closed {
return nil
}
return c.shards.list
}
func (c *ringSharding) Hash(key string) string {
@ -421,6 +422,7 @@ func (c *ringSharding) Heartbeat(ctx context.Context, frequency time.Duration) {
case <-ticker.C:
var rebalance bool
// note: `c.List()` returns a shadow copy of `[]*ringShard`.
for _, shard := range c.List() {
err := shard.Client.Ping(ctx).Err()
isUp := err == nil || err == pool.ErrPoolTimeout
@ -521,6 +523,9 @@ type Ring struct {
}
func NewRing(opt *RingOptions) *Ring {
if opt == nil {
panic("redis: NewRing nil options")
}
opt.init()
hbCtx, hbCancel := context.WithCancel(context.Background())
@ -577,6 +582,7 @@ func (c *Ring) retryBackoff(attempt int) time.Duration {
// PoolStats returns accumulated connection pool stats.
func (c *Ring) PoolStats() *PoolStats {
// note: `c.List()` returns a shadow copy of `[]*ringShard`.
shards := c.sharding.List()
var acc PoolStats
for _, shard := range shards {
@ -646,6 +652,7 @@ func (c *Ring) ForEachShard(
ctx context.Context,
fn func(ctx context.Context, client *Client) error,
) error {
// note: `c.List()` returns a shadow copy of `[]*ringShard`.
shards := c.sharding.List()
var wg sync.WaitGroup
errCh := make(chan error, 1)
@ -677,6 +684,7 @@ func (c *Ring) ForEachShard(
}
func (c *Ring) cmdsInfo(ctx context.Context) (map[string]*CommandInfo, error) {
// note: `c.List()` returns a shadow copy of `[]*ringShard`.
shards := c.sharding.List()
var firstErr error
for _, shard := range shards {
@ -694,7 +702,7 @@ func (c *Ring) cmdsInfo(ctx context.Context) (map[string]*CommandInfo, error) {
return nil, firstErr
}
func (c *Ring) cmdShard(ctx context.Context, cmd Cmder) (*ringShard, error) {
func (c *Ring) cmdShard(cmd Cmder) (*ringShard, error) {
pos := cmdFirstKeyPos(cmd)
if pos == 0 {
return c.sharding.Random()
@ -712,7 +720,7 @@ func (c *Ring) process(ctx context.Context, cmd Cmder) error {
}
}
shard, err := c.cmdShard(ctx, cmd)
shard, err := c.cmdShard(cmd)
if err != nil {
return err
}
@ -805,7 +813,7 @@ func (c *Ring) Watch(ctx context.Context, fn func(*Tx) error, keys ...string) er
for _, key := range keys {
if key != "" {
shard, err := c.sharding.GetByKey(hashtag.Key(key))
shard, err := c.sharding.GetByKey(key)
if err != nil {
return err
}
@ -839,3 +847,26 @@ func (c *Ring) Close() error {
return c.sharding.Close()
}
// GetShardClients returns a list of all shard clients in the ring.
// This can be used to create dedicated connections (e.g., PubSub) for each shard.
func (c *Ring) GetShardClients() []*Client {
shards := c.sharding.List()
clients := make([]*Client, 0, len(shards))
for _, shard := range shards {
if shard.IsUp() {
clients = append(clients, shard.Client)
}
}
return clients
}
// GetShardClientForKey returns the shard client that would handle the given key.
// This can be used to determine which shard a particular key/channel would be routed to.
func (c *Ring) GetShardClientForKey(key string) (*Client, error) {
shard, err := c.sharding.GetByKey(key)
if err != nil {
return nil, err
}
return shard.Client, nil
}
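
A minimal sketch of the two new Ring helpers; the shard addresses and channel name are placeholders:

package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	ring := redis.NewRing(&redis.RingOptions{
		Addrs: map[string]string{
			"shard1": "localhost:6379",
			"shard2": "localhost:6380",
		},
	})

	// One dedicated PubSub connection per healthy shard.
	for _, client := range ring.GetShardClients() {
		pubsub := client.Subscribe(ctx, "events")
		fmt.Println("subscribed on", client.String())
		_ = pubsub // receive loop omitted
	}

	// Find out which shard owns a given key/channel.
	client, err := ring.GetShardClientForKey("events")
	if err != nil {
		panic(err)
	}
	fmt.Println("events routes to", client.String())
}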

View File

@ -114,6 +114,7 @@ type SpellCheckTerms struct {
}
type FTExplainOptions struct {
// Dialect 1,3 and 4 are deprecated since redis 8.0
Dialect string
}
@ -240,14 +241,19 @@ type FTAggregateWithCursor struct {
}
type FTAggregateOptions struct {
Verbatim bool
LoadAll bool
Load []FTAggregateLoad
Timeout int
GroupBy []FTAggregateGroupBy
SortBy []FTAggregateSortBy
SortByMax int
Scorer string
Verbatim bool
LoadAll bool
Load []FTAggregateLoad
Timeout int
GroupBy []FTAggregateGroupBy
SortBy []FTAggregateSortBy
SortByMax int
// Scorer is used to set the scoring function. If not passed, a default will be used.
// The default scorer depends on the Redis version:
// - `BM25` for Redis >= 8
// - `TFIDF` for Redis < 8
Scorer string
// AddScores is available in Redis CE 8
AddScores bool
Apply []FTAggregateApply
LimitOffset int
@ -256,7 +262,8 @@ type FTAggregateOptions struct {
WithCursor bool
WithCursorOptions *FTAggregateWithCursor
Params map[string]interface{}
DialectVersion int
// Dialect 1,3 and 4 are deprecated since redis 8.0
DialectVersion int
}
type FTSearchFilter struct {
@ -284,23 +291,30 @@ type FTSearchSortBy struct {
Desc bool
}
// FTSearchOptions hold options that can be passed to the FT.SEARCH command.
// More information about the options can be found
// in the documentation for FT.SEARCH https://redis.io/docs/latest/commands/ft.search/
type FTSearchOptions struct {
NoContent bool
Verbatim bool
NoStopWords bool
WithScores bool
WithPayloads bool
WithSortKeys bool
Filters []FTSearchFilter
GeoFilter []FTSearchGeoFilter
InKeys []interface{}
InFields []interface{}
Return []FTSearchReturn
Slop int
Timeout int
InOrder bool
Language string
Expander string
NoContent bool
Verbatim bool
NoStopWords bool
WithScores bool
WithPayloads bool
WithSortKeys bool
Filters []FTSearchFilter
GeoFilter []FTSearchGeoFilter
InKeys []interface{}
InFields []interface{}
Return []FTSearchReturn
Slop int
Timeout int
InOrder bool
Language string
Expander string
// Scorer is used to set the scoring function. If not passed, a default will be used.
// The default scorer depends on the Redis version:
// - `BM25` for Redis >= 8
// - `TFIDF` for Redis < 8
Scorer string
ExplainScore bool
Payload string
@ -308,8 +322,12 @@ type FTSearchOptions struct {
SortByWithCount bool
LimitOffset int
Limit int
Params map[string]interface{}
DialectVersion int
// CountOnly sets LIMIT 0 0 to get the count - number of documents in the result set without actually returning the result set.
// When using this option, the Limit and LimitOffset options are ignored.
CountOnly bool
Params map[string]interface{}
// Dialect 1,3 and 4 are deprecated since redis 8.0
DialectVersion int
}
type FTSynDumpResult struct {
@ -425,7 +443,8 @@ type IndexDefinition struct {
type FTSpellCheckOptions struct {
Distance int
Terms *FTSpellCheckTerms
Dialect int
// Dialect 1,3 and 4 are deprecated since redis 8.0
Dialect int
}
type FTSpellCheckTerms struct {
@ -592,6 +611,8 @@ func FTAggregateQuery(query string, options *FTAggregateOptions) AggregateQuery
if options.DialectVersion > 0 {
queryArgs = append(queryArgs, "DIALECT", options.DialectVersion)
} else {
queryArgs = append(queryArgs, "DIALECT", 2)
}
}
return queryArgs
@ -789,6 +810,8 @@ func (c cmdable) FTAggregateWithArgs(ctx context.Context, index string, query st
}
if options.DialectVersion > 0 {
args = append(args, "DIALECT", options.DialectVersion)
} else {
args = append(args, "DIALECT", 2)
}
}
@ -846,20 +869,32 @@ func (c cmdable) FTAlter(ctx context.Context, index string, skipInitialScan bool
return cmd
}
// FTConfigGet - Retrieves the value of a RediSearch configuration parameter.
// Retrieves the value of a RediSearch configuration parameter.
// The 'option' parameter specifies the configuration parameter to retrieve.
// For more information, please refer to the Redis documentation:
// [FT.CONFIG GET]: (https://redis.io/commands/ft.config-get/)
// For more information, please refer to the Redis [FT.CONFIG GET] documentation.
//
// Deprecated: FTConfigGet is deprecated in Redis 8.
// All configuration will be done with the CONFIG GET command.
// For more information check [Client.ConfigGet] and [CONFIG GET Documentation]
//
// [CONFIG GET Documentation]: https://redis.io/commands/config-get/
// [FT.CONFIG GET]: https://redis.io/commands/ft.config-get/
func (c cmdable) FTConfigGet(ctx context.Context, option string) *MapMapStringInterfaceCmd {
cmd := NewMapMapStringInterfaceCmd(ctx, "FT.CONFIG", "GET", option)
_ = c(ctx, cmd)
return cmd
}
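// Migration sketch (illustrative, not part of the vendored library): on Redis 8
// the search parameters are exposed through the generic CONFIG commands instead,
// so the deprecated FT.CONFIG calls map roughly to the following. The parameter
// name "search-default-dialect" is an assumption used only for illustration;
// consult the CONFIG GET/SET documentation linked above.
//
//	val, getErr := rdb.ConfigGet(ctx, "search-default-dialect").Result()
//	setErr := rdb.ConfigSet(ctx, "search-default-dialect", "2").Err()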
// FTConfigSet - Sets the value of a RediSearch configuration parameter.
// Sets the value of a RediSearch configuration parameter.
// The 'option' parameter specifies the configuration parameter to set, and the 'value' parameter specifies the new value.
// For more information, please refer to the Redis documentation:
// [FT.CONFIG SET]: (https://redis.io/commands/ft.config-set/)
// For more information, please refer to the Redis [FT.CONFIG SET] documentation.
//
// Deprecated: FTConfigSet is deprecated in Redis 8.
// All configuration will be done with the CONFIG SET command.
// For more information check [Client.ConfigSet] and [CONFIG SET Documentation]
//
// [CONFIG SET Documentation]: https://redis.io/commands/config-set/
// [FT.CONFIG SET]: https://redis.io/commands/ft.config-set/
func (c cmdable) FTConfigSet(ctx context.Context, option string, value interface{}) *StatusCmd {
cmd := NewStatusCmd(ctx, "FT.CONFIG", "SET", option, value)
_ = c(ctx, cmd)
@ -1150,6 +1185,8 @@ func (c cmdable) FTExplainWithArgs(ctx context.Context, index string, query stri
args := []interface{}{"FT.EXPLAIN", index, query}
if options.Dialect != "" {
args = append(args, "DIALECT", options.Dialect)
} else {
args = append(args, "DIALECT", 2)
}
cmd := NewStringCmd(ctx, args...)
_ = c(ctx, cmd)
@ -1447,6 +1484,8 @@ func (c cmdable) FTSpellCheckWithArgs(ctx context.Context, index string, query s
}
if options.Dialect > 0 {
args = append(args, "DIALECT", options.Dialect)
} else {
args = append(args, "DIALECT", 2)
}
}
cmd := newFTSpellCheckCmd(ctx, args...)
@ -1816,6 +1855,8 @@ func FTSearchQuery(query string, options *FTSearchOptions) SearchQuery {
}
if options.DialectVersion > 0 {
queryArgs = append(queryArgs, "DIALECT", options.DialectVersion)
} else {
queryArgs = append(queryArgs, "DIALECT", 2)
}
}
return queryArgs
@ -1920,8 +1961,12 @@ func (c cmdable) FTSearchWithArgs(ctx context.Context, index string, query strin
args = append(args, "WITHCOUNT")
}
}
if options.LimitOffset >= 0 && options.Limit > 0 {
args = append(args, "LIMIT", options.LimitOffset, options.Limit)
if options.CountOnly {
args = append(args, "LIMIT", 0, 0)
} else {
if options.LimitOffset >= 0 && options.Limit > 0 || options.LimitOffset > 0 && options.Limit == 0 {
args = append(args, "LIMIT", options.LimitOffset, options.Limit)
}
}
if options.Params != nil {
args = append(args, "PARAMS", len(options.Params)*2)
@ -1931,6 +1976,8 @@ func (c cmdable) FTSearchWithArgs(ctx context.Context, index string, query strin
}
if options.DialectVersion > 0 {
args = append(args, "DIALECT", options.DialectVersion)
} else {
args = append(args, "DIALECT", 2)
}
}
cmd := newFTSearchCmd(ctx, options, args...)
@ -2054,215 +2101,3 @@ func (c cmdable) FTTagVals(ctx context.Context, index string, field string) *Str
_ = c(ctx, cmd)
return cmd
}
// type FTProfileResult struct {
// Results []interface{}
// Profile ProfileDetails
// }
// type ProfileDetails struct {
// TotalProfileTime string
// ParsingTime string
// PipelineCreationTime string
// Warning string
// IteratorsProfile []IteratorProfile
// ResultProcessorsProfile []ResultProcessorProfile
// }
// type IteratorProfile struct {
// Type string
// QueryType string
// Time interface{}
// Counter int
// Term string
// Size int
// ChildIterators []IteratorProfile
// }
// type ResultProcessorProfile struct {
// Type string
// Time interface{}
// Counter int
// }
// func parseFTProfileResult(data []interface{}) (FTProfileResult, error) {
// var result FTProfileResult
// if len(data) < 2 {
// return result, fmt.Errorf("unexpected data length")
// }
// // Parse results
// result.Results = data[0].([]interface{})
// // Parse profile details
// profileData := data[1].([]interface{})
// profileDetails := ProfileDetails{}
// for i := 0; i < len(profileData); i += 2 {
// switch profileData[i].(string) {
// case "Total profile time":
// profileDetails.TotalProfileTime = profileData[i+1].(string)
// case "Parsing time":
// profileDetails.ParsingTime = profileData[i+1].(string)
// case "Pipeline creation time":
// profileDetails.PipelineCreationTime = profileData[i+1].(string)
// case "Warning":
// profileDetails.Warning = profileData[i+1].(string)
// case "Iterators profile":
// profileDetails.IteratorsProfile = parseIteratorsProfile(profileData[i+1].([]interface{}))
// case "Result processors profile":
// profileDetails.ResultProcessorsProfile = parseResultProcessorsProfile(profileData[i+1].([]interface{}))
// }
// }
// result.Profile = profileDetails
// return result, nil
// }
// func parseIteratorsProfile(data []interface{}) []IteratorProfile {
// var iterators []IteratorProfile
// for _, item := range data {
// profile := item.([]interface{})
// iterator := IteratorProfile{}
// for i := 0; i < len(profile); i += 2 {
// switch profile[i].(string) {
// case "Type":
// iterator.Type = profile[i+1].(string)
// case "Query type":
// iterator.QueryType = profile[i+1].(string)
// case "Time":
// iterator.Time = profile[i+1]
// case "Counter":
// iterator.Counter = int(profile[i+1].(int64))
// case "Term":
// iterator.Term = profile[i+1].(string)
// case "Size":
// iterator.Size = int(profile[i+1].(int64))
// case "Child iterators":
// iterator.ChildIterators = parseChildIteratorsProfile(profile[i+1].([]interface{}))
// }
// }
// iterators = append(iterators, iterator)
// }
// return iterators
// }
// func parseChildIteratorsProfile(data []interface{}) []IteratorProfile {
// var iterators []IteratorProfile
// for _, item := range data {
// profile := item.([]interface{})
// iterator := IteratorProfile{}
// for i := 0; i < len(profile); i += 2 {
// switch profile[i].(string) {
// case "Type":
// iterator.Type = profile[i+1].(string)
// case "Query type":
// iterator.QueryType = profile[i+1].(string)
// case "Time":
// iterator.Time = profile[i+1]
// case "Counter":
// iterator.Counter = int(profile[i+1].(int64))
// case "Term":
// iterator.Term = profile[i+1].(string)
// case "Size":
// iterator.Size = int(profile[i+1].(int64))
// }
// }
// iterators = append(iterators, iterator)
// }
// return iterators
// }
// func parseResultProcessorsProfile(data []interface{}) []ResultProcessorProfile {
// var processors []ResultProcessorProfile
// for _, item := range data {
// profile := item.([]interface{})
// processor := ResultProcessorProfile{}
// for i := 0; i < len(profile); i += 2 {
// switch profile[i].(string) {
// case "Type":
// processor.Type = profile[i+1].(string)
// case "Time":
// processor.Time = profile[i+1]
// case "Counter":
// processor.Counter = int(profile[i+1].(int64))
// }
// }
// processors = append(processors, processor)
// }
// return processors
// }
// func NewFTProfileCmd(ctx context.Context, args ...interface{}) *FTProfileCmd {
// return &FTProfileCmd{
// baseCmd: baseCmd{
// ctx: ctx,
// args: args,
// },
// }
// }
// type FTProfileCmd struct {
// baseCmd
// val FTProfileResult
// }
// func (cmd *FTProfileCmd) String() string {
// return cmdString(cmd, cmd.val)
// }
// func (cmd *FTProfileCmd) SetVal(val FTProfileResult) {
// cmd.val = val
// }
// func (cmd *FTProfileCmd) Result() (FTProfileResult, error) {
// return cmd.val, cmd.err
// }
// func (cmd *FTProfileCmd) Val() FTProfileResult {
// return cmd.val
// }
// func (cmd *FTProfileCmd) readReply(rd *proto.Reader) (err error) {
// data, err := rd.ReadSlice()
// if err != nil {
// return err
// }
// cmd.val, err = parseFTProfileResult(data)
// if err != nil {
// cmd.err = err
// }
// return nil
// }
// // FTProfile - Executes a search query and returns a profile of how the query was processed.
// // The 'index' parameter specifies the index to search, the 'limited' parameter specifies whether to limit the results,
// // and the 'query' parameter specifies the search/aggregate query. Note that you must pass either a SearchQuery or an AggregateQuery.
// // For more information, please refer to the Redis documentation:
// // [FT.PROFILE]: (https://redis.io/commands/ft.profile/)
// func (c cmdable) FTProfile(ctx context.Context, index string, limited bool, query interface{}) *FTProfileCmd {
// queryType := ""
// var argsQuery []interface{}
// switch v := query.(type) {
// case AggregateQuery:
// queryType = "AGGREGATE"
// argsQuery = v
// case SearchQuery:
// queryType = "SEARCH"
// argsQuery = v
// default:
// panic("FT.PROFILE: query must be either AggregateQuery or SearchQuery")
// }
// args := []interface{}{"FT.PROFILE", index, queryType}
// if limited {
// args = append(args, "LIMITED")
// }
// args = append(args, "QUERY")
// args = append(args, argsQuery...)
// cmd := NewFTProfileCmd(ctx, args...)
// _ = c(ctx, cmd)
// return cmd
// }


@ -4,7 +4,10 @@ import (
"context"
"crypto/tls"
"errors"
"fmt"
"net"
"net/url"
"strconv"
"strings"
"sync"
"time"
@ -219,10 +222,154 @@ func (opt *FailoverOptions) clusterOptions() *ClusterOptions {
}
}
// ParseFailoverURL parses a URL into FailoverOptions that can be used to connect to Redis.
// The URL must be in the form:
//
// redis://<user>:<password>@<host>:<port>/<db_number>
// or
// rediss://<user>:<password>@<host>:<port>/<db_number>
//
// To add additional addresses, specify the "addr" query parameter one or more times, e.g.:
//
// redis://<user>:<password>@<host>:<port>/<db_number>?addr=<host2>:<port2>&addr=<host3>:<port3>
// or
// rediss://<user>:<password>@<host>:<port>/<db_number>?addr=<host2>:<port2>&addr=<host3>:<port3>
//
// Most Option fields can be set using query parameters, with the following restrictions:
// - field names are mapped using snake-case conversion: to set MaxRetries, use max_retries
// - only scalar type fields are supported (bool, int, time.Duration)
// - for time.Duration fields, values must be a valid input for time.ParseDuration();
// additionally a plain integer as value (i.e. without unit) is interpreted as seconds
// - to disable a duration field, use value less than or equal to 0; to use the default
// value, leave the value blank or remove the parameter
// - only the last value is interpreted if a parameter is given multiple times
// - fields "network", "addr", "sentinel_username" and "sentinel_password" can only be set using other
// URL attributes (scheme, host, userinfo, resp.), query parameters using these
// names will be treated as unknown parameters
// - unknown parameter names will result in an error
//
// Example:
//
// redis://user:password@localhost:6789?master_name=mymaster&dial_timeout=3&read_timeout=6s&addr=localhost:6790&addr=localhost:6791
// is equivalent to:
// &FailoverOptions{
// MasterName: "mymaster",
// SentinelAddrs: ["localhost:6789", "localhost:6790", "localhost:6791"]
// DialTimeout: 3 * time.Second, // no time unit = seconds
// ReadTimeout: 6 * time.Second,
// }
func ParseFailoverURL(redisURL string) (*FailoverOptions, error) {
u, err := url.Parse(redisURL)
if err != nil {
return nil, err
}
return setupFailoverConn(u)
}
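// Usage sketch (illustrative, not part of the vendored library; host names and
// the master name are hypothetical): parsing a sentinel URL and building a
// failover client from it.
//
//	opt, err := ParseFailoverURL("redis://user:pass@localhost:26379/0?master_name=mymaster&dial_timeout=3&addr=localhost:26380&addr=localhost:26381")
//	if err != nil {
//		panic(err)
//	}
//	rdb := NewFailoverClient(opt) // opt.SentinelAddrs now holds all three addresses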
func setupFailoverConn(u *url.URL) (*FailoverOptions, error) {
o := &FailoverOptions{}
o.SentinelUsername, o.SentinelPassword = getUserPassword(u)
h, p := getHostPortWithDefaults(u)
o.SentinelAddrs = append(o.SentinelAddrs, net.JoinHostPort(h, p))
switch u.Scheme {
case "rediss":
o.TLSConfig = &tls.Config{ServerName: h, MinVersion: tls.VersionTLS12}
case "redis":
o.TLSConfig = nil
default:
return nil, fmt.Errorf("redis: invalid URL scheme: %s", u.Scheme)
}
f := strings.FieldsFunc(u.Path, func(r rune) bool {
return r == '/'
})
switch len(f) {
case 0:
o.DB = 0
case 1:
var err error
if o.DB, err = strconv.Atoi(f[0]); err != nil {
return nil, fmt.Errorf("redis: invalid database number: %q", f[0])
}
default:
return nil, fmt.Errorf("redis: invalid URL path: %s", u.Path)
}
return setupFailoverConnParams(u, o)
}
func setupFailoverConnParams(u *url.URL, o *FailoverOptions) (*FailoverOptions, error) {
q := queryOptions{q: u.Query()}
o.MasterName = q.string("master_name")
o.ClientName = q.string("client_name")
o.RouteByLatency = q.bool("route_by_latency")
o.RouteRandomly = q.bool("route_randomly")
o.ReplicaOnly = q.bool("replica_only")
o.UseDisconnectedReplicas = q.bool("use_disconnected_replicas")
o.Protocol = q.int("protocol")
o.Username = q.string("username")
o.Password = q.string("password")
o.MaxRetries = q.int("max_retries")
o.MinRetryBackoff = q.duration("min_retry_backoff")
o.MaxRetryBackoff = q.duration("max_retry_backoff")
o.DialTimeout = q.duration("dial_timeout")
o.ReadTimeout = q.duration("read_timeout")
o.WriteTimeout = q.duration("write_timeout")
o.ContextTimeoutEnabled = q.bool("context_timeout_enabled")
o.PoolFIFO = q.bool("pool_fifo")
o.PoolSize = q.int("pool_size")
o.MinIdleConns = q.int("min_idle_conns")
o.MaxIdleConns = q.int("max_idle_conns")
o.MaxActiveConns = q.int("max_active_conns")
o.ConnMaxLifetime = q.duration("conn_max_lifetime")
o.ConnMaxIdleTime = q.duration("conn_max_idle_time")
o.PoolTimeout = q.duration("pool_timeout")
o.DisableIdentity = q.bool("disableIdentity")
o.IdentitySuffix = q.string("identitySuffix")
o.UnstableResp3 = q.bool("unstable_resp3")
if q.err != nil {
return nil, q.err
}
if tmp := q.string("db"); tmp != "" {
db, err := strconv.Atoi(tmp)
if err != nil {
return nil, fmt.Errorf("redis: invalid database number: %w", err)
}
o.DB = db
}
addrs := q.strings("addr")
for _, addr := range addrs {
h, p, err := net.SplitHostPort(addr)
if err != nil || h == "" || p == "" {
return nil, fmt.Errorf("redis: unable to parse addr param: %s", addr)
}
o.SentinelAddrs = append(o.SentinelAddrs, net.JoinHostPort(h, p))
}
// any parameters left?
if r := q.remaining(); len(r) > 0 {
return nil, fmt.Errorf("redis: unexpected option: %s", strings.Join(r, ", "))
}
return o, nil
}
// NewFailoverClient returns a Redis client that uses Redis Sentinel
// for automatic failover. It's safe for concurrent use by multiple
// goroutines.
func NewFailoverClient(failoverOpt *FailoverOptions) *Client {
if failoverOpt == nil {
panic("redis: NewFailoverClient nil options")
}
if failoverOpt.RouteByLatency {
panic("to route commands by latency, use NewFailoverClusterClient")
}
@ -257,7 +404,7 @@ func NewFailoverClient(failoverOpt *FailoverOptions) *Client {
connPool = newConnPool(opt, rdb.dialHook)
rdb.connPool = connPool
rdb.onClose = failover.Close
rdb.onClose = rdb.wrappedOnClose(failover.Close)
failover.mu.Lock()
failover.onFailover = func(ctx context.Context, addr string) {
@ -308,10 +455,12 @@ func masterReplicaDialer(
// SentinelClient is a client for a Redis Sentinel.
type SentinelClient struct {
*baseClient
hooksMixin
}
func NewSentinelClient(opt *Options) *SentinelClient {
if opt == nil {
panic("redis: NewSentinelClient nil options")
}
opt.init()
c := &SentinelClient{
baseClient: &baseClient{
@ -566,29 +715,50 @@ func (c *sentinelFailover) MasterAddr(ctx context.Context) (string, error) {
}
}
var (
masterAddr string
wg sync.WaitGroup
once sync.Once
errCh = make(chan error, len(c.sentinelAddrs))
)
ctx, cancel := context.WithCancel(ctx)
defer cancel()
for i, sentinelAddr := range c.sentinelAddrs {
sentinel := NewSentinelClient(c.opt.sentinelOptions(sentinelAddr))
masterAddr, err := sentinel.GetMasterAddrByName(ctx, c.opt.MasterName).Result()
if err != nil {
_ = sentinel.Close()
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return "", err
wg.Add(1)
go func(i int, addr string) {
defer wg.Done()
sentinelCli := NewSentinelClient(c.opt.sentinelOptions(addr))
addrVal, err := sentinelCli.GetMasterAddrByName(ctx, c.opt.MasterName).Result()
if err != nil {
internal.Logger.Printf(ctx, "sentinel: GetMasterAddrByName addr=%s, master=%q failed: %s",
addr, c.opt.MasterName, err)
_ = sentinelCli.Close()
errCh <- err
return
}
internal.Logger.Printf(ctx, "sentinel: GetMasterAddrByName master=%q failed: %s",
c.opt.MasterName, err)
continue
}
// Push working sentinel to the top.
c.sentinelAddrs[0], c.sentinelAddrs[i] = c.sentinelAddrs[i], c.sentinelAddrs[0]
c.setSentinel(ctx, sentinel)
addr := net.JoinHostPort(masterAddr[0], masterAddr[1])
return addr, nil
once.Do(func() {
masterAddr = net.JoinHostPort(addrVal[0], addrVal[1])
// Push working sentinel to the top
c.sentinelAddrs[0], c.sentinelAddrs[i] = c.sentinelAddrs[i], c.sentinelAddrs[0]
c.setSentinel(ctx, sentinelCli)
internal.Logger.Printf(ctx, "sentinel: selected addr=%s masterAddr=%s", addr, masterAddr)
cancel()
})
}(i, sentinelAddr)
}
return "", errors.New("redis: all sentinels specified in configuration are unreachable")
wg.Wait()
close(errCh)
if masterAddr != "" {
return masterAddr, nil
}
errs := make([]error, 0, len(errCh))
for err := range errCh {
errs = append(errs, err)
}
return "", fmt.Errorf("redis: all sentinels specified in configuration are unreachable: %w", errors.Join(errs...))
}
func (c *sentinelFailover) replicaAddrs(ctx context.Context, useDisconnected bool) ([]string, error) {
@ -806,6 +976,10 @@ func contains(slice []string, str string) bool {
// NewFailoverClusterClient returns a client that supports routing read-only commands
// to a replica node.
func NewFailoverClusterClient(failoverOpt *FailoverOptions) *ClusterClient {
if failoverOpt == nil {
panic("redis: NewFailoverClusterClient nil options")
}
sentinelAddrs := make([]string, len(failoverOpt.SentinelAddrs))
copy(sentinelAddrs, failoverOpt.SentinelAddrs)
@ -815,6 +989,22 @@ func NewFailoverClusterClient(failoverOpt *FailoverOptions) *ClusterClient {
}
opt := failoverOpt.clusterOptions()
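// ClusterOptions has no DB field, so when a non-default DB is requested we wrap
// OnConnect to SELECT the database on every new connection before chaining to
// any user-supplied callback.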
if failoverOpt.DB != 0 {
onConnect := opt.OnConnect
opt.OnConnect = func(ctx context.Context, cn *Conn) error {
if err := cn.Select(ctx, failoverOpt.DB).Err(); err != nil {
return err
}
if onConnect != nil {
return onConnect(ctx, cn)
}
return nil
}
}
opt.ClusterSlots = func(ctx context.Context) ([]ClusterSlot, error) {
masterAddr, err := failover.MasterAddr(ctx)
if err != nil {


@ -19,16 +19,15 @@ type Tx struct {
baseClient
cmdable
statefulCmdable
hooksMixin
}
func (c *Client) newTx() *Tx {
tx := Tx{
baseClient: baseClient{
opt: c.opt,
connPool: pool.NewStickyConnPool(c.connPool),
opt: c.opt,
connPool: pool.NewStickyConnPool(c.connPool),
hooksMixin: c.hooksMixin.clone(),
},
hooksMixin: c.hooksMixin.clone(),
}
tx.init()
return &tx


@ -80,6 +80,8 @@ type UniversalOptions struct {
IdentitySuffix string
UnstableResp3 bool
// IsClusterMode can be used when only one address is provided in Addrs (e.g. ElastiCache supports setting up cluster mode with a configuration endpoint).
IsClusterMode bool
}
// Cluster returns cluster options created from the universal options.
@ -152,6 +154,9 @@ func (o *UniversalOptions) Failover() *FailoverOptions {
SentinelUsername: o.SentinelUsername,
SentinelPassword: o.SentinelPassword,
RouteByLatency: o.RouteByLatency,
RouteRandomly: o.RouteRandomly,
MaxRetries: o.MaxRetries,
MinRetryBackoff: o.MinRetryBackoff,
MaxRetryBackoff: o.MaxRetryBackoff,
@ -172,6 +177,8 @@ func (o *UniversalOptions) Failover() *FailoverOptions {
TLSConfig: o.TLSConfig,
ReplicaOnly: o.ReadOnly,
DisableIdentity: o.DisableIdentity,
DisableIndentity: o.DisableIndentity,
IdentitySuffix: o.IdentitySuffix,
@ -252,14 +259,26 @@ var (
// NewUniversalClient returns a new multi client. The type of the returned client depends
// on the following conditions:
//
// 1. If the MasterName option is specified, a sentinel-backed FailoverClient is returned.
// 2. if the number of Addrs is two or more, a ClusterClient is returned.
// 3. Otherwise, a single-node Client is returned.
// 1. If the MasterName option is specified with RouteByLatency, RouteRandomly or IsClusterMode,
// a FailoverClusterClient is returned.
// 2. If the MasterName option is specified without RouteByLatency, RouteRandomly or IsClusterMode,
// a sentinel-backed FailoverClient is returned.
// 3. If the number of Addrs is two or more, or IsClusterMode option is specified,
// a ClusterClient is returned.
// 4. Otherwise, a single-node Client is returned.
func NewUniversalClient(opts *UniversalOptions) UniversalClient {
if opts.MasterName != "" {
return NewFailoverClient(opts.Failover())
} else if len(opts.Addrs) > 1 {
return NewClusterClient(opts.Cluster())
if opts == nil {
panic("redis: NewUniversalClient nil options")
}
switch {
case opts.MasterName != "" && (opts.RouteByLatency || opts.RouteRandomly || opts.IsClusterMode):
return NewFailoverClusterClient(opts.Failover())
case opts.MasterName != "":
return NewFailoverClient(opts.Failover())
case len(opts.Addrs) > 1 || opts.IsClusterMode:
return NewClusterClient(opts.Cluster())
default:
return NewClient(opts.Simple())
}
return NewClient(opts.Simple())
}
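// Selection sketch (illustrative, not part of the vendored library; addresses
// and the master name are hypothetical): which concrete client each option
// combination yields.
//
//	// Sentinel master name plus read routing -> FailoverClusterClient.
//	_ = NewUniversalClient(&UniversalOptions{Addrs: []string{":26379"}, MasterName: "mymaster", RouteRandomly: true})
//
//	// A single configuration endpoint with IsClusterMode -> ClusterClient.
//	_ = NewUniversalClient(&UniversalOptions{Addrs: []string{"cfg.example.com:6379"}, IsClusterMode: true})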


@ -0,0 +1,348 @@
package redis
import (
"context"
"encoding/json"
"strconv"
)
// note: the API is experimental and may be subject to change.
type VectorSetCmdable interface {
VAdd(ctx context.Context, key, element string, val Vector) *BoolCmd
VAddWithArgs(ctx context.Context, key, element string, val Vector, addArgs *VAddArgs) *BoolCmd
VCard(ctx context.Context, key string) *IntCmd
VDim(ctx context.Context, key string) *IntCmd
VEmb(ctx context.Context, key, element string, raw bool) *SliceCmd
VGetAttr(ctx context.Context, key, element string) *StringCmd
VInfo(ctx context.Context, key string) *MapStringInterfaceCmd
VLinks(ctx context.Context, key, element string) *StringSliceCmd
VLinksWithScores(ctx context.Context, key, element string) *VectorScoreSliceCmd
VRandMember(ctx context.Context, key string) *StringCmd
VRandMemberCount(ctx context.Context, key string, count int) *StringSliceCmd
VRem(ctx context.Context, key, element string) *BoolCmd
VSetAttr(ctx context.Context, key, element string, attr interface{}) *BoolCmd
VClearAttributes(ctx context.Context, key, element string) *BoolCmd
VSim(ctx context.Context, key string, val Vector) *StringSliceCmd
VSimWithScores(ctx context.Context, key string, val Vector) *VectorScoreSliceCmd
VSimWithArgs(ctx context.Context, key string, val Vector, args *VSimArgs) *StringSliceCmd
VSimWithArgsWithScores(ctx context.Context, key string, val Vector, args *VSimArgs) *VectorScoreSliceCmd
}
type Vector interface {
Value() []any
}
const (
vectorFormatFP32 string = "FP32"
vectorFormatValues string = "Values"
)
type VectorFP32 struct {
Val []byte
}
func (v *VectorFP32) Value() []any {
return []any{vectorFormatFP32, v.Val}
}
var _ Vector = (*VectorFP32)(nil)
type VectorValues struct {
Val []float64
}
func (v *VectorValues) Value() []any {
res := make([]any, 2+len(v.Val))
res[0] = vectorFormatValues
res[1] = len(v.Val)
for i, v := range v.Val {
res[2+i] = v
}
return res
}
var _ Vector = (*VectorValues)(nil)
type VectorRef struct {
Name string // the name of the referent vector
}
func (v *VectorRef) Value() []any {
return []any{"ele", v.Name}
}
var _ Vector = (*VectorRef)(nil)
type VectorScore struct {
Name string
Score float64
}
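// Construction sketch (illustrative, not part of the vendored library; key and
// element names and the rdb client are hypothetical): how the Vector
// implementations above are serialized when passed to VADD/VSIM.
//
//	v := &VectorValues{Val: []float64{0.1, 0.2, 0.3}} // serialized as: Values 3 0.1 0.2 0.3
//	ok, addErr := rdb.VAdd(ctx, "vset:docs", "doc:1", v).Result()
//	ref := &VectorRef{Name: "doc:1"}                  // serialized as: ele doc:1
//	similar, simErr := rdb.VSim(ctx, "vset:docs", ref).Result()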
// `VADD key (FP32 | VALUES num) vector element`
// note: the API is experimental and may be subject to change.
func (c cmdable) VAdd(ctx context.Context, key, element string, val Vector) *BoolCmd {
return c.VAddWithArgs(ctx, key, element, val, &VAddArgs{})
}
type VAddArgs struct {
// the REDUCE option must be passed immediately after the key
Reduce int64
Cas bool
// The NoQuant, Q8 and Bin options are mutually exclusive.
NoQuant bool
Q8 bool
Bin bool
EF int64
SetAttr string
M int64
}
func (v VAddArgs) reduce() int64 {
return v.Reduce
}
func (v VAddArgs) appendArgs(args []any) []any {
if v.Cas {
args = append(args, "cas")
}
if v.NoQuant {
args = append(args, "noquant")
} else if v.Q8 {
args = append(args, "q8")
} else if v.Bin {
args = append(args, "bin")
}
if v.EF > 0 {
args = append(args, "ef", strconv.FormatInt(v.EF, 10))
}
if len(v.SetAttr) > 0 {
args = append(args, "setattr", v.SetAttr)
}
if v.M > 0 {
args = append(args, "m", strconv.FormatInt(v.M, 10))
}
return args
}
// `VADD key [REDUCE dim] (FP32 | VALUES num) vector element [CAS] [NOQUANT | Q8 | BIN] [EF build-exploration-factor] [SETATTR attributes] [M numlinks]`
// note: the API is experimental and may be subject to change.
func (c cmdable) VAddWithArgs(ctx context.Context, key, element string, val Vector, addArgs *VAddArgs) *BoolCmd {
if addArgs == nil {
addArgs = &VAddArgs{}
}
args := []any{"vadd", key}
if addArgs.reduce() > 0 {
args = append(args, "reduce", addArgs.reduce())
}
args = append(args, val.Value()...)
args = append(args, element)
args = addArgs.appendArgs(args)
cmd := NewBoolCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
// `VCARD key`
// note: the API is experimental and may be subject to change.
func (c cmdable) VCard(ctx context.Context, key string) *IntCmd {
cmd := NewIntCmd(ctx, "vcard", key)
_ = c(ctx, cmd)
return cmd
}
// `VDIM key`
// note: the API is experimental and may be subject to change.
func (c cmdable) VDim(ctx context.Context, key string) *IntCmd {
cmd := NewIntCmd(ctx, "vdim", key)
_ = c(ctx, cmd)
return cmd
}
// `VEMB key element [RAW]`
// note: the API is experimental and may be subject to change.
func (c cmdable) VEmb(ctx context.Context, key, element string, raw bool) *SliceCmd {
args := []any{"vemb", key, element}
if raw {
args = append(args, "raw")
}
cmd := NewSliceCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
// `VGETATTR key element`
// note: the API is experimental and may be subject to change.
func (c cmdable) VGetAttr(ctx context.Context, key, element string) *StringCmd {
cmd := NewStringCmd(ctx, "vgetattr", key, element)
_ = c(ctx, cmd)
return cmd
}
// `VINFO key`
// note: the API is experimental and may be subject to change.
func (c cmdable) VInfo(ctx context.Context, key string) *MapStringInterfaceCmd {
cmd := NewMapStringInterfaceCmd(ctx, "vinfo", key)
_ = c(ctx, cmd)
return cmd
}
// `VLINKS key element`
// note: the API is experimental and may be subject to change.
func (c cmdable) VLinks(ctx context.Context, key, element string) *StringSliceCmd {
cmd := NewStringSliceCmd(ctx, "vlinks", key, element)
_ = c(ctx, cmd)
return cmd
}
// `VLINKS key element WITHSCORES`
// note: the API is experimental and may be subject to change.
func (c cmdable) VLinksWithScores(ctx context.Context, key, element string) *VectorScoreSliceCmd {
cmd := NewVectorInfoSliceCmd(ctx, "vlinks", key, element, "withscores")
_ = c(ctx, cmd)
return cmd
}
// `VRANDMEMBER key`
// note: the API is experimental and may be subject to change.
func (c cmdable) VRandMember(ctx context.Context, key string) *StringCmd {
cmd := NewStringCmd(ctx, "vrandmember", key)
_ = c(ctx, cmd)
return cmd
}
// `VRANDMEMBER key [count]`
// note: the API is experimental and may be subject to change.
func (c cmdable) VRandMemberCount(ctx context.Context, key string, count int) *StringSliceCmd {
cmd := NewStringSliceCmd(ctx, "vrandmember", key, count)
_ = c(ctx, cmd)
return cmd
}
// `VREM key element`
// note: the API is experimental and may be subject to change.
func (c cmdable) VRem(ctx context.Context, key, element string) *BoolCmd {
cmd := NewBoolCmd(ctx, "vrem", key, element)
_ = c(ctx, cmd)
return cmd
}
// `VSETATTR key element "{ JSON obj }"`
// The `attr` must be something that can be marshaled to JSON (using encoding/json), unless
// the argument is a string or []byte, in which case it is assumed to already be valid JSON and is passed through directly.
//
// note: the API is experimental and may be subject to change.
func (c cmdable) VSetAttr(ctx context.Context, key, element string, attr interface{}) *BoolCmd {
var attrStr string
var err error
switch v := attr.(type) {
case string:
attrStr = v
case []byte:
attrStr = string(v)
default:
var bytes []byte
bytes, err = json.Marshal(v)
if err != nil {
// If marshalling fails, create the command and set the error; this command won't be executed.
cmd := NewBoolCmd(ctx, "vsetattr", key, element, "")
cmd.SetErr(err)
return cmd
}
attrStr = string(bytes)
}
cmd := NewBoolCmd(ctx, "vsetattr", key, element, attrStr)
_ = c(ctx, cmd)
return cmd
}
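// Usage sketch (illustrative, not part of the vendored library; key, element,
// and attribute names are hypothetical): any value other than string/[]byte is
// marshaled with encoding/json, so a map or struct works directly.
//
//	attrs := map[string]any{"genre": "sci-fi", "year": 1979}
//	ok, err := rdb.VSetAttr(ctx, "vset:docs", "doc:1", attrs).Result() // sends VSETATTR vset:docs doc:1 {"genre":"sci-fi","year":1979}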
// `VClearAttributes` clears the attributes on a vector set element.
// It is implemented by executing `VSETATTR key element ""`.
// note: the API is experimental and may be subject to change.
func (c cmdable) VClearAttributes(ctx context.Context, key, element string) *BoolCmd {
cmd := NewBoolCmd(ctx, "vsetattr", key, element, "")
_ = c(ctx, cmd)
return cmd
}
// `VSIM key (ELE | FP32 | VALUES num) (vector | element)`
// note: the API is experimental and may be subject to change.
func (c cmdable) VSim(ctx context.Context, key string, val Vector) *StringSliceCmd {
return c.VSimWithArgs(ctx, key, val, &VSimArgs{})
}
// `VSIM key (ELE | FP32 | VALUES num) (vector | element) WITHSCORES`
// note: the API is experimental and may be subject to change.
func (c cmdable) VSimWithScores(ctx context.Context, key string, val Vector) *VectorScoreSliceCmd {
return c.VSimWithArgsWithScores(ctx, key, val, &VSimArgs{})
}
type VSimArgs struct {
Count int64
EF int64
Filter string
FilterEF int64
Truth bool
NoThread bool
// Redis's `VSIM` command also accepts an EPSILON option, but it is not documented on redis.io, so it is left commented out here.
// Epsilon float64
}
func (v VSimArgs) appendArgs(args []any) []any {
if v.Count > 0 {
args = append(args, "count", v.Count)
}
if v.EF > 0 {
args = append(args, "ef", v.EF)
}
if len(v.Filter) > 0 {
args = append(args, "filter", v.Filter)
}
if v.FilterEF > 0 {
args = append(args, "filter-ef", v.FilterEF)
}
if v.Truth {
args = append(args, "truth")
}
if v.NoThread {
args = append(args, "nothread")
}
// if v.Epsilon > 0 {
// args = append(args, "Epsilon", v.Epsilon)
// }
return args
}
// `VSIM key (ELE | FP32 | VALUES num) (vector | element) [COUNT num]
// [EF search-exploration-factor] [FILTER expression] [FILTER-EF max-filtering-effort] [TRUTH] [NOTHREAD]`
// note: the API is experimental and may be subject to change.
func (c cmdable) VSimWithArgs(ctx context.Context, key string, val Vector, simArgs *VSimArgs) *StringSliceCmd {
if simArgs == nil {
simArgs = &VSimArgs{}
}
args := []any{"vsim", key}
args = append(args, val.Value()...)
args = simArgs.appendArgs(args)
cmd := NewStringSliceCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
// `VSIM key (ELE | FP32 | VALUES num) (vector | element) [WITHSCORES] [COUNT num]
// [EF search-exploration-factor] [FILTER expression] [FILTER-EF max-filtering-effort] [TRUTH] [NOTHREAD]`
// note: the API is experimental and may be subject to change.
func (c cmdable) VSimWithArgsWithScores(ctx context.Context, key string, val Vector, simArgs *VSimArgs) *VectorScoreSliceCmd {
if simArgs == nil {
simArgs = &VSimArgs{}
}
args := []any{"vsim", key}
args = append(args, val.Value()...)
args = append(args, "withscores")
args = simArgs.appendArgs(args)
cmd := NewVectorInfoSliceCmd(ctx, args...)
_ = c(ctx, cmd)
return cmd
}
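// Filtered-similarity sketch (illustrative, not part of the vendored library;
// key, element, and attribute names are hypothetical): VSimArgs limits the
// candidate count and applies a filter over attributes set via VSetAttr.
//
//	args := &VSimArgs{Count: 5, Filter: ".year > 1975"}
//	hits, err := rdb.VSimWithArgsWithScores(ctx, "vset:docs", &VectorRef{Name: "doc:1"}, args).Result()
//	for _, h := range hits {
//		fmt.Println(h.Name, h.Score)
//	}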


@ -2,5 +2,5 @@ package redis
// Version is the current release version.
func Version() string {
return "9.7.3"
return "9.10.0"
}

vendor/modules.txt vendored

@ -140,7 +140,7 @@ github.com/cespare/xxhash/v2
# github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f
## explicit
github.com/dgryski/go-rendezvous
# github.com/eggsampler/acme/v3 v3.6.2-0.20250208073118-0466a0230941
# github.com/eggsampler/acme/v3 v3.6.2
## explicit; go 1.11
github.com/eggsampler/acme/v3
# github.com/felixge/httpsnoop v1.0.4
@ -265,9 +265,10 @@ github.com/redis/go-redis/extra/rediscmd/v9
# github.com/redis/go-redis/extra/redisotel/v9 v9.5.3
## explicit; go 1.19
github.com/redis/go-redis/extra/redisotel/v9
# github.com/redis/go-redis/v9 v9.7.3
# github.com/redis/go-redis/v9 v9.10.0
## explicit; go 1.18
github.com/redis/go-redis/v9
github.com/redis/go-redis/v9/auth
github.com/redis/go-redis/v9/internal
github.com/redis/go-redis/v9/internal/hashtag
github.com/redis/go-redis/v9/internal/hscan