Merge pull request #9094 from olemarkus/vault-vfs

Implement VFS for vault
This commit is contained in:
Kubernetes Prow Robot 2020-06-20 12:02:39 -07:00 committed by GitHub
commit 8b371acef0
183 changed files with 24346 additions and 14 deletions

View File

@ -103,6 +103,8 @@ func (f *Factory) Clientset() (simple.Clientset, error) {
},
KopsClient: kopsClient.Kops(),
}
} else if strings.HasPrefix(registryPath, "vault://") {
return nil, field.Invalid(field.NewPath("State Store"), registryPath, "Vault is not supported as a registry path")
} else {
basePath, err := vfs.Context.BuildVfsPath(registryPath)
if err != nil {

View File

@ -22,3 +22,4 @@ The following experimental features are currently available:
* `-SpotinstController` - Toggles the installation of the Spot controller addon off
* `+SkipEtcdVersionCheck` - Bypasses the check that etcd-manager is using a supported etcd version
* `+TerraformJSON` - Produce kubernetes.ts.json file instead of writing HCL v1 syntax. Can be consumed by terraform 0.12
* `+VFSVaultSupport` - Enables using Vault as the secret/key store

View File

@ -7,6 +7,8 @@
* On AWS kops now defaults to using launch templates instead of launch configurations.
* Clusters using the Amazon VPC CNI provider now perform an `ec2.DescribeInstanceTypes` call at instance launch time. In large clusters or AWS accounts this may lead to API throttling which could delay node readiness. If this becomes a problem please open a GitHub issue.
* Alpha support for HashiCorp Vault as a store for secrets and keys. See the [Vault state store docs](/state/#vault-vault).
# Breaking changes

View File

@ -186,3 +186,68 @@ if err != nil {
gcsClient, err := storage.New(httpClient)
```
## Vault (vault://)
As of kops 1.19, Vault can be used as a state store. This is currently an experimental feature, so you have to set the `VFSVaultSupport` feature flag to enable it.
The goal of the Vault store is to provide safe storage for the kops key and secret stores. It will not work as a kops registry/config store: among other things, etcd-manager is unable to read VFS control files from Vault, and Vault cannot be used as a backend for etcd backups.
```sh
export KOPS_FEATURE_FLAGS=VFSVaultSupport
```
### Node authentication and configuration
The Vault store uses AWS IAM auth to authenticate against the Vault server and expects the AWS auth method to be mounted at `/aws`.
Instructions for configuring your Vault server to accept IAM authentication are at https://learn.hashicorp.com/vault/identity-access-management/iam-authentication
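If the auth method and KV v2 secrets engine are not mounted yet, they can be enabled along these lines (a sketch; the mount names are examples and must match the paths used in the cluster spec below):
```sh
# Mount the AWS auth method at auth/aws and a KV v2 secrets engine.
vault auth enable -path=aws aws
vault secrets enable -path=<kv2 mount> -version=2 kv
```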
To configure kops to use the Vault store, add this to the cluster spec:
```yaml
spec:
secretStore: vault://<vault>:<port>/<kv2 mount>/clusters/<clustername>/secrets
keyStore: vault://<vault>:<port>/<kv2 mount>/clusters/<clustername>/keys
```
Each of the paths above is configurable, but they must be unique across all clusters. You also cannot use the same path for both `secretStore` and `keyStore`.
After launching your cluster you need to add the cluster roles to Vault, binding them to the cluster's IAM identity and granting them access to the appropriate secrets and keys. The nodes will wait until they can authenticate before completing provisioning.
#### Vault policies
Note that unlike the S3 state store, kops will not provision any Vault policies for you. You have to provide roles for both operators and nodes.
Using the example paths above, a policy for the cluster nodes could be:
```
path "kv-v2/metadata/<clustername>" {
capabilities = ["list"]
}
path "kv-v2/metadata/clusters/<clustername>/*" {
capabilities = ["list", "read"]
}
path "kv-v2/data/clusters/<clustername>/*" {
capabilities = ["read"]
}
```
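Assuming the policy is saved to a local file (the file name below is only an example), it can be uploaded with `vault policy write`:
```sh
vault policy write <policy> nodes-policy.hcl
```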
Once you add this policy, you can assign it to the IAM roles like this:
```sh
vault write auth/aws/role/masters.<clustername> auth_type=iam \
bound_iam_principal_arn=arn:aws:iam::<account>:role/masters.<clustername> policies=<policy> max_ttl=500h
vault write auth/aws/role/nodes.<clustername> auth_type=iam \
bound_iam_principal_arn=arn:aws:iam::<account>:role/nodes.<clustername> policies=<policy> max_ttl=500h
vault write auth/aws/config/client iam_server_id_header_value="<vault server hostname>"
```
Note that if you re-provision your cluster, you need to re-run the commands above in order for Vault to update the roles' internal IDs.
Vault is accessed over TLS by default. If you want to use plaintext instead, add `?tls=false` to the URL, for example `vault://<vault>:<port>/<kv2 mount>/clusters/<clustername>/keys?tls=false`.
### Client configuration
The `kops` CLI only expects the `VAULT_TOKEN` environment variable to be set to a valid token. You can use any authentication method to obtain a token and set it manually if the method does not do so automatically.
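For example, a token can be obtained with the same AWS IAM auth method the nodes use and exported in one step (the role name is an example; any auth method that yields a token works):
```sh
export VAULT_TOKEN="$(vault login -method=aws -token-only role=<role>)"
```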

1
go.mod
View File

@ -77,6 +77,7 @@ require (
github.com/gorilla/mux v1.7.3
github.com/hashicorp/hcl v1.0.0
github.com/hashicorp/hcl/v2 v2.3.0
github.com/hashicorp/vault/api v1.0.4
github.com/huandu/xstrings v1.2.0 // indirect
github.com/jacksontj/memberlistmesh v0.0.0-20190905163944-93462b9d2bb7
github.com/jpillora/backoff v0.0.0-20170918002102-8eab2debe79d

43
go.sum
View File

@ -155,6 +155,7 @@ github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
github.com/daviddengcn/go-colortext v0.0.0-20160507010035-511bcaf42ccd/go.mod h1:dv4zxwHi5C/8AeI+4gX4dCWOIvNi7I6JCSX0HvlKPgE=
github.com/denverdino/aliyungo v0.0.0-20191128015008-acd8035bbb1d h1:Rw/qE0ALLN27jXBKR9xTR+RFAA9s1yIZCKhfT1GgCGA=
github.com/denverdino/aliyungo v0.0.0-20191128015008-acd8035bbb1d/go.mod h1:dV8lFg6daOBZbT6/BDGIz6Y3WFGn8juu6G+CQ6LHtl0=
github.com/denverdino/aliyungo v0.0.0-20200327235253-d59c209c7e93 h1:ujQ4DKs+MqFWy/hoLAU4Ey/nQhqJ6pXyocMDbVJ4qSw=
github.com/dgrijalva/jwt-go v3.2.0+incompatible h1:7qlOGliEKZXTDg6OTjfoBKDXWrumCAMpl/TFQ4/5kLM=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
@ -203,6 +204,7 @@ github.com/fatih/camelcase v1.0.0/go.mod h1:yN2Sb0lFhZJUdVvtELVWefmrXpuZESvPmqwo
github.com/fatih/color v1.6.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.7.0 h1:DkWD4oS2D8LGGgTQ6IvwJJXSL5Vp2ffcQg58nFV38Ys=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M=
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
@ -222,6 +224,7 @@ github.com/go-critic/go-critic v0.3.5-0.20190526074819-1df300866540/go.mod h1:+s
github.com/go-ini/ini v1.51.0 h1:VPJKXGzbKlyExUE8f41aV57yxkYx5R49yR6n7flp0M0=
github.com/go-ini/ini v1.51.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-ldap/ldap v3.0.2+incompatible/go.mod h1:qfd9rJvER9Q0/D/Sqn1DfHRoBp40uXYvFoEVrNEPqRc=
github.com/go-lintpack/lintpack v0.5.2/go.mod h1:NwZuYi2nUHho8XEIZ6SIxihrnPoqBTDqfpXvXAN0sXM=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
@ -284,6 +287,7 @@ github.com/go-openapi/validate v0.19.5/go.mod h1:8DJv2CVJQ6kGNpFW6eV9N3JviE1C85n
github.com/go-ozzo/ozzo-validation v3.5.0+incompatible/go.mod h1:gsEKFIVnabGBt6mXmxK0MoFy+cZoTJY6mu5Ll3LVLBU=
github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-test/deep v1.0.2-0.20181118220953-042da051cf31/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA=
github.com/go-test/deep v1.0.3 h1:ZrJSEWsXzPOxaZnFteGEfooLba+ju3FYIbOrS+rQd68=
github.com/go-test/deep v1.0.3/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA=
github.com/go-toolsmith/astcast v1.0.0/go.mod h1:mt2OdQTeAQcY4DQgPSArJjHCcOwlX+Wl/kwN+LbLGQ4=
@ -332,6 +336,8 @@ github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3 h1:gyjaxf+svBWX08ZjK86iN9geUJF0H6gp2IRKX6Nf6/I=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/snappy v0.0.1 h1:Qgr9rKW7uDUkrbSmQeiDsGa8SjGyCOGtuasMWwvp2P4=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golangci/check v0.0.0-20180506172741-cfe4005ccda2/go.mod h1:k9Qvh+8juN+UKMCS/3jFtGICgW8O96FVaZsaxdzDkR4=
github.com/golangci/dupl v0.0.0-20180902072040-3e9179ac440a/go.mod h1:ryS0uhF+x9jgbj/N71xsEqODy9BN81/GonCZiOzirOk=
github.com/golangci/errcheck v0.0.0-20181223084120-ef45e06d44b6/go.mod h1:DbHgvLiFKX1Sh2T1w8Q/h4NAI8MHIpzCdnBUDTXU3I0=
@ -411,12 +417,22 @@ github.com/grpc-ecosystem/grpc-gateway v1.9.5 h1:UImYN5qQ8tuGpGE16ZmjvcTtTw24zw1
github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.1 h1:dH3aiDG9Jvb5r5+bYHsikaOUIpcM0xvgMXVoDkXMzJM=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-hclog v0.0.0-20180709165350-ff2cf002a8dd/go.mod h1:9bjs9uLqI8l75knNv3lV1kA55veR+WUPSiKIWcQHudI=
github.com/hashicorp/go-hclog v0.8.0/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrjA0H7acj2lQ=
github.com/hashicorp/go-immutable-radix v1.0.0 h1:AKDB1HM5PWEA7i4nhcpwOrO2byshxBjXVn/J/3+z5/0=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-msgpack v0.5.3 h1:zKjpN5BK/P5lMYrLmBHdBULWbJ0XpYR+7NGzqkZzoD4=
github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
github.com/hashicorp/go-multierror v1.0.0 h1:iVjPR7a6H0tWELX5NxNe7bYopibicUzc7uPribsnS6o=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-plugin v1.0.1/go.mod h1:++UyYGoz3o5w9ZzAdZxtQKrWWP+iqPBn3cQptSMzBuY=
github.com/hashicorp/go-retryablehttp v0.5.4 h1:1BZvpawXoJCWX6pNtow9+rpEj+3itIlutiqnntI6jOE=
github.com/hashicorp/go-retryablehttp v0.5.4/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=
github.com/hashicorp/go-rootcerts v1.0.1 h1:DMo4fmknnz0E0evoNYnV48RjWndOsmd6OW+09R3cEP8=
github.com/hashicorp/go-rootcerts v1.0.1/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-sockaddr v1.0.2 h1:ztczhD1jLxIRjVejw8gFomI1BQZOe2WoVOu0SyteCQc=
github.com/hashicorp/go-sockaddr v1.0.2/go.mod h1:rB4wwRAUzs07qva3c5SdrY/NEtAUjGlgmH/UkBUC97A=
@ -425,6 +441,7 @@ github.com/hashicorp/go-uuid v1.0.0 h1:RS8zrF7PhGwyNPOtxSClXXj9HA8feRnJzgnI1RJCS
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.1 h1:fv1ep09latC32wFoVwnqcnKJGnMSdBanPczbHAYm1BE=
github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-version v1.1.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/go-version v1.2.0 h1:3vNe/fWF5CBgRIguda1meWhsZHy3m8gCJ5wx+dIzX/E=
github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/golang-lru v0.0.0-20180201235237-0fb14efe8c47/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
@ -441,6 +458,13 @@ github.com/hashicorp/hcl/v2 v2.3.0 h1:iRly8YaMwTBAKhn1Ybk7VSdzbnopghktCD031P8ggU
github.com/hashicorp/hcl/v2 v2.3.0/go.mod h1:d+FwDBbOLvpAM3Z6J7gPj/VoAGkNe/gm352ZhjJ/Zv8=
github.com/hashicorp/memberlist v0.1.4 h1:gkyML/r71w3FL8gUi74Vk76avkj/9lYAY9lvg0OcoGs=
github.com/hashicorp/memberlist v0.1.4/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
github.com/hashicorp/vault v1.4.0 h1:pe4kdAs0Wogqv/3yi91FInkN5PXuY4IMnjgz3LurGtg=
github.com/hashicorp/vault/api v1.0.4 h1:j08Or/wryXT4AcHj1oCbMd7IijXcKzYUGw59LGu9onU=
github.com/hashicorp/vault/api v1.0.4/go.mod h1:gDcqh3WGcR1cpF5AJz/B1UFheUEneMoIospckxBxk6Q=
github.com/hashicorp/vault/sdk v0.1.13 h1:mOEPeOhT7jl0J4AMl1E705+BcmeRs1VmKNb9F0sMLy8=
github.com/hashicorp/vault/sdk v0.1.13/go.mod h1:B+hVj7TpuQY1Y/GPbCpffmgd+tSEwvhkWnjtSYCaS2M=
github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM=
github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM=
github.com/heketi/heketi v9.0.1-0.20190917153846-c2e2a4ab7ab9+incompatible/go.mod h1:bB9ly3RchcQqsQ9CpyaQwvva7RS5ytVoSoholZQON6o=
github.com/heketi/tests v0.0.0-20151005000721-f3775cbcefd6/go.mod h1:xGMAM8JLi7UkZt1i4FQeQy0R2T8GLUwQhOP5M1gBhy4=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
@ -558,15 +582,20 @@ github.com/miekg/dns v1.1.16/go.mod h1:YNV562EiewvSmpCB6/W4c6yqjK7Z+M/aIS1JHsIVe
github.com/mindprince/gonvml v0.0.0-20190828220739-9ebdce4bb989/go.mod h1:2eu9pRWp8mo84xCg6KswZ+USQHjwgRhNp06sozOdsTY=
github.com/mistifyio/go-zfs v2.1.1+incompatible/go.mod h1:8AuVvqP/mXw1px98n46wfvcGfQ4ci2FwoAjKYxuo3Z4=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/copystructure v1.0.0/go.mod h1:SNtv71yrdKgLRyLFxmLdkAbkKEFWgYaq1OVrnRcwhnw=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-ps v0.0.0-20170309133038-4fdf99ab2936/go.mod h1:r1VsdOzOPt1ZSrGZWFoNhsAedKnEd6r9Np1+5blZCWk=
github.com/mitchellh/go-testing-interface v0.0.0-20171004221916-a61a99592b77/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
github.com/mitchellh/go-wordwrap v1.0.0 h1:6GlHJ/LTGMrIJbwgdqdl2eEH8o+Exx/0m8ir9Gns0u4=
github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
github.com/mitchellh/mapstructure v0.0.0-20180220230111-00c29f56e238/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@ -588,6 +617,7 @@ github.com/naoina/go-stringutil v0.1.0/go.mod h1:XJ2SJL9jCtBh+P9q5btrd/Ylo8XwT/h
github.com/naoina/toml v0.1.1/go.mod h1:NBIhNtsFMo3G2szEBne+bO4gS192HuIYRqfvOWb4i1E=
github.com/nbutton23/zxcvbn-go v0.0.0-20160627004424-a22cb81b2ecd/go.mod h1:o96djdrsSGy3AWPyBgZMAGfxZNfgntdJG+11KU4QvbU=
github.com/nbutton23/zxcvbn-go v0.0.0-20171102151520-eafdab6b0663/go.mod h1:o96djdrsSGy3AWPyBgZMAGfxZNfgntdJG+11KU4QvbU=
github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA=
github.com/oklog/ulid v1.3.1 h1:EGfNDEx6MqHz8B3uNV6QAib1UR2Lm97sHi3ocA6ESJ4=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo=
@ -617,6 +647,7 @@ github.com/opencontainers/runtime-spec v1.0.0/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/
github.com/opencontainers/selinux v1.3.1-0.20190929122143-5215b1806f52/go.mod h1:+BLncwf63G4dgOzykXAxcmnFlUaOlkDdmw/CqsW6pjs=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c h1:Lgl0gzECD8GnQ5QCWA8o6BtfL6mDH5rQgM4/fX3avOs=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pborman/uuid v1.2.0 h1:J7Q5mO4ysT1dv8hyrUGHb9+ooztCXu1D8MY8DZYsu3g=
github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k=
github.com/pelletier/go-toml v1.1.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
@ -626,6 +657,8 @@ github.com/pelletier/go-toml v1.4.0 h1:u3Z1r+oOXJIkxqw34zVhyPgjBsm6X2wn21NWs/HfS
github.com/pelletier/go-toml v1.4.0/go.mod h1:PN7xzY2wHTK0K9p34ErDQMlFxa51Fk0OUruD3k1mMwo=
github.com/peterbourgon/diskv v2.0.1+incompatible h1:UBdAOUP5p4RWqPBg048CAvpKN+vxiaj6gdUUzhl4XmI=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pierrec/lz4 v2.0.5+incompatible h1:2xWsjqPFWcplujydGg4WmhC/6fZqK42wMM8aXeqhl0I=
github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pkg/errors v0.8.0 h1:WdK/asTD0HN+q6hsWO3/vpuAkAr+tw6aNJNDFFf0+qw=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
@ -679,6 +712,8 @@ github.com/russross/blackfriday v1.5.2 h1:HyvC0ARfnZBqnXwABFeSZHpKvJHJJfPz81GNue
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/ryanuber/go-glob v0.0.0-20170128012129-256dc444b735/go.mod h1:807d1WSdnB0XRJzKNil9Om6lcp/3a0v4qIHxIXzX/Yc=
github.com/ryanuber/go-glob v1.0.0 h1:iQh3xXAumdQ+4Ufa5b25cRpC5TYKlno6hsv6Cb3pkBk=
github.com/ryanuber/go-glob v1.0.0/go.mod h1:807d1WSdnB0XRJzKNil9Om6lcp/3a0v4qIHxIXzX/Yc=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529 h1:nn5Wsu0esKSJiIVhscUtVbo7ada43DJhG55ua/hjS5I=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
@ -903,12 +938,14 @@ golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190122071731-054c452bb702/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190124100055-b90733256f2e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190129075346-302c3dd5f1cc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190228124157-a34e9553db1e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313 h1:pczuHS43Cp2ktBEEmLwScxgjWsBSzdaQiKzUyf3DTTc=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190321052220-f7bb7a8bee54/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190403152447-81d4e9dc473e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d h1:+R4KGOnez64A81RvjARKc4UT5/tI9ujCIVX+P5KiHuI=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@ -925,6 +962,7 @@ golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fq
golang.org/x/text v0.0.0-20170915090833-1cbadb444a80/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20181227161524-e6919f6577db/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@ -984,14 +1022,17 @@ google.golang.org/appengine v1.6.2 h1:j8RI1yW0SkI+paT6uGwMlrMI/6zwYA6/CFil8rxOzG
google.golang.org/appengine v1.6.2/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190404172233-64821d5d2107/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873 h1:nfPFGzJkUDX6uBmpN/pSw7MbOAWegH5QDQuoXFHedLg=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55 h1:gSJIx1SDwno+2ElGhA4+qG2zF97qiUzTM+rQ0klBOcE=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.22.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.0 h1:AzbTB6ux+okLTzP8Ru1Xs41C303zdcfEht7MQnYJt5A=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
@ -1000,6 +1041,7 @@ google.golang.org/grpc v1.27.0 h1:rRYRFMVgRv6E0D70Skyfsr28tDXIuuPZyWGMPdMcnXg=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/asn1-ber.v1 v1.0.0-20181015200546-f715ec2f112d/go.mod h1:cuepJuh7vyXfUyUwEgHQXw849cJrilpS5NeIjOWESAw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@ -1022,6 +1064,7 @@ gopkg.in/mcuadros/go-syslog.v2 v2.2.1/go.mod h1:l5LPIyOOyIdQquNg+oU6Z3524YwrcqEm
gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/square/go-jose.v2 v2.2.2/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/square/go-jose.v2 v2.3.1 h1:SK5KegNXmKmqE342YYN2qPHEnUYeoMiXXl1poUlI+o4=
gopkg.in/square/go-jose.v2 v2.3.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=

View File

@ -19,6 +19,7 @@ package validation
import (
"fmt"
"net"
"strings"
"k8s.io/apimachinery/pkg/util/validation/field"
"k8s.io/kops/pkg/apis/kops"
@ -46,6 +47,10 @@ func ValidateCluster(c *kops.Cluster, strict bool) field.ErrorList {
return allErrs
}
if strings.HasPrefix(c.Spec.ConfigBase, "vault://") {
allErrs = append(allErrs, field.Invalid(fieldSpec.Child("configBase"), c.Spec.ConfigBase, "Vault is not supported as configBase"))
}
requiresSubnets := true
requiresNetworkCIDR := true
requiresSubnetCIDR := true
@ -346,6 +351,22 @@ func ValidateCluster(c *kops.Cluster, strict bool) field.ErrorList {
allErrs = append(allErrs, newValidateCluster(c)...)
if strings.HasPrefix(c.Spec.SecretStore, "vault://") {
if !featureflag.VFSVaultSupport.Enabled() {
allErrs = append(allErrs, field.Forbidden(fieldSpec.Child("secretStore"), "vault VFS is an experimental feature; set `export KOPS_FEATURE_FLAGS=VFSVaultSupport`"))
}
if kops.CloudProviderID(c.Spec.CloudProvider) != kops.CloudProviderAWS {
allErrs = append(allErrs, field.Forbidden(fieldSpec.Child("secretStore"), "Vault secret store is only available on AWS"))
}
}
if strings.HasPrefix(c.Spec.KeyStore, "vault://") {
if !featureflag.VFSVaultSupport.Enabled() {
allErrs = append(allErrs, field.Forbidden(fieldSpec.Child("keyStore"), "vault VFS is an experimental feature; set `export KOPS_FEATURE_FLAGS=VFSVaultSupport`"))
}
if kops.CloudProviderID(c.Spec.CloudProvider) != kops.CloudProviderAWS {
allErrs = append(allErrs, field.Forbidden(fieldSpec.Child("keyStore"), "Vault keystore is only available on AWS"))
}
}
return allErrs
}

View File

@ -1,4 +1,4 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
load("@io_bazel_rules_go//go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
@ -35,3 +35,13 @@ go_library(
"//vendor/k8s.io/klog:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["clientset_test.go"],
embed = [":go_default_library"],
deps = [
"//pkg/apis/kops:go_default_library",
"//util/pkg/vfs:go_default_library",
],
)

View File

@ -21,6 +21,8 @@ import (
"fmt"
"strings"
"k8s.io/klog"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/kops/pkg/apis/kops"
"k8s.io/kops/pkg/apis/kops/registry"
@ -75,32 +77,52 @@ func (c *VFSClientset) InstanceGroupsFor(cluster *kops.Cluster) kopsinternalvers
}
func (c *VFSClientset) SecretStore(cluster *kops.Cluster) (fi.SecretStore, error) {
configBase, err := registry.ConfigBase(cluster)
if err != nil {
return nil, err
if cluster.Spec.SecretStore == "" {
configBase, err := registry.ConfigBase(cluster)
if err != nil {
return nil, err
}
basedir := configBase.Join("secrets")
return secrets.NewVFSSecretStore(cluster, basedir), nil
} else {
storePath, err := vfs.Context.BuildVfsPath(cluster.Spec.SecretStore)
return secrets.NewVFSSecretStore(cluster, storePath), err
}
basedir := configBase.Join("secrets")
return secrets.NewVFSSecretStore(cluster, basedir), nil
}
func (c *VFSClientset) KeyStore(cluster *kops.Cluster) (fi.CAStore, error) {
configBase, err := registry.ConfigBase(cluster)
basedir, err := pkiPath(cluster)
if err != nil {
return nil, err
}
basedir := configBase.Join("pki")
return fi.NewVFSCAStore(cluster, basedir), nil
klog.V(8).Infof("Using keystore path: %q", basedir)
return fi.NewVFSCAStore(cluster, basedir), err
}
func (c *VFSClientset) SSHCredentialStore(cluster *kops.Cluster) (fi.SSHCredentialStore, error) {
configBase, err := registry.ConfigBase(cluster)
basedir, err := pkiPath(cluster)
if err != nil {
return nil, err
}
basedir := configBase.Join("pki")
return fi.NewVFSSSHCredentialStore(cluster, basedir), nil
}
func pkiPath(cluster *kops.Cluster) (vfs.Path, error) {
if cluster.Spec.KeyStore == "" {
configBase, err := registry.ConfigBase(cluster)
if err != nil {
return nil, err
}
return configBase.Join("pki"), nil
} else {
storePath, err := vfs.Context.BuildVfsPath(cluster.Spec.KeyStore)
return storePath, err
}
}
func DeleteAllClusterState(basePath vfs.Path) error {
paths, err := basePath.ReadTree()
if err != nil {
@ -153,7 +175,46 @@ func DeleteAllClusterState(basePath vfs.Path) error {
return nil
}
func deleteAllPaths(basePath vfs.Path) error {
paths, err := basePath.ReadTree()
if err != nil {
return fmt.Errorf("error listing files in state store: %v", err)
}
for _, path := range paths {
err = path.Remove()
if err != nil {
return fmt.Errorf("error deleting cluster file %s: %v", path, err)
}
}
return nil
}
func (c *VFSClientset) DeleteCluster(ctx context.Context, cluster *kops.Cluster) error {
secretStore := cluster.Spec.SecretStore
if secretStore != "" {
path, err := vfs.Context.BuildVfsPath(secretStore)
if err != nil {
return err
}
err = deleteAllPaths(path)
if err != nil {
return err
}
}
keyStore := cluster.Spec.KeyStore
if keyStore != "" && keyStore != secretStore {
path, err := vfs.Context.BuildVfsPath(keyStore)
if err != nil {
return err
}
err = deleteAllPaths(path)
if err != nil {
return err
}
}
configBase, err := registry.ConfigBase(cluster)
if err != nil {
return err

View File

@ -0,0 +1,72 @@
/*
Copyright 2020 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package vfsclientset
import (
"testing"
"k8s.io/kops/pkg/apis/kops"
"k8s.io/kops/util/pkg/vfs"
)
func TestSSHCredentialStoreOnConfigBase(t *testing.T) {
vfs.Context.ResetMemfsContext(true)
configBase := "memfs://some/config/base"
cluster := &kops.Cluster{
Spec: kops.ClusterSpec{
ConfigBase: configBase,
},
}
p, err := pkiPath(cluster)
if err != nil {
t.Errorf("Failed to create ssh path: %v", err)
}
actual := p.Path()
expected := configBase + "/pki"
if actual != expected {
t.Errorf("Expected %v, got %v", expected, actual)
}
}
func TestSSHCredentialStoreOnOwnCFS(t *testing.T) {
vfs.Context.ResetMemfsContext(true)
configBase := "memfs://some/config/base"
keyPath := "memfs://keys/some/config/base/pki"
cluster := &kops.Cluster{
Spec: kops.ClusterSpec{
ConfigBase: configBase,
KeyStore: keyPath,
},
}
p, err := pkiPath(cluster)
if err != nil {
t.Errorf("Failed to create ssh path: %v", err)
}
actual := p.Path()
expected := keyPath
if actual != expected {
t.Errorf("Expected %v, got %v", expected, actual)
}
}

View File

@ -80,6 +80,8 @@ var (
SpotinstHybrid = New("SpotinstHybrid", Bool(false))
// SpotinstController toggles the installation of the Spotinst controller addon.
SpotinstController = New("SpotinstController", Bool(true))
// VFSVaultSupport enables setting Vault as secret/keystore
VFSVaultSupport = New("VFSVaultSupport", Bool(false))
// VPCSkipEnableDNSSupport if set will make that a VPC does not need DNSSupport enabled.
VPCSkipEnableDNSSupport = New("VPCSkipEnableDNSSupport", Bool(false))
// SkipEtcdVersionCheck will bypass the check that etcd-manager is using a supported etcd version

View File

@ -356,6 +356,9 @@ func (b *PolicyBuilder) AddS3Permissions(p *Policy) (*Policy, error) {
} else if _, ok := vfsPath.(*vfs.MemFSPath); ok {
// Tests -ignore - nothing we can do in terms of IAM policy
klog.Warningf("ignoring memfs path %q for IAM policy builder", vfsPath)
} else if _, ok := vfsPath.(*vfs.VaultPath); ok {
// Vault access needs to come from somewhere else
klog.Warningf("ignoring valult path %q for IAM policy builder", vfsPath)
} else {
// We could implement this approach, but it seems better to
// get all clouds using cluster-readable storage

View File

@ -113,10 +113,13 @@ func (c *VFSSecretStore) DeleteSecret(name string) error {
func (c *VFSSecretStore) ListSecrets() ([]string, error) {
files, err := c.basedir.ReadDir()
var ids []string
if os.IsNotExist(err) {
return ids, nil
}
if err != nil {
return nil, fmt.Errorf("error listing secrets directory: %v", err)
}
var ids []string
for _, f := range files {
id := f.Base()
ids = append(ids, id)

View File

@ -19,6 +19,7 @@ package fi
import (
"math/big"
"math/rand"
"os"
"strings"
"testing"
"time"
@ -223,7 +224,6 @@ spec:
s.DeleteKeysetItem(keyset, "237054359138908419352140518924933177492")
_, err := pathMap["memfs://tests/private/ca/237054359138908419352140518924933177492.key"].ReadFile()
pathMap["memfs://tests/private/ca/237054359138908419352140518924933177492.key"].ReadFile()
if err == nil {
t.Fatalf("File memfs://tests/private/ca/237054359138908419352140518924933177492.key still exists")
}
@ -231,3 +231,197 @@ spec:
}
}
func TestVFSCAStoreRoundTripWithVault(t *testing.T) {
token := os.Getenv("VAULT_DEV_ROOT_TOKEN_ID")
if token == "" {
t.Skip("No vault dev token set")
}
if os.Getenv("VAULT_TOKEN") != token {
t.Skip("RoundTrip test needs VAULT_TOKEN == VAULT_DEV_ROOT_TOKEN_ID")
}
basePath, err := vfs.Context.BuildVfsPath("vault://localhost:8200/secret/clusters/vdscastoretest?tls=false")
if err != nil {
t.Fatalf("error building vfspath: %v", err)
}
s := &VFSCAStore{
basedir: basePath,
}
privateKeyData := "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEA4JwpEprZ5n8RIEt6jT2lAh+UDgRgx/4px21gjgywQivYHVxH\nAZexVb/E9pBa9Q2G9B1Q7TCO7YsUVRQy4JMDZVt+McFnWVwexnqBYFNcVjkEmDgA\ngvCYGE0P9d/RwRL4KuLHo+u6fv7P0jXMN+CpOxyLhYZZNa0ZOZDHsSiJSQSj9WGF\nGHrbCf0KVDpKieR1uBqHrRO+mLR5zkX2L58m74kjK4dsBhmjeq/7OAoTmiG2QgJ/\nP2IjyhiA2mRqY+hl55lwEUV/0yHYEkJC8LdGkwwZz2eF77aSPGmi/A2CSKgMwDTx\n9m+P7jcpWreYw6NG9BueGoDIve/tgFKwvVFF6QIDAQABAoIBAA0ktjaTfyrAxsTI\nBezb7Zr5NBW55dvuII299cd6MJo+rI/TRYhvUv48kY8IFXp/hyUjzgeDLunxmIf9\n/Zgsoic9Ol44/g45mMduhcGYPzAAeCdcJ5OB9rR9VfDCXyjYLlN8H8iU0734tTqM\n0V13tQ9zdSqkGPZOIcq/kR/pylbOZaQMe97BTlsAnOMSMKDgnftY4122Lq3GYy+t\nvpr+bKVaQZwvkLoSU3rECCaKaghgwCyX7jft9aEkhdJv+KlwbsGY6WErvxOaLWHd\ncuMQjGapY1Fa/4UD00mvrA260NyKfzrp6+P46RrVMwEYRJMIQ8YBAk6N6Hh7dc0G\n8Z6i1m0CgYEA9HeCJR0TSwbIQ1bDXUrzpftHuidG5BnSBtax/ND9qIPhR/FBW5nj\n22nwLc48KkyirlfIULd0ae4qVXJn7wfYcuX/cJMLDmSVtlM5Dzmi/91xRiFgIzx1\nAsbBzaFjISP2HpSgL+e9FtSXaaqeZVrflitVhYKUpI/AKV31qGHf04sCgYEA6zTV\n99Sb49Wdlns5IgsfnXl6ToRttB18lfEKcVfjAM4frnkk06JpFAZeR+9GGKUXZHqs\nz2qcplw4d/moCC6p3rYPBMLXsrGNEUFZqBlgz72QA6BBq3X0Cg1Bc2ZbK5VIzwkg\nST2SSux6ccROfgULmN5ZiLOtdUKNEZpFF3i3qtsCgYADT/s7dYFlatobz3kmMnXK\nsfTu2MllHdRys0YGHu7Q8biDuQkhrJwhxPW0KS83g4JQym+0aEfzh36bWcl+u6R7\nKhKj+9oSf9pndgk345gJz35RbPJYh+EuAHNvzdgCAvK6x1jETWeKf6btj5pF1U1i\nQ4QNIw/QiwIXjWZeubTGsQKBgQCbduLu2rLnlyyAaJZM8DlHZyH2gAXbBZpxqU8T\nt9mtkJDUS/KRiEoYGFV9CqS0aXrayVMsDfXY6B/S/UuZjO5u7LtklDzqOf1aKG3Q\ndGXPKibknqqJYH+bnUNjuYYNerETV57lijMGHuSYCf8vwLn3oxBfERRX61M/DU8Z\nworz/QKBgQDCTJI2+jdXg26XuYUmM4XXfnocfzAXhXBULt1nENcogNf1fcptAVtu\nBAiz4/HipQKqoWVUYmxfgbbLRKKLK0s0lOWKbYdVjhEm/m2ZU8wtXTagNwkIGoyq\nY/C1Lox4f1ROJnCjc/hfcOjcxX5M8A8peecHWlVtUPKTJgxQ7oMKcw==\n-----END RSA PRIVATE KEY-----\n"
privateKey, err := pki.ParsePEMPrivateKey([]byte(privateKeyData))
if err != nil {
t.Fatalf("error from ParsePEMPrivateKey: %v", err)
}
certData := "-----BEGIN CERTIFICATE-----\nMIIC2DCCAcCgAwIBAgIRALJXAkVj964tq67wMSI8oJQwDQYJKoZIhvcNAQELBQAw\nFTETMBEGA1UEAxMKa3ViZXJuZXRlczAeFw0xNzEyMjcyMzUyNDBaFw0yNzEyMjcy\nMzUyNDBaMBUxEzARBgNVBAMTCmt1YmVybmV0ZXMwggEiMA0GCSqGSIb3DQEBAQUA\nA4IBDwAwggEKAoIBAQDgnCkSmtnmfxEgS3qNPaUCH5QOBGDH/inHbWCODLBCK9gd\nXEcBl7FVv8T2kFr1DYb0HVDtMI7tixRVFDLgkwNlW34xwWdZXB7GeoFgU1xWOQSY\nOACC8JgYTQ/139HBEvgq4sej67p+/s/SNcw34Kk7HIuFhlk1rRk5kMexKIlJBKP1\nYYUYetsJ/QpUOkqJ5HW4GoetE76YtHnORfYvnybviSMrh2wGGaN6r/s4ChOaIbZC\nAn8/YiPKGIDaZGpj6GXnmXARRX/TIdgSQkLwt0aTDBnPZ4XvtpI8aaL8DYJIqAzA\nNPH2b4/uNylat5jDo0b0G54agMi97+2AUrC9UUXpAgMBAAGjIzAhMA4GA1UdDwEB\n/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4IBAQBVGR2r\nhzXzRMU5wriPQAJScszNORvoBpXfZoZ09FIupudFxBVU3d4hV9StKnQgPSGA5XQO\nHE97+BxJDuA/rB5oBUsMBjc7y1cde/T6hmi3rLoEYBSnSudCOXJE4G9/0f8byAJe\nrN8+No1r2VgZvZh6p74TEkXv/l3HBPWM7IdUV0HO9JDhSgOVF1fyQKJxRuLJR8jt\nO6mPH2UX0vMwVa4jvwtkddqk2OAdYQvH9rbDjjbzaiW0KnmdueRo92KHAN7BsDZy\nVpXHpqo1Kzg7D3fpaXCf5si7lqqrdJVXH4JC72zxsPehqgi8eIuqOBkiDWmRxAxh\n8yGeRx9AbknHh4Ia\n-----END CERTIFICATE-----\n"
cert, err := pki.ParsePEMCertificate([]byte(certData))
if err != nil {
t.Fatalf("error from ParsePEMCertificate: %v", err)
}
if err := s.StoreKeypair("ca", cert, privateKey); err != nil {
t.Fatalf("error from StoreKeypair: %v", err)
}
paths, err := basePath.ReadTree()
if err != nil {
t.Fatalf("error from ReadTree: %v", err)
}
pathMap := make(map[string]vfs.Path)
for _, p := range paths {
pathMap[p.Path()] = p
}
bp := basePath.Path()
for _, p := range []string{
bp + "/issued/ca/keyset.yaml",
bp + "/issued/ca/237054359138908419352140518924933177492.crt",
bp + "/private/ca/keyset.yaml",
bp + "/private/ca/237054359138908419352140518924933177492.key",
} {
if _, found := pathMap[p]; !found {
t.Fatalf("file not found: %v", p)
}
}
if len(pathMap) != 4 {
t.Fatalf("unexpected pathMap: %v", pathMap)
}
// Check issued/ca/keyset.yaml round-tripped
{
issuedKeysetYaml, err := pathMap[bp+"/issued/ca/keyset.yaml"].ReadFile()
if err != nil {
t.Fatalf("error reading file issued/ca/keyset.yaml: %v", err)
}
expected := `
apiVersion: kops.k8s.io/v1alpha2
kind: Keyset
metadata:
creationTimestamp: null
name: ca
spec:
keys:
- id: "237054359138908419352140518924933177492"
publicMaterial: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyRENDQWNDZ0F3SUJBZ0lSQUxKWEFrVmo5NjR0cTY3d01TSThvSlF3RFFZSktvWklodmNOQVFFTEJRQXcKRlRFVE1CRUdBMVVFQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB4TnpFeU1qY3lNelV5TkRCYUZ3MHlOekV5TWpjeQpNelV5TkRCYU1CVXhFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUURnbkNrU210bm1meEVnUzNxTlBhVUNINVFPQkdESC9pbkhiV0NPRExCQ0s5Z2QKWEVjQmw3RlZ2OFQya0ZyMURZYjBIVkR0TUk3dGl4UlZGRExna3dObFczNHh3V2RaWEI3R2VvRmdVMXhXT1FTWQpPQUNDOEpnWVRRLzEzOUhCRXZncTRzZWo2N3ArL3MvU05jdzM0S2s3SEl1RmhsazFyUms1a01leEtJbEpCS1AxCllZVVlldHNKL1FwVU9rcUo1SFc0R29ldEU3Nll0SG5PUmZZdm55YnZpU01yaDJ3R0dhTjZyL3M0Q2hPYUliWkMKQW44L1lpUEtHSURhWkdwajZHWG5tWEFSUlgvVElkZ1NRa0x3dDBhVERCblBaNFh2dHBJOGFhTDhEWUpJcUF6QQpOUEgyYjQvdU55bGF0NWpEbzBiMEc1NGFnTWk5NysyQVVyQzlVVVhwQWdNQkFBR2pJekFoTUE0R0ExVWREd0VCCi93UUVBd0lCQmpBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFCVkdSMnIKaHpYelJNVTV3cmlQUUFKU2Nzek5PUnZvQnBYZlpvWjA5Rkl1cHVkRnhCVlUzZDRoVjlTdEtuUWdQU0dBNVhRTwpIRTk3K0J4SkR1QS9yQjVvQlVzTUJqYzd5MWNkZS9UNmhtaTNyTG9FWUJTblN1ZENPWEpFNEc5LzBmOGJ5QUplCnJOOCtObzFyMlZnWnZaaDZwNzRURWtYdi9sM0hCUFdNN0lkVVYwSE85SkRoU2dPVkYxZnlRS0p4UnVMSlI4anQKTzZtUEgyVVgwdk13VmE0anZ3dGtkZHFrMk9BZFlRdkg5cmJEampiemFpVzBLbm1kdWVSbzkyS0hBTjdCc0RaeQpWcFhIcHFvMUt6ZzdEM2ZwYVhDZjVzaTdscXFyZEpWWEg0SkM3Mnp4c1BlaHFnaThlSXVxT0JraURXbVJ4QXhoCjh5R2VSeDlBYmtuSGg0SWEKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
type: Keypair
`
if strings.TrimSpace(string(issuedKeysetYaml)) != strings.TrimSpace(expected) {
t.Fatalf("unexpected issued/ca/keyset.yaml: %q", string(issuedKeysetYaml))
}
pool, err := s.FindCertificatePool("ca")
if err != nil {
t.Fatalf("error reading certificate pool: %v", err)
}
if len(pool.Secondary) != 0 {
t.Fatalf("unexpected secondary certificates: %v", pool)
}
if pool.Primary == nil {
t.Fatalf("primary certificate was nil: %v", pool)
}
roundTrip, err := pool.Primary.AsString()
if err != nil {
t.Fatalf("error serializing primary cert: %v", err)
}
if roundTrip != certData {
t.Fatalf("unexpected round-tripped certificate data: %q", roundTrip)
}
}
// Check issued/ca/237054359138908419352140518924933177492.crt round-tripped
{
roundTrip, err := pathMap[bp+"/issued/ca/237054359138908419352140518924933177492.crt"].ReadFile()
if err != nil {
t.Fatalf("error reading file issued/ca/237054359138908419352140518924933177492.crt: %v", err)
}
if string(roundTrip) != certData {
t.Fatalf("unexpected round-tripped certificate data: %q", string(roundTrip))
}
}
// Check private/ca/keyset.yaml round-tripped
{
privateKeysetYaml, err := pathMap[bp+"/private/ca/keyset.yaml"].ReadFile()
if err != nil {
t.Fatalf("error reading file private/ca/keyset.yaml: %v", err)
}
expected := `
apiVersion: kops.k8s.io/v1alpha2
kind: Keyset
metadata:
creationTimestamp: null
name: ca
spec:
keys:
- id: "237054359138908419352140518924933177492"
privateMaterial: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBNEp3cEVwclo1bjhSSUV0NmpUMmxBaCtVRGdSZ3gvNHB4MjFnamd5d1FpdllIVnhICkFaZXhWYi9FOXBCYTlRMkc5QjFRN1RDTzdZc1VWUlF5NEpNRFpWdCtNY0ZuV1Z3ZXhucUJZRk5jVmprRW1EZ0EKZ3ZDWUdFMFA5ZC9Sd1JMNEt1TEhvK3U2ZnY3UDBqWE1OK0NwT3h5TGhZWlpOYTBaT1pESHNTaUpTUVNqOVdHRgpHSHJiQ2YwS1ZEcEtpZVIxdUJxSHJSTyttTFI1emtYMkw1OG03NGtqSzRkc0JobWplcS83T0FvVG1pRzJRZ0ovClAySWp5aGlBMm1ScVkraGw1NWx3RVVWLzB5SFlFa0pDOExkR2t3d1p6MmVGNzdhU1BHbWkvQTJDU0tnTXdEVHgKOW0rUDdqY3BXcmVZdzZORzlCdWVHb0RJdmUvdGdGS3d2VkZGNlFJREFRQUJBb0lCQUEwa3RqYVRmeXJBeHNUSQpCZXpiN1pyNU5CVzU1ZHZ1SUkyOTljZDZNSm8rckkvVFJZaHZVdjQ4a1k4SUZYcC9oeVVqemdlREx1bnhtSWY5Ci9aZ3NvaWM5T2w0NC9nNDVtTWR1aGNHWVB6QUFlQ2RjSjVPQjlyUjlWZkRDWHlqWUxsTjhIOGlVMDczNHRUcU0KMFYxM3RROXpkU3FrR1BaT0ljcS9rUi9weWxiT1phUU1lOTdCVGxzQW5PTVNNS0RnbmZ0WTQxMjJMcTNHWXkrdAp2cHIrYktWYVFad3ZrTG9TVTNyRUNDYUthZ2hnd0N5WDdqZnQ5YUVraGRKditLbHdic0dZNldFcnZ4T2FMV0hkCmN1TVFqR2FwWTFGYS80VUQwMG12ckEyNjBOeUtmenJwNitQNDZSclZNd0VZUkpNSVE4WUJBazZONkhoN2RjMEcKOFo2aTFtMENnWUVBOUhlQ0pSMFRTd2JJUTFiRFhVcnpwZnRIdWlkRzVCblNCdGF4L05EOXFJUGhSL0ZCVzVuagoyMm53TGM0OEtreWlybGZJVUxkMGFlNHFWWEpuN3dmWWN1WC9jSk1MRG1TVnRsTTVEem1pLzkxeFJpRmdJengxCkFzYkJ6YUZqSVNQMkhwU2dMK2U5RnRTWGFhcWVaVnJmbGl0VmhZS1VwSS9BS1YzMXFHSGYwNHNDZ1lFQTZ6VFYKOTlTYjQ5V2RsbnM1SWdzZm5YbDZUb1J0dEIxOGxmRUtjVmZqQU00ZnJua2swNkpwRkFaZVIrOUdHS1VYWkhxcwp6MnFjcGx3NGQvbW9DQzZwM3JZUEJNTFhzckdORVVGWnFCbGd6NzJRQTZCQnEzWDBDZzFCYzJaYks1Vkl6d2tnClNUMlNTdXg2Y2NST2ZnVUxtTjVaaUxPdGRVS05FWnBGRjNpM3F0c0NnWUFEVC9zN2RZRmxhdG9iejNrbU1uWEsKc2ZUdTJNbGxIZFJ5czBZR0h1N1E4YmlEdVFraHJKd2h4UFcwS1M4M2c0SlF5bSswYUVmemgzNmJXY2wrdTZSNwpLaEtqKzlvU2Y5cG5kZ2szNDVnSnozNVJiUEpZaCtFdUFITnZ6ZGdDQXZLNngxakVUV2VLZjZidGo1cEYxVTFpClE0UU5Jdy9RaXdJWGpXWmV1YlRHc1FLQmdRQ2JkdUx1MnJMbmx5eUFhSlpNOERsSFp5SDJnQVhiQlpweHFVOFQKdDltdGtKRFVTL0tSaUVvWUdGVjlDcVMwYVhyYXlWTXNEZlhZNkIvUy9VdVpqTzV1N0x0a2xEenFPZjFhS0czUQpkR1hQS2lia25xcUpZSCtiblVOanVZWU5lckVUVjU3bGlqTUdIdVNZQ2Y4dndMbjNveEJmRVJSWDYxTS9EVThaCndvcnovUUtCZ1FEQ1RKSTIramRYZzI2WHVZVW1NNFhYZm5vY2Z6QVhoWEJVTHQxbkVOY29nTmYxZmNwdEFWdHUKQkFpejQvSGlwUUtxb1dWVVlteGZnYmJMUktLTEswczBsT1dLYllkVmpoRW0vbTJaVTh3dFhUYWdOd2tJR295cQpZL0MxTG94NGYxUk9KbkNqYy9oZmNPamN4WDVNOEE4cGVlY0hXbFZ0VVBLVEpneFE3b01LY3c9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
publicMaterial: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMyRENDQWNDZ0F3SUJBZ0lSQUxKWEFrVmo5NjR0cTY3d01TSThvSlF3RFFZSktvWklodmNOQVFFTEJRQXcKRlRFVE1CRUdBMVVFQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB4TnpFeU1qY3lNelV5TkRCYUZ3MHlOekV5TWpjeQpNelV5TkRCYU1CVXhFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUURnbkNrU210bm1meEVnUzNxTlBhVUNINVFPQkdESC9pbkhiV0NPRExCQ0s5Z2QKWEVjQmw3RlZ2OFQya0ZyMURZYjBIVkR0TUk3dGl4UlZGRExna3dObFczNHh3V2RaWEI3R2VvRmdVMXhXT1FTWQpPQUNDOEpnWVRRLzEzOUhCRXZncTRzZWo2N3ArL3MvU05jdzM0S2s3SEl1RmhsazFyUms1a01leEtJbEpCS1AxCllZVVlldHNKL1FwVU9rcUo1SFc0R29ldEU3Nll0SG5PUmZZdm55YnZpU01yaDJ3R0dhTjZyL3M0Q2hPYUliWkMKQW44L1lpUEtHSURhWkdwajZHWG5tWEFSUlgvVElkZ1NRa0x3dDBhVERCblBaNFh2dHBJOGFhTDhEWUpJcUF6QQpOUEgyYjQvdU55bGF0NWpEbzBiMEc1NGFnTWk5NysyQVVyQzlVVVhwQWdNQkFBR2pJekFoTUE0R0ExVWREd0VCCi93UUVBd0lCQmpBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFCVkdSMnIKaHpYelJNVTV3cmlQUUFKU2Nzek5PUnZvQnBYZlpvWjA5Rkl1cHVkRnhCVlUzZDRoVjlTdEtuUWdQU0dBNVhRTwpIRTk3K0J4SkR1QS9yQjVvQlVzTUJqYzd5MWNkZS9UNmhtaTNyTG9FWUJTblN1ZENPWEpFNEc5LzBmOGJ5QUplCnJOOCtObzFyMlZnWnZaaDZwNzRURWtYdi9sM0hCUFdNN0lkVVYwSE85SkRoU2dPVkYxZnlRS0p4UnVMSlI4anQKTzZtUEgyVVgwdk13VmE0anZ3dGtkZHFrMk9BZFlRdkg5cmJEampiemFpVzBLbm1kdWVSbzkyS0hBTjdCc0RaeQpWcFhIcHFvMUt6ZzdEM2ZwYVhDZjVzaTdscXFyZEpWWEg0SkM3Mnp4c1BlaHFnaThlSXVxT0JraURXbVJ4QXhoCjh5R2VSeDlBYmtuSGg0SWEKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
type: Keypair
`
if strings.TrimSpace(string(privateKeysetYaml)) != strings.TrimSpace(expected) {
t.Fatalf("unexpected private/ca/keyset.yaml: %q", string(privateKeysetYaml))
}
key, err := s.FindPrivateKey("ca")
if err != nil {
t.Fatalf("error reading certificate pool: %v", err)
}
if key == nil {
t.Fatalf("private key was nil")
}
roundTrip, err := key.AsString()
if err != nil {
t.Fatalf("error serializing private key: %v", err)
}
if roundTrip != privateKeyData {
t.Fatalf("unexpected round-tripped private key data: %q", roundTrip)
}
}
// Check private/ca/237054359138908419352140518924933177492.key round-tripped
{
roundTrip, err := pathMap[bp+"/private/ca/237054359138908419352140518924933177492.key"].ReadFile()
if err != nil {
t.Fatalf("error reading file private/ca/237054359138908419352140518924933177492.key: %v", err)
}
if string(roundTrip) != privateKeyData {
t.Fatalf("unexpected round-tripped private key data: %q", string(roundTrip))
}
}
// Check that keyset gets deleted
{
keyset := &kops.Keyset{}
keyset.Name = "ca"
keyset.Spec.Type = kops.SecretTypeKeypair
s.DeleteKeysetItem(keyset, "237054359138908419352140518924933177492")
_, err := pathMap[bp+"/private/ca/237054359138908419352140518924933177492.key"].ReadFile()
if err == nil {
t.Fatalf("File private/ca/237054359138908419352140518924933177492.key still exists")
}
}
}

View File

@ -16,6 +16,8 @@ go_library(
"s3fs.go",
"sshfs.go",
"swiftfs.go",
"vaultcontext.go",
"vaultfs.go",
"vfs.go",
"writeoption.go",
],
@ -32,6 +34,7 @@ go_library(
"//vendor/github.com/aws/aws-sdk-go/aws/session:go_default_library",
"//vendor/github.com/aws/aws-sdk-go/service/ec2:go_default_library",
"//vendor/github.com/aws/aws-sdk-go/service/s3:go_default_library",
"//vendor/github.com/aws/aws-sdk-go/service/sts:go_default_library",
"//vendor/github.com/denverdino/aliyungo/metadata:go_default_library",
"//vendor/github.com/denverdino/aliyungo/oss:go_default_library",
"//vendor/github.com/go-ini/ini:go_default_library",
@ -40,6 +43,7 @@ go_library(
"//vendor/github.com/gophercloud/gophercloud/openstack/objectstorage/v1/containers:go_default_library",
"//vendor/github.com/gophercloud/gophercloud/openstack/objectstorage/v1/objects:go_default_library",
"//vendor/github.com/gophercloud/gophercloud/pagination:go_default_library",
"//vendor/github.com/hashicorp/vault/api:go_default_library",
"//vendor/github.com/pkg/sftp:go_default_library",
"//vendor/golang.org/x/crypto/ssh:go_default_library",
"//vendor/google.golang.org/api/googleapi:go_default_library",
@ -54,10 +58,13 @@ go_library(
go_test(
name = "go_default_test",
srcs = [
"context_test.go",
"fs_test.go",
"memfs_test.go",
"s3context_test.go",
"s3fs_test.go",
"vaultfs_test.go",
],
embed = [":go_default_library"],
deps = ["//vendor/github.com/hashicorp/vault/api:go_default_library"],
)

View File

@ -35,6 +35,8 @@ import (
storage "google.golang.org/api/storage/v1"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/klog"
vault "github.com/hashicorp/vault/api"
)
// VFSContext is a 'context' for VFS, that is normally a singleton
@ -51,6 +53,8 @@ type VFSContext struct {
swiftClient *gophercloud.ServiceClient
// ossClient is the Aliyun Open Source Storage client
ossClient *oss.Client
vaultClient *vault.Client
}
var Context = VFSContext{
@ -169,6 +173,10 @@ func (c *VFSContext) BuildVfsPath(p string) (Path, error) {
return c.buildOSSPath(p)
}
if strings.HasPrefix(p, "vault://") {
return c.buildVaultPath(p)
}
return nil, fmt.Errorf("unknown / unhandled path type: %q", p)
}
@ -443,3 +451,35 @@ func (c *VFSContext) buildOSSPath(p string) (*OSSPath, error) {
return NewOSSPath(c.ossClient, bucket, u.Path)
}
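// buildVaultPath parses a vault:// URL, using https unless ?tls=false is set,
// and lazily initializes a Vault client shared by the VFS context.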
func (c *VFSContext) buildVaultPath(p string) (*VaultPath, error) {
u, err := url.Parse(p)
if err != nil {
return nil, fmt.Errorf("invalid vault url: %q", p)
}
var scheme string
if u.Scheme != "vault" {
return nil, fmt.Errorf("invalid vault url: %q", p)
}
queryValues := u.Query()
scheme = "https://"
if queryValues.Get("tls") == "false" {
scheme = "http://"
}
if c.vaultClient == nil {
vaultClient, err := newVaultClient(scheme, u.Hostname(), u.Port())
if err != nil {
return nil, err
}
c.vaultClient = vaultClient
}
return newVaultPath(c.vaultClient, scheme, u.Path)
}

View File

@ -0,0 +1,75 @@
/*
Copyright 2020 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package vfs
import (
"os"
"testing"
)
func TestBuildVaultPath(t *testing.T) {
token := os.Getenv("VAULT_DEV_ROOT_TOKEN_ID")
if token == "" {
t.Skip("No vault dev token set")
}
if os.Getenv("VAULT_TOKEN") != token {
t.Skip("BuildVaultPath test needs VAULT_TOKEN == VAULT_DEV_ROOT_TOKEN_ID")
}
grid := []struct {
url string
scheme string
vaultAddr string
}{
{
url: "vault://localhost:8200/foo/bar?tls=false",
scheme: "http://",
vaultAddr: "http://localhost:8200",
},
{
url: "vault://foo.test.bar/foo/bar?tls=false",
scheme: "http://",
vaultAddr: "http://foo.test.bar",
},
{
url: "vault://foo.test.bar/foo/bar",
scheme: "https://",
vaultAddr: "https://foo.test.bar",
},
}
for _, g := range grid {
context := &VFSContext{}
p, err := context.buildVaultPath(g.url)
if err != nil {
t.Fatalf("Unexepcted error for %q: %v", g.url, err)
}
if p.String() != g.url {
t.Errorf("Unexpected path: expected %q, actual %q", g.url, p.Path())
}
vaultAddr := p.vaultClient.Address()
if vaultAddr != g.vaultAddr {
t.Errorf("Unexpected vaultAddr: expected %q, actual %q", g.vaultAddr, vaultAddr)
}
if p.scheme != g.scheme {
t.Errorf("Unexpected scheme for %q: expected %q, actual %q", g.url, g.scheme, p.scheme)
}
}
}

View File

@ -0,0 +1,93 @@
/*
Copyright 2020 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package vfs
import (
"encoding/base64"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"os"
"k8s.io/klog"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/sts"
vault "github.com/hashicorp/vault/api"
)
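// newVaultClient builds a Vault API client for the given address. It uses the
// VAULT_TOKEN environment variable if set, and otherwise falls back to AWS IAM auth.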
func newVaultClient(scheme string, host string, port string) (*vault.Client, error) {
addr := scheme + host
if port != "" {
addr = addr + ":" + port
}
config := &vault.Config{
Address: addr,
HttpClient: &http.Client{},
}
client, err := vault.NewClient(config)
if err != nil {
return nil, err
}
token := os.Getenv("VAULT_TOKEN")
if token == "" {
token, err = awsAuth(client, host)
if err != nil {
return nil, fmt.Errorf("error authenticating vault to AWS: %v", err)
}
}
client.SetToken(token)
return client, nil
}
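// awsAuth obtains a Vault token by signing an sts:GetCallerIdentity request
// (with the X-Vault-AWS-IAM-Server-ID header set to the Vault host) and
// submitting it to the AWS auth method at auth/aws/login.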
func awsAuth(client *vault.Client, host string) (string, error) {
klog.Infof("Using AWS IAM to authenticate to Vault at %q", host)
config := aws.NewConfig()
config = config.WithCredentialsChainVerboseErrors(true)
session, err := session.NewSession(config)
if err != nil {
return "", err
}
svc := sts.New(session)
stsRequest, _ := svc.GetCallerIdentityRequest(nil)
stsRequest.HTTPRequest.Header.Add("X-Vault-AWS-IAM-Server-ID", host)
stsRequest.Sign()
headersJson, err := json.Marshal(stsRequest.HTTPRequest.Header)
if err != nil {
return "", err
}
requestBody, err := ioutil.ReadAll(stsRequest.HTTPRequest.Body)
if err != nil {
return "", err
}
loginData := make(map[string]interface{})
loginData["iam_http_request_method"] = stsRequest.HTTPRequest.Method
loginData["iam_request_url"] = base64.StdEncoding.EncodeToString([]byte(stsRequest.HTTPRequest.URL.String()))
loginData["iam_request_headers"] = base64.StdEncoding.EncodeToString(headersJson)
loginData["iam_request_body"] = base64.StdEncoding.EncodeToString(requestBody)
loginData["role"] = ""
path := "auth/aws/login"
secret, err := client.Logical().Write(path, loginData)
if err != nil {
return "", err
}
return secret.Auth.ClientToken, err
}

274
util/pkg/vfs/vaultfs.go Normal file
View File

@ -0,0 +1,274 @@
/*
Copyright 2020 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package vfs
import (
"context"
"encoding/base64"
"fmt"
"io"
"io/ioutil"
"os"
"path"
"strings"
vault "github.com/hashicorp/vault/api"
"k8s.io/klog"
)
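// VaultPath is a vfs.Path implementation backed by a Vault KV version 2
// secrets engine. File contents are stored base64-encoded under the "file"
// key of the secret at the corresponding data path.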
type VaultPath struct {
vaultClient *vault.Client
scheme string
mountPoint string
path string
}
var _ Path = &VaultPath{}
func newVaultPath(client *vault.Client, scheme string, path string) (*VaultPath, error) {
if scheme != "https://" && scheme != "http://" {
return nil, fmt.Errorf("scheme must be http:// or https://")
}
path = strings.TrimPrefix(path, "/")
dirs := strings.SplitN(path, "/", 2)
if len(dirs) != 2 {
return nil, fmt.Errorf("vault path must have both a mount point and a path. Got: %q", path)
}
if client == nil {
return nil, fmt.Errorf("vault path needs to have a vault client")
}
return &VaultPath{
vaultClient: client,
scheme: scheme,
mountPoint: dirs[0],
path: dirs[1],
}, nil
}
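// WriteFile stores the data base64-encoded under the "file" key of the KV v2 secret at this path.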
func (p *VaultPath) WriteFile(data io.ReadSeeker, acl ACL) error {
klog.V(4).Infof("Writing file %q", p)
file, err := encodeData(data)
if err != nil {
return err
}
payload := map[string]interface{}{
"data": map[string]interface{}{
"file": file,
},
}
_, err = p.vaultClient.Logical().Write(p.dataPath(), payload)
return err
}
func (p *VaultPath) CreateFile(data io.ReadSeeker, acl ACL) error {
file, _ := p.ReadFile()
if file == nil {
return p.WriteFile(data, acl)
} else {
return fmt.Errorf("file already exists: %v", p.path)
}
}
func (p *VaultPath) ReadFile() ([]byte, error) {
secret, err := p.vaultClient.Logical().Read(p.dataPath())
if err != nil {
return nil, err
}
if secret == nil || secret.Data["data"] == nil {
return nil, os.ErrNotExist
}
data := secret.Data["data"].(map[string]interface{})
encodedString := data["file"].(string)
return base64.StdEncoding.DecodeString(encodedString)
}
func (p *VaultPath) Remove() error {
klog.V(8).Infof("removing file %s", p)
_, err := p.vaultClient.Logical().Delete(p.dataPath())
return err
}
func (p *VaultPath) RemoveAllVersions() error {
klog.V(8).Infof("removing all versions of file %s", p)
data := map[string][]string{
"versions": {"1"},
}
_, err := p.vaultClient.Logical().DeleteWithData(p.dataPath(), data)
if err != nil {
return err
}
err = p.destroy()
if err != nil {
return err
}
err = p.deleteMetadata()
return err
}
func (p *VaultPath) Base() string {
return path.Base(p.path)
}
func (p *VaultPath) Path() string {
query := ""
if p.scheme == "http://" {
query = "?tls=false"
}
return fmt.Sprintf("vault://%s/%s/%s%s", strings.TrimPrefix(p.vaultClient.Address(), p.scheme), p.mountPoint, p.path, query)
}
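// ReadDir lists the immediate children of this path via the KV v2 metadata LIST endpoint.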
func (p *VaultPath) ReadDir() ([]Path, error) {
secret, err := p.vaultClient.Logical().List(p.metadataPath())
if err != nil {
return nil, err
}
if secret == nil {
return nil, os.ErrNotExist
}
data := secret.Data["keys"].([]interface{})
paths := make([]Path, 0)
for _, key := range data {
path := p.Join(key.(string))
paths = append(paths, path)
}
return paths, nil
}
func (p *VaultPath) ReadTree() ([]Path, error) {
content, err := p.ReadDir()
files := make([]Path, 0)
if os.IsNotExist(err) {
return files, nil
}
if err != nil {
return nil, fmt.Errorf("error reading path %v, %v", p, err)
}
for _, path := range content {
if IsDirectory(path) {
subTree, err := path.ReadTree()
if err != nil {
return nil, err
}
files = append(files, subTree...)
} else {
files = append(files, path)
}
}
return files, nil
}
func (p *VaultPath) Join(relativePath ...string) Path {
args := []string{p.fullPath()}
args = append(args, relativePath...)
joined := path.Join(args...)
	// newVaultPath cannot fail here: joining onto an already valid path keeps both the mount point and the path present.
	joinedPath, _ := newVaultPath(p.vaultClient, p.scheme, joined)
	return joinedPath
}
func (p *VaultPath) WriteTo(out io.Writer) (int64, error) {
panic("writeTo not implemented")
}
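// encodeData base64-encodes the stream so the contents can be stored as a single string field in the secret payload.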
func encodeData(data io.ReadSeeker) (string, error) {
pr, pw := io.Pipe()
encoder := base64.NewEncoder(base64.StdEncoding, pw)
go func() {
_, err := io.Copy(encoder, data)
encoder.Close()
if err != nil {
pw.CloseWithError(err)
} else {
pw.Close()
}
}()
out, err := ioutil.ReadAll(pr)
if err != nil {
return "", err
}
return string(out), nil
}
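// dataPath and metadataPath map the logical path onto the kv-v2 API:
// <mount>/data/<path> for reads and writes, and <mount>/metadata/<path> for listing and deletion.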
func (p *VaultPath) dataPath() string {
return p.mountPoint + "/data/" + p.path
}
func (p *VaultPath) metadataPath() string {
return p.mountPoint + "/metadata/" + p.path
}
func (p *VaultPath) fullPath() string {
return p.mountPoint + "/" + p.path
}
func (p *VaultPath) String() string {
return p.Path()
}
func (p VaultPath) destroy() error {
data := map[string][]string{
"versions": {"1"},
}
r := p.vaultClient.NewRequest("PUT", "/v1/"+p.mountPoint+"/destroy/"+p.path)
r.SetJSONBody(data)
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := p.vaultClient.RawRequestWithContext(ctx, r)
if resp != nil {
defer resp.Body.Close()
}
return err
}
func (p VaultPath) deleteMetadata() error {
r := p.vaultClient.NewRequest("DELETE", "/v1/"+p.mountPoint+"/metadata/"+p.path)
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := p.vaultClient.RawRequestWithContext(ctx, r)
if resp != nil {
defer resp.Body.Close()
}
return err
}
func (p VaultPath) SetClientToken(token string) {
p.vaultClient.SetToken(token)
}
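
For orientation, here is a minimal sketch of how a caller could reach this store through the VFS layer. It is not part of the diff: it assumes the `VFSVaultSupport` feature flag is enabled, that `vfs.Context.BuildVfsPath` routes `vault://` URLs to a `VaultPath`, and that Vault credentials are available to the process; the host, port, and secret path below are placeholders.

```go
package main

import (
	"fmt"

	"k8s.io/kops/util/pkg/vfs"
)

func main() {
	// Placeholder URL; "?tls=false" mirrors VaultPath.Path() for a plain-HTTP dev server.
	p, err := vfs.Context.BuildVfsPath("vault://localhost:8200/secret/clusters/example/keys/ca?tls=false")
	if err != nil {
		panic(err)
	}

	// ReadFile returns the base64-decoded contents of the "file" field of the secret.
	data, err := p.ReadFile()
	if err != nil {
		panic(err)
	}
	fmt.Printf("read %d bytes from %s\n", len(data), p.Path())
}
```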


@ -0,0 +1,328 @@
/*
Copyright 2020 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package vfs
import (
"bytes"
"math/rand"
"os"
"reflect"
"sort"
"strconv"
"strings"
"testing"
"time"
vault "github.com/hashicorp/vault/api"
)
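// createClient targets a local dev-mode Vault (e.g. `vault server -dev`); tests are skipped unless VAULT_DEV_ROOT_TOKEN_ID is set.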
func createClient(t *testing.T) *vault.Client {
token := os.Getenv("VAULT_DEV_ROOT_TOKEN_ID")
if token == "" {
t.Skip("No vault dev token set. Skipping")
}
client, _ := newVaultClient("http://", "localhost", "8200")
client.SetToken(token)
return client
}
func Test_newVaultPath(t *testing.T) {
client := createClient(t)
vaultPath, err := newVaultPath(client, "http://", "/secret/foo/bar")
if err != nil {
t.Errorf("Failed to create vault path: %v", err)
}
actual := vaultPath.mountPoint
expected := "secret"
if actual != expected {
t.Errorf("Expected mountPoint %v, got %v", expected, actual)
}
actual = vaultPath.path
expected = "foo/bar"
if actual != expected {
t.Errorf("Expected path %v, got %v", expected, actual)
}
actual = vaultPath.metadataPath()
expected = "secret/metadata/foo/bar"
if actual != expected {
t.Errorf("expected path %v, got %v", expected, actual)
}
actual = vaultPath.dataPath()
expected = "secret/data/foo/bar"
if actual != expected {
t.Errorf("expected path %v, got %v", expected, actual)
}
actual = vaultPath.Path()
expected = "vault://localhost:8200/secret/foo/bar?tls=false"
if actual != expected {
t.Errorf("expected path %v, got %v", expected, actual)
}
}
func Test_newVaultPathHostOnly(t *testing.T) {
_, err := newVaultPath(nil, "", "/")
if err == nil {
t.Error("Failed to return error on incorrect path")
}
}
func Test_WriteFileReadFile(t *testing.T) {
client := createClient(t)
p, _ := newVaultPath(client, "http://", "/secret/createfiletest")
secret := "my very special secret"
err := p.WriteFile(strings.NewReader(secret), nil)
if err != nil {
t.Errorf("Failed to write file: %v", err)
}
fileBytes, err := p.ReadFile()
if err != nil {
t.Errorf("Failed to read file: %v", err)
}
if !bytes.Equal(fileBytes, []byte(secret)) {
t.Errorf("Failed to read file. Got %v, expected %v", fileBytes, secret)
}
}
func Test_WriteFileDeleteFile(t *testing.T) {
client := createClient(t)
p, _ := newVaultPath(client, "http://", "/secret/createfiletestdelete")
secret := "my very special secret"
err := p.WriteFile(strings.NewReader(secret), nil)
if err != nil {
t.Errorf("Failed to write file: %v", err)
}
err = p.Remove()
if err != nil {
t.Errorf("Failed to write file: %v", err)
}
_, err = p.ReadFile()
if !os.IsNotExist(err) {
t.Error("Failed to delete file")
}
}
func Test_CreateFile(t *testing.T) {
client := createClient(t)
rand.Seed(time.Now().UnixNano())
postfix := rand.Int()
postfixs := strconv.Itoa(postfix)
path := "/secret/createfiletest/" + postfixs
p, _ := newVaultPath(client, "http://", path)
secret := "my very special secret"
err := p.CreateFile(strings.NewReader(secret), nil)
if err != nil {
t.Errorf("Failed to create file at %v: %v", path, err)
}
err = p.CreateFile(strings.NewReader(secret), nil)
if err == nil {
t.Errorf("Should have failed to create file at %v", path)
}
}
func Test_ReadFile(t *testing.T) {
client := createClient(t)
p, _ := newVaultPath(client, "http://", "/secret/readfiletest")
_, err := p.ReadFile()
if !os.IsNotExist(err) {
if err != nil {
t.Errorf("Unexpected error reading file: %v", err)
} else {
t.Error("File should not exist")
}
}
}
func Test_Join(t *testing.T) {
client := createClient(t)
p, _ := newVaultPath(client, "http://", "/secret/joinfiletest")
p2 := p.Join("another", "path")
expected := "vault://localhost:8200/secret/joinfiletest/another/path?tls=false"
if p2.Path() != expected {
t.Errorf("Error joining path. Got: %v, Expected: %v", p2.Path(), expected)
}
}
func Test_VaultReadDirList(t *testing.T) {
tests := []struct {
path string
subpaths []string
expected []string
}{
{
path: "/secret/readdirtest/",
subpaths: []string{
"subdir1/foo",
"subdir2/foo",
"test.data",
},
expected: []string{
"subdir1",
"subdir2",
"test.data",
},
},
}
for _, test := range tests {
client := createClient(t)
vaultPath, _ := newVaultPath(client, "http://", test.path)
// Create sub-paths
for _, subpath := range test.subpaths {
file := strings.NewReader("some data")
vaultPath.Join(subpath).WriteFile(file, nil)
}
// Read dir
paths, err := vaultPath.ReadDir()
if err != nil {
t.Errorf("Failed reading dir %s, error: %v", test.path, err)
continue
}
// There is no consistent alphabetical order in the result, so we sort it
sort.Slice(paths, func(i, j int) bool {
return paths[i].Path() < paths[j].Path()
})
// Expected sub-paths
count := len(test.expected)
expected := make([]Path, count)
for i := 0; i < count; i++ {
expected[i], _ = newVaultPath(client, "http://", test.path+test.expected[i])
}
if !reflect.DeepEqual(paths, expected) {
t.Errorf("Expected sub-paths %v, got %v", expected, paths)
}
}
}
func Test_ReadDir(t *testing.T) {
directory := "/secret/path"
file := "somefile"
client := createClient(t)
directoryPath, _ := newVaultPath(client, "http://", directory)
filePath := directoryPath.Join(file)
filePath.WriteFile(strings.NewReader("foo"), nil)
_, err := filePath.ReadDir()
if err == nil {
t.Error("File considered directory")
}
_, err = directoryPath.ReadDir()
if err != nil {
t.Error("Directory not considered directory")
}
nonExistingPath, _ := newVaultPath(client, "http://", "/secret/does/not/exist")
_, err = nonExistingPath.ReadDir()
if !os.IsNotExist(err) {
t.Error("Found non-existing directory")
}
}
func Test_IsDirectory(t *testing.T) {
directory := "/secret/path"
file := "somefile"
client := createClient(t)
directoryPath, _ := newVaultPath(client, "http://", directory)
filePath := directoryPath.Join(file)
filePath.WriteFile(strings.NewReader("foo"), nil)
if IsDirectory(filePath) {
t.Error("File considered directory")
}
if !IsDirectory(directoryPath) {
t.Error("Directory not considered directory")
}
}
func Test_VaultReadTree(t *testing.T) {
tests := []struct {
path string
subpaths []string
expected []string
}{
{
path: "/secret/dir/",
subpaths: []string{
"subdir/test1.data",
"subdir2/test2.data",
},
expected: []string{
"/secret/dir/subdir/test1.data",
"/secret/dir/subdir2/test2.data",
},
},
}
for _, test := range tests {
client := createClient(t)
vaultPath, _ := newVaultPath(client, "http://", test.path)
// Create sub-paths
for _, subpath := range test.subpaths {
vaultPath.Join(subpath).WriteFile(strings.NewReader("foo"), nil)
}
// Read dir tree
paths, err := vaultPath.ReadTree()
if err != nil {
t.Errorf("Failed reading dir tree %s, error: %v", test.path, err)
continue
}
// There is no consistent alphabetical order in the result, so we sort it
sort.Slice(paths, func(i, j int) bool {
return paths[i].Path() < paths[j].Path()
})
// Expected sub-paths
count := len(test.expected)
expected := make([]Path, count)
for i := 0; i < count; i++ {
expected[i], _ = newVaultPath(client, "http://", test.expected[i])
}
if !reflect.DeepEqual(paths, expected) {
t.Errorf("Expected tree paths %v, got %v", expected, paths)
}
}
}


@ -105,7 +105,7 @@ func IsClusterReadable(p Path) bool {
}
switch p.(type) {
case *S3Path, *GSPath, *SwiftPath, *OSSPath, *FSPath:
case *S3Path, *GSPath, *SwiftPath, *OSSPath, *FSPath, *VaultPath:
return true
case *KubernetesPath:

vendor/github.com/golang/snappy/.gitignore generated vendored Normal file

@ -0,0 +1,16 @@
cmd/snappytool/snappytool
testdata/bench
# These explicitly listed benchmark data files are for an obsolete version of
# snappy_test.go.
testdata/alice29.txt
testdata/asyoulik.txt
testdata/fireworks.jpeg
testdata/geo.protodata
testdata/html
testdata/html_x_4
testdata/kppkn.gtb
testdata/lcet10.txt
testdata/paper-100k.pdf
testdata/plrabn12.txt
testdata/urls.10K

vendor/github.com/golang/snappy/AUTHORS generated vendored Normal file

@ -0,0 +1,15 @@
# This is the official list of Snappy-Go authors for copyright purposes.
# This file is distinct from the CONTRIBUTORS files.
# See the latter for an explanation.
# Names should be added to this file as
# Name or Organization <email address>
# The email address is not required for organizations.
# Please keep the list sorted.
Damian Gryski <dgryski@gmail.com>
Google Inc.
Jan Mercl <0xjnml@gmail.com>
Rodolfo Carvalho <rhcarvalho@gmail.com>
Sebastien Binet <seb.binet@gmail.com>

vendor/github.com/golang/snappy/BUILD.bazel generated vendored Normal file

@ -0,0 +1,19 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"decode.go",
"decode_amd64.go",
"decode_amd64.s",
"decode_other.go",
"encode.go",
"encode_amd64.go",
"encode_amd64.s",
"encode_other.go",
"snappy.go",
],
importmap = "k8s.io/kops/vendor/github.com/golang/snappy",
importpath = "github.com/golang/snappy",
visibility = ["//visibility:public"],
)

vendor/github.com/golang/snappy/CONTRIBUTORS generated vendored Normal file

@ -0,0 +1,37 @@
# This is the official list of people who can contribute
# (and typically have contributed) code to the Snappy-Go repository.
# The AUTHORS file lists the copyright holders; this file
# lists people. For example, Google employees are listed here
# but not in AUTHORS, because Google holds the copyright.
#
# The submission process automatically checks to make sure
# that people submitting code are listed in this file (by email address).
#
# Names should be added to this file only after verifying that
# the individual or the individual's organization has agreed to
# the appropriate Contributor License Agreement, found here:
#
# http://code.google.com/legal/individual-cla-v1.0.html
# http://code.google.com/legal/corporate-cla-v1.0.html
#
# The agreement for individuals can be filled out on the web.
#
# When adding J Random Contributor's name to this file,
# either J's name or J's organization's name should be
# added to the AUTHORS file, depending on whether the
# individual or corporate CLA was used.
# Names should be added to this file like so:
# Name <email address>
# Please keep the list sorted.
Damian Gryski <dgryski@gmail.com>
Jan Mercl <0xjnml@gmail.com>
Kai Backman <kaib@golang.org>
Marc-Antoine Ruel <maruel@chromium.org>
Nigel Tao <nigeltao@golang.org>
Rob Pike <r@golang.org>
Rodolfo Carvalho <rhcarvalho@gmail.com>
Russ Cox <rsc@golang.org>
Sebastien Binet <seb.binet@gmail.com>

vendor/github.com/golang/snappy/LICENSE generated vendored Normal file

@ -0,0 +1,27 @@
Copyright (c) 2011 The Snappy-Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

vendor/github.com/golang/snappy/README generated vendored Normal file

@ -0,0 +1,107 @@
The Snappy compression format in the Go programming language.
To download and install from source:
$ go get github.com/golang/snappy
Unless otherwise noted, the Snappy-Go source files are distributed
under the BSD-style license found in the LICENSE file.
Benchmarks.
The golang/snappy benchmarks include compressing (Z) and decompressing (U) ten
or so files, the same set used by the C++ Snappy code (github.com/google/snappy
and note the "google", not "golang"). On an "Intel(R) Core(TM) i7-3770 CPU @
3.40GHz", Go's GOARCH=amd64 numbers as of 2016-05-29:
"go test -test.bench=."
_UFlat0-8 2.19GB/s ± 0% html
_UFlat1-8 1.41GB/s ± 0% urls
_UFlat2-8 23.5GB/s ± 2% jpg
_UFlat3-8 1.91GB/s ± 0% jpg_200
_UFlat4-8 14.0GB/s ± 1% pdf
_UFlat5-8 1.97GB/s ± 0% html4
_UFlat6-8 814MB/s ± 0% txt1
_UFlat7-8 785MB/s ± 0% txt2
_UFlat8-8 857MB/s ± 0% txt3
_UFlat9-8 719MB/s ± 1% txt4
_UFlat10-8 2.84GB/s ± 0% pb
_UFlat11-8 1.05GB/s ± 0% gaviota
_ZFlat0-8 1.04GB/s ± 0% html
_ZFlat1-8 534MB/s ± 0% urls
_ZFlat2-8 15.7GB/s ± 1% jpg
_ZFlat3-8 740MB/s ± 3% jpg_200
_ZFlat4-8 9.20GB/s ± 1% pdf
_ZFlat5-8 991MB/s ± 0% html4
_ZFlat6-8 379MB/s ± 0% txt1
_ZFlat7-8 352MB/s ± 0% txt2
_ZFlat8-8 396MB/s ± 1% txt3
_ZFlat9-8 327MB/s ± 1% txt4
_ZFlat10-8 1.33GB/s ± 1% pb
_ZFlat11-8 605MB/s ± 1% gaviota
"go test -test.bench=. -tags=noasm"
_UFlat0-8 621MB/s ± 2% html
_UFlat1-8 494MB/s ± 1% urls
_UFlat2-8 23.2GB/s ± 1% jpg
_UFlat3-8 1.12GB/s ± 1% jpg_200
_UFlat4-8 4.35GB/s ± 1% pdf
_UFlat5-8 609MB/s ± 0% html4
_UFlat6-8 296MB/s ± 0% txt1
_UFlat7-8 288MB/s ± 0% txt2
_UFlat8-8 309MB/s ± 1% txt3
_UFlat9-8 280MB/s ± 1% txt4
_UFlat10-8 753MB/s ± 0% pb
_UFlat11-8 400MB/s ± 0% gaviota
_ZFlat0-8 409MB/s ± 1% html
_ZFlat1-8 250MB/s ± 1% urls
_ZFlat2-8 12.3GB/s ± 1% jpg
_ZFlat3-8 132MB/s ± 0% jpg_200
_ZFlat4-8 2.92GB/s ± 0% pdf
_ZFlat5-8 405MB/s ± 1% html4
_ZFlat6-8 179MB/s ± 1% txt1
_ZFlat7-8 170MB/s ± 1% txt2
_ZFlat8-8 189MB/s ± 1% txt3
_ZFlat9-8 164MB/s ± 1% txt4
_ZFlat10-8 479MB/s ± 1% pb
_ZFlat11-8 270MB/s ± 1% gaviota
For comparison (Go's encoded output is byte-for-byte identical to C++'s), here
are the numbers from C++ Snappy's
make CXXFLAGS="-O2 -DNDEBUG -g" clean snappy_unittest.log && cat snappy_unittest.log
BM_UFlat/0 2.4GB/s html
BM_UFlat/1 1.4GB/s urls
BM_UFlat/2 21.8GB/s jpg
BM_UFlat/3 1.5GB/s jpg_200
BM_UFlat/4 13.3GB/s pdf
BM_UFlat/5 2.1GB/s html4
BM_UFlat/6 1.0GB/s txt1
BM_UFlat/7 959.4MB/s txt2
BM_UFlat/8 1.0GB/s txt3
BM_UFlat/9 864.5MB/s txt4
BM_UFlat/10 2.9GB/s pb
BM_UFlat/11 1.2GB/s gaviota
BM_ZFlat/0 944.3MB/s html (22.31 %)
BM_ZFlat/1 501.6MB/s urls (47.78 %)
BM_ZFlat/2 14.3GB/s jpg (99.95 %)
BM_ZFlat/3 538.3MB/s jpg_200 (73.00 %)
BM_ZFlat/4 8.3GB/s pdf (83.30 %)
BM_ZFlat/5 903.5MB/s html4 (22.52 %)
BM_ZFlat/6 336.0MB/s txt1 (57.88 %)
BM_ZFlat/7 312.3MB/s txt2 (61.91 %)
BM_ZFlat/8 353.1MB/s txt3 (54.99 %)
BM_ZFlat/9 289.9MB/s txt4 (66.26 %)
BM_ZFlat/10 1.2GB/s pb (19.68 %)
BM_ZFlat/11 527.4MB/s gaviota (37.72 %)

vendor/github.com/golang/snappy/decode.go generated vendored Normal file

@ -0,0 +1,237 @@
// Copyright 2011 The Snappy-Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package snappy
import (
"encoding/binary"
"errors"
"io"
)
var (
// ErrCorrupt reports that the input is invalid.
ErrCorrupt = errors.New("snappy: corrupt input")
// ErrTooLarge reports that the uncompressed length is too large.
ErrTooLarge = errors.New("snappy: decoded block is too large")
// ErrUnsupported reports that the input isn't supported.
ErrUnsupported = errors.New("snappy: unsupported input")
errUnsupportedLiteralLength = errors.New("snappy: unsupported literal length")
)
// DecodedLen returns the length of the decoded block.
func DecodedLen(src []byte) (int, error) {
v, _, err := decodedLen(src)
return v, err
}
// decodedLen returns the length of the decoded block and the number of bytes
// that the length header occupied.
func decodedLen(src []byte) (blockLen, headerLen int, err error) {
v, n := binary.Uvarint(src)
if n <= 0 || v > 0xffffffff {
return 0, 0, ErrCorrupt
}
const wordSize = 32 << (^uint(0) >> 32 & 1)
if wordSize == 32 && v > 0x7fffffff {
return 0, 0, ErrTooLarge
}
return int(v), n, nil
}
const (
decodeErrCodeCorrupt = 1
decodeErrCodeUnsupportedLiteralLength = 2
)
// Decode returns the decoded form of src. The returned slice may be a sub-
// slice of dst if dst was large enough to hold the entire decoded block.
// Otherwise, a newly allocated slice will be returned.
//
// The dst and src must not overlap. It is valid to pass a nil dst.
func Decode(dst, src []byte) ([]byte, error) {
dLen, s, err := decodedLen(src)
if err != nil {
return nil, err
}
if dLen <= len(dst) {
dst = dst[:dLen]
} else {
dst = make([]byte, dLen)
}
switch decode(dst, src[s:]) {
case 0:
return dst, nil
case decodeErrCodeUnsupportedLiteralLength:
return nil, errUnsupportedLiteralLength
}
return nil, ErrCorrupt
}
// NewReader returns a new Reader that decompresses from r, using the framing
// format described at
// https://github.com/google/snappy/blob/master/framing_format.txt
func NewReader(r io.Reader) *Reader {
return &Reader{
r: r,
decoded: make([]byte, maxBlockSize),
buf: make([]byte, maxEncodedLenOfMaxBlockSize+checksumSize),
}
}
// Reader is an io.Reader that can read Snappy-compressed bytes.
type Reader struct {
r io.Reader
err error
decoded []byte
buf []byte
// decoded[i:j] contains decoded bytes that have not yet been passed on.
i, j int
readHeader bool
}
// Reset discards any buffered data, resets all state, and switches the Snappy
// reader to read from r. This permits reusing a Reader rather than allocating
// a new one.
func (r *Reader) Reset(reader io.Reader) {
r.r = reader
r.err = nil
r.i = 0
r.j = 0
r.readHeader = false
}
func (r *Reader) readFull(p []byte, allowEOF bool) (ok bool) {
if _, r.err = io.ReadFull(r.r, p); r.err != nil {
if r.err == io.ErrUnexpectedEOF || (r.err == io.EOF && !allowEOF) {
r.err = ErrCorrupt
}
return false
}
return true
}
// Read satisfies the io.Reader interface.
func (r *Reader) Read(p []byte) (int, error) {
if r.err != nil {
return 0, r.err
}
for {
if r.i < r.j {
n := copy(p, r.decoded[r.i:r.j])
r.i += n
return n, nil
}
if !r.readFull(r.buf[:4], true) {
return 0, r.err
}
chunkType := r.buf[0]
if !r.readHeader {
if chunkType != chunkTypeStreamIdentifier {
r.err = ErrCorrupt
return 0, r.err
}
r.readHeader = true
}
chunkLen := int(r.buf[1]) | int(r.buf[2])<<8 | int(r.buf[3])<<16
if chunkLen > len(r.buf) {
r.err = ErrUnsupported
return 0, r.err
}
// The chunk types are specified at
// https://github.com/google/snappy/blob/master/framing_format.txt
switch chunkType {
case chunkTypeCompressedData:
// Section 4.2. Compressed data (chunk type 0x00).
if chunkLen < checksumSize {
r.err = ErrCorrupt
return 0, r.err
}
buf := r.buf[:chunkLen]
if !r.readFull(buf, false) {
return 0, r.err
}
checksum := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
buf = buf[checksumSize:]
n, err := DecodedLen(buf)
if err != nil {
r.err = err
return 0, r.err
}
if n > len(r.decoded) {
r.err = ErrCorrupt
return 0, r.err
}
if _, err := Decode(r.decoded, buf); err != nil {
r.err = err
return 0, r.err
}
if crc(r.decoded[:n]) != checksum {
r.err = ErrCorrupt
return 0, r.err
}
r.i, r.j = 0, n
continue
case chunkTypeUncompressedData:
// Section 4.3. Uncompressed data (chunk type 0x01).
if chunkLen < checksumSize {
r.err = ErrCorrupt
return 0, r.err
}
buf := r.buf[:checksumSize]
if !r.readFull(buf, false) {
return 0, r.err
}
checksum := uint32(buf[0]) | uint32(buf[1])<<8 | uint32(buf[2])<<16 | uint32(buf[3])<<24
// Read directly into r.decoded instead of via r.buf.
n := chunkLen - checksumSize
if n > len(r.decoded) {
r.err = ErrCorrupt
return 0, r.err
}
if !r.readFull(r.decoded[:n], false) {
return 0, r.err
}
if crc(r.decoded[:n]) != checksum {
r.err = ErrCorrupt
return 0, r.err
}
r.i, r.j = 0, n
continue
case chunkTypeStreamIdentifier:
// Section 4.1. Stream identifier (chunk type 0xff).
if chunkLen != len(magicBody) {
r.err = ErrCorrupt
return 0, r.err
}
if !r.readFull(r.buf[:len(magicBody)], false) {
return 0, r.err
}
for i := 0; i < len(magicBody); i++ {
if r.buf[i] != magicBody[i] {
r.err = ErrCorrupt
return 0, r.err
}
}
continue
}
if chunkType <= 0x7f {
// Section 4.5. Reserved unskippable chunks (chunk types 0x02-0x7f).
r.err = ErrUnsupported
return 0, r.err
}
// Section 4.4 Padding (chunk type 0xfe).
// Section 4.6. Reserved skippable chunks (chunk types 0x80-0xfd).
if !r.readFull(r.buf[:chunkLen], false) {
return 0, r.err
}
}
}

vendor/github.com/golang/snappy/decode_amd64.go generated vendored Normal file

@ -0,0 +1,14 @@
// Copyright 2016 The Snappy-Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build !appengine
// +build gc
// +build !noasm
package snappy
// decode has the same semantics as in decode_other.go.
//
//go:noescape
func decode(dst, src []byte) int

vendor/github.com/golang/snappy/decode_amd64.s generated vendored Normal file

@ -0,0 +1,490 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build !appengine
// +build gc
// +build !noasm
#include "textflag.h"
// The asm code generally follows the pure Go code in decode_other.go, except
// where marked with a "!!!".
// func decode(dst, src []byte) int
//
// All local variables fit into registers. The non-zero stack size is only to
// spill registers and push args when issuing a CALL. The register allocation:
// - AX scratch
// - BX scratch
// - CX length or x
// - DX offset
// - SI &src[s]
// - DI &dst[d]
// + R8 dst_base
// + R9 dst_len
// + R10 dst_base + dst_len
// + R11 src_base
// + R12 src_len
// + R13 src_base + src_len
// - R14 used by doCopy
// - R15 used by doCopy
//
// The registers R8-R13 (marked with a "+") are set at the start of the
// function, and after a CALL returns, and are not otherwise modified.
//
// The d variable is implicitly DI - R8, and len(dst)-d is R10 - DI.
// The s variable is implicitly SI - R11, and len(src)-s is R13 - SI.
TEXT ·decode(SB), NOSPLIT, $48-56
// Initialize SI, DI and R8-R13.
MOVQ dst_base+0(FP), R8
MOVQ dst_len+8(FP), R9
MOVQ R8, DI
MOVQ R8, R10
ADDQ R9, R10
MOVQ src_base+24(FP), R11
MOVQ src_len+32(FP), R12
MOVQ R11, SI
MOVQ R11, R13
ADDQ R12, R13
loop:
// for s < len(src)
CMPQ SI, R13
JEQ end
// CX = uint32(src[s])
//
// switch src[s] & 0x03
MOVBLZX (SI), CX
MOVL CX, BX
ANDL $3, BX
CMPL BX, $1
JAE tagCopy
// ----------------------------------------
// The code below handles literal tags.
// case tagLiteral:
// x := uint32(src[s] >> 2)
// switch
SHRL $2, CX
CMPL CX, $60
JAE tagLit60Plus
// case x < 60:
// s++
INCQ SI
doLit:
// This is the end of the inner "switch", when we have a literal tag.
//
// We assume that CX == x and x fits in a uint32, where x is the variable
// used in the pure Go decode_other.go code.
// length = int(x) + 1
//
// Unlike the pure Go code, we don't need to check if length <= 0 because
// CX can hold 64 bits, so the increment cannot overflow.
INCQ CX
// Prepare to check if copying length bytes will run past the end of dst or
// src.
//
// AX = len(dst) - d
// BX = len(src) - s
MOVQ R10, AX
SUBQ DI, AX
MOVQ R13, BX
SUBQ SI, BX
// !!! Try a faster technique for short (16 or fewer bytes) copies.
//
// if length > 16 || len(dst)-d < 16 || len(src)-s < 16 {
// goto callMemmove // Fall back on calling runtime·memmove.
// }
//
// The C++ snappy code calls this TryFastAppend. It also checks len(src)-s
// against 21 instead of 16, because it cannot assume that all of its input
// is contiguous in memory and so it needs to leave enough source bytes to
// read the next tag without refilling buffers, but Go's Decode assumes
// contiguousness (the src argument is a []byte).
CMPQ CX, $16
JGT callMemmove
CMPQ AX, $16
JLT callMemmove
CMPQ BX, $16
JLT callMemmove
// !!! Implement the copy from src to dst as a 16-byte load and store.
// (Decode's documentation says that dst and src must not overlap.)
//
// This always copies 16 bytes, instead of only length bytes, but that's
// OK. If the input is a valid Snappy encoding then subsequent iterations
// will fix up the overrun. Otherwise, Decode returns a nil []byte (and a
// non-nil error), so the overrun will be ignored.
//
// Note that on amd64, it is legal and cheap to issue unaligned 8-byte or
// 16-byte loads and stores. This technique probably wouldn't be as
// effective on architectures that are fussier about alignment.
MOVOU 0(SI), X0
MOVOU X0, 0(DI)
// d += length
// s += length
ADDQ CX, DI
ADDQ CX, SI
JMP loop
callMemmove:
// if length > len(dst)-d || length > len(src)-s { etc }
CMPQ CX, AX
JGT errCorrupt
CMPQ CX, BX
JGT errCorrupt
// copy(dst[d:], src[s:s+length])
//
// This means calling runtime·memmove(&dst[d], &src[s], length), so we push
// DI, SI and CX as arguments. Coincidentally, we also need to spill those
// three registers to the stack, to save local variables across the CALL.
MOVQ DI, 0(SP)
MOVQ SI, 8(SP)
MOVQ CX, 16(SP)
MOVQ DI, 24(SP)
MOVQ SI, 32(SP)
MOVQ CX, 40(SP)
CALL runtime·memmove(SB)
// Restore local variables: unspill registers from the stack and
// re-calculate R8-R13.
MOVQ 24(SP), DI
MOVQ 32(SP), SI
MOVQ 40(SP), CX
MOVQ dst_base+0(FP), R8
MOVQ dst_len+8(FP), R9
MOVQ R8, R10
ADDQ R9, R10
MOVQ src_base+24(FP), R11
MOVQ src_len+32(FP), R12
MOVQ R11, R13
ADDQ R12, R13
// d += length
// s += length
ADDQ CX, DI
ADDQ CX, SI
JMP loop
tagLit60Plus:
// !!! This fragment does the
//
// s += x - 58; if uint(s) > uint(len(src)) { etc }
//
// checks. In the asm version, we code it once instead of once per switch case.
ADDQ CX, SI
SUBQ $58, SI
MOVQ SI, BX
SUBQ R11, BX
CMPQ BX, R12
JA errCorrupt
// case x == 60:
CMPL CX, $61
JEQ tagLit61
JA tagLit62Plus
// x = uint32(src[s-1])
MOVBLZX -1(SI), CX
JMP doLit
tagLit61:
// case x == 61:
// x = uint32(src[s-2]) | uint32(src[s-1])<<8
MOVWLZX -2(SI), CX
JMP doLit
tagLit62Plus:
CMPL CX, $62
JA tagLit63
// case x == 62:
// x = uint32(src[s-3]) | uint32(src[s-2])<<8 | uint32(src[s-1])<<16
MOVWLZX -3(SI), CX
MOVBLZX -1(SI), BX
SHLL $16, BX
ORL BX, CX
JMP doLit
tagLit63:
// case x == 63:
// x = uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24
MOVL -4(SI), CX
JMP doLit
// The code above handles literal tags.
// ----------------------------------------
// The code below handles copy tags.
tagCopy4:
// case tagCopy4:
// s += 5
ADDQ $5, SI
// if uint(s) > uint(len(src)) { etc }
MOVQ SI, BX
SUBQ R11, BX
CMPQ BX, R12
JA errCorrupt
// length = 1 + int(src[s-5])>>2
SHRQ $2, CX
INCQ CX
// offset = int(uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24)
MOVLQZX -4(SI), DX
JMP doCopy
tagCopy2:
// case tagCopy2:
// s += 3
ADDQ $3, SI
// if uint(s) > uint(len(src)) { etc }
MOVQ SI, BX
SUBQ R11, BX
CMPQ BX, R12
JA errCorrupt
// length = 1 + int(src[s-3])>>2
SHRQ $2, CX
INCQ CX
// offset = int(uint32(src[s-2]) | uint32(src[s-1])<<8)
MOVWQZX -2(SI), DX
JMP doCopy
tagCopy:
// We have a copy tag. We assume that:
// - BX == src[s] & 0x03
// - CX == src[s]
CMPQ BX, $2
JEQ tagCopy2
JA tagCopy4
// case tagCopy1:
// s += 2
ADDQ $2, SI
// if uint(s) > uint(len(src)) { etc }
MOVQ SI, BX
SUBQ R11, BX
CMPQ BX, R12
JA errCorrupt
// offset = int(uint32(src[s-2])&0xe0<<3 | uint32(src[s-1]))
MOVQ CX, DX
ANDQ $0xe0, DX
SHLQ $3, DX
MOVBQZX -1(SI), BX
ORQ BX, DX
// length = 4 + int(src[s-2])>>2&0x7
SHRQ $2, CX
ANDQ $7, CX
ADDQ $4, CX
doCopy:
// This is the end of the outer "switch", when we have a copy tag.
//
// We assume that:
// - CX == length && CX > 0
// - DX == offset
// if offset <= 0 { etc }
CMPQ DX, $0
JLE errCorrupt
// if d < offset { etc }
MOVQ DI, BX
SUBQ R8, BX
CMPQ BX, DX
JLT errCorrupt
// if length > len(dst)-d { etc }
MOVQ R10, BX
SUBQ DI, BX
CMPQ CX, BX
JGT errCorrupt
// forwardCopy(dst[d:d+length], dst[d-offset:]); d += length
//
// Set:
// - R14 = len(dst)-d
// - R15 = &dst[d-offset]
MOVQ R10, R14
SUBQ DI, R14
MOVQ DI, R15
SUBQ DX, R15
// !!! Try a faster technique for short (16 or fewer bytes) forward copies.
//
// First, try using two 8-byte load/stores, similar to the doLit technique
// above. Even if dst[d:d+length] and dst[d-offset:] can overlap, this is
// still OK if offset >= 8. Note that this has to be two 8-byte load/stores
// and not one 16-byte load/store, and the first store has to be before the
// second load, due to the overlap if offset is in the range [8, 16).
//
// if length > 16 || offset < 8 || len(dst)-d < 16 {
// goto slowForwardCopy
// }
// copy 16 bytes
// d += length
CMPQ CX, $16
JGT slowForwardCopy
CMPQ DX, $8
JLT slowForwardCopy
CMPQ R14, $16
JLT slowForwardCopy
MOVQ 0(R15), AX
MOVQ AX, 0(DI)
MOVQ 8(R15), BX
MOVQ BX, 8(DI)
ADDQ CX, DI
JMP loop
slowForwardCopy:
// !!! If the forward copy is longer than 16 bytes, or if offset < 8, we
// can still try 8-byte load stores, provided we can overrun up to 10 extra
// bytes. As above, the overrun will be fixed up by subsequent iterations
// of the outermost loop.
//
// The C++ snappy code calls this technique IncrementalCopyFastPath. Its
// commentary says:
//
// ----
//
// The main part of this loop is a simple copy of eight bytes at a time
// until we've copied (at least) the requested amount of bytes. However,
// if d and d-offset are less than eight bytes apart (indicating a
// repeating pattern of length < 8), we first need to expand the pattern in
// order to get the correct results. For instance, if the buffer looks like
// this, with the eight-byte <d-offset> and <d> patterns marked as
// intervals:
//
// abxxxxxxxxxxxx
// [------] d-offset
// [------] d
//
// a single eight-byte copy from <d-offset> to <d> will repeat the pattern
// once, after which we can move <d> two bytes without moving <d-offset>:
//
// ababxxxxxxxxxx
// [------] d-offset
// [------] d
//
// and repeat the exercise until the two no longer overlap.
//
// This allows us to do very well in the special case of one single byte
// repeated many times, without taking a big hit for more general cases.
//
// The worst case of extra writing past the end of the match occurs when
// offset == 1 and length == 1; the last copy will read from byte positions
// [0..7] and write to [4..11], whereas it was only supposed to write to
// position 1. Thus, ten excess bytes.
//
// ----
//
// That "10 byte overrun" worst case is confirmed by Go's
// TestSlowForwardCopyOverrun, which also tests the fixUpSlowForwardCopy
// and finishSlowForwardCopy algorithm.
//
// if length > len(dst)-d-10 {
// goto verySlowForwardCopy
// }
SUBQ $10, R14
CMPQ CX, R14
JGT verySlowForwardCopy
makeOffsetAtLeast8:
// !!! As above, expand the pattern so that offset >= 8 and we can use
// 8-byte load/stores.
//
// for offset < 8 {
// copy 8 bytes from dst[d-offset:] to dst[d:]
// length -= offset
// d += offset
// offset += offset
// // The two previous lines together means that d-offset, and therefore
// // R15, is unchanged.
// }
CMPQ DX, $8
JGE fixUpSlowForwardCopy
MOVQ (R15), BX
MOVQ BX, (DI)
SUBQ DX, CX
ADDQ DX, DI
ADDQ DX, DX
JMP makeOffsetAtLeast8
fixUpSlowForwardCopy:
// !!! Add length (which might be negative now) to d (implied by DI being
// &dst[d]) so that d ends up at the right place when we jump back to the
// top of the loop. Before we do that, though, we save DI to AX so that, if
// length is positive, copying the remaining length bytes will write to the
// right place.
MOVQ DI, AX
ADDQ CX, DI
finishSlowForwardCopy:
// !!! Repeat 8-byte load/stores until length <= 0. Ending with a negative
// length means that we overrun, but as above, that will be fixed up by
// subsequent iterations of the outermost loop.
CMPQ CX, $0
JLE loop
MOVQ (R15), BX
MOVQ BX, (AX)
ADDQ $8, R15
ADDQ $8, AX
SUBQ $8, CX
JMP finishSlowForwardCopy
verySlowForwardCopy:
// verySlowForwardCopy is a simple implementation of forward copy. In C
// parlance, this is a do/while loop instead of a while loop, since we know
// that length > 0. In Go syntax:
//
// for {
// dst[d] = dst[d - offset]
// d++
// length--
// if length == 0 {
// break
// }
// }
MOVB (R15), BX
MOVB BX, (DI)
INCQ R15
INCQ DI
DECQ CX
JNZ verySlowForwardCopy
JMP loop
// The code above handles copy tags.
// ----------------------------------------
end:
// This is the end of the "for s < len(src)".
//
// if d != len(dst) { etc }
CMPQ DI, R10
JNE errCorrupt
// return 0
MOVQ $0, ret+48(FP)
RET
errCorrupt:
// return decodeErrCodeCorrupt
MOVQ $1, ret+48(FP)
RET

vendor/github.com/golang/snappy/decode_other.go generated vendored Normal file

@ -0,0 +1,101 @@
// Copyright 2016 The Snappy-Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build !amd64 appengine !gc noasm
package snappy
// decode writes the decoding of src to dst. It assumes that the varint-encoded
// length of the decompressed bytes has already been read, and that len(dst)
// equals that length.
//
// It returns 0 on success or a decodeErrCodeXxx error code on failure.
func decode(dst, src []byte) int {
var d, s, offset, length int
for s < len(src) {
switch src[s] & 0x03 {
case tagLiteral:
x := uint32(src[s] >> 2)
switch {
case x < 60:
s++
case x == 60:
s += 2
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
return decodeErrCodeCorrupt
}
x = uint32(src[s-1])
case x == 61:
s += 3
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
return decodeErrCodeCorrupt
}
x = uint32(src[s-2]) | uint32(src[s-1])<<8
case x == 62:
s += 4
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
return decodeErrCodeCorrupt
}
x = uint32(src[s-3]) | uint32(src[s-2])<<8 | uint32(src[s-1])<<16
case x == 63:
s += 5
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
return decodeErrCodeCorrupt
}
x = uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24
}
length = int(x) + 1
if length <= 0 {
return decodeErrCodeUnsupportedLiteralLength
}
if length > len(dst)-d || length > len(src)-s {
return decodeErrCodeCorrupt
}
copy(dst[d:], src[s:s+length])
d += length
s += length
continue
case tagCopy1:
s += 2
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
return decodeErrCodeCorrupt
}
length = 4 + int(src[s-2])>>2&0x7
offset = int(uint32(src[s-2])&0xe0<<3 | uint32(src[s-1]))
case tagCopy2:
s += 3
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
return decodeErrCodeCorrupt
}
length = 1 + int(src[s-3])>>2
offset = int(uint32(src[s-2]) | uint32(src[s-1])<<8)
case tagCopy4:
s += 5
if uint(s) > uint(len(src)) { // The uint conversions catch overflow from the previous line.
return decodeErrCodeCorrupt
}
length = 1 + int(src[s-5])>>2
offset = int(uint32(src[s-4]) | uint32(src[s-3])<<8 | uint32(src[s-2])<<16 | uint32(src[s-1])<<24)
}
if offset <= 0 || d < offset || length > len(dst)-d {
return decodeErrCodeCorrupt
}
// Copy from an earlier sub-slice of dst to a later sub-slice. Unlike
// the built-in copy function, this byte-by-byte copy always runs
// forwards, even if the slices overlap. Conceptually, this is:
//
// d += forwardCopy(dst[d:d+length], dst[d-offset:])
for end := d + length; d != end; d++ {
dst[d] = dst[d-offset]
}
}
if d != len(dst) {
return decodeErrCodeCorrupt
}
return 0
}

vendor/github.com/golang/snappy/encode.go generated vendored Normal file

@ -0,0 +1,285 @@
// Copyright 2011 The Snappy-Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package snappy
import (
"encoding/binary"
"errors"
"io"
)
// Encode returns the encoded form of src. The returned slice may be a sub-
// slice of dst if dst was large enough to hold the entire encoded block.
// Otherwise, a newly allocated slice will be returned.
//
// The dst and src must not overlap. It is valid to pass a nil dst.
func Encode(dst, src []byte) []byte {
if n := MaxEncodedLen(len(src)); n < 0 {
panic(ErrTooLarge)
} else if len(dst) < n {
dst = make([]byte, n)
}
// The block starts with the varint-encoded length of the decompressed bytes.
d := binary.PutUvarint(dst, uint64(len(src)))
for len(src) > 0 {
p := src
src = nil
if len(p) > maxBlockSize {
p, src = p[:maxBlockSize], p[maxBlockSize:]
}
if len(p) < minNonLiteralBlockSize {
d += emitLiteral(dst[d:], p)
} else {
d += encodeBlock(dst[d:], p)
}
}
return dst[:d]
}
// inputMargin is the minimum number of extra input bytes to keep, inside
// encodeBlock's inner loop. On some architectures, this margin lets us
// implement a fast path for emitLiteral, where the copy of short (<= 16 byte)
// literals can be implemented as a single load to and store from a 16-byte
// register. That literal's actual length can be as short as 1 byte, so this
// can copy up to 15 bytes too much, but that's OK as subsequent iterations of
// the encoding loop will fix up the copy overrun, and this inputMargin ensures
// that we don't overrun the dst and src buffers.
const inputMargin = 16 - 1
// minNonLiteralBlockSize is the minimum size of the input to encodeBlock that
// could be encoded with a copy tag. This is the minimum with respect to the
// algorithm used by encodeBlock, not a minimum enforced by the file format.
//
// The encoded output must start with at least a 1 byte literal, as there are
// no previous bytes to copy. A minimal (1 byte) copy after that, generated
// from an emitCopy call in encodeBlock's main loop, would require at least
// another inputMargin bytes, for the reason above: we want any emitLiteral
// calls inside encodeBlock's main loop to use the fast path if possible, which
// requires being able to overrun by inputMargin bytes. Thus,
// minNonLiteralBlockSize equals 1 + 1 + inputMargin.
//
// The C++ code doesn't use this exact threshold, but it could, as discussed at
// https://groups.google.com/d/topic/snappy-compression/oGbhsdIJSJ8/discussion
// The difference between Go (2+inputMargin) and C++ (inputMargin) is purely an
// optimization. It should not affect the encoded form. This is tested by
// TestSameEncodingAsCppShortCopies.
const minNonLiteralBlockSize = 1 + 1 + inputMargin
// MaxEncodedLen returns the maximum length of a snappy block, given its
// uncompressed length.
//
// It will return a negative value if srcLen is too large to encode.
func MaxEncodedLen(srcLen int) int {
n := uint64(srcLen)
if n > 0xffffffff {
return -1
}
// Compressed data can be defined as:
// compressed := item* literal*
// item := literal* copy
//
// The trailing literal sequence has a space blowup of at most 62/60
// since a literal of length 60 needs one tag byte + one extra byte
// for length information.
//
// Item blowup is trickier to measure. Suppose the "copy" op copies
// 4 bytes of data. Because of a special check in the encoding code,
// we produce a 4-byte copy only if the offset is < 65536. Therefore
// the copy op takes 3 bytes to encode, and this type of item leads
// to at most the 62/60 blowup for representing literals.
//
// Suppose the "copy" op copies 5 bytes of data. If the offset is big
// enough, it will take 5 bytes to encode the copy op. Therefore the
// worst case here is a one-byte literal followed by a five-byte copy.
// That is, 6 bytes of input turn into 7 bytes of "compressed" data.
//
// This last factor dominates the blowup, so the final estimate is:
n = 32 + n + n/6
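	// For example, srcLen = 1<<20 (1 MiB) gives 32 + 1048576 + 174762 = 1223370 bytes of worst-case output.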
if n > 0xffffffff {
return -1
}
return int(n)
}
var errClosed = errors.New("snappy: Writer is closed")
// NewWriter returns a new Writer that compresses to w.
//
// The Writer returned does not buffer writes. There is no need to Flush or
// Close such a Writer.
//
// Deprecated: the Writer returned is not suitable for many small writes, only
// for few large writes. Use NewBufferedWriter instead, which is efficient
// regardless of the frequency and shape of the writes, and remember to Close
// that Writer when done.
func NewWriter(w io.Writer) *Writer {
return &Writer{
w: w,
obuf: make([]byte, obufLen),
}
}
// NewBufferedWriter returns a new Writer that compresses to w, using the
// framing format described at
// https://github.com/google/snappy/blob/master/framing_format.txt
//
// The Writer returned buffers writes. Users must call Close to guarantee all
// data has been forwarded to the underlying io.Writer. They may also call
// Flush zero or more times before calling Close.
func NewBufferedWriter(w io.Writer) *Writer {
return &Writer{
w: w,
ibuf: make([]byte, 0, maxBlockSize),
obuf: make([]byte, obufLen),
}
}
// Writer is an io.Writer that can write Snappy-compressed bytes.
type Writer struct {
w io.Writer
err error
// ibuf is a buffer for the incoming (uncompressed) bytes.
//
// Its use is optional. For backwards compatibility, Writers created by the
// NewWriter function have ibuf == nil, do not buffer incoming bytes, and
// therefore do not need to be Flush'ed or Close'd.
ibuf []byte
// obuf is a buffer for the outgoing (compressed) bytes.
obuf []byte
// wroteStreamHeader is whether we have written the stream header.
wroteStreamHeader bool
}
// Reset discards the writer's state and switches the Snappy writer to write to
// w. This permits reusing a Writer rather than allocating a new one.
func (w *Writer) Reset(writer io.Writer) {
w.w = writer
w.err = nil
if w.ibuf != nil {
w.ibuf = w.ibuf[:0]
}
w.wroteStreamHeader = false
}
// Write satisfies the io.Writer interface.
func (w *Writer) Write(p []byte) (nRet int, errRet error) {
if w.ibuf == nil {
// Do not buffer incoming bytes. This does not perform or compress well
// if the caller of Writer.Write writes many small slices. This
// behavior is therefore deprecated, but still supported for backwards
// compatibility with code that doesn't explicitly Flush or Close.
return w.write(p)
}
// The remainder of this method is based on bufio.Writer.Write from the
// standard library.
for len(p) > (cap(w.ibuf)-len(w.ibuf)) && w.err == nil {
var n int
if len(w.ibuf) == 0 {
// Large write, empty buffer.
// Write directly from p to avoid copy.
n, _ = w.write(p)
} else {
n = copy(w.ibuf[len(w.ibuf):cap(w.ibuf)], p)
w.ibuf = w.ibuf[:len(w.ibuf)+n]
w.Flush()
}
nRet += n
p = p[n:]
}
if w.err != nil {
return nRet, w.err
}
n := copy(w.ibuf[len(w.ibuf):cap(w.ibuf)], p)
w.ibuf = w.ibuf[:len(w.ibuf)+n]
nRet += n
return nRet, nil
}
func (w *Writer) write(p []byte) (nRet int, errRet error) {
if w.err != nil {
return 0, w.err
}
for len(p) > 0 {
obufStart := len(magicChunk)
if !w.wroteStreamHeader {
w.wroteStreamHeader = true
copy(w.obuf, magicChunk)
obufStart = 0
}
var uncompressed []byte
if len(p) > maxBlockSize {
uncompressed, p = p[:maxBlockSize], p[maxBlockSize:]
} else {
uncompressed, p = p, nil
}
checksum := crc(uncompressed)
// Compress the buffer, discarding the result if the improvement
// isn't at least 12.5%.
compressed := Encode(w.obuf[obufHeaderLen:], uncompressed)
chunkType := uint8(chunkTypeCompressedData)
chunkLen := 4 + len(compressed)
obufEnd := obufHeaderLen + len(compressed)
if len(compressed) >= len(uncompressed)-len(uncompressed)/8 {
chunkType = chunkTypeUncompressedData
chunkLen = 4 + len(uncompressed)
obufEnd = obufHeaderLen
}
// Fill in the per-chunk header that comes before the body.
w.obuf[len(magicChunk)+0] = chunkType
w.obuf[len(magicChunk)+1] = uint8(chunkLen >> 0)
w.obuf[len(magicChunk)+2] = uint8(chunkLen >> 8)
w.obuf[len(magicChunk)+3] = uint8(chunkLen >> 16)
w.obuf[len(magicChunk)+4] = uint8(checksum >> 0)
w.obuf[len(magicChunk)+5] = uint8(checksum >> 8)
w.obuf[len(magicChunk)+6] = uint8(checksum >> 16)
w.obuf[len(magicChunk)+7] = uint8(checksum >> 24)
if _, err := w.w.Write(w.obuf[obufStart:obufEnd]); err != nil {
w.err = err
return nRet, err
}
if chunkType == chunkTypeUncompressedData {
if _, err := w.w.Write(uncompressed); err != nil {
w.err = err
return nRet, err
}
}
nRet += len(uncompressed)
}
return nRet, nil
}
// Flush flushes the Writer to its underlying io.Writer.
func (w *Writer) Flush() error {
if w.err != nil {
return w.err
}
if len(w.ibuf) == 0 {
return nil
}
w.write(w.ibuf)
w.ibuf = w.ibuf[:0]
return w.err
}
// Close calls Flush and then closes the Writer.
func (w *Writer) Close() error {
w.Flush()
ret := w.err
if w.err == nil {
w.err = errClosed
}
return ret
}
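
Since this vendored package is new to the tree, a brief illustrative sketch (not part of the vendored sources) of its two public entry points may help: the block format via `Encode`/`Decode` and the framing format via `NewBufferedWriter`/`NewReader`.

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"

	"github.com/golang/snappy"
)

func main() {
	src := bytes.Repeat([]byte("kops state store "), 200)

	// Block format: one compressed block prefixed with the varint-encoded decoded length.
	block := snappy.Encode(nil, src)
	back, err := snappy.Decode(nil, block)
	if err != nil || !bytes.Equal(back, src) {
		panic("block round trip failed")
	}

	// Framing format: a chunked, checksummed stream suitable for io.Reader/io.Writer use.
	var buf bytes.Buffer
	w := snappy.NewBufferedWriter(&buf)
	if _, err := w.Write(src); err != nil {
		panic(err)
	}
	if err := w.Close(); err != nil {
		panic(err)
	}
	framedLen := buf.Len()

	streamed, err := ioutil.ReadAll(snappy.NewReader(&buf))
	if err != nil || !bytes.Equal(streamed, src) {
		panic("framed round trip failed")
	}
	fmt.Printf("src=%d block=%d framed=%d\n", len(src), len(block), framedLen)
}
```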

vendor/github.com/golang/snappy/encode_amd64.go generated vendored Normal file

@ -0,0 +1,29 @@
// Copyright 2016 The Snappy-Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build !appengine
// +build gc
// +build !noasm
package snappy
// emitLiteral has the same semantics as in encode_other.go.
//
//go:noescape
func emitLiteral(dst, lit []byte) int
// emitCopy has the same semantics as in encode_other.go.
//
//go:noescape
func emitCopy(dst []byte, offset, length int) int
// extendMatch has the same semantics as in encode_other.go.
//
//go:noescape
func extendMatch(src []byte, i, j int) int
// encodeBlock has the same semantics as in encode_other.go.
//
//go:noescape
func encodeBlock(dst, src []byte) (d int)

vendor/github.com/golang/snappy/encode_amd64.s generated vendored Normal file

@ -0,0 +1,730 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build !appengine
// +build gc
// +build !noasm
#include "textflag.h"
// The XXX lines assemble on Go 1.4, 1.5 and 1.7, but not 1.6, due to a
// Go toolchain regression. See https://github.com/golang/go/issues/15426 and
// https://github.com/golang/snappy/issues/29
//
// As a workaround, the package was built with a known good assembler, and
// those instructions were disassembled by "objdump -d" to yield the
// 4e 0f b7 7c 5c 78 movzwq 0x78(%rsp,%r11,2),%r15
// style comments, in AT&T asm syntax. Note that rsp here is a physical
// register, not Go/asm's SP pseudo-register (see https://golang.org/doc/asm).
// The instructions were then encoded as "BYTE $0x.." sequences, which assemble
// fine on Go 1.6.
// The asm code generally follows the pure Go code in encode_other.go, except
// where marked with a "!!!".
// ----------------------------------------------------------------------------
// func emitLiteral(dst, lit []byte) int
//
// All local variables fit into registers. The register allocation:
// - AX len(lit)
// - BX n
// - DX return value
// - DI &dst[i]
// - R10 &lit[0]
//
// The 24 bytes of stack space is to call runtime·memmove.
//
// The unusual register allocation of local variables, such as R10 for the
// source pointer, matches the allocation used at the call site in encodeBlock,
// which makes it easier to manually inline this function.
TEXT ·emitLiteral(SB), NOSPLIT, $24-56
MOVQ dst_base+0(FP), DI
MOVQ lit_base+24(FP), R10
MOVQ lit_len+32(FP), AX
MOVQ AX, DX
MOVL AX, BX
SUBL $1, BX
CMPL BX, $60
JLT oneByte
CMPL BX, $256
JLT twoBytes
threeBytes:
MOVB $0xf4, 0(DI)
MOVW BX, 1(DI)
ADDQ $3, DI
ADDQ $3, DX
JMP memmove
twoBytes:
MOVB $0xf0, 0(DI)
MOVB BX, 1(DI)
ADDQ $2, DI
ADDQ $2, DX
JMP memmove
oneByte:
SHLB $2, BX
MOVB BX, 0(DI)
ADDQ $1, DI
ADDQ $1, DX
memmove:
MOVQ DX, ret+48(FP)
// copy(dst[i:], lit)
//
// This means calling runtime·memmove(&dst[i], &lit[0], len(lit)), so we push
// DI, R10 and AX as arguments.
MOVQ DI, 0(SP)
MOVQ R10, 8(SP)
MOVQ AX, 16(SP)
CALL runtime·memmove(SB)
RET
// ----------------------------------------------------------------------------
// func emitCopy(dst []byte, offset, length int) int
//
// All local variables fit into registers. The register allocation:
// - AX length
// - SI &dst[0]
// - DI &dst[i]
// - R11 offset
//
// The unusual register allocation of local variables, such as R11 for the
// offset, matches the allocation used at the call site in encodeBlock, which
// makes it easier to manually inline this function.
TEXT ·emitCopy(SB), NOSPLIT, $0-48
MOVQ dst_base+0(FP), DI
MOVQ DI, SI
MOVQ offset+24(FP), R11
MOVQ length+32(FP), AX
loop0:
// for length >= 68 { etc }
CMPL AX, $68
JLT step1
// Emit a length 64 copy, encoded as 3 bytes.
MOVB $0xfe, 0(DI)
MOVW R11, 1(DI)
ADDQ $3, DI
SUBL $64, AX
JMP loop0
step1:
// if length > 64 { etc }
CMPL AX, $64
JLE step2
// Emit a length 60 copy, encoded as 3 bytes.
MOVB $0xee, 0(DI)
MOVW R11, 1(DI)
ADDQ $3, DI
SUBL $60, AX
step2:
// if length >= 12 || offset >= 2048 { goto step3 }
CMPL AX, $12
JGE step3
CMPL R11, $2048
JGE step3
// Emit the remaining copy, encoded as 2 bytes.
MOVB R11, 1(DI)
SHRL $8, R11
SHLB $5, R11
SUBB $4, AX
SHLB $2, AX
ORB AX, R11
ORB $1, R11
MOVB R11, 0(DI)
ADDQ $2, DI
// Return the number of bytes written.
SUBQ SI, DI
MOVQ DI, ret+40(FP)
RET
step3:
// Emit the remaining copy, encoded as 3 bytes.
SUBL $1, AX
SHLB $2, AX
ORB $2, AX
MOVB AX, 0(DI)
MOVW R11, 1(DI)
ADDQ $3, DI
// Return the number of bytes written.
SUBQ SI, DI
MOVQ DI, ret+40(FP)
RET
// ----------------------------------------------------------------------------
// func extendMatch(src []byte, i, j int) int
//
// All local variables fit into registers. The register allocation:
// - DX &src[0]
// - SI &src[j]
// - R13 &src[len(src) - 8]
// - R14 &src[len(src)]
// - R15 &src[i]
//
// The unusual register allocation of local variables, such as R15 for a source
// pointer, matches the allocation used at the call site in encodeBlock, which
// makes it easier to manually inline this function.
TEXT ·extendMatch(SB), NOSPLIT, $0-48
MOVQ src_base+0(FP), DX
MOVQ src_len+8(FP), R14
MOVQ i+24(FP), R15
MOVQ j+32(FP), SI
ADDQ DX, R14
ADDQ DX, R15
ADDQ DX, SI
MOVQ R14, R13
SUBQ $8, R13
cmp8:
// As long as we are 8 or more bytes before the end of src, we can load and
// compare 8 bytes at a time. If those 8 bytes are equal, repeat.
CMPQ SI, R13
JA cmp1
MOVQ (R15), AX
MOVQ (SI), BX
CMPQ AX, BX
JNE bsf
ADDQ $8, R15
ADDQ $8, SI
JMP cmp8
bsf:
// If those 8 bytes were not equal, XOR the two 8 byte values, and return
// the index of the first byte that differs. The BSF instruction finds the
// least significant 1 bit, the amd64 architecture is little-endian, and
// the shift by 3 converts a bit index to a byte index.
XORQ AX, BX
BSFQ BX, BX
SHRQ $3, BX
ADDQ BX, SI
// Convert from &src[ret] to ret.
SUBQ DX, SI
MOVQ SI, ret+40(FP)
RET
cmp1:
// In src's tail, compare 1 byte at a time.
CMPQ SI, R14
JAE extendMatchEnd
MOVB (R15), AX
MOVB (SI), BX
CMPB AX, BX
JNE extendMatchEnd
ADDQ $1, R15
ADDQ $1, SI
JMP cmp1
extendMatchEnd:
// Convert from &src[ret] to ret.
SUBQ DX, SI
MOVQ SI, ret+40(FP)
RET
// ----------------------------------------------------------------------------
// func encodeBlock(dst, src []byte) (d int)
//
// All local variables fit into registers, other than "var table". The register
// allocation:
// - AX . .
// - BX . .
// - CX 56 shift (note that amd64 shifts by non-immediates must use CX).
// - DX 64 &src[0], tableSize
// - SI 72 &src[s]
// - DI 80 &dst[d]
// - R9 88 sLimit
// - R10 . &src[nextEmit]
// - R11 96 prevHash, currHash, nextHash, offset
// - R12 104 &src[base], skip
// - R13 . &src[nextS], &src[len(src) - 8]
// - R14 . len(src), bytesBetweenHashLookups, &src[len(src)], x
// - R15 112 candidate
//
// The second column (56, 64, etc) is the stack offset to spill the registers
// when calling other functions. We could pack this slightly tighter, but it's
// simpler to have a dedicated spill map independent of the function called.
//
// "var table [maxTableSize]uint16" takes up 32768 bytes of stack space. An
// extra 56 bytes, to call other functions, and an extra 64 bytes, to spill
// local variables (registers) during calls gives 32768 + 56 + 64 = 32888.
TEXT ·encodeBlock(SB), 0, $32888-56
MOVQ dst_base+0(FP), DI
MOVQ src_base+24(FP), SI
MOVQ src_len+32(FP), R14
// shift, tableSize := uint32(32-8), 1<<8
MOVQ $24, CX
MOVQ $256, DX
calcShift:
// for ; tableSize < maxTableSize && tableSize < len(src); tableSize *= 2 {
// shift--
// }
CMPQ DX, $16384
JGE varTable
CMPQ DX, R14
JGE varTable
SUBQ $1, CX
SHLQ $1, DX
JMP calcShift
varTable:
// var table [maxTableSize]uint16
//
// In the asm code, unlike the Go code, we can zero-initialize only the
// first tableSize elements. Each uint16 element is 2 bytes and each MOVOU
// writes 16 bytes, so we can do only tableSize/8 writes instead of the
// 2048 writes that would zero-initialize all of table's 32768 bytes.
SHRQ $3, DX
LEAQ table-32768(SP), BX
PXOR X0, X0
memclr:
MOVOU X0, 0(BX)
ADDQ $16, BX
SUBQ $1, DX
JNZ memclr
// !!! DX = &src[0]
MOVQ SI, DX
// sLimit := len(src) - inputMargin
MOVQ R14, R9
SUBQ $15, R9
// !!! Pre-emptively spill CX, DX and R9 to the stack. Their values don't
// change for the rest of the function.
MOVQ CX, 56(SP)
MOVQ DX, 64(SP)
MOVQ R9, 88(SP)
// nextEmit := 0
MOVQ DX, R10
// s := 1
ADDQ $1, SI
// nextHash := hash(load32(src, s), shift)
MOVL 0(SI), R11
IMULL $0x1e35a7bd, R11
SHRL CX, R11
outer:
// for { etc }
// skip := 32
MOVQ $32, R12
// nextS := s
MOVQ SI, R13
// candidate := 0
MOVQ $0, R15
inner0:
// for { etc }
// s := nextS
MOVQ R13, SI
// bytesBetweenHashLookups := skip >> 5
MOVQ R12, R14
SHRQ $5, R14
// nextS = s + bytesBetweenHashLookups
ADDQ R14, R13
// skip += bytesBetweenHashLookups
ADDQ R14, R12
// if nextS > sLimit { goto emitRemainder }
MOVQ R13, AX
SUBQ DX, AX
CMPQ AX, R9
JA emitRemainder
// candidate = int(table[nextHash])
// XXX: MOVWQZX table-32768(SP)(R11*2), R15
// XXX: 4e 0f b7 7c 5c 78 movzwq 0x78(%rsp,%r11,2),%r15
BYTE $0x4e
BYTE $0x0f
BYTE $0xb7
BYTE $0x7c
BYTE $0x5c
BYTE $0x78
// table[nextHash] = uint16(s)
MOVQ SI, AX
SUBQ DX, AX
// XXX: MOVW AX, table-32768(SP)(R11*2)
// XXX: 66 42 89 44 5c 78 mov %ax,0x78(%rsp,%r11,2)
BYTE $0x66
BYTE $0x42
BYTE $0x89
BYTE $0x44
BYTE $0x5c
BYTE $0x78
// nextHash = hash(load32(src, nextS), shift)
MOVL 0(R13), R11
IMULL $0x1e35a7bd, R11
SHRL CX, R11
// if load32(src, s) != load32(src, candidate) { continue } break
MOVL 0(SI), AX
MOVL (DX)(R15*1), BX
CMPL AX, BX
JNE inner0
fourByteMatch:
// As per the encode_other.go code:
//
// A 4-byte match has been found. We'll later see etc.
// !!! Jump to a fast path for short (<= 16 byte) literals. See the comment
// on inputMargin in encode.go.
MOVQ SI, AX
SUBQ R10, AX
CMPQ AX, $16
JLE emitLiteralFastPath
// ----------------------------------------
// Begin inline of the emitLiteral call.
//
// d += emitLiteral(dst[d:], src[nextEmit:s])
MOVL AX, BX
SUBL $1, BX
CMPL BX, $60
JLT inlineEmitLiteralOneByte
CMPL BX, $256
JLT inlineEmitLiteralTwoBytes
inlineEmitLiteralThreeBytes:
MOVB $0xf4, 0(DI)
MOVW BX, 1(DI)
ADDQ $3, DI
JMP inlineEmitLiteralMemmove
inlineEmitLiteralTwoBytes:
MOVB $0xf0, 0(DI)
MOVB BX, 1(DI)
ADDQ $2, DI
JMP inlineEmitLiteralMemmove
inlineEmitLiteralOneByte:
SHLB $2, BX
MOVB BX, 0(DI)
ADDQ $1, DI
inlineEmitLiteralMemmove:
// Spill local variables (registers) onto the stack; call; unspill.
//
// copy(dst[i:], lit)
//
// This means calling runtime·memmove(&dst[i], &lit[0], len(lit)), so we push
// DI, R10 and AX as arguments.
MOVQ DI, 0(SP)
MOVQ R10, 8(SP)
MOVQ AX, 16(SP)
ADDQ AX, DI // Finish the "d +=" part of "d += emitLiteral(etc)".
MOVQ SI, 72(SP)
MOVQ DI, 80(SP)
MOVQ R15, 112(SP)
CALL runtime·memmove(SB)
MOVQ 56(SP), CX
MOVQ 64(SP), DX
MOVQ 72(SP), SI
MOVQ 80(SP), DI
MOVQ 88(SP), R9
MOVQ 112(SP), R15
JMP inner1
inlineEmitLiteralEnd:
// End inline of the emitLiteral call.
// ----------------------------------------
emitLiteralFastPath:
// !!! Emit the 1-byte encoding "uint8(len(lit)-1)<<2".
MOVB AX, BX
SUBB $1, BX
SHLB $2, BX
MOVB BX, (DI)
ADDQ $1, DI
// !!! Implement the copy from lit to dst as a 16-byte load and store.
// (Encode's documentation says that dst and src must not overlap.)
//
// This always copies 16 bytes, instead of only len(lit) bytes, but that's
// OK. Subsequent iterations will fix up the overrun.
//
// Note that on amd64, it is legal and cheap to issue unaligned 8-byte or
// 16-byte loads and stores. This technique probably wouldn't be as
// effective on architectures that are fussier about alignment.
MOVOU 0(R10), X0
MOVOU X0, 0(DI)
ADDQ AX, DI
inner1:
// for { etc }
// base := s
MOVQ SI, R12
// !!! offset := base - candidate
MOVQ R12, R11
SUBQ R15, R11
SUBQ DX, R11
// ----------------------------------------
// Begin inline of the extendMatch call.
//
// s = extendMatch(src, candidate+4, s+4)
// !!! R14 = &src[len(src)]
MOVQ src_len+32(FP), R14
ADDQ DX, R14
// !!! R13 = &src[len(src) - 8]
MOVQ R14, R13
SUBQ $8, R13
// !!! R15 = &src[candidate + 4]
ADDQ $4, R15
ADDQ DX, R15
// !!! s += 4
ADDQ $4, SI
inlineExtendMatchCmp8:
// As long as we are 8 or more bytes before the end of src, we can load and
// compare 8 bytes at a time. If those 8 bytes are equal, repeat.
CMPQ SI, R13
JA inlineExtendMatchCmp1
MOVQ (R15), AX
MOVQ (SI), BX
CMPQ AX, BX
JNE inlineExtendMatchBSF
ADDQ $8, R15
ADDQ $8, SI
JMP inlineExtendMatchCmp8
inlineExtendMatchBSF:
// If those 8 bytes were not equal, XOR the two 8 byte values, and return
// the index of the first byte that differs. The BSF instruction finds the
// least significant 1 bit, the amd64 architecture is little-endian, and
// the shift by 3 converts a bit index to a byte index.
XORQ AX, BX
BSFQ BX, BX
SHRQ $3, BX
ADDQ BX, SI
JMP inlineExtendMatchEnd
inlineExtendMatchCmp1:
// In src's tail, compare 1 byte at a time.
CMPQ SI, R14
JAE inlineExtendMatchEnd
MOVB (R15), AX
MOVB (SI), BX
CMPB AX, BX
JNE inlineExtendMatchEnd
ADDQ $1, R15
ADDQ $1, SI
JMP inlineExtendMatchCmp1
inlineExtendMatchEnd:
// End inline of the extendMatch call.
// ----------------------------------------
// ----------------------------------------
// Begin inline of the emitCopy call.
//
// d += emitCopy(dst[d:], base-candidate, s-base)
// !!! length := s - base
MOVQ SI, AX
SUBQ R12, AX
inlineEmitCopyLoop0:
// for length >= 68 { etc }
CMPL AX, $68
JLT inlineEmitCopyStep1
// Emit a length 64 copy, encoded as 3 bytes.
MOVB $0xfe, 0(DI)
MOVW R11, 1(DI)
ADDQ $3, DI
SUBL $64, AX
JMP inlineEmitCopyLoop0
inlineEmitCopyStep1:
// if length > 64 { etc }
CMPL AX, $64
JLE inlineEmitCopyStep2
// Emit a length 60 copy, encoded as 3 bytes.
MOVB $0xee, 0(DI)
MOVW R11, 1(DI)
ADDQ $3, DI
SUBL $60, AX
inlineEmitCopyStep2:
// if length >= 12 || offset >= 2048 { goto inlineEmitCopyStep3 }
CMPL AX, $12
JGE inlineEmitCopyStep3
CMPL R11, $2048
JGE inlineEmitCopyStep3
// Emit the remaining copy, encoded as 2 bytes.
MOVB R11, 1(DI)
SHRL $8, R11
SHLB $5, R11
SUBB $4, AX
SHLB $2, AX
ORB AX, R11
ORB $1, R11
MOVB R11, 0(DI)
ADDQ $2, DI
JMP inlineEmitCopyEnd
inlineEmitCopyStep3:
// Emit the remaining copy, encoded as 3 bytes.
SUBL $1, AX
SHLB $2, AX
ORB $2, AX
MOVB AX, 0(DI)
MOVW R11, 1(DI)
ADDQ $3, DI
inlineEmitCopyEnd:
// End inline of the emitCopy call.
// ----------------------------------------
// nextEmit = s
MOVQ SI, R10
// if s >= sLimit { goto emitRemainder }
MOVQ SI, AX
SUBQ DX, AX
CMPQ AX, R9
JAE emitRemainder
// As per the encode_other.go code:
//
// We could immediately etc.
// x := load64(src, s-1)
MOVQ -1(SI), R14
// prevHash := hash(uint32(x>>0), shift)
MOVL R14, R11
IMULL $0x1e35a7bd, R11
SHRL CX, R11
// table[prevHash] = uint16(s-1)
MOVQ SI, AX
SUBQ DX, AX
SUBQ $1, AX
// XXX: MOVW AX, table-32768(SP)(R11*2)
// XXX: 66 42 89 44 5c 78 mov %ax,0x78(%rsp,%r11,2)
BYTE $0x66
BYTE $0x42
BYTE $0x89
BYTE $0x44
BYTE $0x5c
BYTE $0x78
// currHash := hash(uint32(x>>8), shift)
SHRQ $8, R14
MOVL R14, R11
IMULL $0x1e35a7bd, R11
SHRL CX, R11
// candidate = int(table[currHash])
// XXX: MOVWQZX table-32768(SP)(R11*2), R15
// XXX: 4e 0f b7 7c 5c 78 movzwq 0x78(%rsp,%r11,2),%r15
BYTE $0x4e
BYTE $0x0f
BYTE $0xb7
BYTE $0x7c
BYTE $0x5c
BYTE $0x78
// table[currHash] = uint16(s)
ADDQ $1, AX
// XXX: MOVW AX, table-32768(SP)(R11*2)
// XXX: 66 42 89 44 5c 78 mov %ax,0x78(%rsp,%r11,2)
BYTE $0x66
BYTE $0x42
BYTE $0x89
BYTE $0x44
BYTE $0x5c
BYTE $0x78
// if uint32(x>>8) == load32(src, candidate) { continue }
MOVL (DX)(R15*1), BX
CMPL R14, BX
JEQ inner1
// nextHash = hash(uint32(x>>16), shift)
SHRQ $8, R14
MOVL R14, R11
IMULL $0x1e35a7bd, R11
SHRL CX, R11
// s++
ADDQ $1, SI
// break out of the inner1 for loop, i.e. continue the outer loop.
JMP outer
emitRemainder:
// if nextEmit < len(src) { etc }
MOVQ src_len+32(FP), AX
ADDQ DX, AX
CMPQ R10, AX
JEQ encodeBlockEnd
// d += emitLiteral(dst[d:], src[nextEmit:])
//
// Push args.
MOVQ DI, 0(SP)
MOVQ $0, 8(SP) // Unnecessary, as the callee ignores it, but conservative.
MOVQ $0, 16(SP) // Unnecessary, as the callee ignores it, but conservative.
MOVQ R10, 24(SP)
SUBQ R10, AX
MOVQ AX, 32(SP)
MOVQ AX, 40(SP) // Unnecessary, as the callee ignores it, but conservative.
// Spill local variables (registers) onto the stack; call; unspill.
MOVQ DI, 80(SP)
CALL ·emitLiteral(SB)
MOVQ 80(SP), DI
// Finish the "d +=" part of "d += emitLiteral(etc)".
ADDQ 48(SP), DI
encodeBlockEnd:
MOVQ dst_base+0(FP), AX
SUBQ AX, DI
MOVQ DI, d+48(FP)
RET

vendor/github.com/golang/snappy/encode_other.go generated vendored Normal file

@ -0,0 +1,238 @@
// Copyright 2016 The Snappy-Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build !amd64 appengine !gc noasm
package snappy
func load32(b []byte, i int) uint32 {
b = b[i : i+4 : len(b)] // Help the compiler eliminate bounds checks on the next line.
return uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24
}
func load64(b []byte, i int) uint64 {
b = b[i : i+8 : len(b)] // Help the compiler eliminate bounds checks on the next line.
return uint64(b[0]) | uint64(b[1])<<8 | uint64(b[2])<<16 | uint64(b[3])<<24 |
uint64(b[4])<<32 | uint64(b[5])<<40 | uint64(b[6])<<48 | uint64(b[7])<<56
}
// emitLiteral writes a literal chunk and returns the number of bytes written.
//
// It assumes that:
// dst is long enough to hold the encoded bytes
// 1 <= len(lit) && len(lit) <= 65536
func emitLiteral(dst, lit []byte) int {
i, n := 0, uint(len(lit)-1)
switch {
case n < 60:
dst[0] = uint8(n)<<2 | tagLiteral
i = 1
case n < 1<<8:
dst[0] = 60<<2 | tagLiteral
dst[1] = uint8(n)
i = 2
default:
dst[0] = 61<<2 | tagLiteral
dst[1] = uint8(n)
dst[2] = uint8(n >> 8)
i = 3
}
return i + copy(dst[i:], lit)
}
// emitCopy writes a copy chunk and returns the number of bytes written.
//
// It assumes that:
// dst is long enough to hold the encoded bytes
// 1 <= offset && offset <= 65535
// 4 <= length && length <= 65535
func emitCopy(dst []byte, offset, length int) int {
i := 0
// The maximum length for a single tagCopy1 or tagCopy2 op is 64 bytes. The
// threshold for this loop is a little higher (at 68 = 64 + 4), and the
// length emitted down below is a little lower (at 60 = 64 - 4), because
// it's shorter to encode a length 67 copy as a length 60 tagCopy2 followed
// by a length 7 tagCopy1 (which encodes as 3+2 bytes) than to encode it as
// a length 64 tagCopy2 followed by a length 3 tagCopy2 (which encodes as
// 3+3 bytes). The magic 4 in the 64±4 is because the minimum length for a
// tagCopy1 op is 4 bytes, which is why a length 3 copy has to be an
// encodes-as-3-bytes tagCopy2 instead of an encodes-as-2-bytes tagCopy1.
for length >= 68 {
// Emit a length 64 copy, encoded as 3 bytes.
dst[i+0] = 63<<2 | tagCopy2
dst[i+1] = uint8(offset)
dst[i+2] = uint8(offset >> 8)
i += 3
length -= 64
}
if length > 64 {
// Emit a length 60 copy, encoded as 3 bytes.
dst[i+0] = 59<<2 | tagCopy2
dst[i+1] = uint8(offset)
dst[i+2] = uint8(offset >> 8)
i += 3
length -= 60
}
if length >= 12 || offset >= 2048 {
// Emit the remaining copy, encoded as 3 bytes.
dst[i+0] = uint8(length-1)<<2 | tagCopy2
dst[i+1] = uint8(offset)
dst[i+2] = uint8(offset >> 8)
return i + 3
}
// Emit the remaining copy, encoded as 2 bytes.
dst[i+0] = uint8(offset>>8)<<5 | uint8(length-4)<<2 | tagCopy1
dst[i+1] = uint8(offset)
return i + 2
}
// extendMatch returns the largest k such that k <= len(src) and that
// src[i:i+k-j] and src[j:k] have the same contents.
//
// It assumes that:
// 0 <= i && i < j && j <= len(src)
func extendMatch(src []byte, i, j int) int {
for ; j < len(src) && src[i] == src[j]; i, j = i+1, j+1 {
}
return j
}
func hash(u, shift uint32) uint32 {
return (u * 0x1e35a7bd) >> shift
}
// encodeBlock encodes a non-empty src to a guaranteed-large-enough dst. It
// assumes that the varint-encoded length of the decompressed bytes has already
// been written.
//
// It also assumes that:
// len(dst) >= MaxEncodedLen(len(src)) &&
// minNonLiteralBlockSize <= len(src) && len(src) <= maxBlockSize
func encodeBlock(dst, src []byte) (d int) {
// Initialize the hash table. Its size ranges from 1<<8 to 1<<14 inclusive.
// The table element type is uint16, as s < sLimit and sLimit < len(src)
// and len(src) <= maxBlockSize and maxBlockSize == 65536.
const (
maxTableSize = 1 << 14
// tableMask is redundant, but helps the compiler eliminate bounds
// checks.
tableMask = maxTableSize - 1
)
shift := uint32(32 - 8)
for tableSize := 1 << 8; tableSize < maxTableSize && tableSize < len(src); tableSize *= 2 {
shift--
}
// In Go, all array elements are zero-initialized, so there is no advantage
// to a smaller tableSize per se. However, it matches the C++ algorithm,
// and in the asm versions of this code, we can get away with zeroing only
// the first tableSize elements.
var table [maxTableSize]uint16
// sLimit is when to stop looking for offset/length copies. The inputMargin
// lets us use a fast path for emitLiteral in the main loop, while we are
// looking for copies.
sLimit := len(src) - inputMargin
// nextEmit is where in src the next emitLiteral should start from.
nextEmit := 0
// The encoded form must start with a literal, as there are no previous
// bytes to copy, so we start looking for hash matches at s == 1.
s := 1
nextHash := hash(load32(src, s), shift)
for {
// Copied from the C++ snappy implementation:
//
// Heuristic match skipping: If 32 bytes are scanned with no matches
// found, start looking only at every other byte. If 32 more bytes are
// scanned (or skipped), look at every third byte, etc.. When a match
// is found, immediately go back to looking at every byte. This is a
// small loss (~5% performance, ~0.1% density) for compressible data
// due to more bookkeeping, but for non-compressible data (such as
// JPEG) it's a huge win since the compressor quickly "realizes" the
// data is incompressible and doesn't bother looking for matches
// everywhere.
//
// The "skip" variable keeps track of how many bytes there are since
// the last match; dividing it by 32 (i.e. right-shifting by five) gives
// the number of bytes to move ahead for each iteration.
skip := 32
nextS := s
candidate := 0
for {
s = nextS
bytesBetweenHashLookups := skip >> 5
nextS = s + bytesBetweenHashLookups
skip += bytesBetweenHashLookups
if nextS > sLimit {
goto emitRemainder
}
candidate = int(table[nextHash&tableMask])
table[nextHash&tableMask] = uint16(s)
nextHash = hash(load32(src, nextS), shift)
if load32(src, s) == load32(src, candidate) {
break
}
}
// A 4-byte match has been found. We'll later see if more than 4 bytes
// match. But, prior to the match, src[nextEmit:s] are unmatched. Emit
// them as literal bytes.
d += emitLiteral(dst[d:], src[nextEmit:s])
// Call emitCopy, and then see if another emitCopy could be our next
// move. Repeat until we find no match for the input immediately after
// what was consumed by the last emitCopy call.
//
// If we exit this loop normally then we need to call emitLiteral next,
// though we don't yet know how big the literal will be. We handle that
// by proceeding to the next iteration of the main loop. We also can
// exit this loop via goto if we get close to exhausting the input.
for {
// Invariant: we have a 4-byte match at s, and no need to emit any
// literal bytes prior to s.
base := s
// Extend the 4-byte match as long as possible.
//
// This is an inlined version of:
// s = extendMatch(src, candidate+4, s+4)
s += 4
for i := candidate + 4; s < len(src) && src[i] == src[s]; i, s = i+1, s+1 {
}
d += emitCopy(dst[d:], base-candidate, s-base)
nextEmit = s
if s >= sLimit {
goto emitRemainder
}
// We could immediately start working at s now, but to improve
// compression we first update the hash table at s-1 and at s. If
// another emitCopy is not our next move, also calculate nextHash
// at s+1. At least on GOARCH=amd64, these three hash calculations
// are faster as one load64 call (with some shifts) instead of
// three load32 calls.
x := load64(src, s-1)
prevHash := hash(uint32(x>>0), shift)
table[prevHash&tableMask] = uint16(s - 1)
currHash := hash(uint32(x>>8), shift)
candidate = int(table[currHash&tableMask])
table[currHash&tableMask] = uint16(s)
if uint32(x>>8) != load32(src, candidate) {
nextHash = hash(uint32(x>>16), shift)
s++
break
}
}
}
emitRemainder:
if nextEmit < len(src) {
d += emitLiteral(dst[d:], src[nextEmit:])
}
return d
}
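
To make the tag-byte arithmetic in `emitLiteral` and `emitCopy` above easier to follow, here is a small standalone sketch. It is not part of the vendored package, and the offset/length values are made up; it just reproduces the one-byte literal header and the two-byte tagCopy1 and three-byte tagCopy2 encodings by hand:

```go
package main

import "fmt"

const (
	tagLiteral = 0x00
	tagCopy1   = 0x01
	tagCopy2   = 0x02
)

func main() {
	// A 5-byte literal run gets the one-byte header uint8(len-1)<<2 | tagLiteral.
	fmt.Printf("literal header: %#02x\n", uint8(5-1)<<2|tagLiteral) // 0x10

	// A short, nearby copy (length < 12 and offset < 2048) fits the
	// two-byte tagCopy1 form: offset=10, length=5.
	offset, length := 10, 5
	b0 := uint8(offset>>8)<<5 | uint8(length-4)<<2 | tagCopy1
	b1 := uint8(offset)
	fmt.Printf("tagCopy1: % x\n", []byte{b0, b1}) // 05 0a

	// A longer or more distant copy takes the three-byte tagCopy2 form:
	// offset=3000, length=40.
	offset, length = 3000, 40
	c0 := uint8(length-1)<<2 | tagCopy2
	c1 := uint8(offset)
	c2 := uint8(offset >> 8)
	fmt.Printf("tagCopy2: % x\n", []byte{c0, c1, c2}) // 9e b8 0b
}
```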

vendor/github.com/golang/snappy/go.mod generated vendored Normal file

@ -0,0 +1 @@
module github.com/golang/snappy

vendor/github.com/golang/snappy/snappy.go generated vendored Normal file

@ -0,0 +1,98 @@
// Copyright 2011 The Snappy-Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package snappy implements the Snappy compression format. It aims for very
// high speeds and reasonable compression.
//
// There are actually two Snappy formats: block and stream. They are related,
// but different: trying to decompress block-compressed data as a Snappy stream
// will fail, and vice versa. The block format is the Decode and Encode
// functions and the stream format is the Reader and Writer types.
//
// The block format, the more common case, is used when the complete size (the
// number of bytes) of the original data is known upfront, at the time
// compression starts. The stream format, also known as the framing format, is
// for when that isn't always true.
//
// The canonical, C++ implementation is at https://github.com/google/snappy and
// it only implements the block format.
package snappy // import "github.com/golang/snappy"
import (
"hash/crc32"
)
/*
Each encoded block begins with the varint-encoded length of the decoded data,
followed by a sequence of chunks. Chunks begin and end on byte boundaries. The
first byte of each chunk is broken into its 2 least and 6 most significant bits
called l and m: l ranges in [0, 4) and m ranges in [0, 64). l is the chunk tag.
Zero means a literal tag. All other values mean a copy tag.
For literal tags:
- If m < 60, the next 1 + m bytes are literal bytes.
- Otherwise, let n be the little-endian unsigned integer denoted by the next
m - 59 bytes. The next 1 + n bytes after that are literal bytes.
For copy tags, length bytes are copied from offset bytes ago, in the style of
Lempel-Ziv compression algorithms. In particular:
- For l == 1, the offset ranges in [0, 1<<11) and the length in [4, 12).
The length is 4 + the low 3 bits of m. The high 3 bits of m form bits 8-10
of the offset. The next byte is bits 0-7 of the offset.
- For l == 2, the offset ranges in [0, 1<<16) and the length in [1, 65).
The length is 1 + m. The offset is the little-endian unsigned integer
denoted by the next 2 bytes.
- For l == 3, this tag is a legacy format that is no longer issued by most
encoders. Nonetheless, the offset ranges in [0, 1<<32) and the length in
[1, 65). The length is 1 + m. The offset is the little-endian unsigned
integer denoted by the next 4 bytes.
*/
const (
tagLiteral = 0x00
tagCopy1 = 0x01
tagCopy2 = 0x02
tagCopy4 = 0x03
)
const (
checksumSize = 4
chunkHeaderSize = 4
magicChunk = "\xff\x06\x00\x00" + magicBody
magicBody = "sNaPpY"
// maxBlockSize is the maximum size of the input to encodeBlock. It is not
// part of the wire format per se, but some parts of the encoder assume
// that an offset fits into a uint16.
//
// Also, for the framing format (Writer type instead of Encode function),
// https://github.com/google/snappy/blob/master/framing_format.txt says
// that "the uncompressed data in a chunk must be no longer than 65536
// bytes".
maxBlockSize = 65536
// maxEncodedLenOfMaxBlockSize equals MaxEncodedLen(maxBlockSize), but is
// hard coded to be a const instead of a variable, so that obufLen can also
// be a const. Their equivalence is confirmed by
// TestMaxEncodedLenOfMaxBlockSize.
maxEncodedLenOfMaxBlockSize = 76490
obufHeaderLen = len(magicChunk) + checksumSize + chunkHeaderSize
obufLen = obufHeaderLen + maxEncodedLenOfMaxBlockSize
)
const (
chunkTypeCompressedData = 0x00
chunkTypeUncompressedData = 0x01
chunkTypePadding = 0xfe
chunkTypeStreamIdentifier = 0xff
)
var crcTable = crc32.MakeTable(crc32.Castagnoli)
// crc implements the checksum specified in section 3 of
// https://github.com/google/snappy/blob/master/framing_format.txt
func crc(b []byte) uint32 {
c := crc32.Update(0, crcTable, b)
return uint32(c>>15|c<<17) + 0xa282ead8
}
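
The package comment above notes that the block format is exposed through the Decode and Encode functions. As a rough usage sketch of that public API (the input string is arbitrary and error handling is kept minimal), a round trip looks like:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/golang/snappy"
)

func main() {
	src := bytes.Repeat([]byte("some compressible payload "), 64)

	// Encode compresses src into the block format described above.
	compressed := snappy.Encode(nil, src)

	// Decode reverses it; the first argument is an optional buffer to reuse.
	decompressed, err := snappy.Decode(nil, compressed)
	if err != nil {
		panic(err)
	}

	fmt.Printf("%d -> %d bytes, round-trip ok: %v\n",
		len(src), len(compressed), bytes.Equal(src, decompressed))
}
```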

vendor/github.com/hashicorp/go-cleanhttp/BUILD.bazel generated vendored Normal file

@ -0,0 +1,13 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"cleanhttp.go",
"doc.go",
"handlers.go",
],
importmap = "k8s.io/kops/vendor/github.com/hashicorp/go-cleanhttp",
importpath = "github.com/hashicorp/go-cleanhttp",
visibility = ["//visibility:public"],
)

vendor/github.com/hashicorp/go-cleanhttp/LICENSE generated vendored Normal file

@ -0,0 +1,363 @@
Mozilla Public License, version 2.0
1. Definitions
1.1. "Contributor"
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. "Incompatible With Secondary Licenses"
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the terms of
a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in a
separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible, whether
at the time of the initial grant or subsequently, any and all of the
rights conveyed by this License.
1.10. "Modifications"
means any of the following:
a. any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the License,
by the making, using, selling, offering for sale, having made, import,
or transfer of either its Contributions or its Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, "control" means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights to
grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter the
recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty, or
limitations of liability) contained within the Source Code Form of the
Covered Software, except that You may alter any license notices to the
extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute,
judicial order, or regulation then You must: (a) comply with the terms of
this License to the maximum extent possible; and (b) describe the
limitations and the code they affect. Such description must be placed in a
text file included with all distributions of the Covered Software under
this License. Except to the extent prohibited by statute or regulation,
such description must be sufficiently detailed for a recipient of ordinary
skill to be able to understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing
basis, if such Contributor fails to notify You of the non-compliance by
some reasonable means prior to 60 days after You have come back into
compliance. Moreover, Your grants from a particular Contributor are
reinstated on an ongoing basis if such Contributor notifies You of the
non-compliance by some reasonable means, this is the first time You have
received notice of non-compliance with this License from such
Contributor, and You become compliant prior to 30 days after Your receipt
of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an "as is" basis,
without warranty of any kind, either expressed, implied, or statutory,
including, without limitation, warranties that the Covered Software is free
of defects, merchantable, fit for a particular purpose or non-infringing.
The entire risk as to the quality and performance of the Covered Software
is with You. Should any Covered Software prove defective in any respect,
You (not any Contributor) assume the cost of any necessary servicing,
repair, or correction. This disclaimer of warranty constitutes an essential
part of this License. No use of any Covered Software is authorized under
this License except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from
such party's negligence to the extent applicable law prohibits such
limitation. Some jurisdictions do not allow the exclusion or limitation of
incidental or consequential damages, so this exclusion and limitation may
not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts
of a jurisdiction where the defendant maintains its principal place of
business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions. Nothing
in this Section shall prevent a party's ability to bring cross-claims or
counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides that
the language of a contract shall be construed against the drafter shall not
be used to construe this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses If You choose to distribute Source Code Form that is
Incompatible With Secondary Licenses under the terms of this version of
the License, the notice described in Exhibit B of this License must be
attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file,
then You may include the notice in a location (such as a LICENSE file in a
relevant directory) where a recipient would be likely to look for such a
notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
This Source Code Form is "Incompatible
With Secondary Licenses", as defined by
the Mozilla Public License, v. 2.0.

vendor/github.com/hashicorp/go-cleanhttp/README.md generated vendored Normal file

@ -0,0 +1,30 @@
# cleanhttp
Functions for accessing "clean" Go http.Client values
-------------
The Go standard library contains a default `http.Client` called
`http.DefaultClient`. It is a common idiom in Go code to start with
`http.DefaultClient` and tweak it as necessary, and in fact, this is
encouraged; from the `http` package documentation:
> The Client's Transport typically has internal state (cached TCP connections),
so Clients should be reused instead of created as needed. Clients are safe for
concurrent use by multiple goroutines.
Unfortunately, this is a shared value, and it is not uncommon for libraries to
assume that they are free to modify it at will. With enough dependencies, it
can be very easy to encounter strange problems and race conditions due to
manipulation of this shared value across libraries and goroutines (clients are
safe for concurrent use, but writing values to the client struct itself is not
protected).
Making things worse is the fact that a bare `http.Client` will use a default
`http.Transport` called `http.DefaultTransport`, which is another global value
that behaves the same way. So it is not enough to simply replace
`http.DefaultClient` with `&http.Client{}`.
This repository provides some simple functions to get a "clean" `http.Client`
-- one that uses the same default values as the Go standard library, but
returns a client that does not share any state with other clients.
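
To make that concrete, here is a minimal usage sketch (the target URL is just a placeholder) using the `DefaultClient` helper defined in cleanhttp.go below:

```go
package main

import (
	"fmt"

	cleanhttp "github.com/hashicorp/go-cleanhttp"
)

func main() {
	// A one-shot client: it owns its Transport (keepalives disabled), so
	// nothing here mutates http.DefaultClient or http.DefaultTransport.
	client := cleanhttp.DefaultClient()

	resp, err := client.Get("https://example.com/")
	if err != nil {
		panic(err)
	}
	resp.Body.Close()

	fmt.Println(resp.Status)
}
```

For long-lived clients that hit the same hosts repeatedly, `DefaultPooledClient()` (see doc.go below) is the better choice.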

vendor/github.com/hashicorp/go-cleanhttp/cleanhttp.go generated vendored Normal file

@ -0,0 +1,57 @@
package cleanhttp
import (
"net"
"net/http"
"runtime"
"time"
)
// DefaultTransport returns a new http.Transport with similar default values to
// http.DefaultTransport, but with idle connections and keepalives disabled.
func DefaultTransport() *http.Transport {
transport := DefaultPooledTransport()
transport.DisableKeepAlives = true
transport.MaxIdleConnsPerHost = -1
return transport
}
// DefaultPooledTransport returns a new http.Transport with similar default
// values to http.DefaultTransport. Do not use this for transient transports as
// it can leak file descriptors over time. Only use this for transports that
// will be re-used for the same host(s).
func DefaultPooledTransport() *http.Transport {
transport := &http.Transport{
Proxy: http.ProxyFromEnvironment,
DialContext: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
DualStack: true,
}).DialContext,
MaxIdleConns: 100,
IdleConnTimeout: 90 * time.Second,
TLSHandshakeTimeout: 10 * time.Second,
ExpectContinueTimeout: 1 * time.Second,
MaxIdleConnsPerHost: runtime.GOMAXPROCS(0) + 1,
}
return transport
}
// DefaultClient returns a new http.Client with similar default values to
// http.Client, but with a non-shared Transport, idle connections disabled, and
// keepalives disabled.
func DefaultClient() *http.Client {
return &http.Client{
Transport: DefaultTransport(),
}
}
// DefaultPooledClient returns a new http.Client with similar default values to
// http.Client, but with a shared Transport. Do not use this function for
// transient clients as it can leak file descriptors over time. Only use this
// for clients that will be re-used for the same host(s).
func DefaultPooledClient() *http.Client {
return &http.Client{
Transport: DefaultPooledTransport(),
}
}

vendor/github.com/hashicorp/go-cleanhttp/doc.go generated vendored Normal file

@ -0,0 +1,20 @@
// Package cleanhttp offers convenience utilities for acquiring "clean"
// http.Transport and http.Client structs.
//
// Values set on http.DefaultClient and http.DefaultTransport affect all
// callers. This can have detrimental effects, especially in TLS contexts,
// where client or root certificates set to talk to multiple endpoints can end
// up displacing each other, leading to hard-to-debug issues. This package
// provides non-shared http.Client and http.Transport structs to ensure that
// the configuration will not be overwritten by other parts of the application
// or dependencies.
//
// The DefaultClient and DefaultTransport functions disable idle connections
// and keepalives. Without ensuring that idle connections are closed before
// garbage collection, short-term clients/transports can leak file descriptors,
// eventually leading to "too many open files" errors. If you will be
// connecting to the same hosts repeatedly from the same client, you can use
// DefaultPooledClient to receive a client that has connection pooling
// semantics similar to http.DefaultClient.
//
package cleanhttp

vendor/github.com/hashicorp/go-cleanhttp/go.mod generated vendored Normal file

@ -0,0 +1 @@
module github.com/hashicorp/go-cleanhttp

vendor/github.com/hashicorp/go-cleanhttp/handlers.go generated vendored Normal file

@ -0,0 +1,48 @@
package cleanhttp
import (
"net/http"
"strings"
"unicode"
)
// HandlerInput provides input options to cleanhttp's handlers
type HandlerInput struct {
ErrStatus int
}
// PrintablePathCheckHandler is a middleware that ensures the request path
// contains only printable runes.
func PrintablePathCheckHandler(next http.Handler, input *HandlerInput) http.Handler {
// Nil-check on input to make it optional
if input == nil {
input = &HandlerInput{
ErrStatus: http.StatusBadRequest,
}
}
// Default to http.StatusBadRequest on error
if input.ErrStatus == 0 {
input.ErrStatus = http.StatusBadRequest
}
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r != nil {
// Check URL path for non-printable characters
idx := strings.IndexFunc(r.URL.Path, func(c rune) bool {
return !unicode.IsPrint(c)
})
if idx != -1 {
w.WriteHeader(input.ErrStatus)
return
}
if next != nil {
next.ServeHTTP(w, r)
}
}
return
})
}
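
A brief sketch of how the handler above might be wired into a server (the mux, route, and listen address are made up for illustration); passing a nil *HandlerInput falls back to the http.StatusBadRequest default shown in the code:

```go
package main

import (
	"log"
	"net/http"

	cleanhttp "github.com/hashicorp/go-cleanhttp"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	// Requests whose URL path contains non-printable runes are rejected
	// with a 400 before they ever reach the mux.
	handler := cleanhttp.PrintablePathCheckHandler(mux, nil)

	log.Fatal(http.ListenAndServe("127.0.0.1:8080", handler))
}
```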


@ -0,0 +1,4 @@
.idea/
*.iml
*.test
.vscode/


@ -0,0 +1,12 @@
sudo: false
language: go
go:
- 1.12.4
branches:
only:
- master
script: make updatedeps test


@ -0,0 +1,10 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["client.go"],
importmap = "k8s.io/kops/vendor/github.com/hashicorp/go-retryablehttp",
importpath = "github.com/hashicorp/go-retryablehttp",
visibility = ["//visibility:public"],
deps = ["//vendor/github.com/hashicorp/go-cleanhttp:go_default_library"],
)

vendor/github.com/hashicorp/go-retryablehttp/LICENSE generated vendored Normal file

@ -0,0 +1,363 @@
(Full text of the Mozilla Public License, version 2.0, identical to the go-cleanhttp LICENSE reproduced above.)

vendor/github.com/hashicorp/go-retryablehttp/Makefile generated vendored Normal file

@ -0,0 +1,11 @@
default: test
test:
go vet ./...
go test -race ./...
updatedeps:
go get -f -t -u ./...
go get -f -u ./...
.PHONY: default test updatedeps

vendor/github.com/hashicorp/go-retryablehttp/README.md generated vendored Normal file

@ -0,0 +1,46 @@
go-retryablehttp
================
[![Build Status](http://img.shields.io/travis/hashicorp/go-retryablehttp.svg?style=flat-square)][travis]
[![Go Documentation](http://img.shields.io/badge/go-documentation-blue.svg?style=flat-square)][godocs]
[travis]: http://travis-ci.org/hashicorp/go-retryablehttp
[godocs]: http://godoc.org/github.com/hashicorp/go-retryablehttp
The `retryablehttp` package provides a familiar HTTP client interface with
automatic retries and exponential backoff. It is a thin wrapper over the
standard `net/http` client library and exposes nearly the same public API. This
makes `retryablehttp` very easy to drop into existing programs.
`retryablehttp` performs automatic retries under certain conditions. Mainly, if
an error is returned by the client (connection errors, etc.), or if a 500-range
response code is received (except 501), then a retry is invoked after a wait
period. Otherwise, the response is returned and left to the caller to
interpret.
The main difference from `net/http` is that requests which take a request body
(POST/PUT et al.) can have the body provided in a number of ways (some more or
less efficient) that allow "rewinding" the request body if the initial request
fails so that the full request can be attempted again. See the
[godoc](http://godoc.org/github.com/hashicorp/go-retryablehttp) for more
details.
Example Use
===========
Using this library should look almost identical to what you would do with
`net/http`. The simplest example of a GET request is shown below:
```go
resp, err := retryablehttp.Get("/foo")
if err != nil {
panic(err)
}
```
The returned response object is an `*http.Response`, the same thing you would
usually get from `net/http`. Had the request failed one or more times, the above
call would block and retry with exponential backoff.
For more usage and examples see the
[godoc](http://godoc.org/github.com/hashicorp/go-retryablehttp).
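Beyond the defaults, the client can be tuned before use. The following sketch is illustrative only; the endpoint, payload, and retry settings are assumptions, not values taken from this README:
```go
package main

import (
	"log"
	"time"

	"github.com/hashicorp/go-retryablehttp"
)

func main() {
	client := retryablehttp.NewClient()
	client.RetryMax = 10                         // allow up to 10 retries
	client.RetryWaitMin = 500 * time.Millisecond // shortest backoff
	client.RetryWaitMax = 10 * time.Second       // longest backoff

	// A []byte body can be rewound for every retry without copying.
	resp, err := client.Post("https://example.invalid/api",
		"application/json", []byte(`{"hello":"world"}`))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
}
```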

549
vendor/github.com/hashicorp/go-retryablehttp/client.go generated vendored Normal file
View File

@ -0,0 +1,549 @@
// The retryablehttp package provides a familiar HTTP client interface with
// automatic retries and exponential backoff. It is a thin wrapper over the
// standard net/http client library and exposes nearly the same public API.
// This makes retryablehttp very easy to drop into existing programs.
//
// retryablehttp performs automatic retries under certain conditions. Mainly, if
// an error is returned by the client (connection errors, etc.), or if a 500-range
// response is received, then a retry is invoked. Otherwise, the response is
// returned and left to the caller to interpret.
//
// Requests which take a request body should provide a non-nil function
// parameter. The best choice is to provide either a function satisfying
// ReaderFunc which provides multiple io.Readers in an efficient manner, a
// *bytes.Buffer (the underlying raw byte slice will be used) or a raw byte
// slice. As it is a reference type, and we will wrap it as needed by readers,
// we can efficiently re-use the request body without needing to copy it. If an
// io.Reader (such as a *bytes.Reader) is provided, the full body will be read
// prior to the first request, and will be efficiently re-used for any retries.
// ReadSeeker can be used, but some users have observed occasional data races
// between the net/http library and the Seek functionality of some
// implementations of ReadSeeker, so it should be avoided if possible.
package retryablehttp
import (
"bytes"
"context"
"fmt"
"io"
"io/ioutil"
"log"
"math"
"math/rand"
"net/http"
"net/url"
"os"
"strings"
"time"
cleanhttp "github.com/hashicorp/go-cleanhttp"
)
var (
// Default retry configuration
defaultRetryWaitMin = 1 * time.Second
defaultRetryWaitMax = 30 * time.Second
defaultRetryMax = 4
// defaultClient is used for performing requests without explicitly making
// a new client. It is purposely private to avoid modifications.
defaultClient = NewClient()
// We need to consume response bodies to maintain http connections, but
// limit the size we consume to respReadLimit.
respReadLimit = int64(4096)
)
// ReaderFunc is the type of function that can be given natively to NewRequest
type ReaderFunc func() (io.Reader, error)
// LenReader is an interface implemented by many in-memory io.Reader's. Used
// for automatically sending the right Content-Length header when possible.
type LenReader interface {
Len() int
}
// Request wraps the metadata needed to create HTTP requests.
type Request struct {
// body is a seekable reader over the request body payload. This is
// used to rewind the request data in between retries.
body ReaderFunc
// Embed an HTTP request directly. This makes a *Request act exactly
// like an *http.Request so that all meta methods are supported.
*http.Request
}
// WithContext returns the wrapped Request with a shallow copy of the underlying *http.Request
// with its context changed to ctx. The provided ctx must be non-nil.
func (r *Request) WithContext(ctx context.Context) *Request {
r.Request = r.Request.WithContext(ctx)
return r
}
// BodyBytes allows accessing the request body. It is an analogue to
// http.Request's Body variable, but it returns a copy of the underlying data
// rather than consuming it.
//
// This function is not thread-safe; do not call it at the same time as another
// call, or at the same time this request is being used with Client.Do.
func (r *Request) BodyBytes() ([]byte, error) {
if r.body == nil {
return nil, nil
}
body, err := r.body()
if err != nil {
return nil, err
}
buf := new(bytes.Buffer)
_, err = buf.ReadFrom(body)
if err != nil {
return nil, err
}
return buf.Bytes(), nil
}
func getBodyReaderAndContentLength(rawBody interface{}) (ReaderFunc, int64, error) {
var bodyReader ReaderFunc
var contentLength int64
if rawBody != nil {
switch body := rawBody.(type) {
// If they gave us a function already, great! Use it.
case ReaderFunc:
bodyReader = body
tmp, err := body()
if err != nil {
return nil, 0, err
}
if lr, ok := tmp.(LenReader); ok {
contentLength = int64(lr.Len())
}
if c, ok := tmp.(io.Closer); ok {
c.Close()
}
case func() (io.Reader, error):
bodyReader = body
tmp, err := body()
if err != nil {
return nil, 0, err
}
if lr, ok := tmp.(LenReader); ok {
contentLength = int64(lr.Len())
}
if c, ok := tmp.(io.Closer); ok {
c.Close()
}
// If a regular byte slice, we can read it over and over via new
// readers
case []byte:
buf := body
bodyReader = func() (io.Reader, error) {
return bytes.NewReader(buf), nil
}
contentLength = int64(len(buf))
// If a bytes.Buffer we can read the underlying byte slice over and
// over
case *bytes.Buffer:
buf := body
bodyReader = func() (io.Reader, error) {
return bytes.NewReader(buf.Bytes()), nil
}
contentLength = int64(buf.Len())
// We prioritize *bytes.Reader here because we don't really want to
// deal with it seeking so want it to match here instead of the
// io.ReadSeeker case.
case *bytes.Reader:
buf, err := ioutil.ReadAll(body)
if err != nil {
return nil, 0, err
}
bodyReader = func() (io.Reader, error) {
return bytes.NewReader(buf), nil
}
contentLength = int64(len(buf))
// Compat case
case io.ReadSeeker:
raw := body
bodyReader = func() (io.Reader, error) {
_, err := raw.Seek(0, 0)
return ioutil.NopCloser(raw), err
}
if lr, ok := raw.(LenReader); ok {
contentLength = int64(lr.Len())
}
// Read all in so we can reset
case io.Reader:
buf, err := ioutil.ReadAll(body)
if err != nil {
return nil, 0, err
}
bodyReader = func() (io.Reader, error) {
return bytes.NewReader(buf), nil
}
contentLength = int64(len(buf))
default:
return nil, 0, fmt.Errorf("cannot handle type %T", rawBody)
}
}
return bodyReader, contentLength, nil
}
// FromRequest wraps an http.Request in a retryablehttp.Request
func FromRequest(r *http.Request) (*Request, error) {
bodyReader, _, err := getBodyReaderAndContentLength(r.Body)
if err != nil {
return nil, err
}
// Could assert contentLength == r.ContentLength
return &Request{bodyReader, r}, nil
}
// NewRequest creates a new wrapped request.
func NewRequest(method, url string, rawBody interface{}) (*Request, error) {
bodyReader, contentLength, err := getBodyReaderAndContentLength(rawBody)
if err != nil {
return nil, err
}
httpReq, err := http.NewRequest(method, url, nil)
if err != nil {
return nil, err
}
httpReq.ContentLength = contentLength
return &Request{bodyReader, httpReq}, nil
}
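// Illustrative sketch (editor's addition, not part of the upstream file; the
// URL and payload are made up): a request built with NewRequest from a []byte
// body can be retried safely because the body is re-readable.
//
//	req, err := NewRequest("PUT", "https://example.invalid/upload", []byte("payload"))
//	if err != nil {
//		return err
//	}
//	resp, err := NewClient().Do(req)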
// Logger interface allows to use other loggers than
// standard log.Logger.
type Logger interface {
Printf(string, ...interface{})
}
// RequestLogHook allows a function to run before each retry. The HTTP
// request which will be made, and the retry number (0 for the initial
// request) are available to users. The internal logger is exposed to
// consumers.
type RequestLogHook func(Logger, *http.Request, int)
// ResponseLogHook is like RequestLogHook, but allows running a function
// on each HTTP response. This function will be invoked at the end of
// every HTTP request executed, regardless of whether a subsequent retry
// needs to be performed or not. If the response body is read or closed
// from this method, this will affect the response returned from Do().
type ResponseLogHook func(Logger, *http.Response)
// CheckRetry specifies a policy for handling retries. It is called
// following each request with the response and error values returned by
// the http.Client. If CheckRetry returns false, the Client stops retrying
// and returns the response to the caller. If CheckRetry returns an error,
// that error value is returned in lieu of the error from the request. The
// Client will close any response body when retrying, but if the retry is
// aborted it is up to the CheckRetry callback to properly close any
// response body before returning.
type CheckRetry func(ctx context.Context, resp *http.Response, err error) (bool, error)
// Backoff specifies a policy for how long to wait between retries.
// It is called after a failing request to determine the amount of time
// that should pass before trying again.
type Backoff func(min, max time.Duration, attemptNum int, resp *http.Response) time.Duration
// ErrorHandler is called if retries are expired, containing the last status
// from the http library. If not specified, default behavior for the library is
// to close the body and return an error indicating how many tries were
// attempted. If overriding this, be sure to close the body if needed.
type ErrorHandler func(resp *http.Response, err error, numTries int) (*http.Response, error)
// Client is used to make HTTP requests. It adds additional functionality
// like automatic retries to tolerate minor outages.
type Client struct {
HTTPClient *http.Client // Internal HTTP client.
Logger Logger // Custom logger instance.
RetryWaitMin time.Duration // Minimum time to wait
RetryWaitMax time.Duration // Maximum time to wait
RetryMax int // Maximum number of retries
// RequestLogHook allows a user-supplied function to be called
// before each retry.
RequestLogHook RequestLogHook
// ResponseLogHook allows a user-supplied function to be called
// with the response from each HTTP request executed.
ResponseLogHook ResponseLogHook
// CheckRetry specifies the policy for handling retries, and is called
// after each request. The default policy is DefaultRetryPolicy.
CheckRetry CheckRetry
// Backoff specifies the policy for how long to wait between retries
Backoff Backoff
// ErrorHandler specifies the custom error handler to use, if any
ErrorHandler ErrorHandler
}
// NewClient creates a new Client with default settings.
func NewClient() *Client {
return &Client{
HTTPClient: cleanhttp.DefaultClient(),
Logger: log.New(os.Stderr, "", log.LstdFlags),
RetryWaitMin: defaultRetryWaitMin,
RetryWaitMax: defaultRetryWaitMax,
RetryMax: defaultRetryMax,
CheckRetry: DefaultRetryPolicy,
Backoff: DefaultBackoff,
}
}
// DefaultRetryPolicy provides a default callback for Client.CheckRetry, which
// will retry on connection errors and server errors.
func DefaultRetryPolicy(ctx context.Context, resp *http.Response, err error) (bool, error) {
// do not retry on context.Canceled or context.DeadlineExceeded
if ctx.Err() != nil {
return false, ctx.Err()
}
if err != nil {
return true, err
}
// Check the response code. We retry on 500-range responses to allow
// the server time to recover, as 500's are typically not permanent
// errors and may relate to outages on the server side. This will catch
// invalid response codes as well, like 0 and 999.
if resp.StatusCode == 0 || (resp.StatusCode >= 500 && resp.StatusCode != 501) {
return true, nil
}
return false, nil
}
// DefaultBackoff provides a default callback for Client.Backoff which
// will perform exponential backoff based on the attempt number and limited
// by the provided minimum and maximum durations.
func DefaultBackoff(min, max time.Duration, attemptNum int, resp *http.Response) time.Duration {
mult := math.Pow(2, float64(attemptNum)) * float64(min)
sleep := time.Duration(mult)
if float64(sleep) != mult || sleep > max {
sleep = max
}
return sleep
}
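// Worked example (editor's note, not in the upstream source): with the default
// minimum of 1s and maximum of 30s, successive retries wait 1s, 2s, 4s, 8s,
// 16s, and then cap at 30s.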
// LinearJitterBackoff provides a callback for Client.Backoff which will
// perform linear backoff based on the attempt number and with jitter to
// prevent a thundering herd.
//
// min and max here are *not* absolute values. The number to be multiplied by
// the attempt number will be chosen at random from between them, thus they are
// bounding the jitter.
//
// For instance:
// * To get strictly linear backoff of one second increasing each retry, set
// both to one second (1s, 2s, 3s, 4s, ...)
// * To get a small amount of jitter centered around one second increasing each
// retry, set to around one second, such as a min of 800ms and max of 1200ms
// (892ms, 2102ms, 2945ms, 4312ms, ...)
// * To get extreme jitter, set to a very wide spread, such as a min of 100ms
// and a max of 20s (15382ms, 292ms, 51321ms, 35234ms, ...)
func LinearJitterBackoff(min, max time.Duration, attemptNum int, resp *http.Response) time.Duration {
// attemptNum always starts at zero but we want to start at 1 for multiplication
attemptNum++
if max <= min {
// Unclear what to do here, or they are the same, so return min *
// attemptNum
return min * time.Duration(attemptNum)
}
// Seed rand; doing this every time is fine
rand := rand.New(rand.NewSource(int64(time.Now().Nanosecond())))
// Pick a random number that lies somewhere between the min and max and
// multiply by the attemptNum. attemptNum starts at zero so we always
// increment here. We first get a random percentage, then apply that to the
// difference between min and max, and add to min.
jitter := rand.Float64() * float64(max-min)
jitterMin := int64(jitter) + int64(min)
return time.Duration(jitterMin * int64(attemptNum))
}
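// Illustrative wiring (editor's note, not part of the upstream file; the
// durations below are assumptions): to use this policy, assign it to a
// client's Backoff field.
//
//	c := NewClient()
//	c.RetryWaitMin = 800 * time.Millisecond
//	c.RetryWaitMax = 1200 * time.Millisecond
//	c.Backoff = LinearJitterBackoff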
// PassthroughErrorHandler is an ErrorHandler that directly passes through the
// values from the net/http library for the final request. The body is not
// closed.
func PassthroughErrorHandler(resp *http.Response, err error, _ int) (*http.Response, error) {
return resp, err
}
// Do wraps calling an HTTP method with retries.
func (c *Client) Do(req *Request) (*http.Response, error) {
if c.Logger != nil {
c.Logger.Printf("[DEBUG] %s %s", req.Method, req.URL)
}
var resp *http.Response
var err error
for i := 0; ; i++ {
var code int // HTTP response code
// Always rewind the request body when non-nil.
if req.body != nil {
body, err := req.body()
if err != nil {
return resp, err
}
if c, ok := body.(io.ReadCloser); ok {
req.Body = c
} else {
req.Body = ioutil.NopCloser(body)
}
}
if c.RequestLogHook != nil {
c.RequestLogHook(c.Logger, req.Request, i)
}
// Attempt the request
resp, err = c.HTTPClient.Do(req.Request)
if resp != nil {
code = resp.StatusCode
}
// Check if we should continue with retries.
checkOK, checkErr := c.CheckRetry(req.Context(), resp, err)
if err != nil {
if c.Logger != nil {
c.Logger.Printf("[ERR] %s %s request failed: %v", req.Method, req.URL, err)
}
} else {
// Call this here to maintain the behavior of logging all requests,
// even if CheckRetry signals to stop.
if c.ResponseLogHook != nil {
// Call the response logger function if provided.
c.ResponseLogHook(c.Logger, resp)
}
}
// Now decide if we should continue.
if !checkOK {
if checkErr != nil {
err = checkErr
}
return resp, err
}
// We do this before drainBody because there's no need for the I/O if
// we're breaking out
remain := c.RetryMax - i
if remain <= 0 {
break
}
// We're going to retry, consume any response to reuse the connection.
if err == nil && resp != nil {
c.drainBody(resp.Body)
}
wait := c.Backoff(c.RetryWaitMin, c.RetryWaitMax, i, resp)
desc := fmt.Sprintf("%s %s", req.Method, req.URL)
if code > 0 {
desc = fmt.Sprintf("%s (status: %d)", desc, code)
}
if c.Logger != nil {
c.Logger.Printf("[DEBUG] %s: retrying in %s (%d left)", desc, wait, remain)
}
select {
case <-req.Context().Done():
return nil, req.Context().Err()
case <-time.After(wait):
}
}
if c.ErrorHandler != nil {
return c.ErrorHandler(resp, err, c.RetryMax+1)
}
// By default, we close the response body and return an error without
// returning the response
if resp != nil {
resp.Body.Close()
}
return nil, fmt.Errorf("%s %s giving up after %d attempts",
req.Method, req.URL, c.RetryMax+1)
}
// Try to read the response body so we can reuse this connection.
func (c *Client) drainBody(body io.ReadCloser) {
defer body.Close()
_, err := io.Copy(ioutil.Discard, io.LimitReader(body, respReadLimit))
if err != nil {
if c.Logger != nil {
c.Logger.Printf("[ERR] error reading response body: %v", err)
}
}
}
// Get is a shortcut for doing a GET request without making a new client.
func Get(url string) (*http.Response, error) {
return defaultClient.Get(url)
}
// Get is a convenience helper for doing simple GET requests.
func (c *Client) Get(url string) (*http.Response, error) {
req, err := NewRequest("GET", url, nil)
if err != nil {
return nil, err
}
return c.Do(req)
}
// Head is a shortcut for doing a HEAD request without making a new client.
func Head(url string) (*http.Response, error) {
return defaultClient.Head(url)
}
// Head is a convenience method for doing simple HEAD requests.
func (c *Client) Head(url string) (*http.Response, error) {
req, err := NewRequest("HEAD", url, nil)
if err != nil {
return nil, err
}
return c.Do(req)
}
// Post is a shortcut for doing a POST request without making a new client.
func Post(url, bodyType string, body interface{}) (*http.Response, error) {
return defaultClient.Post(url, bodyType, body)
}
// Post is a convenience method for doing simple POST requests.
func (c *Client) Post(url, bodyType string, body interface{}) (*http.Response, error) {
req, err := NewRequest("POST", url, body)
if err != nil {
return nil, err
}
req.Header.Set("Content-Type", bodyType)
return c.Do(req)
}
// PostForm is a shortcut to perform a POST with form data without creating
// a new client.
func PostForm(url string, data url.Values) (*http.Response, error) {
return defaultClient.PostForm(url, data)
}
// PostForm is a convenience method for doing simple POST operations using
// pre-filled url.Values form data.
func (c *Client) PostForm(url string, data url.Values) (*http.Response, error) {
return c.Post(url, "application/x-www-form-urlencoded", strings.NewReader(data.Encode()))
}

3
vendor/github.com/hashicorp/go-retryablehttp/go.mod generated vendored Normal file
View File

@ -0,0 +1,3 @@
module github.com/hashicorp/go-retryablehttp
require github.com/hashicorp/go-cleanhttp v0.5.0

2
vendor/github.com/hashicorp/go-retryablehttp/go.sum generated vendored Normal file
View File

@ -0,0 +1,2 @@
github.com/hashicorp/go-cleanhttp v0.5.0 h1:wvCrVc9TjDls6+YGAF2hAifE1E5U1+b4tH6KdvN3Gig=
github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=

12
vendor/github.com/hashicorp/go-rootcerts/.travis.yml generated vendored Normal file
View File

@ -0,0 +1,12 @@
sudo: false
language: go
go:
- 1.6
branches:
only:
- master
script: make test

23
vendor/github.com/hashicorp/go-rootcerts/BUILD.bazel generated vendored Normal file
View File

@ -0,0 +1,23 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"doc.go",
"rootcerts.go",
"rootcerts_base.go",
"rootcerts_darwin.go",
],
importmap = "k8s.io/kops/vendor/github.com/hashicorp/go-rootcerts",
importpath = "github.com/hashicorp/go-rootcerts",
visibility = ["//visibility:public"],
deps = select({
"@io_bazel_rules_go//go/platform:darwin": [
"//vendor/github.com/mitchellh/go-homedir:go_default_library",
],
"@io_bazel_rules_go//go/platform:ios": [
"//vendor/github.com/mitchellh/go-homedir:go_default_library",
],
"//conditions:default": [],
}),
)

363
vendor/github.com/hashicorp/go-rootcerts/LICENSE generated vendored Normal file
View File

@ -0,0 +1,363 @@
Mozilla Public License, version 2.0
1. Definitions
1.1. "Contributor"
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. "Incompatible With Secondary Licenses"
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the terms of
a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in a
separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible, whether
at the time of the initial grant or subsequently, any and all of the
rights conveyed by this License.
1.10. "Modifications"
means any of the following:
a. any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the License,
by the making, using, selling, offering for sale, having made, import,
or transfer of either its Contributions or its Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, "control" means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights to
grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter the
recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty, or
limitations of liability) contained within the Source Code Form of the
Covered Software, except that You may alter any license notices to the
extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute,
judicial order, or regulation then You must: (a) comply with the terms of
this License to the maximum extent possible; and (b) describe the
limitations and the code they affect. Such description must be placed in a
text file included with all distributions of the Covered Software under
this License. Except to the extent prohibited by statute or regulation,
such description must be sufficiently detailed for a recipient of ordinary
skill to be able to understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing
basis, if such Contributor fails to notify You of the non-compliance by
some reasonable means prior to 60 days after You have come back into
compliance. Moreover, Your grants from a particular Contributor are
reinstated on an ongoing basis if such Contributor notifies You of the
non-compliance by some reasonable means, this is the first time You have
received notice of non-compliance with this License from such
Contributor, and You become compliant prior to 30 days after Your receipt
of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an "as is" basis,
without warranty of any kind, either expressed, implied, or statutory,
including, without limitation, warranties that the Covered Software is free
of defects, merchantable, fit for a particular purpose or non-infringing.
The entire risk as to the quality and performance of the Covered Software
is with You. Should any Covered Software prove defective in any respect,
You (not any Contributor) assume the cost of any necessary servicing,
repair, or correction. This disclaimer of warranty constitutes an essential
part of this License. No use of any Covered Software is authorized under
this License except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from
such party's negligence to the extent applicable law prohibits such
limitation. Some jurisdictions do not allow the exclusion or limitation of
incidental or consequential damages, so this exclusion and limitation may
not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts
of a jurisdiction where the defendant maintains its principal place of
business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions. Nothing
in this Section shall prevent a party's ability to bring cross-claims or
counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides that
the language of a contract shall be construed against the drafter shall not
be used to construe this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses If You choose to distribute Source Code Form that is
Incompatible With Secondary Licenses under the terms of this version of
the License, the notice described in Exhibit B of this License must be
attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file,
then You may include the notice in a location (such as a LICENSE file in a
relevant directory) where a recipient would be likely to look for such a
notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
This Source Code Form is "Incompatible
With Secondary Licenses", as defined by
the Mozilla Public License, v. 2.0.

8
vendor/github.com/hashicorp/go-rootcerts/Makefile generated vendored Normal file
View File

@ -0,0 +1,8 @@
TEST?=./...
test:
go test $(TEST) $(TESTARGS) -timeout=3s -parallel=4
go vet $(TEST)
go test $(TEST) -race
.PHONY: test

43
vendor/github.com/hashicorp/go-rootcerts/README.md generated vendored Normal file
View File

@ -0,0 +1,43 @@
# rootcerts
Functions for loading root certificates for TLS connections.
-----
Go's standard library `crypto/tls` provides a common mechanism for configuring
TLS connections in `tls.Config`. The `RootCAs` field on this struct is a pool
of certificates for the client to use as a trust store when verifying server
certificates.
This library contains utility functions for loading certificates destined for
that field, as well as one other important thing:
When the `RootCAs` field is `nil`, the standard library attempts to load the
host's root CA set. This behavior is OS-specific, and the Darwin
implementation contains [a bug that prevents trusted certificates from the
System and Login keychains from being loaded][1]. This library contains
Darwin-specific behavior that works around that bug.
[1]: https://github.com/golang/go/issues/14514
## Example Usage
Here's a snippet demonstrating how this library is meant to be used:
```go
func httpClient() (*http.Client, error) {
tlsConfig := &tls.Config{}
err := rootcerts.ConfigureTLS(tlsConfig, &rootcerts.Config{
CAFile: os.Getenv("MYAPP_CAFILE"),
CAPath: os.Getenv("MYAPP_CAPATH"),
})
if err != nil {
return nil, err
}
c := cleanhttp.DefaultClient()
t := cleanhttp.DefaultTransport()
t.TLSClientConfig = tlsConfig
c.Transport = t
return c, nil
}
```
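A hedged variant using `LoadCACerts` directly with the standard library is sketched below; the CA bundle path is an assumption:
```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"

	"github.com/hashicorp/go-rootcerts"
)

func main() {
	// LoadCACerts honors CAFile first, then CAPath, then system roots.
	pool, err := rootcerts.LoadCACerts(&rootcerts.Config{
		CAFile: "/etc/ssl/my-ca-bundle.pem", // assumed path
	})
	if err != nil {
		log.Fatal(err)
	}

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}
	_ = client // use the client for TLS requests as usual
}
```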

9
vendor/github.com/hashicorp/go-rootcerts/doc.go generated vendored Normal file
View File

@ -0,0 +1,9 @@
// Package rootcerts contains functions to aid in loading CA certificates for
// TLS connections.
//
// In addition, its default behavior on Darwin works around an open issue [1]
// in Go's crypto/x509 that prevents certificates from being loaded from the
// System or Login keychains.
//
// [1] https://github.com/golang/go/issues/14514
package rootcerts

5
vendor/github.com/hashicorp/go-rootcerts/go.mod generated vendored Normal file
View File

@ -0,0 +1,5 @@
module github.com/hashicorp/go-rootcerts
go 1.12
require github.com/mitchellh/go-homedir v1.1.0

2
vendor/github.com/hashicorp/go-rootcerts/go.sum generated vendored Normal file
View File

@ -0,0 +1,2 @@
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=

103
vendor/github.com/hashicorp/go-rootcerts/rootcerts.go generated vendored Normal file
View File

@ -0,0 +1,103 @@
package rootcerts
import (
"crypto/tls"
"crypto/x509"
"fmt"
"io/ioutil"
"os"
"path/filepath"
)
// Config determines where LoadCACerts will load certificates from. When both
// CAFile and CAPath are blank, this library's functions will either load
// system roots explicitly and return them, or set the CertPool to nil to allow
// Go's standard library to load system certs.
type Config struct {
// CAFile is a path to a PEM-encoded certificate file or bundle. Takes
// precedence over CAPath.
CAFile string
// CAPath is a path to a directory populated with PEM-encoded certificates.
CAPath string
}
// ConfigureTLS sets up the RootCAs on the provided tls.Config based on the
// Config specified.
func ConfigureTLS(t *tls.Config, c *Config) error {
if t == nil {
return nil
}
pool, err := LoadCACerts(c)
if err != nil {
return err
}
t.RootCAs = pool
return nil
}
// LoadCACerts loads a CertPool based on the Config specified.
func LoadCACerts(c *Config) (*x509.CertPool, error) {
if c == nil {
c = &Config{}
}
if c.CAFile != "" {
return LoadCAFile(c.CAFile)
}
if c.CAPath != "" {
return LoadCAPath(c.CAPath)
}
return LoadSystemCAs()
}
// LoadCAFile loads a single PEM-encoded file from the path specified.
func LoadCAFile(caFile string) (*x509.CertPool, error) {
pool := x509.NewCertPool()
pem, err := ioutil.ReadFile(caFile)
if err != nil {
return nil, fmt.Errorf("Error loading CA File: %s", err)
}
ok := pool.AppendCertsFromPEM(pem)
if !ok {
return nil, fmt.Errorf("Error loading CA File: Couldn't parse PEM in: %s", caFile)
}
return pool, nil
}
// LoadCAPath walks the provided path and loads all certificates encountered into
// a pool.
func LoadCAPath(caPath string) (*x509.CertPool, error) {
pool := x509.NewCertPool()
walkFn := func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if info.IsDir() {
return nil
}
pem, err := ioutil.ReadFile(path)
if err != nil {
return fmt.Errorf("Error loading file from CAPath: %s", err)
}
ok := pool.AppendCertsFromPEM(pem)
if !ok {
return fmt.Errorf("Error loading CA Path: Couldn't parse PEM in: %s", path)
}
return nil
}
err := filepath.Walk(caPath, walkFn)
if err != nil {
return nil, err
}
return pool, nil
}

12
vendor/github.com/hashicorp/go-rootcerts/rootcerts_base.go generated vendored Normal file
View File

@ -0,0 +1,12 @@
// +build !darwin
package rootcerts
import "crypto/x509"
// LoadSystemCAs does nothing on non-Darwin systems. We return nil so that
// default behavior of standard TLS config libraries is triggered, which is to
// load system certs.
func LoadSystemCAs() (*x509.CertPool, error) {
return nil, nil
}

48
vendor/github.com/hashicorp/go-rootcerts/rootcerts_darwin.go generated vendored Normal file
View File

@ -0,0 +1,48 @@
package rootcerts
import (
"crypto/x509"
"os/exec"
"path"
"github.com/mitchellh/go-homedir"
)
// LoadSystemCAs has special behavior on Darwin systems to work around the
// keychain-loading issue described in the package documentation.
func LoadSystemCAs() (*x509.CertPool, error) {
pool := x509.NewCertPool()
for _, keychain := range certKeychains() {
err := addCertsFromKeychain(pool, keychain)
if err != nil {
return nil, err
}
}
return pool, nil
}
func addCertsFromKeychain(pool *x509.CertPool, keychain string) error {
cmd := exec.Command("/usr/bin/security", "find-certificate", "-a", "-p", keychain)
data, err := cmd.Output()
if err != nil {
return err
}
pool.AppendCertsFromPEM(data)
return nil
}
func certKeychains() []string {
keychains := []string{
"/System/Library/Keychains/SystemRootCertificates.keychain",
"/Library/Keychains/System.keychain",
}
home, err := homedir.Dir()
if err == nil {
loginKeychain := path.Join(home, "Library", "Keychains", "login.keychain")
keychains = append(keychains, loginKeychain)
}
return keychains
}

58
vendor/github.com/hashicorp/vault/api/BUILD.bazel generated vendored Normal file
View File

@ -0,0 +1,58 @@
load("@io_bazel_rules_go//go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"auth.go",
"auth_token.go",
"client.go",
"help.go",
"logical.go",
"output_string.go",
"plugin_helpers.go",
"renewer.go",
"request.go",
"response.go",
"secret.go",
"ssh.go",
"ssh_agent.go",
"sys.go",
"sys_audit.go",
"sys_auth.go",
"sys_capabilities.go",
"sys_config_cors.go",
"sys_generate_root.go",
"sys_health.go",
"sys_init.go",
"sys_leader.go",
"sys_leases.go",
"sys_mounts.go",
"sys_plugins.go",
"sys_policy.go",
"sys_raft.go",
"sys_rekey.go",
"sys_rotate.go",
"sys_seal.go",
"sys_stepdown.go",
],
importmap = "k8s.io/kops/vendor/github.com/hashicorp/vault/api",
importpath = "github.com/hashicorp/vault/api",
visibility = ["//visibility:public"],
deps = [
"//vendor/github.com/hashicorp/errwrap:go_default_library",
"//vendor/github.com/hashicorp/go-cleanhttp:go_default_library",
"//vendor/github.com/hashicorp/go-multierror:go_default_library",
"//vendor/github.com/hashicorp/go-retryablehttp:go_default_library",
"//vendor/github.com/hashicorp/go-rootcerts:go_default_library",
"//vendor/github.com/hashicorp/hcl:go_default_library",
"//vendor/github.com/hashicorp/hcl/hcl/ast:go_default_library",
"//vendor/github.com/hashicorp/vault/sdk/helper/consts:go_default_library",
"//vendor/github.com/hashicorp/vault/sdk/helper/hclutil:go_default_library",
"//vendor/github.com/hashicorp/vault/sdk/helper/jsonutil:go_default_library",
"//vendor/github.com/hashicorp/vault/sdk/helper/parseutil:go_default_library",
"//vendor/github.com/mitchellh/mapstructure:go_default_library",
"//vendor/golang.org/x/net/http2:go_default_library",
"//vendor/golang.org/x/time/rate:go_default_library",
"//vendor/gopkg.in/square/go-jose.v2/jwt:go_default_library",
],
)

363
vendor/github.com/hashicorp/vault/api/LICENSE generated vendored Normal file
View File

@ -0,0 +1,363 @@
Mozilla Public License, version 2.0
1. Definitions
1.1. "Contributor"
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. "Incompatible With Secondary Licenses"
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the terms of
a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in a
separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible, whether
at the time of the initial grant or subsequently, any and all of the
rights conveyed by this License.
1.10. "Modifications"
means any of the following:
a. any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the License,
by the making, using, selling, offering for sale, having made, import,
or transfer of either its Contributions or its Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, "control" means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights to
grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter the
recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty, or
limitations of liability) contained within the Source Code Form of the
Covered Software, except that You may alter any license notices to the
extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute,
judicial order, or regulation then You must: (a) comply with the terms of
this License to the maximum extent possible; and (b) describe the
limitations and the code they affect. Such description must be placed in a
text file included with all distributions of the Covered Software under
this License. Except to the extent prohibited by statute or regulation,
such description must be sufficiently detailed for a recipient of ordinary
skill to be able to understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing
basis, if such Contributor fails to notify You of the non-compliance by
some reasonable means prior to 60 days after You have come back into
compliance. Moreover, Your grants from a particular Contributor are
reinstated on an ongoing basis if such Contributor notifies You of the
non-compliance by some reasonable means, this is the first time You have
received notice of non-compliance with this License from such
Contributor, and You become compliant prior to 30 days after Your receipt
of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an "as is" basis,
without warranty of any kind, either expressed, implied, or statutory,
including, without limitation, warranties that the Covered Software is free
of defects, merchantable, fit for a particular purpose or non-infringing.
The entire risk as to the quality and performance of the Covered Software
is with You. Should any Covered Software prove defective in any respect,
You (not any Contributor) assume the cost of any necessary servicing,
repair, or correction. This disclaimer of warranty constitutes an essential
part of this License. No use of any Covered Software is authorized under
this License except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from
such party's negligence to the extent applicable law prohibits such
limitation. Some jurisdictions do not allow the exclusion or limitation of
incidental or consequential damages, so this exclusion and limitation may
not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts
of a jurisdiction where the defendant maintains its principal place of
business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions. Nothing
in this Section shall prevent a party's ability to bring cross-claims or
counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides that
the language of a contract shall be construed against the drafter shall not
be used to construe this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses If You choose to distribute Source Code Form that is
Incompatible With Secondary Licenses under the terms of this version of
the License, the notice described in Exhibit B of this License must be
attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file,
then You may include the notice in a location (such as a LICENSE file in a
relevant directory) where a recipient would be likely to look for such a
notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
This Source Code Form is "Incompatible
With Secondary Licenses", as defined by
the Mozilla Public License, v. 2.0.

11
vendor/github.com/hashicorp/vault/api/auth.go generated vendored Normal file
View File

@ -0,0 +1,11 @@
package api
// Auth is used to perform credential backend related operations.
type Auth struct {
c *Client
}
// Auth is used to return the client for credential-backend API calls.
func (c *Client) Auth() *Auth {
return &Auth{c: c}
}

276
vendor/github.com/hashicorp/vault/api/auth_token.go generated vendored Normal file
View File

@ -0,0 +1,276 @@
package api
import "context"
// TokenAuth is used to perform token backend operations on Vault
type TokenAuth struct {
c *Client
}
// Token is used to return the client for token-backend API calls
func (a *Auth) Token() *TokenAuth {
return &TokenAuth{c: a.c}
}
func (c *TokenAuth) Create(opts *TokenCreateRequest) (*Secret, error) {
r := c.c.NewRequest("POST", "/v1/auth/token/create")
if err := r.SetJSONBody(opts); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
return ParseSecret(resp.Body)
}
func (c *TokenAuth) CreateOrphan(opts *TokenCreateRequest) (*Secret, error) {
r := c.c.NewRequest("POST", "/v1/auth/token/create-orphan")
if err := r.SetJSONBody(opts); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
return ParseSecret(resp.Body)
}
func (c *TokenAuth) CreateWithRole(opts *TokenCreateRequest, roleName string) (*Secret, error) {
r := c.c.NewRequest("POST", "/v1/auth/token/create/"+roleName)
if err := r.SetJSONBody(opts); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
return ParseSecret(resp.Body)
}
func (c *TokenAuth) Lookup(token string) (*Secret, error) {
r := c.c.NewRequest("POST", "/v1/auth/token/lookup")
if err := r.SetJSONBody(map[string]interface{}{
"token": token,
}); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
return ParseSecret(resp.Body)
}
func (c *TokenAuth) LookupAccessor(accessor string) (*Secret, error) {
r := c.c.NewRequest("POST", "/v1/auth/token/lookup-accessor")
if err := r.SetJSONBody(map[string]interface{}{
"accessor": accessor,
}); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
return ParseSecret(resp.Body)
}
func (c *TokenAuth) LookupSelf() (*Secret, error) {
r := c.c.NewRequest("GET", "/v1/auth/token/lookup-self")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
return ParseSecret(resp.Body)
}
func (c *TokenAuth) Renew(token string, increment int) (*Secret, error) {
r := c.c.NewRequest("PUT", "/v1/auth/token/renew")
if err := r.SetJSONBody(map[string]interface{}{
"token": token,
"increment": increment,
}); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
return ParseSecret(resp.Body)
}
func (c *TokenAuth) RenewSelf(increment int) (*Secret, error) {
r := c.c.NewRequest("PUT", "/v1/auth/token/renew-self")
body := map[string]interface{}{"increment": increment}
if err := r.SetJSONBody(body); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
return ParseSecret(resp.Body)
}
// RenewTokenAsSelf behaves like renew-self, but authenticates using a provided
// token instead of the token attached to the client.
func (c *TokenAuth) RenewTokenAsSelf(token string, increment int) (*Secret, error) {
r := c.c.NewRequest("PUT", "/v1/auth/token/renew-self")
r.ClientToken = token
body := map[string]interface{}{"increment": increment}
if err := r.SetJSONBody(body); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
return ParseSecret(resp.Body)
}
// RevokeAccessor revokes a token associated with the given accessor
// along with all the child tokens.
func (c *TokenAuth) RevokeAccessor(accessor string) error {
r := c.c.NewRequest("POST", "/v1/auth/token/revoke-accessor")
if err := r.SetJSONBody(map[string]interface{}{
"accessor": accessor,
}); err != nil {
return err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return err
}
defer resp.Body.Close()
return nil
}
// RevokeOrphan revokes a token without revoking the tree underneath it (so
// child tokens are orphaned rather than revoked)
func (c *TokenAuth) RevokeOrphan(token string) error {
r := c.c.NewRequest("PUT", "/v1/auth/token/revoke-orphan")
if err := r.SetJSONBody(map[string]interface{}{
"token": token,
}); err != nil {
return err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return err
}
defer resp.Body.Close()
return nil
}
// RevokeSelf revokes the token making the call. The `token` parameter is kept
// for backwards compatibility but is ignored; only the client's set token has
// an effect.
func (c *TokenAuth) RevokeSelf(token string) error {
r := c.c.NewRequest("PUT", "/v1/auth/token/revoke-self")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return err
}
defer resp.Body.Close()
return nil
}
// RevokeTree is the "normal" revoke operation that revokes the given token and
// the entire tree underneath -- all of its child tokens, their child tokens,
// etc.
func (c *TokenAuth) RevokeTree(token string) error {
r := c.c.NewRequest("PUT", "/v1/auth/token/revoke")
if err := r.SetJSONBody(map[string]interface{}{
"token": token,
}); err != nil {
return err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return err
}
defer resp.Body.Close()
return nil
}
// TokenCreateRequest is the options structure for creating a token.
type TokenCreateRequest struct {
ID string `json:"id,omitempty"`
Policies []string `json:"policies,omitempty"`
Metadata map[string]string `json:"meta,omitempty"`
Lease string `json:"lease,omitempty"`
TTL string `json:"ttl,omitempty"`
ExplicitMaxTTL string `json:"explicit_max_ttl,omitempty"`
Period string `json:"period,omitempty"`
NoParent bool `json:"no_parent,omitempty"`
NoDefaultPolicy bool `json:"no_default_policy,omitempty"`
DisplayName string `json:"display_name"`
NumUses int `json:"num_uses"`
Renewable *bool `json:"renewable,omitempty"`
Type string `json:"type"`
EntityAlias string `json:"entity_alias"`
}
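
The token helpers above are what a caller uses once the client already holds a Vault token. A rough usage sketch follows, assuming a reachable server at `VAULT_ADDR` and a parent token in `VAULT_TOKEN`; the policy name and TTL are illustrative only:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/vault/api"
)

func main() {
	// DefaultConfig reads VAULT_ADDR etc.; NewClient also picks up VAULT_TOKEN.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	tokenAuth := client.Auth().Token()

	// Create a short-lived child token with an illustrative policy attached.
	secret, err := tokenAuth.Create(&api.TokenCreateRequest{
		Policies: []string{"example-policy"},
		TTL:      "15m",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("accessor:", secret.Auth.Accessor)

	// Inspect the token the client itself is using.
	self, err := tokenAuth.LookupSelf()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("display name:", self.Data["display_name"])
}
```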

850
vendor/github.com/hashicorp/vault/api/client.go generated vendored Normal file
View File

@ -0,0 +1,850 @@
package api
import (
"context"
"crypto/tls"
"fmt"
"net"
"net/http"
"net/url"
"os"
"path"
"strconv"
"strings"
"sync"
"time"
"unicode"
"github.com/hashicorp/errwrap"
cleanhttp "github.com/hashicorp/go-cleanhttp"
retryablehttp "github.com/hashicorp/go-retryablehttp"
rootcerts "github.com/hashicorp/go-rootcerts"
"github.com/hashicorp/vault/sdk/helper/consts"
"github.com/hashicorp/vault/sdk/helper/parseutil"
"golang.org/x/net/http2"
"golang.org/x/time/rate"
)
const EnvVaultAddress = "VAULT_ADDR"
const EnvVaultAgentAddr = "VAULT_AGENT_ADDR"
const EnvVaultCACert = "VAULT_CACERT"
const EnvVaultCAPath = "VAULT_CAPATH"
const EnvVaultClientCert = "VAULT_CLIENT_CERT"
const EnvVaultClientKey = "VAULT_CLIENT_KEY"
const EnvVaultClientTimeout = "VAULT_CLIENT_TIMEOUT"
const EnvVaultSkipVerify = "VAULT_SKIP_VERIFY"
const EnvVaultNamespace = "VAULT_NAMESPACE"
const EnvVaultTLSServerName = "VAULT_TLS_SERVER_NAME"
const EnvVaultWrapTTL = "VAULT_WRAP_TTL"
const EnvVaultMaxRetries = "VAULT_MAX_RETRIES"
const EnvVaultToken = "VAULT_TOKEN"
const EnvVaultMFA = "VAULT_MFA"
const EnvRateLimit = "VAULT_RATE_LIMIT"
// Deprecated values
const EnvVaultAgentAddress = "VAULT_AGENT_ADDR"
const EnvVaultInsecure = "VAULT_SKIP_VERIFY"
// WrappingLookupFunc is a function that, given an HTTP verb and a path,
// returns an optional string duration to be used for response wrapping (e.g.
// "15s", or simply "15"). The path will not begin with "/v1/" or "v1/" or "/",
// however, end-of-path forward slashes are not trimmed, so must match your
// called path precisely.
type WrappingLookupFunc func(operation, path string) string
// Config is used to configure the creation of the client.
type Config struct {
modifyLock sync.RWMutex
// Address is the address of the Vault server. This should be a complete
// URL such as "http://vault.example.com". If you need a custom SSL
// cert or want to enable insecure mode, you need to specify a custom
// HttpClient.
Address string
// AgentAddress is the address of the local Vault agent. This should be a
// complete URL such as "http://vault.example.com".
AgentAddress string
// HttpClient is the HTTP client to use. Vault sets sane defaults for the
// http.Client and its associated http.Transport created in DefaultConfig.
// If you must modify Vault's defaults, it is suggested that you start with
// that client and modify as needed rather than start with an empty client
// (or http.DefaultClient).
HttpClient *http.Client
// MaxRetries controls the maximum number of times to retry when a 5xx
// error occurs. Set to 0 to disable retrying. Defaults to 2 (for a total
// of three tries).
MaxRetries int
// Timeout is for setting custom timeout parameter in the HttpClient
Timeout time.Duration
// If there is an error when creating the configuration, this will be the
// error
Error error
// The Backoff function to use; a default is used if not provided
Backoff retryablehttp.Backoff
// Limiter is the rate limiter used by the client.
// If this pointer is nil, then there will be no limit set.
// In contrast, if this pointer is set, even to an empty struct,
// then that limiter will be used. Note that an empty Limiter
// is equivalent to blocking all events.
Limiter *rate.Limiter
// OutputCurlString causes the actual request to return an error of type
// *OutputStringError. Type asserting the error message will allow
// fetching a cURL-compatible string for the operation.
//
// Note: It is not thread-safe to set this and make concurrent requests
// with the same client. Cloning a client will not clone this value.
OutputCurlString bool
}
// TLSConfig contains the parameters needed to configure TLS on the HTTP client
// used to communicate with Vault.
type TLSConfig struct {
// CACert is the path to a PEM-encoded CA cert file to use to verify the
// Vault server SSL certificate.
CACert string
// CAPath is the path to a directory of PEM-encoded CA cert files to verify
// the Vault server SSL certificate.
CAPath string
// ClientCert is the path to the certificate for Vault communication
ClientCert string
// ClientKey is the path to the private key for Vault communication
ClientKey string
// TLSServerName, if set, is used to set the SNI host when connecting via
// TLS.
TLSServerName string
// Insecure enables or disables SSL verification
Insecure bool
}
// DefaultConfig returns a default configuration for the client. It is
// safe to modify the return value of this function.
//
// The default Address is https://127.0.0.1:8200, but this can be overridden by
// setting the `VAULT_ADDR` environment variable.
//
// If an error is encountered, this will return nil.
func DefaultConfig() *Config {
config := &Config{
Address: "https://127.0.0.1:8200",
HttpClient: cleanhttp.DefaultPooledClient(),
}
config.HttpClient.Timeout = time.Second * 60
transport := config.HttpClient.Transport.(*http.Transport)
transport.TLSHandshakeTimeout = 10 * time.Second
transport.TLSClientConfig = &tls.Config{
MinVersion: tls.VersionTLS12,
}
if err := http2.ConfigureTransport(transport); err != nil {
config.Error = err
return config
}
if err := config.ReadEnvironment(); err != nil {
config.Error = err
return config
}
// Ensure redirects are not automatically followed
// Note that this is sane for the API client as it has its own
// redirect handling logic (and thus also for command/meta),
// but in e.g. http_test actual redirect handling is necessary
config.HttpClient.CheckRedirect = func(req *http.Request, via []*http.Request) error {
// Returning this value causes the Go net library to not close the
// response body and to nil out the error. Otherwise retry clients may
// try three times on every redirect because it sees an error from this
// function (to prevent redirects) passing through to it.
return http.ErrUseLastResponse
}
config.Backoff = retryablehttp.LinearJitterBackoff
config.MaxRetries = 2
return config
}
// ConfigureTLS takes a set of TLS configurations and applies those to the
// HTTP client.
func (c *Config) ConfigureTLS(t *TLSConfig) error {
if c.HttpClient == nil {
c.HttpClient = DefaultConfig().HttpClient
}
clientTLSConfig := c.HttpClient.Transport.(*http.Transport).TLSClientConfig
var clientCert tls.Certificate
foundClientCert := false
switch {
case t.ClientCert != "" && t.ClientKey != "":
var err error
clientCert, err = tls.LoadX509KeyPair(t.ClientCert, t.ClientKey)
if err != nil {
return err
}
foundClientCert = true
case t.ClientCert != "" || t.ClientKey != "":
return fmt.Errorf("both client cert and client key must be provided")
}
if t.CACert != "" || t.CAPath != "" {
rootConfig := &rootcerts.Config{
CAFile: t.CACert,
CAPath: t.CAPath,
}
if err := rootcerts.ConfigureTLS(clientTLSConfig, rootConfig); err != nil {
return err
}
}
if t.Insecure {
clientTLSConfig.InsecureSkipVerify = true
}
if foundClientCert {
// We use this function to ignore the server's preferential list of
// CAs, otherwise any CA used for the cert auth backend must be in the
// server's CA pool
clientTLSConfig.GetClientCertificate = func(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
return &clientCert, nil
}
}
if t.TLSServerName != "" {
clientTLSConfig.ServerName = t.TLSServerName
}
return nil
}
// ReadEnvironment reads configuration information from the environment. If
// there is an error, no configuration value is updated.
func (c *Config) ReadEnvironment() error {
var envAddress string
var envAgentAddress string
var envCACert string
var envCAPath string
var envClientCert string
var envClientKey string
var envClientTimeout time.Duration
var envInsecure bool
var envTLSServerName string
var envMaxRetries *uint64
var limit *rate.Limiter
// Parse the environment variables
if v := os.Getenv(EnvVaultAddress); v != "" {
envAddress = v
}
if v := os.Getenv(EnvVaultAgentAddr); v != "" {
envAgentAddress = v
} else if v := os.Getenv(EnvVaultAgentAddress); v != "" {
envAgentAddress = v
}
if v := os.Getenv(EnvVaultMaxRetries); v != "" {
maxRetries, err := strconv.ParseUint(v, 10, 32)
if err != nil {
return err
}
envMaxRetries = &maxRetries
}
if v := os.Getenv(EnvVaultCACert); v != "" {
envCACert = v
}
if v := os.Getenv(EnvVaultCAPath); v != "" {
envCAPath = v
}
if v := os.Getenv(EnvVaultClientCert); v != "" {
envClientCert = v
}
if v := os.Getenv(EnvVaultClientKey); v != "" {
envClientKey = v
}
if v := os.Getenv(EnvRateLimit); v != "" {
rateLimit, burstLimit, err := parseRateLimit(v)
if err != nil {
return err
}
limit = rate.NewLimiter(rate.Limit(rateLimit), burstLimit)
}
if t := os.Getenv(EnvVaultClientTimeout); t != "" {
clientTimeout, err := parseutil.ParseDurationSecond(t)
if err != nil {
return fmt.Errorf("could not parse %q", EnvVaultClientTimeout)
}
envClientTimeout = clientTimeout
}
if v := os.Getenv(EnvVaultSkipVerify); v != "" {
var err error
envInsecure, err = strconv.ParseBool(v)
if err != nil {
return fmt.Errorf("could not parse VAULT_SKIP_VERIFY")
}
} else if v := os.Getenv(EnvVaultInsecure); v != "" {
var err error
envInsecure, err = strconv.ParseBool(v)
if err != nil {
return fmt.Errorf("could not parse VAULT_INSECURE")
}
}
if v := os.Getenv(EnvVaultTLSServerName); v != "" {
envTLSServerName = v
}
// Configure the HTTP client's TLS configuration.
t := &TLSConfig{
CACert: envCACert,
CAPath: envCAPath,
ClientCert: envClientCert,
ClientKey: envClientKey,
TLSServerName: envTLSServerName,
Insecure: envInsecure,
}
c.modifyLock.Lock()
defer c.modifyLock.Unlock()
c.Limiter = limit
if err := c.ConfigureTLS(t); err != nil {
return err
}
if envAddress != "" {
c.Address = envAddress
}
if envAgentAddress != "" {
c.AgentAddress = envAgentAddress
}
if envMaxRetries != nil {
c.MaxRetries = int(*envMaxRetries)
}
if envClientTimeout != 0 {
c.Timeout = envClientTimeout
}
return nil
}
func parseRateLimit(val string) (rate float64, burst int, err error) {
_, err = fmt.Sscanf(val, "%f:%d", &rate, &burst)
if err != nil {
rate, err = strconv.ParseFloat(val, 64)
if err != nil {
err = fmt.Errorf("%v was provided but incorrectly formatted", EnvRateLimit)
}
burst = int(rate)
}
return rate, burst, err
}
// Client is the client to the Vault API. Create a client with NewClient.
type Client struct {
modifyLock sync.RWMutex
addr *url.URL
config *Config
token string
headers http.Header
wrappingLookupFunc WrappingLookupFunc
mfaCreds []string
policyOverride bool
}
// NewClient returns a new client for the given configuration.
//
// If the configuration is nil, Vault will use configuration from
// DefaultConfig(), which is the recommended starting configuration.
//
// If the environment variable `VAULT_TOKEN` is present, the token will be
// automatically added to the client. Otherwise, you must manually call
// `SetToken()`.
func NewClient(c *Config) (*Client, error) {
def := DefaultConfig()
if def == nil {
return nil, fmt.Errorf("could not create/read default configuration")
}
if def.Error != nil {
return nil, errwrap.Wrapf("error encountered setting up default configuration: {{err}}", def.Error)
}
if c == nil {
c = def
}
c.modifyLock.Lock()
defer c.modifyLock.Unlock()
if c.HttpClient == nil {
c.HttpClient = def.HttpClient
}
if c.HttpClient.Transport == nil {
c.HttpClient.Transport = def.HttpClient.Transport
}
address := c.Address
if c.AgentAddress != "" {
address = c.AgentAddress
}
u, err := url.Parse(address)
if err != nil {
return nil, err
}
if strings.HasPrefix(address, "unix://") {
socket := strings.TrimPrefix(address, "unix://")
transport := c.HttpClient.Transport.(*http.Transport)
transport.DialContext = func(context.Context, string, string) (net.Conn, error) {
return net.Dial("unix", socket)
}
// Since the address points to a unix domain socket, the scheme in the
// *URL would be set to `unix`. The *URL in the client is expected to
// be pointing to the protocol used in the application layer and not to
// the transport layer. Hence, setting the fields accordingly.
u.Scheme = "http"
u.Host = socket
u.Path = ""
}
client := &Client{
addr: u,
config: c,
}
if token := os.Getenv(EnvVaultToken); token != "" {
client.token = token
}
if namespace := os.Getenv(EnvVaultNamespace); namespace != "" {
client.setNamespace(namespace)
}
return client, nil
}
// Sets the address of Vault in the client. The format of address should be
// "<Scheme>://<Host>:<Port>". Setting this on a client will override the
// value of VAULT_ADDR environment variable.
func (c *Client) SetAddress(addr string) error {
c.modifyLock.Lock()
defer c.modifyLock.Unlock()
parsedAddr, err := url.Parse(addr)
if err != nil {
return errwrap.Wrapf("failed to set address: {{err}}", err)
}
c.addr = parsedAddr
return nil
}
// Address returns the Vault URL the client is configured to connect to
func (c *Client) Address() string {
c.modifyLock.RLock()
defer c.modifyLock.RUnlock()
return c.addr.String()
}
// SetLimiter will set the rate limiter for this client.
// This method is thread-safe.
// rateLimit and burst are specified according to https://godoc.org/golang.org/x/time/rate#NewLimiter
func (c *Client) SetLimiter(rateLimit float64, burst int) {
c.modifyLock.RLock()
c.config.modifyLock.Lock()
defer c.config.modifyLock.Unlock()
c.modifyLock.RUnlock()
c.config.Limiter = rate.NewLimiter(rate.Limit(rateLimit), burst)
}
// SetMaxRetries sets the number of retries that will be used in the case of certain errors
func (c *Client) SetMaxRetries(retries int) {
c.modifyLock.RLock()
c.config.modifyLock.Lock()
defer c.config.modifyLock.Unlock()
c.modifyLock.RUnlock()
c.config.MaxRetries = retries
}
// SetClientTimeout sets the client request timeout
func (c *Client) SetClientTimeout(timeout time.Duration) {
c.modifyLock.RLock()
c.config.modifyLock.Lock()
defer c.config.modifyLock.Unlock()
c.modifyLock.RUnlock()
c.config.Timeout = timeout
}
func (c *Client) OutputCurlString() bool {
c.modifyLock.RLock()
c.config.modifyLock.RLock()
defer c.config.modifyLock.RUnlock()
c.modifyLock.RUnlock()
return c.config.OutputCurlString
}
func (c *Client) SetOutputCurlString(curl bool) {
c.modifyLock.RLock()
c.config.modifyLock.Lock()
defer c.config.modifyLock.Unlock()
c.modifyLock.RUnlock()
c.config.OutputCurlString = curl
}
// CurrentWrappingLookupFunc returns the current lookup function that provides desired wrap TTLs
// for a given operation and path
func (c *Client) CurrentWrappingLookupFunc() WrappingLookupFunc {
c.modifyLock.RLock()
defer c.modifyLock.RUnlock()
return c.wrappingLookupFunc
}
// SetWrappingLookupFunc sets a lookup function that returns desired wrap TTLs
// for a given operation and path
func (c *Client) SetWrappingLookupFunc(lookupFunc WrappingLookupFunc) {
c.modifyLock.Lock()
defer c.modifyLock.Unlock()
c.wrappingLookupFunc = lookupFunc
}
// SetMFACreds sets the MFA credentials supplied either via the environment
// variable or via the command line.
func (c *Client) SetMFACreds(creds []string) {
c.modifyLock.Lock()
defer c.modifyLock.Unlock()
c.mfaCreds = creds
}
// SetNamespace sets the namespace supplied either via the environment
// variable or via the command line.
func (c *Client) SetNamespace(namespace string) {
c.modifyLock.Lock()
defer c.modifyLock.Unlock()
c.setNamespace(namespace)
}
func (c *Client) setNamespace(namespace string) {
if c.headers == nil {
c.headers = make(http.Header)
}
c.headers.Set(consts.NamespaceHeaderName, namespace)
}
// Token returns the access token being used by this client. It will
// return the empty string if there is no token set.
func (c *Client) Token() string {
c.modifyLock.RLock()
defer c.modifyLock.RUnlock()
return c.token
}
// SetToken sets the token directly. This won't perform any auth
// verification, it simply sets the token properly for future requests.
func (c *Client) SetToken(v string) {
c.modifyLock.Lock()
defer c.modifyLock.Unlock()
c.token = v
}
// ClearToken deletes the token if it is set or does nothing otherwise.
func (c *Client) ClearToken() {
c.modifyLock.Lock()
defer c.modifyLock.Unlock()
c.token = ""
}
// Headers gets the current set of headers used for requests. This returns a
// copy; to modify it make modifications locally and use SetHeaders.
func (c *Client) Headers() http.Header {
c.modifyLock.RLock()
defer c.modifyLock.RUnlock()
if c.headers == nil {
return nil
}
ret := make(http.Header)
for k, v := range c.headers {
for _, val := range v {
ret[k] = append(ret[k], val)
}
}
return ret
}
// SetHeaders sets the headers to be used for future requests.
func (c *Client) SetHeaders(headers http.Header) {
c.modifyLock.Lock()
defer c.modifyLock.Unlock()
c.headers = headers
}
// SetBackoff sets the backoff function to be used for future requests.
func (c *Client) SetBackoff(backoff retryablehttp.Backoff) {
c.modifyLock.RLock()
c.config.modifyLock.Lock()
defer c.config.modifyLock.Unlock()
c.modifyLock.RUnlock()
c.config.Backoff = backoff
}
// Clone creates a new client with the same configuration. Note that the same
// underlying http.Client is used; modifying the client from more than one
// goroutine at once may not be safe, so modify the client as needed and then
// clone.
//
// Also, only the client's config is currently copied; this means items not in
// the api.Config struct, such as policy override and wrapping function
// behavior, must currently then be set as desired on the new client.
func (c *Client) Clone() (*Client, error) {
c.modifyLock.RLock()
c.config.modifyLock.RLock()
config := c.config
c.modifyLock.RUnlock()
newConfig := &Config{
Address: config.Address,
HttpClient: config.HttpClient,
MaxRetries: config.MaxRetries,
Timeout: config.Timeout,
Backoff: config.Backoff,
Limiter: config.Limiter,
}
config.modifyLock.RUnlock()
return NewClient(newConfig)
}
// SetPolicyOverride sets whether requests should be sent with the policy
// override flag to request overriding soft-mandatory Sentinel policies (both
// RGPs and EGPs)
func (c *Client) SetPolicyOverride(override bool) {
c.modifyLock.Lock()
defer c.modifyLock.Unlock()
c.policyOverride = override
}
// NewRequest creates a new raw request object to query the Vault server
// configured for this client. This is an advanced method and generally
// doesn't need to be called externally.
func (c *Client) NewRequest(method, requestPath string) *Request {
c.modifyLock.RLock()
addr := c.addr
token := c.token
mfaCreds := c.mfaCreds
wrappingLookupFunc := c.wrappingLookupFunc
headers := c.headers
policyOverride := c.policyOverride
c.modifyLock.RUnlock()
// if SRV records exist (see https://tools.ietf.org/html/draft-andrews-http-srv-02), lookup the SRV
// record and take the highest match; this is not designed for high-availability, just discovery
var host string = addr.Host
if addr.Port() == "" {
// Internet Draft specifies that the SRV record is ignored if a port is given
_, addrs, err := net.LookupSRV("http", "tcp", addr.Hostname())
if err == nil && len(addrs) > 0 {
host = fmt.Sprintf("%s:%d", addrs[0].Target, addrs[0].Port)
}
}
req := &Request{
Method: method,
URL: &url.URL{
User: addr.User,
Scheme: addr.Scheme,
Host: host,
Path: path.Join(addr.Path, requestPath),
},
ClientToken: token,
Params: make(map[string][]string),
}
var lookupPath string
switch {
case strings.HasPrefix(requestPath, "/v1/"):
lookupPath = strings.TrimPrefix(requestPath, "/v1/")
case strings.HasPrefix(requestPath, "v1/"):
lookupPath = strings.TrimPrefix(requestPath, "v1/")
default:
lookupPath = requestPath
}
req.MFAHeaderVals = mfaCreds
if wrappingLookupFunc != nil {
req.WrapTTL = wrappingLookupFunc(method, lookupPath)
} else {
req.WrapTTL = DefaultWrappingLookupFunc(method, lookupPath)
}
if headers != nil {
req.Headers = headers
}
req.PolicyOverride = policyOverride
return req
}
// RawRequest performs the raw request given. This request may be against
// a Vault server not configured with this client. This is an advanced operation
// that generally won't need to be called externally.
func (c *Client) RawRequest(r *Request) (*Response, error) {
return c.RawRequestWithContext(context.Background(), r)
}
// RawRequestWithContext performs the raw request given. This request may be against
// a Vault server not configured with this client. This is an advanced operation
// that generally won't need to be called externally.
func (c *Client) RawRequestWithContext(ctx context.Context, r *Request) (*Response, error) {
c.modifyLock.RLock()
token := c.token
c.config.modifyLock.RLock()
limiter := c.config.Limiter
maxRetries := c.config.MaxRetries
backoff := c.config.Backoff
httpClient := c.config.HttpClient
timeout := c.config.Timeout
outputCurlString := c.config.OutputCurlString
c.config.modifyLock.RUnlock()
c.modifyLock.RUnlock()
if limiter != nil {
limiter.Wait(ctx)
}
// Sanity check the token before potentially erroring from the API
idx := strings.IndexFunc(token, func(c rune) bool {
return !unicode.IsPrint(c)
})
if idx != -1 {
return nil, fmt.Errorf("configured Vault token contains non-printable characters and cannot be used")
}
redirectCount := 0
START:
req, err := r.toRetryableHTTP()
if err != nil {
return nil, err
}
if req == nil {
return nil, fmt.Errorf("nil request created")
}
if outputCurlString {
LastOutputStringError = &OutputStringError{Request: req}
return nil, LastOutputStringError
}
if timeout != 0 {
ctx, _ = context.WithTimeout(ctx, timeout)
}
req.Request = req.Request.WithContext(ctx)
if backoff == nil {
backoff = retryablehttp.LinearJitterBackoff
}
client := &retryablehttp.Client{
HTTPClient: httpClient,
RetryWaitMin: 1000 * time.Millisecond,
RetryWaitMax: 1500 * time.Millisecond,
RetryMax: maxRetries,
CheckRetry: retryablehttp.DefaultRetryPolicy,
Backoff: backoff,
ErrorHandler: retryablehttp.PassthroughErrorHandler,
}
var result *Response
resp, err := client.Do(req)
if resp != nil {
result = &Response{Response: resp}
}
if err != nil {
if strings.Contains(err.Error(), "tls: oversized") {
err = errwrap.Wrapf(
"{{err}}\n\n"+
"This error usually means that the server is running with TLS disabled\n"+
"but the client is configured to use TLS. Please either enable TLS\n"+
"on the server or run the client with -address set to an address\n"+
"that uses the http protocol:\n\n"+
" vault <command> -address http://<address>\n\n"+
"You can also set the VAULT_ADDR environment variable:\n\n\n"+
" VAULT_ADDR=http://<address> vault <command>\n\n"+
"where <address> is replaced by the actual address to the server.",
err)
}
return result, err
}
// Check for a redirect, only allowing for a single redirect
if (resp.StatusCode == 301 || resp.StatusCode == 302 || resp.StatusCode == 307) && redirectCount == 0 {
// Parse the updated location
respLoc, err := resp.Location()
if err != nil {
return result, err
}
// Ensure a protocol downgrade doesn't happen
if req.URL.Scheme == "https" && respLoc.Scheme != "https" {
return result, fmt.Errorf("redirect would cause protocol downgrade")
}
// Update the request
r.URL = respLoc
// Reset the request body if any
if err := r.ResetJSONBody(); err != nil {
return result, err
}
// Retry the request
redirectCount++
goto START
}
if err := result.Error(); err != nil {
return result, err
}
return result, nil
}
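
The Config/Client plumbing above is normally driven by environment variables via DefaultConfig/ReadEnvironment, but every field can also be set in code. A minimal sketch, with the address, retry, timeout, token and namespace values being arbitrary placeholders:

```go
package main

import (
	"log"
	"time"

	"github.com/hashicorp/vault/api"
)

func main() {
	// Start from the defaults (which already read VAULT_ADDR, VAULT_CACERT, ...).
	cfg := api.DefaultConfig()
	if cfg.Error != nil {
		log.Fatal(cfg.Error)
	}
	// Override a few fields in code; the values here are illustrative.
	cfg.Address = "https://vault.example.com:8200"
	cfg.MaxRetries = 4
	cfg.Timeout = 30 * time.Second

	client, err := api.NewClient(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Token and namespace can also be set programmatically instead of via env vars.
	client.SetToken("s.exampletoken") // illustrative placeholder
	client.SetNamespace("admin")      // only relevant on Vault Enterprise

	log.Println("configured client for", client.Address())
}
```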

19
vendor/github.com/hashicorp/vault/api/go.mod generated vendored Normal file
View File

@ -0,0 +1,19 @@
module github.com/hashicorp/vault/api
go 1.12
replace github.com/hashicorp/vault/sdk => ../sdk
require (
github.com/hashicorp/errwrap v1.0.0
github.com/hashicorp/go-cleanhttp v0.5.1
github.com/hashicorp/go-multierror v1.0.0
github.com/hashicorp/go-retryablehttp v0.5.4
github.com/hashicorp/go-rootcerts v1.0.1
github.com/hashicorp/hcl v1.0.0
github.com/hashicorp/vault/sdk v0.1.13
github.com/mitchellh/mapstructure v1.1.2
golang.org/x/net v0.0.0-20190620200207-3b0461eec859
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4
gopkg.in/square/go-jose.v2 v2.3.1
)

118
vendor/github.com/hashicorp/vault/api/go.sum generated vendored Normal file
View File

@ -0,0 +1,118 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M=
github.com/go-ldap/ldap v3.0.2+incompatible/go.mod h1:qfd9rJvER9Q0/D/Sqn1DfHRoBp40uXYvFoEVrNEPqRc=
github.com/go-test/deep v1.0.2-0.20181118220953-042da051cf31/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/snappy v0.0.1 h1:Qgr9rKW7uDUkrbSmQeiDsGa8SjGyCOGtuasMWwvp2P4=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.1 h1:dH3aiDG9Jvb5r5+bYHsikaOUIpcM0xvgMXVoDkXMzJM=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-hclog v0.0.0-20180709165350-ff2cf002a8dd/go.mod h1:9bjs9uLqI8l75knNv3lV1kA55veR+WUPSiKIWcQHudI=
github.com/hashicorp/go-hclog v0.8.0/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrjA0H7acj2lQ=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-multierror v1.0.0 h1:iVjPR7a6H0tWELX5NxNe7bYopibicUzc7uPribsnS6o=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-plugin v1.0.1/go.mod h1:++UyYGoz3o5w9ZzAdZxtQKrWWP+iqPBn3cQptSMzBuY=
github.com/hashicorp/go-retryablehttp v0.5.4 h1:1BZvpawXoJCWX6pNtow9+rpEj+3itIlutiqnntI6jOE=
github.com/hashicorp/go-retryablehttp v0.5.4/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=
github.com/hashicorp/go-rootcerts v1.0.1 h1:DMo4fmknnz0E0evoNYnV48RjWndOsmd6OW+09R3cEP8=
github.com/hashicorp/go-rootcerts v1.0.1/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
github.com/hashicorp/go-sockaddr v1.0.2 h1:ztczhD1jLxIRjVejw8gFomI1BQZOe2WoVOu0SyteCQc=
github.com/hashicorp/go-sockaddr v1.0.2/go.mod h1:rB4wwRAUzs07qva3c5SdrY/NEtAUjGlgmH/UkBUC97A=
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-version v1.1.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hashicorp/yamux v0.0.0-20180604194846-3520598351bb/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM=
github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/copystructure v1.0.0/go.mod h1:SNtv71yrdKgLRyLFxmLdkAbkKEFWgYaq1OVrnRcwhnw=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-testing-interface v0.0.0-20171004221916-a61a99592b77/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA=
github.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pierrec/lz4 v2.0.5+incompatible h1:2xWsjqPFWcplujydGg4WmhC/6fZqK42wMM8aXeqhl0I=
github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/ryanuber/go-glob v1.0.0 h1:iQh3xXAumdQ+4Ufa5b25cRpC5TYKlno6hsv6Cb3pkBk=
github.com/ryanuber/go-glob v1.0.0/go.mod h1:807d1WSdnB0XRJzKNil9Om6lcp/3a0v4qIHxIXzX/Yc=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2 h1:VklqNMn3ovrHsnt90PveolxSbWFaJdECFbxSq0Mqo2M=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859 h1:R/3boaszxrf1GEUWTVDzSKVwLmSJpwZ1yqXm8j0v2QI=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190129075346-302c3dd5f1cc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190403152447-81d4e9dc473e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20181227161524-e6919f6577db h1:6/JqlYfC1CCaLnGceQTI+sDGhC9UBSPAsBqI0Gun6kU=
golang.org/x/text v0.3.1-0.20181227161524-e6919f6577db/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190404172233-64821d5d2107/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.22.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
gopkg.in/asn1-ber.v1 v1.0.0-20181015200546-f715ec2f112d/go.mod h1:cuepJuh7vyXfUyUwEgHQXw849cJrilpS5NeIjOWESAw=
gopkg.in/square/go-jose.v2 v2.3.1 h1:SK5KegNXmKmqE342YYN2qPHEnUYeoMiXXl1poUlI+o4=
gopkg.in/square/go-jose.v2 v2.3.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=

30
vendor/github.com/hashicorp/vault/api/help.go generated vendored Normal file
View File

@ -0,0 +1,30 @@
package api
import (
"context"
"fmt"
)
// Help reads the help information for the given path.
func (c *Client) Help(path string) (*Help, error) {
r := c.NewRequest("GET", fmt.Sprintf("/v1/%s", path))
r.Params.Add("help", "1")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result Help
err = resp.DecodeJSON(&result)
return &result, err
}
type Help struct {
Help string `json:"help"`
SeeAlso []string `json:"see_also"`
OpenAPI map[string]interface{} `json:"openapi"`
}
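
Help simply issues a GET with `help=1` against any API path and returns that path's self-description. A small sketch, assuming a reachable, authenticated client; the path `sys/mounts` is just an example:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/vault/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Ask Vault to describe the sys/mounts endpoint.
	help, err := client.Help("sys/mounts")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(help.Help)
	fmt.Println("see also:", help.SeeAlso)
}
```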

285
vendor/github.com/hashicorp/vault/api/logical.go generated vendored Normal file
View File

@ -0,0 +1,285 @@
package api
import (
"bytes"
"context"
"fmt"
"io"
"net/url"
"os"
"github.com/hashicorp/errwrap"
"github.com/hashicorp/vault/sdk/helper/jsonutil"
)
const (
wrappedResponseLocation = "cubbyhole/response"
)
var (
// The default TTL that will be used with `sys/wrapping/wrap`, can be
// changed
DefaultWrappingTTL = "5m"
// The default function used if no other function is set, which honors the
// env var and wraps `sys/wrapping/wrap`
DefaultWrappingLookupFunc = func(operation, path string) string {
if os.Getenv(EnvVaultWrapTTL) != "" {
return os.Getenv(EnvVaultWrapTTL)
}
if (operation == "PUT" || operation == "POST") && path == "sys/wrapping/wrap" {
return DefaultWrappingTTL
}
return ""
}
)
// Logical is used to perform logical backend operations on Vault.
type Logical struct {
c *Client
}
// Logical is used to return the client for logical-backend API calls.
func (c *Client) Logical() *Logical {
return &Logical{c: c}
}
func (c *Logical) Read(path string) (*Secret, error) {
return c.ReadWithData(path, nil)
}
func (c *Logical) ReadWithData(path string, data map[string][]string) (*Secret, error) {
r := c.c.NewRequest("GET", "/v1/"+path)
var values url.Values
for k, v := range data {
if values == nil {
values = make(url.Values)
}
for _, val := range v {
values.Add(k, val)
}
}
if values != nil {
r.Params = values
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if resp != nil {
defer resp.Body.Close()
}
if resp != nil && resp.StatusCode == 404 {
secret, parseErr := ParseSecret(resp.Body)
switch parseErr {
case nil:
case io.EOF:
return nil, nil
default:
return nil, err
}
if secret != nil && (len(secret.Warnings) > 0 || len(secret.Data) > 0) {
return secret, nil
}
return nil, nil
}
if err != nil {
return nil, err
}
return ParseSecret(resp.Body)
}
func (c *Logical) List(path string) (*Secret, error) {
r := c.c.NewRequest("LIST", "/v1/"+path)
// Set this for broader compatibility, but we use LIST above to be able to
// handle the wrapping lookup function
r.Method = "GET"
r.Params.Set("list", "true")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if resp != nil {
defer resp.Body.Close()
}
if resp != nil && resp.StatusCode == 404 {
secret, parseErr := ParseSecret(resp.Body)
switch parseErr {
case nil:
case io.EOF:
return nil, nil
default:
return nil, err
}
if secret != nil && (len(secret.Warnings) > 0 || len(secret.Data) > 0) {
return secret, nil
}
return nil, nil
}
if err != nil {
return nil, err
}
return ParseSecret(resp.Body)
}
func (c *Logical) Write(path string, data map[string]interface{}) (*Secret, error) {
r := c.c.NewRequest("PUT", "/v1/"+path)
if err := r.SetJSONBody(data); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if resp != nil {
defer resp.Body.Close()
}
if resp != nil && resp.StatusCode == 404 {
secret, parseErr := ParseSecret(resp.Body)
switch parseErr {
case nil:
case io.EOF:
return nil, nil
default:
return nil, err
}
if secret != nil && (len(secret.Warnings) > 0 || len(secret.Data) > 0) {
return secret, err
}
}
if err != nil {
return nil, err
}
return ParseSecret(resp.Body)
}
func (c *Logical) Delete(path string) (*Secret, error) {
return c.DeleteWithData(path, nil)
}
func (c *Logical) DeleteWithData(path string, data map[string][]string) (*Secret, error) {
r := c.c.NewRequest("DELETE", "/v1/"+path)
var values url.Values
for k, v := range data {
if values == nil {
values = make(url.Values)
}
for _, val := range v {
values.Add(k, val)
}
}
if values != nil {
r.Params = values
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if resp != nil {
defer resp.Body.Close()
}
if resp != nil && resp.StatusCode == 404 {
secret, parseErr := ParseSecret(resp.Body)
switch parseErr {
case nil:
case io.EOF:
return nil, nil
default:
return nil, err
}
if secret != nil && (len(secret.Warnings) > 0 || len(secret.Data) > 0) {
return secret, err
}
}
if err != nil {
return nil, err
}
return ParseSecret(resp.Body)
}
func (c *Logical) Unwrap(wrappingToken string) (*Secret, error) {
var data map[string]interface{}
if wrappingToken != "" {
if c.c.Token() == "" {
c.c.SetToken(wrappingToken)
} else if wrappingToken != c.c.Token() {
data = map[string]interface{}{
"token": wrappingToken,
}
}
}
r := c.c.NewRequest("PUT", "/v1/sys/wrapping/unwrap")
if err := r.SetJSONBody(data); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if resp != nil {
defer resp.Body.Close()
}
if resp == nil || resp.StatusCode != 404 {
if err != nil {
return nil, err
}
if resp == nil {
return nil, nil
}
return ParseSecret(resp.Body)
}
// In the 404 case this may actually be a wrapped 404 error
secret, parseErr := ParseSecret(resp.Body)
switch parseErr {
case nil:
case io.EOF:
return nil, nil
default:
return nil, err
}
if secret != nil && (len(secret.Warnings) > 0 || len(secret.Data) > 0) {
return secret, nil
}
// Otherwise this might be an old-style wrapping token so attempt the old
// method
if wrappingToken != "" {
origToken := c.c.Token()
defer c.c.SetToken(origToken)
c.c.SetToken(wrappingToken)
}
secret, err = c.Read(wrappedResponseLocation)
if err != nil {
return nil, errwrap.Wrapf(fmt.Sprintf("error reading %q: {{err}}", wrappedResponseLocation), err)
}
if secret == nil {
return nil, fmt.Errorf("no value found at %q", wrappedResponseLocation)
}
if secret.Data == nil {
return nil, fmt.Errorf("\"data\" not found in wrapping response")
}
if _, ok := secret.Data["response"]; !ok {
return nil, fmt.Errorf("\"response\" not found in wrapping response \"data\" map")
}
wrappedSecret := new(Secret)
buf := bytes.NewBufferString(secret.Data["response"].(string))
if err := jsonutil.DecodeJSONFromReader(buf, wrappedSecret); err != nil {
return nil, errwrap.Wrapf("error unmarshalling wrapped secret: {{err}}", err)
}
return wrappedSecret, nil
}
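
The logical backend API above (Read/Write/List/Delete) is the surface that a KV-backed store such as the kops Vault VFS ultimately drives. A hedged sketch of plain usage against a KV version 2 mount follows; the mount name `kv` and the `clusters/example/secrets` path merely mirror the documented layout and are not required by this package:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/vault/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	logical := client.Logical()

	// On a KV version 2 mount (assumed mounted at "kv"), payloads are nested
	// under "data" and the API path gains a "data/" segment.
	_, err = logical.Write("kv/data/clusters/example/secrets/admin", map[string]interface{}{
		"data": map[string]interface{}{"value": "hunter2"},
	})
	if err != nil {
		log.Fatal(err)
	}

	secret, err := logical.Read("kv/data/clusters/example/secrets/admin")
	if err != nil || secret == nil {
		log.Fatal("read failed: ", err)
	}
	inner := secret.Data["data"].(map[string]interface{})
	fmt.Println("value:", inner["value"])

	// Listing uses the "metadata/" segment on KV v2.
	keys, err := logical.List("kv/metadata/clusters/example/secrets")
	if err != nil {
		log.Fatal(err)
	}
	if keys != nil {
		fmt.Println("keys:", keys.Data["keys"])
	}
}
```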

71
vendor/github.com/hashicorp/vault/api/output_string.go generated vendored Normal file
View File

@ -0,0 +1,71 @@
package api
import (
"fmt"
"strings"
retryablehttp "github.com/hashicorp/go-retryablehttp"
)
const (
ErrOutputStringRequest = "output a string, please"
)
var (
LastOutputStringError *OutputStringError
)
type OutputStringError struct {
*retryablehttp.Request
parsingError error
parsedCurlString string
}
func (d *OutputStringError) Error() string {
if d.parsedCurlString == "" {
d.parseRequest()
if d.parsingError != nil {
return d.parsingError.Error()
}
}
return ErrOutputStringRequest
}
func (d *OutputStringError) parseRequest() {
body, err := d.Request.BodyBytes()
if err != nil {
d.parsingError = err
return
}
// Build cURL string
d.parsedCurlString = "curl "
if d.Request.Method != "GET" {
d.parsedCurlString = fmt.Sprintf("%s-X %s ", d.parsedCurlString, d.Request.Method)
}
for k, v := range d.Request.Header {
for _, h := range v {
if strings.ToLower(k) == "x-vault-token" {
h = `$(vault print token)`
}
d.parsedCurlString = fmt.Sprintf("%s-H \"%s: %s\" ", d.parsedCurlString, k, h)
}
}
if len(body) > 0 {
// We need to escape single quotes since that's what we're using to
// quote the body
escapedBody := strings.Replace(string(body), "'", "'\"'\"'", -1)
d.parsedCurlString = fmt.Sprintf("%s-d '%s' ", d.parsedCurlString, escapedBody)
}
d.parsedCurlString = fmt.Sprintf("%s%s", d.parsedCurlString, d.Request.URL.String())
}
func (d *OutputStringError) CurlString() string {
if d.parsedCurlString == "" {
d.parseRequest()
}
return d.parsedCurlString
}
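
To show how this error type is meant to be consumed: with OutputCurlString set on the client config, a request short-circuits before it is sent and the returned error can be type-asserted to recover a copy-pasteable curl command. A minimal sketch (no Vault server is actually contacted, so the address can be anything):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/vault/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Ask the client to short-circuit requests and hand back a curl string instead.
	client.SetOutputCurlString(true)

	_, err = client.Logical().Read("sys/health")
	if curlErr, ok := err.(*api.OutputStringError); ok {
		// The request was never sent; print the equivalent curl invocation.
		fmt.Println(curlErr.CurlString())
		return
	}
	log.Fatal("expected an *api.OutputStringError, got: ", err)
}
```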

186
vendor/github.com/hashicorp/vault/api/plugin_helpers.go generated vendored Normal file
View File

@ -0,0 +1,186 @@
package api
import (
"crypto/tls"
"crypto/x509"
"encoding/base64"
"errors"
"flag"
"net/url"
"os"
squarejwt "gopkg.in/square/go-jose.v2/jwt"
"github.com/hashicorp/errwrap"
)
var (
// PluginMetadataModeEnv is an ENV name used to disable TLS communication
// to bootstrap mounting plugins.
PluginMetadataModeEnv = "VAULT_PLUGIN_METADATA_MODE"
// PluginUnwrapTokenEnv is the ENV name used to pass unwrap tokens to the
// plugin.
PluginUnwrapTokenEnv = "VAULT_UNWRAP_TOKEN"
)
// PluginAPIClientMeta is a helper that plugins can use to configure TLS connections
// back to Vault.
type PluginAPIClientMeta struct {
// These are set by the command line flags.
flagCACert string
flagCAPath string
flagClientCert string
flagClientKey string
flagInsecure bool
}
// FlagSet returns the flag set for configuring the TLS connection
func (f *PluginAPIClientMeta) FlagSet() *flag.FlagSet {
fs := flag.NewFlagSet("vault plugin settings", flag.ContinueOnError)
fs.StringVar(&f.flagCACert, "ca-cert", "", "")
fs.StringVar(&f.flagCAPath, "ca-path", "", "")
fs.StringVar(&f.flagClientCert, "client-cert", "", "")
fs.StringVar(&f.flagClientKey, "client-key", "", "")
fs.BoolVar(&f.flagInsecure, "tls-skip-verify", false, "")
return fs
}
// GetTLSConfig will return a TLSConfig based off the values from the flags
func (f *PluginAPIClientMeta) GetTLSConfig() *TLSConfig {
// If we need custom TLS configuration, then set it
if f.flagCACert != "" || f.flagCAPath != "" || f.flagClientCert != "" || f.flagClientKey != "" || f.flagInsecure {
t := &TLSConfig{
CACert: f.flagCACert,
CAPath: f.flagCAPath,
ClientCert: f.flagClientCert,
ClientKey: f.flagClientKey,
TLSServerName: "",
Insecure: f.flagInsecure,
}
return t
}
return nil
}
// VaultPluginTLSProvider is run inside a plugin and retrieves the response
// wrapped TLS certificate from vault. It returns a configured TLS Config.
func VaultPluginTLSProvider(apiTLSConfig *TLSConfig) func() (*tls.Config, error) {
if os.Getenv(PluginMetadataModeEnv) == "true" {
return nil
}
return func() (*tls.Config, error) {
unwrapToken := os.Getenv(PluginUnwrapTokenEnv)
parsedJWT, err := squarejwt.ParseSigned(unwrapToken)
if err != nil {
return nil, errwrap.Wrapf("error parsing wrapping token: {{err}}", err)
}
var allClaims = make(map[string]interface{})
if err = parsedJWT.UnsafeClaimsWithoutVerification(&allClaims); err != nil {
return nil, errwrap.Wrapf("error parsing claims from wrapping token: {{err}}", err)
}
addrClaimRaw, ok := allClaims["addr"]
if !ok {
return nil, errors.New("could not validate addr claim")
}
vaultAddr, ok := addrClaimRaw.(string)
if !ok {
return nil, errors.New("could not parse addr claim")
}
if vaultAddr == "" {
return nil, errors.New(`no vault api_addr found`)
}
// Sanity check the value
if _, err := url.Parse(vaultAddr); err != nil {
return nil, errwrap.Wrapf("error parsing the vault api_addr: {{err}}", err)
}
// Unwrap the token
clientConf := DefaultConfig()
clientConf.Address = vaultAddr
if apiTLSConfig != nil {
err := clientConf.ConfigureTLS(apiTLSConfig)
if err != nil {
return nil, errwrap.Wrapf("error configuring api client {{err}}", err)
}
}
client, err := NewClient(clientConf)
if err != nil {
return nil, errwrap.Wrapf("error during api client creation: {{err}}", err)
}
secret, err := client.Logical().Unwrap(unwrapToken)
if err != nil {
return nil, errwrap.Wrapf("error during token unwrap request: {{err}}", err)
}
if secret == nil {
return nil, errors.New("error during token unwrap request: secret is nil")
}
// Retrieve and parse the server's certificate
serverCertBytesRaw, ok := secret.Data["ServerCert"].(string)
if !ok {
return nil, errors.New("error unmarshalling certificate")
}
serverCertBytes, err := base64.StdEncoding.DecodeString(serverCertBytesRaw)
if err != nil {
return nil, errwrap.Wrapf("error parsing certificate: {{err}}", err)
}
serverCert, err := x509.ParseCertificate(serverCertBytes)
if err != nil {
return nil, errwrap.Wrapf("error parsing certificate: {{err}}", err)
}
// Retrieve and parse the server's private key
serverKeyB64, ok := secret.Data["ServerKey"].(string)
if !ok {
return nil, errors.New("error unmarshalling certificate")
}
serverKeyRaw, err := base64.StdEncoding.DecodeString(serverKeyB64)
if err != nil {
return nil, errwrap.Wrapf("error parsing certificate: {{err}}", err)
}
serverKey, err := x509.ParseECPrivateKey(serverKeyRaw)
if err != nil {
return nil, errwrap.Wrapf("error parsing certificate: {{err}}", err)
}
// Add CA cert to the cert pool
caCertPool := x509.NewCertPool()
caCertPool.AddCert(serverCert)
// Build a certificate object out of the server's cert and private key.
cert := tls.Certificate{
Certificate: [][]byte{serverCertBytes},
PrivateKey: serverKey,
Leaf: serverCert,
}
// Setup TLS config
tlsConfig := &tls.Config{
ClientCAs: caCertPool,
RootCAs: caCertPool,
ClientAuth: tls.RequireAndVerifyClientCert,
// TLS 1.2 minimum
MinVersion: tls.VersionTLS12,
Certificates: []tls.Certificate{cert},
ServerName: serverCert.Subject.CommonName,
}
tlsConfig.BuildNameToCertificate()
return tlsConfig, nil
}
}
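
A hedged sketch of how a plugin process would typically wire these helpers together; the flag parsing and TLS provider calls below use only symbols from this file, while the final hand-off to the plugin-serving code in the Vault SDK is deliberately left as a comment since it lives outside this package:

```go
package main

import (
	"log"
	"os"

	"github.com/hashicorp/vault/api"
)

func main() {
	// Parse the standard TLS flags that Vault passes to plugin processes.
	apiClientMeta := &api.PluginAPIClientMeta{}
	flags := apiClientMeta.FlagSet()
	if err := flags.Parse(os.Args[1:]); err != nil {
		log.Fatal(err)
	}

	// Nil unless custom TLS flags were supplied on the command line.
	tlsConfig := apiClientMeta.GetTLSConfig()

	// Nil in metadata mode; otherwise a function that unwraps the response-wrapped
	// certificate (via VAULT_UNWRAP_TOKEN) and returns a ready *tls.Config.
	tlsProviderFunc := api.VaultPluginTLSProvider(tlsConfig)
	if tlsProviderFunc != nil {
		if _, err := tlsProviderFunc(); err != nil {
			log.Fatal(err)
		}
	}

	// A real plugin would now pass tlsProviderFunc to the plugin-serving helpers
	// in the Vault SDK (outside this vendored package).
	log.Println("plugin TLS bootstrap complete")
}
```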

349
vendor/github.com/hashicorp/vault/api/renewer.go generated vendored Normal file
View File

@ -0,0 +1,349 @@
package api
import (
"errors"
"math/rand"
"sync"
"time"
)
var (
ErrRenewerMissingInput = errors.New("missing input to renewer")
ErrRenewerMissingSecret = errors.New("missing secret to renew")
ErrRenewerNotRenewable = errors.New("secret is not renewable")
ErrRenewerNoSecretData = errors.New("returned empty secret data")
// DefaultRenewerRenewBuffer is the default size of the buffer for renew
// messages on the channel.
DefaultRenewerRenewBuffer = 5
)
// Renewer is a process for renewing a secret.
//
// renewer, err := client.NewRenewer(&RenewerInput{
// Secret: mySecret,
// })
// go renewer.Renew()
// defer renewer.Stop()
//
// for {
// select {
// case err := <-renewer.DoneCh():
// if err != nil {
// log.Fatal(err)
// }
//
// // Renewal is now over
// case renewal := <-renewer.RenewCh():
// log.Printf("Successfully renewed: %#v", renewal)
// }
// }
//
//
// The `DoneCh` will return if renewal fails or if the remaining lease duration
// after a renewal is less than or equal to the grace (in number of seconds). In
// both cases, the caller should attempt a re-read of the secret. Clients should
// check the return value of the channel to see if renewal was successful.
type Renewer struct {
l sync.Mutex
client *Client
secret *Secret
grace time.Duration
random *rand.Rand
increment int
doneCh chan error
renewCh chan *RenewOutput
stopped bool
stopCh chan struct{}
}
// RenewerInput is used as input to the renew function.
type RenewerInput struct {
// Secret is the secret to renew
Secret *Secret
// DEPRECATED: this does not do anything.
Grace time.Duration
// Rand is the randomizer to use for underlying randomization. If not
// provided, one will be generated and seeded automatically. If provided, it
// is assumed to have already been seeded.
Rand *rand.Rand
// RenewBuffer is the size of the buffered channel where renew messages are
// dispatched.
RenewBuffer int
// The new TTL, in seconds, that should be set on the lease. The TTL set
// here may or may not be honored by the vault server, based on Vault
// configuration or any associated max TTL values.
Increment int
}
// RenewOutput is the metadata returned to the client (if it's listening) to
// renew messages.
type RenewOutput struct {
// RenewedAt is the timestamp when the renewal took place (UTC).
RenewedAt time.Time
// Secret is the underlying renewal data. It's the same struct as all data
// that is returned from Vault, but since this is renewal data, it will not
// usually include the secret itself.
Secret *Secret
}
// NewRenewer creates a new renewer from the given input.
func (c *Client) NewRenewer(i *RenewerInput) (*Renewer, error) {
if i == nil {
return nil, ErrRenewerMissingInput
}
secret := i.Secret
if secret == nil {
return nil, ErrRenewerMissingSecret
}
random := i.Rand
if random == nil {
random = rand.New(rand.NewSource(int64(time.Now().Nanosecond())))
}
renewBuffer := i.RenewBuffer
if renewBuffer == 0 {
renewBuffer = DefaultRenewerRenewBuffer
}
return &Renewer{
client: c,
secret: secret,
increment: i.Increment,
random: random,
doneCh: make(chan error, 1),
renewCh: make(chan *RenewOutput, renewBuffer),
stopped: false,
stopCh: make(chan struct{}),
}, nil
}
// DoneCh returns the channel where the renewer will publish when renewal stops.
// If there is an error, this will be an error.
func (r *Renewer) DoneCh() <-chan error {
return r.doneCh
}
// RenewCh is a channel that receives a message when a successful renewal takes
// place and includes metadata about the renewal.
func (r *Renewer) RenewCh() <-chan *RenewOutput {
return r.renewCh
}
// Stop stops the renewer.
func (r *Renewer) Stop() {
r.l.Lock()
if !r.stopped {
close(r.stopCh)
r.stopped = true
}
r.l.Unlock()
}
// Renew starts a background process for renewing this secret. When the secret
// has auth data, this attempts to renew the auth (token). When the secret has
// a lease, this attempts to renew the lease.
func (r *Renewer) Renew() {
var result error
if r.secret.Auth != nil {
result = r.renewAuth()
} else {
result = r.renewLease()
}
r.doneCh <- result
}
// renewAuth is a helper for renewing authentication.
func (r *Renewer) renewAuth() error {
if !r.secret.Auth.Renewable || r.secret.Auth.ClientToken == "" {
return ErrRenewerNotRenewable
}
priorDuration := time.Duration(r.secret.Auth.LeaseDuration) * time.Second
r.calculateGrace(priorDuration)
client, token := r.client, r.secret.Auth.ClientToken
for {
// Check if we are stopped.
select {
case <-r.stopCh:
return nil
default:
}
// Renew the auth.
renewal, err := client.Auth().Token().RenewTokenAsSelf(token, r.increment)
if err != nil {
return err
}
// Push a message that a renewal took place.
select {
case r.renewCh <- &RenewOutput{time.Now().UTC(), renewal}:
default:
}
// Somehow, sometimes, this happens.
if renewal == nil || renewal.Auth == nil {
return ErrRenewerNoSecretData
}
// Do nothing if we are not renewable
if !renewal.Auth.Renewable {
return ErrRenewerNotRenewable
}
// Grab the lease duration
leaseDuration := time.Duration(renewal.Auth.LeaseDuration) * time.Second
// We keep evaluating a new grace period so long as the lease is
// extending. Once it stops extending, we've hit the max and need to
// rely on the grace duration.
if leaseDuration > priorDuration {
r.calculateGrace(leaseDuration)
}
priorDuration = leaseDuration
// The sleep duration is set to 2/3 of the current lease duration plus
// 1/3 of the current grace period, which adds jitter.
sleepDuration := time.Duration(float64(leaseDuration.Nanoseconds())*2/3 + float64(r.grace.Nanoseconds())/3)
// If we are within grace, return now; or, if the amount of time we
// would sleep would land us in the grace period. This helps with short
// tokens; for example, you don't want a current lease duration of 4
// seconds, a grace period of 3 seconds, and end up sleeping for more
// than three of those seconds and having a very small budget of time
// to renew.
if leaseDuration <= r.grace || leaseDuration-sleepDuration <= r.grace {
return nil
}
select {
case <-r.stopCh:
return nil
case <-time.After(sleepDuration):
continue
}
}
}
// renewLease is a helper for renewing a lease.
func (r *Renewer) renewLease() error {
if !r.secret.Renewable || r.secret.LeaseID == "" {
return ErrRenewerNotRenewable
}
priorDuration := time.Duration(r.secret.LeaseDuration) * time.Second
r.calculateGrace(priorDuration)
client, leaseID := r.client, r.secret.LeaseID
for {
// Check if we are stopped.
select {
case <-r.stopCh:
return nil
default:
}
// Renew the lease.
renewal, err := client.Sys().Renew(leaseID, r.increment)
if err != nil {
return err
}
// Push a message that a renewal took place.
select {
case r.renewCh <- &RenewOutput{time.Now().UTC(), renewal}:
default:
}
// Somehow, sometimes, this happens.
if renewal == nil {
return ErrRenewerNoSecretData
}
// Do nothing if we are not renewable
if !renewal.Renewable {
return ErrRenewerNotRenewable
}
// Grab the lease duration
leaseDuration := time.Duration(renewal.LeaseDuration) * time.Second
// We keep evaluating a new grace period so long as the lease is
// extending. Once it stops extending, we've hit the max and need to
// rely on the grace duration.
if leaseDuration > priorDuration {
r.calculateGrace(leaseDuration)
}
priorDuration = leaseDuration
// The sleep duration is set to 2/3 of the current lease duration plus
// 1/3 of the current grace period, which adds jitter.
sleepDuration := time.Duration(float64(leaseDuration.Nanoseconds())*2/3 + float64(r.grace.Nanoseconds())/3)
// If we are within grace, return now; or, if the amount of time we
// would sleep would land us in the grace period. This helps with short
// tokens; for example, you don't want a current lease duration of 4
// seconds, a grace period of 3 seconds, and end up sleeping for more
// than three of those seconds and having a very small budget of time
// to renew.
if leaseDuration <= r.grace || leaseDuration-sleepDuration <= r.grace {
return nil
}
select {
case <-r.stopCh:
return nil
case <-time.After(sleepDuration):
continue
}
}
}
// sleepDuration calculates the time to sleep given the base lease duration. The
// base is the resulting lease duration. It will be reduced to 1/3 and
// multiplied by a random float between 0.0 and 1.0. This extra randomness
// prevents multiple clients from all trying to renew simultaneously.
func (r *Renewer) sleepDuration(base time.Duration) time.Duration {
sleep := float64(base)
// Renew at 1/3 the remaining lease. This will give us an opportunity to retry
// at least one more time should the first renewal fail.
sleep = sleep / 3.0
// Use a randomness so many clients do not hit Vault simultaneously.
sleep = sleep * (r.random.Float64() + 1) / 2.0
return time.Duration(sleep)
}
// calculateGrace calculates the grace period based on a reasonable set of
// assumptions given the total lease time; it also adds some jitter to not have
// clients be in sync.
func (r *Renewer) calculateGrace(leaseDuration time.Duration) {
if leaseDuration == 0 {
r.grace = 0
return
}
leaseNanos := float64(leaseDuration.Nanoseconds())
jitterMax := 0.1 * leaseNanos
// For a given lease duration, we want to allow 80-90% of that to elapse,
// so the remaining amount is the grace period
r.grace = time.Duration(jitterMax) + time.Duration(uint64(r.random.Int63())%uint64(jitterMax))
}
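
Not part of this diff: a minimal sketch of driving the Renewer above from application code. The address comes from the environment via DefaultConfig; the token and secret path below are placeholders.

```go
package main

import (
	"log"

	"github.com/hashicorp/vault/api"
)

func main() {
	// DefaultConfig reads VAULT_ADDR from the environment; the token is a placeholder.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	client.SetToken("s.example") // hypothetical token

	// Read a leased dynamic secret (path is illustrative only).
	secret, err := client.Logical().Read("database/creds/app")
	if err != nil {
		log.Fatal(err)
	}

	renewer, err := client.NewRenewer(&api.RenewerInput{Secret: secret})
	if err != nil {
		log.Fatal(err)
	}
	go renewer.Renew()
	defer renewer.Stop()

	for {
		select {
		case err := <-renewer.DoneCh():
			// Renewal stopped: either an error occurred or the lease can no
			// longer be extended; re-read the secret at this point.
			if err != nil {
				log.Fatal(err)
			}
			return
		case renewal := <-renewer.RenewCh():
			log.Printf("renewed at %s", renewal.RenewedAt)
		}
	}
}
```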

147
vendor/github.com/hashicorp/vault/api/request.go generated vendored Normal file
View File

@ -0,0 +1,147 @@
package api
import (
"bytes"
"encoding/json"
"io"
"io/ioutil"
"net/http"
"net/url"
"github.com/hashicorp/vault/sdk/helper/consts"
retryablehttp "github.com/hashicorp/go-retryablehttp"
)
// Request is a raw request configuration structure used to initiate
// API requests to the Vault server.
type Request struct {
Method string
URL *url.URL
Params url.Values
Headers http.Header
ClientToken string
MFAHeaderVals []string
WrapTTL string
Obj interface{}
// When possible, use BodyBytes as it is more efficient due to how the
// retry logic works
BodyBytes []byte
// Fallback
Body io.Reader
BodySize int64
// Whether to request overriding soft-mandatory Sentinel policies (RGPs and
// EGPs). If set, the override flag will take effect for all policies
// evaluated during the request.
PolicyOverride bool
}
// SetJSONBody is used to set a request body that is a JSON-encoded value.
func (r *Request) SetJSONBody(val interface{}) error {
buf, err := json.Marshal(val)
if err != nil {
return err
}
r.Obj = val
r.BodyBytes = buf
return nil
}
// ResetJSONBody is used to reset the body for a redirect
func (r *Request) ResetJSONBody() error {
if r.BodyBytes == nil {
return nil
}
return r.SetJSONBody(r.Obj)
}
// DEPRECATED: ToHTTP turns this request into a valid *http.Request for use
// with the net/http package.
func (r *Request) ToHTTP() (*http.Request, error) {
req, err := r.toRetryableHTTP()
if err != nil {
return nil, err
}
switch {
case r.BodyBytes == nil && r.Body == nil:
// No body
case r.BodyBytes != nil:
req.Request.Body = ioutil.NopCloser(bytes.NewReader(r.BodyBytes))
default:
if c, ok := r.Body.(io.ReadCloser); ok {
req.Request.Body = c
} else {
req.Request.Body = ioutil.NopCloser(r.Body)
}
}
return req.Request, nil
}
func (r *Request) toRetryableHTTP() (*retryablehttp.Request, error) {
// Encode the query parameters
r.URL.RawQuery = r.Params.Encode()
// Create the HTTP request, defaulting to retryable
var req *retryablehttp.Request
var err error
var body interface{}
switch {
case r.BodyBytes == nil && r.Body == nil:
// No body
case r.BodyBytes != nil:
// Use bytes, it's more efficient
body = r.BodyBytes
default:
body = r.Body
}
req, err = retryablehttp.NewRequest(r.Method, r.URL.RequestURI(), body)
if err != nil {
return nil, err
}
req.URL.User = r.URL.User
req.URL.Scheme = r.URL.Scheme
req.URL.Host = r.URL.Host
req.Host = r.URL.Host
if r.Headers != nil {
for header, vals := range r.Headers {
for _, val := range vals {
req.Header.Add(header, val)
}
}
}
if len(r.ClientToken) != 0 {
req.Header.Set(consts.AuthHeaderName, r.ClientToken)
}
if len(r.WrapTTL) != 0 {
req.Header.Set("X-Vault-Wrap-TTL", r.WrapTTL)
}
if len(r.MFAHeaderVals) != 0 {
for _, mfaHeaderVal := range r.MFAHeaderVals {
req.Header.Add("X-Vault-MFA", mfaHeaderVal)
}
}
if r.PolicyOverride {
req.Header.Set("X-Vault-Policy-Override", "true")
}
return req, nil
}
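
Not part of this diff: a sketch of building and executing a raw Request, assuming a configured *api.Client. The KV v2 data path is a placeholder.

```go
package example

import (
	"context"

	"github.com/hashicorp/vault/api"
)

// writeRaw issues a raw write using Request.SetJSONBody and the client's
// RawRequestWithContext. "secret/data/example" is an illustrative KV v2 path.
func writeRaw(client *api.Client) error {
	r := client.NewRequest("PUT", "/v1/secret/data/example")
	if err := r.SetJSONBody(map[string]interface{}{
		"data": map[string]interface{}{"foo": "bar"},
	}); err != nil {
		return err
	}

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	resp, err := client.RawRequestWithContext(ctx, r)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// Parse whatever secret payload the server returned, if any.
	_, err = api.ParseSecret(resp.Body)
	return err
}
```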

120
vendor/github.com/hashicorp/vault/api/response.go generated vendored Normal file
View File

@ -0,0 +1,120 @@
package api
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"net/http"
"github.com/hashicorp/vault/sdk/helper/jsonutil"
)
// Response is a raw response that wraps an HTTP response.
type Response struct {
*http.Response
}
// DecodeJSON will decode the response body to a JSON structure. This
// will consume the response body, but will not close it. Close must
// still be called.
func (r *Response) DecodeJSON(out interface{}) error {
return jsonutil.DecodeJSONFromReader(r.Body, out)
}
// Error returns an error response if there is one. If there is an error,
// this will fully consume the response body, but will not close it. The
// body must still be closed manually.
func (r *Response) Error() error {
// 200 to 399 are okay status codes. 429 is the code for health status of
// standby nodes.
if (r.StatusCode >= 200 && r.StatusCode < 400) || r.StatusCode == 429 {
return nil
}
// We have an error. Let's copy the body into our own buffer first,
// so that if we can't decode JSON, we can at least copy it raw.
bodyBuf := &bytes.Buffer{}
if _, err := io.Copy(bodyBuf, r.Body); err != nil {
return err
}
r.Body.Close()
r.Body = ioutil.NopCloser(bodyBuf)
// Build up the error object
respErr := &ResponseError{
HTTPMethod: r.Request.Method,
URL: r.Request.URL.String(),
StatusCode: r.StatusCode,
}
// Decode the error response if we can. Note that we wrap the bodyBuf
// in a bytes.Reader here so that the JSON decoder doesn't move the
// read pointer for the original buffer.
var resp ErrorResponse
if err := jsonutil.DecodeJSON(bodyBuf.Bytes(), &resp); err != nil {
// Store the fact that we couldn't decode the errors
respErr.RawError = true
respErr.Errors = []string{bodyBuf.String()}
} else {
// Store the decoded errors
respErr.Errors = resp.Errors
}
return respErr
}
// ErrorResponse is the raw structure of errors when they're returned by the
// HTTP API.
type ErrorResponse struct {
Errors []string
}
// ResponseError is the error returned when Vault responds with an error or
// non-success HTTP status code. If a request to Vault fails because of a
// network error a different error message will be returned. ResponseError gives
// access to the underlying errors and status code.
type ResponseError struct {
// HTTPMethod is the HTTP method for the request (PUT, GET, etc).
HTTPMethod string
// URL is the URL of the request.
URL string
// StatusCode is the HTTP status code.
StatusCode int
// RawError marks that the underlying error messages returned by Vault were
// not parsable. The Errors slice will contain the raw response body as the
// first and only error string if this value is set to true.
RawError bool
// Errors are the underlying errors returned by Vault.
Errors []string
}
// Error returns a human-readable error string for the response error.
func (r *ResponseError) Error() string {
errString := "Errors"
if r.RawError {
errString = "Raw Message"
}
var errBody bytes.Buffer
errBody.WriteString(fmt.Sprintf(
"Error making API request.\n\n"+
"URL: %s %s\n"+
"Code: %d. %s:\n\n",
r.HTTPMethod, r.URL, r.StatusCode, errString))
if r.RawError && len(r.Errors) == 1 {
errBody.WriteString(r.Errors[0])
} else {
for _, err := range r.Errors {
errBody.WriteString(fmt.Sprintf("* %s", err))
}
}
return errBody.String()
}
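
Not part of this diff: a sketch of inspecting a *ResponseError, assuming a configured *api.Client. Whether the returned error is a *api.ResponseError depends on the failure mode; network errors come back as plain errors.

```go
package example

import (
	"log"

	"github.com/hashicorp/vault/api"
)

// inspectError shows how a caller might pull the HTTP status code out of a
// failed call, e.g. reading a path the token has no policy for (expect a 403).
func inspectError(client *api.Client) {
	_, err := client.Logical().Read("secret/data/forbidden") // placeholder path
	if err == nil {
		return
	}
	if respErr, ok := err.(*api.ResponseError); ok {
		log.Printf("vault returned %d for %s %s: %v",
			respErr.StatusCode, respErr.HTTPMethod, respErr.URL, respErr.Errors)
		return
	}
	log.Printf("request failed before reaching vault: %v", err)
}
```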

322
vendor/github.com/hashicorp/vault/api/secret.go generated vendored Normal file
View File

@ -0,0 +1,322 @@
package api
import (
"bytes"
"fmt"
"io"
"time"
"github.com/hashicorp/errwrap"
"github.com/hashicorp/vault/sdk/helper/jsonutil"
"github.com/hashicorp/vault/sdk/helper/parseutil"
)
// Secret is the structure returned for every secret within Vault.
type Secret struct {
// The request ID that generated this response
RequestID string `json:"request_id"`
LeaseID string `json:"lease_id"`
LeaseDuration int `json:"lease_duration"`
Renewable bool `json:"renewable"`
// Data is the actual contents of the secret. The format of the data
// is arbitrary and up to the secret backend.
Data map[string]interface{} `json:"data"`
// Warnings contains any warnings related to the operation. These
// are not issues that caused the command to fail, but that the
// client should be aware of.
Warnings []string `json:"warnings"`
// Auth, if non-nil, means that there was authentication information
// attached to this response.
Auth *SecretAuth `json:"auth,omitempty"`
// WrapInfo, if non-nil, means that the initial response was wrapped in the
// cubbyhole of the given token (which has a TTL of the given number of
// seconds)
WrapInfo *SecretWrapInfo `json:"wrap_info,omitempty"`
}
// TokenID returns the standardized token ID (token) for the given secret.
func (s *Secret) TokenID() (string, error) {
if s == nil {
return "", nil
}
if s.Auth != nil && len(s.Auth.ClientToken) > 0 {
return s.Auth.ClientToken, nil
}
if s.Data == nil || s.Data["id"] == nil {
return "", nil
}
id, ok := s.Data["id"].(string)
if !ok {
return "", fmt.Errorf("token found but in the wrong format")
}
return id, nil
}
// TokenAccessor returns the standardized token accessor for the given secret.
// If the secret is nil or does not contain an accessor, this returns the empty
// string.
func (s *Secret) TokenAccessor() (string, error) {
if s == nil {
return "", nil
}
if s.Auth != nil && len(s.Auth.Accessor) > 0 {
return s.Auth.Accessor, nil
}
if s.Data == nil || s.Data["accessor"] == nil {
return "", nil
}
accessor, ok := s.Data["accessor"].(string)
if !ok {
return "", fmt.Errorf("token found but in the wrong format")
}
return accessor, nil
}
// TokenRemainingUses returns the standardized remaining uses for the given
// secret. If the secret is nil or does not contain the "num_uses", this
// returns -1. On error, this will return -1 and a non-nil error.
func (s *Secret) TokenRemainingUses() (int, error) {
if s == nil || s.Data == nil || s.Data["num_uses"] == nil {
return -1, nil
}
uses, err := parseutil.ParseInt(s.Data["num_uses"])
if err != nil {
return 0, err
}
return int(uses), nil
}
// TokenPolicies returns the standardized list of policies for the given secret.
// If the secret is nil or does not contain any policies, this returns nil. It
// also populates the secret's Auth info with identity/token policy info.
func (s *Secret) TokenPolicies() ([]string, error) {
if s == nil {
return nil, nil
}
if s.Auth != nil && len(s.Auth.Policies) > 0 {
return s.Auth.Policies, nil
}
if s.Data == nil || s.Data["policies"] == nil {
return nil, nil
}
var tokenPolicies []string
// Token policies
{
_, ok := s.Data["policies"]
if !ok {
goto TOKEN_DONE
}
sList, ok := s.Data["policies"].([]string)
if ok {
tokenPolicies = sList
goto TOKEN_DONE
}
list, ok := s.Data["policies"].([]interface{})
if !ok {
return nil, fmt.Errorf("unable to convert token policies to expected format")
}
for _, v := range list {
p, ok := v.(string)
if !ok {
return nil, fmt.Errorf("unable to convert policy %v to string", v)
}
tokenPolicies = append(tokenPolicies, p)
}
}
TOKEN_DONE:
var identityPolicies []string
// Identity policies
{
_, ok := s.Data["identity_policies"]
if !ok {
goto DONE
}
sList, ok := s.Data["identity_policies"].([]string)
if ok {
identityPolicies = sList
goto DONE
}
list, ok := s.Data["identity_policies"].([]interface{})
if !ok {
return nil, fmt.Errorf("unable to convert identity policies to expected format")
}
for _, v := range list {
p, ok := v.(string)
if !ok {
return nil, fmt.Errorf("unable to convert policy %v to string", v)
}
identityPolicies = append(identityPolicies, p)
}
}
DONE:
if s.Auth == nil {
s.Auth = &SecretAuth{}
}
policies := append(tokenPolicies, identityPolicies...)
s.Auth.TokenPolicies = tokenPolicies
s.Auth.IdentityPolicies = identityPolicies
s.Auth.Policies = policies
return policies, nil
}
// TokenMetadata returns the map of metadata associated with this token, if any
// exists. If the secret is nil or does not contain the "metadata" key, this
// returns nil.
func (s *Secret) TokenMetadata() (map[string]string, error) {
if s == nil {
return nil, nil
}
if s.Auth != nil && len(s.Auth.Metadata) > 0 {
return s.Auth.Metadata, nil
}
if s.Data == nil || (s.Data["metadata"] == nil && s.Data["meta"] == nil) {
return nil, nil
}
data, ok := s.Data["metadata"].(map[string]interface{})
if !ok {
data, ok = s.Data["meta"].(map[string]interface{})
if !ok {
return nil, fmt.Errorf("unable to convert metadata field to expected format")
}
}
metadata := make(map[string]string, len(data))
for k, v := range data {
typed, ok := v.(string)
if !ok {
return nil, fmt.Errorf("unable to convert metadata value %v to string", v)
}
metadata[k] = typed
}
return metadata, nil
}
// TokenIsRenewable returns the standardized token renewability for the given
// secret. If the secret is nil or does not contain the "renewable" key, this
// returns false.
func (s *Secret) TokenIsRenewable() (bool, error) {
if s == nil {
return false, nil
}
if s.Auth != nil && s.Auth.Renewable {
return s.Auth.Renewable, nil
}
if s.Data == nil || s.Data["renewable"] == nil {
return false, nil
}
renewable, err := parseutil.ParseBool(s.Data["renewable"])
if err != nil {
return false, errwrap.Wrapf("could not convert renewable value to a boolean: {{err}}", err)
}
return renewable, nil
}
// TokenTTL returns the standardized remaining token TTL for the given secret.
// If the secret is nil or does not contain a TTL, this returns 0.
func (s *Secret) TokenTTL() (time.Duration, error) {
if s == nil {
return 0, nil
}
if s.Auth != nil && s.Auth.LeaseDuration > 0 {
return time.Duration(s.Auth.LeaseDuration) * time.Second, nil
}
if s.Data == nil || s.Data["ttl"] == nil {
return 0, nil
}
ttl, err := parseutil.ParseDurationSecond(s.Data["ttl"])
if err != nil {
return 0, err
}
return ttl, nil
}
// SecretWrapInfo contains wrapping information if we have it. If what is
// contained is an authentication token, the accessor for the token will be
// available in WrappedAccessor.
type SecretWrapInfo struct {
Token string `json:"token"`
Accessor string `json:"accessor"`
TTL int `json:"ttl"`
CreationTime time.Time `json:"creation_time"`
CreationPath string `json:"creation_path"`
WrappedAccessor string `json:"wrapped_accessor"`
}
// SecretAuth is the structure containing auth information if we have it.
type SecretAuth struct {
ClientToken string `json:"client_token"`
Accessor string `json:"accessor"`
Policies []string `json:"policies"`
TokenPolicies []string `json:"token_policies"`
IdentityPolicies []string `json:"identity_policies"`
Metadata map[string]string `json:"metadata"`
Orphan bool `json:"orphan"`
EntityID string `json:"entity_id"`
LeaseDuration int `json:"lease_duration"`
Renewable bool `json:"renewable"`
}
// ParseSecret is used to parse a secret value from JSON from an io.Reader.
func ParseSecret(r io.Reader) (*Secret, error) {
// First read the data into a buffer. Not super efficient but we want to
// know if we actually have a body or not.
var buf bytes.Buffer
_, err := buf.ReadFrom(r)
if err != nil {
return nil, err
}
if buf.Len() == 0 {
return nil, nil
}
// First decode the JSON into a map[string]interface{}
var secret Secret
if err := jsonutil.DecodeJSONFromReader(&buf, &secret); err != nil {
return nil, err
}
return &secret, nil
}
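
Not part of this diff: a sketch of the Secret helper methods applied to a token self-lookup. LookupSelf is assumed to be available from this package's token auth API.

```go
package example

import (
	"fmt"

	"github.com/hashicorp/vault/api"
)

// tokenInfo uses the Secret helpers above on a token self-lookup.
func tokenInfo(client *api.Client) error {
	secret, err := client.Auth().Token().LookupSelf() // assumed helper from auth_token.go
	if err != nil {
		return err
	}

	ttl, err := secret.TokenTTL()
	if err != nil {
		return err
	}
	renewable, err := secret.TokenIsRenewable()
	if err != nil {
		return err
	}
	policies, err := secret.TokenPolicies()
	if err != nil {
		return err
	}

	fmt.Printf("ttl=%s renewable=%t policies=%v\n", ttl, renewable, policies)
	return nil
}
```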

62
vendor/github.com/hashicorp/vault/api/ssh.go generated vendored Normal file
View File

@ -0,0 +1,62 @@
package api
import (
"context"
"fmt"
)
// SSH is used to return a client to invoke operations on SSH backend.
type SSH struct {
c *Client
MountPoint string
}
// SSH returns the client for logical-backend API calls.
func (c *Client) SSH() *SSH {
return c.SSHWithMountPoint(SSHHelperDefaultMountPoint)
}
// SSHWithMountPoint returns the client with specific SSH mount point.
func (c *Client) SSHWithMountPoint(mountPoint string) *SSH {
return &SSH{
c: c,
MountPoint: mountPoint,
}
}
// Credential invokes the SSH backend API to create a credential to establish an SSH session.
func (c *SSH) Credential(role string, data map[string]interface{}) (*Secret, error) {
r := c.c.NewRequest("PUT", fmt.Sprintf("/v1/%s/creds/%s", c.MountPoint, role))
if err := r.SetJSONBody(data); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
return ParseSecret(resp.Body)
}
// SignKey signs the given public key and returns a signed public key to pass
// along with the SSH request.
func (c *SSH) SignKey(role string, data map[string]interface{}) (*Secret, error) {
r := c.c.NewRequest("PUT", fmt.Sprintf("/v1/%s/sign/%s", c.MountPoint, role))
if err := r.SetJSONBody(data); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
return ParseSecret(resp.Body)
}
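
Not part of this diff: a sketch of requesting an OTP credential from the SSH backend mounted at the default "ssh" path, assuming a configured *api.Client. The role name, target IP, and the "key" response field follow the OTP backend's conventions and are illustrative here.

```go
package example

import (
	"fmt"

	"github.com/hashicorp/vault/api"
)

// sshOTP asks the SSH secrets engine for a one-time password for a target host.
func sshOTP(client *api.Client) (string, error) {
	secret, err := client.SSH().Credential("otp_role", map[string]interface{}{
		"ip": "10.0.0.5", // placeholder target host
	})
	if err != nil {
		return "", err
	}
	key, ok := secret.Data["key"].(string)
	if !ok {
		return "", fmt.Errorf("no key in credential response")
	}
	return key, nil
}
```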

234
vendor/github.com/hashicorp/vault/api/ssh_agent.go generated vendored Normal file
View File

@ -0,0 +1,234 @@
package api
import (
"context"
"crypto/tls"
"crypto/x509"
"fmt"
"io/ioutil"
"os"
"github.com/hashicorp/errwrap"
cleanhttp "github.com/hashicorp/go-cleanhttp"
multierror "github.com/hashicorp/go-multierror"
rootcerts "github.com/hashicorp/go-rootcerts"
"github.com/hashicorp/hcl"
"github.com/hashicorp/hcl/hcl/ast"
"github.com/hashicorp/vault/sdk/helper/hclutil"
"github.com/mitchellh/mapstructure"
)
const (
// SSHHelperDefaultMountPoint is the default path at which SSH backend will be
// mounted in the Vault server.
SSHHelperDefaultMountPoint = "ssh"
// VerifyEchoRequest is the echo request message sent as OTP by the helper.
VerifyEchoRequest = "verify-echo-request"
// VerifyEchoResponse is the echo response message sent as a response to OTP
// matching echo request.
VerifyEchoResponse = "verify-echo-response"
)
// SSHHelper is a structure representing a vault-ssh-helper which can talk to vault server
// in order to verify the OTP entered by the user. It contains the path at which
// SSH backend is mounted at the server.
type SSHHelper struct {
c *Client
MountPoint string
}
// SSHVerifyResponse is a structure representing the fields in Vault server's
// response.
type SSHVerifyResponse struct {
// Usually empty. If the request OTP is echo request message, this will
// be set to the corresponding echo response message.
Message string `json:"message" mapstructure:"message"`
// Username associated with the OTP
Username string `json:"username" mapstructure:"username"`
// IP associated with the OTP
IP string `json:"ip" mapstructure:"ip"`
// Name of the role against which the OTP was issued
RoleName string `json:"role_name" mapstructure:"role_name"`
}
// SSHHelperConfig is a structure which represents the entries from the vault-ssh-helper's configuration file.
type SSHHelperConfig struct {
VaultAddr string `hcl:"vault_addr"`
SSHMountPoint string `hcl:"ssh_mount_point"`
CACert string `hcl:"ca_cert"`
CAPath string `hcl:"ca_path"`
AllowedCidrList string `hcl:"allowed_cidr_list"`
AllowedRoles string `hcl:"allowed_roles"`
TLSSkipVerify bool `hcl:"tls_skip_verify"`
TLSServerName string `hcl:"tls_server_name"`
}
// SetTLSParameters sets the TLS parameters for this SSH agent.
func (c *SSHHelperConfig) SetTLSParameters(clientConfig *Config, certPool *x509.CertPool) {
tlsConfig := &tls.Config{
InsecureSkipVerify: c.TLSSkipVerify,
MinVersion: tls.VersionTLS12,
RootCAs: certPool,
ServerName: c.TLSServerName,
}
transport := cleanhttp.DefaultTransport()
transport.TLSClientConfig = tlsConfig
clientConfig.HttpClient.Transport = transport
}
// Returns true if any of the following conditions are true:
// * CA cert is configured
// * CA path is configured
// * configured to skip certificate verification
// * TLS server name is configured
//
func (c *SSHHelperConfig) shouldSetTLSParameters() bool {
return c.CACert != "" || c.CAPath != "" || c.TLSServerName != "" || c.TLSSkipVerify
}
// NewClient returns a new client for the configuration. This client will be used by the
// vault-ssh-helper to communicate with Vault server and verify the OTP entered by user.
// If the configuration supplies Vault SSL certificates, then the client will
// have TLS configured in its transport.
func (c *SSHHelperConfig) NewClient() (*Client, error) {
// Creating a default client configuration for communicating with vault server.
clientConfig := DefaultConfig()
// Pointing the client to the actual address of vault server.
clientConfig.Address = c.VaultAddr
// Check if certificates are provided via config file.
if c.shouldSetTLSParameters() {
rootConfig := &rootcerts.Config{
CAFile: c.CACert,
CAPath: c.CAPath,
}
certPool, err := rootcerts.LoadCACerts(rootConfig)
if err != nil {
return nil, err
}
// Enable TLS on the HTTP client information
c.SetTLSParameters(clientConfig, certPool)
}
// Creating the client object for the given configuration
client, err := NewClient(clientConfig)
if err != nil {
return nil, err
}
return client, nil
}
// LoadSSHHelperConfig loads ssh-helper's configuration from the file and populates the corresponding
// in-memory structure.
//
// Vault address is a required parameter.
// Mount point defaults to "ssh".
func LoadSSHHelperConfig(path string) (*SSHHelperConfig, error) {
contents, err := ioutil.ReadFile(path)
if err != nil && !os.IsNotExist(err) {
return nil, multierror.Prefix(err, "ssh_helper:")
}
return ParseSSHHelperConfig(string(contents))
}
// ParseSSHHelperConfig parses the given contents as a string for the SSHHelper
// configuration.
func ParseSSHHelperConfig(contents string) (*SSHHelperConfig, error) {
root, err := hcl.Parse(string(contents))
if err != nil {
return nil, errwrap.Wrapf("error parsing config: {{err}}", err)
}
list, ok := root.Node.(*ast.ObjectList)
if !ok {
return nil, fmt.Errorf("error parsing config: file doesn't contain a root object")
}
valid := []string{
"vault_addr",
"ssh_mount_point",
"ca_cert",
"ca_path",
"allowed_cidr_list",
"allowed_roles",
"tls_skip_verify",
"tls_server_name",
}
if err := hclutil.CheckHCLKeys(list, valid); err != nil {
return nil, multierror.Prefix(err, "ssh_helper:")
}
var c SSHHelperConfig
c.SSHMountPoint = SSHHelperDefaultMountPoint
if err := hcl.DecodeObject(&c, list); err != nil {
return nil, multierror.Prefix(err, "ssh_helper:")
}
if c.VaultAddr == "" {
return nil, fmt.Errorf(`missing config "vault_addr"`)
}
return &c, nil
}
// SSHHelper creates an SSHHelper object which can talk to Vault server with SSH backend
// mounted at default path ("ssh").
func (c *Client) SSHHelper() *SSHHelper {
return c.SSHHelperWithMountPoint(SSHHelperDefaultMountPoint)
}
// SSHHelperWithMountPoint creates an SSHHelper object which can talk to Vault server with SSH backend
// mounted at a specific mount point.
func (c *Client) SSHHelperWithMountPoint(mountPoint string) *SSHHelper {
return &SSHHelper{
c: c,
MountPoint: mountPoint,
}
}
// Verify verifies if the key provided by user is present in Vault server. The response
// will contain the IP address and username associated with the OTP. In case the
// OTP matches the echo request message, instead of searching an entry for the OTP,
// an echo response message is returned. This feature is used by ssh-helper to verify if
// it is configured correctly.
// it is configured correctly.
func (c *SSHHelper) Verify(otp string) (*SSHVerifyResponse, error) {
data := map[string]interface{}{
"otp": otp,
}
verifyPath := fmt.Sprintf("/v1/%s/verify", c.MountPoint)
r := c.c.NewRequest("PUT", verifyPath)
if err := r.SetJSONBody(data); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
secret, err := ParseSecret(resp.Body)
if err != nil {
return nil, err
}
if secret.Data == nil {
return nil, nil
}
var verifyResp SSHVerifyResponse
err = mapstructure.Decode(secret.Data, &verifyResp)
if err != nil {
return nil, err
}
return &verifyResp, nil
}
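
Not part of this diff: a sketch of the flow vault-ssh-helper follows with the helpers above; the config file path is a placeholder.

```go
package example

import (
	"log"

	"github.com/hashicorp/vault/api"
)

// verifyOTP mirrors what vault-ssh-helper does: load its config, build a
// client from it, and verify an OTP against the configured mount point.
func verifyOTP(otp string) error {
	cfg, err := api.LoadSSHHelperConfig("/etc/vault-ssh-helper.d/config.hcl")
	if err != nil {
		return err
	}
	client, err := cfg.NewClient()
	if err != nil {
		return err
	}
	resp, err := client.SSHHelperWithMountPoint(cfg.SSHMountPoint).Verify(otp)
	if err != nil {
		return err
	}
	if resp != nil {
		log.Printf("OTP issued for %s@%s (role %s)", resp.Username, resp.IP, resp.RoleName)
	}
	return nil
}
```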

11
vendor/github.com/hashicorp/vault/api/sys.go generated vendored Normal file
View File

@ -0,0 +1,11 @@
package api
// Sys is used to perform system-related operations on Vault.
type Sys struct {
c *Client
}
// Sys is used to return the client for sys-related API calls.
func (c *Client) Sys() *Sys {
return &Sys{c: c}
}

136
vendor/github.com/hashicorp/vault/api/sys_audit.go generated vendored Normal file
View File

@ -0,0 +1,136 @@
package api
import (
"context"
"errors"
"fmt"
"github.com/mitchellh/mapstructure"
)
func (c *Sys) AuditHash(path string, input string) (string, error) {
body := map[string]interface{}{
"input": input,
}
r := c.c.NewRequest("PUT", fmt.Sprintf("/v1/sys/audit-hash/%s", path))
if err := r.SetJSONBody(body); err != nil {
return "", err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return "", err
}
defer resp.Body.Close()
secret, err := ParseSecret(resp.Body)
if err != nil {
return "", err
}
if secret == nil || secret.Data == nil {
return "", errors.New("data from server response is empty")
}
hash, ok := secret.Data["hash"]
if !ok {
return "", errors.New("hash not found in response data")
}
hashStr, ok := hash.(string)
if !ok {
return "", errors.New("could not parse hash in response data")
}
return hashStr, nil
}
func (c *Sys) ListAudit() (map[string]*Audit, error) {
r := c.c.NewRequest("GET", "/v1/sys/audit")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
secret, err := ParseSecret(resp.Body)
if err != nil {
return nil, err
}
if secret == nil || secret.Data == nil {
return nil, errors.New("data from server response is empty")
}
mounts := map[string]*Audit{}
err = mapstructure.Decode(secret.Data, &mounts)
if err != nil {
return nil, err
}
return mounts, nil
}
// DEPRECATED: Use EnableAuditWithOptions instead
func (c *Sys) EnableAudit(
path string, auditType string, desc string, opts map[string]string) error {
return c.EnableAuditWithOptions(path, &EnableAuditOptions{
Type: auditType,
Description: desc,
Options: opts,
})
}
func (c *Sys) EnableAuditWithOptions(path string, options *EnableAuditOptions) error {
r := c.c.NewRequest("PUT", fmt.Sprintf("/v1/sys/audit/%s", path))
if err := r.SetJSONBody(options); err != nil {
return err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return err
}
defer resp.Body.Close()
return nil
}
func (c *Sys) DisableAudit(path string) error {
r := c.c.NewRequest("DELETE", fmt.Sprintf("/v1/sys/audit/%s", path))
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
// Structures for the requests/response are all down here. They aren't
// individually documented because they map almost directly to the raw HTTP API
// documentation. Please refer to that documentation for more details.
type EnableAuditOptions struct {
Type string `json:"type" mapstructure:"type"`
Description string `json:"description" mapstructure:"description"`
Options map[string]string `json:"options" mapstructure:"options"`
Local bool `json:"local" mapstructure:"local"`
}
type Audit struct {
Type string `json:"type" mapstructure:"type"`
Description string `json:"description" mapstructure:"description"`
Options map[string]string `json:"options" mapstructure:"options"`
Local bool `json:"local" mapstructure:"local"`
Path string `json:"path" mapstructure:"path"`
}
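
Not part of this diff: a sketch of enabling a file audit device with the options type above; the device path and log file location are placeholders.

```go
package example

import "github.com/hashicorp/vault/api"

// enableFileAudit turns on a file audit device and returns the audit mounts.
func enableFileAudit(client *api.Client) (map[string]*api.Audit, error) {
	err := client.Sys().EnableAuditWithOptions("file", &api.EnableAuditOptions{
		Type:        "file",
		Description: "audit to local file",
		Options:     map[string]string{"file_path": "/var/log/vault_audit.log"},
	})
	if err != nil {
		return nil, err
	}
	return client.Sys().ListAudit()
}
```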

80
vendor/github.com/hashicorp/vault/api/sys_auth.go generated vendored Normal file
View File

@ -0,0 +1,80 @@
package api
import (
"context"
"errors"
"fmt"
"github.com/mitchellh/mapstructure"
)
func (c *Sys) ListAuth() (map[string]*AuthMount, error) {
r := c.c.NewRequest("GET", "/v1/sys/auth")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
secret, err := ParseSecret(resp.Body)
if err != nil {
return nil, err
}
if secret == nil || secret.Data == nil {
return nil, errors.New("data from server response is empty")
}
mounts := map[string]*AuthMount{}
err = mapstructure.Decode(secret.Data, &mounts)
if err != nil {
return nil, err
}
return mounts, nil
}
// DEPRECATED: Use EnableAuthWithOptions instead
func (c *Sys) EnableAuth(path, authType, desc string) error {
return c.EnableAuthWithOptions(path, &EnableAuthOptions{
Type: authType,
Description: desc,
})
}
func (c *Sys) EnableAuthWithOptions(path string, options *EnableAuthOptions) error {
r := c.c.NewRequest("POST", fmt.Sprintf("/v1/sys/auth/%s", path))
if err := r.SetJSONBody(options); err != nil {
return err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return err
}
defer resp.Body.Close()
return nil
}
func (c *Sys) DisableAuth(path string) error {
r := c.c.NewRequest("DELETE", fmt.Sprintf("/v1/sys/auth/%s", path))
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
// Rather than duplicate, we can use modern Go's type aliasing
type EnableAuthOptions = MountInput
type AuthConfigInput = MountConfigInput
type AuthMount = MountOutput
type AuthConfigOutput = MountConfigOutput
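
Not part of this diff: the kops Vault store introduced in this change expects the AWS auth method mounted at /aws, which an operator could enable with this API roughly as follows (path and description illustrative). Binding roles to the cluster's IAM identity is a separate step.

```go
package example

import "github.com/hashicorp/vault/api"

// enableAWSAuth mounts the AWS auth method at the path the kops Vault store
// expects ("aws").
func enableAWSAuth(client *api.Client) error {
	return client.Sys().EnableAuthWithOptions("aws", &api.EnableAuthOptions{
		Type:        "aws",
		Description: "IAM auth for kops nodes",
	})
}
```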

View File

@ -0,0 +1,64 @@
package api
import (
"context"
"errors"
"fmt"
"github.com/mitchellh/mapstructure"
)
func (c *Sys) CapabilitiesSelf(path string) ([]string, error) {
return c.Capabilities(c.c.Token(), path)
}
func (c *Sys) Capabilities(token, path string) ([]string, error) {
body := map[string]string{
"token": token,
"path": path,
}
reqPath := "/v1/sys/capabilities"
if token == c.c.Token() {
reqPath = fmt.Sprintf("%s-self", reqPath)
}
r := c.c.NewRequest("POST", reqPath)
if err := r.SetJSONBody(body); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
secret, err := ParseSecret(resp.Body)
if err != nil {
return nil, err
}
if secret == nil || secret.Data == nil {
return nil, errors.New("data from server response is empty")
}
var res []string
err = mapstructure.Decode(secret.Data[path], &res)
if err != nil {
return nil, err
}
if len(res) == 0 {
_, ok := secret.Data["capabilities"]
if ok {
err = mapstructure.Decode(secret.Data["capabilities"], &res)
if err != nil {
return nil, err
}
}
}
return res, nil
}
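
Not part of this diff: a sketch of checking what the current token may do on a path; the KV v2 path below is a placeholder in the shape of the kops secret paths.

```go
package example

import (
	"fmt"

	"github.com/hashicorp/vault/api"
)

// checkAccess prints the capabilities the current token has on a path.
func checkAccess(client *api.Client) error {
	caps, err := client.Sys().CapabilitiesSelf("kv/data/clusters/example/secrets")
	if err != nil {
		return err
	}
	fmt.Println(caps)
	return nil
}
```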

View File

@ -0,0 +1,105 @@
package api
import (
"context"
"errors"
"github.com/mitchellh/mapstructure"
)
func (c *Sys) CORSStatus() (*CORSResponse, error) {
r := c.c.NewRequest("GET", "/v1/sys/config/cors")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
secret, err := ParseSecret(resp.Body)
if err != nil {
return nil, err
}
if secret == nil || secret.Data == nil {
return nil, errors.New("data from server response is empty")
}
var result CORSResponse
err = mapstructure.Decode(secret.Data, &result)
if err != nil {
return nil, err
}
return &result, err
}
func (c *Sys) ConfigureCORS(req *CORSRequest) (*CORSResponse, error) {
r := c.c.NewRequest("PUT", "/v1/sys/config/cors")
if err := r.SetJSONBody(req); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
secret, err := ParseSecret(resp.Body)
if err != nil {
return nil, err
}
if secret == nil || secret.Data == nil {
return nil, errors.New("data from server response is empty")
}
var result CORSResponse
err = mapstructure.Decode(secret.Data, &result)
if err != nil {
return nil, err
}
return &result, err
}
func (c *Sys) DisableCORS() (*CORSResponse, error) {
r := c.c.NewRequest("DELETE", "/v1/sys/config/cors")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
secret, err := ParseSecret(resp.Body)
if err != nil {
return nil, err
}
if secret == nil || secret.Data == nil {
return nil, errors.New("data from server response is empty")
}
var result CORSResponse
err = mapstructure.Decode(secret.Data, &result)
if err != nil {
return nil, err
}
return &result, err
}
type CORSRequest struct {
AllowedOrigins string `json:"allowed_origins" mapstructure:"allowed_origins"`
Enabled bool `json:"enabled" mapstructure:"enabled"`
}
type CORSResponse struct {
AllowedOrigins string `json:"allowed_origins" mapstructure:"allowed_origins"`
Enabled bool `json:"enabled" mapstructure:"enabled"`
}
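
Not part of this diff: a sketch of enabling CORS for one origin with the request type above; the origin is a placeholder.

```go
package example

import "github.com/hashicorp/vault/api"

// allowOrigin enables CORS for a single origin. AllowedOrigins is a plain
// string in this version of the API.
func allowOrigin(client *api.Client) (*api.CORSResponse, error) {
	return client.Sys().ConfigureCORS(&api.CORSRequest{
		AllowedOrigins: "https://ui.example.com",
		Enabled:        true,
	})
}
```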

View File

@ -0,0 +1,124 @@
package api
import "context"
func (c *Sys) GenerateRootStatus() (*GenerateRootStatusResponse, error) {
return c.generateRootStatusCommon("/v1/sys/generate-root/attempt")
}
func (c *Sys) GenerateDROperationTokenStatus() (*GenerateRootStatusResponse, error) {
return c.generateRootStatusCommon("/v1/sys/replication/dr/secondary/generate-operation-token/attempt")
}
func (c *Sys) generateRootStatusCommon(path string) (*GenerateRootStatusResponse, error) {
r := c.c.NewRequest("GET", path)
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result GenerateRootStatusResponse
err = resp.DecodeJSON(&result)
return &result, err
}
func (c *Sys) GenerateRootInit(otp, pgpKey string) (*GenerateRootStatusResponse, error) {
return c.generateRootInitCommon("/v1/sys/generate-root/attempt", otp, pgpKey)
}
func (c *Sys) GenerateDROperationTokenInit(otp, pgpKey string) (*GenerateRootStatusResponse, error) {
return c.generateRootInitCommon("/v1/sys/replication/dr/secondary/generate-operation-token/attempt", otp, pgpKey)
}
func (c *Sys) generateRootInitCommon(path, otp, pgpKey string) (*GenerateRootStatusResponse, error) {
body := map[string]interface{}{
"otp": otp,
"pgp_key": pgpKey,
}
r := c.c.NewRequest("PUT", path)
if err := r.SetJSONBody(body); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result GenerateRootStatusResponse
err = resp.DecodeJSON(&result)
return &result, err
}
func (c *Sys) GenerateRootCancel() error {
return c.generateRootCancelCommon("/v1/sys/generate-root/attempt")
}
func (c *Sys) GenerateDROperationTokenCancel() error {
return c.generateRootCancelCommon("/v1/sys/replication/dr/secondary/generate-operation-token/attempt")
}
func (c *Sys) generateRootCancelCommon(path string) error {
r := c.c.NewRequest("DELETE", path)
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
func (c *Sys) GenerateRootUpdate(shard, nonce string) (*GenerateRootStatusResponse, error) {
return c.generateRootUpdateCommon("/v1/sys/generate-root/update", shard, nonce)
}
func (c *Sys) GenerateDROperationTokenUpdate(shard, nonce string) (*GenerateRootStatusResponse, error) {
return c.generateRootUpdateCommon("/v1/sys/replication/dr/secondary/generate-operation-token/update", shard, nonce)
}
func (c *Sys) generateRootUpdateCommon(path, shard, nonce string) (*GenerateRootStatusResponse, error) {
body := map[string]interface{}{
"key": shard,
"nonce": nonce,
}
r := c.c.NewRequest("PUT", path)
if err := r.SetJSONBody(body); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result GenerateRootStatusResponse
err = resp.DecodeJSON(&result)
return &result, err
}
type GenerateRootStatusResponse struct {
Nonce string `json:"nonce"`
Started bool `json:"started"`
Progress int `json:"progress"`
Required int `json:"required"`
Complete bool `json:"complete"`
EncodedToken string `json:"encoded_token"`
EncodedRootToken string `json:"encoded_root_token"`
PGPFingerprint string `json:"pgp_fingerprint"`
OTP string `json:"otp"`
OTPLength int `json:"otp_length"`
}

41
vendor/github.com/hashicorp/vault/api/sys_health.go generated vendored Normal file
View File

@ -0,0 +1,41 @@
package api
import "context"
func (c *Sys) Health() (*HealthResponse, error) {
r := c.c.NewRequest("GET", "/v1/sys/health")
// If the code is 400 or above it will automatically turn into an error,
// but the sys/health API defaults to returning 5xx when not sealed or
// inited, so we force this code to be something else so we parse correctly
r.Params.Add("uninitcode", "299")
r.Params.Add("sealedcode", "299")
r.Params.Add("standbycode", "299")
r.Params.Add("drsecondarycode", "299")
r.Params.Add("performancestandbycode", "299")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result HealthResponse
err = resp.DecodeJSON(&result)
return &result, err
}
type HealthResponse struct {
Initialized bool `json:"initialized"`
Sealed bool `json:"sealed"`
Standby bool `json:"standby"`
PerformanceStandby bool `json:"performance_standby"`
ReplicationPerformanceMode string `json:"replication_performance_mode"`
ReplicationDRMode string `json:"replication_dr_mode"`
ServerTimeUTC int64 `json:"server_time_utc"`
Version string `json:"version"`
ClusterName string `json:"cluster_name,omitempty"`
ClusterID string `json:"cluster_id,omitempty"`
LastWAL uint64 `json:"last_wal,omitempty"`
}
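
Not part of this diff: a sketch of reading sys/health with the client above; thanks to the status-code overrides, sealed or uninitialized servers still decode rather than error.

```go
package example

import (
	"fmt"

	"github.com/hashicorp/vault/api"
)

// printHealth queries sys/health and prints the key status fields.
func printHealth(client *api.Client) error {
	health, err := client.Sys().Health()
	if err != nil {
		return err
	}
	fmt.Printf("initialized=%t sealed=%t standby=%t version=%s\n",
		health.Initialized, health.Sealed, health.Standby, health.Version)
	return nil
}
```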

61
vendor/github.com/hashicorp/vault/api/sys_init.go generated vendored Normal file
View File

@ -0,0 +1,61 @@
package api
import "context"
func (c *Sys) InitStatus() (bool, error) {
r := c.c.NewRequest("GET", "/v1/sys/init")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return false, err
}
defer resp.Body.Close()
var result InitStatusResponse
err = resp.DecodeJSON(&result)
return result.Initialized, err
}
func (c *Sys) Init(opts *InitRequest) (*InitResponse, error) {
r := c.c.NewRequest("PUT", "/v1/sys/init")
if err := r.SetJSONBody(opts); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result InitResponse
err = resp.DecodeJSON(&result)
return &result, err
}
type InitRequest struct {
SecretShares int `json:"secret_shares"`
SecretThreshold int `json:"secret_threshold"`
StoredShares int `json:"stored_shares"`
PGPKeys []string `json:"pgp_keys"`
RecoveryShares int `json:"recovery_shares"`
RecoveryThreshold int `json:"recovery_threshold"`
RecoveryPGPKeys []string `json:"recovery_pgp_keys"`
RootTokenPGPKey string `json:"root_token_pgp_key"`
}
type InitStatusResponse struct {
Initialized bool
}
type InitResponse struct {
Keys []string `json:"keys"`
KeysB64 []string `json:"keys_base64"`
RecoveryKeys []string `json:"recovery_keys"`
RecoveryKeysB64 []string `json:"recovery_keys_base64"`
RootToken string `json:"root_token"`
}
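
Not part of this diff: a sketch of initializing a fresh server; the single share/threshold is illustrative only.

```go
package example

import "github.com/hashicorp/vault/api"

// initIfNeeded initializes a brand-new Vault server with a single unseal key.
// Production setups use more shares and a higher threshold.
func initIfNeeded(client *api.Client) (*api.InitResponse, error) {
	ready, err := client.Sys().InitStatus()
	if err != nil {
		return nil, err
	}
	if ready {
		return nil, nil // already initialized
	}
	return client.Sys().Init(&api.InitRequest{
		SecretShares:    1,
		SecretThreshold: 1,
	})
}
```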

29
vendor/github.com/hashicorp/vault/api/sys_leader.go generated vendored Normal file
View File

@ -0,0 +1,29 @@
package api
import "context"
func (c *Sys) Leader() (*LeaderResponse, error) {
r := c.c.NewRequest("GET", "/v1/sys/leader")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result LeaderResponse
err = resp.DecodeJSON(&result)
return &result, err
}
type LeaderResponse struct {
HAEnabled bool `json:"ha_enabled"`
IsSelf bool `json:"is_self"`
LeaderAddress string `json:"leader_address"`
LeaderClusterAddress string `json:"leader_cluster_address"`
PerfStandby bool `json:"performance_standby"`
PerfStandbyLastRemoteWAL uint64 `json:"performance_standby_last_remote_wal"`
LastWAL uint64 `json:"last_wal"`
}

105
vendor/github.com/hashicorp/vault/api/sys_leases.go generated vendored Normal file
View File

@ -0,0 +1,105 @@
package api
import (
"context"
"errors"
)
func (c *Sys) Renew(id string, increment int) (*Secret, error) {
r := c.c.NewRequest("PUT", "/v1/sys/leases/renew")
body := map[string]interface{}{
"increment": increment,
"lease_id": id,
}
if err := r.SetJSONBody(body); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
return ParseSecret(resp.Body)
}
func (c *Sys) Revoke(id string) error {
r := c.c.NewRequest("PUT", "/v1/sys/leases/revoke/"+id)
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
func (c *Sys) RevokePrefix(id string) error {
r := c.c.NewRequest("PUT", "/v1/sys/leases/revoke-prefix/"+id)
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
func (c *Sys) RevokeForce(id string) error {
r := c.c.NewRequest("PUT", "/v1/sys/leases/revoke-force/"+id)
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
func (c *Sys) RevokeWithOptions(opts *RevokeOptions) error {
if opts == nil {
return errors.New("nil options provided")
}
// Construct path
path := "/v1/sys/leases/revoke/"
switch {
case opts.Force:
path = "/v1/sys/leases/revoke-force/"
case opts.Prefix:
path = "/v1/sys/leases/revoke-prefix/"
}
path += opts.LeaseID
r := c.c.NewRequest("PUT", path)
if !opts.Force {
body := map[string]interface{}{
"sync": opts.Sync,
}
if err := r.SetJSONBody(body); err != nil {
return err
}
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
type RevokeOptions struct {
LeaseID string
Force bool
Prefix bool
Sync bool
}
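
Not part of this diff: a sketch of renewing and then revoking a lease; the lease ID and increment are placeholders taken from a previously read dynamic secret.

```go
package example

import "github.com/hashicorp/vault/api"

// renewThenRevoke extends a lease once and then revokes it synchronously.
func renewThenRevoke(client *api.Client, leaseID string) error {
	if _, err := client.Sys().Renew(leaseID, 300); err != nil {
		return err
	}
	return client.Sys().RevokeWithOptions(&api.RevokeOptions{
		LeaseID: leaseID,
		Sync:    true,
	})
}
```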

185
vendor/github.com/hashicorp/vault/api/sys_mounts.go generated vendored Normal file
View File

@ -0,0 +1,185 @@
package api
import (
"context"
"errors"
"fmt"
"github.com/mitchellh/mapstructure"
)
func (c *Sys) ListMounts() (map[string]*MountOutput, error) {
r := c.c.NewRequest("GET", "/v1/sys/mounts")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
secret, err := ParseSecret(resp.Body)
if err != nil {
return nil, err
}
if secret == nil || secret.Data == nil {
return nil, errors.New("data from server response is empty")
}
mounts := map[string]*MountOutput{}
err = mapstructure.Decode(secret.Data, &mounts)
if err != nil {
return nil, err
}
return mounts, nil
}
func (c *Sys) Mount(path string, mountInfo *MountInput) error {
r := c.c.NewRequest("POST", fmt.Sprintf("/v1/sys/mounts/%s", path))
if err := r.SetJSONBody(mountInfo); err != nil {
return err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return err
}
defer resp.Body.Close()
return nil
}
func (c *Sys) Unmount(path string) error {
r := c.c.NewRequest("DELETE", fmt.Sprintf("/v1/sys/mounts/%s", path))
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
func (c *Sys) Remount(from, to string) error {
body := map[string]interface{}{
"from": from,
"to": to,
}
r := c.c.NewRequest("POST", "/v1/sys/remount")
if err := r.SetJSONBody(body); err != nil {
return err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
func (c *Sys) TuneMount(path string, config MountConfigInput) error {
r := c.c.NewRequest("POST", fmt.Sprintf("/v1/sys/mounts/%s/tune", path))
if err := r.SetJSONBody(config); err != nil {
return err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
func (c *Sys) MountConfig(path string) (*MountConfigOutput, error) {
r := c.c.NewRequest("GET", fmt.Sprintf("/v1/sys/mounts/%s/tune", path))
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
secret, err := ParseSecret(resp.Body)
if err != nil {
return nil, err
}
if secret == nil || secret.Data == nil {
return nil, errors.New("data from server response is empty")
}
var result MountConfigOutput
err = mapstructure.Decode(secret.Data, &result)
if err != nil {
return nil, err
}
return &result, err
}
type MountInput struct {
Type string `json:"type"`
Description string `json:"description"`
Config MountConfigInput `json:"config"`
Local bool `json:"local"`
SealWrap bool `json:"seal_wrap" mapstructure:"seal_wrap"`
Options map[string]string `json:"options"`
// Deprecated: Newer server responses should be returning this information in the
// Type field (json: "type") instead.
PluginName string `json:"plugin_name,omitempty"`
}
type MountConfigInput struct {
Options map[string]string `json:"options" mapstructure:"options"`
DefaultLeaseTTL string `json:"default_lease_ttl" mapstructure:"default_lease_ttl"`
Description *string `json:"description,omitempty" mapstructure:"description"`
MaxLeaseTTL string `json:"max_lease_ttl" mapstructure:"max_lease_ttl"`
ForceNoCache bool `json:"force_no_cache" mapstructure:"force_no_cache"`
AuditNonHMACRequestKeys []string `json:"audit_non_hmac_request_keys,omitempty" mapstructure:"audit_non_hmac_request_keys"`
AuditNonHMACResponseKeys []string `json:"audit_non_hmac_response_keys,omitempty" mapstructure:"audit_non_hmac_response_keys"`
ListingVisibility string `json:"listing_visibility,omitempty" mapstructure:"listing_visibility"`
PassthroughRequestHeaders []string `json:"passthrough_request_headers,omitempty" mapstructure:"passthrough_request_headers"`
AllowedResponseHeaders []string `json:"allowed_response_headers,omitempty" mapstructure:"allowed_response_headers"`
TokenType string `json:"token_type,omitempty" mapstructure:"token_type"`
// Deprecated: This field will always be blank for newer server responses.
PluginName string `json:"plugin_name,omitempty" mapstructure:"plugin_name"`
}
type MountOutput struct {
UUID string `json:"uuid"`
Type string `json:"type"`
Description string `json:"description"`
Accessor string `json:"accessor"`
Config MountConfigOutput `json:"config"`
Options map[string]string `json:"options"`
Local bool `json:"local"`
SealWrap bool `json:"seal_wrap" mapstructure:"seal_wrap"`
}
type MountConfigOutput struct {
DefaultLeaseTTL int `json:"default_lease_ttl" mapstructure:"default_lease_ttl"`
MaxLeaseTTL int `json:"max_lease_ttl" mapstructure:"max_lease_ttl"`
ForceNoCache bool `json:"force_no_cache" mapstructure:"force_no_cache"`
AuditNonHMACRequestKeys []string `json:"audit_non_hmac_request_keys,omitempty" mapstructure:"audit_non_hmac_request_keys"`
AuditNonHMACResponseKeys []string `json:"audit_non_hmac_response_keys,omitempty" mapstructure:"audit_non_hmac_response_keys"`
ListingVisibility string `json:"listing_visibility,omitempty" mapstructure:"listing_visibility"`
PassthroughRequestHeaders []string `json:"passthrough_request_headers,omitempty" mapstructure:"passthrough_request_headers"`
AllowedResponseHeaders []string `json:"allowed_response_headers,omitempty" mapstructure:"allowed_response_headers"`
TokenType string `json:"token_type,omitempty" mapstructure:"token_type"`
// Deprecated: This field will always be blank for newer server responses.
PluginName string `json:"plugin_name,omitempty" mapstructure:"plugin_name"`
}
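
Not part of this diff: the vault:// paths in this change expect a KV version 2 mount, which an operator could create with this API roughly as follows, assuming the standard kv engine's version option. The mount path "kops" is a placeholder.

```go
package example

import "github.com/hashicorp/vault/api"

// mountKV2 creates a KV version 2 secrets mount for kops keys and secrets.
func mountKV2(client *api.Client) error {
	return client.Sys().Mount("kops", &api.MountInput{
		Type:        "kv",
		Description: "kops keys and secrets",
		Options:     map[string]string{"version": "2"},
	})
}
```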

238
vendor/github.com/hashicorp/vault/api/sys_plugins.go generated vendored Normal file
View File

@ -0,0 +1,238 @@
package api
import (
"context"
"errors"
"fmt"
"net/http"
"github.com/hashicorp/vault/sdk/helper/consts"
"github.com/mitchellh/mapstructure"
)
// ListPluginsInput is used as input to the ListPlugins function.
type ListPluginsInput struct {
// Type of the plugin. Required.
Type consts.PluginType `json:"type"`
}
// ListPluginsResponse is the response from the ListPlugins call.
type ListPluginsResponse struct {
// PluginsByType is the list of plugins by type.
PluginsByType map[consts.PluginType][]string `json:"types"`
// Names is the list of names of the plugins.
//
// Deprecated: Newer server responses should be returning PluginsByType (json:
// "types") instead.
Names []string `json:"names"`
}
// ListPlugins lists all plugins in the catalog and returns their names as a
// list of strings.
func (c *Sys) ListPlugins(i *ListPluginsInput) (*ListPluginsResponse, error) {
path := ""
method := ""
if i.Type == consts.PluginTypeUnknown {
path = "/v1/sys/plugins/catalog"
method = "GET"
} else {
path = fmt.Sprintf("/v1/sys/plugins/catalog/%s", i.Type)
method = "LIST"
}
req := c.c.NewRequest(method, path)
if method == "LIST" {
// Set this for broader compatibility, but we use LIST above to be able
// to handle the wrapping lookup function
req.Method = "GET"
req.Params.Set("list", "true")
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, req)
if err != nil && resp == nil {
return nil, err
}
if resp == nil {
return nil, nil
}
defer resp.Body.Close()
// We received an Unsupported Operation response from Vault, indicating
// Vault of an older version that doesn't support the GET method yet;
// switch it to a LIST.
if resp.StatusCode == 405 {
req.Params.Set("list", "true")
resp, err := c.c.RawRequestWithContext(ctx, req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result struct {
Data struct {
Keys []string `json:"keys"`
} `json:"data"`
}
if err := resp.DecodeJSON(&result); err != nil {
return nil, err
}
return &ListPluginsResponse{Names: result.Data.Keys}, nil
}
secret, err := ParseSecret(resp.Body)
if err != nil {
return nil, err
}
if secret == nil || secret.Data == nil {
return nil, errors.New("data from server response is empty")
}
result := &ListPluginsResponse{
PluginsByType: make(map[consts.PluginType][]string),
}
if i.Type == consts.PluginTypeUnknown {
for pluginTypeStr, pluginsRaw := range secret.Data {
pluginType, err := consts.ParsePluginType(pluginTypeStr)
if err != nil {
return nil, err
}
pluginsIfc, ok := pluginsRaw.([]interface{})
if !ok {
return nil, fmt.Errorf("unable to parse plugins for %q type", pluginTypeStr)
}
plugins := make([]string, len(pluginsIfc))
for i, nameIfc := range pluginsIfc {
name, ok := nameIfc.(string)
if !ok {
return nil, fmt.Errorf("unable to parse plugin name at index %d for %q type", i, pluginTypeStr)
}
plugins[i] = name
}
result.PluginsByType[pluginType] = plugins
}
} else {
var respKeys []string
if err := mapstructure.Decode(secret.Data["keys"], &respKeys); err != nil {
return nil, err
}
result.PluginsByType[i.Type] = respKeys
}
return result, nil
}
// GetPluginInput is used as input to the GetPlugin function.
type GetPluginInput struct {
Name string `json:"-"`
// Type of the plugin. Required.
Type consts.PluginType `json:"type"`
}
// GetPluginResponse is the response from the GetPlugin call.
type GetPluginResponse struct {
Args []string `json:"args"`
Builtin bool `json:"builtin"`
Command string `json:"command"`
Name string `json:"name"`
SHA256 string `json:"sha256"`
}
// GetPlugin retrieves information about the plugin.
func (c *Sys) GetPlugin(i *GetPluginInput) (*GetPluginResponse, error) {
path := catalogPathByType(i.Type, i.Name)
req := c.c.NewRequest(http.MethodGet, path)
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result struct {
Data *GetPluginResponse
}
err = resp.DecodeJSON(&result)
if err != nil {
return nil, err
}
return result.Data, err
}
// RegisterPluginInput is used as input to the RegisterPlugin function.
type RegisterPluginInput struct {
// Name is the name of the plugin. Required.
Name string `json:"-"`
// Type of the plugin. Required.
Type consts.PluginType `json:"type"`
// Args is the list of args to spawn the process with.
Args []string `json:"args,omitempty"`
// Command is the command to run.
Command string `json:"command,omitempty"`
// SHA256 is the shasum of the plugin.
SHA256 string `json:"sha256,omitempty"`
}
// RegisterPlugin registers the plugin with the given information.
func (c *Sys) RegisterPlugin(i *RegisterPluginInput) error {
path := catalogPathByType(i.Type, i.Name)
req := c.c.NewRequest(http.MethodPut, path)
if err := req.SetJSONBody(i); err != nil {
return err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, req)
if err == nil {
defer resp.Body.Close()
}
return err
}
// DeregisterPluginInput is used as input to the DeregisterPlugin function.
type DeregisterPluginInput struct {
// Name is the name of the plugin. Required.
Name string `json:"-"`
// Type of the plugin. Required.
Type consts.PluginType `json:"type"`
}
// DeregisterPlugin removes the plugin with the given name from the plugin
// catalog.
func (c *Sys) DeregisterPlugin(i *DeregisterPluginInput) error {
path := catalogPathByType(i.Type, i.Name)
req := c.c.NewRequest(http.MethodDelete, path)
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, req)
if err == nil {
defer resp.Body.Close()
}
return err
}
// catalogPathByType is a helper to construct the proper API path by plugin type
func catalogPathByType(pluginType consts.PluginType, name string) string {
path := fmt.Sprintf("/v1/sys/plugins/catalog/%s/%s", pluginType, name)
// Backwards compat, if type is not provided then use old path
if pluginType == consts.PluginTypeUnknown {
path = fmt.Sprintf("/v1/sys/plugins/catalog/%s", name)
}
return path
}
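A short usage sketch of the plugin catalog listing above, again assuming a client configured from the environment:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/vault/api"
	"github.com/hashicorp/vault/sdk/helper/consts"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// List only secrets-engine plugins; consts.PluginTypeUnknown would list every type.
	resp, err := client.Sys().ListPlugins(&api.ListPluginsInput{Type: consts.PluginTypeSecrets})
	if err != nil {
		log.Fatal(err)
	}
	if resp != nil {
		for _, name := range resp.PluginsByType[consts.PluginTypeSecrets] {
			fmt.Println(name)
		}
	}
}
```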

vendor/github.com/hashicorp/vault/api/sys_policy.go generated vendored Normal file

@ -0,0 +1,113 @@
package api
import (
"context"
"errors"
"fmt"
"github.com/mitchellh/mapstructure"
)
func (c *Sys) ListPolicies() ([]string, error) {
r := c.c.NewRequest("LIST", "/v1/sys/policies/acl")
// Set this for broader compatibility, but we use LIST above to be able to
// handle the wrapping lookup function
r.Method = "GET"
r.Params.Set("list", "true")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
secret, err := ParseSecret(resp.Body)
if err != nil {
return nil, err
}
if secret == nil || secret.Data == nil {
return nil, errors.New("data from server response is empty")
}
var result []string
err = mapstructure.Decode(secret.Data["keys"], &result)
if err != nil {
return nil, err
}
return result, err
}
func (c *Sys) GetPolicy(name string) (string, error) {
r := c.c.NewRequest("GET", fmt.Sprintf("/v1/sys/policies/acl/%s", name))
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if resp != nil {
defer resp.Body.Close()
if resp.StatusCode == 404 {
return "", nil
}
}
if err != nil {
return "", err
}
secret, err := ParseSecret(resp.Body)
if err != nil {
return "", err
}
if secret == nil || secret.Data == nil {
return "", errors.New("data from server response is empty")
}
if policyRaw, ok := secret.Data["policy"]; ok {
return policyRaw.(string), nil
}
return "", fmt.Errorf("no policy found in response")
}
func (c *Sys) PutPolicy(name, rules string) error {
body := map[string]string{
"policy": rules,
}
r := c.c.NewRequest("PUT", fmt.Sprintf("/v1/sys/policies/acl/%s", name))
if err := r.SetJSONBody(body); err != nil {
return err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return err
}
defer resp.Body.Close()
return nil
}
func (c *Sys) DeletePolicy(name string) error {
r := c.c.NewRequest("DELETE", fmt.Sprintf("/v1/sys/policies/acl/%s", name))
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
type getPoliciesResp struct {
Rules string `json:"rules"`
}
type listPoliciesResp struct {
Policies []string `json:"policies"`
}
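As a rough sketch of driving the ACL policy helpers above, assuming a client configured from the environment; the policy name and rules here are hypothetical placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/vault/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical policy name and rules; adjust the paths to your own layout.
	rules := `path "kv/data/clusters/*" { capabilities = ["read", "list"] }`
	if err := client.Sys().PutPolicy("example-cluster-nodes", rules); err != nil {
		log.Fatal(err)
	}

	policy, err := client.Sys().GetPolicy("example-cluster-nodes")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(policy)
}
```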

vendor/github.com/hashicorp/vault/api/sys_raft.go generated vendored Normal file

@ -0,0 +1,130 @@
package api
import (
"context"
"io"
"net/http"
"github.com/hashicorp/vault/sdk/helper/consts"
)
// RaftJoinResponse represents the response of the raft join API
type RaftJoinResponse struct {
Joined bool `json:"joined"`
}
// RaftJoinRequest represents the parameters consumed by the raft join API
type RaftJoinRequest struct {
LeaderAPIAddr string `json:"leader_api_addr"`
LeaderCACert string `json:"leader_ca_cert"`
LeaderClientCert string `json:"leader_client_cert"`
LeaderClientKey string `json:"leader_client_key"`
Retry bool `json:"retry"`
}
// RaftJoin adds the node from which this call is invoked to the raft
// cluster represented by the leader address in the parameter.
func (c *Sys) RaftJoin(opts *RaftJoinRequest) (*RaftJoinResponse, error) {
r := c.c.NewRequest("POST", "/v1/sys/storage/raft/join")
if err := r.SetJSONBody(opts); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result RaftJoinResponse
err = resp.DecodeJSON(&result)
return &result, err
}
// RaftSnapshot invokes the API that takes the snapshot of the raft cluster and
// writes it to the supplied io.Writer.
func (c *Sys) RaftSnapshot(snapWriter io.Writer) error {
r := c.c.NewRequest("GET", "/v1/sys/storage/raft/snapshot")
r.URL.RawQuery = r.Params.Encode()
req, err := http.NewRequest(http.MethodGet, r.URL.RequestURI(), nil)
if err != nil {
return err
}
req.URL.User = r.URL.User
req.URL.Scheme = r.URL.Scheme
req.URL.Host = r.URL.Host
req.Host = r.URL.Host
if r.Headers != nil {
for header, vals := range r.Headers {
for _, val := range vals {
req.Header.Add(header, val)
}
}
}
if len(r.ClientToken) != 0 {
req.Header.Set(consts.AuthHeaderName, r.ClientToken)
}
if len(r.WrapTTL) != 0 {
req.Header.Set("X-Vault-Wrap-TTL", r.WrapTTL)
}
if len(r.MFAHeaderVals) != 0 {
for _, mfaHeaderVal := range r.MFAHeaderVals {
req.Header.Add("X-Vault-MFA", mfaHeaderVal)
}
}
if r.PolicyOverride {
req.Header.Set("X-Vault-Policy-Override", "true")
}
// Avoiding the use of RawRequestWithContext which reads the response body
// to determine if the body contains error message.
var result *Response
resp, err := c.c.config.HttpClient.Do(req)
if err != nil {
return err
}
if resp == nil {
return nil
}
result = &Response{Response: resp}
if err := result.Error(); err != nil {
return err
}
_, err = io.Copy(snapWriter, resp.Body)
if err != nil {
return err
}
return nil
}
// RaftSnapshotRestore reads the snapshot from the io.Reader and installs that
// snapshot, returning the cluster to the state defined by it.
func (c *Sys) RaftSnapshotRestore(snapReader io.Reader, force bool) error {
path := "/v1/sys/storage/raft/snapshot"
if force {
path = "/v1/sys/storage/raft/snapshot-force"
}
r := c.c.NewRequest("POST", path)
r.Body = snapReader
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return err
}
defer resp.Body.Close()
return nil
}

vendor/github.com/hashicorp/vault/api/sys_rekey.go generated vendored Normal file

@ -0,0 +1,388 @@
package api
import (
"context"
"errors"
"github.com/mitchellh/mapstructure"
)
func (c *Sys) RekeyStatus() (*RekeyStatusResponse, error) {
r := c.c.NewRequest("GET", "/v1/sys/rekey/init")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result RekeyStatusResponse
err = resp.DecodeJSON(&result)
return &result, err
}
func (c *Sys) RekeyRecoveryKeyStatus() (*RekeyStatusResponse, error) {
r := c.c.NewRequest("GET", "/v1/sys/rekey-recovery-key/init")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result RekeyStatusResponse
err = resp.DecodeJSON(&result)
return &result, err
}
func (c *Sys) RekeyVerificationStatus() (*RekeyVerificationStatusResponse, error) {
r := c.c.NewRequest("GET", "/v1/sys/rekey/verify")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result RekeyVerificationStatusResponse
err = resp.DecodeJSON(&result)
return &result, err
}
func (c *Sys) RekeyRecoveryKeyVerificationStatus() (*RekeyVerificationStatusResponse, error) {
r := c.c.NewRequest("GET", "/v1/sys/rekey-recovery-key/verify")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result RekeyVerificationStatusResponse
err = resp.DecodeJSON(&result)
return &result, err
}
func (c *Sys) RekeyInit(config *RekeyInitRequest) (*RekeyStatusResponse, error) {
r := c.c.NewRequest("PUT", "/v1/sys/rekey/init")
if err := r.SetJSONBody(config); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result RekeyStatusResponse
err = resp.DecodeJSON(&result)
return &result, err
}
func (c *Sys) RekeyRecoveryKeyInit(config *RekeyInitRequest) (*RekeyStatusResponse, error) {
r := c.c.NewRequest("PUT", "/v1/sys/rekey-recovery-key/init")
if err := r.SetJSONBody(config); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result RekeyStatusResponse
err = resp.DecodeJSON(&result)
return &result, err
}
func (c *Sys) RekeyCancel() error {
r := c.c.NewRequest("DELETE", "/v1/sys/rekey/init")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
func (c *Sys) RekeyRecoveryKeyCancel() error {
r := c.c.NewRequest("DELETE", "/v1/sys/rekey-recovery-key/init")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
func (c *Sys) RekeyVerificationCancel() error {
r := c.c.NewRequest("DELETE", "/v1/sys/rekey/verify")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
func (c *Sys) RekeyRecoveryKeyVerificationCancel() error {
r := c.c.NewRequest("DELETE", "/v1/sys/rekey-recovery-key/verify")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
func (c *Sys) RekeyUpdate(shard, nonce string) (*RekeyUpdateResponse, error) {
body := map[string]interface{}{
"key": shard,
"nonce": nonce,
}
r := c.c.NewRequest("PUT", "/v1/sys/rekey/update")
if err := r.SetJSONBody(body); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result RekeyUpdateResponse
err = resp.DecodeJSON(&result)
return &result, err
}
func (c *Sys) RekeyRecoveryKeyUpdate(shard, nonce string) (*RekeyUpdateResponse, error) {
body := map[string]interface{}{
"key": shard,
"nonce": nonce,
}
r := c.c.NewRequest("PUT", "/v1/sys/rekey-recovery-key/update")
if err := r.SetJSONBody(body); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result RekeyUpdateResponse
err = resp.DecodeJSON(&result)
return &result, err
}
func (c *Sys) RekeyRetrieveBackup() (*RekeyRetrieveResponse, error) {
r := c.c.NewRequest("GET", "/v1/sys/rekey/backup")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
secret, err := ParseSecret(resp.Body)
if err != nil {
return nil, err
}
if secret == nil || secret.Data == nil {
return nil, errors.New("data from server response is empty")
}
var result RekeyRetrieveResponse
err = mapstructure.Decode(secret.Data, &result)
if err != nil {
return nil, err
}
return &result, err
}
func (c *Sys) RekeyRetrieveRecoveryBackup() (*RekeyRetrieveResponse, error) {
r := c.c.NewRequest("GET", "/v1/sys/rekey/recovery-key-backup")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
secret, err := ParseSecret(resp.Body)
if err != nil {
return nil, err
}
if secret == nil || secret.Data == nil {
return nil, errors.New("data from server response is empty")
}
var result RekeyRetrieveResponse
err = mapstructure.Decode(secret.Data, &result)
if err != nil {
return nil, err
}
return &result, err
}
func (c *Sys) RekeyDeleteBackup() error {
r := c.c.NewRequest("DELETE", "/v1/sys/rekey/backup")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
func (c *Sys) RekeyDeleteRecoveryBackup() error {
r := c.c.NewRequest("DELETE", "/v1/sys/rekey/recovery-key-backup")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
func (c *Sys) RekeyVerificationUpdate(shard, nonce string) (*RekeyVerificationUpdateResponse, error) {
body := map[string]interface{}{
"key": shard,
"nonce": nonce,
}
r := c.c.NewRequest("PUT", "/v1/sys/rekey/verify")
if err := r.SetJSONBody(body); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result RekeyVerificationUpdateResponse
err = resp.DecodeJSON(&result)
return &result, err
}
func (c *Sys) RekeyRecoveryKeyVerificationUpdate(shard, nonce string) (*RekeyVerificationUpdateResponse, error) {
body := map[string]interface{}{
"key": shard,
"nonce": nonce,
}
r := c.c.NewRequest("PUT", "/v1/sys/rekey-recovery-key/verify")
if err := r.SetJSONBody(body); err != nil {
return nil, err
}
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result RekeyVerificationUpdateResponse
err = resp.DecodeJSON(&result)
return &result, err
}
type RekeyInitRequest struct {
SecretShares int `json:"secret_shares"`
SecretThreshold int `json:"secret_threshold"`
StoredShares int `json:"stored_shares"`
PGPKeys []string `json:"pgp_keys"`
Backup bool
RequireVerification bool `json:"require_verification"`
}
type RekeyStatusResponse struct {
Nonce string `json:"nonce"`
Started bool `json:"started"`
T int `json:"t"`
N int `json:"n"`
Progress int `json:"progress"`
Required int `json:"required"`
PGPFingerprints []string `json:"pgp_fingerprints"`
Backup bool `json:"backup"`
VerificationRequired bool `json:"verification_required"`
VerificationNonce string `json:"verification_nonce"`
}
type RekeyUpdateResponse struct {
Nonce string `json:"nonce"`
Complete bool `json:"complete"`
Keys []string `json:"keys"`
KeysB64 []string `json:"keys_base64"`
PGPFingerprints []string `json:"pgp_fingerprints"`
Backup bool `json:"backup"`
VerificationRequired bool `json:"verification_required"`
VerificationNonce string `json:"verification_nonce,omitempty"`
}
type RekeyRetrieveResponse struct {
Nonce string `json:"nonce" mapstructure:"nonce"`
Keys map[string][]string `json:"keys" mapstructure:"keys"`
KeysB64 map[string][]string `json:"keys_base64" mapstructure:"keys_base64"`
}
type RekeyVerificationStatusResponse struct {
Nonce string `json:"nonce"`
Started bool `json:"started"`
T int `json:"t"`
N int `json:"n"`
Progress int `json:"progress"`
}
type RekeyVerificationUpdateResponse struct {
Nonce string `json:"nonce"`
Complete bool `json:"complete"`
}

vendor/github.com/hashicorp/vault/api/sys_rotate.go generated vendored Normal file

@ -0,0 +1,77 @@
package api
import (
"context"
"encoding/json"
"errors"
"time"
)
func (c *Sys) Rotate() error {
r := c.c.NewRequest("POST", "/v1/sys/rotate")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
func (c *Sys) KeyStatus() (*KeyStatus, error) {
r := c.c.NewRequest("GET", "/v1/sys/key-status")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
secret, err := ParseSecret(resp.Body)
if err != nil {
return nil, err
}
if secret == nil || secret.Data == nil {
return nil, errors.New("data from server response is empty")
}
var result KeyStatus
termRaw, ok := secret.Data["term"]
if !ok {
return nil, errors.New("term not found in response")
}
term, ok := termRaw.(json.Number)
if !ok {
return nil, errors.New("could not convert term to a number")
}
term64, err := term.Int64()
if err != nil {
return nil, err
}
result.Term = int(term64)
installTimeRaw, ok := secret.Data["install_time"]
if !ok {
return nil, errors.New("install_time not found in response")
}
installTimeStr, ok := installTimeRaw.(string)
if !ok {
return nil, errors.New("could not convert install_time to a string")
}
installTime, err := time.Parse(time.RFC3339Nano, installTimeStr)
if err != nil {
return nil, err
}
result.InstallTime = installTime
return &result, err
}
type KeyStatus struct {
Term int `json:"term"`
InstallTime time.Time `json:"install_time"`
}

vendor/github.com/hashicorp/vault/api/sys_seal.go generated vendored Normal file

@ -0,0 +1,86 @@
package api
import "context"
func (c *Sys) SealStatus() (*SealStatusResponse, error) {
r := c.c.NewRequest("GET", "/v1/sys/seal-status")
return sealStatusRequest(c, r)
}
func (c *Sys) Seal() error {
r := c.c.NewRequest("PUT", "/v1/sys/seal")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err == nil {
defer resp.Body.Close()
}
return err
}
func (c *Sys) ResetUnsealProcess() (*SealStatusResponse, error) {
body := map[string]interface{}{"reset": true}
r := c.c.NewRequest("PUT", "/v1/sys/unseal")
if err := r.SetJSONBody(body); err != nil {
return nil, err
}
return sealStatusRequest(c, r)
}
func (c *Sys) Unseal(shard string) (*SealStatusResponse, error) {
body := map[string]interface{}{"key": shard}
r := c.c.NewRequest("PUT", "/v1/sys/unseal")
if err := r.SetJSONBody(body); err != nil {
return nil, err
}
return sealStatusRequest(c, r)
}
func (c *Sys) UnsealWithOptions(opts *UnsealOpts) (*SealStatusResponse, error) {
r := c.c.NewRequest("PUT", "/v1/sys/unseal")
if err := r.SetJSONBody(opts); err != nil {
return nil, err
}
return sealStatusRequest(c, r)
}
func sealStatusRequest(c *Sys, r *Request) (*SealStatusResponse, error) {
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var result SealStatusResponse
err = resp.DecodeJSON(&result)
return &result, err
}
type SealStatusResponse struct {
Type string `json:"type"`
Initialized bool `json:"initialized"`
Sealed bool `json:"sealed"`
T int `json:"t"`
N int `json:"n"`
Progress int `json:"progress"`
Nonce string `json:"nonce"`
Version string `json:"version"`
Migration bool `json:"migration"`
ClusterName string `json:"cluster_name,omitempty"`
ClusterID string `json:"cluster_id,omitempty"`
RecoverySeal bool `json:"recovery_seal"`
}
type UnsealOpts struct {
Key string `json:"key"`
Reset bool `json:"reset"`
Migrate bool `json:"migrate"`
}
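And a brief sketch of the seal-status helper above, under the same assumptions about client configuration:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/vault/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	status, err := client.Sys().SealStatus()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("initialized=%v sealed=%v progress=%d/%d\n",
		status.Initialized, status.Sealed, status.Progress, status.T)
}
```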

vendor/github.com/hashicorp/vault/api/sys_stepdown.go generated vendored Normal file

@ -0,0 +1,15 @@
package api
import "context"
func (c *Sys) StepDown() error {
r := c.c.NewRequest("PUT", "/v1/sys/step-down")
ctx, cancelFunc := context.WithCancel(context.Background())
defer cancelFunc()
resp, err := c.c.RawRequestWithContext(ctx, r)
if resp != nil && resp.Body != nil {
resp.Body.Close()
}
return err
}

vendor/github.com/hashicorp/vault/sdk/LICENSE generated vendored Normal file

@ -0,0 +1,363 @@
Mozilla Public License, version 2.0
1. Definitions
1.1. "Contributor"
means each individual or legal entity that creates, contributes to the
creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used by a
Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached the
notice in Exhibit A, the Executable Form of such Source Code Form, and
Modifications of such Source Code Form, in each case including portions
thereof.
1.5. "Incompatible With Secondary Licenses"
means
a. that the initial Contributor has attached the notice described in
Exhibit B to the Covered Software; or
b. that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the terms of
a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in a
separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible, whether
at the time of the initial grant or subsequently, any and all of the
rights conveyed by this License.
1.10. "Modifications"
means any of the following:
a. any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered Software; or
b. any new file in Source Code Form that contains any Covered Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the License,
by the making, using, selling, offering for sale, having made, import,
or transfer of either its Contributions or its Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU Lesser
General Public License, Version 2.1, the GNU Affero General Public
License, Version 3.0, or any later versions of those licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that controls, is
controlled by, or is under common control with You. For purposes of this
definition, "control" means (a) the power, direct or indirect, to cause
the direction or management of such entity, whether by contract or
otherwise, or (b) ownership of more than fifty percent (50%) of the
outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
a. under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
b. under Patent Claims of such Contributor to make, use, sell, offer for
sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
a. for any code that a Contributor has removed from Covered Software; or
b. for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
c. under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights to
grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
a. such Covered Software must also be made available in Source Code Form,
as described in Section 3.1, and You must inform recipients of the
Executable Form how they can obtain a copy of such Source Code Form by
reasonable means in a timely manner, at a charge no more than the cost
of distribution to the recipient; and
b. You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter the
recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty, or
limitations of liability) contained within the Source Code Form of the
Covered Software, except that You may alter any license notices to the
extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License
with respect to some or all of the Covered Software due to statute,
judicial order, or regulation then You must: (a) comply with the terms of
this License to the maximum extent possible; and (b) describe the
limitations and the code they affect. Such description must be placed in a
text file included with all distributions of the Covered Software under
this License. Except to the extent prohibited by statute or regulation,
such description must be sufficiently detailed for a recipient of ordinary
skill to be able to understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You
fail to comply with any of its terms. However, if You become compliant,
then the rights granted under this License from a particular Contributor
are reinstated (a) provisionally, unless and until such Contributor
explicitly and finally terminates Your grants, and (b) on an ongoing
basis, if such Contributor fails to notify You of the non-compliance by
some reasonable means prior to 60 days after You have come back into
compliance. Moreover, Your grants from a particular Contributor are
reinstated on an ongoing basis if such Contributor notifies You of the
non-compliance by some reasonable means, this is the first time You have
received notice of non-compliance with this License from such
Contributor, and You become compliant prior to 30 days after Your receipt
of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
license agreements (excluding distributors and resellers) which have been
validly granted by You or Your distributors under this License prior to
termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an "as is" basis,
without warranty of any kind, either expressed, implied, or statutory,
including, without limitation, warranties that the Covered Software is free
of defects, merchantable, fit for a particular purpose or non-infringing.
The entire risk as to the quality and performance of the Covered Software
is with You. Should any Covered Software prove defective in any respect,
You (not any Contributor) assume the cost of any necessary servicing,
repair, or correction. This disclaimer of warranty constitutes an essential
part of this License. No use of any Covered Software is authorized under
this License except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including
negligence), contract, or otherwise, shall any Contributor, or anyone who
distributes Covered Software as permitted above, be liable to You for any
direct, indirect, special, incidental, or consequential damages of any
character including, without limitation, damages for lost profits, loss of
goodwill, work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses, even if such party shall have been
informed of the possibility of such damages. This limitation of liability
shall not apply to liability for death or personal injury resulting from
such party's negligence to the extent applicable law prohibits such
limitation. Some jurisdictions do not allow the exclusion or limitation of
incidental or consequential damages, so this exclusion and limitation may
not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts
of a jurisdiction where the defendant maintains its principal place of
business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions. Nothing
in this Section shall prevent a party's ability to bring cross-claims or
counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides that
the language of a contract shall be construed against the drafter shall not
be used to construe this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses If You choose to distribute Source Code Form that is
Incompatible With Secondary Licenses under the terms of this version of
the License, the notice described in Exhibit B of this License must be
attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the
terms of the Mozilla Public License, v.
2.0. If a copy of the MPL was not
distributed with this file, You can
obtain one at
http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file,
then You may include the notice in a location (such as a LICENSE file in a
relevant directory) where a recipient would be likely to look for such a
notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
This Source Code Form is "Incompatible
With Secondary Licenses", as defined by
the Mozilla Public License, v. 2.0.

Some files were not shown because too many files have changed in this diff.