mirror of https://github.com/knative/client.git
upgrade to latest dependencies (#1746)
bumping sigs.k8s.io/structured-merge-diff/v4 65611e5...26781d0:
> 26781d0 Merge pull request # 224 from apelisse/revert-backwards-incompatible-changes
> 4ad28cc Merge pull request # 223 from apelisse/v5
> 4d3633c Remove all copies of Schema and Map
> ebd569f Merge pull request # 220 from alexzielenski/fieldannotation
> 1754a8d Create package v5 for backward incompatible change
> b4e893e Revert "use pointer semantics for schema"
> 5e8be97 Merge pull request # 222 from alexzielenski/fix-exponential-bug
> c779eba fix gofmt
> a70144b Revert "Create package v5 for backward incompatible change"
> 8f72076 Merge pull request # 221 from apelisse/improve-operations-parsing
> 478fc28 fix nil schema type not being deduced to whatever the other operand is
> 50567d1 update gofmt
> e3fbe42 Merge pull request # 204 from zqzten/retry_add_back_owned
> 11d9bfa Change operation test dependency on parseableType rather than typename
> 0f66c33 Add exponential test failure for large CR
> 15f722a add benchmark excercising field-level overrides
> 0f3c884 Merge pull request # 215 from alexzielenski/schema-copy
> 41d6b2c add test for multiple version appliers with reliant fields conversions
> 8a2620f Name operation tests after their filename
> 60b3b65 use pointer semantics for schema
> 73412af Merge pull request # 213 from apelisse/add-readme-testdata
> f345637 fixup! add CopyInto method to schema.Schema
> 5e01923 add back owned items at pruned version first
> 00b6b52 add field-level element relationship which overrides referred type
> aa301db Merge pull request # 214 from apelisse/update-jsoniter-go118
> f0fadd0 Add README to explain how schemas can be generated
> c26273c add CopyInto method to schema.Schema
> b2bdf68 Update jsoniter to v1.1.12 to work with go1.18
bumping knative.dev/networking a5c26fe...fcbfa31:
> fcbfa31 Update actions (# 718)
> 8d6b2b0 Update community files (# 717)
> 584ccad upgrade to latest dependencies (# 716)
bumping k8s.io/api 44d27eb...4b838ea:
> 4b838ea Update dependencies to v0.25.2 tag
> fce3016 Merge pull request # 112161 from pohly/automated-cherry-pick-of-# 112129-origin-release-1.25
> 29513a2 dependencies: update to ginkgo v2.1.6 and gomega v1.20.1
> 5c4a1b1 Merge remote-tracking branch 'origin/master' into release-1.25
> 714e431 Merge pull request # 111657 from aojea/hc_nodeport
> 8608211 Merge pull request # 109090 from sarveshr7/multicidr-rangeallocator
> 7840548 doc services healthcheckNodePort is inmutable
> b88698c Merge pull request # 111258 from dobsonj/kep-596-ga-feature-flag
> a7621fb Auto generate code for ClusterCIDR API
> 2f9e588 Merge pull request # 111113 from mimowo/retriable-pod-failures-job-controller
> b964bc7 Move CSIInlineVolume feature to GA
> 14e048f Introduce networking/v1alpha1 api, ClusterCIDR type
> 3be517c Merge pull request # 111696 from liggitt/go119mod
> fe83bea Support handling of pod failures with respect to the specified rules
> 991b481 Merge pull request # 108692 from jsafrane/selinux
> e281bde Update go.mod to go1.19
> f42f86a Regenerate files
> a39f97c Add CSIDriverSpec.SELinuxMount
> c8f0601 Merge pull request # 111677 from dims/stop-panic-in-govet-levee
> ea1451a run lint-dependencies and follow directions
> a2b9bcc Stop panic in govet-levee CI job
> ad89a10 Merge pull request # 110495 from alexzielenski/atomic-objectreference
> e590d1f Merge pull request # 111090 from kinvolk/rata/userns-support-2022
> d978b18 mark persistentvolume's claimRef as granular
> 7488a8c Merge pull request # 111435 from soltysh/cronjob_timezone_beta
> 2a31718 Update autogenerated files
> d351ecd Merge pull request # 111557 from alexzielenski/update-smd-422
> f990455 Update generated
> 9448de2 pkg/apis, staging: add HostUsers to pod spec
> ae37896 update smd to 4.2.3
> 010c740 Promote CronJobTimeZone to beta
> 1b82208 Merge pull request # 110959 from mimowo/retriable-pod-failures-pod-conditions
> 08dedd8 Introduction of a pod condition type indicating disruption. Its `reason` field indicates the reason:
> f5e1938 Merge pull request # 111587 from ialidzhikov/k8s-utils@ee6ede2d64
> b3c30ab Update `k8s.io/utils` to `ee6ede2d64ed`
> be740eb Merge pull request # 111441 from denkensk/respect-topology
> a76179a Merge pull request # 111402 from verb/111030-ec-ga
> acf399c code generated by script for MatchLabelKeys in TopologySpreadConstraint
> 537ea12 Merge pull request # 111442 from ialidzhikov/k8s-utils@56c0de1e6f
> d8280df Remove EphemeralContainers beta disclaimer
> d96a10b api defination for MatchLabelKeys in TopologySpreadConstraint
> 8f210eb Update `k8s.io/utils` to `9bab9ef40391`
> 04aced3 Merge pull request # 111254 from dims/update-to-golang-1.19-rc2
> dda4dee Generate and format files
> f6f0d0e Merge pull request # 111194 from ravisantoshgudimetla/promote-maxSurge-ga
> f77fa25 Merge pull request # 111229 from ravisantoshgudimetla/promote-podOS-GA
> 7b89ea1 Generated: maxSurge for DS
> 9a18f7a Merge pull request # 110178 from kevindelgado/validation-beta-1-25
> 844753e Generated: PodOS field to GA
> d895ab7 api: Promote DS maxSurge to GA
> 096c9df Merge pull request # 110388 from sanposhiho/graduate-mindomain-beta
> 1d3dcfc update kjson
> bdc8eb0 api: Promote PodOS field to GA
> fa32a3a Merge pull request # 110896 from ravisantoshgudimetla/promote-minReadySec-sts-update-ga
> eeea400 Update doc comment
> 08c75a7 Merge pull request # 111008 from cici37/bumpCEL
> 40e8eea Generated: minReadySeconds for STS
> 0110f55 Update v1 package to graduate minDomains to beta
> 0c6d49d Bump cel-go to v0.12.0
> df27ddf api: Promote statefulset MinReadySeconds to GA
> b9bd732 Merge pull request # 110564 from j4m3s-s/add-ports-doc-fix
> 51e4c4a Merge pull request # 111010 from thockin/remove-refs-to-EndpointSliceNodeName
> b7c39ad Fix description of Ports in PodSpec
> 248dcdb Merge pull request # 111001 from pohly/klog-update
> 4756865 Remove obsolete refs to gate EndpointSliceNodeName
> b5b8fba build: update to klog v2.70.1
> 8488949 Merge pull request # 110990 from thockin/svc-typenames-IPFamilyPolicyType
> ed77e2e Rename IPFamilyPolicyType => IPFamilyPolicy
> 33ab20e Merge pull request # 110868 from rikatz/endport-to-ga
> f18d381 Merge pull request # 110831 from chendave/openapi
> 270b22d Generated files for endPort promotion
> 724b071 Bump `kube-openapi` to the latest
> e803bc1 Promote endPort to GA
> b98f264 Merge pull request # 110724 from pohly/klog-update
> 61fcc0f build: update to klog v2.70.0
> edebc67 Merge pull request # 110561 from Shubham82/extend_Description
> 60387f6 Merge pull request # 110378 from lucacome/bump-grpc
> 1cba8e4 RBAC: Modify the Description for the apiGroup.
> 89ed2a8 Bump grpc to v1.47.0
> 6b0201d Merge pull request # 110520 from dims/update-gopkg.in/yaml.v3-to-v3.0.1
> 2c10714 Update gopkg.in/yaml.v3 to v3.0.1
> d50b1bc Merge pull request # 109293 from iamNoah1/improve-ingressclassname-api-doc
> 832b1f4 Merge pull request # 109938 from dims/move-from-k8s.gcr.io-to-registry.k8s.io
> bba462c generate ressources after change request
> ed22bb3 Move from k8s.gcr.io to registry.k8s.io
> d1f2717 add change requests
> be84346 Merge pull request # 109968 from kerthcet/feature/optimize-apifield-comment
> e85f85a generate ressources after change request
> 209903b amend comment of NodeInclusionPolicy
> bac3ee2 add change requests
> ae35a85 Merge pull request # 108492 from kerthcet/feature/add-NodeInclustionPolicies
> a3f4ca9 generate ressources after change request
> 202371b feat: add NodeInclusionPolicy to TopologySpreadConstraint in PodSpec
> 4e39c88 add change requests
> 9b88471 Merge pull request # 109891 from pohly/log-dependency-update
> af2d87a add generated assets
> 548c53c Merge pull request # 109602 from lavalamp/remove-clustername
> 0f669be dependencies: logr and zapr v1.2.3
> 4b894ff first shot improving api doc for ingressclassname
> 5e2f5ad Merge pull request # 109308 from danwinship/traffic-policy-docs
> eb567e7 compat
> a2ee8c7 Merge pull request # 109803 from liggitt/api-fixture-data
> 85c4c9d Clarify ExternalTrafficPolicy/InternalTrafficPolicy definitions
> fcb6509 Merge pull request # 109440 from liggitt/gomod-1.18
> 1afa1ae Remove v1.22.0 API fixture data
> 3f52c1d Regenerate vendor
> a2ae4d4 Add v1.24.0 fixture data
> d672b36 Merge pull request # 109506 from wangrzneu/fix-comment
> f2f8c15 Merge pull request # 109421 from vpnachev/fix/typo-in-token-request-doc-string
> 35e4518 fix IngressClassParametersReferenceScopeCluster comment in staging/src/k8s.io/api/networking
> af4f75e Merge pull request # 105963 from zhucan/bugfix-95367
> aed3ebb Fix typo in TokenRequest doc string
> 7a89730 Merge pull request # 109436 from JamesLaverack/revert-108290
> 56fb9f0 generated code and doc
> 1f82dd7 Revert "Introduce APIs to support multiple ClusterCIDRs (# 108290)"
> 6f81c7b csi: add nodeExpandSecret support for CSI client
bumping knative.dev/eventing af2298f...56e9ee1:
> 56e9ee1 update exact/prefix/suffix filters to accept multiple entries (# 6551)
> d1827fc Update actions (# 6560)
> fdacfa3 Update community files (# 6559)
bumping k8s.io/kube-openapi 3ee0da9...67bda5d:
> 67bda5d Merge pull request # 308 from alexzielenski/fieldlevelannotation
> 011e075 Merge pull request # 302 from ruquanzhao/branch2
> 4d09b3c add field level override integration test for openapi gen
> c39d0f6 Merge pull request # 306 from apelisse/go-1.18
> dff3465 Remove newlineReporter custom report
> 74b52a2 update smd to 4.2.3
> 1062c7a Merge pull request # 304 from dims/switch-to-released-version-of-v3.8.0-github.com/emicklei/go-restful/v3
> 0bed9f0 Update to go1.18
> 89ff079 upgrade ginkgo to ginkgo/v2
> 48d47d9 remove unused function
> 31174f5 Merge pull request # 303 from dims/move-to-newer-v3-version-of-go-restful
> 77c5a1e Switch to released version of v3.8.0 - github.com/emicklei/go-restful/v3
> bff9878 add ability to override structType with field-level annotation
> 5e7f5fd Merge pull request # 300 from Jefftree/remove-bazel-references
> ca8f27e Move to newer version (v3) of github.com/emicklei/go-restful
> b28bf28 Merge pull request # 283 from alexzielenski/gnostic_conversion
> fa41ef1 Remove bazel references
> c715a76 add direct conversion for gnostic v2 types
bumping k8s.io/code-generator 65c70a5...7e9837e:
> 7e9837e Merge pull request # 112161 from pohly/automated-cherry-pick-of-# 112129-origin-release-1.25
> ecb165b dependencies: update to ginkgo v2.1.6 and gomega v1.20.1
> 775c304 Merge remote-tracking branch 'origin/master' into release-1.25
> d31c93c Revert "Remove gcp and azure auth plugins"
> 7a6b27b Update go.mod to go1.19
> fa4467d Merge pull request # 111677 from dims/stop-panic-in-govet-levee
> 7b3066b run lint-dependencies and follow directions
> c04cb7f Stop panic in govet-levee CI job
> aed7155 Merge pull request # 110495 from alexzielenski/atomic-objectreference
> 02dff21 update kube-openapi
> 24b65b5 Update kubectl kustomize to kyaml/v0.13.9, cmd/config/v0.10.9, api/v0.12.1, kustomize/v4.5.7 (# 111606)
> ce96325 Merge pull request # 111557 from alexzielenski/update-smd-422
> 3e298a9 update smd to 4.2.3
> 98b8ad1 Merge pull request # 111587 from ialidzhikov/k8s-utils@ee6ede2d64
> 7d41d4b Update `k8s.io/utils` to `ee6ede2d64ed`
> f7e6982 Merge pull request # 111442 from ialidzhikov/k8s-utils@56c0de1e6f
> 5e969f2 Update `k8s.io/utils` to `9bab9ef40391`
> 22db44c Merge pull request # 111254 from dims/update-to-golang-1.19-rc2
> 53c2ca0 Generate and format files
> a08f67b Merge pull request # 110178 from kevindelgado/validation-beta-1-25
> e22e97f update kjson
> 40a88eb Merge pull request # 111008 from cici37/bumpCEL
> 9a892d0 Bump cel-go to v0.12.0
> d665e29 Merge pull request # 108792 from wongearl/modify-todo
> dc61ef8 Merge pull request # 111001 from pohly/klog-update
> dec659d fix CustomArgs annotation, todo describe
> 162a509 build: update to klog v2.70.1
> 132cd9f Merge pull request # 110831 from chendave/openapi
> 0bd13bc Bump `kube-openapi` to the latest
> 068d9f8 Merge pull request # 110724 from pohly/klog-update
> 051a429 build: update to klog v2.70.0
> 4e8f8f5 Merge pull request # 110378 from lucacome/bump-grpc
> 39381c1 Bump grpc to v1.47.0
> a2acbb4 Merge pull request # 110546 from liggitt/fix-example
> cb496e7 Merge pull request # 110518 from dims/switch-to-released-version-of-v3.8.0-github.com/emicklei/go-restful/v3
> e246394 Drop spurious replace
> f2ea995 Merge pull request # 110520 from dims/update-gopkg.in/yaml.v3-to-v3.0.1
> 306deab Switch to released version of v3.8.0 - github.com/emicklei/go-restful/v3
> d8adc26 Update gopkg.in/yaml.v3 to v3.0.1
> 35a5f00 Merge pull request # 110351 from dims/switch-to-v3-of-github.com/emicklei/go-restful
> e1fbd90 Switch to v3 of github.com/emicklei/go-restful
> 7d977b3 Merge pull request # 110013 from enj/enj/i/remove_azure_gcp_auth_plugins
> 7e85903 Remove gcp and azure auth plugins
> 9f887a7 Merge pull request # 109891 from pohly/log-dependency-update
> acd6f5b Merge pull request # 109602 from lavalamp/remove-clustername
> 7c0fe80 dependencies: logr and zapr v1.2.3
> 05559de generated files
> 8ed2cce Merge pull request # 109804 from cici37/celUpdate
> f82eadd Update go-control-plane, cncf/xds/go, cncf/udpa/go and remove unused versions
> cd6d91a Update GRPC
> df8059b Update genproto and antlr.
> 4a69c11 Bump cel-go to v0.11.2
> 0a834f8 Merge pull request # 109440 from liggitt/gomod-1.18
> 5cbad16 Regenerate vendor
> 8f17de0 Merge pull request # 109031 from Jefftree/openapiv3beta
> 858bf3c generated: Update kube-openapi and vendor
bumping knative.dev/hack 92a65f1...3fdc50b:
> 3fdc50b Remove Signing Feature Gate (# 236)
> 2d67db5 generate provenances (# 237)
> 52a87e1 Update community files (# 235)
bumping knative.dev/pkg 158538c...21d3b47:
> 21d3b47 upgrade to latest dependencies (# 2608)
> 8178c38 update k8s to 1.25.2 (# 2599)
> fb2e4fb Preserve webhook namespaceSelector.matchLabels (# 2605)
> 5c5da28 Update actions (# 2607)
> 1fb3e67 Update community files (# 2606)
> bc93b0a bump min kubernetes to v1.23 (# 2595)
> 8cacac2 upgrade to latest dependencies (# 2604)
bumping k8s.io/apiextensions-apiserver b993e22...ebdae04:
> ebdae04 Update dependencies to v0.25.2 tag
> a290ab4 Merge pull request # 112161 from pohly/automated-cherry-pick-of-# 112129-origin-release-1.25
> d8d88ce Merge pull request # 112107 from DangerOnTheRanger/automated-cherry-pick-of-# 111964-upstream-release-1.25
> 34ce90c dependencies: update to ginkgo v2.1.6 and gomega v1.20.1
> 063c82b Add unit tests.
> 6d0af3c Run pin-dependency.sh and update-vendor.sh.
> 850356a Update go.mod to go1.19
> 1994fc0 Merge pull request # 111677 from dims/stop-panic-in-govet-levee
> e2f1a8a run lint-dependencies and follow directions
> 79e4ee6 Stop panic in govet-levee CI job
> a9d332a Merge pull request # 110495 from alexzielenski/atomic-objectreference
> 0bec600 update kube-openapi
> 914f1d7 Update kubectl kustomize to kyaml/v0.13.9, cmd/config/v0.10.9, api/v0.12.1, kustomize/v4.5.7 (# 111606)
> d25d03c Merge pull request # 111557 from alexzielenski/update-smd-422
> 8f03975 Merge pull request # 105126 from sallyom/tracing-kubelet
> 11291a4 update smd to 4.2.3
> a34b99b kubelet tracing: update vendor
> 5fd535d kubelet tracing
> c66b2d1 Merge pull request # 111587 from ialidzhikov/k8s-utils@ee6ede2d64
> 58b8011 Update `k8s.io/utils` to `ee6ede2d64ed`
> 49cedce Merge pull request # 111446 from alexzielenski/trivial-x-preserve-unknown-fields-correction
> 5f4930e Merge pull request # 111519 from jpbetz/skip-cel-validation
> 3889972 correct OpenAPI extension in error message
> 85460a1 Merge pull request # 111483 from jpbetz/fix-missing-root-object-type
> 6158148 Skip CEL expression validation if OpenAPIv3 schema is invalid
> de80fce Skip schemas that are non-structural in NewValidator
> 7846e04 Merge pull request # 111451 from DangerOnTheRanger/cel-use-case-tests
> 5ac5089 Merge pull request # 111442 from ialidzhikov/k8s-utils@56c0de1e6f
> 560e664 Add examples of matchExpressions validations.
> b317854 Update `k8s.io/utils` to `9bab9ef40391`
> 5f2cc68 Merge pull request # 111254 from dims/update-to-golang-1.19-rc2
> d8abaab Generate and format files
> 02604ce Merge pull request # 108108 from thaJeztah/switch_golang_protobuf_extensions
> fec6a8f downgrade github.com/matttproud/golang_protobuf_extensions to v1.0.1
> a6bd8bf Merge pull request # 111156 from DangerOnTheRanger/cel-traversal-optimization
> b655fba Replace estimateMinSizeJSON with DeclType.MinSerializedSize.
> 758d6d5 Merge pull request # 110135 from jpbetz/efficient-crd-cel-validation
> dbdebf1 Reuse structural schema and cel decls during CRD validation
> 0bdac0d Merge pull request # 111071 from cici37/updateCEL
> d0f6952 Merge pull request # 111242 from wojtek-t/fix_leaking_goroutines_11
> d562a8a Sort out dep order
> 7c89666 Merge pull request # 110178 from kevindelgado/validation-beta-1-25
> 60ed3b4 Clean shutdown of apiserver integration tests
> b19f020 Switch to use cel.TypeToExprType(celType) to generate the exprType.
> 684f2a7 Merge pull request # 109639 from Abirdcfly/fixduplicateimport
> fce3486 update kjson
> 224fabd Turn on DefaultUTCTimeZone for cel-go.
> 1c3183a Merge pull request # 111035 from jiahuif-forks/feature/matrics/cel
> 896ce20 cleanup: remove duplicate import
> e60fcc5 Add server-side metadata unknown field validation
> adca902 Enable the empty list sum support.
> 9bd3fbc Merge pull request # 111008 from cici37/bumpCEL
> 40ff5d6 metrics for CEL compilation and evaluation.
> ea6670e Pick up dispatcher refactor changes from cel-go
> 62e4d5d Adding comment for maxValidDepth check.
> 33c72d2 Bump cel-go to v0.12.4
> 9f63533 Bump cel-go to v0.12.3
> ce744a8 Add test for CEL maxRecurionDepth check.
> 8b5da6a Bump cel-go to v0.12.1
> 0735877 Bump cel-go to v0.12.0
> 8789ab0 Merge pull request # 109111 from chendave/ginkgo_upstream
> 3203d57 update ginkgo from v1 to v2 and gomega to 1.19.0
> f3e37dc Merge pull request # 111001 from pohly/klog-update
> 8a94cf1 build: update to klog v2.70.1
> a394eda Merge pull request # 110831 from chendave/openapi
> 020df44 Merge pull request # 110548 from benluddy/cel-unstructuredtoval-benchmark-fixture
> 7a0dda7 Bump `kube-openapi` to the latest
> 92942df Merge pull request # 110549 from benluddy/skip-unused-oldself-activation-work
> ff05b62 Do test fixture setup outside cel.UnstructuredToVal benchmark loop.
> 25b2ab4 Merge pull request # 110788 from 21kyu/change_reflect_ptr
> a5932b2 Only provide an oldSelf binding when referenced by a CEL rule.
> 17752d1 Change reflect.Ptr to reflect.Pointer
> 442c886 Merge pull request # 109510 from sugangli/pinhole-fw
> 3583982 Merge pull request # 110731 from jkh52/update-netproxy
> 3c6fc8b update kube-controller-manager dependencies
> 3963fc7 Bump konnectivity-client to 0.0.32
> addd726 Merge pull request # 110724 from pohly/klog-update
> 387f93d build: update to klog v2.70.0
> 27cab03 Merge pull request # 110378 from lucacome/bump-grpc
> 356b8d7 Merge pull request # 110519 from dims/update-etcd-packages-to-v3.5.4
> c1fc8de Bump grpc to v1.47.0
> c1ee610 Merge pull request # 110179 from Jefftree/fix_openapi_v2
> e5b7b82 update etcd packages to v3.5.4
> d877aaa Merge pull request # 110518 from dims/switch-to-released-version-of-v3.8.0-github.com/emicklei/go-restful/v3
> 87c2a6e Prune defaults for CRD serving
> e16b199 Merge pull request # 110520 from dims/update-gopkg.in/yaml.v3-to-v3.0.1
> 8aaaf68 Switch to released version of v3.8.0 - github.com/emicklei/go-restful/v3
> 546aafe Update gopkg.in/yaml.v3 to v3.0.1
> bd811e0 Merge pull request # 110351 from dims/switch-to-v3-of-github.com/emicklei/go-restful
> 83fedf3 Switch to v3 of github.com/emicklei/go-restful
> 0b237d1 Merge pull request # 109552 from cyclinder/fix_CVE-2022-27191
> 6af94f0 Merge pull request # 109228 from pacoxu/cleanup-ut
> a8129d5 fix CVE-2022-27191: Bump golang.org/x/crypto to v0.0.0-20220315160706
> 916bb0f Merge pull request # 110131 from stevekuznetsov/skuznets/stop-copying-metadata
> 37a5c7c prevent the unit test name too long in report
> 6ca81eb Merge pull request # 110130 from stevekuznetsov/skuznets/clean-up-crd-storage
> dd95027 customresource: stop shallow-copying metadata
> 774eebb Merge pull request # 109835 from cici37/celUpdate
> 9215e7c customresouce: clean up the storage constructor
> a47945a Merge pull request # 110061 from wojtek-t/shutdown_apiextensions
> 8174e5f Initialize a base CEL env and share it to avoid repeated function declaration validation
> eb1841e Merge pull request # 109880 from Jefftree/patch-4
> 1b8c30d Cleanup CRD storage on shutdown
> 1ec2449 Merge pull request # 109978 from wojtek-t/remove_storage_tracking
> 2046a29 Remove warning log for merging meta and scale type
> 21a45c1 Merge pull request # 108011 from cici37/benchmark
> 94148b5 Cleanup no-longer used storage cleanup method
> fe87b2a Merge pull request # 109891 from pohly/log-dependency-update
> 97b5e54 benchmark unstructuredToVal
> 057887d Merge pull request # 109602 from lavalamp/remove-clustername
> 08a2064 dependencies: logr and zapr v1.2.3
> 9b3dd0c generated files
> 42c5701 Merge pull request # 109804 from cici37/celUpdate
> 4b7a318 Update go-control-plane, cncf/xds/go, cncf/udpa/go and remove unused versions
> fc318e0 Update GRPC
> 42e3d58 Update genproto and antlr.
> 637f1a6 Bump cel-go to v0.11.2
> 0b2f79f Merge pull request # 109440 from liggitt/gomod-1.18
> 47b3e71 Merge pull request # 109268 from liggitt/pruning-metadata
> dd13fbf Regenerate vendor
> 86df472 Merge pull request # 109303 from wojtek-t/clean_storage_shutdown
> b4a77b2 Fix bug treating metadata fields as unknown fields
> d779722 Implement Destroy() method for all registries
> 3a045b9 Expand unit tests of pruning of unknown fields in metadata
> 24b17b0 Merge pull request # 109242 from cici37/addTest
bumping k8s.io/apimachinery 97e5df2...478dd6e:
> 478dd6e Merge pull request # 112527 from liggitt/automated-cherry-pick-of-# 112526-upstream-release-1.25
> 14bc1be Limit redirect proxy handling to redirected responses
> 8252641 Merge pull request # 112330 from enj/automated-cherry-pick-of-# 112193-upstream-release-1.25
> 10b456c Merge pull request # 112161 from pohly/automated-cherry-pick-of-# 112129-origin-release-1.25
> 4759a80 Add an option for aggregator
> 3296217 dependencies: update to ginkgo v2.1.6 and gomega v1.20.1
> 117bd9b Merge pull request # 111113 from mimowo/retriable-pod-failures-job-controller
> 74deb3d Merge pull request # 111696 from liggitt/go119mod
> dbffa07 Support handling of pod failures with respect to the specified rules
> fef5499 Update go.mod to go1.19
> 41606c6 Merge pull request # 111677 from dims/stop-panic-in-govet-levee
> 6627090 run lint-dependencies and follow directions
> addc01f Stop panic in govet-levee CI job
> f15b816 Merge pull request # 110495 from alexzielenski/atomic-objectreference
> e68cae5 update kube-openapi
> b541046 Merge pull request # 111557 from alexzielenski/update-smd-422
> ace683f update smd to 4.2.3
> 899984f Merge pull request # 111587 from ialidzhikov/k8s-utils@ee6ede2d64
> c77a8ed Update `k8s.io/utils` to `ee6ede2d64ed`
> 954536d Merge pull request # 111450 from HecarimV/fix-22072710
> d58901c Merge pull request # 111454 from HecarimV/fix-log-capitalized
> a9b0d05 cleanup: omit redundant arguments in make call
> 8b07078 Merge pull request # 111496 from HecarimV/unnecessary-fmt
> 1be2ea2 cleanup: fix some error log capitalization
> 3bab4b1 Remove unnecessary use of fmt.Sprintf
> 47ba8cb Merge pull request # 111442 from ialidzhikov/k8s-utils@56c0de1e6f
> 0a1c3aa Update `k8s.io/utils` to `9bab9ef40391`
> 7fb0342 Merge pull request # 111254 from dims/update-to-golang-1.19-rc2
> 9652184 Merge pull request # 106388 from alexzielenski/ssa-ignore-nonsemantic-changes
> fdf7942 Generate and format files
> 915d89a Merge pull request # 111173 from BinacsLee/binacs/regenerate-sets
> 65bf58a optimize nil and empty also for slices
> 0661104 Merge pull request # 110178 from kevindelgado/validation-beta-1-25
> fb683c6 Re-Generate k8s.io/apimachinery/pkg/util/sets
> 6586a12 optimize nil and empty case for parity with other branch
> 5d375c1 update kjson
> 019f1e1 revert timestamp updates to object if non-managed fields do not change
> cff14a5 Merge pull request # 111081 from HecarimV/fix-22071210
> afc5e00 Merge pull request # 111008 from cici37/bumpCEL
> 6f7b214 Fix: some typo in apimachinery/pkg
> 51e28bc Merge pull request # 111097 from saltbo/fix-thethe-typo
> cd7836f Bump cel-go to v0.12.0
> 5074e27 fix: update the typo code comment
> 0897ed8 Merge pull request # 108532 from cbandy/status-cause
> 9cb1f71 Merge pull request # 110916 from zhoumingcheng/master-code-v1
> 9291eff api/errors: Improve performance of Is* functions
> 4afd3f4 Merge pull request # 109111 from chendave/ginkgo_upstream
> 8c521e1 Fix a typo
> 1eec2ee Use errors.As to detect wrapping in StatusCause
> c87322d update ginkgo from v1 to v2 and gomega to 1.19.0
> 2496976 Merge pull request # 111001 from pohly/klog-update
> d40a017 build: update to klog v2.70.1
> a869692 Merge pull request # 110831 from chendave/openapi
> e4283bb Bump `kube-openapi` to the latest
> 66bbc50 Merge pull request # 110788 from 21kyu/change_reflect_ptr
> 55378ba Change reflect.Ptr to reflect.Pointer
> e74e8a9 Merge pull request # 110724 from pohly/klog-update
> 21dbaf7 build: update to klog v2.70.0
> eda93e0 Merge pull request # 110448 from twilight0620/test0608
> c5be385 Merge pull request # 110378 from lucacome/bump-grpc
> 6243438 add some uts of group_version.go
> bae69c6 Bump grpc to v1.47.0
> b90ea24 Merge pull request # 110520 from dims/update-gopkg.in/yaml.v3-to-v3.0.1
> 1831736 Update gopkg.in/yaml.v3 to v3.0.1
> d407afb Merge pull request # 110351 from dims/switch-to-v3-of-github.com/emicklei/go-restful
> 5e1e879 Switch to v3 of github.com/emicklei/go-restful
> be3a79b Merge pull request # 109892 from jlsong01/add_annotation_comments
> 5abf6e2 Merge pull request # 110207 from wojtek-t/clean_shutdown_managers
> 21bf400 clarify a comment on annotation key validation
> be75d4d Make contextForChannel public
> e3c6631 Merge pull request # 110079 from ash2k/dial-with-context
> e929c35 Always dial using a context
> e572490 Merge pull request # 109752 from MadhavJivrajani/remove-apimachinery-clocks
> e11374f Merge pull request # 110040 from astoycos/fix-panic
> 5526e82 apimachinery/clock: Delete the clock package
> ca5c89b Merge pull request # 110029 from ash2k/ash2k/no-double-tls-validation
> 7c09d67 Fix additional panic
> 4f2ae94 tls.Dial() validates hostname, no need to do that manually
> ec91382 Write Unit test to imitate Panic
> 28c7554 Merge pull request # 109651 from ash2k/ash2k/spdy-cleanup
> f3b1305 Merge pull request # 108080 from astoycos/issue-132
> 284911f Pass context for TLS dial
> 1c398d5 Merge pull request # 107213 from mk46/portname_validation
> 912a38b Fix Panic Condition
> f1ceaed Don't clone headers twice
> 4778951 Merge pull request # 109891 from pohly/log-dependency-update
> b243f86 Fixed portName validation error message.
> b85d889 Merge pull request # 109602 from lavalamp/remove-clustername
> 1c1f82a dependencies: logr and zapr v1.2.3
> 0ac710b generated files
> 430b920 Remove ClusterName
> 5f6d692 Merge pull request # 109804 from cici37/celUpdate
> a5de00b Merge pull request # 109750 from aojea/spdy_tls
> b6901b9 Update go-control-plane, cncf/xds/go, cncf/udpa/go and remove unused versions
> b042ede Merge pull request # 109440 from liggitt/gomod-1.18
> e57894b spdyroundtripper: don't verify hostname twice
> 535f064 Update GRPC
> 92ee148 Regenerate vendor
> 1564fe5 spdyroundrippter: close the connection if tls handshake fails
> 5ac2923 Update genproto and antlr.
> f71cc27 Merge pull request # 109259 from roycaihw/tweak-quantity-docs
> 54c2bc0 Bump cel-go to v0.11.2
> 080c0c7 Merge pull request # 109212 from alexzielenski/copylock-fix
> af9793b Generated files
> cf0bfa9 Tweak Quantity docstring for ease of reference generation
bumping knative.dev/serving 080aaa5...636e121:
> 636e121 upgrade to latest dependencies (# 13372)
> 2383f14 Update net-istio nightly (# 13375)
> f7bf75a Update net-kourier nightly (# 13374)
> b584945 Update net-gateway-api nightly (# 13373)
> 974d19d upgrade k8s autoscaling to v2 (# 13337)
> 6463604 Update net-certmanager nightly (# 13370)
> 3430cee Update net-kourier nightly (# 13369)
> a90ffb3 upgrade to latest dependencies (# 13367)
> 473faf4 Update net-gateway-api nightly (# 13368)
> a741bc0 upgrade to latest dependencies (# 13361)
> 80aed04 Update actions (# 13366)
> 7c49366 Update net-gateway-api nightly (# 13362)
> 52cba34 Update net-certmanager nightly (# 13365)
> e1b8e36 Update net-istio nightly (# 13364)
> cf0550d Update net-kourier nightly (# 13363)
> 7199d90 Update community files (# 13360)
> c3f3eae Remove kind 1.22 testing, since min-version will be 1.23 in the next release (# 13357)
> ce2a9f6 upgrade to latest dependencies (# 13356)
bumping sigs.k8s.io/json 9f7c6b3...f223a00:
> f223a00 Merge pull request # 16 from kevindelgado/strict-error-path-separation
> ff3dbbe Merge pull request # 14 from liggitt/go118
> b536e95 store err type and path separately in strict errors
> 227cbc7 Merge pull request # 15 from liggitt/ci-refresh-go
> 4017094 sync go1.18 changes from encoding/json
> 93e9748 Check latest go version in CI
bumping k8s.io/utils 3a6ce19...ee6ede2:
> ee6ede2 Merge pull request # 252 from ialidzhikov/cleanup/testdata
> 9bab9ef Merge pull request # 251 from dims/tolerate-path-lookup-issues-in-golang-1.19
> a4934a1 Move test data file under testdata/
> 56c0de1 Merge pull request # 249 from ialidzhikov/pointer-func-deprecations
> ee5bcf5 tolerate path lookup issues in golang 1.19
> f6158b4 Merge pull request # 248 from aojea/ci
> 161a940 Properly deprecate the funcs from the pointer pkg
> 74ebc72 IP.UnmarshalText() uses net.Parse internally
> 105d5f1 fix linter
> f2fee6f call to (*T).Fatalf from a non-test goroutine (govet)
> 8cc7140 update github CI
bumping k8s.io/client-go 3e73df6...593f096:
> 593f096 Update dependencies to v0.25.2 tag
> 1904631 Merge pull request # 112161 from pohly/automated-cherry-pick-of-# 112129-origin-release-1.25
> 8f4eb75 Merge pull request # 112336 from enj/automated-cherry-pick-of-# 112017-upstream-release-1.25
> e278668 dependencies: update to ginkgo v2.1.6 and gomega v1.20.1
> 1874bc6 exec auth: support TLS config caching
> db7e2d8 Merge pull request # 112055 from aanm/automated-cherry-pick-of-# 111752-origin-gh-aanm-release-1.25
> c9008f3 client-go/rest: check if url is nil to prevent nil pointer dereference
> 1a46dfd Revert "client-go: remove no longer used finalURLTemplate"
> b3e4a40 Merge remote-tracking branch 'origin/master' into release-1.25
> c2f61ae Update removal warnings to 1.26
> 54e42ab update-gofmt
> ef26118 Revert "Remove gcp and azure auth plugins"
> a890e7b Merge pull request # 109090 from sarveshr7/multicidr-rangeallocator
> f10f16e Merge pull request # 111113 from mimowo/retriable-pod-failures-job-controller
> 76884cd Auto generate code for ClusterCIDR API
> 3300752 Merge pull request # 111696 from liggitt/go119mod
> c439b25 Support handling of pod failures with respect to the specified rules
> ce9ac37 Merge pull request # 108692 from jsafrane/selinux
> 4100519 Update go.mod to go1.19
> a00e9ab Regenerate files
> 55b6f70 Merge pull request # 111677 from dims/stop-panic-in-govet-levee
> 606e4a8 run lint-dependencies and follow directions
> fe9020e Stop panic in govet-levee CI job
> e803ec6 Merge pull request # 110495 from alexzielenski/atomic-objectreference
> 07171f8 Merge pull request # 111090 from kinvolk/rata/userns-support-2022
> b5feb25 mark persistentvolume's claimRef as granular
> bebf219 Update kubectl kustomize to kyaml/v0.13.9, cmd/config/v0.10.9, api/v0.12.1, kustomize/v4.5.7 (# 111606)
> 912b04a Update autogenerated files
> c5a511a update kube-openapi
> 68639ba Merge pull request # 111557 from alexzielenski/update-smd-422
> 828c3cb pkg/apis, staging: add HostUsers to pod spec
> b0d4101 Merge pull request # 111475 from alculquicondor/clear_pod_disruption
> 1631be4 update smd to 4.2.3
> 3dfaef5 Add clock interface to disruption controller
> 3e9c4b4 Merge pull request # 111587 from ialidzhikov/k8s-utils@ee6ede2d64
> 2037cc6 Update `k8s.io/utils` to `ee6ede2d64ed`
> 07735ea Merge pull request # 111441 from denkensk/respect-topology
> 95367f2 Merge pull request # 110851 from negz/more-like-mac-slow-s
> 2190b2f code generated by script for MatchLabelKeys in TopologySpreadConstraint
> ec0f337 Merge pull request # 111387 from marseel/feature/retry_internal_errors
> 761f55c Use SHA256 sums to verify discovery cache integrity
> c2d2c47 Merge pull request # 111228 from Abirdcfly/220716
> ff6bf67 Add option to retry internal api error in reflector.
> 735524f Use sha256 to sanitize discovery HTTP cache keys
> fe12e65 Merge pull request # 111442 from ialidzhikov/k8s-utils@56c0de1e6f
> 65b1e7d clean Unreachable code
> 1ea239f Use checksums instead of fsyncs to manage discovery cache corruption
> 7a55c3b Update `k8s.io/utils` to `9bab9ef40391`
> 76fccca Add a benchmark for the discovery cache RoundTripper
> cc879cd Merge pull request # 111254 from dims/update-to-golang-1.19-rc2
> b5c7588 Merge pull request # 109141 from ulucinar/bump-discovery-burst
> 2a6c116 Generate and format files
> b2097e6 Merge pull request # 110666 from ldsdsy/modify
> 4aac6a7 Bump discovery burst of default ConfigFlags to 300
> 4db4856 Merge pull request # 111242 from wojtek-t/fix_leaking_goroutines_11
> 123d4e7 Fix a typo
> ed7d154 Bump default burst limit for discovery client to 300
> 95a40e2 Merge pull request # 110178 from kevindelgado/validation-beta-1-25
> 9dde02d Clean shutdown of client integration tests
> 267b657 Merge pull request # 111235 from Abirdcfly/220719
> b4c3510 update kjson
> 0cfc963 Merge pull request # 110649 from harshanarayana/fix/GIT-110335-fix-fake-event-expansion
> ca60e0e fix a possible panic because of taking the address of nil
> c6bd30b Merge pull request # 111176 from p0lyn0mial/upstream-cacher-refactor-for-streaming
> e2ef408 GIT-110335: address namespace defaulting for events
> 441e2c8 reflector: simplify reading the resourceVersion
> 04f67d5 reflector: move LIST to its own method
> 4e28921 reflector: refactor watchHandler
> 163ee0b Merge pull request # 111080 from zhoumingcheng/master-v3
> 743e29d Merge pull request # 111008 from cici37/bumpCEL
> 84e1219 Correct some wrong syntax
> 79a582f Merge pull request # 111097 from saltbo/fix-thethe-typo
> 7f7dbbc Bump cel-go to v0.12.0
> eabd428 Merge pull request # 111002 from HecarimV/fix-220707
> 7c3fa18 fix: update the typo code comment
> 59fda2e Merge pull request # 111087 from HecarimV/fix-22071205
> f295032 fix static-check for staging/src/k8s.io/client-go/
> 2f582c2 Fix:import the same package multiple times
> 3b969f9 Merge pull request # 111001 from pohly/klog-update
> 26b5064 build: update to klog v2.70.1
> 506ef89 Merge pull request # 110990 from thockin/svc-typenames-IPFamilyPolicyType
> 61a7d9d Rename IPFamilyPolicyType => IPFamilyPolicy
> a16e76e Merge pull request # 110831 from chendave/openapi
> 6a58c3a Bump `kube-openapi` to the latest
> f5b6af4 Merge pull request # 110788 from 21kyu/change_reflect_ptr
> 1b23c15 Change reflect.Ptr to reflect.Pointer
> 8dfe88a Merge pull request # 110724 from pohly/klog-update
> 899bcd7 Merge pull request # 110425 from LY-today/fake-evict-list-err
> cb4a10d build: update to klog v2.70.0
> 0647d43 Merge pull request # 109632 from weilaaa/recorrect_byindex_input_param
> 9ac8bfe fix: list pod err after an pod evicted
> acab036 Merge pull request # 110436 from nicks/nicks/issue-1108
> 64585cf correct input params and add godoc
> 830d4c4 Merge pull request # 110378 from lucacome/bump-grpc
> 5ce7078 client-go: fix panic in ConfirmUsable validation
> e0fa504 Bump grpc to v1.47.0
> 2a9f955 Merge pull request # 110518 from dims/switch-to-released-version-of-v3.8.0-github.com/emicklei/go-restful/v3
> d4fc9af Merge pull request # 110520 from dims/update-gopkg.in/yaml.v3-to-v3.0.1
> 7e1edab Switch to released version of v3.8.0 - github.com/emicklei/go-restful/v3
> 08b4bbc Update gopkg.in/yaml.v3 to v3.0.1
> 87a5b7b Merge pull request # 110351 from dims/switch-to-v3-of-github.com/emicklei/go-restful
> cf0ba13 Switch to v3 of github.com/emicklei/go-restful
> f88de91 Merge pull request # 96771 from soltysh/shortcut_expander
> e2598a4 restmapper: re-try shortcut expander after not-found error
> cf13620 Merge pull request # 84145 from bells17/fix-typo
> 33115b4 Merge pull request # 110100 from tkashem/client-go-backoff-fix
> 21f4308 Fix typo: type -> eventtype
> 0bc005e Merge pull request # 110076 from karlkfi/patch-1
> 71b8c17 client-go: fix backoff delay
> 8764064 fix: reflector to return wrapped list errors
> ef63e49 Merge pull request # 110013 from enj/enj/i/remove_azure_gcp_auth_plugins
> 1f7dc88 Remove gcp and azure auth plugins
> 77f6364 Merge pull request # 108080 from astoycos/issue-132
> b4b8e5e Merge pull request # 108492 from kerthcet/feature/add-NodeInclustionPolicies
> f19a514 Fix Panic Condition
> 5bb1a76 feat: add NodeInclusionPolicy to TopologySpreadConstraint in PodSpec
> 2420926 Merge pull request # 109891 from pohly/log-dependency-update
> 7c5b742 Merge pull request # 109602 from lavalamp/remove-clustername
> c301d44 dependencies: logr and zapr v1.2.3
> 5a9c3ac generated files
> c0ab12a Merge pull request # 109804 from cici37/celUpdate
> bb66d80 Merge pull request # 109607 from liggitt/drop-unused-third-party
> 631da0c Update go-control-plane, cncf/xds/go, cncf/udpa/go and remove unused versions
> f4c553f Merge pull request # 109440 from liggitt/gomod-1.18
> 7481505 Remove package variables
> 28feddd Update GRPC
> ccff963 Regenerate vendor
> 28c4a4d Drop unused golang/template funcs
> a8fbc49 Update genproto and antlr.
> 671a40c Merge pull request # 109520 from cncal/lcn_cleanup_duplicate_code_snippet
> ac3e4ba Bump cel-go to v0.11.2
> 211d8a7 Merge pull request # 109443 from kevindelgado/dynamic-apply
> 152e68f Remove the duplicate code snippet in client-go delaying_queue tests
> 2cf1a8f Merge pull request # 105963 from zhucan/bugfix-95367
> 4eab6be Add Apply and ApplyStatus methods to dynamic ResourceInterface
> 28ccde7 Merge pull request # 109436 from JamesLaverack/revert-108290
> 1f8debf generated code and doc
> f9fdccd Revert "Introduce APIs to support multiple ClusterCIDRs (# 108290)"
Signed-off-by: Knative Automation <automation@knative.team>
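
For reference, a dependency bump of this shape can be reproduced locally with the standard Go toolchain; the sketch below is illustrative only (Knative's own update automation drives this through its hack scripts, which are not shown here):

    # pin the new Kubernetes module versions (same pattern for the knative.dev modules)
    go get k8s.io/api@v0.25.2 k8s.io/apimachinery@v0.25.2 k8s.io/client-go@v0.25.2
    # recompute go.sum and drop requirements that are no longer needed
    go mod tidy
    # refresh the checked-in vendor/ tree (Knative repositories vendor their dependencies)
    go mod vendor

The go.mod and go.sum changes below reflect this kind of update.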
commit 3120ee616a (parent 97690ef46c)

go.mod: 30 lines changed
@@ -14,17 +14,17 @@ require (
  golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4
  golang.org/x/term v0.0.0-20220919170432-7a66f970e087
  gotest.tools/v3 v3.3.0
- k8s.io/api v0.24.4
- k8s.io/apiextensions-apiserver v0.24.4
- k8s.io/apimachinery v0.24.4
+ k8s.io/api v0.25.2
+ k8s.io/apiextensions-apiserver v0.25.2
+ k8s.io/apimachinery v0.25.2
  k8s.io/cli-runtime v0.24.4
- k8s.io/client-go v0.24.4
- k8s.io/code-generator v0.24.4
- knative.dev/eventing v0.34.1-0.20221005061829-af2298ff121a
- knative.dev/hack v0.0.0-20221004153928-92a65f105c37
- knative.dev/networking v0.0.0-20221003195429-a5c26fe64325
- knative.dev/pkg v0.0.0-20221003153827-158538cc46ec
- knative.dev/serving v0.34.1-0.20221005094629-080aaa5c6241
+ k8s.io/client-go v0.25.2
+ k8s.io/code-generator v0.25.2
+ knative.dev/eventing v0.34.1-0.20221010083935-56e9ee13b301
+ knative.dev/hack v0.0.0-20221010154335-3fdc50b9c24a
+ knative.dev/networking v0.0.0-20221007151732-fcbfa31ac035
+ knative.dev/pkg v0.0.0-20221010143036-21d3b47e2efe
+ knative.dev/serving v0.34.1-0.20221010213350-636e12161662
  sigs.k8s.io/yaml v1.3.0
 )
@@ -47,7 +47,7 @@ require (
  github.com/cloudevents/sdk-go/v2 v2.12.0 // indirect
  github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
  github.com/davecgh/go-spew v1.1.1 // indirect
- github.com/emicklei/go-restful v2.9.5+incompatible // indirect
+ github.com/emicklei/go-restful/v3 v3.8.0 // indirect
  github.com/evanphx/json-patch v5.6.0+incompatible // indirect
  github.com/evanphx/json-patch/v5 v5.6.0 // indirect
  github.com/fsnotify/fsnotify v1.5.4 // indirect
@@ -130,10 +130,10 @@ require (
  gopkg.in/yaml.v3 v3.0.1 // indirect
  k8s.io/gengo v0.0.0-20220613173612-397b4ae3bce7 // indirect
  k8s.io/klog/v2 v2.70.2-0.20220707122935-0990e81f1a8f // indirect
- k8s.io/kube-openapi v0.0.0-20220328201542-3ee0da9b0b42 // indirect
- k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9 // indirect
- sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 // indirect
+ k8s.io/kube-openapi v0.0.0-20220803162953-67bda5d908f1 // indirect
+ k8s.io/utils v0.0.0-20220728103510-ee6ede2d64ed // indirect
+ sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 // indirect
  sigs.k8s.io/kustomize/api v0.11.4 // indirect
  sigs.k8s.io/kustomize/kyaml v0.13.6 // indirect
- sigs.k8s.io/structured-merge-diff/v4 v4.2.1 // indirect
+ sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
 )

go.sum: 169 lines changed
@@ -64,7 +64,6 @@ contrib.go.opencensus.io/exporter/ocagent v0.7.1-0.20200907061046-05415f1de66d/g
contrib.go.opencensus.io/exporter/prometheus v0.4.2 h1:sqfsYl5GIY/L570iT+l93ehxaWJs2/OwXtiWwew3oAg=
contrib.go.opencensus.io/exporter/prometheus v0.4.2/go.mod h1:dvEHbiKmgvbr5pjaF9fpw1KeYcjrnC1J8B+JKjsZyRQ=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest v0.11.18/go.mod h1:dSiJPy22c3u0OtOKDNttNgqpNFY/GeWa7GH/Pz56QRA=
@@ -86,7 +85,6 @@ github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBp
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/NYTimes/gziphandler v1.1.1/go.mod h1:n/CVRwUEOgIxrgPvAQhUUr9oeUtvrhMomdKFjzJNB0c=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
@@ -99,7 +97,6 @@ github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRF
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/antlr/antlr4/runtime/Go/antlr v0.0.0-20210826220005-b48c857c3a0e/go.mod h1:F7bn7fEU90QkQ3tnmaTx3LTKLEDqnwWODIYppRQ5hnY=
github.com/antlr/antlr4/runtime/Go/antlr v1.4.10 h1:yL7+Jz0jTC6yykIK/Wh74gnTJnrGr5AyrNMXuA0gves=
github.com/antlr/antlr4/runtime/Go/antlr v1.4.10/go.mod h1:F7bn7fEU90QkQ3tnmaTx3LTKLEDqnwWODIYppRQ5hnY=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
@@ -107,25 +104,19 @@ github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmV
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/benbjohnson/clock v1.0.3/go.mod h1:bGMdMPoPVvcYyt1gHDf4J2KE153Yf9BuiUKYMaxlTDM=
github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
github.com/bketelsen/crypt v0.0.4/go.mod h1:aI6NrJ0pMGgvZKL1iVgXLnfIFJtfV+bKCoqOes/6LfM=
github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ=
github.com/blendle/zapdriver v1.3.1 h1:C3dydBOWYRiOk+B8X9IVZ5IOe+7cl+tGOexN4QqHfpE=
github.com/blendle/zapdriver v1.3.1/go.mod h1:mdXfREi6u5MArG4j9fewC+FGnXaBR+T4Ox4J2u4eHCc=
github.com/buger/jsonparser v1.1.1/go.mod h1:6RYKKt7H4d4+iWqouImQ9R2FZql3VbhNgx27UK13J/0=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/census-instrumentation/opencensus-proto v0.3.0 h1:t/LhUZLVitR1Ow2YOnduCsavhwFUklBMoGVYUCqmCqk=
github.com/census-instrumentation/opencensus-proto v0.3.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/certifi/gocertifi v0.0.0-20191021191039-0944d244cd40/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/certifi/gocertifi v0.0.0-20200922220541-2c3bb06c6054/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
@@ -147,37 +138,26 @@ github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWH
github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20211001041855-01bcc9b48dfe/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cockroachdb/datadriven v0.0.0-20200714090401-bf6692d28da5/go.mod h1:h6jFvWxBdQXxjopDMZyH2UVceIRfR84bdzbkoKrsWNo=
github.com/cockroachdb/errors v1.2.4/go.mod h1:rQD95gz6FARkaKkQXUksEje/d9a6wBJoCr5oaCLELYA=
github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f/go.mod h1:i/u985jwjWRlyHXQbwatDASoW0RMlZ/3i9yJHE2xLkI=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-oidc v2.1.0+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.1/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cpuguy83/go-md2man/v2 v2.0.2 h1:p1EgwI/C7NhT0JmVkwCD2ZBK8j4aeHQX2pMHHBfMQ6w=
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.11/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/eapache/go-resiliency v1.2.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/emicklei/go-restful v2.9.5+incompatible h1:spTtZBk5DYEvbxMVutUuTyh1Ao2r4iyvLdACqsl/Ljk=
github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/emicklei/go-restful/v3 v3.8.0 h1:eCZ8ulSerjdAiaNpF7GxXIE7ZCMo1moN1qX+S609eVw=
github.com/emicklei/go-restful/v3 v3.8.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
@@ -196,7 +176,6 @@ github.com/evanphx/json-patch v5.6.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLi
github.com/evanphx/json-patch/v5 v5.6.0 h1:b91NhWfaz02IuVxO9faSllyAtNXHMPkC5J8sJCLunww=
github.com/evanphx/json-patch/v5 v5.6.0/go.mod h1:G79N1coSVB93tBe7j6PhzjmR3/2VvlbKOFpnXhI9Bw4=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/felixge/httpsnoop v1.0.1/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/flowstack/go-jsonschema v0.1.1/go.mod h1:yL7fNggx1o8rm9RlgXv7hTBWxdBM0rVwpMwimd3F3N0=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/form3tech-oss/jwt-go v3.2.3+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
@@ -208,7 +187,6 @@ github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4
github.com/fsnotify/fsnotify v1.5.4 h1:jRbGcIw6P2Meqdwuo0H1p6JVLbL5DHKAKlYndzMwVZI=
github.com/fsnotify/fsnotify v1.5.4/go.mod h1:OVB6XrOHzAwXMpEM7uPOzcehqUV2UqJxmVXmkdnm1bU=
github.com/getkin/kin-openapi v0.76.0/go.mod h1:660oXbgy5JFMKreazJaQTw7o+X00qeSyhcnluiMv+Xg=
github.com/getsentry/raven-go v0.2.0/go.mod h1:KungGk8q33+aIAZUIVWZDr2OfAEBsO49PX4NzFV5kcQ=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-errors/errors v1.0.1/go.mod h1:f4zRHt4oKfwPJE5k8C9vpYG+aDHdBFUsgrm6/TyX73Q=
github.com/go-errors/errors v1.4.2 h1:J6MZopCL4uSllY1OfXM374weqZFFItUbrImctkmUxIA=
@@ -232,7 +210,6 @@ github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTg
github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.3 h1:2DntVwHkVopvECVRSlL5PSo9eG+cAkDCuckLubN+rq0=
github.com/go-logr/logr v1.2.3/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/zapr v1.2.0/go.mod h1:Qa4Bsj2Vb+FAVeAKsLD8RLQ+YRJB8YDmOAKxaBQf7Ro=
github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonpointer v0.19.5 h1:gZr+CIYByUqjcgeLXnQu2gHYQC9o73G2XUeOFYEICuY=
github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
@@ -248,8 +225,6 @@ github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/me
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
@@ -257,8 +232,6 @@ github.com/golang-jwt/jwt/v4 v4.2.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzw
github.com/golang-jwt/jwt/v4 v4.4.2 h1:rcc4lwaZgFMCZ5jxF9ABolDcIHdBytAFgqFPbSJQAYs=
github.com/golang-jwt/jwt/v4 v4.4.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/glog v1.0.0/go.mod h1:EWib/APOK0SL3dFbYqvxE3UYd8E6s1ouQ7iEp/0LWV4=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
@@ -298,8 +271,6 @@ github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ
github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9uaxA=
github.com/google/btree v1.1.2 h1:xf4v41cLI2Z6FxbKm+8Bu+m8ifhj15JuZ9sa0jZCMUU=
github.com/google/btree v1.1.2/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4=
github.com/google/cel-go v0.10.1/go.mod h1:U7ayypeSkw23szu4GaQTPJGx66c20mx8JklMSxrmI1w=
github.com/google/cel-spec v0.6.0/go.mod h1:Nwjgxy5CbjlPrtCWjeDjUyKMl8w41YBYGjsyDdqk0xA=
github.com/google/gnostic v0.5.7-v3refs/go.mod h1:73MKFl6jIHelAJNaBGFzt3SPtZULs9dYrGFt8OiIsHQ=
github.com/google/gnostic v0.6.9 h1:ZK/5VhkoX835RikCHpSUJV9a+S3e1zLh59YnyWeBW+0=
github.com/google/gnostic v0.6.9/go.mod h1:Nm8234We1lq6iB9OmlgNv3nH91XLLVZHCDayfA3xq+E=
@@ -371,10 +342,6 @@ github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/ad
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 h1:+ngKgrYPPJrOjhax5N+uePQ0Fh1Z7PheYoUI/0nzkPA=
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0/go.mod h1:z0ButlSOZa5vEBq9m2m2hlwIgKw+rp3sdCBRoJY+30Y=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.14.6/go.mod h1:zdiPV4Yse/1gnckTHtghG4GkDEdKCRJduHpTxT3/jcw=
github.com/grpc-ecosystem/grpc-gateway v1.16.0 h1:gmcG1KaJ57LophUzW0Hy8NmPhnMZb4M0+kPpLofRdBo=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
@@ -418,8 +385,6 @@ github.com/jcmturner/goidentity/v6 v6.0.1/go.mod h1:X1YW3bgtvwAXju7V3LCIMpY0Gbxy
github.com/jcmturner/gokrb5/v8 v8.4.2/go.mod h1:sb+Xq/fTY5yktf/VxLsE3wlfPqQjp0aWNYyvBVK62bc=
github.com/jcmturner/rpc/v2 v2.0.3/go.mod h1:VUJYCIDm3PVOEHw8sgt091/20OJjskO/YJki3ELg/Hc=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jonboulle/clockwork v0.2.2/go.mod h1:Pkfl5aHPm1nk2H9h0bjmnJD/BcgbGXUBGnn1kMkgxc8=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
@@ -435,8 +400,6 @@ github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7V
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kelseyhightower/envconfig v1.4.0 h1:Im6hONhd3pLkfDFsbRgu68RDNkGF1r3dvMUtDTo2cv8=
github.com/kelseyhightower/envconfig v1.4.0/go.mod h1:cccZRl6mQpaq41TPp5QxidR+Sa3axMbJDNb//FQX6Gg=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
@@ -454,7 +417,6 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de h1:9TO3cAIGXtEhnIaL+V+BEER86oLrvS+kWobKpbJuye0=
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de/go.mod h1:zAbeS9B/r2mtpb6U+EI2rYA5OAXxsYw6wTamcNW+zcE=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.5/go.mod h1:y3VJvCyxH9uVvJTWEGAELF3aiYNyPKd5NZ3oSwXrF60=
github.com/magiconair/properties v1.8.6 h1:5ibWZ6iY0NctNGWo87LalDlEZ6R41TqbbDamhfG/Qzo=
github.com/magiconair/properties v1.8.6/go.mod h1:y3VJvCyxH9uVvJTWEGAELF3aiYNyPKd5NZ3oSwXrF60=
@@ -483,7 +445,6 @@ github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RR
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
|
||||
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
|
||||
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6/go.mod h1:E2VnQOmVuvZB6UYnnDB0qG5Nq/1tD9acaOpo6xmt0Kw=
|
||||
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
|
||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
||||
|
|
@ -501,9 +462,7 @@ github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRW
|
|||
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
|
||||
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
|
||||
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
|
||||
github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
|
||||
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
|
||||
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
|
||||
github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
|
||||
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
|
||||
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
|
||||
|
|
@ -511,18 +470,17 @@ github.com/onsi/ginkgo v1.14.0/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9k
|
|||
github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vvnwo0=
|
||||
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
|
||||
github.com/onsi/ginkgo v1.16.5/go.mod h1:+E8gABHa3K6zRBolWtd+ROzc/U5bkGt0FwiG042wbpU=
|
||||
github.com/onsi/ginkgo/v2 v2.1.6 h1:Fx2POJZfKRQcM1pH49qSZiYeu319wji004qX+GDovrU=
|
||||
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
|
||||
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
|
||||
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
|
||||
github.com/onsi/gomega v1.16.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY=
|
||||
github.com/onsi/gomega v1.19.0 h1:4ieX6qQjPP/BfC3mpsAtIGGlxTWPeA3Inl/7DtXw1tw=
|
||||
github.com/onsi/gomega v1.20.1 h1:PA/3qinGoukvymdIDV8pii6tiZgC8kbmJO6Z5+b002Q=
|
||||
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
|
||||
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
|
||||
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
|
||||
github.com/openzipkin/zipkin-go v0.4.0 h1:CtfRrOVZtbDj8rt1WXjklw0kqqJQwICrCKmlfUuBUUw=
|
||||
github.com/openzipkin/zipkin-go v0.4.0/go.mod h1:4c3sLeE8xjNqehmF5RpAFLPLJxXscc0R4l6Zg0P1tTQ=
|
||||
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
|
||||
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
|
||||
github.com/pelletier/go-toml v1.9.3/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
|
||||
github.com/pelletier/go-toml v1.9.5 h1:4yBQzkHv+7BHq2PQUZF3Mx0IYxG7LsP222s7Agd3ve8=
|
||||
github.com/pelletier/go-toml v1.9.5/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
|
||||
|
|
@ -540,9 +498,7 @@ github.com/pkg/sftp v1.13.1/go.mod h1:3HaPG6Dq1ILlpPZRO0HVMrsydcdLt6HRDccSgb87qR
|
|||
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
|
||||
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
|
||||
github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
|
||||
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
|
||||
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
|
||||
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
|
||||
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
|
||||
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
|
||||
|
|
@ -555,8 +511,6 @@ github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:
|
|||
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
|
||||
github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
|
||||
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
|
||||
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
|
||||
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
|
||||
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
|
||||
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
|
||||
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
|
||||
|
|
@ -565,7 +519,6 @@ github.com/prometheus/common v0.35.0/go.mod h1:phzohg0JFMnBEFGxTDbfu3QyL5GI8gTQJ
|
|||
github.com/prometheus/common v0.37.0 h1:ccBbHCgIiT9uSoFY0vX8H3zsNR5eLt17/RQLUvn8pXE=
|
||||
github.com/prometheus/common v0.37.0/go.mod h1:phzohg0JFMnBEFGxTDbfu3QyL5GI8gTQJFhYO5B3mfA=
|
||||
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
|
||||
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
|
||||
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
|
||||
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
|
||||
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
|
||||
|
|
@ -575,7 +528,6 @@ github.com/prometheus/procfs v0.8.0/go.mod h1:z7EfXMXOkbkqb9IINtpCn86r/to3BnA0ua
|
|||
github.com/prometheus/statsd_exporter v0.22.7/go.mod h1:N/TevpjkIh9ccs6nuzY3jQn9dFqnUakOjnEuMPJJJnI=
|
||||
github.com/prometheus/statsd_exporter v0.22.8 h1:Qo2D9ZzaQG+id9i5NYNGmbf1aa/KxKbB9aKfMS+Yib0=
|
||||
github.com/prometheus/statsd_exporter v0.22.8/go.mod h1:/DzwbTEaFTE0Ojz5PqcSk6+PFHOPWGxdXVr6yC8eFOM=
|
||||
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
|
||||
github.com/rabbitmq/amqp091-go v1.1.0/go.mod h1:ogQDLSOACsLPsIq0NpbtiifNZi2YOz0VTJ0kHRghqbM=
|
||||
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
|
||||
github.com/rickb777/date v1.20.0 h1:oRGcq4b+ba12N/HnsVZuWSK/QJb/o/hnjOJEyRMGUT0=
|
||||
|
|
@ -584,7 +536,6 @@ github.com/rickb777/plural v1.4.1 h1:5MMLcbIaapLFmvDGRT5iPk8877hpTPt8Y9cdSKRw9sU
|
|||
github.com/rickb777/plural v1.4.1/go.mod h1:kdmXUpmKBJTS0FtG/TFumd//VBWsNTD7zOw7x4umxNw=
|
||||
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
|
||||
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
|
||||
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
|
||||
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
|
||||
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
|
||||
github.com/rogpeppe/go-internal v1.6.1 h1:/FiVV8dS/e+YqF2JvO3yXRFbBLTIuSDkuC7aBOAvL+k=
|
||||
|
|
@ -599,35 +550,27 @@ github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeV
|
|||
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
|
||||
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
|
||||
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
|
||||
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
|
||||
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
|
||||
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
|
||||
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
|
||||
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
|
||||
github.com/soheilhy/cmux v0.1.5/go.mod h1:T7TcVDs9LWfQgPlPsdngu6I6QIoyIFZDDC6sNE1GqG0=
|
||||
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
|
||||
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
|
||||
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
|
||||
github.com/spf13/afero v1.6.0/go.mod h1:Ai8FlHk4v/PARR026UzYexafAt9roJ7LcLMAmO6Z93I=
|
||||
github.com/spf13/afero v1.9.2 h1:j49Hj62F0n+DaZ1dDCvhABaPNSGNkt32oRFxI33IEMw=
|
||||
github.com/spf13/afero v1.9.2/go.mod h1:iUV7ddyEEZPO5gA3zD4fJt6iStLlL+Lg4m2cihcDf8Y=
|
||||
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
|
||||
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
|
||||
github.com/spf13/cast v1.5.0 h1:rj3WzYc11XZaIZMPKmwP96zkFEnnAmV8s6XbB2aY32w=
|
||||
github.com/spf13/cast v1.5.0/go.mod h1:SpXXQ5YoyJw6s3/6cMTQuxvgRl3PCJiyaX9p6b155UU=
|
||||
github.com/spf13/cobra v1.1.3/go.mod h1:pGADOWyqRD/YMrPZigI/zbliZ2wVD/23d+is3pSWzOo=
|
||||
github.com/spf13/cobra v1.2.1/go.mod h1:ExllRjgxM/piMAM+3tAZvg8fsklGAf3tPfi+i8t68Nk=
|
||||
github.com/spf13/cobra v1.4.0/go.mod h1:Wo4iy3BUC+X2Fybo0PDqwJIv3dNRiZLHQymsfxlB84g=
|
||||
github.com/spf13/cobra v1.5.0 h1:X+jTBEBqF0bHN+9cSMgmfuvv2VHJ9ezmFNf9Y/XstYU=
|
||||
github.com/spf13/cobra v1.5.0/go.mod h1:dWXEIy2H428czQCjInthrTRUg7yKbok+2Qi/yBIJoUM=
|
||||
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
|
||||
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
|
||||
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
|
||||
github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
|
||||
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
|
||||
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
|
||||
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
|
||||
github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
|
||||
github.com/spf13/viper v1.8.1/go.mod h1:o0Pch8wJ9BVSWGQMbra6iw0oQ5oktSIBaujf1rJH9Ns=
|
||||
github.com/spf13/viper v1.13.0 h1:BWSJ/M+f+3nmdz9bxB+bWX28kkALN2ok11D0rSo8EJU=
|
||||
github.com/spf13/viper v1.13.0/go.mod h1:Icm2xNL3/8uyh/wFuB1jI7TiTNKp8632Nwegu+zgdYw=
|
||||
|
|
@ -650,8 +593,6 @@ github.com/stvp/go-udp-testing v0.0.0-20201019212854-469649b16807/go.mod h1:7jxm
|
|||
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
|
||||
github.com/subosito/gotenv v1.4.1 h1:jyEFiXpy21Wm81FBN71l9VoMMV8H8jG+qIK3GCpY6Qs=
|
||||
github.com/subosito/gotenv v1.4.1/go.mod h1:ayKnFf/c6rvx/2iiLrJUk1e6plDbT3edrFNGqEflhK0=
|
||||
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
|
||||
github.com/tmc/grpc-websocket-proxy v0.0.0-20201229170055-e5319fda7802/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
|
||||
github.com/urfave/cli/v2 v2.3.0/go.mod h1:LJmUH05zAU44vOAcrfzZQKsZbVcdbOG8rtL3/XcUArI=
|
||||
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
|
||||
github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
|
||||
|
|
@ -660,7 +601,6 @@ github.com/xdg-go/stringprep v1.0.2/go.mod h1:8F9zXuvzgwmyT5DUm4GUfZGDdT3W+LCvS6
|
|||
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
|
||||
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
|
||||
github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y=
|
||||
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
|
||||
github.com/xlab/treeprint v0.0.0-20181112141820-a009c3971eca/go.mod h1:ce1O1j6UtZfjr22oyGxGLbauSBp2YVXpARAosm7dHBg=
|
||||
github.com/xlab/treeprint v1.1.0 h1:G/1DjNkPpfZCFt9CSh6b5/nY4VimlbHF3Rh4obvtzDk=
|
||||
github.com/xlab/treeprint v1.1.0/go.mod h1:gj5Gd3gPdKtR1ikdDK6fnFLdmIS0X30kTTuNd/WEJu0=
|
||||
|
|
@ -669,19 +609,9 @@ github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9de
|
|||
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
|
||||
github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
|
||||
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
|
||||
go.etcd.io/bbolt v1.3.6/go.mod h1:qXsaaIqmgQH0T+OPdb99Bf+PKfBBQVAdyD6TY9G8XM4=
|
||||
go.etcd.io/etcd/api/v3 v3.5.0/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
|
||||
go.etcd.io/etcd/api/v3 v3.5.1/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
|
||||
go.etcd.io/etcd/client/pkg/v3 v3.5.0/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g=
|
||||
go.etcd.io/etcd/client/pkg/v3 v3.5.1/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g=
|
||||
go.etcd.io/etcd/client/v2 v2.305.0/go.mod h1:h9puh54ZTgAKtEbut2oe9P4L/oqKCVB6xsXlzd7alYQ=
|
||||
go.etcd.io/etcd/client/v3 v3.5.0/go.mod h1:AIKXXVX/DQXtfTEqBryiLTUXwON+GuvO6Z7lLS/oTh0=
|
||||
go.etcd.io/etcd/client/v3 v3.5.1/go.mod h1:OnjH4M8OnAotwaB2l9bVgZzRFKru7/ZMoS46OtKyd3Q=
|
||||
go.etcd.io/etcd/pkg/v3 v3.5.0/go.mod h1:UzJGatBQ1lXChBkQF0AuAtkRQMYnHubxAEYIrC3MSsE=
|
||||
go.etcd.io/etcd/raft/v3 v3.5.0/go.mod h1:UFOHSIvO/nKwd4lhkwabrTD3cqW5yVyYYf/KlD00Szc=
|
||||
go.etcd.io/etcd/server/v3 v3.5.0/go.mod h1:3Ah5ruV+M+7RZr0+Y/5mNLwC+eQlni+mQmOVdCRJoS4=
|
||||
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
|
||||
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
|
||||
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
|
||||
|
|
@ -690,17 +620,6 @@ go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
|
|||
go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
|
||||
go.opencensus.io v0.23.0 h1:gqCw0LfLxScz8irSi8exQc7fyQ0fKQU/qnC/X8+V/1M=
|
||||
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
|
||||
go.opentelemetry.io/contrib v0.20.0/go.mod h1:G/EtFaa6qaN7+LxqfIAT3GiZa7Wv5DTBUzl5H4LY0Kc=
|
||||
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.20.0/go.mod h1:oVGt1LRbBOBq1A5BQLlUg9UaU/54aiHw8cgjV3aWZ/E=
|
||||
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.20.0/go.mod h1:2AboqHi0CiIZU0qwhtUfCYD1GeUzvvIXWNkhDt7ZMG4=
|
||||
go.opentelemetry.io/otel v0.20.0/go.mod h1:Y3ugLH2oa81t5QO+Lty+zXf8zC9L26ax4Nzoxm/dooo=
|
||||
go.opentelemetry.io/otel/exporters/otlp v0.20.0/go.mod h1:YIieizyaN77rtLJra0buKiNBOm9XQfkPEKBeuhoMwAM=
|
||||
go.opentelemetry.io/otel/metric v0.20.0/go.mod h1:598I5tYlH1vzBjn+BTuhzTCSb/9debfNp6R3s7Pr1eU=
|
||||
go.opentelemetry.io/otel/oteltest v0.20.0/go.mod h1:L7bgKf9ZB7qCwT9Up7i9/pn0PWIa9FqQ2IQ8LoxiGnw=
|
||||
go.opentelemetry.io/otel/sdk v0.20.0/go.mod h1:g/IcepuwNsoiX5Byy2nNV0ySUF1em498m7hBWC279Yc=
|
||||
go.opentelemetry.io/otel/sdk/export/metric v0.20.0/go.mod h1:h7RBNMsDJ5pmI1zExLi+bJK+Dr8NQCh0qGhm1KDnNlE=
|
||||
go.opentelemetry.io/otel/sdk/metric v0.20.0/go.mod h1:knxiS8Xd4E/N+ZqKmUPf3gTTZ4/0TjTXukfxjzSTpHE=
|
||||
go.opentelemetry.io/otel/trace v0.20.0/go.mod h1:6GjCW8zgDjwGHGa6GkyeB8+/5vjT16gUEi0Nf1iBdgw=
|
||||
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
|
||||
go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5/go.mod h1:nmDLcffg48OtT/PSW0Hg7FvpRQsQh5OSqIylirxKC7o=
|
||||
go.starlark.net v0.0.0-20220817180228-f738f5508c12 h1:xOBJXWGEDwU5xSDxH6macxO11Us0AH2fTa9rmsbbF7g=
|
||||
|
|
@ -709,7 +628,6 @@ go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
|
|||
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
|
||||
go.uber.org/atomic v1.10.0 h1:9qC72Qh0+3MqyJbAn8YU5xVq1frD8bn3JtD2oXtafVQ=
|
||||
go.uber.org/atomic v1.10.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
|
||||
go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A=
|
||||
go.uber.org/goleak v1.1.12 h1:gZAh5/EyT/HQwlpkCy6wTpqfH9H8Lz8zbm3dZh+OyzA=
|
||||
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
|
||||
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
|
||||
|
|
@ -717,7 +635,6 @@ go.uber.org/multierr v1.8.0 h1:dg6GjLku4EH+249NNmoIciG9N/jURbDG+pFlTkhzIC8=
|
|||
go.uber.org/multierr v1.8.0/go.mod h1:7EAYxJLBy9rStEaz58O2t4Uvip6FSURkq8/ppBp95ak=
|
||||
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
|
||||
go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
|
||||
go.uber.org/zap v1.19.0/go.mod h1:xg/QME4nWcxGxrpdeYfq7UvYrLh66cuVKdrbD1XF/NI=
|
||||
go.uber.org/zap v1.23.0 h1:OjGQ5KQDEUawVHxNwQgPpiypGHOxo2mNZsOqTak4fFY=
|
||||
go.uber.org/zap v1.23.0/go.mod h1:D+nX8jyLsMHMYrln8A0rJjFt/T/9/bGgIhAqxv5URuY=
|
||||
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
|
|
@ -773,7 +690,6 @@ golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
|||
golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY=
|
||||
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 h1:6zppjxzCulZykYSLyVDYbneBfbaBIQPYMevg0bEwv2s=
|
||||
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
|
||||
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
|
|
@ -782,7 +698,6 @@ golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73r
|
|||
golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
|
|
@ -814,7 +729,6 @@ golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81R
|
|||
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
|
||||
golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
|
||||
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
|
||||
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
|
||||
golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
|
||||
golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
|
||||
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
|
||||
|
|
@ -825,9 +739,7 @@ golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT
|
|||
golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20210825183410-e898025ed96a/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20210917221730-978cfadd31cf/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
|
||||
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
|
||||
|
|
@ -882,7 +794,6 @@ golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5h
|
|||
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
|
|
@ -919,7 +830,6 @@ golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7w
|
|||
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200923182605-d9f96fdee20d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
|
|
@ -945,9 +855,7 @@ golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBc
|
|||
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210831042530-f4d43177bf5e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210908233432-aa78b53d3365/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20211124211545-fe61309f8881/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20211210111614-af8b64212486/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
|
|
@ -983,13 +891,10 @@ golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
|
|||
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20220210224613-90d013bbcef8/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20220920022843-2ce7c2934d45 h1:yuLAip3bfURHClMG9VBdzPrQvCWjWiWUTBGV+/fCbUs=
|
||||
golang.org/x/time v0.0.0-20220920022843-2ce7c2934d45/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
|
||||
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||
|
|
@ -1001,12 +906,10 @@ golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBn
|
|||
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
||||
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
||||
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
||||
golang.org/x/tools v0.0.0-20190624222133-a101b041ded4/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
||||
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
||||
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191108193012-7d206e10da11/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191112195655-aa38f8e97acc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
|
|
@ -1049,7 +952,6 @@ golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
|
|||
golang.org/x/tools v0.1.3/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
|
||||
golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
|
||||
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
|
||||
golang.org/x/tools v0.1.10-0.20220218145154-897bd77cd717/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E=
|
||||
golang.org/x/tools v0.1.12 h1:VveCTK38A2rkS8ZqFY25HIDFscX5X9OoEhJd3quQmXU=
|
||||
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
|
|
@ -1133,7 +1035,6 @@ google.golang.org/genproto v0.0.0-20200228133532-8c2c7df3a383/go.mod h1:55QSHmfG
|
|||
google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
|
||||
google.golang.org/genproto v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
|
||||
google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
|
||||
google.golang.org/genproto v0.0.0-20200423170343-7949de9c1215/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
|
||||
google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
|
||||
google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
|
||||
google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
|
||||
|
|
@ -1146,7 +1047,6 @@ google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6D
|
|||
google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20201019141844-1ed22bb0c154/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20201102152239-715cce707fb0/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
|
|
@ -1262,16 +1162,10 @@ gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
|
|||
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
|
||||
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
|
||||
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
|
||||
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
|
||||
gopkg.in/ini.v1 v1.62.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
|
||||
gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=
|
||||
gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
|
||||
gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
|
||||
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
|
||||
gopkg.in/square/go-jose.v2 v2.2.2/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
|
||||
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
|
||||
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
|
||||
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
|
||||
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
|
|
@ -1287,8 +1181,6 @@ gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C
|
|||
gopkg.in/yaml.v3 v3.0.0/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk=
|
||||
gotest.tools/v3 v3.0.3/go.mod h1:Z7Lb0S5l+klDB31fvDQX8ss/FlKDxtlFlw3Oa8Ymbl8=
|
||||
gotest.tools/v3 v3.3.0 h1:MfDY1b1/0xN1CyMlQDac0ziEy9zJQd9CXBRRDHw2jJo=
|
||||
gotest.tools/v3 v3.3.0/go.mod h1:Mcr9QNxkg0uMvy/YElmo4SpXgJKWgQvYrT7Kw5RzJ1A=
|
||||
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
|
|
@ -1298,23 +1190,23 @@ honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWh
|
|||
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
|
||||
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
|
||||
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
|
||||
k8s.io/api v0.24.4 h1:I5Y645gJ8zWKawyr78lVfDQkZrAViSbeRXsPZWTxmXk=
|
||||
k8s.io/api v0.24.4/go.mod h1:42pVfA0NRxrtJhZQOvRSyZcJihzAdU59WBtTjYcB0/M=
|
||||
k8s.io/apiextensions-apiserver v0.24.4 h1:w53Pm4zu8fCt9WfiRgS2YI6LE6I4NJ5aUi78GElD3K8=
|
||||
k8s.io/apiextensions-apiserver v0.24.4/go.mod h1:iDK+Xb4jsPNnRGj5jU/WqqjLvt8363M7cKixKe1C9+U=
|
||||
k8s.io/apimachinery v0.24.4 h1:S0Ur3J/PbivTcL43EdSdPhqCqKla2NIuneNwZcTDeGQ=
|
||||
k8s.io/api v0.25.2 h1:v6G8RyFcwf0HR5jQGIAYlvtRNrxMJQG1xJzaSeVnIS8=
|
||||
k8s.io/api v0.25.2/go.mod h1:qP1Rn4sCVFwx/xIhe+we2cwBLTXNcheRyYXwajonhy0=
|
||||
k8s.io/apiextensions-apiserver v0.25.2 h1:8uOQX17RE7XL02ngtnh3TgifY7EhekpK+/piwzQNnBo=
|
||||
k8s.io/apiextensions-apiserver v0.25.2/go.mod h1:iRwwRDlWPfaHhuBfQ0WMa5skdQfrE18QXJaJvIDLvE8=
|
||||
k8s.io/apimachinery v0.24.4/go.mod h1:82Bi4sCzVBdpYjyI4jY6aHX+YCUchUIrZrXKedjd2UM=
|
||||
k8s.io/apiserver v0.24.4/go.mod h1:mAuC3pZVc0IDXLx7lUHoisBOtBa1SobfLW/CI3klXQE=
|
||||
k8s.io/apimachinery v0.25.2 h1:WbxfAjCx+AeN8Ilp9joWnyJ6xu9OMeS/fsfjK/5zaQs=
|
||||
k8s.io/apimachinery v0.25.2/go.mod h1:hqqA1X0bsgsxI6dXsJ4HnNTBOmJNxyPp8dw3u2fSHwA=
|
||||
k8s.io/cli-runtime v0.24.4 h1:YCSf0dZp+pYXVR/8aZQ6MEBSiicv8rLyVsGBEbRnwfY=
|
||||
k8s.io/cli-runtime v0.24.4/go.mod h1:RF+cSLYXkPV3WyvPrX2qeRLEUJY38INWx6jLKVLFCxM=
|
||||
k8s.io/client-go v0.24.4 h1:hIAIJZIPyaw46AkxwyR0FRfM/pRxpUNTd3ysYu9vyRg=
|
||||
k8s.io/client-go v0.24.4/go.mod h1:+AxlPWw/H6f+EJhRSjIeALaJT4tbeB/8g9BNvXGPd0Y=
|
||||
k8s.io/code-generator v0.24.4 h1:HLnoAabkTFKy1Ex4cMvffz6KkWFJ7oFN2yX37Icbbyg=
|
||||
k8s.io/code-generator v0.24.4/go.mod h1:dpVhs00hTuTdTY6jvVxvTFCk6gSMrtfRydbhZwHI15w=
|
||||
k8s.io/component-base v0.24.4/go.mod h1:sWxkgcMfbYHadw0OJ0N+vIscd14/nqSIM2veCdg843o=
|
||||
k8s.io/client-go v0.25.2 h1:SUPp9p5CwM0yXGQrwYurw9LWz+YtMwhWd0GqOsSiefo=
|
||||
k8s.io/client-go v0.25.2/go.mod h1:i7cNU7N+yGQmJkewcRD2+Vuj4iz7b30kI8OcL3horQ4=
|
||||
k8s.io/code-generator v0.25.2 h1:qEHux0+E1c+j1MhsWn9+4Z6av8zrZBixOTPW064rSiY=
|
||||
k8s.io/code-generator v0.25.2/go.mod h1:f61OcU2VqVQcjt/6TrU0sta1TA5hHkOO6ZZPwkL9Eys=
|
||||
k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
|
||||
k8s.io/gengo v0.0.0-20210813121822-485abfe95c7c/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
|
||||
k8s.io/gengo v0.0.0-20211129171323-c02415ce4185/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
|
||||
k8s.io/gengo v0.0.0-20220613173612-397b4ae3bce7 h1:RGb68G3yotdQggcyenx9y0+lnVJCXXcLa6geXOMlf5o=
|
||||
k8s.io/gengo v0.0.0-20220613173612-397b4ae3bce7/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
|
||||
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
|
||||
|
|
@ -1323,34 +1215,37 @@ k8s.io/klog/v2 v2.60.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
|
|||
k8s.io/klog/v2 v2.70.2-0.20220707122935-0990e81f1a8f h1:dltw7bAn8bCrQ2CmzzhgoieUZEbWqrvIGVdHGioP5nY=
|
||||
k8s.io/klog/v2 v2.70.2-0.20220707122935-0990e81f1a8f/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
|
||||
k8s.io/kube-openapi v0.0.0-20210421082810-95288971da7e/go.mod h1:vHXdDvt9+2spS2Rx9ql3I8tycm3H9FDfdUoIuKCefvw=
|
||||
k8s.io/kube-openapi v0.0.0-20220328201542-3ee0da9b0b42 h1:Gii5eqf+GmIEwGNKQYQClCayuJCe2/4fZUvF7VG99sU=
|
||||
k8s.io/kube-openapi v0.0.0-20220328201542-3ee0da9b0b42/go.mod h1:Z/45zLw8lUo4wdiUkI+v/ImEGAvu3WatcZl3lPMR4Rk=
|
||||
k8s.io/kube-openapi v0.0.0-20220803162953-67bda5d908f1 h1:MQ8BAZPZlWk3S9K4a9NCkIFQtZShWqoha7snGixVgEA=
|
||||
k8s.io/kube-openapi v0.0.0-20220803162953-67bda5d908f1/go.mod h1:C/N6wCaBHeBHkHUesQOQy2/MZqGgMAFPqGsGQLdbZBU=
|
||||
k8s.io/utils v0.0.0-20210802155522-efc7438f0176/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
|
||||
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9 h1:HNSDgDCrr/6Ly3WEGKZftiE7IY19Vz2GdbOCyI4qqhc=
|
||||
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
|
||||
knative.dev/eventing v0.34.1-0.20221005061829-af2298ff121a h1:Bh/WV0aSUVpa5V9t+KVT4Yrthr2iOwvxaPFk4b2E6Q8=
|
||||
knative.dev/eventing v0.34.1-0.20221005061829-af2298ff121a/go.mod h1:KTyxEUhiRMbAUTPIscPDEYmCjXSXc0gHD+gQciefuGg=
|
||||
knative.dev/hack v0.0.0-20221004153928-92a65f105c37 h1:4xB0A2aWQtzUcFjpZf9ufxRsjt+E7tEL364VlPttI8s=
|
||||
knative.dev/hack v0.0.0-20221004153928-92a65f105c37/go.mod h1:yk2OjGDsbEnQjfxdm0/HJKS2WqTLEFg/N6nUs6Rqx3Q=
|
||||
knative.dev/networking v0.0.0-20221003195429-a5c26fe64325 h1:0cMVoIs4hQ+kQzkZBon4WncT32AU/dHocb5gfj6/Wjk=
|
||||
knative.dev/networking v0.0.0-20221003195429-a5c26fe64325/go.mod h1:fabM/jvxxkaMWBQGoY8N6vqJAdvYULWhJEnht4hacfM=
|
||||
knative.dev/pkg v0.0.0-20221003153827-158538cc46ec h1:Cen2FjQ3sUDLfeKkP8gVA9YhLbj/VueTlGzLHHKWNfU=
|
||||
knative.dev/pkg v0.0.0-20221003153827-158538cc46ec/go.mod h1:qTItuDia3CTW2J9oC4fzJsXifRPZcFWvESKozCE364Q=
|
||||
knative.dev/serving v0.34.1-0.20221005094629-080aaa5c6241 h1:ddHb6DwHQxpksH6qQB4sdEqQ7CfV7ddeM0f1+kqRyis=
|
||||
knative.dev/serving v0.34.1-0.20221005094629-080aaa5c6241/go.mod h1:JW/0/lyHDG3QclYrbuR4B5UbK455TkEJBPC+00W/xyM=
|
||||
k8s.io/utils v0.0.0-20220728103510-ee6ede2d64ed h1:jAne/RjBTyawwAy0utX5eqigAwz/lQhTmy+Hr/Cpue4=
|
||||
k8s.io/utils v0.0.0-20220728103510-ee6ede2d64ed/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
|
||||
knative.dev/eventing v0.34.1-0.20221010083935-56e9ee13b301 h1:4C1AYI1N5yugf/20yn1eErTRE2L0IwwUzOZQ7qEnV08=
|
||||
knative.dev/eventing v0.34.1-0.20221010083935-56e9ee13b301/go.mod h1:KTyxEUhiRMbAUTPIscPDEYmCjXSXc0gHD+gQciefuGg=
|
||||
knative.dev/hack v0.0.0-20221010154335-3fdc50b9c24a h1:yfq1OMrkyYkxDeM0pmAOeN4YF16R/WG0C+VvLBeq4uc=
|
||||
knative.dev/hack v0.0.0-20221010154335-3fdc50b9c24a/go.mod h1:yk2OjGDsbEnQjfxdm0/HJKS2WqTLEFg/N6nUs6Rqx3Q=
|
||||
knative.dev/networking v0.0.0-20221007151732-fcbfa31ac035 h1:OOw/KMAKITv3godQbwzT617mjJGYiNuRQxPGpjqwHkE=
|
||||
knative.dev/networking v0.0.0-20221007151732-fcbfa31ac035/go.mod h1:ySkvDWiPZ0QD6H5GlYxOgUI782Fv2vLkyQRXCr4wkrQ=
|
||||
knative.dev/pkg v0.0.0-20221010143036-21d3b47e2efe h1:mlyaEOLotR3ma5hIN4XP/KE3/SeQvJa6hNd2BNYg2U4=
|
||||
knative.dev/pkg v0.0.0-20221010143036-21d3b47e2efe/go.mod h1:8MLQO2NWHa+1GhEUlogKfOEkyh2T2EWgfSayCA0q2gc=
|
||||
knative.dev/serving v0.34.1-0.20221010213350-636e12161662 h1:9KTtuPDKmy6SSjdJJ0GxtyFtyxqQUth730h4ZPVnYj8=
|
||||
knative.dev/serving v0.34.1-0.20221010213350-636e12161662/go.mod h1:ER6NpJWxr9ovyBpwAmVFeoKs70aJ+iz3ex+rneYkBXM=
|
||||
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
|
||||
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
|
||||
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
|
||||
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.30/go.mod h1:fEO7lRTdivWO2qYVCVG7dEADOMo/MLDCVr8So2g88Uw=
|
||||
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 h1:kDi4JBNAsJWfz1aEXhO8Jg87JJaPNLh5tIzYHgStQ9Y=
|
||||
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2/go.mod h1:B+TnT182UBxE84DiCz4CVE26eOSDAeYCpfDnC2kdKMY=
|
||||
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 h1:iXTIw73aPyC+oRdyqqvVJuloN1p0AC/kzH07hu3NE+k=
|
||||
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
|
||||
sigs.k8s.io/kustomize/api v0.11.4 h1:/0Mr3kfBBNcNPOW5Qwk/3eb8zkswCwnqQxxKtmrTkRo=
|
||||
sigs.k8s.io/kustomize/api v0.11.4/go.mod h1:k+8RsqYbgpkIrJ4p9jcdPqe8DprLxFUUO0yNOq8C+xI=
|
||||
sigs.k8s.io/kustomize/kyaml v0.13.6 h1:eF+wsn4J7GOAXlvajv6OknSunxpcOBQQqsnPxObtkGs=
|
||||
sigs.k8s.io/kustomize/kyaml v0.13.6/go.mod h1:yHP031rn1QX1lr/Xd934Ri/xdVNG8BE2ECa78Ht/kEg=
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.2.1 h1:bKCqE9GvQ5tiVHn5rfn1r+yao3aLQEaLzkkmAkf+A6Y=
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.2.1/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4=
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 h1:PRbqxJClWWYMNV1dhaG4NsibJbArud9kFxnAMREiWFE=
|
||||
sigs.k8s.io/structured-merge-diff/v4 v4.2.3/go.mod h1:qjx8mGObPmV2aSZepjQjbmb2ihdVs8cGKBraizNC69E=
|
||||
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
|
||||
sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo=
|
||||
sigs.k8s.io/yaml v1.3.0/go.mod h1:GeOyir5tyXNByN85N/dRIT9es5UQNerPYEKK56eTBm8=
|
||||
|
|
|
|||
|
|
@ -1,6 +0,0 @@
|
|||
language: go
|
||||
|
||||
go:
|
||||
- 1.x
|
||||
|
||||
script: go test -v
|
||||
|
|
@ -1,7 +0,0 @@
|
|||
all: test
|
||||
|
||||
test:
|
||||
go test -v .
|
||||
|
||||
ex:
|
||||
cd examples && ls *.go | xargs go build -o /tmp/ignore
|
||||
|
|
@ -68,3 +68,4 @@ examples/restful-html-template
|
|||
|
||||
s.html
|
||||
restful-path-tail
|
||||
.idea
|
||||
|
|
@ -0,0 +1 @@
|
|||
ignore
|
||||
|
|
@ -0,0 +1,13 @@
|
|||
language: go
|
||||
|
||||
go:
|
||||
- 1.x
|
||||
|
||||
before_install:
|
||||
- go test -v
|
||||
|
||||
script:
|
||||
- go test -race -coverprofile=coverage.txt -covermode=atomic
|
||||
|
||||
after_success:
|
||||
- bash <(curl -s https://codecov.io/bash)
|
||||
|
|
@ -1,7 +1,106 @@
|
|||
## Change history of go-restful
|
||||
# Change history of go-restful
|
||||
|
||||
## [v3.8.0] - 2022-06-06
|
||||
|
||||
- use exact matching of allowed domain entries, issue #489 (#493)
|
||||
- this change fixes [security] Authorization Bypass Through User-Controlled Key
|
||||
by changing the behaviour of the AllowedDomains setting in the CORS filter.
|
||||
To support the previous behaviour, the CORS filter type now has an AllowedDomainFunc
|
||||
callback mechanism which is called when a simple domain match fails.
|
||||
- add test and fix for POST without body and Content-type, issue #492 (#496)
|
||||
- [Minor] Bad practice to have a mix of Receiver types. (#491)
|
||||
|
||||
## [v3.7.2] - 2021-11-24
|
||||
|
||||
- restored FilterChain (#482 by SVilgelm)
|
||||
|
||||
|
||||
## [v3.7.1] - 2021-10-04
|
||||
|
||||
- fix problem with contentEncodingEnabled setting (#479)
|
||||
|
||||
## [v3.7.0] - 2021-09-24
|
||||
|
||||
- feat(parameter): adds additional openapi mappings (#478)
|
||||
|
||||
## [v3.6.0] - 2021-09-18
|
||||
|
||||
- add support for vendor extensions (#477 thx erraggy)
|
||||
|
||||
## [v3.5.2] - 2021-07-14
|
||||
|
||||
- fix removing absent route from webservice (#472)
|
||||
|
||||
## [v3.5.1] - 2021-04-12
|
||||
|
||||
- fix handling no match access selected path
|
||||
- remove obsolete field
|
||||
|
||||
## [v3.5.0] - 2021-04-10
|
||||
|
||||
- add check for wildcard (#463) in CORS
|
||||
- add access to Route from Request, issue #459 (#462)
|
||||
|
||||
## [v3.4.0] - 2020-11-10
|
||||
|
||||
- Added OPTIONS to WebService
|
||||
|
||||
## [v3.3.2] - 2020-01-23
|
||||
|
||||
- Fixed duplicate compression in dispatch. #449
|
||||
|
||||
|
||||
## [v3.3.1] - 2020-08-31
|
||||
|
||||
- Added check on writer to prevent compression of response twice. #447
|
||||
|
||||
## [v3.3.0] - 2020-08-19
|
||||
|
||||
- Enable content encoding on Handle and ServeHTTP (#446)
|
||||
- List available representations in 406 body (#437)
|
||||
- Convert to string using rune() (#443)
|
||||
|
||||
## [v3.2.0] - 2020-06-21
|
||||
|
||||
- 405 Method Not Allowed must have Allow header (#436) (thx Bracken <abdawson@gmail.com>)
|
||||
- add field allowedMethodsWithoutContentType (#424)
|
||||
|
||||
## [v3.1.0]
|
||||
|
||||
- support describing response headers (#426)
|
||||
- fix openapi examples (#425)
|
||||
|
||||
v3.0.0
|
||||
|
||||
- fix: use request/response resulting from filter chain
|
||||
- add Go module
|
||||
Module consumers should use github.com/emicklei/go-restful/v3 as the import path
|
||||
|
||||
v2.10.0
|
||||
|
||||
- support for Custom Verbs (thanks Vinci Xu <277040271@qq.com>)
|
||||
- fixed static example (thanks Arthur <yang_yapo@126.com>)
|
||||
- simplify code (thanks Christian Muehlhaeuser <muesli@gmail.com>)
|
||||
- added JWT HMAC with SHA-512 authentication code example (thanks Amim Knabben <amim.knabben@gmail.com>)
|
||||
|
||||
v2.9.6
|
||||
|
||||
- small optimization in filter code
|
||||
|
||||
v2.11.1
|
||||
|
||||
- fix WriteError return value (#415)
|
||||
|
||||
v2.11.0
|
||||
|
||||
- allow prefix and suffix in path variable expression (#414)
|
||||
|
||||
v2.9.6
|
||||
|
||||
- support google custom verb (#413)
|
||||
|
||||
v2.9.5
|
||||
|
||||
- fix panic in Response.WriteError if err == nil
|
||||
|
||||
v2.9.4
|
||||
|
|
@ -0,0 +1,8 @@
|
|||
all: test
|
||||
|
||||
test:
|
||||
go vet .
|
||||
go test -cover -v .
|
||||
|
||||
ex:
|
||||
find ./examples -type f -name "*.go" | xargs -I {} go build -o /tmp/ignore {}
|
||||
|
|
@ -4,9 +4,10 @@ package for building REST-style Web Services using Google Go
|
|||
|
||||
[](https://travis-ci.org/emicklei/go-restful)
|
||||
[](https://goreportcard.com/report/github.com/emicklei/go-restful)
|
||||
[](https://godoc.org/github.com/emicklei/go-restful)
|
||||
[](https://pkg.go.dev/github.com/emicklei/go-restful)
|
||||
[](https://codecov.io/gh/emicklei/go-restful)
|
||||
|
||||
- [Code examples](https://github.com/emicklei/go-restful/tree/master/examples)
|
||||
- [Code examples use v3](https://github.com/emicklei/go-restful/tree/v3/examples)
|
||||
|
||||
REST asks developers to use HTTP methods explicitly and in a way that's consistent with the protocol definition. This basic REST design principle establishes a one-to-one mapping between create, read, update, and delete (CRUD) operations and HTTP methods. According to this mapping:

@ -18,6 +19,28 @@ REST asks developers to use HTTP methods explicitly and in a way that's consiste
- PATCH = Update partial content of a resource
- OPTIONS = Get information about the communication options for the request URI
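
To make the CRUD mapping above concrete, here is a minimal sketch of a WebService whose routes follow it. The `UserResource` type and its handler names are illustrative assumptions; the calls used (`WebService`, `Route`, `restful.Add` and the HTTP-method builders) are the library's documented API.

```Go
// Minimal sketch; handler bodies are stubs, names are illustrative.
package main

import (
	"log"
	"net/http"

	restful "github.com/emicklei/go-restful/v3"
)

type UserResource struct{}

func (u UserResource) createUser(req *restful.Request, resp *restful.Response) {}
func (u UserResource) findUser(req *restful.Request, resp *restful.Response)   {}
func (u UserResource) updateUser(req *restful.Request, resp *restful.Response) {}
func (u UserResource) patchUser(req *restful.Request, resp *restful.Response)  {}
func (u UserResource) removeUser(req *restful.Request, resp *restful.Response) {}

func main() {
	u := UserResource{}
	ws := new(restful.WebService)
	ws.Path("/users").Consumes(restful.MIME_JSON).Produces(restful.MIME_JSON)

	ws.Route(ws.POST("").To(u.createUser))             // POST   = Create
	ws.Route(ws.GET("/{user-id}").To(u.findUser))      // GET    = Read
	ws.Route(ws.PUT("/{user-id}").To(u.updateUser))    // PUT    = Update (replace)
	ws.Route(ws.PATCH("/{user-id}").To(u.patchUser))   // PATCH  = Update (partial)
	ws.Route(ws.DELETE("/{user-id}").To(u.removeUser)) // DELETE = Delete

	restful.Add(ws) // registers on the default container (http.DefaultServeMux)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```
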

### Usage

#### Without Go Modules

All versions up to `v2.*.*` (on the master branch) do not support Go modules.

```
import (
	restful "github.com/emicklei/go-restful"
)
```

#### Using Go Modules

As of version `v3.0.0` (on the v3 branch), this package supports Go modules.

```
import (
	restful "github.com/emicklei/go-restful/v3"
)
```

### Example

```Go

@ -39,15 +62,15 @@ func (u UserResource) findUser(request *restful.Request, response *restful.Respo
}
```

[Full API of a UserResource](https://github.com/emicklei/go-restful/tree/master/examples/restful-user-resource.go)
[Full API of a UserResource](https://github.com/emicklei/go-restful/blob/v3/examples/user-resource/restful-user-resource.go)
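
The diff above elides the body of `findUser`; a plausible sketch of such a handler, using the Request/Response API mentioned in the Features list below (`PathParameter`, `WriteErrorString`, `WriteEntity`), is shown here. The `User` type and the in-memory `users` map are assumptions made for illustration.

```Go
// Sketch only: User and the users map are illustrative, not part of the library.
package userresource

import (
	"net/http"

	restful "github.com/emicklei/go-restful/v3"
)

type User struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

type UserResource struct {
	users map[string]User // stand-in for a real data store
}

// findUser reads the {user-id} path parameter and writes the entity back,
// serialized according to the negotiated Produces MIME type (JSON or XML).
func (u UserResource) findUser(request *restful.Request, response *restful.Response) {
	id := request.PathParameter("user-id")
	usr, ok := u.users[id]
	if !ok {
		response.WriteErrorString(http.StatusNotFound, "User could not be found.")
		return
	}
	response.WriteEntity(usr)
}
```
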
### Features
|
||||
|
||||
- Routes for request → function mapping with path parameter (e.g. {id}) support
|
||||
- Routes for request → function mapping with path parameter (e.g. {id} but also prefix_{var} and {var}_suffix) support
|
||||
- Configurable router:
|
||||
- (default) Fast routing algorithm that allows static elements, regular expressions and dynamic parameters in the URL path (e.g. /meetings/{id} or /static/{subpath:*}
|
||||
- (default) Fast routing algorithm that allows static elements, [google custom method](https://cloud.google.com/apis/design/custom_methods), regular expressions and dynamic parameters in the URL path (e.g. /resource/name:customVerb, /meetings/{id} or /static/{subpath:*})
|
||||
- Routing algorithm after [JSR311](http://jsr311.java.net/nonav/releases/1.1/spec/spec.html) that is implemented using (but does **not** accept) regular expressions
|
||||
- Request API for reading structs from JSON/XML and accesing parameters (path,query,header)
|
||||
- Request API for reading structs from JSON/XML and accessing parameters (path,query,header)
|
||||
- Response API for writing structs to JSON/XML and setting headers
|
||||
- Customizable encoding using EntityReaderWriter registration
|
||||
- Filters for intercepting the request → response flow on Service or Route level
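
As a sketch of the filter mechanism from the last bullet: a `FilterFunction` receives the request, the response and the chain, and must call `ProcessFilter` to hand control onward. The logging filter below is illustrative; attaching it at container, service and route level uses the library's `Filter` hooks.

```Go
// Sketch: one FilterFunction attached at container, service and route level.
package main

import (
	"log"
	"net/http"

	restful "github.com/emicklei/go-restful/v3"
)

// logging must call ProcessFilter, otherwise the chain (and the handler) never runs.
func logging(req *restful.Request, resp *restful.Response, chain *restful.FilterChain) {
	log.Printf("%s %s", req.Request.Method, req.Request.URL)
	chain.ProcessFilter(req, resp)
}

func main() {
	ws := new(restful.WebService)
	ws.Path("/ping")
	ws.Filter(logging) // service-level filter
	ws.Route(ws.GET("").
		Filter(logging). // route-level filter
		To(func(req *restful.Request, resp *restful.Response) {
			resp.Write([]byte("pong"))
		}))

	restful.Filter(logging) // container-level filter on the default container
	restful.Add(ws)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```
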
|
||||
|
|
@ -73,10 +96,9 @@ There are several hooks to customize the behavior of the go-restful package.
|
|||
- Encoders for other serializers
|
||||
- Use [jsoniter](https://github.com/json-iterator/go) by building this package with a tag, e.g. `go build -tags=jsoniter .`
|
||||
|
||||
TODO: write examples of these.
|
||||
|
||||
## Resources
|
||||
|
||||
- [Example programs](./examples)
|
||||
- [Example posted on blog](http://ernestmicklei.com/2012/11/go-restful-first-working-example/)
|
||||
- [Design explained on blog](http://ernestmicklei.com/2012/11/go-restful-api-design/)
|
||||
- [sourcegraph](https://sourcegraph.com/github.com/emicklei/go-restful)
|
||||
|
|
@ -85,4 +107,4 @@ TODO: write examples of these.
|
|||
|
||||
Type ```git shortlog -s``` for a full list of contributors.
|
||||
|
||||
© 2012 - 2018, http://ernestmicklei.com. MIT License. Contributions are welcome.
|
||||
© 2012 - 2022, http://ernestmicklei.com. MIT License. Contributions are welcome.
|
||||
|
|
@ -0,0 +1,13 @@
|
|||
# Security Policy
|
||||
|
||||
## Supported Versions
|
||||
|
||||
| Version | Supported |
|
||||
| ------- | ------------------ |
|
||||
| v3.7.x | :white_check_mark: |
|
||||
| < v3.0.1 | :x: |
|
||||
|
||||
## Reporting a Vulnerability
|
||||
|
||||
Create an Issue and put the label `[security]` in the title of the issue.
|
||||
Valid reported security issues are expected to be solved within a week.
|
||||
|
|
@ -83,7 +83,11 @@ func (c *CompressingResponseWriter) Hijack() (net.Conn, *bufio.ReadWriter, error
|
|||
}
|
||||
|
||||
// WantsCompressedResponse reads the Accept-Encoding header to see if and which encoding is requested.
|
||||
func wantsCompressedResponse(httpRequest *http.Request) (bool, string) {
|
||||
// It also inspects the httpWriter whether its content-encoding is already set (non-empty).
|
||||
func wantsCompressedResponse(httpRequest *http.Request, httpWriter http.ResponseWriter) (bool, string) {
|
||||
if contentEncoding := httpWriter.Header().Get(HEADER_ContentEncoding); contentEncoding != "" {
|
||||
return false, ""
|
||||
}
|
||||
header := httpRequest.Header.Get(HEADER_AcceptEncoding)
|
||||
gi := strings.Index(header, ENCODING_GZIP)
|
||||
zi := strings.Index(header, ENCODING_DEFLATE)
|
||||
|
|
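
For context on the `wantsCompressedResponse` change above (it now also skips compression when the handler has already set a `Content-Encoding` header): this negotiation only runs when content encoding is enabled on the container, or overridden per route. A minimal sketch of enabling it on the default container follows; the endpoint itself is illustrative.

```Go
// Sketch: enable gzip/deflate negotiation for the default container. A request
// sent with "Accept-Encoding: gzip" then gets its response wrapped in a
// CompressingResponseWriter, unless Content-Encoding was already set by the handler.
package main

import (
	"log"
	"net/http"

	restful "github.com/emicklei/go-restful/v3"
)

func main() {
	restful.DefaultContainer.EnableContentEncoding(true)

	ws := new(restful.WebService)
	ws.Route(ws.GET("/data").To(func(req *restful.Request, resp *restful.Response) {
		resp.Write([]byte("a large, compressible payload"))
	}))
	restful.Add(ws)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```
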
@ -14,7 +14,7 @@ import (
|
|||
"strings"
|
||||
"sync"
|
||||
|
||||
"github.com/emicklei/go-restful/log"
|
||||
"github.com/emicklei/go-restful/v3/log"
|
||||
)
|
||||
|
||||
// Container holds a collection of WebServices and a http.ServeMux to dispatch http requests.
|
||||
|
|
@ -185,6 +185,11 @@ func logStackOnRecover(panicReason interface{}, httpWriter http.ResponseWriter)
|
|||
// when a ServiceError is returned during route selection. Default implementation
|
||||
// calls resp.WriteErrorString(err.Code, err.Message)
|
||||
func writeServiceError(err ServiceError, req *Request, resp *Response) {
|
||||
for header, values := range err.Header {
|
||||
for _, value := range values {
|
||||
resp.Header().Add(header, value)
|
||||
}
|
||||
}
|
||||
resp.WriteErrorString(err.Code, err.Message)
|
||||
}
|
||||
|
||||
|
|
@ -201,6 +206,7 @@ func (c *Container) Dispatch(httpWriter http.ResponseWriter, httpRequest *http.R
|
|||
|
||||
// Dispatch the incoming Http Request to a matching WebService.
|
||||
func (c *Container) dispatch(httpWriter http.ResponseWriter, httpRequest *http.Request) {
|
||||
// so we can assign a compressing one later
|
||||
writer := httpWriter
|
||||
|
||||
// CompressingResponseWriter should be closed after all operations are done
|
||||
|
|
@ -231,28 +237,8 @@ func (c *Container) dispatch(httpWriter http.ResponseWriter, httpRequest *http.R
|
|||
c.webServices,
|
||||
httpRequest)
|
||||
}()
|
||||
|
||||
// Detect if compression is needed
|
||||
// assume without compression, test for override
|
||||
contentEncodingEnabled := c.contentEncodingEnabled
|
||||
if route != nil && route.contentEncodingEnabled != nil {
|
||||
contentEncodingEnabled = *route.contentEncodingEnabled
|
||||
}
|
||||
if contentEncodingEnabled {
|
||||
doCompress, encoding := wantsCompressedResponse(httpRequest)
|
||||
if doCompress {
|
||||
var err error
|
||||
writer, err = NewCompressingResponseWriter(httpWriter, encoding)
|
||||
if err != nil {
|
||||
log.Print("unable to install compressor: ", err)
|
||||
httpWriter.WriteHeader(http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
// a non-200 response has already been written
|
||||
// a non-200 response (may be compressed) has already been written
|
||||
// run container filters anyway ; they should not touch the response...
|
||||
chain := FilterChain{Filters: c.containerFilters, Target: func(req *Request, resp *Response) {
|
||||
switch err.(type) {
|
||||
|
|
@ -265,6 +251,29 @@ func (c *Container) dispatch(httpWriter http.ResponseWriter, httpRequest *http.R
|
|||
chain.ProcessFilter(NewRequest(httpRequest), NewResponse(writer))
|
||||
return
|
||||
}
|
||||
|
||||
// Unless httpWriter is already a CompressingResponseWriter, see if we need to install one
|
||||
if _, isCompressing := httpWriter.(*CompressingResponseWriter); !isCompressing {
|
||||
// Detect if compression is needed
|
||||
// assume without compression, test for override
|
||||
contentEncodingEnabled := c.contentEncodingEnabled
|
||||
if route != nil && route.contentEncodingEnabled != nil {
|
||||
contentEncodingEnabled = *route.contentEncodingEnabled
|
||||
}
|
||||
if contentEncodingEnabled {
|
||||
doCompress, encoding := wantsCompressedResponse(httpRequest, httpWriter)
|
||||
if doCompress {
|
||||
var err error
|
||||
writer, err = NewCompressingResponseWriter(httpWriter, encoding)
|
||||
if err != nil {
|
||||
log.Print("unable to install compressor: ", err)
|
||||
httpWriter.WriteHeader(http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pathProcessor, routerProcessesPath := c.router.(PathProcessor)
|
||||
if !routerProcessesPath {
|
||||
pathProcessor = defaultPathProcessor{}
|
||||
|
|
@ -272,16 +281,18 @@ func (c *Container) dispatch(httpWriter http.ResponseWriter, httpRequest *http.R
|
|||
pathParams := pathProcessor.ExtractParameters(route, webService, httpRequest.URL.Path)
|
||||
wrappedRequest, wrappedResponse := route.wrapRequestResponse(writer, httpRequest, pathParams)
|
||||
// pass through filters (if any)
|
||||
if len(c.containerFilters)+len(webService.filters)+len(route.Filters) > 0 {
|
||||
if size := len(c.containerFilters) + len(webService.filters) + len(route.Filters); size > 0 {
|
||||
// compose filter chain
|
||||
allFilters := []FilterFunction{}
|
||||
allFilters := make([]FilterFunction, 0, size)
|
||||
allFilters = append(allFilters, c.containerFilters...)
|
||||
allFilters = append(allFilters, webService.filters...)
|
||||
allFilters = append(allFilters, route.Filters...)
|
||||
chain := FilterChain{Filters: allFilters, Target: func(req *Request, resp *Response) {
|
||||
// handle request by route after passing all filters
|
||||
route.Function(wrappedRequest, wrappedResponse)
|
||||
}}
|
||||
chain := FilterChain{
|
||||
Filters: allFilters,
|
||||
Target: route.Function,
|
||||
ParameterDocs: route.ParameterDocs,
|
||||
Operation: route.Operation,
|
||||
}
|
||||
chain.ProcessFilter(wrappedRequest, wrappedResponse)
|
||||
} else {
|
||||
// no filters, handle request by route
|
||||
|
|
@ -299,13 +310,75 @@ func fixedPrefixPath(pathspec string) string {
|
|||
}
|
||||
|
||||
// ServeHTTP implements net/http.Handler therefore a Container can be a Handler in a http.Server
|
||||
func (c *Container) ServeHTTP(httpwriter http.ResponseWriter, httpRequest *http.Request) {
|
||||
c.ServeMux.ServeHTTP(httpwriter, httpRequest)
|
||||
func (c *Container) ServeHTTP(httpWriter http.ResponseWriter, httpRequest *http.Request) {
|
||||
// Skip, if content encoding is disabled
|
||||
if !c.contentEncodingEnabled {
|
||||
c.ServeMux.ServeHTTP(httpWriter, httpRequest)
|
||||
return
|
||||
}
|
||||
// content encoding is enabled
|
||||
|
||||
// Skip, if httpWriter is already a CompressingResponseWriter
|
||||
if _, ok := httpWriter.(*CompressingResponseWriter); ok {
|
||||
c.ServeMux.ServeHTTP(httpWriter, httpRequest)
|
||||
return
|
||||
}
|
||||
|
||||
writer := httpWriter
|
||||
// CompressingResponseWriter should be closed after all operations are done
|
||||
defer func() {
|
||||
if compressWriter, ok := writer.(*CompressingResponseWriter); ok {
|
||||
compressWriter.Close()
|
||||
}
|
||||
}()
|
||||
|
||||
doCompress, encoding := wantsCompressedResponse(httpRequest, httpWriter)
|
||||
if doCompress {
|
||||
var err error
|
||||
writer, err = NewCompressingResponseWriter(httpWriter, encoding)
|
||||
if err != nil {
|
||||
log.Print("unable to install compressor: ", err)
|
||||
httpWriter.WriteHeader(http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
c.ServeMux.ServeHTTP(writer, httpRequest)
|
||||
}
|
||||
|
||||
// Handle registers the handler for the given pattern. If a handler already exists for pattern, Handle panics.
|
||||
func (c *Container) Handle(pattern string, handler http.Handler) {
|
||||
c.ServeMux.Handle(pattern, handler)
|
||||
c.ServeMux.Handle(pattern, http.HandlerFunc(func(httpWriter http.ResponseWriter, httpRequest *http.Request) {
|
||||
// Skip, if httpWriter is already a CompressingResponseWriter
|
||||
if _, ok := httpWriter.(*CompressingResponseWriter); ok {
|
||||
handler.ServeHTTP(httpWriter, httpRequest)
|
||||
return
|
||||
}
|
||||
|
||||
writer := httpWriter
|
||||
|
||||
// CompressingResponseWriter should be closed after all operations are done
|
||||
defer func() {
|
||||
if compressWriter, ok := writer.(*CompressingResponseWriter); ok {
|
||||
compressWriter.Close()
|
||||
}
|
||||
}()
|
||||
|
||||
if c.contentEncodingEnabled {
|
||||
doCompress, encoding := wantsCompressedResponse(httpRequest, httpWriter)
|
||||
if doCompress {
|
||||
var err error
|
||||
writer, err = NewCompressingResponseWriter(httpWriter, encoding)
|
||||
if err != nil {
|
||||
log.Print("unable to install compressor: ", err)
|
||||
httpWriter.WriteHeader(http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
handler.ServeHTTP(writer, httpRequest)
|
||||
}))
|
||||
}
|
||||
|
||||
// HandleWithFilter registers the handler for the given pattern.
|
||||
|
|
@ -319,7 +392,7 @@ func (c *Container) HandleWithFilter(pattern string, handler http.Handler) {
|
|||
}
|
||||
|
||||
chain := FilterChain{Filters: c.containerFilters, Target: func(req *Request, resp *Response) {
|
||||
handler.ServeHTTP(httpResponse, httpRequest)
|
||||
handler.ServeHTTP(resp, req.Request)
|
||||
}}
|
||||
chain.ProcessFilter(NewRequest(httpRequest), NewResponse(httpResponse))
|
||||
}
|
||||
|
|
@ -19,8 +19,21 @@ import (
|
|||
// http://www.html5rocks.com/en/tutorials/cors/#toc-handling-a-not-so-simple-request
|
||||
type CrossOriginResourceSharing struct {
|
||||
ExposeHeaders []string // list of Header names
|
||||
AllowedHeaders []string // list of Header names
|
||||
AllowedDomains []string // list of allowed values for Http Origin. An allowed value can be a regular expression to support subdomain matching. If empty all are allowed.
|
||||
|
||||
// AllowedHeaders is a list of Header names. Checking is case-insensitive.
|
||||
// The list may contain the special wildcard string ".*" ; all is allowed
|
||||
AllowedHeaders []string
|
||||
|
||||
// AllowedDomains is a list of allowed values for Http Origin.
|
||||
// The list may contain the special wildcard string ".*" ; all is allowed
|
||||
// If empty all are allowed.
|
||||
AllowedDomains []string
|
||||
|
||||
// AllowedDomainFunc is optional and is a function that will do the check
|
||||
// when the origin is not part of the AllowedDomains and it does not contain the wildcard ".*".
|
||||
AllowedDomainFunc func(origin string) bool
|
||||
|
||||
// AllowedMethods is either empty or has a list of http methods names. Checking is case-insensitive.
|
||||
AllowedMethods []string
|
||||
MaxAge int // number of seconds before requiring new Options request
|
||||
CookiesAllowed bool
|
||||
|
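A sketch of wiring the CrossOriginResourceSharing struct above onto a container. It assumes the type also exposes a Container field and a Filter method, as in go-restful v3; the domains and headers are placeholders.

```go
package main

import (
	"log"
	"net/http"

	restful "github.com/emicklei/go-restful/v3"
)

func main() {
	cors := restful.CrossOriginResourceSharing{
		AllowedHeaders: []string{"Content-Type", "Accept"},
		AllowedDomains: []string{`https://.*\.example\.com`}, // regular expression, per the field docs
		AllowedMethods: []string{"GET", "POST"},
		CookiesAllowed: false,
		Container:      restful.DefaultContainer,
	}
	restful.DefaultContainer.Filter(cors.Filter) // run the CORS check on every request
	log.Fatal(http.ListenAndServe(":8080", restful.DefaultContainer))
}
```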
|
@ -119,37 +132,25 @@ func (c CrossOriginResourceSharing) isOriginAllowed(origin string) bool {
|
|||
if len(origin) == 0 {
|
||||
return false
|
||||
}
|
||||
lowerOrigin := strings.ToLower(origin)
|
||||
if len(c.AllowedDomains) == 0 {
|
||||
if c.AllowedDomainFunc != nil {
|
||||
return c.AllowedDomainFunc(lowerOrigin)
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
allowed := false
|
||||
// exact match on each allowed domain
|
||||
for _, domain := range c.AllowedDomains {
|
||||
if domain == origin {
|
||||
allowed = true
|
||||
break
|
||||
if domain == ".*" || strings.ToLower(domain) == lowerOrigin {
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
if !allowed {
|
||||
if len(c.allowedOriginPatterns) == 0 {
|
||||
// compile allowed domains to allowed origin patterns
|
||||
allowedOriginRegexps, err := compileRegexps(c.AllowedDomains)
|
||||
if err != nil {
|
||||
if c.AllowedDomainFunc != nil {
|
||||
return c.AllowedDomainFunc(origin)
|
||||
}
|
||||
return false
|
||||
}
|
||||
c.allowedOriginPatterns = allowedOriginRegexps
|
||||
}
|
||||
|
||||
for _, pattern := range c.allowedOriginPatterns {
|
||||
if allowed = pattern.MatchString(origin); allowed {
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return allowed
|
||||
}
|
||||
|
||||
func (c CrossOriginResourceSharing) setAllowOriginHeader(req *Request, resp *Response) {
|
||||
origin := req.Request.Header.Get(HEADER_Origin)
|
||||
|
|
@ -184,19 +185,9 @@ func (c CrossOriginResourceSharing) isValidAccessControlRequestHeader(header str
|
|||
if strings.ToLower(each) == strings.ToLower(header) {
|
||||
return true
|
||||
}
|
||||
if each == "*" {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// Take a list of strings and compile them into a list of regular expressions.
|
||||
func compileRegexps(regexpStrings []string) ([]*regexp.Regexp, error) {
|
||||
regexps := []*regexp.Regexp{}
|
||||
for _, regexpStr := range regexpStrings {
|
||||
r, err := regexp.Compile(regexpStr)
|
||||
if err != nil {
|
||||
return regexps, err
|
||||
}
|
||||
regexps = append(regexps, r)
|
||||
}
|
||||
return regexps, nil
|
||||
}
|
||||
|
|
@ -47,7 +47,7 @@ func (c CurlyRouter) SelectRoute(
|
|||
func (c CurlyRouter) selectRoutes(ws *WebService, requestTokens []string) sortableCurlyRoutes {
|
||||
candidates := make(sortableCurlyRoutes, 0, 8)
|
||||
for _, each := range ws.routes {
|
||||
matches, paramCount, staticCount := c.matchesRouteByPathTokens(each.pathParts, requestTokens)
|
||||
matches, paramCount, staticCount := c.matchesRouteByPathTokens(each.pathParts, requestTokens, each.hasCustomVerb)
|
||||
if matches {
|
||||
candidates.add(curlyRoute{each, paramCount, staticCount}) // TODO make sure Routes() return pointers?
|
||||
}
|
||||
|
|
@ -57,7 +57,7 @@ func (c CurlyRouter) selectRoutes(ws *WebService, requestTokens []string) sortab
|
|||
}
|
||||
|
||||
// matchesRouteByPathTokens computes whether it matches, how many parameters match and how many static path elements there are.
|
||||
func (c CurlyRouter) matchesRouteByPathTokens(routeTokens, requestTokens []string) (matches bool, paramCount int, staticCount int) {
|
||||
func (c CurlyRouter) matchesRouteByPathTokens(routeTokens, requestTokens []string, routeHasCustomVerb bool) (matches bool, paramCount int, staticCount int) {
|
||||
if len(routeTokens) < len(requestTokens) {
|
||||
// proceed in matching only if last routeToken is wildcard
|
||||
count := len(routeTokens)
|
||||
|
|
@ -72,6 +72,15 @@ func (c CurlyRouter) matchesRouteByPathTokens(routeTokens, requestTokens []strin
|
|||
return false, 0, 0
|
||||
}
|
||||
requestToken := requestTokens[i]
|
||||
if routeHasCustomVerb && hasCustomVerb(routeToken) {
|
||||
if !isMatchCustomVerb(routeToken, requestToken) {
|
||||
return false, 0, 0
|
||||
}
|
||||
staticCount++
|
||||
requestToken = removeCustomVerb(requestToken)
|
||||
routeToken = removeCustomVerb(routeToken)
|
||||
}
|
||||
|
||||
if strings.HasPrefix(routeToken, "{") {
|
||||
paramCount++
|
||||
if colon := strings.Index(routeToken, ":"); colon != -1 {
|
||||
|
|
@ -0,0 +1,29 @@
|
|||
package restful
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"regexp"
|
||||
)
|
||||
|
||||
var (
|
||||
customVerbReg = regexp.MustCompile(":([A-Za-z]+)$")
|
||||
)
|
||||
|
||||
func hasCustomVerb(routeToken string) bool {
|
||||
return customVerbReg.MatchString(routeToken)
|
||||
}
|
||||
|
||||
func isMatchCustomVerb(routeToken string, pathToken string) bool {
|
||||
rs := customVerbReg.FindStringSubmatch(routeToken)
|
||||
if len(rs) < 2 {
|
||||
return false
|
||||
}
|
||||
|
||||
customVerb := rs[1]
|
||||
specificVerbReg := regexp.MustCompile(fmt.Sprintf(":%s$", customVerb))
|
||||
return specificVerbReg.MatchString(pathToken)
|
||||
}
|
||||
|
||||
func removeCustomVerb(str string) string {
|
||||
return customVerbReg.ReplaceAllString(str, "")
|
||||
}
|
||||
|
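The new custom_verb.go helpers above recognize a trailing ":verb" on the last path token. A hedged sketch of how such a route might be declared (handler and path are illustrative; the path-parameter stripping relies on the path processor change later in this diff):

```go
package main

import (
	"log"
	"net/http"

	restful "github.com/emicklei/go-restful/v3"
)

// promoteUser is a hypothetical handler; the ":promote" custom verb is stripped
// before path parameters are extracted, so "id" contains only the identifier.
func promoteUser(req *restful.Request, resp *restful.Response) {
	resp.WriteEntity("promoted " + req.PathParameter("id"))
}

func main() {
	ws := new(restful.WebService)
	ws.Path("/users")
	ws.Route(ws.POST("/{id}:promote").To(promoteUser)) // matches e.g. POST /users/42:promote
	restful.Add(ws)
	log.Fatal(http.ListenAndServe(":8080", restful.DefaultContainer))
}
```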
|
@ -28,7 +28,7 @@ This package has the logic to find the best matching Route and if found, call it
|
|||
|
||||
The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response.
|
||||
|
||||
See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-user-resource.go with a full implementation.
|
||||
See the example https://github.com/emicklei/go-restful/blob/v3/examples/user-resource/restful-user-resource.go with a full implementation.
|
||||
|
||||
Regular expression matching Routes
|
||||
|
||||
|
|
@ -82,7 +82,7 @@ These are processed before calling the function associated with the Route.
|
|||
// install 2 chained route filters (processed before calling findUser)
|
||||
ws.Route(ws.GET("/{user-id}").Filter(routeLogging).Filter(NewCountFilter().routeCounter).To(findUser))
|
||||
|
||||
See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-filters.go with full implementations.
|
||||
See the example https://github.com/emicklei/go-restful/blob/v3/examples/filters/restful-filters.go with full implementations.
|
||||
|
||||
Response Encoding
|
||||
|
||||
|
|
@ -93,7 +93,7 @@ Two encodings are supported: gzip and deflate. To enable this for all responses:
|
|||
If a Http request includes the Accept-Encoding header then the response content will be compressed using the specified encoding.
|
||||
Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route.
|
||||
|
||||
See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-encoding-filter.go
|
||||
See the example https://github.com/emicklei/go-restful/blob/v3/examples/encoding/restful-encoding-filter.go
|
||||
|
||||
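For the doc.go passage above, a one-line sketch in the same snippet style of enabling compression on the default container (the per-route override appears later in this diff as Route.EnableContentEncoding):

```go
// Enable gzip/deflate responses for every route served by the default container.
restful.DefaultContainer.EnableContentEncoding(true)
```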
OPTIONS support
|
||||
|
||||
|
|
@ -0,0 +1,21 @@
|
|||
package restful
|
||||
|
||||
// Copyright 2021 Ernest Micklei. All rights reserved.
|
||||
// Use of this source code is governed by a license
|
||||
// that can be found in the LICENSE file.
|
||||
|
||||
// ExtensionProperties provides storage of vendor extensions for entities
|
||||
type ExtensionProperties struct {
|
||||
// Extensions are vendor extensions used to describe extra functionality
|
||||
// (https://swagger.io/docs/specification/2-0/swagger-extensions/)
|
||||
Extensions map[string]interface{}
|
||||
}
|
||||
|
||||
// AddExtension adds or updates a key=value pair to the extension map.
|
||||
func (ep *ExtensionProperties) AddExtension(key string, value interface{}) {
|
||||
if ep.Extensions == nil {
|
||||
ep.Extensions = map[string]interface{}{key: value}
|
||||
} else {
|
||||
ep.Extensions[key] = value
|
||||
}
|
||||
}
|
||||
|
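The ExtensionProperties type above backs the AddExtension helpers on Parameter and RouteBuilder that appear later in this diff; a hedged snippet in the doc.go style (extension keys, handler and parameter are illustrative):

```go
ws.Route(ws.GET("/{id}").
	Param(ws.PathParameter("id", "identifier of the user").AddExtension("x-owner", "team-a")).
	To(getUser).
	AddExtension("x-audit", true)) // both values are stored in the Extensions maps shown above
```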
|
@ -9,6 +9,8 @@ type FilterChain struct {
|
|||
Filters []FilterFunction // ordered list of FilterFunction
|
||||
Index int // index into filters that is currently in progress
|
||||
Target RouteFunction // function to call after passing all filters
|
||||
ParameterDocs []*Parameter // the parameter docs for the route
|
||||
Operation string // the name of the operation
|
||||
}
|
||||
|
||||
// ProcessFilter passes the request,response pair through the next of Filters.
|
||||
|
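Because FilterChain now carries ParameterDocs and Operation (struct above), a filter can report which operation it wraps. A minimal snippet, assuming the usual FilterFunction signature:

```go
// auditFilter logs the operation name and the number of documented parameters
// of the route being invoked, then continues the chain.
func auditFilter(req *restful.Request, resp *restful.Response, chain *restful.FilterChain) {
	log.Printf("operation=%q params=%d", chain.Operation, len(chain.ParameterDocs))
	chain.ProcessFilter(req, resp)
}
```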
|
@ -9,6 +9,7 @@ import (
|
|||
"fmt"
|
||||
"net/http"
|
||||
"sort"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// RouterJSR311 implements the flow for matching Requests to Routes (and consequently Resource Functions)
|
||||
|
|
@ -98,7 +99,18 @@ func (r RouterJSR311) detectRoute(routes []Route, httpRequest *http.Request) (*R
|
|||
if trace {
|
||||
traceLogger.Printf("no Route found (in %d routes) that matches HTTP method %s\n", len(previous), httpRequest.Method)
|
||||
}
|
||||
return nil, NewError(http.StatusMethodNotAllowed, "405: Method Not Allowed")
|
||||
allowed := []string{}
|
||||
allowedLoop:
|
||||
for _, candidate := range previous {
|
||||
for _, method := range allowed {
|
||||
if method == candidate.Method {
|
||||
continue allowedLoop
|
||||
}
|
||||
}
|
||||
allowed = append(allowed, candidate.Method)
|
||||
}
|
||||
header := http.Header{"Allow": []string{strings.Join(allowed, ", ")}}
|
||||
return nil, NewErrorWithHeader(http.StatusMethodNotAllowed, "405: Method Not Allowed", header)
|
||||
}
|
||||
|
||||
// content-type
|
||||
|
|
@ -135,7 +147,24 @@ func (r RouterJSR311) detectRoute(routes []Route, httpRequest *http.Request) (*R
|
|||
if trace {
|
||||
traceLogger.Printf("no Route found (from %d) that matches HTTP Accept: %s\n", len(previous), accept)
|
||||
}
|
||||
return nil, NewError(http.StatusNotAcceptable, "406: Not Acceptable")
|
||||
available := []string{}
|
||||
for _, candidate := range previous {
|
||||
available = append(available, candidate.Produces...)
|
||||
}
|
||||
// if POST,PUT,PATCH without body
|
||||
method, length := httpRequest.Method, httpRequest.Header.Get("Content-Length")
|
||||
if (method == http.MethodPost ||
|
||||
method == http.MethodPut ||
|
||||
method == http.MethodPatch) && length == "" {
|
||||
return nil, NewError(
|
||||
http.StatusUnsupportedMediaType,
|
||||
fmt.Sprintf("415: Unsupported Media Type\n\nAvailable representations: %s", strings.Join(available, ", ")),
|
||||
)
|
||||
}
|
||||
return nil, NewError(
|
||||
http.StatusNotAcceptable,
|
||||
fmt.Sprintf("406: Not Acceptable\n\nAvailable representations: %s", strings.Join(available, ", ")),
|
||||
)
|
||||
}
|
||||
// return r.bestMatchByMedia(outputMediaOk, contentType, accept), nil
|
||||
return candidates[0], nil
|
||||
|
|
@ -4,7 +4,7 @@ package restful
|
|||
// Use of this source code is governed by a license
|
||||
// that can be found in the LICENSE file.
|
||||
import (
|
||||
"github.com/emicklei/go-restful/log"
|
||||
"github.com/emicklei/go-restful/v3/log"
|
||||
)
|
||||
|
||||
var trace bool = false
|
||||
|
|
@ -1,5 +1,7 @@
|
|||
package restful
|
||||
|
||||
import "sort"
|
||||
|
||||
// Copyright 2013 Ernest Micklei. All rights reserved.
|
||||
// Use of this source code is governed by a license
|
||||
// that can be found in the LICENSE file.
|
||||
|
|
@ -52,13 +54,25 @@ type Parameter struct {
|
|||
// ParameterData represents the state of a Parameter.
|
||||
// It is made public to make it accessible to e.g. the Swagger package.
|
||||
type ParameterData struct {
|
||||
ExtensionProperties
|
||||
Name, Description, DataType, DataFormat string
|
||||
Kind int
|
||||
Required bool
|
||||
// AllowableValues is deprecated. Use PossibleValues instead
|
||||
AllowableValues map[string]string
|
||||
PossibleValues []string
|
||||
AllowMultiple bool
|
||||
AllowEmptyValue bool
|
||||
DefaultValue string
|
||||
CollectionFormat string
|
||||
Pattern string
|
||||
Minimum *float64
|
||||
Maximum *float64
|
||||
MinLength *int64
|
||||
MaxLength *int64
|
||||
MinItems *int64
|
||||
MaxItems *int64
|
||||
UniqueItems bool
|
||||
}
|
||||
|
||||
// Data returns the state of the Parameter
|
||||
|
|
@ -106,9 +120,38 @@ func (p *Parameter) AllowMultiple(multiple bool) *Parameter {
|
|||
return p
|
||||
}
|
||||
|
||||
// AllowableValues sets the allowableValues field and returns the receiver
|
||||
// AddExtension adds or updates a key=value pair to the extension map
|
||||
func (p *Parameter) AddExtension(key string, value interface{}) *Parameter {
|
||||
p.data.AddExtension(key, value)
|
||||
return p
|
||||
}
|
||||
|
||||
// AllowEmptyValue sets the AllowEmptyValue field and returns the receiver
|
||||
func (p *Parameter) AllowEmptyValue(multiple bool) *Parameter {
|
||||
p.data.AllowEmptyValue = multiple
|
||||
return p
|
||||
}
|
||||
|
||||
// AllowableValues is deprecated. Use PossibleValues instead. Both will be set.
|
||||
func (p *Parameter) AllowableValues(values map[string]string) *Parameter {
|
||||
p.data.AllowableValues = values
|
||||
|
||||
allowableSortedKeys := make([]string, 0, len(values))
|
||||
for k := range values {
|
||||
allowableSortedKeys = append(allowableSortedKeys, k)
|
||||
}
|
||||
sort.Strings(allowableSortedKeys)
|
||||
|
||||
p.data.PossibleValues = make([]string, 0, len(values))
|
||||
for _, k := range allowableSortedKeys {
|
||||
p.data.PossibleValues = append(p.data.PossibleValues, values[k])
|
||||
}
|
||||
return p
|
||||
}
|
||||
|
||||
// PossibleValues sets the possible values field and returns the receiver
|
||||
func (p *Parameter) PossibleValues(values []string) *Parameter {
|
||||
p.data.PossibleValues = values
|
||||
return p
|
||||
}
|
||||
|
||||
|
|
@ -141,3 +184,51 @@ func (p *Parameter) CollectionFormat(format CollectionFormat) *Parameter {
|
|||
p.data.CollectionFormat = format.String()
|
||||
return p
|
||||
}
|
||||
|
||||
// Pattern sets the pattern field and returns the receiver
|
||||
func (p *Parameter) Pattern(pattern string) *Parameter {
|
||||
p.data.Pattern = pattern
|
||||
return p
|
||||
}
|
||||
|
||||
// Minimum sets the minimum field and returns the receiver
|
||||
func (p *Parameter) Minimum(minimum float64) *Parameter {
|
||||
p.data.Minimum = &minimum
|
||||
return p
|
||||
}
|
||||
|
||||
// Maximum sets the maximum field and returns the receiver
|
||||
func (p *Parameter) Maximum(maximum float64) *Parameter {
|
||||
p.data.Maximum = &maximum
|
||||
return p
|
||||
}
|
||||
|
||||
// MinLength sets the minLength field and returns the receiver
|
||||
func (p *Parameter) MinLength(minLength int64) *Parameter {
|
||||
p.data.MinLength = &minLength
|
||||
return p
|
||||
}
|
||||
|
||||
// MaxLength sets the maxLength field and returns the receiver
|
||||
func (p *Parameter) MaxLength(maxLength int64) *Parameter {
|
||||
p.data.MaxLength = &maxLength
|
||||
return p
|
||||
}
|
||||
|
||||
// MinItems sets the minItems field and returns the receiver
|
||||
func (p *Parameter) MinItems(minItems int64) *Parameter {
|
||||
p.data.MinItems = &minItems
|
||||
return p
|
||||
}
|
||||
|
||||
// MaxItems sets the maxItems field and returns the receiver
|
||||
func (p *Parameter) MaxItems(maxItems int64) *Parameter {
|
||||
p.data.MaxItems = &maxItems
|
||||
return p
|
||||
}
|
||||
|
||||
// UniqueItems sets the uniqueItems field and returns the receiver
|
||||
func (p *Parameter) UniqueItems(uniqueItems bool) *Parameter {
|
||||
p.data.UniqueItems = uniqueItems
|
||||
return p
|
||||
}
|
||||
|
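A snippet combining the validation setters added above with existing builder helpers (QueryParameter, DataType, DefaultValue); route and handler names are illustrative:

```go
ws.Route(ws.GET("/").
	Param(ws.QueryParameter("limit", "maximum number of results").
		DataType("integer").
		DefaultValue("20").
		Minimum(1).
		Maximum(100)).
	To(listUsers))
```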
|
@ -29,7 +29,12 @@ func (d defaultPathProcessor) ExtractParameters(r *Route, _ *WebService, urlPath
|
|||
} else {
|
||||
value = urlParts[i]
|
||||
}
|
||||
if strings.HasPrefix(key, "{") { // path-parameter
|
||||
if r.hasCustomVerb && hasCustomVerb(key) {
|
||||
key = removeCustomVerb(key)
|
||||
value = removeCustomVerb(value)
|
||||
}
|
||||
|
||||
if strings.Index(key, "{") > -1 { // path-parameter
|
||||
if colon := strings.Index(key, ":"); colon != -1 {
|
||||
// extract by regex
|
||||
regPart := key[colon+1 : len(key)-1]
|
||||
|
|
@ -42,7 +47,13 @@ func (d defaultPathProcessor) ExtractParameters(r *Route, _ *WebService, urlPath
|
|||
}
|
||||
} else {
|
||||
// without enclosing {}
|
||||
pathParameters[key[1:len(key)-1]] = value
|
||||
startIndex := strings.Index(key, "{")
|
||||
endKeyIndex := strings.Index(key, "}")
|
||||
|
||||
suffixLength := len(key) - endKeyIndex - 1
|
||||
endValueIndex := len(value) - suffixLength
|
||||
|
||||
pathParameters[key[startIndex+1:endKeyIndex]] = value[startIndex:endValueIndex]
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@ -16,7 +16,7 @@ type Request struct {
|
|||
Request *http.Request
|
||||
pathParameters map[string]string
|
||||
attributes map[string]interface{} // for storing request-scoped values
|
||||
selectedRoutePath string // root path + route path that matched the request, e.g. /meetings/{id}/attendees
|
||||
selectedRoute *Route // is nil when no route was matched
|
||||
}
|
||||
|
||||
func NewRequest(httpRequest *http.Request) *Request {
|
||||
|
|
@ -113,6 +113,20 @@ func (r Request) Attribute(name string) interface{} {
|
|||
}
|
||||
|
||||
// SelectedRoutePath returns the root path + route path that matched the request, e.g. /meetings/{id}/attendees
|
||||
// If no route was matched then return an empty string.
|
||||
func (r Request) SelectedRoutePath() string {
|
||||
return r.selectedRoutePath
|
||||
if r.selectedRoute == nil {
|
||||
return ""
|
||||
}
|
||||
// skip creating an accessor
|
||||
return r.selectedRoute.Path
|
||||
}
|
||||
|
||||
// SelectedRoute returns a reader to access the selected Route by the container
|
||||
// Returns nil if no route was matched.
|
||||
func (r Request) SelectedRoute() RouteReader {
|
||||
if r.selectedRoute == nil {
|
||||
return nil
|
||||
}
|
||||
return routeAccessor{route: r.selectedRoute}
|
||||
}
|
||||
|
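A snippet of the new read-only accessor above: a filter can inspect the matched route through RouteReader (defined later in this diff) without being able to mutate it; SelectedRoute returns nil when nothing matched.

```go
func routeLogger(req *restful.Request, resp *restful.Response, chain *restful.FilterChain) {
	if r := req.SelectedRoute(); r != nil {
		log.Printf("%s %s (operation %q)", r.Method(), r.Path(), r.Operation())
	}
	chain.ProcessFilter(req, resp)
}
```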
|
@ -174,15 +174,16 @@ func (r *Response) WriteHeaderAndJson(status int, value interface{}, contentType
|
|||
return writeJSON(r, status, contentType, value)
|
||||
}
|
||||
|
||||
// WriteError write the http status and the error string on the response. err can be nil.
|
||||
func (r *Response) WriteError(httpStatus int, err error) error {
|
||||
// WriteError writes the http status and the error string on the response. err can be nil.
|
||||
// Return an error if writing was not successful.
|
||||
func (r *Response) WriteError(httpStatus int, err error) (writeErr error) {
|
||||
r.err = err
|
||||
if err == nil {
|
||||
r.WriteErrorString(httpStatus, "")
|
||||
writeErr = r.WriteErrorString(httpStatus, "")
|
||||
} else {
|
||||
r.WriteErrorString(httpStatus, err.Error())
|
||||
writeErr = r.WriteErrorString(httpStatus, err.Error())
|
||||
}
|
||||
return err
|
||||
return writeErr
|
||||
}
|
||||
|
||||
// WriteServiceError is a convenience method for responding with a status and a ServiceError
|
||||
|
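Because WriteError now returns the write error rather than echoing its argument, callers can distinguish a failed write from the application error they passed in; a snippet (validationErr is a hypothetical error value):

```go
if writeErr := resp.WriteError(http.StatusBadRequest, validationErr); writeErr != nil {
	log.Printf("could not write error response: %v", writeErr)
}
```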
|
@ -19,6 +19,7 @@ type RouteSelectionConditionFunction func(httpRequest *http.Request) bool
|
|||
|
||||
// Route binds a HTTP Method,Path,Consumes combination to a RouteFunction.
|
||||
type Route struct {
|
||||
ExtensionProperties
|
||||
Method string
|
||||
Produces []string
|
||||
Consumes []string
|
||||
|
|
@ -49,35 +50,33 @@ type Route struct {
|
|||
|
||||
//Overrides the container.contentEncodingEnabled
|
||||
contentEncodingEnabled *bool
|
||||
|
||||
// indicates whether the route path has a custom verb
|
||||
hasCustomVerb bool
|
||||
|
||||
// if a request does not include a content-type header then
|
||||
// depending on the method, it may return a 415 Unsupported Media
|
||||
// Must have uppercase HTTP Method names such as GET,HEAD,OPTIONS,...
|
||||
allowedMethodsWithoutContentType []string
|
||||
}
|
||||
|
||||
// Initialize for Route
|
||||
func (r *Route) postBuild() {
|
||||
r.pathParts = tokenizePath(r.Path)
|
||||
r.hasCustomVerb = hasCustomVerb(r.Path)
|
||||
}
|
||||
|
||||
// Create Request and Response from their http versions
|
||||
func (r *Route) wrapRequestResponse(httpWriter http.ResponseWriter, httpRequest *http.Request, pathParams map[string]string) (*Request, *Response) {
|
||||
wrappedRequest := NewRequest(httpRequest)
|
||||
wrappedRequest.pathParameters = pathParams
|
||||
wrappedRequest.selectedRoutePath = r.Path
|
||||
wrappedRequest.selectedRoute = r
|
||||
wrappedResponse := NewResponse(httpWriter)
|
||||
wrappedResponse.requestAccept = httpRequest.Header.Get(HEADER_Accept)
|
||||
wrappedResponse.routeProduces = r.Produces
|
||||
return wrappedRequest, wrappedResponse
|
||||
}
|
||||
|
||||
// dispatchWithFilters calls the function after passing through its own filters
|
||||
func (r *Route) dispatchWithFilters(wrappedRequest *Request, wrappedResponse *Response) {
|
||||
if len(r.Filters) > 0 {
|
||||
chain := FilterChain{Filters: r.Filters, Target: r.Function}
|
||||
chain.ProcessFilter(wrappedRequest, wrappedResponse)
|
||||
} else {
|
||||
// unfiltered
|
||||
r.Function(wrappedRequest, wrappedResponse)
|
||||
}
|
||||
}
|
||||
|
||||
func stringTrimSpaceCutset(r rune) bool {
|
||||
return r == ' '
|
||||
}
|
||||
|
|
@ -121,9 +120,18 @@ func (r Route) matchesContentType(mimeTypes string) bool {
|
|||
if len(mimeTypes) == 0 {
|
||||
// idempotent methods with (most-likely or guaranteed) empty content match missing Content-Type
|
||||
m := r.Method
|
||||
// if route specifies less or non-idempotent methods then use that
|
||||
if len(r.allowedMethodsWithoutContentType) > 0 {
|
||||
for _, each := range r.allowedMethodsWithoutContentType {
|
||||
if m == each {
|
||||
return true
|
||||
}
|
||||
}
|
||||
} else {
|
||||
if m == "GET" || m == "HEAD" || m == "OPTIONS" || m == "DELETE" || m == "TRACE" {
|
||||
return true
|
||||
}
|
||||
}
|
||||
// proceed with default
|
||||
mimeTypes = MIME_OCTET
|
||||
}
|
||||
|
|
@ -160,11 +168,11 @@ func tokenizePath(path string) []string {
|
|||
}
|
||||
|
||||
// for debugging
|
||||
func (r Route) String() string {
|
||||
func (r *Route) String() string {
|
||||
return r.Method + " " + r.Path
|
||||
}
|
||||
|
||||
// EnableContentEncoding (default=false) allows for GZIP or DEFLATE encoding of responses. Overrides the container.contentEncodingEnabled value.
|
||||
func (r Route) EnableContentEncoding(enabled bool) {
|
||||
func (r *Route) EnableContentEncoding(enabled bool) {
|
||||
r.contentEncodingEnabled = &enabled
|
||||
}
|
||||
|
|
@ -12,7 +12,7 @@ import (
|
|||
"strings"
|
||||
"sync/atomic"
|
||||
|
||||
"github.com/emicklei/go-restful/log"
|
||||
"github.com/emicklei/go-restful/v3/log"
|
||||
)
|
||||
|
||||
// RouteBuilder is a helper to construct Routes.
|
||||
|
|
@ -25,6 +25,7 @@ type RouteBuilder struct {
|
|||
function RouteFunction // required
|
||||
filters []FilterFunction
|
||||
conditions []RouteSelectionConditionFunction
|
||||
allowedMethodsWithoutContentType []string // see Route
|
||||
|
||||
typeNameHandleFunc TypeNameHandleFunction // required
|
||||
|
||||
|
|
@ -37,6 +38,7 @@ type RouteBuilder struct {
|
|||
errorMap map[int]ResponseError
|
||||
defaultResponse *ResponseError
|
||||
metadata map[string]interface{}
|
||||
extensions map[string]interface{}
|
||||
deprecated bool
|
||||
contentEncodingEnabled *bool
|
||||
}
|
||||
|
|
@ -176,6 +178,15 @@ func (b *RouteBuilder) Returns(code int, message string, model interface{}) *Rou
|
|||
return b
|
||||
}
|
||||
|
||||
// ReturnsWithHeaders is similar to Returns, but can specify response headers
|
||||
func (b *RouteBuilder) ReturnsWithHeaders(code int, message string, model interface{}, headers map[string]Header) *RouteBuilder {
|
||||
b.Returns(code, message, model)
|
||||
err := b.errorMap[code]
|
||||
err.Headers = headers
|
||||
b.errorMap[code] = err
|
||||
return b
|
||||
}
|
||||
|
||||
// DefaultReturns is a special Returns call that sets the default of the response.
|
||||
func (b *RouteBuilder) DefaultReturns(message string, model interface{}) *RouteBuilder {
|
||||
b.defaultResponse = &ResponseError{
|
||||
|
|
@ -194,20 +205,57 @@ func (b *RouteBuilder) Metadata(key string, value interface{}) *RouteBuilder {
|
|||
return b
|
||||
}
|
||||
|
||||
// AddExtension adds or updates a key=value pair to the extensions map.
|
||||
func (b *RouteBuilder) AddExtension(key string, value interface{}) *RouteBuilder {
|
||||
if b.extensions == nil {
|
||||
b.extensions = map[string]interface{}{}
|
||||
}
|
||||
b.extensions[key] = value
|
||||
return b
|
||||
}
|
||||
|
||||
// Deprecate sets the value of deprecated to true. Deprecated routes have a special UI treatment to warn against use
|
||||
func (b *RouteBuilder) Deprecate() *RouteBuilder {
|
||||
b.deprecated = true
|
||||
return b
|
||||
}
|
||||
|
||||
// AllowedMethodsWithoutContentType overrides the default list GET,HEAD,OPTIONS,DELETE,TRACE
|
||||
// If a request does not include a content-type header then
|
||||
// depending on the method, it may return a 415 Unsupported Media.
|
||||
// Must have uppercase HTTP Method names such as GET,HEAD,OPTIONS,...
|
||||
func (b *RouteBuilder) AllowedMethodsWithoutContentType(methods []string) *RouteBuilder {
|
||||
b.allowedMethodsWithoutContentType = methods
|
||||
return b
|
||||
}
|
||||
|
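A snippet of the new builder option above: let a body-less POST through without a Content-Type header instead of rejecting it with 415 (route and handler are illustrative):

```go
ws.Route(ws.POST("/jobs/{id}/cancel").
	AllowedMethodsWithoutContentType([]string{"POST"}).
	To(cancelJob))
```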
||||
// ResponseError represents a response; not necessarily an error.
|
||||
type ResponseError struct {
|
||||
ExtensionProperties
|
||||
Code int
|
||||
Message string
|
||||
Model interface{}
|
||||
Headers map[string]Header
|
||||
IsDefault bool
|
||||
}
|
||||
|
||||
// Header describes a header for a response of the API
|
||||
//
|
||||
// For more information: http://goo.gl/8us55a#headerObject
|
||||
type Header struct {
|
||||
*Items
|
||||
Description string
|
||||
}
|
||||
|
||||
// Items describe swagger simple schemas for headers
|
||||
type Items struct {
|
||||
Type string
|
||||
Format string
|
||||
Items *Items
|
||||
CollectionFormat string
|
||||
Default interface{}
|
||||
}
|
||||
|
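A snippet using ReturnsWithHeaders together with the Header and Items types above to document a response header (the User model and header name are illustrative):

```go
ws.Route(ws.GET("/{id}").To(getUser).
	ReturnsWithHeaders(http.StatusOK, "OK", User{}, map[string]restful.Header{
		"X-Request-Id": {
			Items:       &restful.Items{Type: "string"},
			Description: "correlation id echoed back to the caller",
		},
	}))
```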
||||
func (b *RouteBuilder) servicePath(path string) *RouteBuilder {
|
||||
b.rootPath = path
|
||||
return b
|
||||
|
|
@ -296,7 +344,9 @@ func (b *RouteBuilder) Build() Route {
|
|||
Metadata: b.metadata,
|
||||
Deprecated: b.deprecated,
|
||||
contentEncodingEnabled: b.contentEncodingEnabled,
|
||||
allowedMethodsWithoutContentType: b.allowedMethodsWithoutContentType,
|
||||
}
|
||||
route.Extensions = b.extensions
|
||||
route.postBuild()
|
||||
return route
|
||||
}
|
||||
|
|
@ -0,0 +1,66 @@
|
|||
package restful
|
||||
|
||||
// Copyright 2021 Ernest Micklei. All rights reserved.
|
||||
// Use of this source code is governed by a license
|
||||
// that can be found in the LICENSE file.
|
||||
|
||||
type RouteReader interface {
|
||||
Method() string
|
||||
Consumes() []string
|
||||
Path() string
|
||||
Doc() string
|
||||
Notes() string
|
||||
Operation() string
|
||||
ParameterDocs() []*Parameter
|
||||
// Returns a copy
|
||||
Metadata() map[string]interface{}
|
||||
Deprecated() bool
|
||||
}
|
||||
|
||||
type routeAccessor struct {
|
||||
route *Route
|
||||
}
|
||||
|
||||
func (r routeAccessor) Method() string {
|
||||
return r.route.Method
|
||||
}
|
||||
func (r routeAccessor) Consumes() []string {
|
||||
return r.route.Consumes[:]
|
||||
}
|
||||
func (r routeAccessor) Path() string {
|
||||
return r.route.Path
|
||||
}
|
||||
func (r routeAccessor) Doc() string {
|
||||
return r.route.Doc
|
||||
}
|
||||
func (r routeAccessor) Notes() string {
|
||||
return r.route.Notes
|
||||
}
|
||||
func (r routeAccessor) Operation() string {
|
||||
return r.route.Operation
|
||||
}
|
||||
func (r routeAccessor) ParameterDocs() []*Parameter {
|
||||
return r.route.ParameterDocs[:]
|
||||
}
|
||||
|
||||
// Returns a copy
|
||||
func (r routeAccessor) Metadata() map[string]interface{} {
|
||||
return copyMap(r.route.Metadata)
|
||||
}
|
||||
func (r routeAccessor) Deprecated() bool {
|
||||
return r.route.Deprecated
|
||||
}
|
||||
|
||||
// https://stackoverflow.com/questions/23057785/how-to-copy-a-map
|
||||
func copyMap(m map[string]interface{}) map[string]interface{} {
|
||||
cp := make(map[string]interface{})
|
||||
for k, v := range m {
|
||||
vm, ok := v.(map[string]interface{})
|
||||
if ok {
|
||||
cp[k] = copyMap(vm)
|
||||
} else {
|
||||
cp[k] = v
|
||||
}
|
||||
}
|
||||
return cp
|
||||
}
|
||||
|
|
@ -4,12 +4,16 @@ package restful
|
|||
// Use of this source code is governed by a license
|
||||
// that can be found in the LICENSE file.
|
||||
|
||||
import "fmt"
|
||||
import (
|
||||
"fmt"
|
||||
"net/http"
|
||||
)
|
||||
|
||||
// ServiceError is a transport object to pass information about a non-HTTP error that occurred in a WebService while processing a request.
|
||||
type ServiceError struct {
|
||||
Code int
|
||||
Message string
|
||||
Header http.Header
|
||||
}
|
||||
|
||||
// NewError returns a ServiceError using the code and reason
|
||||
|
|
@ -17,6 +21,11 @@ func NewError(code int, message string) ServiceError {
|
|||
return ServiceError{Code: code, Message: message}
|
||||
}
|
||||
|
||||
// NewErrorWithHeader returns a ServiceError using the code, reason and header
|
||||
func NewErrorWithHeader(code int, message string, header http.Header) ServiceError {
|
||||
return ServiceError{Code: code, Message: message, Header: header}
|
||||
}
|
||||
|
||||
// Error returns a text representation of the service error
|
||||
func (s ServiceError) Error() string {
|
||||
return fmt.Sprintf("[ServiceError:%v] %v", s.Code, s.Message)
|
||||
|
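A snippet of the new constructor above; the Allow header mirrors how the JSR311 router (earlier in this diff) reports 405, and writeServiceError copies the header onto the response:

```go
err := restful.NewErrorWithHeader(
	http.StatusMethodNotAllowed,
	"405: Method Not Allowed",
	http.Header{"Allow": []string{"GET, POST"}},
)
fmt.Println(err.Error()) // [ServiceError:405] 405: Method Not Allowed
```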
|
@ -6,7 +6,7 @@ import (
|
|||
"reflect"
|
||||
"sync"
|
||||
|
||||
"github.com/emicklei/go-restful/log"
|
||||
"github.com/emicklei/go-restful/v3/log"
|
||||
)
|
||||
|
||||
// Copyright 2013 Ernest Micklei. All rights reserved.
|
||||
|
|
@ -181,14 +181,12 @@ func (w *WebService) RemoveRoute(path, method string) error {
|
|||
}
|
||||
w.routesLock.Lock()
|
||||
defer w.routesLock.Unlock()
|
||||
newRoutes := make([]Route, (len(w.routes) - 1))
|
||||
current := 0
|
||||
for ix := range w.routes {
|
||||
if w.routes[ix].Method == method && w.routes[ix].Path == path {
|
||||
newRoutes := []Route{}
|
||||
for _, route := range w.routes {
|
||||
if route.Method == method && route.Path == path {
|
||||
continue
|
||||
}
|
||||
newRoutes[current] = w.routes[ix]
|
||||
current = current + 1
|
||||
newRoutes = append(newRoutes, route)
|
||||
}
|
||||
w.routes = newRoutes
|
||||
return nil
|
||||
|
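A snippet of calling the rewritten RemoveRoute above; path and method must match a registered route:

```go
if err := ws.RemoveRoute("/users/{id}", "GET"); err != nil {
	log.Printf("remove failed: %v", err)
}
```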
|
@ -288,3 +286,8 @@ func (w *WebService) PATCH(subPath string) *RouteBuilder {
|
|||
func (w *WebService) DELETE(subPath string) *RouteBuilder {
|
||||
return new(RouteBuilder).typeNameHandler(w.typeNameHandleFunc).servicePath(w.rootPath).Method("DELETE").Path(subPath)
|
||||
}
|
||||
|
||||
// OPTIONS is a shortcut for .Method("OPTIONS").Path(subPath)
|
||||
func (w *WebService) OPTIONS(subPath string) *RouteBuilder {
|
||||
return new(RouteBuilder).typeNameHandler(w.typeNameHandleFunc).servicePath(w.rootPath).Method("OPTIONS").Path(subPath)
|
||||
}
|
||||
|
|
@ -1,61 +0,0 @@
|
|||
// Package diskcache provides an implementation of httpcache.Cache that uses the diskv package
|
||||
// to supplement an in-memory map with persistent storage
|
||||
//
|
||||
package diskcache
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/md5"
|
||||
"encoding/hex"
|
||||
"github.com/peterbourgon/diskv"
|
||||
"io"
|
||||
)
|
||||
|
||||
// Cache is an implementation of httpcache.Cache that supplements the in-memory map with persistent storage
|
||||
type Cache struct {
|
||||
d *diskv.Diskv
|
||||
}
|
||||
|
||||
// Get returns the response corresponding to key if present
|
||||
func (c *Cache) Get(key string) (resp []byte, ok bool) {
|
||||
key = keyToFilename(key)
|
||||
resp, err := c.d.Read(key)
|
||||
if err != nil {
|
||||
return []byte{}, false
|
||||
}
|
||||
return resp, true
|
||||
}
|
||||
|
||||
// Set saves a response to the cache as key
|
||||
func (c *Cache) Set(key string, resp []byte) {
|
||||
key = keyToFilename(key)
|
||||
c.d.WriteStream(key, bytes.NewReader(resp), true)
|
||||
}
|
||||
|
||||
// Delete removes the response with key from the cache
|
||||
func (c *Cache) Delete(key string) {
|
||||
key = keyToFilename(key)
|
||||
c.d.Erase(key)
|
||||
}
|
||||
|
||||
func keyToFilename(key string) string {
|
||||
h := md5.New()
|
||||
io.WriteString(h, key)
|
||||
return hex.EncodeToString(h.Sum(nil))
|
||||
}
|
||||
|
||||
// New returns a new Cache that will store files in basePath
|
||||
func New(basePath string) *Cache {
|
||||
return &Cache{
|
||||
d: diskv.New(diskv.Options{
|
||||
BasePath: basePath,
|
||||
CacheSizeMax: 100 * 1024 * 1024, // 100MB
|
||||
}),
|
||||
}
|
||||
}
|
||||
|
||||
// NewWithDiskv returns a new Cache using the provided Diskv as underlying
|
||||
// storage.
|
||||
func NewWithDiskv(d *diskv.Diskv) *Cache {
|
||||
return &Cache{d}
|
||||
}
|
||||
|
|
@ -39,7 +39,7 @@ func (ServerStorageVersion) SwaggerDoc() map[string]string {
|
|||
}
|
||||
|
||||
var map_StorageVersion = map[string]string{
|
||||
"": "\n Storage version of a specific resource.",
|
||||
"": "Storage version of a specific resource.",
|
||||
"metadata": "The name is <group>.<resource>.",
|
||||
"spec": "Spec is an empty spec. It is here to comply with Kubernetes API style.",
|
||||
"status": "API server instances report the version they can decode and the version they encode objects to when persisting objects in the backend.",
|
||||
|
|
|
|||
|
|
@ -513,7 +513,6 @@ message RollingUpdateDaemonSet {
|
|||
// daemonset on any given node can double if the readiness check fails, and
|
||||
// so resource intensive daemonsets should take into account that they may
|
||||
// cause evictions during disruption.
|
||||
// This is beta field and enabled/disabled by DaemonSetUpdateSurge feature gate.
|
||||
// +optional
|
||||
optional k8s.io.apimachinery.pkg.util.intstr.IntOrString maxSurge = 2;
|
||||
}
|
||||
|
|
@ -572,6 +571,7 @@ message RollingUpdateStatefulSetStrategy {
|
|||
// Identities are defined as:
|
||||
// - Network: A single stable DNS and hostname.
|
||||
// - Storage: As many VolumeClaims as requested.
|
||||
//
|
||||
// The StatefulSet guarantees that a given network identity will always
|
||||
// map to the same storage identity.
|
||||
message StatefulSet {
|
||||
|
|
@ -702,7 +702,6 @@ message StatefulSetSpec {
|
|||
// Minimum number of seconds for which a newly created pod should be ready
|
||||
// without any of its container crashing for it to be considered available.
|
||||
// Defaults to 0 (pod will be considered available as soon as it is ready)
|
||||
// This is an alpha field and requires enabling StatefulSetMinReadySeconds feature gate.
|
||||
// +optional
|
||||
optional int32 minReadySeconds = 9;
|
||||
|
||||
|
|
@ -758,7 +757,6 @@ message StatefulSetStatus {
|
|||
repeated StatefulSetCondition conditions = 10;
|
||||
|
||||
// Total number of available pods (ready for at least minReadySeconds) targeted by this statefulset.
|
||||
// This is a beta field and enabled/disabled by StatefulSetMinReadySeconds feature gate.
|
||||
// +optional
|
||||
optional int32 availableReplicas = 11;
|
||||
}
|
||||
|
|
|
|||
|
|
@ -41,6 +41,7 @@ const (
|
|||
// Identities are defined as:
|
||||
// - Network: A single stable DNS and hostname.
|
||||
// - Storage: As many VolumeClaims as requested.
|
||||
//
|
||||
// The StatefulSet guarantees that a given network identity will always
|
||||
// map to the same storage identity.
|
||||
type StatefulSet struct {
|
||||
|
|
@ -225,7 +226,6 @@ type StatefulSetSpec struct {
|
|||
// Minimum number of seconds for which a newly created pod should be ready
|
||||
// without any of its container crashing for it to be considered available.
|
||||
// Defaults to 0 (pod will be considered available as soon as it is ready)
|
||||
// This is an alpha field and requires enabling StatefulSetMinReadySeconds feature gate.
|
||||
// +optional
|
||||
MinReadySeconds int32 `json:"minReadySeconds,omitempty" protobuf:"varint,9,opt,name=minReadySeconds"`
|
||||
|
||||
|
|
@ -281,7 +281,6 @@ type StatefulSetStatus struct {
|
|||
Conditions []StatefulSetCondition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,10,rep,name=conditions"`
|
||||
|
||||
// Total number of available pods (ready for at least minReadySeconds) targeted by this statefulset.
|
||||
// This is a beta field and enabled/disabled by StatefulSetMinReadySeconds feature gate.
|
||||
// +optional
|
||||
AvailableReplicas int32 `json:"availableReplicas" protobuf:"varint,11,opt,name=availableReplicas"`
|
||||
}
|
||||
|
|
@ -598,7 +597,6 @@ type RollingUpdateDaemonSet struct {
|
|||
// daemonset on any given node can double if the readiness check fails, and
|
||||
// so resource intensive daemonsets should take into account that they may
|
||||
// cause evictions during disruption.
|
||||
// This is beta field and enabled/disabled by DaemonSetUpdateSurge feature gate.
|
||||
// +optional
|
||||
MaxSurge *intstr.IntOrString `json:"maxSurge,omitempty" protobuf:"bytes,2,opt,name=maxSurge"`
|
||||
}
|
||||
|
|
|
|||
|
|
@ -263,7 +263,7 @@ func (ReplicaSetStatus) SwaggerDoc() map[string]string {
|
|||
var map_RollingUpdateDaemonSet = map[string]string{
|
||||
"": "Spec to control the desired behavior of daemon set rolling update.",
|
||||
"maxUnavailable": "The maximum number of DaemonSet pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0 if MaxSurge is 0 Default value is 1. Example: when this is set to 30%, at most 30% of the total number of nodes that should be running the daemon pod (i.e. status.desiredNumberScheduled) can have their pods stopped for an update at any given time. The update starts by stopping at most 30% of those DaemonSet pods and then brings up new DaemonSet pods in their place. Once the new pods are available, it then proceeds onto other DaemonSet pods, thus ensuring that at least 70% of original number of DaemonSet pods are available at all times during the update.",
|
||||
"maxSurge": "The maximum number of nodes with an existing available DaemonSet pod that can have an updated DaemonSet pod during during an update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up to a minimum of 1. Default value is 0. Example: when this is set to 30%, at most 30% of the total number of nodes that should be running the daemon pod (i.e. status.desiredNumberScheduled) can have their a new pod created before the old pod is marked as deleted. The update starts by launching new pods on 30% of nodes. Once an updated pod is available (Ready for at least minReadySeconds) the old DaemonSet pod on that node is marked deleted. If the old pod becomes unavailable for any reason (Ready transitions to false, is evicted, or is drained) an updated pod is immediatedly created on that node without considering surge limits. Allowing surge implies the possibility that the resources consumed by the daemonset on any given node can double if the readiness check fails, and so resource intensive daemonsets should take into account that they may cause evictions during disruption. This is beta field and enabled/disabled by DaemonSetUpdateSurge feature gate.",
|
||||
"maxSurge": "The maximum number of nodes with an existing available DaemonSet pod that can have an updated DaemonSet pod during during an update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up to a minimum of 1. Default value is 0. Example: when this is set to 30%, at most 30% of the total number of nodes that should be running the daemon pod (i.e. status.desiredNumberScheduled) can have their a new pod created before the old pod is marked as deleted. The update starts by launching new pods on 30% of nodes. Once an updated pod is available (Ready for at least minReadySeconds) the old DaemonSet pod on that node is marked deleted. If the old pod becomes unavailable for any reason (Ready transitions to false, is evicted, or is drained) an updated pod is immediatedly created on that node without considering surge limits. Allowing surge implies the possibility that the resources consumed by the daemonset on any given node can double if the readiness check fails, and so resource intensive daemonsets should take into account that they may cause evictions during disruption.",
|
||||
}
|
||||
|
||||
func (RollingUpdateDaemonSet) SwaggerDoc() map[string]string {
|
||||
|
|
@ -291,7 +291,7 @@ func (RollingUpdateStatefulSetStrategy) SwaggerDoc() map[string]string {
|
|||
}
|
||||
|
||||
var map_StatefulSet = map[string]string{
|
||||
"": "StatefulSet represents a set of pods with consistent identities. Identities are defined as:\n - Network: A single stable DNS and hostname.\n - Storage: As many VolumeClaims as requested.\nThe StatefulSet guarantees that a given network identity will always map to the same storage identity.",
|
||||
"": "StatefulSet represents a set of pods with consistent identities. Identities are defined as:\n - Network: A single stable DNS and hostname.\n - Storage: As many VolumeClaims as requested.\n\nThe StatefulSet guarantees that a given network identity will always map to the same storage identity.",
|
||||
"metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
|
||||
"spec": "Spec defines the desired identities of pods in this set.",
|
||||
"status": "Status is the current status of Pods in this StatefulSet. This data may be out of date by some window of time.",
|
||||
|
|
@ -344,7 +344,7 @@ var map_StatefulSetSpec = map[string]string{
|
|||
"podManagementPolicy": "podManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down. The default policy is `OrderedReady`, where pods are created in increasing order (pod-0, then pod-1, etc) and the controller will wait until each pod is ready before continuing. When scaling down, the pods are removed in the opposite order. The alternative policy is `Parallel` which will create pods in parallel to match the desired scale without waiting, and on scale down will delete all pods at once.",
|
||||
"updateStrategy": "updateStrategy indicates the StatefulSetUpdateStrategy that will be employed to update Pods in the StatefulSet when a revision is made to Template.",
|
||||
"revisionHistoryLimit": "revisionHistoryLimit is the maximum number of revisions that will be maintained in the StatefulSet's revision history. The revision history consists of all revisions not represented by a currently applied StatefulSetSpec version. The default value is 10.",
|
||||
"minReadySeconds": "Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) This is an alpha field and requires enabling StatefulSetMinReadySeconds feature gate.",
|
||||
"minReadySeconds": "Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready)",
|
||||
"persistentVolumeClaimRetentionPolicy": "persistentVolumeClaimRetentionPolicy describes the lifecycle of persistent volume claims created from volumeClaimTemplates. By default, all persistent volume claims are created as needed and retained until manually deleted. This policy allows the lifecycle to be altered, for example by deleting persistent volume claims when their stateful set is deleted, or when their pod is scaled down. This requires the StatefulSetAutoDeletePVC feature gate to be enabled, which is alpha. +optional",
|
||||
}
|
||||
|
||||
|
|
@ -363,7 +363,7 @@ var map_StatefulSetStatus = map[string]string{
|
|||
"updateRevision": "updateRevision, if not empty, indicates the version of the StatefulSet used to generate Pods in the sequence [replicas-updatedReplicas,replicas)",
|
||||
"collisionCount": "collisionCount is the count of hash collisions for the StatefulSet. The StatefulSet controller uses this field as a collision avoidance mechanism when it needs to create the name for the newest ControllerRevision.",
|
||||
"conditions": "Represents the latest available observations of a statefulset's current state.",
|
||||
"availableReplicas": "Total number of available pods (ready for at least minReadySeconds) targeted by this statefulset. This is a beta field and enabled/disabled by StatefulSetMinReadySeconds feature gate.",
|
||||
"availableReplicas": "Total number of available pods (ready for at least minReadySeconds) targeted by this statefulset.",
|
||||
}
|
||||
|
||||
func (StatefulSetStatus) SwaggerDoc() map[string]string {
|
||||
|
|
|
|||
|
|
@ -334,6 +334,7 @@ message ScaleStatus {
|
|||
// Identities are defined as:
|
||||
// - Network: A single stable DNS and hostname.
|
||||
// - Storage: As many VolumeClaims as requested.
|
||||
//
|
||||
// The StatefulSet guarantees that a given network identity will always
|
||||
// map to the same storage identity.
|
||||
message StatefulSet {
|
||||
|
|
@ -460,7 +461,6 @@ message StatefulSetSpec {
|
|||
// Minimum number of seconds for which a newly created pod should be ready
|
||||
// without any of its container crashing for it to be considered available.
|
||||
// Defaults to 0 (pod will be considered available as soon as it is ready)
|
||||
// This is an alpha field and requires enabling StatefulSetMinReadySeconds feature gate.
|
||||
// +optional
|
||||
optional int32 minReadySeconds = 9;
|
||||
|
||||
|
|
@ -513,7 +513,6 @@ message StatefulSetStatus {
|
|||
repeated StatefulSetCondition conditions = 10;
|
||||
|
||||
// Total number of available pods (ready for at least minReadySeconds) targeted by this StatefulSet.
|
||||
// This is a beta field and enabled/disabled by StatefulSetMinReadySeconds feature gate.
|
||||
// +optional
|
||||
optional int32 availableReplicas = 11;
|
||||
}
|
||||
|
|
|
|||
|
|
@ -90,6 +90,7 @@ type Scale struct {
|
|||
// Identities are defined as:
|
||||
// - Network: A single stable DNS and hostname.
|
||||
// - Storage: As many VolumeClaims as requested.
|
||||
//
|
||||
// The StatefulSet guarantees that a given network identity will always
|
||||
// map to the same storage identity.
|
||||
type StatefulSet struct {
|
||||
|
|
@ -267,7 +268,6 @@ type StatefulSetSpec struct {
|
|||
// Minimum number of seconds for which a newly created pod should be ready
|
||||
// without any of its container crashing for it to be considered available.
|
||||
// Defaults to 0 (pod will be considered available as soon as it is ready)
|
||||
// This is an alpha field and requires enabling StatefulSetMinReadySeconds feature gate.
|
||||
// +optional
|
||||
MinReadySeconds int32 `json:"minReadySeconds,omitempty" protobuf:"varint,9,opt,name=minReadySeconds"`
|
||||
|
||||
|
|
@ -320,7 +320,6 @@ type StatefulSetStatus struct {
|
|||
Conditions []StatefulSetCondition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,10,rep,name=conditions"`
|
||||
|
||||
// Total number of available pods (ready for at least minReadySeconds) targeted by this StatefulSet.
|
||||
// This is a beta field and enabled/disabled by StatefulSetMinReadySeconds feature gate.
|
||||
// +optional
|
||||
AvailableReplicas int32 `json:"availableReplicas" protobuf:"varint,11,opt,name=availableReplicas"`
|
||||
}
|
||||
|
|
|
|||
|
|
@ -198,7 +198,7 @@ func (ScaleStatus) SwaggerDoc() map[string]string {
|
|||
}
|
||||
|
||||
var map_StatefulSet = map[string]string{
|
||||
"": "DEPRECATED - This group version of StatefulSet is deprecated by apps/v1beta2/StatefulSet. See the release notes for more information. StatefulSet represents a set of pods with consistent identities. Identities are defined as:\n - Network: A single stable DNS and hostname.\n - Storage: As many VolumeClaims as requested.\nThe StatefulSet guarantees that a given network identity will always map to the same storage identity.",
|
||||
"": "DEPRECATED - This group version of StatefulSet is deprecated by apps/v1beta2/StatefulSet. See the release notes for more information. StatefulSet represents a set of pods with consistent identities. Identities are defined as:\n - Network: A single stable DNS and hostname.\n - Storage: As many VolumeClaims as requested.\n\nThe StatefulSet guarantees that a given network identity will always map to the same storage identity.",
|
||||
"spec": "Spec defines the desired identities of pods in this set.",
|
||||
"status": "Status is the current status of Pods in this StatefulSet. This data may be out of date by some window of time.",
|
||||
}
|
||||
|
|
@ -248,7 +248,7 @@ var map_StatefulSetSpec = map[string]string{
|
|||
"podManagementPolicy": "podManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down. The default policy is `OrderedReady`, where pods are created in increasing order (pod-0, then pod-1, etc) and the controller will wait until each pod is ready before continuing. When scaling down, the pods are removed in the opposite order. The alternative policy is `Parallel` which will create pods in parallel to match the desired scale without waiting, and on scale down will delete all pods at once.",
|
||||
"updateStrategy": "updateStrategy indicates the StatefulSetUpdateStrategy that will be employed to update Pods in the StatefulSet when a revision is made to Template.",
|
||||
"revisionHistoryLimit": "revisionHistoryLimit is the maximum number of revisions that will be maintained in the StatefulSet's revision history. The revision history consists of all revisions not represented by a currently applied StatefulSetSpec version. The default value is 10.",
|
||||
"minReadySeconds": "Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) This is an alpha field and requires enabling StatefulSetMinReadySeconds feature gate.",
|
||||
"minReadySeconds": "Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready)",
|
||||
"persistentVolumeClaimRetentionPolicy": "PersistentVolumeClaimRetentionPolicy describes the policy used for PVCs created from the StatefulSet VolumeClaimTemplates. This requires the StatefulSetAutoDeletePVC feature gate to be enabled, which is alpha.",
|
||||
}
|
||||
|
||||
|
|
@ -267,7 +267,7 @@ var map_StatefulSetStatus = map[string]string{
|
|||
"updateRevision": "updateRevision, if not empty, indicates the version of the StatefulSet used to generate Pods in the sequence [replicas-updatedReplicas,replicas)",
|
||||
"collisionCount": "collisionCount is the count of hash collisions for the StatefulSet. The StatefulSet controller uses this field as a collision avoidance mechanism when it needs to create the name for the newest ControllerRevision.",
|
||||
"conditions": "Represents the latest available observations of a statefulset's current state.",
|
||||
"availableReplicas": "Total number of available pods (ready for at least minReadySeconds) targeted by this StatefulSet. This is a beta field and enabled/disabled by StatefulSetMinReadySeconds feature gate.",
|
||||
"availableReplicas": "Total number of available pods (ready for at least minReadySeconds) targeted by this StatefulSet.",
|
||||
}
|
||||
|
||||
func (StatefulSetStatus) SwaggerDoc() map[string]string {
@ -519,7 +519,6 @@ message RollingUpdateDaemonSet {
|
|||
// daemonset on any given node can double if the readiness check fails, and
|
||||
// so resource intensive daemonsets should take into account that they may
|
||||
// cause evictions during disruption.
|
||||
// This is beta field and enabled/disabled by DaemonSetUpdateSurge feature gate.
|
||||
// +optional
|
||||
optional k8s.io.apimachinery.pkg.util.intstr.IntOrString maxSurge = 2;
|
||||
}
|
||||
|
|
@ -622,6 +621,7 @@ message ScaleStatus {
|
|||
// Identities are defined as:
|
||||
// - Network: A single stable DNS and hostname.
|
||||
// - Storage: As many VolumeClaims as requested.
|
||||
//
|
||||
// The StatefulSet guarantees that a given network identity will always
|
||||
// map to the same storage identity.
|
||||
message StatefulSet {
|
||||
|
|
@ -747,7 +747,6 @@ message StatefulSetSpec {
|
|||
// Minimum number of seconds for which a newly created pod should be ready
|
||||
// without any of its container crashing for it to be considered available.
|
||||
// Defaults to 0 (pod will be considered available as soon as it is ready)
|
||||
// This is an alpha field and requires enabling StatefulSetMinReadySeconds feature gate.
|
||||
// +optional
|
||||
optional int32 minReadySeconds = 9;
|
||||
|
||||
|
|
@ -800,7 +799,6 @@ message StatefulSetStatus {
|
|||
repeated StatefulSetCondition conditions = 10;
|
||||
|
||||
// Total number of available pods (ready for at least minReadySeconds) targeted by this StatefulSet.
|
||||
// This is a beta field and enabled/disabled by StatefulSetMinReadySeconds feature gate.
|
||||
// +optional
|
||||
optional int32 availableReplicas = 11;
|
||||
}
@ -96,6 +96,7 @@ type Scale struct {
|
|||
// Identities are defined as:
|
||||
// - Network: A single stable DNS and hostname.
|
||||
// - Storage: As many VolumeClaims as requested.
|
||||
//
|
||||
// The StatefulSet guarantees that a given network identity will always
|
||||
// map to the same storage identity.
|
||||
type StatefulSet struct {
|
||||
|
|
@ -276,7 +277,6 @@ type StatefulSetSpec struct {
|
|||
// Minimum number of seconds for which a newly created pod should be ready
|
||||
// without any of its container crashing for it to be considered available.
|
||||
// Defaults to 0 (pod will be considered available as soon as it is ready)
|
||||
// This is an alpha field and requires enabling StatefulSetMinReadySeconds feature gate.
|
||||
// +optional
|
||||
MinReadySeconds int32 `json:"minReadySeconds,omitempty" protobuf:"varint,9,opt,name=minReadySeconds"`
|
||||
|
||||
|
|
@ -329,7 +329,6 @@ type StatefulSetStatus struct {
|
|||
Conditions []StatefulSetCondition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,10,rep,name=conditions"`
|
||||
|
||||
// Total number of available pods (ready for at least minReadySeconds) targeted by this StatefulSet.
|
||||
// This is a beta field and enabled/disabled by StatefulSetMinReadySeconds feature gate.
|
||||
// +optional
|
||||
AvailableReplicas int32 `json:"availableReplicas" protobuf:"varint,11,opt,name=availableReplicas"`
|
||||
}
|
||||
|
|
@ -650,7 +649,6 @@ type RollingUpdateDaemonSet struct {
|
|||
// daemonset on any given node can double if the readiness check fails, and
|
||||
// so resource intensive daemonsets should take into account that they may
|
||||
// cause evictions during disruption.
|
||||
// This is beta field and enabled/disabled by DaemonSetUpdateSurge feature gate.
|
||||
// +optional
|
||||
MaxSurge *intstr.IntOrString `json:"maxSurge,omitempty" protobuf:"bytes,2,opt,name=maxSurge"`
|
||||
}
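Illustrative sketch (not from the diff): how the maxSurge field described above might be populated, using the apps/v1 variant of the same types; the 30% value mirrors the example in the field's own documentation.

package example

import (
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// daemonSetSurgeSketch allows up to 30% of nodes to run a replacement pod
// alongside the old one during a rolling update, per the maxSurge semantics
// quoted above.
func daemonSetSurgeSketch() appsv1.DaemonSetUpdateStrategy {
	maxSurge := intstr.FromString("30%")
	return appsv1.DaemonSetUpdateStrategy{
		Type: appsv1.RollingUpdateDaemonSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDaemonSet{
			MaxSurge: &maxSurge,
		},
	}
}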
@ -263,7 +263,7 @@ func (ReplicaSetStatus) SwaggerDoc() map[string]string {
|
|||
var map_RollingUpdateDaemonSet = map[string]string{
|
||||
"": "Spec to control the desired behavior of daemon set rolling update.",
|
||||
"maxUnavailable": "The maximum number of DaemonSet pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0 if MaxSurge is 0 Default value is 1. Example: when this is set to 30%, at most 30% of the total number of nodes that should be running the daemon pod (i.e. status.desiredNumberScheduled) can have their pods stopped for an update at any given time. The update starts by stopping at most 30% of those DaemonSet pods and then brings up new DaemonSet pods in their place. Once the new pods are available, it then proceeds onto other DaemonSet pods, thus ensuring that at least 70% of original number of DaemonSet pods are available at all times during the update.",
|
||||
"maxSurge": "The maximum number of nodes with an existing available DaemonSet pod that can have an updated DaemonSet pod during during an update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up to a minimum of 1. Default value is 0. Example: when this is set to 30%, at most 30% of the total number of nodes that should be running the daemon pod (i.e. status.desiredNumberScheduled) can have their a new pod created before the old pod is marked as deleted. The update starts by launching new pods on 30% of nodes. Once an updated pod is available (Ready for at least minReadySeconds) the old DaemonSet pod on that node is marked deleted. If the old pod becomes unavailable for any reason (Ready transitions to false, is evicted, or is drained) an updated pod is immediatedly created on that node without considering surge limits. Allowing surge implies the possibility that the resources consumed by the daemonset on any given node can double if the readiness check fails, and so resource intensive daemonsets should take into account that they may cause evictions during disruption. This is beta field and enabled/disabled by DaemonSetUpdateSurge feature gate.",
|
||||
"maxSurge": "The maximum number of nodes with an existing available DaemonSet pod that can have an updated DaemonSet pod during during an update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up to a minimum of 1. Default value is 0. Example: when this is set to 30%, at most 30% of the total number of nodes that should be running the daemon pod (i.e. status.desiredNumberScheduled) can have their a new pod created before the old pod is marked as deleted. The update starts by launching new pods on 30% of nodes. Once an updated pod is available (Ready for at least minReadySeconds) the old DaemonSet pod on that node is marked deleted. If the old pod becomes unavailable for any reason (Ready transitions to false, is evicted, or is drained) an updated pod is immediatedly created on that node without considering surge limits. Allowing surge implies the possibility that the resources consumed by the daemonset on any given node can double if the readiness check fails, and so resource intensive daemonsets should take into account that they may cause evictions during disruption.",
|
||||
}
|
||||
|
||||
func (RollingUpdateDaemonSet) SwaggerDoc() map[string]string {
|
||||
|
|
@ -322,7 +322,7 @@ func (ScaleStatus) SwaggerDoc() map[string]string {
|
|||
}
|
||||
|
||||
var map_StatefulSet = map[string]string{
|
||||
"": "DEPRECATED - This group version of StatefulSet is deprecated by apps/v1/StatefulSet. See the release notes for more information. StatefulSet represents a set of pods with consistent identities. Identities are defined as:\n - Network: A single stable DNS and hostname.\n - Storage: As many VolumeClaims as requested.\nThe StatefulSet guarantees that a given network identity will always map to the same storage identity.",
|
||||
"": "DEPRECATED - This group version of StatefulSet is deprecated by apps/v1/StatefulSet. See the release notes for more information. StatefulSet represents a set of pods with consistent identities. Identities are defined as:\n - Network: A single stable DNS and hostname.\n - Storage: As many VolumeClaims as requested.\n\nThe StatefulSet guarantees that a given network identity will always map to the same storage identity.",
|
||||
"spec": "Spec defines the desired identities of pods in this set.",
|
||||
"status": "Status is the current status of Pods in this StatefulSet. This data may be out of date by some window of time.",
|
||||
}
|
||||
|
|
@ -372,7 +372,7 @@ var map_StatefulSetSpec = map[string]string{
|
|||
"podManagementPolicy": "podManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down. The default policy is `OrderedReady`, where pods are created in increasing order (pod-0, then pod-1, etc) and the controller will wait until each pod is ready before continuing. When scaling down, the pods are removed in the opposite order. The alternative policy is `Parallel` which will create pods in parallel to match the desired scale without waiting, and on scale down will delete all pods at once.",
|
||||
"updateStrategy": "updateStrategy indicates the StatefulSetUpdateStrategy that will be employed to update Pods in the StatefulSet when a revision is made to Template.",
|
||||
"revisionHistoryLimit": "revisionHistoryLimit is the maximum number of revisions that will be maintained in the StatefulSet's revision history. The revision history consists of all revisions not represented by a currently applied StatefulSetSpec version. The default value is 10.",
|
||||
"minReadySeconds": "Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) This is an alpha field and requires enabling StatefulSetMinReadySeconds feature gate.",
|
||||
"minReadySeconds": "Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready)",
|
||||
"persistentVolumeClaimRetentionPolicy": "PersistentVolumeClaimRetentionPolicy describes the policy used for PVCs created from the StatefulSet VolumeClaimTemplates. This requires the StatefulSetAutoDeletePVC feature gate to be enabled, which is alpha.",
|
||||
}
|
||||
|
||||
|
|
@ -391,7 +391,7 @@ var map_StatefulSetStatus = map[string]string{
|
|||
"updateRevision": "updateRevision, if not empty, indicates the version of the StatefulSet used to generate Pods in the sequence [replicas-updatedReplicas,replicas)",
|
||||
"collisionCount": "collisionCount is the count of hash collisions for the StatefulSet. The StatefulSet controller uses this field as a collision avoidance mechanism when it needs to create the name for the newest ControllerRevision.",
|
||||
"conditions": "Represents the latest available observations of a statefulset's current state.",
|
||||
"availableReplicas": "Total number of available pods (ready for at least minReadySeconds) targeted by this StatefulSet. This is a beta field and enabled/disabled by StatefulSetMinReadySeconds feature gate.",
|
||||
"availableReplicas": "Total number of available pods (ready for at least minReadySeconds) targeted by this StatefulSet.",
|
||||
}
|
||||
|
||||
func (StatefulSetStatus) SwaggerDoc() map[string]string {
@ -74,7 +74,7 @@ message TokenRequest {
|
|||
// TokenRequestSpec contains client provided parameters of a token request.
|
||||
message TokenRequestSpec {
|
||||
// Audiences are the intendend audiences of the token. A recipient of a
|
||||
// token must identitfy themself with an identifier in the list of
|
||||
// token must identify themself with an identifier in the list of
|
||||
// audiences of the token, and otherwise should reject the token. A
|
||||
// token issued for multiple audiences may be used to authenticate
|
||||
// against any of the audiences listed but implies a high degree of
@ -151,7 +151,7 @@ type TokenRequest struct {
|
|||
// TokenRequestSpec contains client provided parameters of a token request.
|
||||
type TokenRequestSpec struct {
|
||||
// Audiences are the intendend audiences of the token. A recipient of a
|
||||
// token must identitfy themself with an identifier in the list of
|
||||
// token must identify themself with an identifier in the list of
|
||||
// audiences of the token, and otherwise should reject the token. A
|
||||
// token issued for multiple audiences may be used to authenticate
|
||||
// against any of the audiences listed but implies a high degree of
@ -52,7 +52,7 @@ func (TokenRequest) SwaggerDoc() map[string]string {
|
|||
|
||||
var map_TokenRequestSpec = map[string]string{
|
||||
"": "TokenRequestSpec contains client provided parameters of a token request.",
|
||||
"audiences": "Audiences are the intendend audiences of the token. A recipient of a token must identitfy themself with an identifier in the list of audiences of the token, and otherwise should reject the token. A token issued for multiple audiences may be used to authenticate against any of the audiences listed but implies a high degree of trust between the target audiences.",
|
||||
"audiences": "Audiences are the intendend audiences of the token. A recipient of a token must identify themself with an identifier in the list of audiences of the token, and otherwise should reject the token. A token issued for multiple audiences may be used to authenticate against any of the audiences listed but implies a high degree of trust between the target audiences.",
|
||||
"expirationSeconds": "ExpirationSeconds is the requested duration of validity of the request. The token issuer may return a token with a different validity duration so a client needs to check the 'expiration' field in a response.",
|
||||
"boundObjectRef": "BoundObjectRef is a reference to an object that the token will be bound to. The token will only be valid for as long as the bound object exists. NOTE: The API server's TokenReview endpoint will validate the BoundObjectRef, but other audiences may not. Keep ExpirationSeconds small if you want prompt revocation.",
|
||||
}
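A hedged sketch (not part of the diff) of the TokenRequestSpec fields documented above, using the authentication/v1 Go types; the audience and lifetime values are made-up examples.

package example

import authenticationv1 "k8s.io/api/authentication/v1"

// tokenRequestSpecSketch asks for a token that only the named audience should
// accept, with a short validity so revocation stays prompt, in line with the
// boundObjectRef guidance above.
func tokenRequestSpecSketch() authenticationv1.TokenRequestSpec {
	ttl := int64(600)
	return authenticationv1.TokenRequestSpec{
		Audiences:         []string{"https://kubernetes.default.svc"},
		ExpirationSeconds: &ttl,
	}
}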
File diff suppressed because it is too large
@ -63,9 +63,16 @@ message CronJobSpec {
|
|||
// The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron.
|
||||
optional string schedule = 1;
|
||||
|
||||
// The time zone for the given schedule, see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones.
|
||||
// If not specified, this will rely on the time zone of the kube-controller-manager process.
|
||||
// ALPHA: This field is in alpha and must be enabled via the `CronJobTimeZone` feature gate.
|
||||
// The time zone name for the given schedule, see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones.
|
||||
// If not specified, this will default to the time zone of the kube-controller-manager process.
|
||||
// The set of valid time zone names and the time zone offset is loaded from the system-wide time zone
|
||||
// database by the API server during CronJob validation and the controller manager during execution.
|
||||
// If no system-wide time zone database can be found a bundled version of the database is used instead.
|
||||
// If the time zone name becomes invalid during the lifetime of a CronJob or due to a change in host
|
||||
// configuration, the controller will stop creating new new Jobs and will create a system event with the
|
||||
// reason UnknownTimeZone.
|
||||
// More information can be found in https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#time-zones
|
||||
// This is beta field and must be enabled via the `CronJobTimeZone` feature gate.
|
||||
// +optional
|
||||
optional string timeZone = 8;
|
||||
|
||||
|
|
@ -198,6 +205,19 @@ message JobSpec {
|
|||
// +optional
|
||||
optional int64 activeDeadlineSeconds = 3;
|
||||
|
||||
// Specifies the policy of handling failed pods. In particular, it allows to
|
||||
// specify the set of actions and conditions which need to be
|
||||
// satisfied to take the associated action.
|
||||
// If empty, the default behaviour applies - the counter of failed pods,
|
||||
// represented by the jobs's .status.failed field, is incremented and it is
|
||||
// checked against the backoffLimit. This field cannot be used in combination
|
||||
// with restartPolicy=OnFailure.
|
||||
//
|
||||
// This field is alpha-level. To use this field, you must enable the
|
||||
// `JobPodFailurePolicy` feature gate (disabled by default).
|
||||
// +optional
|
||||
optional PodFailurePolicy podFailurePolicy = 11;
|
||||
|
||||
// Specifies the number of retries before marking this job failed.
|
||||
// Defaults to 6
|
||||
// +optional
|
||||
|
|
@ -364,6 +384,92 @@ message JobTemplateSpec {
|
|||
optional JobSpec spec = 2;
|
||||
}
|
||||
|
||||
// PodFailurePolicy describes how failed pods influence the backoffLimit.
|
||||
message PodFailurePolicy {
|
||||
// A list of pod failure policy rules. The rules are evaluated in order.
|
||||
// Once a rule matches a Pod failure, the remaining of the rules are ignored.
|
||||
// When no rule matches the Pod failure, the default handling applies - the
|
||||
// counter of pod failures is incremented and it is checked against
|
||||
// the backoffLimit. At most 20 elements are allowed.
|
||||
// +listType=atomic
|
||||
repeated PodFailurePolicyRule rules = 1;
|
||||
}
|
||||
|
||||
// PodFailurePolicyOnExitCodesRequirement describes the requirement for handling
|
||||
// a failed pod based on its container exit codes. In particular, it lookups the
|
||||
// .state.terminated.exitCode for each app container and init container status,
|
||||
// represented by the .status.containerStatuses and .status.initContainerStatuses
|
||||
// fields in the Pod status, respectively. Containers completed with success
|
||||
// (exit code 0) are excluded from the requirement check.
|
||||
message PodFailurePolicyOnExitCodesRequirement {
|
||||
// Restricts the check for exit codes to the container with the
|
||||
// specified name. When null, the rule applies to all containers.
|
||||
// When specified, it should match one the container or initContainer
|
||||
// names in the pod template.
|
||||
// +optional
|
||||
optional string containerName = 1;
|
||||
|
||||
// Represents the relationship between the container exit code(s) and the
|
||||
// specified values. Containers completed with success (exit code 0) are
|
||||
// excluded from the requirement check. Possible values are:
|
||||
// - In: the requirement is satisfied if at least one container exit code
|
||||
// (might be multiple if there are multiple containers not restricted
|
||||
// by the 'containerName' field) is in the set of specified values.
|
||||
// - NotIn: the requirement is satisfied if at least one container exit code
|
||||
// (might be multiple if there are multiple containers not restricted
|
||||
// by the 'containerName' field) is not in the set of specified values.
|
||||
// Additional values are considered to be added in the future. Clients should
|
||||
// react to an unknown operator by assuming the requirement is not satisfied.
|
||||
optional string operator = 2;
|
||||
|
||||
// Specifies the set of values. Each returned container exit code (might be
|
||||
// multiple in case of multiple containers) is checked against this set of
|
||||
// values with respect to the operator. The list of values must be ordered
|
||||
// and must not contain duplicates. Value '0' cannot be used for the In operator.
|
||||
// At least one element is required. At most 255 elements are allowed.
|
||||
// +listType=set
|
||||
repeated int32 values = 3;
|
||||
}
|
||||
|
||||
// PodFailurePolicyOnPodConditionsPattern describes a pattern for matching
|
||||
// an actual pod condition type.
|
||||
message PodFailurePolicyOnPodConditionsPattern {
|
||||
// Specifies the required Pod condition type. To match a pod condition
|
||||
// it is required that specified type equals the pod condition type.
|
||||
optional string type = 1;
|
||||
|
||||
// Specifies the required Pod condition status. To match a pod condition
|
||||
// it is required that the specified status equals the pod condition status.
|
||||
// Defaults to True.
|
||||
optional string status = 2;
|
||||
}
|
||||
|
||||
// PodFailurePolicyRule describes how a pod failure is handled when the requirements are met.
|
||||
// One of OnExitCodes and onPodConditions, but not both, can be used in each rule.
|
||||
message PodFailurePolicyRule {
|
||||
// Specifies the action taken on a pod failure when the requirements are satisfied.
|
||||
// Possible values are:
|
||||
// - FailJob: indicates that the pod's job is marked as Failed and all
|
||||
// running pods are terminated.
|
||||
// - Ignore: indicates that the counter towards the .backoffLimit is not
|
||||
// incremented and a replacement pod is created.
|
||||
// - Count: indicates that the pod is handled in the default way - the
|
||||
// counter towards the .backoffLimit is incremented.
|
||||
// Additional values are considered to be added in the future. Clients should
|
||||
// react to an unknown action by skipping the rule.
|
||||
optional string action = 1;
|
||||
|
||||
// Represents the requirement on the container exit codes.
|
||||
// +optional
|
||||
optional PodFailurePolicyOnExitCodesRequirement onExitCodes = 2;
|
||||
|
||||
// Represents the requirement on the pod conditions. The requirement is represented
|
||||
// as a list of pod condition patterns. The requirement is satisfied if at
|
||||
// least one pattern matches an actual pod condition. At most 20 elements are allowed.
|
||||
// +listType=atomic
|
||||
repeated PodFailurePolicyOnPodConditionsPattern onPodConditions = 3;
|
||||
}
|
||||
|
||||
// UncountedTerminatedPods holds UIDs of Pods that have terminated but haven't
|
||||
// been accounted in Job status counters.
|
||||
message UncountedTerminatedPods {
@ -87,6 +87,120 @@ const (
|
|||
IndexedCompletion CompletionMode = "Indexed"
|
||||
)
|
||||
|
||||
// PodFailurePolicyAction specifies how a Pod failure is handled.
|
||||
// +enum
|
||||
type PodFailurePolicyAction string
|
||||
|
||||
const (
|
||||
// This is an action which might be taken on a pod failure - mark the
|
||||
// pod's job as Failed and terminate all running pods.
|
||||
PodFailurePolicyActionFailJob PodFailurePolicyAction = "FailJob"
|
||||
|
||||
// This is an action which might be taken on a pod failure - the counter towards
|
||||
// .backoffLimit, represented by the job's .status.failed field, is not
|
||||
// incremented and a replacement pod is created.
|
||||
PodFailurePolicyActionIgnore PodFailurePolicyAction = "Ignore"
|
||||
|
||||
// This is an action which might be taken on a pod failure - the pod failure
|
||||
// is handled in the default way - the counter towards .backoffLimit,
|
||||
// represented by the job's .status.failed field, is incremented.
|
||||
PodFailurePolicyActionCount PodFailurePolicyAction = "Count"
|
||||
)
|
||||
|
||||
// +enum
|
||||
type PodFailurePolicyOnExitCodesOperator string
|
||||
|
||||
const (
|
||||
PodFailurePolicyOnExitCodesOpIn PodFailurePolicyOnExitCodesOperator = "In"
|
||||
PodFailurePolicyOnExitCodesOpNotIn PodFailurePolicyOnExitCodesOperator = "NotIn"
|
||||
)
|
||||
|
||||
// PodFailurePolicyOnExitCodesRequirement describes the requirement for handling
|
||||
// a failed pod based on its container exit codes. In particular, it lookups the
|
||||
// .state.terminated.exitCode for each app container and init container status,
|
||||
// represented by the .status.containerStatuses and .status.initContainerStatuses
|
||||
// fields in the Pod status, respectively. Containers completed with success
|
||||
// (exit code 0) are excluded from the requirement check.
|
||||
type PodFailurePolicyOnExitCodesRequirement struct {
|
||||
// Restricts the check for exit codes to the container with the
|
||||
// specified name. When null, the rule applies to all containers.
|
||||
// When specified, it should match one the container or initContainer
|
||||
// names in the pod template.
|
||||
// +optional
|
||||
ContainerName *string `json:"containerName" protobuf:"bytes,1,opt,name=containerName"`
|
||||
|
||||
// Represents the relationship between the container exit code(s) and the
|
||||
// specified values. Containers completed with success (exit code 0) are
|
||||
// excluded from the requirement check. Possible values are:
|
||||
// - In: the requirement is satisfied if at least one container exit code
|
||||
// (might be multiple if there are multiple containers not restricted
|
||||
// by the 'containerName' field) is in the set of specified values.
|
||||
// - NotIn: the requirement is satisfied if at least one container exit code
|
||||
// (might be multiple if there are multiple containers not restricted
|
||||
// by the 'containerName' field) is not in the set of specified values.
|
||||
// Additional values are considered to be added in the future. Clients should
|
||||
// react to an unknown operator by assuming the requirement is not satisfied.
|
||||
Operator PodFailurePolicyOnExitCodesOperator `json:"operator" protobuf:"bytes,2,req,name=operator"`
|
||||
|
||||
// Specifies the set of values. Each returned container exit code (might be
|
||||
// multiple in case of multiple containers) is checked against this set of
|
||||
// values with respect to the operator. The list of values must be ordered
|
||||
// and must not contain duplicates. Value '0' cannot be used for the In operator.
|
||||
// At least one element is required. At most 255 elements are allowed.
|
||||
// +listType=set
|
||||
Values []int32 `json:"values" protobuf:"varint,3,rep,name=values"`
|
||||
}
|
||||
|
||||
// PodFailurePolicyOnPodConditionsPattern describes a pattern for matching
|
||||
// an actual pod condition type.
|
||||
type PodFailurePolicyOnPodConditionsPattern struct {
|
||||
// Specifies the required Pod condition type. To match a pod condition
|
||||
// it is required that specified type equals the pod condition type.
|
||||
Type corev1.PodConditionType `json:"type" protobuf:"bytes,1,req,name=type"`
|
||||
|
||||
// Specifies the required Pod condition status. To match a pod condition
|
||||
// it is required that the specified status equals the pod condition status.
|
||||
// Defaults to True.
|
||||
Status corev1.ConditionStatus `json:"status" protobuf:"bytes,2,req,name=status"`
|
||||
}
|
||||
|
||||
// PodFailurePolicyRule describes how a pod failure is handled when the requirements are met.
|
||||
// One of OnExitCodes and onPodConditions, but not both, can be used in each rule.
|
||||
type PodFailurePolicyRule struct {
|
||||
// Specifies the action taken on a pod failure when the requirements are satisfied.
|
||||
// Possible values are:
|
||||
// - FailJob: indicates that the pod's job is marked as Failed and all
|
||||
// running pods are terminated.
|
||||
// - Ignore: indicates that the counter towards the .backoffLimit is not
|
||||
// incremented and a replacement pod is created.
|
||||
// - Count: indicates that the pod is handled in the default way - the
|
||||
// counter towards the .backoffLimit is incremented.
|
||||
// Additional values are considered to be added in the future. Clients should
|
||||
// react to an unknown action by skipping the rule.
|
||||
Action PodFailurePolicyAction `json:"action" protobuf:"bytes,1,req,name=action"`
|
||||
|
||||
// Represents the requirement on the container exit codes.
|
||||
// +optional
|
||||
OnExitCodes *PodFailurePolicyOnExitCodesRequirement `json:"onExitCodes" protobuf:"bytes,2,opt,name=onExitCodes"`
|
||||
|
||||
// Represents the requirement on the pod conditions. The requirement is represented
|
||||
// as a list of pod condition patterns. The requirement is satisfied if at
|
||||
// least one pattern matches an actual pod condition. At most 20 elements are allowed.
|
||||
// +listType=atomic
|
||||
OnPodConditions []PodFailurePolicyOnPodConditionsPattern `json:"onPodConditions" protobuf:"bytes,3,opt,name=onPodConditions"`
|
||||
}
|
||||
|
||||
// PodFailurePolicy describes how failed pods influence the backoffLimit.
|
||||
type PodFailurePolicy struct {
|
||||
// A list of pod failure policy rules. The rules are evaluated in order.
|
||||
// Once a rule matches a Pod failure, the remaining of the rules are ignored.
|
||||
// When no rule matches the Pod failure, the default handling applies - the
|
||||
// counter of pod failures is incremented and it is checked against
|
||||
// the backoffLimit. At most 20 elements are allowed.
|
||||
// +listType=atomic
|
||||
Rules []PodFailurePolicyRule `json:"rules" protobuf:"bytes,1,opt,name=rules"`
|
||||
}
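To make the new API concrete, a minimal sketch (not part of the diff) building a PodFailurePolicy from the types defined above; the container name and exit code are placeholders.

package example

import batchv1 "k8s.io/api/batch/v1"

// podFailurePolicySketch fails the Job outright when the "main" container
// exits with code 42, instead of counting the failure against backoffLimit.
// It would be assigned to the JobSpec.PodFailurePolicy field added below,
// behind the alpha JobPodFailurePolicy feature gate.
func podFailurePolicySketch() *batchv1.PodFailurePolicy {
	name := "main"
	return &batchv1.PodFailurePolicy{
		Rules: []batchv1.PodFailurePolicyRule{{
			Action: batchv1.PodFailurePolicyActionFailJob,
			OnExitCodes: &batchv1.PodFailurePolicyOnExitCodesRequirement{
				ContainerName: &name,
				Operator:      batchv1.PodFailurePolicyOnExitCodesOpIn,
				Values:        []int32{42},
			},
		}},
	}
}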
|
||||
|
||||
// JobSpec describes how the job execution will look like.
|
||||
type JobSpec struct {
|
||||
|
||||
|
|
@ -115,6 +229,19 @@ type JobSpec struct {
|
|||
// +optional
|
||||
ActiveDeadlineSeconds *int64 `json:"activeDeadlineSeconds,omitempty" protobuf:"varint,3,opt,name=activeDeadlineSeconds"`
|
||||
|
||||
// Specifies the policy of handling failed pods. In particular, it allows to
|
||||
// specify the set of actions and conditions which need to be
|
||||
// satisfied to take the associated action.
|
||||
// If empty, the default behaviour applies - the counter of failed pods,
|
||||
// represented by the jobs's .status.failed field, is incremented and it is
|
||||
// checked against the backoffLimit. This field cannot be used in combination
|
||||
// with restartPolicy=OnFailure.
|
||||
//
|
||||
// This field is alpha-level. To use this field, you must enable the
|
||||
// `JobPodFailurePolicy` feature gate (disabled by default).
|
||||
// +optional
|
||||
PodFailurePolicy *PodFailurePolicy `json:"podFailurePolicy,omitempty" protobuf:"bytes,11,opt,name=podFailurePolicy"`
|
||||
|
||||
// Specifies the number of retries before marking this job failed.
|
||||
// Defaults to 6
|
||||
// +optional
|
||||
|
|
@ -297,6 +424,9 @@ const (
|
|||
JobComplete JobConditionType = "Complete"
|
||||
// JobFailed means the job has failed its execution.
|
||||
JobFailed JobConditionType = "Failed"
|
||||
// FailureTarget means the job is about to fail its execution.
|
||||
// The constant is to be renamed once the name is accepted within the KEP-3329.
|
||||
AlphaNoCompatGuaranteeJobFailureTarget JobConditionType = "FailureTarget"
|
||||
)
|
||||
|
||||
// JobCondition describes current state of a job.
|
||||
|
|
@ -375,9 +505,16 @@ type CronJobSpec struct {
|
|||
// The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron.
|
||||
Schedule string `json:"schedule" protobuf:"bytes,1,opt,name=schedule"`
|
||||
|
||||
// The time zone for the given schedule, see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones.
|
||||
// If not specified, this will rely on the time zone of the kube-controller-manager process.
|
||||
// ALPHA: This field is in alpha and must be enabled via the `CronJobTimeZone` feature gate.
|
||||
// The time zone name for the given schedule, see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones.
|
||||
// If not specified, this will default to the time zone of the kube-controller-manager process.
|
||||
// The set of valid time zone names and the time zone offset is loaded from the system-wide time zone
|
||||
// database by the API server during CronJob validation and the controller manager during execution.
|
||||
// If no system-wide time zone database can be found a bundled version of the database is used instead.
|
||||
// If the time zone name becomes invalid during the lifetime of a CronJob or due to a change in host
|
||||
// configuration, the controller will stop creating new new Jobs and will create a system event with the
|
||||
// reason UnknownTimeZone.
|
||||
// More information can be found in https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#time-zones
|
||||
// This is beta field and must be enabled via the `CronJobTimeZone` feature gate.
|
||||
// +optional
|
||||
TimeZone *string `json:"timeZone,omitempty" protobuf:"bytes,8,opt,name=timeZone"`
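A short illustrative sketch (not from the diff): the timeZone field above set to a named IANA zone; the schedule and zone are arbitrary examples.

package example

import batchv1 "k8s.io/api/batch/v1"

// cronJobSpecSketch runs at 09:30 in Europe/Berlin rather than in the
// kube-controller-manager's local zone, per the timeZone documentation above
// (CronJobTimeZone gate, beta in this change).
func cronJobSpecSketch() batchv1.CronJobSpec {
	tz := "Europe/Berlin"
	return batchv1.CronJobSpec{
		Schedule: "30 9 * * *",
		TimeZone: &tz,
	}
}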
@ -51,7 +51,7 @@ func (CronJobList) SwaggerDoc() map[string]string {
|
|||
var map_CronJobSpec = map[string]string{
|
||||
"": "CronJobSpec describes how the job execution will look like and when it will actually run.",
|
||||
"schedule": "The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron.",
|
||||
"timeZone": "The time zone for the given schedule, see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones. If not specified, this will rely on the time zone of the kube-controller-manager process. ALPHA: This field is in alpha and must be enabled via the `CronJobTimeZone` feature gate.",
|
||||
"timeZone": "The time zone name for the given schedule, see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones. If not specified, this will default to the time zone of the kube-controller-manager process. The set of valid time zone names and the time zone offset is loaded from the system-wide time zone database by the API server during CronJob validation and the controller manager during execution. If no system-wide time zone database can be found a bundled version of the database is used instead. If the time zone name becomes invalid during the lifetime of a CronJob or due to a change in host configuration, the controller will stop creating new new Jobs and will create a system event with the reason UnknownTimeZone. More information can be found in https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#time-zones This is beta field and must be enabled via the `CronJobTimeZone` feature gate.",
|
||||
"startingDeadlineSeconds": "Optional deadline in seconds for starting the job if it misses scheduled time for any reason. Missed jobs executions will be counted as failed ones.",
|
||||
"concurrencyPolicy": "Specifies how to treat concurrent executions of a Job. Valid values are: - \"Allow\" (default): allows CronJobs to run concurrently; - \"Forbid\": forbids concurrent runs, skipping next run if previous run hasn't finished yet; - \"Replace\": cancels currently running job and replaces it with a new one",
|
||||
"suspend": "This flag tells the controller to suspend subsequent executions, it does not apply to already started executions. Defaults to false.",
|
||||
|
|
@ -115,6 +115,7 @@ var map_JobSpec = map[string]string{
|
|||
"parallelism": "Specifies the maximum desired number of pods the job should run at any given time. The actual number of pods running in steady state will be less than this number when ((.spec.completions - .status.successful) < .spec.parallelism), i.e. when the work left to do is less than max parallelism. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/",
|
||||
"completions": "Specifies the desired number of successfully finished pods the job should be run with. Setting to nil means that the success of any pod signals the success of all pods, and allows parallelism to have any positive value. Setting to 1 means that parallelism is limited to 1 and the success of that pod signals the success of the job. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/",
|
||||
"activeDeadlineSeconds": "Specifies the duration in seconds relative to the startTime that the job may be continuously active before the system tries to terminate it; value must be positive integer. If a Job is suspended (at creation or through an update), this timer will effectively be stopped and reset when the Job is resumed again.",
|
||||
"podFailurePolicy": "Specifies the policy of handling failed pods. In particular, it allows to specify the set of actions and conditions which need to be satisfied to take the associated action. If empty, the default behaviour applies - the counter of failed pods, represented by the jobs's .status.failed field, is incremented and it is checked against the backoffLimit. This field cannot be used in combination with restartPolicy=OnFailure.\n\nThis field is alpha-level. To use this field, you must enable the `JobPodFailurePolicy` feature gate (disabled by default).",
|
||||
"backoffLimit": "Specifies the number of retries before marking this job failed. Defaults to 6",
|
||||
"selector": "A label query over pods that should match the pod count. Normally, the system sets this field for you. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors",
|
||||
"manualSelector": "manualSelector controls generation of pod labels and pod selectors. Leave `manualSelector` unset unless you are certain what you are doing. When false or unset, the system pick labels unique to this job and appends those labels to the pod template. When true, the user is responsible for picking unique labels and specifying the selector. Failure to pick a unique label may cause this and other jobs to not function correctly. However, You may see `manualSelector=true` in jobs that were created with the old `extensions/v1beta1` API. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#specifying-your-own-pod-selector",
|
||||
|
|
@ -155,6 +156,47 @@ func (JobTemplateSpec) SwaggerDoc() map[string]string {
|
|||
return map_JobTemplateSpec
|
||||
}
|
||||
|
||||
var map_PodFailurePolicy = map[string]string{
|
||||
"": "PodFailurePolicy describes how failed pods influence the backoffLimit.",
|
||||
"rules": "A list of pod failure policy rules. The rules are evaluated in order. Once a rule matches a Pod failure, the remaining of the rules are ignored. When no rule matches the Pod failure, the default handling applies - the counter of pod failures is incremented and it is checked against the backoffLimit. At most 20 elements are allowed.",
|
||||
}
|
||||
|
||||
func (PodFailurePolicy) SwaggerDoc() map[string]string {
|
||||
return map_PodFailurePolicy
|
||||
}
|
||||
|
||||
var map_PodFailurePolicyOnExitCodesRequirement = map[string]string{
|
||||
"": "PodFailurePolicyOnExitCodesRequirement describes the requirement for handling a failed pod based on its container exit codes. In particular, it lookups the .state.terminated.exitCode for each app container and init container status, represented by the .status.containerStatuses and .status.initContainerStatuses fields in the Pod status, respectively. Containers completed with success (exit code 0) are excluded from the requirement check.",
|
||||
"containerName": "Restricts the check for exit codes to the container with the specified name. When null, the rule applies to all containers. When specified, it should match one the container or initContainer names in the pod template.",
|
||||
"operator": "Represents the relationship between the container exit code(s) and the specified values. Containers completed with success (exit code 0) are excluded from the requirement check. Possible values are: - In: the requirement is satisfied if at least one container exit code\n (might be multiple if there are multiple containers not restricted\n by the 'containerName' field) is in the set of specified values.\n- NotIn: the requirement is satisfied if at least one container exit code\n (might be multiple if there are multiple containers not restricted\n by the 'containerName' field) is not in the set of specified values.\nAdditional values are considered to be added in the future. Clients should react to an unknown operator by assuming the requirement is not satisfied.",
|
||||
"values": "Specifies the set of values. Each returned container exit code (might be multiple in case of multiple containers) is checked against this set of values with respect to the operator. The list of values must be ordered and must not contain duplicates. Value '0' cannot be used for the In operator. At least one element is required. At most 255 elements are allowed.",
|
||||
}
|
||||
|
||||
func (PodFailurePolicyOnExitCodesRequirement) SwaggerDoc() map[string]string {
|
||||
return map_PodFailurePolicyOnExitCodesRequirement
|
||||
}
|
||||
|
||||
var map_PodFailurePolicyOnPodConditionsPattern = map[string]string{
|
||||
"": "PodFailurePolicyOnPodConditionsPattern describes a pattern for matching an actual pod condition type.",
|
||||
"type": "Specifies the required Pod condition type. To match a pod condition it is required that specified type equals the pod condition type.",
|
||||
"status": "Specifies the required Pod condition status. To match a pod condition it is required that the specified status equals the pod condition status. Defaults to True.",
|
||||
}
|
||||
|
||||
func (PodFailurePolicyOnPodConditionsPattern) SwaggerDoc() map[string]string {
|
||||
return map_PodFailurePolicyOnPodConditionsPattern
|
||||
}
|
||||
|
||||
var map_PodFailurePolicyRule = map[string]string{
|
||||
"": "PodFailurePolicyRule describes how a pod failure is handled when the requirements are met. One of OnExitCodes and onPodConditions, but not both, can be used in each rule.",
|
||||
"action": "Specifies the action taken on a pod failure when the requirements are satisfied. Possible values are: - FailJob: indicates that the pod's job is marked as Failed and all\n running pods are terminated.\n- Ignore: indicates that the counter towards the .backoffLimit is not\n incremented and a replacement pod is created.\n- Count: indicates that the pod is handled in the default way - the\n counter towards the .backoffLimit is incremented.\nAdditional values are considered to be added in the future. Clients should react to an unknown action by skipping the rule.",
|
||||
"onExitCodes": "Represents the requirement on the container exit codes.",
|
||||
"onPodConditions": "Represents the requirement on the pod conditions. The requirement is represented as a list of pod condition patterns. The requirement is satisfied if at least one pattern matches an actual pod condition. At most 20 elements are allowed.",
|
||||
}
|
||||
|
||||
func (PodFailurePolicyRule) SwaggerDoc() map[string]string {
|
||||
return map_PodFailurePolicyRule
|
||||
}
|
||||
|
||||
var map_UncountedTerminatedPods = map[string]string{
|
||||
"": "UncountedTerminatedPods holds UIDs of Pods that have terminated but haven't been accounted in Job status counters.",
|
||||
"succeeded": "Succeeded holds UIDs of succeeded Pods.",
@ -257,6 +257,11 @@ func (in *JobSpec) DeepCopyInto(out *JobSpec) {
|
|||
*out = new(int64)
|
||||
**out = **in
|
||||
}
|
||||
if in.PodFailurePolicy != nil {
|
||||
in, out := &in.PodFailurePolicy, &out.PodFailurePolicy
|
||||
*out = new(PodFailurePolicy)
|
||||
(*in).DeepCopyInto(*out)
|
||||
}
|
||||
if in.BackoffLimit != nil {
|
||||
in, out := &in.BackoffLimit, &out.BackoffLimit
|
||||
*out = new(int32)
|
||||
|
|
@ -360,6 +365,97 @@ func (in *JobTemplateSpec) DeepCopy() *JobTemplateSpec {
|
|||
return out
|
||||
}
|
||||
|
||||
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
|
||||
func (in *PodFailurePolicy) DeepCopyInto(out *PodFailurePolicy) {
|
||||
*out = *in
|
||||
if in.Rules != nil {
|
||||
in, out := &in.Rules, &out.Rules
|
||||
*out = make([]PodFailurePolicyRule, len(*in))
|
||||
for i := range *in {
|
||||
(*in)[i].DeepCopyInto(&(*out)[i])
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodFailurePolicy.
|
||||
func (in *PodFailurePolicy) DeepCopy() *PodFailurePolicy {
|
||||
if in == nil {
|
||||
return nil
|
||||
}
|
||||
out := new(PodFailurePolicy)
|
||||
in.DeepCopyInto(out)
|
||||
return out
|
||||
}
|
||||
|
||||
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
|
||||
func (in *PodFailurePolicyOnExitCodesRequirement) DeepCopyInto(out *PodFailurePolicyOnExitCodesRequirement) {
|
||||
*out = *in
|
||||
if in.ContainerName != nil {
|
||||
in, out := &in.ContainerName, &out.ContainerName
|
||||
*out = new(string)
|
||||
**out = **in
|
||||
}
|
||||
if in.Values != nil {
|
||||
in, out := &in.Values, &out.Values
|
||||
*out = make([]int32, len(*in))
|
||||
copy(*out, *in)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodFailurePolicyOnExitCodesRequirement.
|
||||
func (in *PodFailurePolicyOnExitCodesRequirement) DeepCopy() *PodFailurePolicyOnExitCodesRequirement {
|
||||
if in == nil {
|
||||
return nil
|
||||
}
|
||||
out := new(PodFailurePolicyOnExitCodesRequirement)
|
||||
in.DeepCopyInto(out)
|
||||
return out
|
||||
}
|
||||
|
||||
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
|
||||
func (in *PodFailurePolicyOnPodConditionsPattern) DeepCopyInto(out *PodFailurePolicyOnPodConditionsPattern) {
|
||||
*out = *in
|
||||
return
|
||||
}
|
||||
|
||||
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodFailurePolicyOnPodConditionsPattern.
|
||||
func (in *PodFailurePolicyOnPodConditionsPattern) DeepCopy() *PodFailurePolicyOnPodConditionsPattern {
|
||||
if in == nil {
|
||||
return nil
|
||||
}
|
||||
out := new(PodFailurePolicyOnPodConditionsPattern)
|
||||
in.DeepCopyInto(out)
|
||||
return out
|
||||
}
|
||||
|
||||
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
|
||||
func (in *PodFailurePolicyRule) DeepCopyInto(out *PodFailurePolicyRule) {
|
||||
*out = *in
|
||||
if in.OnExitCodes != nil {
|
||||
in, out := &in.OnExitCodes, &out.OnExitCodes
|
||||
*out = new(PodFailurePolicyOnExitCodesRequirement)
|
||||
(*in).DeepCopyInto(*out)
|
||||
}
|
||||
if in.OnPodConditions != nil {
|
||||
in, out := &in.OnPodConditions, &out.OnPodConditions
|
||||
*out = make([]PodFailurePolicyOnPodConditionsPattern, len(*in))
|
||||
copy(*out, *in)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodFailurePolicyRule.
|
||||
func (in *PodFailurePolicyRule) DeepCopy() *PodFailurePolicyRule {
|
||||
if in == nil {
|
||||
return nil
|
||||
}
|
||||
out := new(PodFailurePolicyRule)
|
||||
in.DeepCopyInto(out)
|
||||
return out
|
||||
}
|
||||
|
||||
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
|
||||
func (in *UncountedTerminatedPods) DeepCopyInto(out *UncountedTerminatedPods) {
|
||||
*out = *in
@ -64,9 +64,16 @@ message CronJobSpec {
|
|||
// The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron.
|
||||
optional string schedule = 1;
|
||||
|
||||
// The time zone for the given schedule, see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones.
|
||||
// If not specified, this will rely on the time zone of the kube-controller-manager process.
|
||||
// ALPHA: This field is in alpha and must be enabled via the `CronJobTimeZone` feature gate.
|
||||
// The time zone name for the given schedule, see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones.
|
||||
// If not specified, this will default to the time zone of the kube-controller-manager process.
|
||||
// The set of valid time zone names and the time zone offset is loaded from the system-wide time zone
|
||||
// database by the API server during CronJob validation and the controller manager during execution.
|
||||
// If no system-wide time zone database can be found a bundled version of the database is used instead.
|
||||
// If the time zone name becomes invalid during the lifetime of a CronJob or due to a change in host
|
||||
// configuration, the controller will stop creating new new Jobs and will create a system event with the
|
||||
// reason UnknownTimeZone.
|
||||
// More information can be found in https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#time-zones
|
||||
// This is beta field and must be enabled via the `CronJobTimeZone` feature gate.
|
||||
// +optional
|
||||
optional string timeZone = 8;
@ -104,9 +104,16 @@ type CronJobSpec struct {
|
|||
// The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron.
|
||||
Schedule string `json:"schedule" protobuf:"bytes,1,opt,name=schedule"`
|
||||
|
||||
// The time zone for the given schedule, see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones.
|
||||
// If not specified, this will rely on the time zone of the kube-controller-manager process.
|
||||
// ALPHA: This field is in alpha and must be enabled via the `CronJobTimeZone` feature gate.
|
||||
// The time zone name for the given schedule, see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones.
|
||||
// If not specified, this will default to the time zone of the kube-controller-manager process.
|
||||
// The set of valid time zone names and the time zone offset is loaded from the system-wide time zone
|
||||
// database by the API server during CronJob validation and the controller manager during execution.
|
||||
// If no system-wide time zone database can be found a bundled version of the database is used instead.
|
||||
// If the time zone name becomes invalid during the lifetime of a CronJob or due to a change in host
|
||||
// configuration, the controller will stop creating new new Jobs and will create a system event with the
|
||||
// reason UnknownTimeZone.
|
||||
// More information can be found in https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#time-zones
|
||||
// This is beta field and must be enabled via the `CronJobTimeZone` feature gate.
|
||||
// +optional
|
||||
TimeZone *string `json:"timeZone,omitempty" protobuf:"bytes,8,opt,name=timeZone"`
@ -51,7 +51,7 @@ func (CronJobList) SwaggerDoc() map[string]string {
|
|||
var map_CronJobSpec = map[string]string{
|
||||
"": "CronJobSpec describes how the job execution will look like and when it will actually run.",
|
||||
"schedule": "The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron.",
|
||||
"timeZone": "The time zone for the given schedule, see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones. If not specified, this will rely on the time zone of the kube-controller-manager process. ALPHA: This field is in alpha and must be enabled via the `CronJobTimeZone` feature gate.",
|
||||
"timeZone": "The time zone name for the given schedule, see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones. If not specified, this will default to the time zone of the kube-controller-manager process. The set of valid time zone names and the time zone offset is loaded from the system-wide time zone database by the API server during CronJob validation and the controller manager during execution. If no system-wide time zone database can be found a bundled version of the database is used instead. If the time zone name becomes invalid during the lifetime of a CronJob or due to a change in host configuration, the controller will stop creating new new Jobs and will create a system event with the reason UnknownTimeZone. More information can be found in https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#time-zones This is beta field and must be enabled via the `CronJobTimeZone` feature gate.",
|
||||
"startingDeadlineSeconds": "Optional deadline in seconds for starting the job if it misses scheduled time for any reason. Missed jobs executions will be counted as failed ones.",
|
||||
"concurrencyPolicy": "Specifies how to treat concurrent executions of a Job. Valid values are: - \"Allow\" (default): allows CronJobs to run concurrently; - \"Forbid\": forbids concurrent runs, skipping next run if previous run hasn't finished yet; - \"Replace\": cancels currently running job and replaces it with a new one",
|
||||
"suspend": "This flag tells the controller to suspend subsequent executions, it does not apply to already started executions. Defaults to false.",
@ -275,7 +275,9 @@ type CertificateSigningRequestList struct {
|
|||
|
||||
// KeyUsage specifies valid usage contexts for keys.
|
||||
// See: https://tools.ietf.org/html/rfc5280#section-4.2.1.3
|
||||
//
|
||||
// https://tools.ietf.org/html/rfc5280#section-4.2.1.12
|
||||
//
|
||||
// +enum
|
||||
type KeyUsage string
@ -230,6 +230,7 @@ type CertificateSigningRequestList struct {
|
|||
|
||||
// KeyUsages specifies valid usage contexts for keys.
|
||||
// See: https://tools.ietf.org/html/rfc5280#section-4.2.1.3
|
||||
//
|
||||
// https://tools.ietf.org/html/rfc5280#section-4.2.1.12
|
||||
type KeyUsage string
File diff suppressed because it is too large
@@ -220,11 +220,20 @@ message CSIPersistentVolumeSource {
// controllerExpandSecretRef is a reference to the secret object containing
// sensitive information to pass to the CSI driver to complete the CSI
// ControllerExpandVolume call.
// This is an alpha field and requires enabling ExpandCSIVolumes feature gate.
// This is a beta field and requires enabling ExpandCSIVolumes feature gate.
// This field is optional, and may be empty if no secret is required. If the
// secret object contains more than one secret, all secrets are passed.
// +optional
optional SecretReference controllerExpandSecretRef = 9;

// nodeExpandSecretRef is a reference to the secret object containing
// sensitive information to pass to the CSI driver to complete the CSI
// NodeExpandVolume call.
// This is an alpha field and requires enabling CSINodeExpandSecret feature gate.
// This field is optional, may be omitted if no secret is required. If the
// secret object contains more than one secret, all secrets are passed.
// +optional
optional SecretReference nodeExpandSecretRef = 10;
}

// Represents a source location of a volume to mount, managed by an external CSI driver
@@ -647,12 +656,12 @@ message Container {
// +optional
optional string workingDir = 5;

// List of ports to expose from the container. Exposing a port here gives
// the system additional information about the network connections a
// container uses, but is primarily informational. Not specifying a port here
// List of ports to expose from the container. Not specifying a port here
// DOES NOT prevent that port from being exposed. Any port which is
// listening on the default "0.0.0.0" address inside a container will be
// accessible from the network.
// Modifying this array with strategic merge patch may corrupt the data.
// For more information See https://github.com/kubernetes/kubernetes/issues/108255.
// Cannot be updated.
// +optional
// +patchMergeKey=containerPort
@@ -785,7 +794,7 @@ message Container {
// Describe a container image
message ContainerImage {
// Names by which this image is known.
// e.g. ["k8s.gcr.io/hyperkube:v1.0.7", "dockerhub.io/google_containers/hyperkube:v1.0.7"]
// e.g. ["kubernetes.example/hyperkube:v1.0.7", "cloud-vendor.registry.example/cloud-vendor/hyperkube:v1.0.7"]
// +optional
repeated string names = 1;
@@ -1062,11 +1071,14 @@ message EndpointPort {
// EndpointSubset is a group of addresses with a common set of ports. The
// expanded set of endpoints is the Cartesian product of Addresses x Ports.
// For example, given:
//
//	{
//	  Addresses: [{"ip": "10.10.1.1"}, {"ip": "10.10.2.2"}],
//	  Ports: [{"name": "a", "port": 8675}, {"name": "b", "port": 309}]
//	}
//
// The resulting set of endpoints can be viewed as:
//
//	a: [ 10.10.1.1:8675, 10.10.2.2:8675 ],
//	b: [ 10.10.1.1:309, 10.10.2.2:309 ]
message EndpointSubset {
@@ -1087,6 +1099,7 @@ message EndpointSubset {
}

// Endpoints is a collection of endpoints that implement the actual service. Example:
//
//	 Name: "mysvc",
//	 Subsets: [
//	   {
@@ -1192,8 +1205,6 @@ message EnvVarSource {
//
// To add an ephemeral container, use the ephemeralcontainers subresource of an existing
// Pod. Ephemeral containers may not be removed or restarted.
//
// This is a beta feature available on clusters that haven't disabled the EphemeralContainers feature gate.
message EphemeralContainer {
// Ephemeral containers have all of the fields of Container, plus additional fields
// specific to ephemeral containers. Fields in common with Container are in the
@@ -2535,6 +2546,7 @@ message ObjectFieldSelector {
// and the version of the actual struct is irrelevant.
// 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type
//    will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control.
//
// Instead of using this type, create a locally provided and used type that is well-focused on your reference.
// For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 .
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
@@ -2939,6 +2951,7 @@ message PersistentVolumeSpec {
// claim.VolumeName is the authoritative bind between PV and PVC.
// More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#binding
// +optional
// +structType=granular
optional ObjectReference claimRef = 4;

// persistentVolumeReclaimPolicy defines what happens to a persistent volume when released from its claim.
@@ -3232,6 +3245,7 @@ message PodExecOptions {

// IP address information for entries in the (plural) PodIPs field.
// Each entry includes:
//
//	IP: An IP address allocated to the pod. Routable at least within the cluster.
message PodIP {
// ip is an IP address (IPv4 or IPv6) assigned to the pod
@@ -3474,7 +3488,6 @@ message PodSpec {
// pod to perform user-initiated actions such as debugging. This list cannot be specified when
// creating a pod, and it cannot be modified by updating the pod spec. In order to add an
// ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource.
// This field is beta-level and available on clusters that haven't disabled the EphemeralContainers feature gate.
// +optional
// +patchMergeKey=name
// +patchStrategy=merge
@@ -3700,6 +3713,7 @@ message PodSpec {
// If the OS field is set to windows, following fields must be unset:
// - spec.hostPID
// - spec.hostIPC
// - spec.hostUsers
// - spec.securityContext.seLinuxOptions
// - spec.securityContext.seccompProfile
// - spec.securityContext.fsGroup
@@ -3719,8 +3733,20 @@ message PodSpec {
// - spec.containers[*].securityContext.runAsUser
// - spec.containers[*].securityContext.runAsGroup
// +optional
// This is a beta field and requires the IdentifyPodOS feature
optional PodOS os = 36;

// Use the host's user namespace.
// Optional: Default to true.
// If set to true or not present, the pod will be run in the host user namespace, useful
// for when the pod needs a feature only available to the host user namespace, such as
// loading a kernel module with CAP_SYS_MODULE.
// When set to false, a new userns is created for the pod. Setting false is useful for
// mitigating container breakout vulnerabilities even allowing users to run their
// containers as root without actually having root privileges on the host.
// This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature.
// +k8s:conversion-gen=false
// +optional
optional bool hostUsers = 37;
}

// PodStatus represents information about the status of a pod. Status may trail the actual
@@ -3814,7 +3840,6 @@ message PodStatus {
optional string qosClass = 9;

// Status for any ephemeral containers that have run in this pod.
// This field is beta-level and available on clusters that haven't disabled the EphemeralContainers feature gate.
// +optional
repeated ContainerStatus ephemeralContainerStatuses = 13;
}
@@ -5083,12 +5108,19 @@ message ServiceSpec {
// +optional
optional string externalName = 10;

// externalTrafficPolicy denotes if this Service desires to route external
// traffic to node-local or cluster-wide endpoints. "Local" preserves the
// client source IP and avoids a second hop for LoadBalancer and Nodeport
// type services, but risks potentially imbalanced traffic spreading.
// "Cluster" obscures the client source IP and may cause a second hop to
// another node, but should have good overall load-spreading.
// externalTrafficPolicy describes how nodes distribute service traffic they
// receive on one of the Service's "externally-facing" addresses (NodePorts,
// ExternalIPs, and LoadBalancer IPs). If set to "Local", the proxy will configure
// the service in a way that assumes that external load balancers will take care
// of balancing the service traffic between nodes, and so each node will deliver
// traffic only to the node-local endpoints of the service, without masquerading
// the client source IP. (Traffic mistakenly sent to a node with no endpoints will
// be dropped.) The default value, "Cluster", uses the standard behavior of
// routing to all endpoints evenly (possibly modified by topology and other
// features). Note that traffic sent to an External IP or LoadBalancer IP from
// within the cluster will always get "Cluster" semantics, but clients sending to
// a NodePort from within the cluster may need to take traffic policy into account
// when picking a node.
// +optional
optional string externalTrafficPolicy = 11;
@@ -5101,6 +5133,7 @@ message ServiceSpec {
// service or not. If this field is specified when creating a Service
// which does not need it, creation will fail. This field will be wiped
// when updating a Service to no longer need it (e.g. changing type).
// This field cannot be updated once set.
// +optional
optional int32 healthCheckNodePort = 12;
@@ -5174,12 +5207,12 @@ message ServiceSpec {
// +optional
optional string loadBalancerClass = 21;

// InternalTrafficPolicy specifies if the cluster internal traffic
// should be routed to all endpoints or node-local endpoints only.
// "Cluster" routes internal traffic to a Service to all endpoints.
// "Local" routes traffic to node-local endpoints only, traffic is
// dropped if no node-local endpoints are ready.
// The default value is "Cluster".
// InternalTrafficPolicy describes how nodes distribute service traffic they
// receive on the ClusterIP. If set to "Local", the proxy will assume that pods
// only want to talk to endpoints of the service on the same node as the pod,
// dropping the traffic if there are no local endpoints. The default value,
// "Cluster", uses the standard behavior of routing to all endpoints evenly
// (possibly modified by topology and other features).
// +featureGate=ServiceInternalTrafficPolicy
// +optional
optional string internalTrafficPolicy = 22;
@@ -5399,7 +5432,8 @@ message TopologySpreadConstraint {
// We consider each <key, value> as a "bucket", and try to put balanced number
// of pods into each bucket.
// We define a domain as a particular instance of a topology.
// Also, we define an eligible domain as a domain whose nodes match the node selector.
// Also, we define an eligible domain as a domain whose nodes meet the requirements of
// nodeAffinityPolicy and nodeTaintsPolicy.
// e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology.
// And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology.
// It's a required field.
@@ -5457,9 +5491,40 @@ message TopologySpreadConstraint {
// because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones,
// it will violate MaxSkew.
//
// This is an alpha field and requires enabling MinDomainsInPodTopologySpread feature gate.
// This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default).
// +optional
optional int32 minDomains = 5;

// NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector
// when calculating pod topology spread skew. Options are:
// - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations.
// - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.
//
// If this value is nil, the behavior is equivalent to the Honor policy.
// This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
// +optional
optional string nodeAffinityPolicy = 6;

// NodeTaintsPolicy indicates how we will treat node taints when calculating
// pod topology spread skew. Options are:
// - Honor: nodes without taints, along with tainted nodes for which the incoming pod
//   has a toleration, are included.
// - Ignore: node taints are ignored. All nodes are included.
//
// If this value is nil, the behavior is equivalent to the Ignore policy.
// This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
// +optional
optional string nodeTaintsPolicy = 7;

// MatchLabelKeys is a set of pod label keys to select the pods over which
// spreading will be calculated. The keys are used to lookup values from the
// incoming pod labels, those key-value labels are ANDed with labelSelector
// to select the group of existing pods over which spreading will be calculated
// for the incoming pod. Keys that don't exist in the incoming pod labels will
// be ignored. A null or empty list means only match against labelSelector.
// +listType=atomic
// +optional
repeated string matchLabelKeys = 8;
}

// TypedLocalObjectReference contains enough information to let you locate the
@@ -29,9 +29,12 @@ func (t *Toleration) MatchToleration(tolerationToMatch *Toleration) bool {
// ToleratesTaint checks if the toleration tolerates the taint.
// The matching follows the rules below:
// (1) Empty toleration.effect means to match all taint effects,
//
//	otherwise taint effect must equal to toleration.effect.
//
// (2) If toleration.operator is 'Exists', it means to match all taint values.
// (3) Empty toleration.key means to match all taint keys.
//
//	If toleration.key is empty, toleration.operator must be 'Exists';
//	this combination means to match all taint values and all taint keys.
func (t *Toleration) ToleratesTaint(taint *Taint) bool {
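Editorial aside, not part of the vendored diff: a small Go sketch of the toleration-matching rules documented above, using the k8s.io/api/core/v1 types. The taint key is only an example.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// An empty key with operator Exists tolerates every taint, per rule (3) above:
	// empty effect matches all effects, Exists matches all values, empty key matches all keys.
	tolerateAll := corev1.Toleration{Operator: corev1.TolerationOpExists}
	taint := corev1.Taint{Key: "node.kubernetes.io/unreachable", Effect: corev1.TaintEffectNoExecute}
	fmt.Println(tolerateAll.ToleratesTaint(&taint)) // prints: true
}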
@@ -337,6 +337,7 @@ type PersistentVolumeSpec struct {
// claim.VolumeName is the authoritative bind between PV and PVC.
// More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#binding
// +optional
// +structType=granular
ClaimRef *ObjectReference `json:"claimRef,omitempty" protobuf:"bytes,4,opt,name=claimRef"`
// persistentVolumeReclaimPolicy defines what happens to a persistent volume when released from its claim.
// Valid options are Retain (default for manually created PersistentVolumes), Delete (default
@@ -1800,11 +1801,20 @@ type CSIPersistentVolumeSource struct {
// controllerExpandSecretRef is a reference to the secret object containing
// sensitive information to pass to the CSI driver to complete the CSI
// ControllerExpandVolume call.
// This is an alpha field and requires enabling ExpandCSIVolumes feature gate.
// This is a beta field and requires enabling ExpandCSIVolumes feature gate.
// This field is optional, and may be empty if no secret is required. If the
// secret object contains more than one secret, all secrets are passed.
// +optional
ControllerExpandSecretRef *SecretReference `json:"controllerExpandSecretRef,omitempty" protobuf:"bytes,9,opt,name=controllerExpandSecretRef"`

// nodeExpandSecretRef is a reference to the secret object containing
// sensitive information to pass to the CSI driver to complete the CSI
// NodeExpandVolume call.
// This is an alpha field and requires enabling CSINodeExpandSecret feature gate.
// This field is optional, may be omitted if no secret is required. If the
// secret object contains more than one secret, all secrets are passed.
// +optional
NodeExpandSecretRef *SecretReference `json:"nodeExpandSecretRef,omitempty" protobuf:"bytes,10,opt,name=nodeExpandSecretRef"`
}

// Represents a source location of a volume to mount, managed by an external CSI driver
@@ -2324,12 +2334,12 @@ type Container struct {
// Cannot be updated.
// +optional
WorkingDir string `json:"workingDir,omitempty" protobuf:"bytes,5,opt,name=workingDir"`
// List of ports to expose from the container. Exposing a port here gives
// the system additional information about the network connections a
// container uses, but is primarily informational. Not specifying a port here
// List of ports to expose from the container. Not specifying a port here
// DOES NOT prevent that port from being exposed. Any port which is
// listening on the default "0.0.0.0" address inside a container will be
// accessible from the network.
// Modifying this array with strategic merge patch may corrupt the data.
// For more information See https://github.com/kubernetes/kubernetes/issues/108255.
// Cannot be updated.
// +optional
// +patchMergeKey=containerPort
@@ -2644,6 +2654,10 @@ const (
PodReady PodConditionType = "Ready"
// PodScheduled represents status of the scheduling process for this pod.
PodScheduled PodConditionType = "PodScheduled"
// AlphaNoCompatGuaranteeDisruptionTarget indicates the pod is about to be deleted due to a
// disruption (such as preemption, eviction API or garbage-collection).
// The constant is to be renamed once the name is accepted within the KEP-3329.
AlphaNoCompatGuaranteeDisruptionTarget PodConditionType = "DisruptionTarget"
)

// These are reasons for a pod's transition to a condition.
@@ -3081,7 +3095,6 @@ type PodSpec struct {
// pod to perform user-initiated actions such as debugging. This list cannot be specified when
// creating a pod, and it cannot be modified by updating the pod spec. In order to add an
// ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource.
// This field is beta-level and available on clusters that haven't disabled the EphemeralContainers feature gate.
// +optional
// +patchMergeKey=name
// +patchStrategy=merge
@@ -3277,6 +3290,7 @@ type PodSpec struct {
// If the OS field is set to windows, following fields must be unset:
// - spec.hostPID
// - spec.hostIPC
// - spec.hostUsers
// - spec.securityContext.seLinuxOptions
// - spec.securityContext.seccompProfile
// - spec.securityContext.fsGroup
@@ -3296,8 +3310,19 @@ type PodSpec struct {
// - spec.containers[*].securityContext.runAsUser
// - spec.containers[*].securityContext.runAsGroup
// +optional
// This is a beta field and requires the IdentifyPodOS feature
OS *PodOS `json:"os,omitempty" protobuf:"bytes,36,opt,name=os"`
// Use the host's user namespace.
// Optional: Default to true.
// If set to true or not present, the pod will be run in the host user namespace, useful
// for when the pod needs a feature only available to the host user namespace, such as
// loading a kernel module with CAP_SYS_MODULE.
// When set to false, a new userns is created for the pod. Setting false is useful for
// mitigating container breakout vulnerabilities even allowing users to run their
// containers as root without actually having root privileges on the host.
// This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature.
// +k8s:conversion-gen=false
// +optional
HostUsers *bool `json:"hostUsers,omitempty" protobuf:"bytes,37,opt,name=hostUsers"`
}

// OSName is the set of OS'es that can be used in OS.
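Editorial aside, not part of the vendored diff: a minimal Go sketch of opting a pod out of the host user namespace via the new hostUsers field, assuming a cluster with the alpha UserNamespacesSupport gate enabled. The container name and image are placeholders.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	hostUsers := false // false asks for a fresh user namespace for the pod
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{Name: "app", Image: "registry.example/app:latest"}},
		HostUsers:  &hostUsers, // honored only by servers with UserNamespacesSupport enabled
	}
	fmt.Println(*spec.HostUsers)
}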
@@ -3330,6 +3355,17 @@ const (
ScheduleAnyway UnsatisfiableConstraintAction = "ScheduleAnyway"
)

// NodeInclusionPolicy defines the type of node inclusion policy
// +enum
type NodeInclusionPolicy string

const (
// NodeInclusionPolicyIgnore means ignore this scheduling directive when calculating pod topology spread skew.
NodeInclusionPolicyIgnore NodeInclusionPolicy = "Ignore"
// NodeInclusionPolicyHonor means use this scheduling directive when calculating pod topology spread skew.
NodeInclusionPolicyHonor NodeInclusionPolicy = "Honor"
)

// TopologySpreadConstraint specifies how to spread matching pods among the given topology.
type TopologySpreadConstraint struct {
// MaxSkew describes the degree to which pods may be unevenly distributed.
@@ -3358,7 +3394,8 @@ type TopologySpreadConstraint struct {
// We consider each <key, value> as a "bucket", and try to put balanced number
// of pods into each bucket.
// We define a domain as a particular instance of a topology.
// Also, we define an eligible domain as a domain whose nodes match the node selector.
// Also, we define an eligible domain as a domain whose nodes meet the requirements of
// nodeAffinityPolicy and nodeTaintsPolicy.
// e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology.
// And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology.
// It's a required field.
@@ -3413,9 +3450,37 @@ type TopologySpreadConstraint struct {
// because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones,
// it will violate MaxSkew.
//
// This is an alpha field and requires enabling MinDomainsInPodTopologySpread feature gate.
// This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default).
// +optional
MinDomains *int32 `json:"minDomains,omitempty" protobuf:"varint,5,opt,name=minDomains"`
// NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector
// when calculating pod topology spread skew. Options are:
// - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations.
// - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.
//
// If this value is nil, the behavior is equivalent to the Honor policy.
// This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
// +optional
NodeAffinityPolicy *NodeInclusionPolicy `json:"nodeAffinityPolicy,omitempty" protobuf:"bytes,6,opt,name=nodeAffinityPolicy"`
// NodeTaintsPolicy indicates how we will treat node taints when calculating
// pod topology spread skew. Options are:
// - Honor: nodes without taints, along with tainted nodes for which the incoming pod
//   has a toleration, are included.
// - Ignore: node taints are ignored. All nodes are included.
//
// If this value is nil, the behavior is equivalent to the Ignore policy.
// This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
// +optional
NodeTaintsPolicy *NodeInclusionPolicy `json:"nodeTaintsPolicy,omitempty" protobuf:"bytes,7,opt,name=nodeTaintsPolicy"`
// MatchLabelKeys is a set of pod label keys to select the pods over which
// spreading will be calculated. The keys are used to lookup values from the
// incoming pod labels, those key-value labels are ANDed with labelSelector
// to select the group of existing pods over which spreading will be calculated
// for the incoming pod. Keys that don't exist in the incoming pod labels will
// be ignored. A null or empty list means only match against labelSelector.
// +listType=atomic
// +optional
MatchLabelKeys []string `json:"matchLabelKeys,omitempty" protobuf:"bytes,8,opt,name=matchLabelKeys"`
}

const (
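Editorial aside, not part of the vendored diff: a Go sketch of a TopologySpreadConstraint that exercises the new minDomains, nodeAffinityPolicy, nodeTaintsPolicy and matchLabelKeys fields shown above. The label selector, key names and values are illustrative only.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	minDomains := int32(3)
	honor := corev1.NodeInclusionPolicyHonor
	ignore := corev1.NodeInclusionPolicyIgnore
	c := corev1.TopologySpreadConstraint{
		MaxSkew:            1,
		TopologyKey:        "topology.kubernetes.io/zone", // each zone is a domain
		WhenUnsatisfiable:  corev1.DoNotSchedule,
		LabelSelector:      &metav1.LabelSelector{MatchLabels: map[string]string{"app": "web"}},
		MinDomains:         &minDomains,                    // beta: MinDomainsInPodTopologySpread
		NodeAffinityPolicy: &honor,                         // alpha: NodeInclusionPolicyInPodTopologySpread
		NodeTaintsPolicy:   &ignore,                        // alpha: NodeInclusionPolicyInPodTopologySpread
		MatchLabelKeys:     []string{"pod-template-hash"},  // spread per ReplicaSet revision
	}
	fmt.Println(c.TopologyKey, *c.MinDomains)
}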
@@ -3607,6 +3672,7 @@ type PodDNSConfigOption struct {

// IP address information for entries in the (plural) PodIPs field.
// Each entry includes:
//
//	IP: An IP address allocated to the pod. Routable at least within the cluster.
type PodIP struct {
// ip is an IP address (IPv4 or IPv6) assigned to the pod
@@ -3764,8 +3830,6 @@ var _ = Container(EphemeralContainerCommon{})
//
// To add an ephemeral container, use the ephemeralcontainers subresource of an existing
// Pod. Ephemeral containers may not be removed or restarted.
//
// This is a beta feature available on clusters that haven't disabled the EphemeralContainers feature gate.
type EphemeralContainer struct {
// Ephemeral containers have all of the fields of Container, plus additional fields
// specific to ephemeral containers. Fields in common with Container are in the
@@ -3867,7 +3931,6 @@ type PodStatus struct {
// +optional
QOSClass PodQOSClass `json:"qosClass,omitempty" protobuf:"bytes,9,rep,name=qosClass"`
// Status for any ephemeral containers that have run in this pod.
// This field is beta-level and available on clusters that haven't disabled the EphemeralContainers feature gate.
// +optional
EphemeralContainerStatuses []ContainerStatus `json:"ephemeralContainerStatuses,omitempty" protobuf:"bytes,13,rep,name=ephemeralContainerStatuses"`
}
@@ -4168,29 +4231,34 @@ const (
ServiceTypeExternalName ServiceType = "ExternalName"
)

// ServiceInternalTrafficPolicyType describes the type of traffic routing for
// internal traffic
// ServiceInternalTrafficPolicyType describes how nodes distribute service traffic they
// receive on the ClusterIP.
// +enum
type ServiceInternalTrafficPolicyType string

const (
// ServiceInternalTrafficPolicyCluster routes traffic to all endpoints
// ServiceInternalTrafficPolicyCluster routes traffic to all endpoints.
ServiceInternalTrafficPolicyCluster ServiceInternalTrafficPolicyType = "Cluster"

// ServiceInternalTrafficPolicyLocal only routes to node-local
// endpoints, otherwise drops the traffic
// ServiceInternalTrafficPolicyLocal routes traffic only to endpoints on the same
// node as the client pod (dropping the traffic if there are no local endpoints).
ServiceInternalTrafficPolicyLocal ServiceInternalTrafficPolicyType = "Local"
)

// Service External Traffic Policy Type string
// ServiceExternalTrafficPolicyType describes how nodes distribute service traffic they
// receive on one of the Service's "externally-facing" addresses (NodePorts, ExternalIPs,
// and LoadBalancer IPs).
// +enum
type ServiceExternalTrafficPolicyType string

const (
// ServiceExternalTrafficPolicyTypeLocal specifies node-local endpoints behavior.
ServiceExternalTrafficPolicyTypeLocal ServiceExternalTrafficPolicyType = "Local"
// ServiceExternalTrafficPolicyTypeCluster specifies node-global (legacy) behavior.
// ServiceExternalTrafficPolicyTypeCluster routes traffic to all endpoints.
ServiceExternalTrafficPolicyTypeCluster ServiceExternalTrafficPolicyType = "Cluster"

// ServiceExternalTrafficPolicyTypeLocal preserves the source IP of the traffic by
// routing only to endpoints on the same node as the traffic was received on
// (dropping the traffic if there are no local endpoints).
ServiceExternalTrafficPolicyTypeLocal ServiceExternalTrafficPolicyType = "Local"
)

// These are the valid conditions of a service.
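Editorial aside, not part of the vendored diff: a Go sketch of a ServiceSpec using the two traffic-policy fields whose documentation is reworded above. The service type and values are illustrative only.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	local := corev1.ServiceInternalTrafficPolicyLocal
	spec := corev1.ServiceSpec{
		Type:                  corev1.ServiceTypeLoadBalancer,
		ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal, // keep client source IP, node-local endpoints only
		InternalTrafficPolicy: &local,                                       // ClusterIP traffic stays on the client's node
	}
	fmt.Println(spec.ExternalTrafficPolicy, *spec.InternalTrafficPolicy)
}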
@@ -4255,30 +4323,34 @@ const (
IPv6Protocol IPFamily = "IPv6"
)

// IPFamilyPolicyType represents the dual-stack-ness requested or required by a Service
// IPFamilyPolicy represents the dual-stack-ness requested or required by a Service
// +enum
type IPFamilyPolicyType string
type IPFamilyPolicy string

const (
// IPFamilyPolicySingleStack indicates that this service is required to have a single IPFamily.
// The IPFamily assigned is based on the default IPFamily used by the cluster
// or as identified by service.spec.ipFamilies field
IPFamilyPolicySingleStack IPFamilyPolicyType = "SingleStack"
IPFamilyPolicySingleStack IPFamilyPolicy = "SingleStack"
// IPFamilyPolicyPreferDualStack indicates that this service prefers dual-stack when
// the cluster is configured for dual-stack. If the cluster is not configured
// for dual-stack the service will be assigned a single IPFamily. If the IPFamily is not
// set in service.spec.ipFamilies then the service will be assigned the default IPFamily
// configured on the cluster
IPFamilyPolicyPreferDualStack IPFamilyPolicyType = "PreferDualStack"
IPFamilyPolicyPreferDualStack IPFamilyPolicy = "PreferDualStack"
// IPFamilyPolicyRequireDualStack indicates that this service requires dual-stack. Using
// IPFamilyPolicyRequireDualStack on a single stack cluster will result in validation errors. The
// IPFamilies (and their order) assigned to this service is based on service.spec.ipFamilies. If
// service.spec.ipFamilies was not provided then it will be assigned according to how they are
// configured on the cluster. If service.spec.ipFamilies has only one entry then the alternative
// IPFamily will be added by apiserver
IPFamilyPolicyRequireDualStack IPFamilyPolicyType = "RequireDualStack"
IPFamilyPolicyRequireDualStack IPFamilyPolicy = "RequireDualStack"
)

// for backwards compat
// +enum
type IPFamilyPolicyType = IPFamilyPolicy

// ServiceSpec describes the attributes that a user creates on a service.
type ServiceSpec struct {
// The list of ports that are exposed by this service.
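Editorial aside, not part of the vendored diff: a Go sketch of the rename above. IPFamilyPolicyType becomes a type alias of IPFamilyPolicy, so code written against the old name keeps compiling. Values are illustrative only.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	policy := corev1.IPFamilyPolicyPreferDualStack // constant now has type IPFamilyPolicy
	var _ corev1.IPFamilyPolicyType = policy       // the old name is an alias, so this assignment still compiles
	spec := corev1.ServiceSpec{IPFamilyPolicy: &policy}
	fmt.Println(*spec.IPFamilyPolicy)
}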
@@ -4405,12 +4477,19 @@ type ServiceSpec struct {
// +optional
ExternalName string `json:"externalName,omitempty" protobuf:"bytes,10,opt,name=externalName"`

// externalTrafficPolicy denotes if this Service desires to route external
// traffic to node-local or cluster-wide endpoints. "Local" preserves the
// client source IP and avoids a second hop for LoadBalancer and Nodeport
// type services, but risks potentially imbalanced traffic spreading.
// "Cluster" obscures the client source IP and may cause a second hop to
// another node, but should have good overall load-spreading.
// externalTrafficPolicy describes how nodes distribute service traffic they
// receive on one of the Service's "externally-facing" addresses (NodePorts,
// ExternalIPs, and LoadBalancer IPs). If set to "Local", the proxy will configure
// the service in a way that assumes that external load balancers will take care
// of balancing the service traffic between nodes, and so each node will deliver
// traffic only to the node-local endpoints of the service, without masquerading
// the client source IP. (Traffic mistakenly sent to a node with no endpoints will
// be dropped.) The default value, "Cluster", uses the standard behavior of
// routing to all endpoints evenly (possibly modified by topology and other
// features). Note that traffic sent to an External IP or LoadBalancer IP from
// within the cluster will always get "Cluster" semantics, but clients sending to
// a NodePort from within the cluster may need to take traffic policy into account
// when picking a node.
// +optional
ExternalTrafficPolicy ServiceExternalTrafficPolicyType `json:"externalTrafficPolicy,omitempty" protobuf:"bytes,11,opt,name=externalTrafficPolicy"`
@@ -4423,6 +4502,7 @@ type ServiceSpec struct {
// service or not. If this field is specified when creating a Service
// which does not need it, creation will fail. This field will be wiped
// when updating a Service to no longer need it (e.g. changing type).
// This field cannot be updated once set.
// +optional
HealthCheckNodePort int32 `json:"healthCheckNodePort,omitempty" protobuf:"bytes,12,opt,name=healthCheckNodePort"`
@@ -4476,7 +4556,7 @@ type ServiceSpec struct {
// ipFamilies and clusterIPs fields depend on the value of this field. This
// field will be wiped when updating a service to type ExternalName.
// +optional
IPFamilyPolicy *IPFamilyPolicyType `json:"ipFamilyPolicy,omitempty" protobuf:"bytes,17,opt,name=ipFamilyPolicy,casttype=IPFamilyPolicyType"`
IPFamilyPolicy *IPFamilyPolicy `json:"ipFamilyPolicy,omitempty" protobuf:"bytes,17,opt,name=ipFamilyPolicy,casttype=IPFamilyPolicy"`

// allocateLoadBalancerNodePorts defines if NodePorts will be automatically
// allocated for services with type LoadBalancer. Default is "true". It
@@ -4502,12 +4582,12 @@ type ServiceSpec struct {
// +optional
LoadBalancerClass *string `json:"loadBalancerClass,omitempty" protobuf:"bytes,21,opt,name=loadBalancerClass"`

// InternalTrafficPolicy specifies if the cluster internal traffic
// should be routed to all endpoints or node-local endpoints only.
// "Cluster" routes internal traffic to a Service to all endpoints.
// "Local" routes traffic to node-local endpoints only, traffic is
// dropped if no node-local endpoints are ready.
// The default value is "Cluster".
// InternalTrafficPolicy describes how nodes distribute service traffic they
// receive on the ClusterIP. If set to "Local", the proxy will assume that pods
// only want to talk to endpoints of the service on the same node as the pod,
// dropping the traffic if there are no local endpoints. The default value,
// "Cluster", uses the standard behavior of routing to all endpoints evenly
// (possibly modified by topology and other features).
// +featureGate=ServiceInternalTrafficPolicy
// +optional
InternalTrafficPolicy *ServiceInternalTrafficPolicyType `json:"internalTrafficPolicy,omitempty" protobuf:"bytes,22,opt,name=internalTrafficPolicy"`
@@ -4669,6 +4749,7 @@ type ServiceAccountList struct {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// Endpoints is a collection of endpoints that implement the actual service. Example:
//
//	 Name: "mysvc",
//	 Subsets: [
//	   {
@@ -4701,11 +4782,14 @@ type Endpoints struct {
// EndpointSubset is a group of addresses with a common set of ports. The
// expanded set of endpoints is the Cartesian product of Addresses x Ports.
// For example, given:
//
//	{
//	  Addresses: [{"ip": "10.10.1.1"}, {"ip": "10.10.2.2"}],
//	  Ports: [{"name": "a", "port": 8675}, {"name": "b", "port": 309}]
//	}
//
// The resulting set of endpoints can be viewed as:
//
//	a: [ 10.10.1.1:8675, 10.10.2.2:8675 ],
//	b: [ 10.10.1.1:309, 10.10.2.2:309 ]
type EndpointSubset struct {
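Editorial aside, not part of the vendored diff: a Go sketch that mirrors the documented example above, expanding an EndpointSubset's Cartesian product of addresses and ports. The IPs and port names come straight from the doc comment.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// 2 addresses x 2 ports expand to 4 endpoints, as described in the comment.
	subset := corev1.EndpointSubset{
		Addresses: []corev1.EndpointAddress{{IP: "10.10.1.1"}, {IP: "10.10.2.2"}},
		Ports:     []corev1.EndpointPort{{Name: "a", Port: 8675}, {Name: "b", Port: 309}},
	}
	for _, p := range subset.Ports {
		for _, a := range subset.Addresses {
			fmt.Printf("%s: %s:%d\n", p.Name, a.IP, p.Port)
		}
	}
}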
@@ -5058,7 +5142,7 @@ type PodSignature struct {
// Describe a container image
type ContainerImage struct {
// Names by which this image is known.
// e.g. ["k8s.gcr.io/hyperkube:v1.0.7", "dockerhub.io/google_containers/hyperkube:v1.0.7"]
// e.g. ["kubernetes.example/hyperkube:v1.0.7", "cloud-vendor.registry.example/cloud-vendor/hyperkube:v1.0.7"]
// +optional
Names []string `json:"names" protobuf:"bytes,1,rep,name=names"`
// The size of the image in bytes.
@@ -5580,6 +5664,7 @@ type ServiceProxyOptions struct {
// and the version of the actual struct is irrelevant.
// 5. We cannot easily change it. Because this type is embedded in many locations, updates to this type
//    will affect numerous schemas. Don't make new APIs embed an underspecified API type they do not control.
//
// Instead of using this type, create a locally provided and used type that is well-focused on your reference.
// For example, ServiceReferences for admission registration: https://github.com/kubernetes/api/blob/release-1.17/admissionregistration/v1/types.go#L533 .
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
@@ -126,7 +126,8 @@ var map_CSIPersistentVolumeSource = map[string]string{
"controllerPublishSecretRef": "controllerPublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI ControllerPublishVolume and ControllerUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secrets are passed.",
"nodeStageSecretRef": "nodeStageSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodeStageVolume and NodeUnstageVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secrets are passed.",
"nodePublishSecretRef": "nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secrets are passed.",
"controllerExpandSecretRef": "controllerExpandSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI ControllerExpandVolume call. This is an alpha field and requires enabling ExpandCSIVolumes feature gate. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secrets are passed.",
"controllerExpandSecretRef": "controllerExpandSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI ControllerExpandVolume call. This is a beta field and requires enabling ExpandCSIVolumes feature gate. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secrets are passed.",
"nodeExpandSecretRef": "nodeExpandSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodeExpandVolume call. This is an alpha field and requires enabling CSINodeExpandSecret feature gate. This field is optional, may be omitted if no secret is required. If the secret object contains more than one secret, all secrets are passed.",
}

func (CSIPersistentVolumeSource) SwaggerDoc() map[string]string {
@@ -331,7 +332,7 @@ var map_Container = map[string]string{
"command": "Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell",
"args": "Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. \"$$(VAR_NAME)\" will produce the string literal \"$(VAR_NAME)\". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell",
"workingDir": "Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.",
"ports": "List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Cannot be updated.",
"ports": "List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default \"0.0.0.0\" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated.",
"envFrom": "List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated.",
"env": "List of environment variables to set in the container. Cannot be updated.",
"resources": "Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/",
@@ -356,7 +357,7 @@ func (Container) SwaggerDoc() map[string]string {

var map_ContainerImage = map[string]string{
"": "Describe a container image",
"names": "Names by which this image is known. e.g. [\"k8s.gcr.io/hyperkube:v1.0.7\", \"dockerhub.io/google_containers/hyperkube:v1.0.7\"]",
"names": "Names by which this image is known. e.g. [\"kubernetes.example/hyperkube:v1.0.7\", \"cloud-vendor.registry.example/cloud-vendor/hyperkube:v1.0.7\"]",
"sizeBytes": "The size of the image in bytes.",
}
@@ -514,7 +515,7 @@ func (EndpointPort) SwaggerDoc() map[string]string {
}

var map_EndpointSubset = map[string]string{
"": "EndpointSubset is a group of addresses with a common set of ports. The expanded set of endpoints is the Cartesian product of Addresses x Ports. For example, given:\n {\n Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}],\n Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}]\n }\nThe resulting set of endpoints can be viewed as:\n a: [ 10.10.1.1:8675, 10.10.2.2:8675 ],\n b: [ 10.10.1.1:309, 10.10.2.2:309 ]",
"": "EndpointSubset is a group of addresses with a common set of ports. The expanded set of endpoints is the Cartesian product of Addresses x Ports. For example, given:\n\n\t{\n\t Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}],\n\t Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}]\n\t}\n\nThe resulting set of endpoints can be viewed as:\n\n\ta: [ 10.10.1.1:8675, 10.10.2.2:8675 ],\n\tb: [ 10.10.1.1:309, 10.10.2.2:309 ]",
"addresses": "IP addresses which offer the related ports that are marked as ready. These endpoints should be considered safe for load balancers and clients to utilize.",
"notReadyAddresses": "IP addresses which offer the related ports but are not currently marked as ready because they have not yet finished starting, have recently failed a readiness check, or have recently failed a liveness check.",
"ports": "Port numbers available on the related IP addresses.",
@@ -525,7 +526,7 @@ func (EndpointSubset) SwaggerDoc() map[string]string {
}

var map_Endpoints = map[string]string{
"": "Endpoints is a collection of endpoints that implement the actual service. Example:\n Name: \"mysvc\",\n Subsets: [\n {\n Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}],\n Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}]\n },\n {\n Addresses: [{\"ip\": \"10.10.3.3\"}],\n Ports: [{\"name\": \"a\", \"port\": 93}, {\"name\": \"b\", \"port\": 76}]\n },\n ]",
"": "Endpoints is a collection of endpoints that implement the actual service. Example:\n\n\t Name: \"mysvc\",\n\t Subsets: [\n\t {\n\t Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}],\n\t Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}]\n\t },\n\t {\n\t Addresses: [{\"ip\": \"10.10.3.3\"}],\n\t Ports: [{\"name\": \"a\", \"port\": 93}, {\"name\": \"b\", \"port\": 76}]\n\t },\n\t]",
"metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"subsets": "The set of all endpoints is the union of all subsets. Addresses are placed into subsets according to the IPs they share. A single address with multiple ports, some of which are ready and some of which are not (because they come from different containers) will result in the address being displayed in different subsets for the different ports. No address will appear in both Addresses and NotReadyAddresses in the same subset. Sets of addresses and ports that comprise a service.",
}
@@ -579,7 +580,7 @@ func (EnvVarSource) SwaggerDoc() map[string]string {
}

var map_EphemeralContainer = map[string]string{
"": "An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation.\n\nTo add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted.\n\nThis is a beta feature available on clusters that haven't disabled the EphemeralContainers feature gate.",
"": "An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation.\n\nTo add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted.",
"targetContainerName": "If set, the name of the container from PodSpec that this ephemeral container targets. The ephemeral container will be run in the namespaces (IPC, PID, etc) of this container. If not set then the ephemeral container uses the namespaces configured in the Pod spec.\n\nThe container runtime must implement support for this feature. If the runtime does not support namespace targeting then the result of setting this field is undefined.",
}
@@ -1534,7 +1535,7 @@ func (PodExecOptions) SwaggerDoc() map[string]string {
}

var map_PodIP = map[string]string{
"": "IP address information for entries in the (plural) PodIPs field. Each entry includes:\n IP: An IP address allocated to the pod. Routable at least within the cluster.",
"": "IP address information for entries in the (plural) PodIPs field. Each entry includes:\n\n\tIP: An IP address allocated to the pod. Routable at least within the cluster.",
"ip": "ip is an IP address (IPv4 or IPv6) assigned to the pod",
}
@@ -1637,7 +1638,7 @@ var map_PodSpec = map[string]string{
"volumes": "List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes",
"initContainers": "List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/",
"containers": "List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated.",
"ephemeralContainers": "List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. This field is beta-level and available on clusters that haven't disabled the EphemeralContainers feature gate.",
"ephemeralContainers": "List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource.",
"restartPolicy": "Restart policy for all containers within the pod. One of Always, OnFailure, Never. Default to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy",
"terminationGracePeriodSeconds": "Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds.",
"activeDeadlineSeconds": "Optional duration in seconds the pod may be active on the node relative to StartTime before the system will actively try to mark it failed and kill associated containers. Value must be a positive integer.",
@ -1669,7 +1670,8 @@ var map_PodSpec = map[string]string{
|
|||
"overhead": "Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. This field will be autopopulated at admission time by the RuntimeClass admission controller. If the RuntimeClass admission controller is enabled, overhead must not be set in Pod create requests. The RuntimeClass admission controller will reject Pod create requests which have the overhead already set. If RuntimeClass is configured and selected in the PodSpec, Overhead will be set to the value defined in the corresponding RuntimeClass, otherwise it will remain unset and treated as zero. More info: https://git.k8s.io/enhancements/keps/sig-node/688-pod-overhead/README.md",
|
||||
"topologySpreadConstraints": "TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed.",
|
||||
"setHostnameAsFQDN": "If true the pod's hostname will be configured as the pod's FQDN, rather than the leaf name (the default). In Linux containers, this means setting the FQDN in the hostname field of the kernel (the nodename field of struct utsname). In Windows containers, this means setting the registry value of hostname for the registry key HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters to FQDN. If a pod does not have FQDN, this has no effect. Default to false.",
|
||||
"os": "Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set.\n\nIf the OS field is set to linux, the following fields must be unset: -securityContext.windowsOptions\n\nIf the OS field is set to windows, following fields must be unset: - spec.hostPID - spec.hostIPC - spec.securityContext.seLinuxOptions - spec.securityContext.seccompProfile - spec.securityContext.fsGroup - spec.securityContext.fsGroupChangePolicy - spec.securityContext.sysctls - spec.shareProcessNamespace - spec.securityContext.runAsUser - spec.securityContext.runAsGroup - spec.securityContext.supplementalGroups - spec.containers[*].securityContext.seLinuxOptions - spec.containers[*].securityContext.seccompProfile - spec.containers[*].securityContext.capabilities - spec.containers[*].securityContext.readOnlyRootFilesystem - spec.containers[*].securityContext.privileged - spec.containers[*].securityContext.allowPrivilegeEscalation - spec.containers[*].securityContext.procMount - spec.containers[*].securityContext.runAsUser - spec.containers[*].securityContext.runAsGroup This is a beta field and requires the IdentifyPodOS feature",
|
||||
"os": "Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set.\n\nIf the OS field is set to linux, the following fields must be unset: -securityContext.windowsOptions\n\nIf the OS field is set to windows, following fields must be unset: - spec.hostPID - spec.hostIPC - spec.hostUsers - spec.securityContext.seLinuxOptions - spec.securityContext.seccompProfile - spec.securityContext.fsGroup - spec.securityContext.fsGroupChangePolicy - spec.securityContext.sysctls - spec.shareProcessNamespace - spec.securityContext.runAsUser - spec.securityContext.runAsGroup - spec.securityContext.supplementalGroups - spec.containers[*].securityContext.seLinuxOptions - spec.containers[*].securityContext.seccompProfile - spec.containers[*].securityContext.capabilities - spec.containers[*].securityContext.readOnlyRootFilesystem - spec.containers[*].securityContext.privileged - spec.containers[*].securityContext.allowPrivilegeEscalation - spec.containers[*].securityContext.procMount - spec.containers[*].securityContext.runAsUser - spec.containers[*].securityContext.runAsGroup",
"hostUsers": "Use the host's user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature.",
}
func (PodSpec) SwaggerDoc() map[string]string {
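
The two "os" entries above are the before/after doc strings for PodOS support, and "hostUsers" is the new alpha user-namespace knob. As a rough illustration of how a consumer of this vendored API might set both fields (variable names and values are invented; the types come from k8s.io/api/core/v1 at the level pulled in by this bump):

package example

import corev1 "k8s.io/api/core/v1"

// examplePodSpec pins the pod OS and opts out of the host user namespace,
// per the "os" and "hostUsers" fields documented above. Illustrative only.
func examplePodSpec() corev1.PodSpec {
	hostUsers := false // run the pod in its own user namespace (UserNamespacesSupport gate)
	return corev1.PodSpec{
		OS:        &corev1.PodOS{Name: corev1.Linux},
		HostUsers: &hostUsers,
	}
}
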

@@ -1690,7 +1692,7 @@ var map_PodStatus = map[string]string{
"initContainerStatuses": "The list has one entry per init container in the manifest. The most recent successful init container will have ready = true, the most recently started container will have startTime set. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status",
"containerStatuses": "The list has one entry per container in the manifest. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status",
"qosClass": "The Quality of Service (QOS) classification assigned to the pod based on resource requirements See PodQOSClass type for available QOS classes More info: https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md",
"ephemeralContainerStatuses": "Status for any ephemeral containers that have run in this pod. This field is beta-level and available on clusters that haven't disabled the EphemeralContainers feature gate.",
"ephemeralContainerStatuses": "Status for any ephemeral containers that have run in this pod.",
}
func (PodStatus) SwaggerDoc() map[string]string {

@@ -2274,15 +2276,15 @@ var map_ServiceSpec = map[string]string{
"loadBalancerIP": "Only applies to Service Type: LoadBalancer. This feature depends on whether the underlying cloud-provider supports specifying the loadBalancerIP when a load balancer is created. This field will be ignored if the cloud-provider does not support the feature. Deprecated: This field was under-specified and its meaning varies across implementations, and it cannot support dual-stack. As of Kubernetes v1.24, users are encouraged to use implementation-specific annotations when available. This field may be removed in a future API version.",
"loadBalancerSourceRanges": "If specified and supported by the platform, this will restrict traffic through the cloud-provider load-balancer will be restricted to the specified client IPs. This field will be ignored if the cloud-provider does not support the feature.\" More info: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/",
"externalName": "externalName is the external reference that discovery mechanisms will return as an alias for this service (e.g. a DNS CNAME record). No proxying will be involved. Must be a lowercase RFC-1123 hostname (https://tools.ietf.org/html/rfc1123) and requires `type` to be \"ExternalName\".",
"externalTrafficPolicy": "externalTrafficPolicy denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. \"Local\" preserves the client source IP and avoids a second hop for LoadBalancer and Nodeport type services, but risks potentially imbalanced traffic spreading. \"Cluster\" obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading.",
"healthCheckNodePort": "healthCheckNodePort specifies the healthcheck nodePort for the service. This only applies when type is set to LoadBalancer and externalTrafficPolicy is set to Local. If a value is specified, is in-range, and is not in use, it will be used. If not specified, a value will be automatically allocated. External systems (e.g. load-balancers) can use this port to determine if a given node holds endpoints for this service or not. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type).",
"externalTrafficPolicy": "externalTrafficPolicy describes how nodes distribute service traffic they receive on one of the Service's \"externally-facing\" addresses (NodePorts, ExternalIPs, and LoadBalancer IPs). If set to \"Local\", the proxy will configure the service in a way that assumes that external load balancers will take care of balancing the service traffic between nodes, and so each node will deliver traffic only to the node-local endpoints of the service, without masquerading the client source IP. (Traffic mistakenly sent to a node with no endpoints will be dropped.) The default value, \"Cluster\", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features). Note that traffic sent to an External IP or LoadBalancer IP from within the cluster will always get \"Cluster\" semantics, but clients sending to a NodePort from within the cluster may need to take traffic policy into account when picking a node.",
"healthCheckNodePort": "healthCheckNodePort specifies the healthcheck nodePort for the service. This only applies when type is set to LoadBalancer and externalTrafficPolicy is set to Local. If a value is specified, is in-range, and is not in use, it will be used. If not specified, a value will be automatically allocated. External systems (e.g. load-balancers) can use this port to determine if a given node holds endpoints for this service or not. If this field is specified when creating a Service which does not need it, creation will fail. This field will be wiped when updating a Service to no longer need it (e.g. changing type). This field cannot be updated once set.",
"publishNotReadyAddresses": "publishNotReadyAddresses indicates that any agent which deals with endpoints for this Service should disregard any indications of ready/not-ready. The primary use case for setting this field is for a StatefulSet's Headless Service to propagate SRV DNS records for its Pods for the purpose of peer discovery. The Kubernetes controllers that generate Endpoints and EndpointSlice resources for Services interpret this to mean that all endpoints are considered \"ready\" even if the Pods themselves are not. Agents which consume only Kubernetes generated endpoints through the Endpoints or EndpointSlice resources can safely assume this behavior.",
"sessionAffinityConfig": "sessionAffinityConfig contains the configurations of session affinity.",
"ipFamilies": "IPFamilies is a list of IP families (e.g. IPv4, IPv6) assigned to this service. This field is usually assigned automatically based on cluster configuration and the ipFamilyPolicy field. If this field is specified manually, the requested family is available in the cluster, and ipFamilyPolicy allows it, it will be used; otherwise creation of the service will fail. This field is conditionally mutable: it allows for adding or removing a secondary IP family, but it does not allow changing the primary IP family of the Service. Valid values are \"IPv4\" and \"IPv6\". This field only applies to Services of types ClusterIP, NodePort, and LoadBalancer, and does apply to \"headless\" services. This field will be wiped when updating a Service to type ExternalName.\n\nThis field may hold a maximum of two entries (dual-stack families, in either order). These families must correspond to the values of the clusterIPs field, if specified. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field.",
"ipFamilyPolicy": "IPFamilyPolicy represents the dual-stack-ness requested or required by this Service. If there is no value provided, then this field will be set to SingleStack. Services can be \"SingleStack\" (a single IP family), \"PreferDualStack\" (two IP families on dual-stack configured clusters or a single IP family on single-stack clusters), or \"RequireDualStack\" (two IP families on dual-stack configured clusters, otherwise fail). The ipFamilies and clusterIPs fields depend on the value of this field. This field will be wiped when updating a service to type ExternalName.",
"allocateLoadBalancerNodePorts": "allocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is \"true\". It may be set to \"false\" if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type.",
"loadBalancerClass": "loadBalancerClass is the class of the load balancer implementation this Service belongs to. If specified, the value of this field must be a label-style identifier, with an optional prefix, e.g. \"internal-vip\" or \"example.com/internal-vip\". Unprefixed names are reserved for end-users. This field can only be set when the Service type is 'LoadBalancer'. If not set, the default load balancer implementation is used, today this is typically done through the cloud provider integration, but should apply for any default implementation. If set, it is assumed that a load balancer implementation is watching for Services with a matching class. Any default load balancer implementation (e.g. cloud providers) should ignore Services that set this field. This field can only be set when creating or updating a Service to type 'LoadBalancer'. Once set, it can not be changed. This field will be wiped when a service is updated to a non 'LoadBalancer' type.",
"internalTrafficPolicy": "InternalTrafficPolicy specifies if the cluster internal traffic should be routed to all endpoints or node-local endpoints only. \"Cluster\" routes internal traffic to a Service to all endpoints. \"Local\" routes traffic to node-local endpoints only, traffic is dropped if no node-local endpoints are ready. The default value is \"Cluster\".",
"internalTrafficPolicy": "InternalTrafficPolicy describes how nodes distribute service traffic they receive on the ClusterIP. If set to \"Local\", the proxy will assume that pods only want to talk to endpoints of the service on the same node as the pod, dropping the traffic if there are no local endpoints. The default value, \"Cluster\", uses the standard behavior of routing to all endpoints evenly (possibly modified by topology and other features).",
}
func (ServiceSpec) SwaggerDoc() map[string]string {
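
For orientation, a minimal sketch of a ServiceSpec exercising the traffic-policy and dual-stack fields whose doc strings change above. Type and constant names are as published in k8s.io/api v0.25 (later releases alias some of them); the example is ours, not code from this repository:

package example

import corev1 "k8s.io/api/core/v1"

// exampleServiceSpec asks for Local external and internal traffic policy and
// a prefer-dual-stack address allocation. Values are illustrative only.
func exampleServiceSpec() corev1.ServiceSpec {
	policy := corev1.IPFamilyPolicyPreferDualStack
	internal := corev1.ServiceInternalTrafficPolicyLocal
	return corev1.ServiceSpec{
		Type:                  corev1.ServiceTypeLoadBalancer,
		ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal,
		InternalTrafficPolicy: &internal,
		IPFamilyPolicy:        &policy,
		IPFamilies:            []corev1.IPFamily{corev1.IPv4Protocol, corev1.IPv6Protocol},
	}
}
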

@@ -2401,10 +2403,13 @@ func (TopologySelectorTerm) SwaggerDoc() map[string]string {
var map_TopologySpreadConstraint = map[string]string{
"": "TopologySpreadConstraint specifies how to spread matching pods among the given topology.",
"maxSkew": "MaxSkew describes the degree to which pods may be unevenly distributed. When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. ",
"topologyKey": "TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a \"bucket\", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes match the node selector. e.g. If TopologyKey is \"kubernetes.io/hostname\", each Node is a domain of that topology. And, if TopologyKey is \"topology.kubernetes.io/zone\", each zone is a domain of that topology. It's a required field.",
"topologyKey": "TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a \"bucket\", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is \"kubernetes.io/hostname\", each Node is a domain of that topology. And, if TopologyKey is \"topology.kubernetes.io/zone\", each zone is a domain of that topology. It's a required field.",
"whenUnsatisfiable": "WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location,\n but giving higher precedence to topologies that would help reduce the\n skew.\nA constraint is considered \"Unsatisfiable\" for an incoming pod if and only if every possible node assignment for that pod would violate \"MaxSkew\" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: ",
"labelSelector": "LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain.",
"minDomains": "MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats \"global minimum\" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule.\n\nFor example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: ",
"nodeAffinityPolicy": "NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.\n\nIf this value is nil, the behavior is equivalent to the Honor policy. This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.",
"nodeTaintsPolicy": "NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included.\n\nIf this value is nil, the behavior is equivalent to the Ignore policy. This is a alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.",
"matchLabelKeys": "MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector.",
}
func (TopologySpreadConstraint) SwaggerDoc() map[string]string {
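
A hedged sketch of a TopologySpreadConstraint that uses the newly documented minDomains, nodeAffinityPolicy, nodeTaintsPolicy and matchLabelKeys fields; selector labels and numbers are invented, field names follow k8s.io/api/core/v1 v0.25:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleSpreadConstraint spreads matching pods across zones, honouring node
// affinity but ignoring taints when counting eligible domains. Illustrative only.
func exampleSpreadConstraint() corev1.TopologySpreadConstraint {
	minDomains := int32(3)
	honor := corev1.NodeInclusionPolicyHonor
	ignore := corev1.NodeInclusionPolicyIgnore
	return corev1.TopologySpreadConstraint{
		MaxSkew:            1,
		TopologyKey:        "topology.kubernetes.io/zone",
		WhenUnsatisfiable:  corev1.DoNotSchedule,
		LabelSelector:      &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo"}},
		MinDomains:         &minDomains,
		NodeAffinityPolicy: &honor,
		NodeTaintsPolicy:   &ignore,
		MatchLabelKeys:     []string{"pod-template-hash"},
	}
}
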

@@ -243,6 +243,11 @@ func (in *CSIPersistentVolumeSource) DeepCopyInto(out *CSIPersistentVolumeSource
*out = new(SecretReference)
**out = **in
}
if in.NodeExpandSecretRef != nil {
in, out := &in.NodeExpandSecretRef, &out.NodeExpandSecretRef
*out = new(SecretReference)
**out = **in
}
return
}

@@ -3949,6 +3954,11 @@ func (in *PodSpec) DeepCopyInto(out *PodSpec) {
*out = new(PodOS)
**out = **in
}
if in.HostUsers != nil {
in, out := &in.HostUsers, &out.HostUsers
*out = new(bool)
**out = **in
}
return
}
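
The generated code above gives PodSpec.HostUsers its own allocation when copying. A small sketch (ours, not from this diff) of what that buys callers:

package example

import corev1 "k8s.io/api/core/v1"

// exampleDeepCopy shows that the copy's HostUsers pointer can be mutated
// without touching the original, thanks to the generated DeepCopyInto above.
func exampleDeepCopy() {
	hostUsers := true
	orig := &corev1.PodSpec{HostUsers: &hostUsers}
	cp := orig.DeepCopy() // calls the generated DeepCopyInto
	*cp.HostUsers = false // *orig.HostUsers is still true
	_ = orig
}
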

@@ -5400,7 +5410,7 @@ func (in *ServiceSpec) DeepCopyInto(out *ServiceSpec) {
}
if in.IPFamilyPolicy != nil {
in, out := &in.IPFamilyPolicy, &out.IPFamilyPolicy
*out = new(IPFamilyPolicyType)
*out = new(IPFamilyPolicy)
**out = **in
}
if in.AllocateLoadBalancerNodePorts != nil {

@@ -5649,6 +5659,21 @@ func (in *TopologySpreadConstraint) DeepCopyInto(out *TopologySpreadConstraint)
*out = new(int32)
**out = **in
}
if in.NodeAffinityPolicy != nil {
in, out := &in.NodeAffinityPolicy, &out.NodeAffinityPolicy
*out = new(NodeInclusionPolicy)
**out = **in
}
if in.NodeTaintsPolicy != nil {
in, out := &in.NodeTaintsPolicy, &out.NodeTaintsPolicy
*out = new(NodeInclusionPolicy)
**out = **in
}
if in.MatchLabelKeys != nil {
in, out := &in.MatchLabelKeys, &out.MatchLabelKeys
*out = make([]string, len(*in))
copy(*out, *in)
}
return
}

@@ -66,8 +66,7 @@ message Endpoint {
map<string, string> deprecatedTopology = 5;

// nodeName represents the name of the Node hosting this endpoint. This can
// be used to determine endpoints local to a Node. This field can be enabled
// with the EndpointSliceNodeName feature gate.
// be used to determine endpoints local to a Node.
// +optional
optional string nodeName = 6;

@@ -101,8 +101,7 @@ type Endpoint struct {
DeprecatedTopology map[string]string `json:"deprecatedTopology,omitempty" protobuf:"bytes,5,opt,name=deprecatedTopology"`

// nodeName represents the name of the Node hosting this endpoint. This can
// be used to determine endpoints local to a Node. This field can be enabled
// with the EndpointSliceNodeName feature gate.
// be used to determine endpoints local to a Node.
// +optional
NodeName *string `json:"nodeName,omitempty" protobuf:"bytes,6,opt,name=nodeName"`
// zone is the name of the Zone this endpoint exists in.

@@ -34,7 +34,7 @@ var map_Endpoint = map[string]string{
"hostname": "hostname of this endpoint. This field may be used by consumers of endpoints to distinguish endpoints from each other (e.g. in DNS names). Multiple endpoints which use the same hostname should be considered fungible (e.g. multiple A values in DNS). Must be lowercase and pass DNS Label (RFC 1123) validation.",
"targetRef": "targetRef is a reference to a Kubernetes object that represents this endpoint.",
"deprecatedTopology": "deprecatedTopology contains topology information part of the v1beta1 API. This field is deprecated, and will be removed when the v1beta1 API is removed (no sooner than kubernetes v1.24). While this field can hold values, it is not writable through the v1 API, and any attempts to write to it will be silently ignored. Topology information can be found in the zone and nodeName fields instead.",
"nodeName": "nodeName represents the name of the Node hosting this endpoint. This can be used to determine endpoints local to a Node. This field can be enabled with the EndpointSliceNodeName feature gate.",
"nodeName": "nodeName represents the name of the Node hosting this endpoint. This can be used to determine endpoints local to a Node.",
"zone": "zone is the name of the Zone this endpoint exists in.",
"hints": "hints contains information associated with how an endpoint should be consumed.",
}
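
A short, illustrative construction of a discovery/v1 Endpoint using the nodeName field whose feature-gate wording was dropped above; the address, node and zone values are made up:

package example

import discoveryv1 "k8s.io/api/discovery/v1"

// exampleEndpoint sets nodeName so consumers can find endpoints local to a Node.
func exampleEndpoint() discoveryv1.Endpoint {
	node := "worker-1"
	zone := "us-east-1a"
	return discoveryv1.Endpoint{
		Addresses: []string{"10.1.2.3"},
		NodeName:  &node,
		Zone:      &zone,
	}
}
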

@@ -73,8 +73,7 @@ message Endpoint {
map<string, string> topology = 5;

// nodeName represents the name of the Node hosting this endpoint. This can
// be used to determine endpoints local to a Node. This field can be enabled
// with the EndpointSliceNodeName feature gate.
// be used to determine endpoints local to a Node.
// +optional
optional string nodeName = 6;

@@ -109,8 +109,7 @@ type Endpoint struct {
// +optional
Topology map[string]string `json:"topology,omitempty" protobuf:"bytes,5,opt,name=topology"`
// nodeName represents the name of the Node hosting this endpoint. This can
// be used to determine endpoints local to a Node. This field can be enabled
// with the EndpointSliceNodeName feature gate.
// be used to determine endpoints local to a Node.
// +optional
NodeName *string `json:"nodeName,omitempty" protobuf:"bytes,6,opt,name=nodeName"`
// hints contains information associated with how an endpoint should be

@@ -34,7 +34,7 @@ var map_Endpoint = map[string]string{
"hostname": "hostname of this endpoint. This field may be used by consumers of endpoints to distinguish endpoints from each other (e.g. in DNS names). Multiple endpoints which use the same hostname should be considered fungible (e.g. multiple A values in DNS). Must be lowercase and pass DNS Label (RFC 1123) validation.",
"targetRef": "targetRef is a reference to a Kubernetes object that represents this endpoint.",
"topology": "topology contains arbitrary topology information associated with the endpoint. These key/value pairs must conform with the label format. https://kubernetes.io/docs/concepts/overview/working-with-objects/labels Topology may include a maximum of 16 key/value pairs. This includes, but is not limited to the following well known keys: * kubernetes.io/hostname: the value indicates the hostname of the node\n where the endpoint is located. This should match the corresponding\n node label.\n* topology.kubernetes.io/zone: the value indicates the zone where the\n endpoint is located. This should match the corresponding node label.\n* topology.kubernetes.io/region: the value indicates the region where the\n endpoint is located. This should match the corresponding node label.\nThis field is deprecated and will be removed in future api versions.",
"nodeName": "nodeName represents the name of the Node hosting this endpoint. This can be used to determine endpoints local to a Node. This field can be enabled with the EndpointSliceNodeName feature gate.",
"nodeName": "nodeName represents the name of the Node hosting this endpoint. This can be used to determine endpoints local to a Node.",
"hints": "hints contains information associated with how an endpoint should be consumed.",
}

@@ -769,8 +769,6 @@ message NetworkPolicyPort {
// should be allowed by the policy. This field cannot be defined if the port field
// is not defined or if the port field is defined as a named (string) port.
// The endPort must be equal or greater than port.
// This feature is in Beta state and is enabled by default.
// It can be disabled using the Feature Gate "NetworkPolicyEndPort".
// +optional
optional int32 endPort = 3;
}

@@ -1498,8 +1498,6 @@ type NetworkPolicyPort struct {
// should be allowed by the policy. This field cannot be defined if the port field
// is not defined or if the port field is defined as a named (string) port.
// The endPort must be equal or greater than port.
// This feature is in Beta state and is enabled by default.
// It can be disabled using the Feature Gate "NetworkPolicyEndPort".
// +optional
EndPort *int32 `json:"endPort,omitempty" protobuf:"bytes,3,opt,name=endPort"`
}

@@ -417,7 +417,7 @@ var map_NetworkPolicyPort = map[string]string{
"": "DEPRECATED 1.9 - This group version of NetworkPolicyPort is deprecated by networking/v1/NetworkPolicyPort.",
"protocol": "Optional. The protocol (TCP, UDP, or SCTP) which traffic must match. If not specified, this field defaults to TCP.",
"port": "The port on the given protocol. This can either be a numerical or named port on a pod. If this field is not provided, this matches all port names and numbers. If present, only traffic on the specified protocol AND port will be matched.",
"endPort": "If set, indicates that the range of ports from port to endPort, inclusive, should be allowed by the policy. This field cannot be defined if the port field is not defined or if the port field is defined as a named (string) port. The endPort must be equal or greater than port. This feature is in Beta state and is enabled by default. It can be disabled using the Feature Gate \"NetworkPolicyEndPort\".",
"endPort": "If set, indicates that the range of ports from port to endPort, inclusive, should be allowed by the policy. This field cannot be defined if the port field is not defined or if the port field is defined as a named (string) port. The endPort must be equal or greater than port.",
}
func (NetworkPolicyPort) SwaggerDoc() map[string]string {
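
The endPort doc string above loses its beta/feature-gate caveat. As an illustration of the field itself, here is a hypothetical port-range rule written against the supported networking/v1 type (this hunk's group version is deprecated in its favour):

package example

import (
	corev1 "k8s.io/api/core/v1"
	networkingv1 "k8s.io/api/networking/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// examplePortRange allows TCP traffic on ports 8080 through 8090 inclusive.
func examplePortRange() networkingv1.NetworkPolicyPort {
	tcp := corev1.ProtocolTCP
	start := intstr.FromInt(8080)
	end := int32(8090)
	return networkingv1.NetworkPolicyPort{
		Protocol: &tcp,
		Port:     &start,
		EndPort:  &end,
	}
}
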

@@ -153,8 +153,8 @@ message LimitResponse {
// LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits.
// It addresses two issues:
// * How are requests for this priority level limited?
// * What should be done with requests that exceed the limit?
// - How are requests for this priority level limited?
// - What should be done with requests that exceed the limit?
message LimitedPriorityLevelConfiguration {
// `assuredConcurrencyShares` (ACS) configures the execution
// limit, which is a limit on the number of requests of this

@@ -415,8 +415,8 @@ const (
// LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits.
// It addresses two issues:
// * How are requests for this priority level limited?
// * What should be done with requests that exceed the limit?
// - How are requests for this priority level limited?
// - What should be done with requests that exceed the limit?
type LimitedPriorityLevelConfiguration struct {
// `assuredConcurrencyShares` (ACS) configures the execution
// limit, which is a limit on the number of requests of this

@@ -111,7 +111,7 @@ func (LimitResponse) SwaggerDoc() map[string]string {
}
var map_LimitedPriorityLevelConfiguration = map[string]string{
"": "LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits. It addresses two issues:\n * How are requests for this priority level limited?\n * What should be done with requests that exceed the limit?",
"": "LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits. It addresses two issues:\n - How are requests for this priority level limited?\n - What should be done with requests that exceed the limit?",
"assuredConcurrencyShares": "`assuredConcurrencyShares` (ACS) configures the execution limit, which is a limit on the number of requests of this priority level that may be exeucting at a given time. ACS must be a positive number. The server's concurrency limit (SCL) is divided among the concurrency-controlled priority levels in proportion to their assured concurrency shares. This produces the assured concurrency value (ACV) ",
"limitResponse": "`limitResponse` indicates what to do with requests that can not be executed right now",
}
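
The reworded comment still poses the same two questions. A sketch, using the flowcontrol/v1beta2 names from this dependency bump and arbitrary numbers, of a configuration that answers both:

package example

import flowcontrolv1beta2 "k8s.io/api/flowcontrol/v1beta2"

// exampleLimited limits the level via assured concurrency shares and queues
// requests that exceed the limit. Illustrative only.
func exampleLimited() flowcontrolv1beta2.LimitedPriorityLevelConfiguration {
	return flowcontrolv1beta2.LimitedPriorityLevelConfiguration{
		AssuredConcurrencyShares: 10, // how requests for this level are limited
		LimitResponse: flowcontrolv1beta2.LimitResponse{ // what happens past the limit
			Type: flowcontrolv1beta2.LimitResponseTypeQueue,
			Queuing: &flowcontrolv1beta2.QueuingConfiguration{
				Queues:           64,
				HandSize:         6,
				QueueLengthLimit: 50,
			},
		},
	}
}
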

@@ -153,8 +153,8 @@ message LimitResponse {
// LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits.
// It addresses two issues:
// * How are requests for this priority level limited?
// * What should be done with requests that exceed the limit?
// - How are requests for this priority level limited?
// - What should be done with requests that exceed the limit?
message LimitedPriorityLevelConfiguration {
// `assuredConcurrencyShares` (ACS) configures the execution
// limit, which is a limit on the number of requests of this

@@ -451,8 +451,8 @@ const (
// LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits.
// It addresses two issues:
// * How are requests for this priority level limited?
// * What should be done with requests that exceed the limit?
// - How are requests for this priority level limited?
// - What should be done with requests that exceed the limit?
type LimitedPriorityLevelConfiguration struct {
// `assuredConcurrencyShares` (ACS) configures the execution
// limit, which is a limit on the number of requests of this

@@ -111,7 +111,7 @@ func (LimitResponse) SwaggerDoc() map[string]string {
}
var map_LimitedPriorityLevelConfiguration = map[string]string{
"": "LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits. It addresses two issues:\n * How are requests for this priority level limited?\n * What should be done with requests that exceed the limit?",
"": "LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits. It addresses two issues:\n - How are requests for this priority level limited?\n - What should be done with requests that exceed the limit?",
"assuredConcurrencyShares": "`assuredConcurrencyShares` (ACS) configures the execution limit, which is a limit on the number of requests of this priority level that may be exeucting at a given time. ACS must be a positive number. The server's concurrency limit (SCL) is divided among the concurrency-controlled priority levels in proportion to their assured concurrency shares. This produces the assured concurrency value (ACV) ",
"limitResponse": "`limitResponse` indicates what to do with requests that can not be executed right now",
}

@@ -153,8 +153,8 @@ message LimitResponse {
// LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits.
// It addresses two issues:
// * How are requests for this priority level limited?
// * What should be done with requests that exceed the limit?
// - How are requests for this priority level limited?
// - What should be done with requests that exceed the limit?
message LimitedPriorityLevelConfiguration {
// `assuredConcurrencyShares` (ACS) configures the execution
// limit, which is a limit on the number of requests of this

@@ -447,8 +447,8 @@ const (
// LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits.
// It addresses two issues:
// * How are requests for this priority level limited?
// * What should be done with requests that exceed the limit?
// - How are requests for this priority level limited?
// - What should be done with requests that exceed the limit?
type LimitedPriorityLevelConfiguration struct {
// `assuredConcurrencyShares` (ACS) configures the execution
// limit, which is a limit on the number of requests of this

@@ -111,7 +111,7 @@ func (LimitResponse) SwaggerDoc() map[string]string {
}
var map_LimitedPriorityLevelConfiguration = map[string]string{
"": "LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits. It addresses two issues:\n * How are requests for this priority level limited?\n * What should be done with requests that exceed the limit?",
"": "LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits. It addresses two issues:\n - How are requests for this priority level limited?\n - What should be done with requests that exceed the limit?",
"assuredConcurrencyShares": "`assuredConcurrencyShares` (ACS) configures the execution limit, which is a limit on the number of requests of this priority level that may be exeucting at a given time. ACS must be a positive number. The server's concurrency limit (SCL) is divided among the concurrency-controlled priority levels in proportion to their assured concurrency shares. This produces the assured concurrency value (ACV) ",
"limitResponse": "`limitResponse` indicates what to do with requests that can not be executed right now",
}

@@ -265,16 +265,16 @@ message IngressServiceBackend {
// IngressSpec describes the Ingress the user wishes to exist.
message IngressSpec {
// IngressClassName is the name of the IngressClass cluster resource. The
// associated IngressClass defines which controller will implement the
// resource. This replaces the deprecated `kubernetes.io/ingress.class`
// annotation. For backwards compatibility, when that annotation is set, it
// must be given precedence over this field. The controller may emit a
// warning if the field and annotation have different values.
// Implementations of this API should ignore Ingresses without a class
// specified. An IngressClass resource may be marked as default, which can
// be used to set a default value for this field. For more information,
// refer to the IngressClass documentation.
// IngressClassName is the name of an IngressClass cluster resource. Ingress
// controller implementations use this field to know whether they should be
// serving this Ingress resource, by a transitive connection
// (controller -> IngressClass -> Ingress resource). Although the
// `kubernetes.io/ingress.class` annotation (simple constant name) was never
// formally defined, it was widely supported by Ingress controllers to create
// a direct binding between Ingress controller and Ingress resources. Newly
// created Ingress resources should prefer using the field. However, even
// though the annotation is officially deprecated, for backwards compatibility
// reasons, ingress controllers should still honor that annotation if present.
// +optional
optional string ingressClassName = 4;

@@ -441,8 +441,6 @@ message NetworkPolicyPort {
// should be allowed by the policy. This field cannot be defined if the port field
// is not defined or if the port field is defined as a named (string) port.
// The endPort must be equal or greater than port.
// This feature is in Beta state and is enabled by default.
// It can be disabled using the Feature Gate "NetworkPolicyEndPort".
// +optional
optional int32 endPort = 3;
}

@@ -158,8 +158,6 @@ type NetworkPolicyPort struct {
// should be allowed by the policy. This field cannot be defined if the port field
// is not defined or if the port field is defined as a named (string) port.
// The endPort must be equal or greater than port.
// This feature is in Beta state and is enabled by default.
// It can be disabled using the Feature Gate "NetworkPolicyEndPort".
// +optional
EndPort *int32 `json:"endPort,omitempty" protobuf:"bytes,3,opt,name=endPort"`
}

@@ -302,16 +300,16 @@ type IngressList struct {
// IngressSpec describes the Ingress the user wishes to exist.
type IngressSpec struct {
// IngressClassName is the name of the IngressClass cluster resource. The
// associated IngressClass defines which controller will implement the
// resource. This replaces the deprecated `kubernetes.io/ingress.class`
// annotation. For backwards compatibility, when that annotation is set, it
// must be given precedence over this field. The controller may emit a
// warning if the field and annotation have different values.
// Implementations of this API should ignore Ingresses without a class
// specified. An IngressClass resource may be marked as default, which can
// be used to set a default value for this field. For more information,
// refer to the IngressClass documentation.
// IngressClassName is the name of an IngressClass cluster resource. Ingress
// controller implementations use this field to know whether they should be
// serving this Ingress resource, by a transitive connection
// (controller -> IngressClass -> Ingress resource). Although the
// `kubernetes.io/ingress.class` annotation (simple constant name) was never
// formally defined, it was widely supported by Ingress controllers to create
// a direct binding between Ingress controller and Ingress resources. Newly
// created Ingress resources should prefer using the field. However, even
// though the annotation is officially deprecated, for backwards compatibility
// reasons, ingress controllers should still honor that annotation if present.
// +optional
IngressClassName *string `json:"ingressClassName,omitempty" protobuf:"bytes,4,opt,name=ingressClassName"`

@@ -564,7 +562,7 @@ const (
// IngressClassParametersReferenceScopeNamespace indicates that the
// referenced Parameters resource is namespace-scoped.
IngressClassParametersReferenceScopeNamespace = "Namespace"
// IngressClassParametersReferenceScopeNamespace indicates that the
// IngressClassParametersReferenceScopeCluster indicates that the
// referenced Parameters resource is cluster-scoped.
IngressClassParametersReferenceScopeCluster = "Cluster"
)

@@ -160,7 +160,7 @@ func (IngressServiceBackend) SwaggerDoc() map[string]string {
var map_IngressSpec = map[string]string{
"": "IngressSpec describes the Ingress the user wishes to exist.",
"ingressClassName": "IngressClassName is the name of the IngressClass cluster resource. The associated IngressClass defines which controller will implement the resource. This replaces the deprecated `kubernetes.io/ingress.class` annotation. For backwards compatibility, when that annotation is set, it must be given precedence over this field. The controller may emit a warning if the field and annotation have different values. Implementations of this API should ignore Ingresses without a class specified. An IngressClass resource may be marked as default, which can be used to set a default value for this field. For more information, refer to the IngressClass documentation.",
"ingressClassName": "IngressClassName is the name of an IngressClass cluster resource. Ingress controller implementations use this field to know whether they should be serving this Ingress resource, by a transitive connection (controller -> IngressClass -> Ingress resource). Although the `kubernetes.io/ingress.class` annotation (simple constant name) was never formally defined, it was widely supported by Ingress controllers to create a direct binding between Ingress controller and Ingress resources. Newly created Ingress resources should prefer using the field. However, even though the annotation is officially deprecated, for backwards compatibility reasons, ingress controllers should still honor that annotation if present.",
"defaultBackend": "DefaultBackend is the backend that should handle requests that don't match any rule. If Rules are not specified, DefaultBackend must be specified. If DefaultBackend is not set, the handling of requests that do not match any of the rules will be up to the Ingress controller.",
"tls": "TLS configuration. Currently the Ingress only supports a single TLS port, 443. If multiple members of this list specify different hosts, they will be multiplexed on the same port according to the hostname specified through the SNI TLS extension, if the ingress controller fulfilling the ingress supports SNI.",
"rules": "A list of host rules used to configure the Ingress. If unspecified, or no rule matches, all traffic is sent to the default backend.",

@@ -245,7 +245,7 @@ var map_NetworkPolicyPort = map[string]string{
"": "NetworkPolicyPort describes a port to allow traffic on",
"protocol": "The protocol (TCP, UDP, or SCTP) which traffic must match. If not specified, this field defaults to TCP.",
"port": "The port on the given protocol. This can either be a numerical or named port on a pod. If this field is not provided, this matches all port names and numbers. If present, only traffic on the specified protocol AND port will be matched.",
"endPort": "If set, indicates that the range of ports from port to endPort, inclusive, should be allowed by the policy. This field cannot be defined if the port field is not defined or if the port field is defined as a named (string) port. The endPort must be equal or greater than port. This feature is in Beta state and is enabled by default. It can be disabled using the Feature Gate \"NetworkPolicyEndPort\".",
"endPort": "If set, indicates that the range of ports from port to endPort, inclusive, should be allowed by the policy. This field cannot be defined if the port field is not defined or if the port field is defined as a named (string) port. The endPort must be equal or greater than port.",
}
func (NetworkPolicyPort) SwaggerDoc() map[string]string {