Issue tracker and mirror of kubectl code
Tatsuhiro Tsujikawa e7bb62301f Restore the ability to `kubectl apply --prune` without -n flag
Before https://github.com/kubernetes/kubernetes/pull/83084, `kubectl
apply --prune` could prune resources in all namespaces specified in
the config files.  After that PR was merged, only a single namespace is
considered for pruning.  That is fine when the namespace is explicitly
specified with the --namespace option, but the PR also falls back to the
default namespace (or the one from kubeconfig) when no namespace is
given on the command line.  That breaks the existing usage of
`kubectl apply --prune` without the --namespace option.  Because no
error is reported when --namespace is omitted, nobody notices the
problem unless they actually verify that pruning happens.  This
regression also prevents resources spread across multiple namespaces
in a config file from being pruned.

kubectl 1.16 does not have this bug.  Let's look at the difference
between kubectl 1.16 and kubectl 1.17.  Suppose we have the following
config file:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: foo
  namespace: a
  labels:
    pl: foo
data:
  foo: bar
---
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: bar
  namespace: a
  labels:
    pl: foo
data:
  foo: bar
```

Apply it with `kubectl apply -f file`.  Then comment out the ConfigMap
foo in this file.  kubectl 1.16 prunes ConfigMap foo with the
following command:

```console
$ kubectl-1.16 apply -f file -l pl=foo --prune
configmap/bar configured
configmap/foo pruned
```

But kubectl 1.17 does not prune ConfigMap foo with the same command:

```console
$ kubectl-1.17 apply -f file -l pl=foo --prune
configmap/bar configured
```

With this patch, kubectl can once again prune the resources as before.
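As a hedged illustration of the impact (the namespaces `a` and `b` and the file name `file` are assumptions carried over from the example above), a user on an unpatched kubectl 1.17 could only approximate the old behavior by running the apply once per namespace with an explicit -n flag:

```shell
# Hypothetical workaround for kubectl 1.17 without this patch:
# run apply --prune once per namespace, passing -n explicitly so
# that namespace is the one considered for pruning.  The namespace
# list here is an assumption for illustration, not from the repo.
for ns in a b; do
  kubectl apply -f file -l pl=foo --prune -n "$ns"
done
```

This requires keeping an external list of every namespace the config file touches, which is exactly the bookkeeping the pre-#83084 behavior made unnecessary.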

Kubernetes-commit: 7af3b01f24edfde34e42640ee565a5a6bb66ce26
2020-03-27 01:11:36 +00:00

README.md

Kubectl

kubectl logo


The k8s.io/kubectl repo is used to track issues for the kubectl CLI distributed with k8s.io/kubernetes. It also contains packages intended for use by client programs. For example, these packages are vendored into k8s.io/kubernetes for use in the kubectl CLI. That client will eventually move here too.

Contribution Requirements

  • Full unit-test coverage.

  • Go tools compliant (go get, go test, etc.); the packages must be vendorable from other repositories.

  • No dependence on k8s.io/kubernetes. Dependence on other repositories is fine.

  • Code must be usefully commented. Not only for developers on the project, but also for external users of these packages.

  • When reviewing PRs, you are encouraged to use Golang's code review comments page.

  • Packages in this repository should aspire to implement sensible, small interfaces and import a limited set of dependencies.
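To make the vendoring point concrete, a client program can depend on these packages through an ordinary go.mod. This is only a sketch: the module path and version below are illustrative assumptions, not taken from this repository.

```
// go.mod for a hypothetical client program (example.com/myclient
// and the version are illustrative assumptions).
module example.com/myclient

go 1.13

require k8s.io/kubectl v0.18.0
```

Note the absence of a k8s.io/kubernetes requirement; depending only on k8s.io/kubectl (and its own dependencies) is what the "no dependence on k8s.io/kubernetes" rule above makes possible.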

Community, discussion, contribution, and support

See this document for how to reach the maintainers of this project.

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.