fix typos

parent 5a78c6867b
commit cbe3eb6d8e

@@ -380,7 +380,7 @@
* GitHub Groups [Jorge Castro]
    * [https://github.com/kubernetes/community/issues/2323](https://github.com/kubernetes/community/issues/2323) working to make the current 303 groups in the org easier to manage
* Shoutouts this week (Check in #shoutouts on Slack)
    * jberkus: To Jordan Liggitt for diagnosing & fixing the controller performance issue that has haunted us since last August, and to Julia Evans for reporting the original issue.
    * Maulion: And another to @liggitt for always helping anyone with an auth question in all the channels with kindness
    * jdumars: @paris - thank you for all of your work helping to keep our community safe and inclusive! I know that you've spent countless hours refining our Zoom usage, documenting, testing, and generally being super proactive on this.
    * Nikhita: shoutout to @cblecker for excellent meme skills!

@@ -612,7 +612,7 @@
* GitHub: [https://github.com/YugaByte/yugabyte-db](https://github.com/YugaByte/yugabyte-db)
* Docs: [https://docs.yugabyte.com/](https://docs.yugabyte.com/)
* Slides: https://www.slideshare.net/YugaByte
* Yugabyte is a database focusing on planet scale, transactions, and high availability. It implements many common database APIs, making it a drop-in replacement for those DBs. It can run as a StatefulSet on k8s, and multiple DB API paradigms can be used against one database.
* No Kubernetes operator yet, but it's in progress.
* Answers from Q&A:
    * @jberkus - For q1 - YB is optimized for small reads and writes, but can also perform batch reads and writes efficiently - mostly oriented towards modern OLTP/user-facing applications. An example is using Spark or Presto on top for use cases like IoT, fraud detection, alerting, user personalization, etc.

@@ -1076,7 +1076,7 @@
* 35k users with 5k weekly active users
* Produced Quarterly
* **SIG Updates:**
    * **Thanks to test infra folks for labels**
    * **Cluster Lifecycle [Tim St. Clair]**
        * Kubeadm
            * Steadily burning down against 1.11

@@ -1223,7 +1223,7 @@
* Support for kubeadm and minikube
* Create issues on the CRI-O project on GitHub
* sig-node does not have plans to choose one yet
* Working on conformance to address implementations, which should lead to choosing a default implementation
* Choice is important since it would be used in scalability testing
* Test data? Plan to publish results to testgrid, will supply results ASAP
* Previously blocked on a dashboard issue

@@ -1322,7 +1322,7 @@
* creating a Docker registry and Helm repos, pushing a Helm chart
* CLI and web UI
* Caching upstream repositories
* Walkthrough and Example: [https://jfrog.com/blog/control-your-kubernetes-voyage-with-artifactory/](https://jfrog.com/blog/control-your-kubernetes-voyage-with-artifactory/) & [https://github.com/jfrogtraining/kubernetes_example](https://github.com/jfrogtraining/kubernetes_example)
* Questions
    * Difference between commercial and free (and what's the cost)?
        * Free only has Maven support and is open source; commercial supports everything (including Kubernetes-related technologies, like Helm)

@@ -2076,11 +2076,11 @@
* contributing tests
* cleaning up tests
* what things are tested
* e2e framework
* Conformance
* Please come participate
* Kubernetes Documentation [User Journeys MVP](https://kubernetes.io/docs/home/) launched [Andrew Chen]
    * Please give SIG Docs feedback; things are still being added
    * Can contribute normally (join SIG Docs for more information)
    * New landing page incorporating personas (users, contributors, operators)
    * Levels of knowledge (foundational, advanced, etc.)

@@ -89,7 +89,7 @@ Implementations that cannot offer consistent ranging (returning a set of results
|
#### etcd3

For etcd3 the continue token would contain a resource version (the snapshot that we are reading, consistent across the entire LIST) and the start key for the next set of results. Upon receiving a valid continue token, the apiserver would instruct etcd3 to retrieve the set of results at the given resource version, beginning at the provided start key and limited by the maximum number of results encoded in the continue token (or optionally, by a different limit specified by the client). If more results remain after reading up to the limit, the storage should calculate a continue token that begins at the next possible key, and set that continue token on the returned list.
|
The storage layer in the apiserver must apply consistency checking to the provided continue token to ensure that malicious users cannot trick the server into serving results outside of its range. The storage layer must perform defensive checking on the provided value, check for path traversal attacks, and have stable versioning for the continue token.
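To make the token shape and these checks concrete, here is a minimal sketch in Go. The JSON/base64 encoding, field names, and exact validation rules are illustrative assumptions, not the proposal's wire format.

```go
package storage

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
)

// continueToken is a hypothetical encoding of the state described above:
// the resource version the LIST is served at and the key at which the
// next page starts. The APIVersion field gives the format the stable
// versioning the text calls for.
type continueToken struct {
	APIVersion      int    `json:"v"`
	ResourceVersion int64  `json:"rv"`
	StartKey        string `json:"start"`
}

func encodeContinue(rv int64, startKey string) (string, error) {
	b, err := json.Marshal(continueToken{APIVersion: 1, ResourceVersion: rv, StartKey: startKey})
	if err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(b), nil
}

// decodeContinue performs the defensive checks described above: it rejects
// unknown token versions, non-positive resource versions, and start keys
// that attempt to traverse outside the collection's key prefix.
func decodeContinue(token, keyPrefix string) (rv int64, startKey string, err error) {
	b, err := base64.RawURLEncoding.DecodeString(token)
	if err != nil {
		return 0, "", fmt.Errorf("invalid continue token: %v", err)
	}
	var t continueToken
	if err := json.Unmarshal(b, &t); err != nil {
		return 0, "", fmt.Errorf("invalid continue token: %v", err)
	}
	switch {
	case t.APIVersion != 1:
		return 0, "", fmt.Errorf("unsupported continue token version %d", t.APIVersion)
	case t.ResourceVersion <= 0:
		return 0, "", fmt.Errorf("continue token resource version must be positive")
	case strings.Contains(t.StartKey, "..") || strings.HasPrefix(t.StartKey, "/"):
		return 0, "", fmt.Errorf("continue token start key is not a valid relative key")
	}
	// The caller serves the next page from keyPrefix+startKey at rv.
	return t.ResourceVersion, keyPrefix + t.StartKey, nil
}
```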

@@ -35,7 +35,7 @@ while
|
## Constraints and Assumptions

* it is not the goal to implement all output formats one can imagine. The main goal is to be extensible with a clear golang interface (a sketch of such an interface follows below). Implementations of e.g. CADF must be possible, but won't be discussed here.
* dynamic loading of backends for new output formats is out of scope.
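As an illustration of the kind of extensibility meant here, a minimal sketch of what such a golang interface could look like; the names are assumptions for this sketch, not the proposal's final API.

```go
package audit

// Event stands in for the audit event type defined elsewhere in the
// proposal; only the shape of the extension point matters here.
type Event struct {
	Verb     string
	User     string
	Resource string
}

// Backend is a hypothetical output extension point: each output format
// (log file, webhook, CADF, ...) would implement this interface and be
// registered with the audit pipeline at startup.
type Backend interface {
	// ProcessEvents receives a batch of audit events and must not block
	// the request path; buffering is the backend's responsibility.
	ProcessEvents(events ...*Event)
	// Run starts any background workers and stops when stopCh is closed.
	Run(stopCh <-chan struct{}) error
}
```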
|
## Use Cases

@@ -12,7 +12,7 @@ Thanks: @dbsmith, @deads2k, @sttts, @liggit, @enisoc

### Summary

This document proposes a detailed plan for adding support for version-conversion of Kubernetes resources defined via Custom Resource Definitions (CRDs). The API Server is extended to call out to a webhook at appropriate points in the handler stack for CRDs.
No new resources are added; the [CRD resource](https://github.com/kubernetes/kubernetes/blob/34383aa0a49ab916d74ea897cebc79ce0acfc9dd/staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/types.go#L187) is extended to include conversion information as well as multiple schema definitions, one for each apiVersion that is to be served.
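For illustration, a minimal sketch of the webhook side of such a call-out, assuming a request/response shape analogous to admission webhooks. The type names, fields, and the /convert path are assumptions for this sketch, not the API the proposal defines.

```go
package main

import (
	"encoding/json"
	"net/http"
)

// ConversionRequest and ConversionResponse are hypothetical payloads: the
// apiserver sends objects at their stored version plus the desired version,
// and the webhook returns the converted objects.
type ConversionRequest struct {
	DesiredAPIVersion string            `json:"desiredAPIVersion"`
	Objects           []json.RawMessage `json:"objects"`
}

type ConversionResponse struct {
	ConvertedObjects []json.RawMessage `json:"convertedObjects"`
	Error            string            `json:"error,omitempty"`
}

// serveConvert converts each object to the desired apiVersion. A real
// webhook would transform field contents; this sketch only rewrites the
// apiVersion field to show where conversion logic plugs in.
func serveConvert(w http.ResponseWriter, r *http.Request) {
	var req ConversionRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	resp := ConversionResponse{}
	for _, raw := range req.Objects {
		var obj map[string]interface{}
		if err := json.Unmarshal(raw, &obj); err != nil {
			resp.Error = err.Error()
			break
		}
		obj["apiVersion"] = req.DesiredAPIVersion // field-level conversion would happen here
		converted, _ := json.Marshal(obj)
		resp.ConvertedObjects = append(resp.ConvertedObjects, converted)
	}
	json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/convert", serveConvert)
	// A conversion webhook would be served over TLS in a real deployment.
	http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil)
}
```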

@@ -20,7 +20,7 @@ admission controller that uses code, rather than configuration, to map the
resource requests and limits of a pod to QoS, and attaches the corresponding
annotation.)
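A rough sketch of that mapping, assuming the simplified rules (limits equal to requests for every resource means Guaranteed, nothing set means BestEffort, anything in between means Burstable); the real admission controller operates on full PodSpecs, and the authoritative rules live in the resource QoS design.

```go
package qospolicy

// QoSClass mirrors the pod QoS tiers the annotation would carry.
type QoSClass string

const (
	Guaranteed QoSClass = "Guaranteed"
	Burstable  QoSClass = "Burstable"
	BestEffort QoSClass = "BestEffort"
)

// qosForPod is a simplified stand-in for the admission controller's code:
// limits equal to requests for every resource => Guaranteed; nothing set
// => BestEffort; anything in between => Burstable.
func qosForPod(requests, limits map[string]string) QoSClass {
	if len(requests) == 0 && len(limits) == 0 {
		return BestEffort
	}
	if len(limits) > 0 && equal(requests, limits) {
		return Guaranteed
	}
	return Burstable
}

func equal(a, b map[string]string) bool {
	if len(a) != len(b) {
		return false
	}
	for k, v := range a {
		if b[k] != v {
			return false
		}
	}
	return true
}
```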

We anticipate a number of other uses for `MetadataPolicy`, such as defaulting
for labels and annotations, prohibiting/requiring particular labels or
annotations, or choosing a scheduling policy within a scheduler. We do not
discuss them in this doc.
|
@@ -267,7 +267,7 @@ ControllerRevisions, this approach is reasonable.
- A revision is considered to be live while any generated Object labeled
  with its `.Name` is live.
- This method has the benefit of providing visibility, via the label, to
  users with respect to the historical provenance of a generated Object.
- The primary drawback is the lack of support for using garbage collection
  to ensure that only non-live version snapshots are collected.
1. Controllers may also use the `OwnerReferences` field of the
|
@@ -61,7 +61,7 @@ think about it.
about uniqueness, just labeling for user's own reasons.
- Defaulting logic sets `job.spec.selector` to
  `matchLabels["controller-uid"]="$UIDOFJOB"`
- Defaulting logic appends 2 labels to the `.spec.template.metadata.labels`.
  - The first label is controller-uid=$UIDOFJOB.
  - The second label is "job-name=$NAMEOFJOB".
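A minimal sketch of the defaulting just described; this is illustrative Go over bare maps, whereas the real defaulting operates on the generated Job API types.

```go
package jobdefaults

// defaultSelectorAndLabels sketches the defaulting described above: the
// job's selector matches on its UID, and the controller-uid and job-name
// labels are appended to the pod template's labels.
func defaultSelectorAndLabels(jobName, jobUID string, templateLabels map[string]string) (selector, labels map[string]string) {
	selector = map[string]string{"controller-uid": jobUID}

	labels = map[string]string{}
	for k, v := range templateLabels {
		labels[k] = v // preserve the user's own labels
	}
	labels["controller-uid"] = jobUID // ties pods to this exact Job instance
	labels["job-name"] = jobName      // human-friendly handle for selection
	return selector, labels
}
```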
|
@@ -304,7 +304,7 @@ as follows.
should be consistent with the version indicated by `Status.UpdateRevision`.
1. If the Pod does not meet either of the prior two conditions, and if
   ordinal is in the sequence `[0, .Spec.UpdateStrategy.Partition.Ordinal)`,
   it should be consistent with the version indicated by
   `Status.CurrentRevision`.
1. Otherwise, the Pod should be consistent with the version indicated
   by `Status.UpdateRevision`.
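The partition rule above reduces to a small function. A sketch, using bare parameters rather than the full StatefulSet types:

```go
package statefulset

// targetRevision sketches the partition rule above: pods whose ordinal
// falls in [0, partitionOrdinal) stay at the current revision; all other
// pods are moved to the update revision.
func targetRevision(ordinal, partitionOrdinal int, currentRevision, updateRevision string) string {
	if ordinal >= 0 && ordinal < partitionOrdinal {
		return currentRevision
	}
	return updateRevision
}
```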

@@ -446,7 +446,7 @@ object if any of the following conditions are true.
1. `.Status.UpdateReplicas` is negative or greater than `.Status.Replicas`.

## Kubectl

Kubectl will use the `rollout` command to control and provide the status of
StatefulSet updates.

- `kubectl rollout status statefulset <StatefulSet-Name>`: displays the status

@@ -648,7 +648,7 @@ spec:
### Phased Roll Outs

Users can create a canary using `kubectl apply`. The only difference between a
[canary](#canaries) and a phased roll out is that the
`.Spec.UpdateStrategy.Partition.Ordinal` is set to a value less than
`.Spec.Replicas-1`.

```yaml

@@ -810,7 +810,7 @@ intermittent compaction as a form of garbage collection. Applications that use
log structured merge trees with size tiered compaction (e.g. Cassandra) or append
only B(+/*) Trees (e.g. Couchbase) can temporarily double their storage requirement
during compaction. If there is insufficient space for compaction
to progress, these applications will either fail or degrade until
additional capacity is added. While there are valid manual workarounds to
expand the size of a PD if the user is using AWS EBS or GCE PD, it would be
useful to automate the resize via updates to the StatefulSet's
|
@@ -49,7 +49,7 @@ while creating containers, for example
`docker run --security-opt=no_new_privs busybox`.

Docker provides, via its Go API, an object named `ContainerCreateConfig` to
configure container creation parameters. In this object, there is a string
array `HostConfig.SecurityOpt` to specify the security options. Clients can
utilize this field to specify the arguments for security options while
creating new containers.
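For illustration, a minimal sketch of populating that field using Docker's Go types. Note the option spelling is an assumption to verify against the Docker version in use: recent Docker releases spell it `no-new-privileges`.

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/container"
)

func main() {
	// HostConfig.SecurityOpt is the string array mentioned above; each
	// entry corresponds to one --security-opt flag on `docker run`.
	hostConfig := &container.HostConfig{
		SecurityOpt: []string{"no-new-privileges"},
	}
	// A client would pass hostConfig to ContainerCreate alongside the
	// container.Config describing the image and command.
	fmt.Println(hostConfig.SecurityOpt)
}
```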
|
@@ -42,7 +42,7 @@ containers.

In order to support external integration with shared storage, processes running
in a Kubernetes cluster should be able to be uniquely identified by their Unix
UID, such that a chain of ownership can be established. Processes in pods will
need to have consistent UID/GID/SELinux category labels in order to access
shared disks.
|
@@ -69,7 +69,7 @@ About 60% of scalability regressions are caught by these medium-scale jobs ([sou
|
### Testing / Post-submit phase

This phase constitutes the final layer of protection against regressions before cutting the release. We already have scalability CI jobs in place for this. The spectrum of scale they cover is quite wide, ranging from 100-node to 5000-node clusters (both for kubemark and real clusters). However, what we need additionally is:

The ability for crucial scalability jobs to block submit-queue (with manual unblock ability)\
([relevant feature request](https://github.com/kubernetes/kubernetes/issues/53255))\