diff --git a/contributors/design-proposals/auth/image-provenance.md b/contributors/design-proposals/auth/image-provenance.md
index 7a5580d9f..d80ec6841 100644
--- a/contributors/design-proposals/auth/image-provenance.md
+++ b/contributors/design-proposals/auth/image-provenance.md
@@ -322,7 +322,7 @@ It will not be a generic webhook. A generic webhook would need a lot more discus
 Additionally, just sending all the fields of just the Pod kind also has problems:
 - it exposes our whole API to a webhook backend without giving us (the project) any chance to review or understand how it is being used.
 - because we do not know which fields of an object are inspected by the backend, caching of decisions is not effective. Sending fewer fields allows caching.
-- sending fewer fields makes it possible to rev the version of the webhook request slower than the version of our internal obejcts (e.g. pod v2 could still use imageReview v1.)
+- sending fewer fields makes it possible to rev the version of the webhook request slower than the version of our internal objects (e.g. pod v2 could still use imageReview v1.)
 
 probably lots more reasons.
 
diff --git a/contributors/design-proposals/auth/no-new-privs.md b/contributors/design-proposals/auth/no-new-privs.md
index f764e399f..b467c35d4 100644
--- a/contributors/design-proposals/auth/no-new-privs.md
+++ b/contributors/design-proposals/auth/no-new-privs.md
@@ -24,7 +24,7 @@ is inherited across `fork`, `clone` and `execve` and can not be unset. With
 that could not have been done without the `execve` call.
 
 For more details about `no_new_privs`, please check the
-[Linux kernel documention](https://www.kernel.org/doc/Documentation/prctl/no_new_privs.txt).
+[Linux kernel documentation](https://www.kernel.org/doc/Documentation/prctl/no_new_privs.txt).
 
 This is different from `NOSUID` in that `no_new_privs` can give permission
 to the container process to further restrict child processes with seccomp.
diff --git a/contributors/design-proposals/autoscaling/hpa-status-conditions.md b/contributors/design-proposals/autoscaling/hpa-status-conditions.md
index efb5ad0bd..d33545826 100644
--- a/contributors/design-proposals/autoscaling/hpa-status-conditions.md
+++ b/contributors/design-proposals/autoscaling/hpa-status-conditions.md
@@ -2,7 +2,7 @@ Horizontal Pod Autoscaler Status Conditions
 ===========================================
 
 Currently, the HPA status conveys the last scale time, current and desired
-replacas, and the last-retrieved values of the metrics used to autoscale.
+replicas, and the last-retrieved values of the metrics used to autoscale.
 
 However, the status field conveys no information about whether or not the
 HPA controller encountered difficulties while attempting to fetch metrics,
@@ -77,7 +77,7 @@ entirely.
 - *FailedRescale*: a scale update was needed and the HPA controller was
   unable to actually update the scale subresource of the target scalable.
 
-- *SuccesfulRescale*: a scale update was needed and everything went
+- *SuccessfulRescale*: a scale update was needed and everything went
   properly.
 
 - *FailedUpdateStatus*: the HPA controller failed to update the status of