doc: fix some typo
commit ec3d22e7c5
parent f2ad4746f7
@@ -130,7 +130,7 @@ GET /api/v1/pods?limit=500&continue=DEF...
 Some clients may wish to follow a failed paged list with a full list attempt.
 
-The 5 minute default compaction interval for etcd3 bounds how long a list can run. Since clients may wish to perform processing over very large sets, increasing that timeout may make sense for large clusters. It should be possible to alter the interval at which compaction runs to accomodate larger clusters.
+The 5 minute default compaction interval for etcd3 bounds how long a list can run. Since clients may wish to perform processing over very large sets, increasing that timeout may make sense for large clusters. It should be possible to alter the interval at which compaction runs to accommodate larger clusters.
 
 #### Types of clients and impact
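As a side note on the API chunking discussed in this hunk, a minimal client-go sketch of the limit/continue flow might look like the following; the context-taking List signature and the helper name are assumptions, not part of the proposal:

```go
// Hypothetical helper showing paged LIST calls with limit/continue, matching
// the GET /api/v1/pods?limit=500&continue=... flow referenced above.
package pagingsketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listAllPods pages through pods 500 at a time until the continue token is empty.
func listAllPods(ctx context.Context, cs kubernetes.Interface) error {
	continueToken := ""
	for {
		pods, err := cs.CoreV1().Pods("").List(ctx, metav1.ListOptions{
			Limit:    500,
			Continue: continueToken,
		})
		if err != nil {
			// An expired continue token (e.g. after etcd compaction) surfaces as
			// "410 Gone"; this is where a client may fall back to a full,
			// unpaged list as the text above describes.
			return fmt.Errorf("paged list failed: %w", err)
		}
		if pods.Continue == "" {
			return nil
		}
		continueToken = pods.Continue
	}
}
```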
@@ -160,7 +160,7 @@ type ExecAuthProviderConfig struct {
 	// to pass argument to the plugin.
 	Env []ExecEnvVar `json:"env"`
 
-	// Prefered input version of the ExecInfo. The returned ExecCredentials MUST use
+	// Preferred input version of the ExecInfo. The returned ExecCredentials MUST use
 	// the same encoding version as the input.
 	APIVersion string `json:"apiVersion,omitempty"`
@@ -336,7 +336,7 @@ type VerticalPodAutoscalerStatus {
 	StatusMessage string
 }
 
-// UpdateMode controls when autoscaler applies changes to the pod resoures.
+// UpdateMode controls when autoscaler applies changes to the pod resources.
 type UpdateMode string
 const (
 	// UpdateModeOff means that autoscaler never changes Pod resources.
@@ -354,7 +354,7 @@ const (
 
 // PodUpdatePolicy describes the rules on how changes are applied to the pods.
 type PodUpdatePolicy struct {
-	// Controls when autoscaler applies changes to the pod resoures.
+	// Controls when autoscaler applies changes to the pod resources.
 	// +optional
 	UpdateMode UpdateMode
 }
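To make the `PodUpdatePolicy` hunk above concrete, here is a hypothetical sketch of how a component might gate updates on `UpdateMode`; only `UpdateModeOff` appears in the diff, so the second constant and the helper are assumptions:

```go
package vpasketch

// UpdateMode and PodUpdatePolicy are reproduced (trimmed) from the proposal text above.
type UpdateMode string

type PodUpdatePolicy struct {
	UpdateMode UpdateMode
}

const (
	// UpdateModeOff means that autoscaler never changes Pod resources.
	UpdateModeOff UpdateMode = "Off"
	// UpdateModeAuto is an assumed companion mode in which the autoscaler may
	// apply its recommendations; the proposal defines the authoritative set of modes.
	UpdateModeAuto UpdateMode = "Auto"
)

// shouldApplyRecommendation is a hypothetical helper showing how an updater
// might gate changes on the policy; the nil-policy default shown here is a
// placeholder, not the proposal's defaulting rule.
func shouldApplyRecommendation(p *PodUpdatePolicy) bool {
	if p == nil {
		return true
	}
	return p.UpdateMode != UpdateModeOff
}
```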
@@ -359,7 +359,7 @@ There's ongoing effort for adding Event deduplication and teeing to the server s
 Another effort to protect API server from too many Events by dropping requests servers side in admission plugin is worked on by @staebler.
 ## Considered alternatives for API changes
 ### Leaving current dedup mechanism but improve backoff behavior
-As we're going to move all semantic informations to fields, instead of passing some of them in message, we could just call it a day, and leave the deduplication logic as is. When doing that we'd need to depend on the client-recorder library on protecting API server, by using number of techniques, like batching, aggressive backing off and allowing admin to reduce number of Events emitted by the system. This solution wouldn't drastically reduce number of API requests and we'd need to hope that small incremental changes would be enough.
+As we're going to move all semantic information to fields, instead of passing some of them in message, we could just call it a day, and leave the deduplication logic as is. When doing that we'd need to depend on the client-recorder library on protecting API server, by using number of techniques, like batching, aggressive backing off and allowing admin to reduce number of Events emitted by the system. This solution wouldn't drastically reduce number of API requests and we'd need to hope that small incremental changes would be enough.
 
 ### Timestamp list as a dedup mechanism
 Another considered solution was to store timestamps of Events explicitly instead of only count. This gives users more information, as people complain that current dedup logic is too strong and it's hard to "decompress" Event if needed. This change has clearly worse performance characteristic, but fixes the problem of "decompressing" Events and generally making deduplication lossless. We believe that individual repeated events are not interesting per se, what's interesting is when given series started and when it finished, which is how we ended with the current proposal.
@@ -78,7 +78,7 @@ horizontally, though it’s rather complicated and is out of the scope of this d
 
 Metrics server will be Kubernetes addon, create by kube-up script and managed by
 [addon-manager](https://git.k8s.io/kubernetes/cluster/addons/addon-manager).
-Since there is a number of dependant components, it will be marked as a critical addon.
+Since there is a number of dependent components, it will be marked as a critical addon.
 In the future when the priority/preemption feature is introduced we will migrate to use this
 proper mechanism for marking it as a high-priority, system component.
@@ -77,5 +77,5 @@ The logic to determine if an object is sent to a Federated Cluster will have two
 
 ## Open Questions
 
-1. Should there be any special considerations for when dependant resources would not be forwarded together to a Federated Cluster.
+1. Should there be any special considerations for when dependent resources would not be forwarded together to a Federated Cluster.
 1. How to improve usability of this feature long term. It will certainly help to give first class API support but easier ways to map labels or requirements to objects may be required.
@@ -335,7 +335,7 @@ only supports a simple list of acceptable clusters. Workloads will be
 evenly distributed on these acceptable clusters in phase one. After
 phase one we will define syntax to represent more advanced
 constraints, like cluster preference ordering, desired number of
-splitted workloads, desired ratio of workloads spread on different
+split workloads, desired ratio of workloads spread on different
 clusters, etc.
 
 Besides this explicit “clusterSelector” filter, a workload may have
@@ -5,7 +5,7 @@
 **Status**: Proposed
 
 ## Background
-Container Runtime Interface (CRI) defines [APIs and configuration types](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/cri/v1alpha1/runtime/api.proto) for kubelet to integrate various container runtimes. The Open Container Initiative (OCI) Runtime Specification defines [platform specific configuration](https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration), including Linux, Windows, and Solaris. Currently CRI only suppports Linux container configuration. This proposal is to bring the Memory & CPU resource restrictions already specified in OCI for Windows to CRI.
+Container Runtime Interface (CRI) defines [APIs and configuration types](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/cri/v1alpha1/runtime/api.proto) for kubelet to integrate various container runtimes. The Open Container Initiative (OCI) Runtime Specification defines [platform specific configuration](https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration), including Linux, Windows, and Solaris. Currently CRI only supports Linux container configuration. This proposal is to bring the Memory & CPU resource restrictions already specified in OCI for Windows to CRI.
 
 The Linux & Windows schedulers differ in design and the units used, but can accomplish the same goal of limiting resource consumption of individual containers.
@@ -118,7 +118,7 @@ The following formula is used to convert CPU in millicores to cgroup values:
 The `kubelet` will create a cgroup sandbox for each pod.
 
 The naming convention for the cgroup sandbox is `pod<pod.UID>`. It enables
-the `kubelet` to associate a particular cgroup on the host filesytem
+the `kubelet` to associate a particular cgroup on the host filesystem
 with a corresponding pod without managing any additional state. This is useful
 when the `kubelet` restarts and needs to verify the cgroup filesystem.
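For the cgroup-sandbox naming and the millicore conversion this hunk's header refers to, an illustrative sketch follows; the function names are hypothetical, and the 1024-shares-per-CPU conversion is the standard cgroup convention rather than text quoted from this proposal:

```go
package cgroupsketch

import "fmt"

const (
	sharesPerCPU   = 1024 // cgroup cpu.shares granted per full CPU
	milliCPUPerCPU = 1000
)

// podCgroupName derives the per-pod cgroup sandbox name ("pod<pod.UID>").
func podCgroupName(podUID string) string {
	return fmt.Sprintf("pod%s", podUID)
}

// milliCPUToShares converts a CPU request in millicores to cpu.shares
// (the kubelet additionally clamps to a small minimum, omitted here).
func milliCPUToShares(milliCPU int64) int64 {
	return milliCPU * sharesPerCPU / milliCPUPerCPU
}
```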
@@ -433,7 +433,7 @@ eviction decisions for the unbounded QoS tiers (Burstable, BestEffort).
 The following describes the cgroup representation of a node with pods
 across multiple QoS classes.
 
-### Cgroup Hierachy
+### Cgroup Hierarchy
 
 The following identifies a sample hierarchy based on the described design.
@@ -115,7 +115,7 @@ supports setting a number of whitelisted sysctls during the container creation p
 Some real-world examples for the use of sysctls:
 
 - PostgreSQL requires `kernel.shmmax` and `kernel.shmall` (among others) to be
-  set to reasonable high values (compare [PostgresSQL Manual 17.4.1. Shared Memory
+  set to reasonable high values (compare [PostgreSQL Manual 17.4.1. Shared Memory
   and Semaphores](http://www.postgresql.org/docs/9.1/static/kernel-resources.html)).
   The default of 32 MB for shared memory is not reasonable for a database.
 - RabbitMQ proposes a number of sysctl settings to optimize networking: https://www.rabbitmq.com/networking.html.
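The whitelisted-sysctl support discussed here later surfaced as `securityContext.sysctls` on the pod. Purely as an illustration (the field and the values below are not part of this proposal's text), setting the PostgreSQL-related sysctls from Go might look like:

```go
package sysctlsketch

import v1 "k8s.io/api/core/v1"

// withSharedMemorySysctls sets example PostgreSQL-related sysctls on a pod.
// kernel.shm* sysctls are namespaced but not in the default "safe" set, so the
// kubelet must be configured to allow them; the values are examples only.
func withSharedMemorySysctls(pod *v1.Pod) *v1.Pod {
	if pod.Spec.SecurityContext == nil {
		pod.Spec.SecurityContext = &v1.PodSecurityContext{}
	}
	pod.Spec.SecurityContext.Sysctls = append(pod.Spec.SecurityContext.Sysctls,
		v1.Sysctl{Name: "kernel.shmmax", Value: "68719476736"},
		v1.Sysctl{Name: "kernel.shmall", Value: "4294967296"},
	)
	return pod
}
```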
@@ -342,7 +342,7 @@ Issues:
 * [x] **namespaced** in net ns
 * [ ] **might have application influence** for high values as it limits the socket queue length
 * [?] **No real evidence found until now for accounting**. The limit is checked by `sk_acceptq_is_full` at http://lxr.free-electrons.com/source/net/ipv4/tcp_ipv4.c#L1276. After that a new socket is created. Probably, the tcp socket buffer sysctls apply then, with their accounting, see below.
-* [ ] **very unreliable** tcp memory accounting. There have a been a number of attemps to drop that from the kernel completely, e.g. https://lkml.org/lkml/2014/9/12/401. On Fedora 24 (4.6.3) tcp accounting did not work at all, on Ubuntu 16.06 (4.4) it kind of worked in the root-cg, but in containers only values copied from the root-cg appeared.
+* [ ] **very unreliable** tcp memory accounting. There have a been a number of attempts to drop that from the kernel completely, e.g. https://lkml.org/lkml/2014/9/12/401. On Fedora 24 (4.6.3) tcp accounting did not work at all, on Ubuntu 16.06 (4.4) it kind of worked in the root-cg, but in containers only values copied from the root-cg appeared.
 e - `net.ipv4.tcp_wmem`/`net.ipv4.tcp_wmem`/`net.core.rmem_max`/`net.core.wmem_max`: socket buffer sizes
 * [ ] **not namespaced in net ns**, and they are not even available under `/sys/net`
 - `net.ipv4.ip_local_port_range`: local tcp/udp port range
@@ -38,13 +38,13 @@ to Kubelet and monitor them without writing custom Kubernetes code.
 We also want to provide a consistent and portable solution for users to
 consume hardware devices across k8s clusters.
 
-This document describes a vendor independant solution to:
+This document describes a vendor independent solution to:
 * Discovering and representing external devices
 * Making these devices available to the containers, using these devices,
   scrubbing and securely sharing these devices.
 * Health Check of these devices
 
-Because devices are vendor dependant and have their own sets of problems
+Because devices are vendor dependent and have their own sets of problems
 and mechanisms, the solution we describe is a plugin mechanism that may run
 in a container deployed through the DaemonSets mechanism or in bare metal mode.
@@ -187,7 +187,7 @@ sockets and follow this simple pattern:
    gRPC request)
 2. Kubelet answers to the `RegisterRequest` with a `RegisterResponse`
    containing any error Kubelet might have encountered
-3. The device plugin start it's gRPC server if it did not recieve an
+3. The device plugin start it's gRPC server if it did not receive an
    error
 
 ## Unix Socket
@@ -242,7 +242,7 @@ service Registration {
 // DevicePlugin is the service advertised by Device Plugins
 service DevicePlugin {
 	// ListAndWatch returns a stream of List of Devices
-	// Whenever a Device state change or a Device disapears, ListAndWatch
+	// Whenever a Device state change or a Device disappears, ListAndWatch
 	// returns the new list
 	rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}
@@ -282,7 +282,7 @@ message AllocateResponse {
 }
 
 // ListAndWatch returns a stream of List of Devices
-// Whenever a Device state change or a Device disapears, ListAndWatch
+// Whenever a Device state change or a Device disappears, ListAndWatch
 // returns the new list
 message ListAndWatchResponse {
 	repeated Device devices = 1;
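A hedged sketch of the plugin side of the `ListAndWatch` stream described above; the `v1beta1` package path is the later published form of this API (the proposal text covers the alpha version), and the timer-based resend is only a placeholder for a real health/hotplug trigger:

```go
package devicepluginsketch

import (
	"time"

	pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
)

type stubPlugin struct {
	devices []*pluginapi.Device
}

// ListAndWatch streams the current device list, then sends it again whenever
// device state changes — "Whenever a Device state change or a Device
// disappears, ListAndWatch returns the new list".
func (p *stubPlugin) ListAndWatch(_ *pluginapi.Empty, s pluginapi.DevicePlugin_ListAndWatchServer) error {
	for {
		if err := s.Send(&pluginapi.ListAndWatchResponse{Devices: p.devices}); err != nil {
			return err
		}
		time.Sleep(10 * time.Second) // stand-in for a real state-change notification
	}
}
```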
@@ -485,7 +485,7 @@ spec:
 Currently we require exact version match between Kubelet and Device Plugin.
 API version is expected to be increased only upon incompatible API changes.
 
-Follow protobuf guidelines on versionning:
+Follow protobuf guidelines on versioning:
 * Do not change ordering
 * Do not remove fields or change types
 * Add optional fields
@@ -165,7 +165,7 @@ type PriorityClass struct {
 	metav1.ObjectMeta
 
 	// The value of this priority class. This is the actual priority that pods
-	// recieve when they have the above name in their pod spec.
+	// receive when they have the above name in their pod spec.
 	Value int32
 	GlobalDefault bool
 	Description string
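For reference, the `PriorityClass` fields shown in this hunk map onto the published scheduling/v1 API roughly as follows; the package path and the example values are assumptions, not the proposal's internal type:

```go
package prioritysketch

import (
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// examplePriorityClass builds a PriorityClass; Value is the priority that pods
// receive when they name this class in their pod spec.
func examplePriorityClass() *schedulingv1.PriorityClass {
	return &schedulingv1.PriorityClass{
		ObjectMeta:    metav1.ObjectMeta{Name: "high-priority"},
		Value:         1000000,
		GlobalDefault: false,
		Description:   "Example class for latency-critical workloads.",
	}
}
```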
@@ -5,7 +5,7 @@
 In addition to Kubernetes core components like api-server, scheduler, controller-manager running on a master machine
 there is a bunch of addons which due to various reasons have to run on a regular cluster node, not the master.
 Some of them are critical to have fully functional cluster: Heapster, DNS, UI. Users can break their cluster
-by evicting a critical addon (either manually or as a side effect of an other operation like upgrade)
+by evicting a critical addon (either manually or as a side effect of another operation like upgrade)
 which possibly can become pending (for example when the cluster is highly utilized).
 To avoid such situation we want to have a mechanism which guarantees that
 critical addons are scheduled assuming the cluster is big enough.
@@ -25,7 +25,7 @@ This document presents a proposal for managing raw block storage in Kubernetes u
 # Value add to Kubernetes
 
 By extending the API for volumes to specifically request a raw block device, we provide an explicit method for volume consumption,
-whereas previously any request for storage was always fulfilled with a formatted fileystem, even when the underlying storage was
+whereas previously any request for storage was always fulfilled with a formatted filesystem, even when the underlying storage was
 block. In addition, the ability to use a raw block device without a filesystem will allow
 Kubernetes better support of high performance applications that can utilize raw block devices directly for their storage.
 Block volumes are critical to applications like databases (MongoDB, Cassandra) that require consistent I/O performance
@@ -113,7 +113,7 @@ spec:
 
 ## Persistent Volume API Changes:
 For static provisioning the admin creates the volume and also is intentional about how the volume should be consumed. For backwards
-compatibility, the absence of volumeMode will default to filesystem which is how volumes work today, which are formatted with a filesystem depending on the plug-in chosen. Recycling will not be a supported reclaim policy as it has been deprecated. The path value in the local PV definition would be overloaded to define the path of the raw block device rather than the fileystem path.
+compatibility, the absence of volumeMode will default to filesystem which is how volumes work today, which are formatted with a filesystem depending on the plug-in chosen. Recycling will not be a supported reclaim policy as it has been deprecated. The path value in the local PV definition would be overloaded to define the path of the raw block device rather than the filesystem path.
 ```
 kind: PersistentVolume
 apiVersion: v1
@@ -841,4 +841,4 @@ Feature: Discovery of block devices
 
 Milestone 1: Dynamically provisioned PVs to dynamically allocated devices
 
-Milestone 2: Plugin changes with dynamic provisioning support (RBD, iSCSI, GCE, AWS & GlusterFS)
+Milestone 2: Plugin changes with dynamic provisioning support (RBD, iSCSI, GCE, AWS & GlusterFS)
@@ -102,7 +102,7 @@ type VolumeNodeAffinity struct {
 The `Required` field is a hard constraint and indicates that the PersistentVolume
 can only be accessed from Nodes that satisfy the NodeSelector.
 
-In the future, a `Preferred` field can be added to handle soft node contraints with
+In the future, a `Preferred` field can be added to handle soft node constraints with
 weights, but will not be included in the initial implementation.
 
 The advantages of this NodeAffinity field vs the existing method of using zone labels
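To illustrate the hard `Required` constraint discussed in this hunk, a sketch using the field names of the current core/v1 API (the proposal text shows an earlier draft of `VolumeNodeAffinity`):

```go
package volaffinitysketch

import v1 "k8s.io/api/core/v1"

// pinPVToNode restricts a PersistentVolume to a single node via the hard
// Required node selector; the hostname label key is the conventional choice.
func pinPVToNode(pv *v1.PersistentVolume, nodeName string) {
	pv.Spec.NodeAffinity = &v1.VolumeNodeAffinity{
		Required: &v1.NodeSelector{
			NodeSelectorTerms: []v1.NodeSelectorTerm{{
				MatchExpressions: []v1.NodeSelectorRequirement{{
					Key:      "kubernetes.io/hostname",
					Operator: v1.NodeSelectorOpIn,
					Values:   []string{nodeName},
				}},
			}},
		},
	}
}
```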
@@ -492,7 +492,7 @@ if the API update fails, the cached updates need to be reverted and restored
 with the actual API object. The cache will return either the cached-only
 object, or the informer object, whichever one is latest. Informer updates
 will always override the cached-only object. The new predicate and priority
-functions must get the objects from this cache intead of from the informer cache.
+functions must get the objects from this cache instead of from the informer cache.
 This cache only stores pointers to objects and most of the time will only
 point to the informer object, so the memory footprint per object is small.
@@ -20,7 +20,7 @@ test results.
 Gubernator simplifies the debugging process and makes it easier to track down failures by automating many
 steps commonly taken in searching through logs, and by offering tools to filter through logs to find relevant lines.
 Gubernator automates the steps of finding the failed tests, displaying relevant logs, and determining the
-failed pods and the corresponing pod UID, namespace, and container ID.
+failed pods and the corresponding pod UID, namespace, and container ID.
 It also allows for filtering of the log files to display relevant lines based on selected keywords, and
 allows for multiple logs to be woven together by timestamp.
@@ -124,7 +124,7 @@ and Scheduler talk with API server using insecure port 8080.</sub>
    (We use gcr.io/ as our remote docker repository in GCE, should be different for other providers)
 3. [One-off] Create and upload a Docker image for NodeProblemDetector (see kubernetes/node-problem-detector repo),
    which is one of the containers in the HollowNode pod, besides HollowKubelet and HollowProxy. However we
-   use it with a hollow config that esentially has an empty set of rules and conditions to be detected.
+   use it with a hollow config that essentially has an empty set of rules and conditions to be detected.
    This step is required only for other cloud providers, as the docker image for GCE already exists on GCR.
 4. Create secret which stores kubeconfig for use by HollowKubelet/HollowProxy, addons, and configMaps
    for the HollowNode and the HollowNodeProblemDetector.
@@ -12,7 +12,7 @@ At the time of this writing, this includes the branches
 - release-1.8 / release-5.0,
 - and release-1.9 / release-6.0
 
-of the follwing staging repos in the k8s.io org:
+of the following staging repos in the k8s.io org:
 
 - api
 - apiextensions-apiserver
@@ -65,6 +65,6 @@ Call for topics and voting is now closed. You can view the complete list of prop
 
 ## Misc:
 
-A photographer and videographer will be onsite collecting b-roll and other shots for KubeCon. If you would rather not be involved, please reach out to an organizer on the day of so we may accomodate you.
+A photographer and videographer will be onsite collecting b-roll and other shots for KubeCon. If you would rather not be involved, please reach out to an organizer on the day of so we may accommodate you.
 
 Further details to be updated on this doc. Please check back for a complete guide.
@@ -4,9 +4,9 @@ Note Takers: Josh Berkus(@jberkus), Aaron Crickenberger (@spiffxp)
 
 Here's what we're covering: build process, issue tracking, logistics.
 
-## Question: are we commited emotionally 100% to splitting the monolith
+## Question: are we committed emotionally 100% to splitting the monolith
 
-Opinion: commited, but need to define what that means (past provides the impetus)
+Opinion: committed, but need to define what that means (past provides the impetus)
 
 We've been trying [the monolith] for so long and it's too expensive.
@@ -34,7 +34,7 @@ Assumption: contributor velocity is correlated with velocity, new contributor ex
 Assumption: "big tangled ball of pasta" is hard to contribute to
 
 - thockin: our codebase is not organized well (need to make it easier to actually *find* what you want to contribute to) one of the things we could do is "move code around" until it looks effectively like there are multiple repos in our repo? eg if we centralized all of the kubelet logic in one directory except for one util directory, maybe that would be an improvement
-- jberkus: people who work on other large projects say that modularity is the way to go. Not necessarily seperate repos, but a modular architecture.
+- jberkus: people who work on other large projects say that modularity is the way to go. Not necessarily separate repos, but a modular architecture.
 - thockin: what we've done with github bots alleviates a lot of the pain we've had with github
 - dims: two problems
   - vendoring in all kinds of sdk's that we don't care about, they get dragged in (e.g. AWS SDK) and if you're only working on eg: openstack related stuff, it's hard to get signoff on getting other stuff in if it's in your own repo it's easier to do that than in the main repo
@@ -48,7 +48,7 @@ Assumption: "big tangled ball of pasta" is hard to contribute to
 - dchen1107: I heard a lot of benefits of the split. So there's a huge cost for release management, as the release manager, I don't know what's in the other repositiory. Clear APIs and interfaces could be built without needing separate repositories. Gave example of Docker and CRI, which got API without moving repos.
 - Example of Cloud Foundry, build process which took two weeks. How do we ensure that we're delivering security updates quickly?
 - thockin: We can put many things in APIs and cloud providers are a good example of that. Splitting stuff out into multiple repos comes down to the question of: are we a piece of software or a distribution? You'll have to come to my talk to see the rest.
-- spiffxp: Increasing our dependancy on integration tools will add overhead and process we don't have now. But I understand that most OSS people expect multiple repos and it's easier for them. Github notifications are awful, having multiple repos would make this better. The bots have improved the automation situation, but not really triaging notifications. How many issues do we have that are based on Github. Maybe we should improve the routing of GH notificaitons instead?
+- spiffxp: Increasing our dependency on integration tools will add overhead and process we don't have now. But I understand that most OSS people expect multiple repos and it's easier for them. Github notifications are awful, having multiple repos would make this better. The bots have improved the automation situation, but not really triaging notifications. How many issues do we have that are based on Github. Maybe we should improve the routing of GH notificaitons instead?
 - solly: if we split, we really need to make it easier for tiny repos to plug into our automation and other tools. I shouldn't have to figure out who to ask about getting plugged in, we should have a doc. We need to make sure it's not possible for repos to fall through the cracks like Heapster, which I work on.
 - bgrant: We're already in the land of 90 repos. We don't need to debate splitting, we're alread split. We have incubator, and kubernates-client. I think client has helped a lot, we have 7 languages and that'll grow.
 - bgrant: the velocity of things in the monorepo are static
@@ -57,7 +57,7 @@ Assumption: "big tangled ball of pasta" is hard to contribute to
 - spiffxp: we have hundreds of directories with no owners files, which is one of the reasons for excessive notifications.
 - bgrant: kubernetes, as a API-driven system, you have to touch the api to do almost anything. we've added mechanisms like CRD to extend the API. We need SDKs to build kube-style APIs.
 - dims: I want to focus on the cost we're paying right now. First we had the google KMS provider. Then we had to kick out the KMS provider, and there was a PR to add the gRPC interface, but it didn't go into 1.9.
-- thockin: the cloud provider stuff is an obvious breeding ground for new functionality, how and if we should add a whole seperate grpc plugin interface is a seperate question
+- thockin: the cloud provider stuff is an obvious breeding ground for new functionality, how and if we should add a whole separate grpc plugin interface is a separate question
 - jdumars: the vault provider thing was one of the ebtter things that happened, it pushed us at MS to thing about genercizing the solution, it pushed us to think about what's better for the community vs. what's better for the provider
 - jdumars: flipside is we need to have a process where people can up with a well accepted / adopted solution, the vault provider thing was one way of doing that
 - lavalamp: I tend to think that most extension points are special snowflakes and you can't have a generic process for adding a new extension point
@@ -68,9 +68,9 @@ Assumption: "big tangled ball of pasta" is hard to contribute to
 - lavalamp: there are utility functions that people commonly use and there's no good common place
 - lavalamp: for kubectl at least it's sphaghetti code that pulls in lots of packages and makes it difficult to do technically
 - thockin: do we think that life would be better at the end of that tunnel, would things be better if kubectl was a different repository, etc.
-- timallclair: I'm worried about dependancy management, godeps is already a nightmare and with multiple repos it would be worse.
+- timallclair: I'm worried about dependency management, godeps is already a nightmare and with multiple repos it would be worse.
 - luxas: in the kubeadm attack plan, we need to get a release for multiple repos. We need the kubeadm repo to be authoritative, and be able to include it in a build.
 - pwittrock: how has "staging" improved development? can we see any of the perceived or hoped-for benefits by looking at staging repos as example use cases?
 - lavalamp: getting to the "staging" state and then stopping is because api-machinery was unblocked once we got there
 - thockin: the reason I consider "staging" solved, is you have to untangle a lot of the stuff already
-- erictune: I would make a plea that we finish some of our started-but-not-finished breakaparts
+- erictune: I would make a plea that we finish some of our started-but-not-finished breakaparts
@@ -28,7 +28,7 @@ TSC: The KEP has owners, you could have a reviewer field and designate a reviewe
 
 Dhawal: many SIG meetings aren't really traceable because they're video meetings. Stuff in Issues/PRs are much more referencable for new contributors. If the feature is not searchable, then it's not available for anyone to check. If it is going to a SIG, then you need to update the issue, and summarize the discussions in the SIG.
 
-TSC: Just because a feature is assigned to a SIG doesn't mean they'll acutally look at it. SIGs have their own priorities. There's so many issues in the backlog, nobody can deal with it. My search for sig/scheduling is 10 different searches to find all of the sig/scheduling issues. SIG labels aren't always applied. And then you have to prioritize the list.
+TSC: Just because a feature is assigned to a SIG doesn't mean they'll actually look at it. SIGs have their own priorities. There's so many issues in the backlog, nobody can deal with it. My search for sig/scheduling is 10 different searches to find all of the sig/scheduling issues. SIG labels aren't always applied. And then you have to prioritize the list.
 
 ???: Test plans also seem to be late in the game. This could be part of the KEP process. And user-facing-documentation.
@@ -77,7 +77,7 @@ Q: (maru) you’re still going to have to rebase feature branches the way that y
 - A (jago) the difference is that a whole team could work on a feature branch
 - I like the idea of trying a few projects and then reporting back
 - I think roughly the level of granularity of having a branch per KEP sounds about right
-- Multiple feature branches affecting the same area of code would be a very useful place to get some infromation
+- Multiple feature branches affecting the same area of code would be a very useful place to get some information
 
 Q (???) if someone needs to rebase on a feature branch, could it instead be resolved by a merge commit instead?
@@ -53,6 +53,6 @@ Combined social function 5-7pm so the two groups can comingle. Hard cut off at 7
 
 ## Misc:
 
-A photographer and videographer will be onsite collecting b-roll and other shots for KubeCon. If you would rather not be involved, please reach out to an organizer on the day of so we may accomodate you.
+A photographer and videographer will be onsite collecting b-roll and other shots for KubeCon. If you would rather not be involved, please reach out to an organizer on the day of so we may accommodate you.
 
 Further details to be updated on this doc. Please check back for a complete guide.
@@ -38,7 +38,7 @@ Specific questions about Kubernetes as pertaining to the topic. Since this is a
 
 #### What’s Offtopic
 
-Developer/Contributor questions: This event is for end users and operators, there is a seperate livestream called [Meet our Contributors](/mentoring/meet-our-contributors.md) where you can ask questions about getting started contributing, participating in peer reviews, and other development topics.
+Developer/Contributor questions: This event is for end users and operators, there is a separate livestream called [Meet our Contributors](/mentoring/meet-our-contributors.md) where you can ask questions about getting started contributing, participating in peer reviews, and other development topics.
 
 Local installation and debugging: The participants don’t have access to your network or your hardware. The host can/should help the user transform a vague question into something answerable and reusable.
@@ -172,7 +172,7 @@ Metadata items:
 * **owning-sig** Required
   * The SIG that is most closely associated with this KEP. If there is code or
     other artifacts that will result from this KEP, then it is expected that
-    this SIG will take responsiblity for the bulk of those artifacts.
+    this SIG will take responsibility for the bulk of those artifacts.
   * Sigs are listed as `sig-abc-def` where the name matches up with the
     directory in the `kubernetes/community` repo.
 * **participating-sigs** Optional
@@ -359,4 +359,4 @@ and durable storage.
 - How reviewers and approvers are assigned to a KEP
 - Example schedule, deadline, and time frame for each stage of a KEP
 - Communication/notification mechanisms
-- Review meetings and escalation procedure
+- Review meetings and escalation procedure
@@ -33,7 +33,7 @@ In order to standardize Special Interest Group efforts, create maximum transpare
 1. Create a calendar on your own account. Make it public.
 2. Share it with all SIG leads with full ownership of the calendar - they can edit, rename, or even delete it.
 3. Share it with `sc1@kubernetes.io`, `sc2@kubernetes.io`, `sc3@kubernetes.io`, with full ownership. This is just in case SIG leads ever disappear.
-4. Share it with the SIG mailing list, lowest priviledges.
+4. Share it with the SIG mailing list, lowest privileges.
 5. Share individual events with `cgnt364vd8s86hr2phapfjc6uk@group.calendar.google.com` to publish on the universal calendar.
 * Use existing proposal and PR process (to be documented)
 * Announce new SIG on kubernetes-dev@googlegroups.com