* As discussed in http://bit.ly/kf_kustomize_v3 we want to use better
patterns in our kustomize manifests.
* This PR uses kustomize to compose Kubeflow applications into a "stack" of
the applications to be installed on GCP.
* Note: this is only an initial set of applications; not all applications
  are installed yet.
* Define a "stack": a stack is an opinionated way of combining applications
  * a stack is opinionated about which applications to include and how
    to configure them
* So we define a GCP stack that would contain the apps to be
included in GCP deployments.
* Stacks can be used to replace much of the functionality currently
achieved by the list of applications in KFDef
* Instead of listing applications in KFDef we could just point to a
kustomization file containing the list of applications.
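  As a sketch, the GCP stack could be a single kustomization.yaml that just
  lists the applications to compose (the paths and namespace below are
  hypothetical, for illustration only):

  ```yaml
  # stacks/gcp/kustomization.yaml (hypothetical layout)
  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization
  namespace: kubeflow
  resources:
  - ../../metacontroller/v3
  - ../../jupyter/notebook-controller/v3
  - ../../common/centraldashboard/v3
  ```

  KFDef could then reference this one file instead of enumerating each
  application itself.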
* Define an example illustrating how Alice would define an overlay
  to add her kustomizations.
* Show how she can define a patch to modify the additional configuration
parameters for her deployment.
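  One way Alice's overlay could look (all names and paths here are
  hypothetical; the patch target is an assumed deployment name):

  ```yaml
  # alice/kustomization.yaml (hypothetical overlay)
  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization
  resources:
  - ../stacks/gcp                  # the GCP stack as the base
  patchesStrategicMerge:
  - notebook-controller-patch.yaml
  ---
  # alice/notebook-controller-patch.yaml (hypothetical patch file)
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: notebook-controller-deployment
  spec:
    template:
      spec:
        containers:
        - name: manager
          env:
          - name: CLUSTER_DOMAIN
            value: example.local
  ```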
Start defining a KFDef that uses the new v3 manifests and the stacks
* We still need multiple applications (and not a single kustomize
  application) because the namespaces for some applications are different
* e.g. ISTIO is installed in istio-system not kubeflow namespace so
we keep it as a separate kustomize package
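  Roughly, because a kustomization's namespace field applies to everything
  it builds, ISTIO needs its own package (resource file name hypothetical):

  ```yaml
  # istio/kustomization.yaml (sketch)
  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization
  namespace: istio-system   # differs from the kubeflow namespace used by the stack
  resources:
  - istio-install.yaml
  ```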
* Create v3 version of a couple applications
* meta controller
* notebook controller
* certmanager
Fix some bugs in some v3 packages
* jupyter_web_app - base/deployment.yaml contained changes that had to be
  moved into a patch because we don't want them included in the v3 version
Fix centraldashboard v3
* Move the resources back to base and make base_v3 depend on base rather than
the other way around.
* It also looks like we ended up duplicating resources between base_v3 and
  base; probably because of bad merges and rebases.
* In rolebindings, don't use vars to substitute in the namespace; just
  hardcode the values as needed.
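  Concretely, a rolebinding subject can just name the namespace directly
  instead of relying on var substitution (resource names here are a sketch):

  ```yaml
  # Sketch: namespace hardcoded rather than substituted via a kustomize var
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: centraldashboard
  subjects:
  - kind: ServiceAccount
    name: centraldashboard
    namespace: kubeflow   # hardcoded; previously a $(namespace) var
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: centraldashboard
  ```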
* Upgrade kustomize to v3.2.0; otherwise the tests don't pass
* In particular, with v3.1.0 the jupyter-web-app doesn't have the unique
  name of the kubeflow-config configmap (i.e. with the content hash)
  substituted into the environment variable configmap ref names.
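  An illustrative sketch of the behavior being tested (configmap name and
  literals are hypothetical): configMapGenerator appends a content hash to
  the generated configmap's name, and kustomize is supposed to rewrite
  references to that name.

  ```yaml
  # configMapGenerator gives the configmap a content-hash suffix,
  # e.g. kubeflow-config-<hash>
  configMapGenerator:
  - name: kubeflow-config
    literals:
    - clusterDomain=cluster.local
  ---
  # Deployment fragment: kustomize v3.2.0 rewrites the name below to
  # kubeflow-config-<hash>; v3.1.0 failed to do so for jupyter-web-app.
  env:
  - name: CLUSTER_DOMAIN
    valueFrom:
      configMapKeyRef:
        name: kubeflow-config
        key: clusterDomain
  ```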
* unittest tests should test actual kustomizations.
* generate_legacy_tests.py is a one-time script intended to generate
  kustomization.yaml files that are nearly identical to the ones that
get generated by kfctl. These kustomization.yaml files can be used to
generate the golden/expected output. We can use these to test that
we actually produce the expected output.
* This will make it easier to refactor our kustomize packages and verify
that we haven't changed the existing kustomizations.
* Prior to this PR, generate_tests.py looked for every kustomization.yaml
  file in the manifests repo and generated a test for it. That doesn't
  really do what we want because many of those kustomization.yaml files are
  trivial, so we don't end up testing any transformations.
* Right now the transformations we want to test are the ones applied by the
kustomization.yaml files created by kfctl
* That's why we check in kustomization.yaml files corresponding to the
  kustomization.yaml files generated by kfctl.
* Once we start using kustomize to compose applications into packages
we will test that those stacks generate the expected output.
* Related to #1014
* No longer try to infer which tests were modified; instead, just rerun
  generate for everything. Now that we are testing derived kustomizations,
  it's no longer easy to infer from the changed files which tests to update.
* Address comments.
* unittests should compare result of kustomize build to golden set of YAML resources.
* Per kubeflow/manifests#306, to allow reviewers to verify that the expected
  output is correct, we should check in the result of "kustomize build -o"
  so that reviewers can review the diff and verify that it is correct.
* This also simplifies the test generation code; the python script
  generate_tests.py just recurses over the directory tree, runs
  "kustomize build -o", and checks the output into the test_data directory.
* This is different from what the tests are currently doing.
* Currently the generation scripts generate "kustomization.yaml" files and
  then produce the expected output from them when the test is run.
* This makes it very difficult to validate the expected output and to
debug whether the expected output is correct.
* Going forward, per #1014, I think what we want to do is check in test
  cases: kustomization.yaml files for the various kustomizations that we
  want to validate are working correctly
* Our generate scripts would then run "kustomize build" to generate expected
output and check that in so that we can validate that the expected output
is correct.
* Also change the tests data structure so that it mirrors the kustomize directory tree rather than flattening the tests into the "tests" directory.
* Fix #683
* Right now running the unittests takes a long time
* The problem is that we generate unittests for every "kustomization.yaml"
file
* Per #1014 this is kind of pointless/redundant because most of these
tests aren't actually testing kustomizations.
* We will address this in follow-on PRs which will add more appropriate
  tests and remove some of these unnecessary/redundant tests.
* Cherry pick AWS fixes.
* Regenerate the tests.
* Fix the unittests; the generate logic needs to be updated to remove
  unused tests that aren't part of this PR.
* Address comments.
* Rebase on master and regenerate the tests.
* ISTIO rbac roles need to include the API group networking.istio.io
* Otherwise we won't be able to create virtualservices inside notebooks.
We want to do this to deploy things like the mnist frontend and tensorboard
from notebooks.
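  A sketch of the rule change (the role name, resource list, and verbs here
  are illustrative, not the exact manifest):

  ```yaml
  # Sketch: grant access to Istio virtualservices so they can be
  # created from notebooks (e.g. for the mnist frontend, tensorboard)
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: kubeflow-edit   # hypothetical role name
  rules:
  - apiGroups:
    - networking.istio.io
    resources:
    - virtualservices
    verbs:
    - get
    - list
    - create
    - delete
  ```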
* Add an option to the regenerate tests script to use an environment
variable to explicitly set the name of the origin repository.
* Fix computation of changed files in generate-changed-only rule
* use git diff --name-only @{upstream} to compute the diff against the
  upstream branch. This should be better than the current approach, which
  makes assumptions about the remote repo names.
* Furthermore we need to make sure that when the base kustomization package
changes that we also regenerate the tests for the overlay packages.
* To support that, we replace gen-test-targets.sh with a python script.
* The bash scripts are pretty impenetrable; migrating to python should
make the code easier to maintain.
* The name of the go test files generated is slightly different from what
the shell scripts were generating.
* This is intended to make the naming more consistent
* specifically a/b/c/kustomization.yaml results in
  tests/a-b-c_test.go
* It looks like the shell script was sometimes not including "a" in the name.
* The python script also checks that for each _test.go file there is a
  corresponding kustomization.yaml file; otherwise it removes the test.
  This ensures that if we move or remove a kustomize package, we also
  remove the test.
* Fix: kubeflow/manifests#509
* Update the pull request template with the command to only generate
tests for changed files
* Fix kubeflow/manifests#171 - gen-test-target.sh should function regardless
of checked out name for the repository. We can use git to get the
base directory and then do an appropriate string replace.
* Need to update the test target generation
to not assume the repository is named manifests
* Update the github pull_request template to tell users to run `make generate-changed-only`
* Address comments.