* unittests should compare the result of "kustomize build" to a golden set of YAML resources.
* Per kubeflow/manifests#306, to allow reviewers to verify that the expected
  output is correct, we should check in the result of "kustomize build -o"
  so that reviewers can review the diff directly (see the test sketch below).
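  As a rough sketch of what such a golden-file test could look like (the
  paths and class names here are hypothetical, not the repository's actual
  layout):

  ```python
  import subprocess
  import unittest
  from pathlib import Path

  # Hypothetical paths; the real test layout may differ.
  KUSTOMIZATION_DIR = Path("profiles/base")
  GOLDEN_FILE = Path("tests/test_data/profiles/base/expected.yaml")


  class KustomizeGoldenTest(unittest.TestCase):
      def test_build_matches_golden(self):
          # Build the kustomization and capture the generated YAML.
          result = subprocess.run(
              ["kustomize", "build", str(KUSTOMIZATION_DIR)],
              capture_output=True,
              text=True,
              check=True,
          )
          # Compare against the checked-in golden output; any change to
          # the built resources then shows up as an ordinary diff in review.
          self.assertEqual(result.stdout, GOLDEN_FILE.read_text())


  if __name__ == "__main__":
      unittest.main()
  ```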
* This also simplifies the test generation code; the python script
  generate_tests.py just recurses over the directory tree, runs "kustomize build -o", and checks the output into the test_data directory (see the sketch after this item).
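  A minimal sketch of that generation logic, assuming the source and output
  roots below (the real generate_tests.py may use different paths and
  options):

  ```python
  import subprocess
  from pathlib import Path

  SOURCE_ROOT = Path("manifests")            # assumed root of the kustomize tree
  TEST_DATA_ROOT = Path("tests/test_data")   # assumed location for golden output


  def generate_expected_output(root: Path, out_root: Path) -> None:
      # Recurse over the tree; every directory containing a
      # kustomization.yaml is treated as a test case.
      for kustomization in root.rglob("kustomization.yaml"):
          src_dir = kustomization.parent
          # Mirror the source directory structure under test_data.
          out_file = out_root / src_dir.relative_to(root) / "expected.yaml"
          out_file.parent.mkdir(parents=True, exist_ok=True)
          # "kustomize build -o" writes the built resources to out_file;
          # the result is then checked in for reviewers to inspect.
          subprocess.run(
              ["kustomize", "build", str(src_dir), "-o", str(out_file)],
              check=True,
          )


  if __name__ == "__main__":
      generate_expected_output(SOURCE_ROOT, TEST_DATA_ROOT)
  ```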
* This is different from what the tests are currently doing.
* Currently, the generation scripts generate "kustomization.yaml" files and then compute the expected output from them when the test is run.
* This makes it very difficult to validate the expected output and to
  debug whether it is correct.
* Going forward, per #1014, I think what we want to do is check in test cases
  consisting of kustomization.yaml files for the various kustomizations
  that we want to validate are working correctly.
* Our generate scripts would then run "kustomize build" to generate the expected
  output and check it in, so that we can validate that the expected output
  is correct.
* Also change the test data structure so that it mirrors the kustomize directory tree rather than flattening the tests into the "tests" directory (see the layout example below).
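  As a hypothetical illustration of the mirrored layout (the actual
  directory names may differ):

  ```
  # Flattened (before): every test case lands directly under tests/
  tests/profiles-base/expected.yaml

  # Mirrored (after): test_data follows the kustomize source tree
  manifests/profiles/base/kustomization.yaml
  tests/test_data/profiles/base/expected.yaml
  ```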
* Fix #683
* Right now, running the unittests takes a long time.
* The problem is that we generate unittests for every "kustomization.yaml"
  file.
* Per #1014 this is kind of pointless/redundant because most of these
tests aren't actually testing kustomizations.
* We will address this in follow-on PRs, which will add more appropriate
  tests and remove some of these unnecessary/redundant tests.
* Cherry-pick AWS fixes.
* Regenerate the tests.
* Fix the unittests; update the generation logic to remove unused tests
  that aren't part of this PR.
* Address comments.
* Rebase on master and regenerate the tests.