* SDK - Components - Fixed Python components that use `\n`
The `\n` escape sequence was being expanded into a real newline by the `echo` command.
Apparently, unlike in the `bash` shell, the `echo` command of the `sh` shell expands escape sequences by default and does not support an option to turn this off. (For some reason the `-n` option works properly even though it should not.)
Fixes https://github.com/kubeflow/pipelines/issues/4939
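A quick way to see the difference (an illustrative snippet, assuming both `sh` and `bash` are installed; which behavior `sh` exhibits depends on the shell that `/bin/sh` resolves to, e.g. `dash` on Debian-based images):

```python
import subprocess

# `sh` (e.g. dash) expands escape sequences such as \n in echo arguments by
# default; bash's echo keeps them literal unless -e is given.
for shell in ('sh', 'bash'):
    result = subprocess.run(
        [shell, '-c', r"echo 'a\nb'"],
        capture_output=True, text=True, check=True,
    )
    print(shell, repr(result.stdout))
# Where /bin/sh is dash, this typically prints:
#   sh 'a\nb\n'     (the escape became a real newline)
#   bash 'a\\nb\n'  (the backslash-n stayed literal)
```

A common portable workaround is `printf '%s'`, which prints its argument verbatim without expanding escapes.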
* Fixed the test data
* Fixed the deprecated container component builder
* Fixed the new compiler test case
* Added test
* add placeholder to spec
* add output_directory to pipeline
* respect uri placeholder in file outputs
* wip: add data passing rewriting logic to respect the uri semantics
* merge input_uri and paths when instantiating ContainerOp
* fix
* fix workflow rewriting
* Add topology rewriting
* add a test case, and various fixes
* make the test case more complex
* Fix the case when working with OpsGroup
* Fix test case
* fix resolving test
* fix redundant cmd lines
* fix redundant cmd lines
* resolve comments
* fix file outputs
* resolve comments
* copy file outputs instead of modifying in place.
Currently we're running the Python code inline using `python -c <code>`.
This has two issues:
1) Python does not show source code lines in exception stack traces.
2) `inspect.getsource` does not work. This method is used by the PyTorch JIT, for example.
We solve these issues by writing the code into a file before executing it.
The disadvantage of the new approach is that it adds complexity and a filesystem write operation, and it requires the `sh` executable to be present (we could replace it with a Python-based program if needed).
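A minimal sketch of the idea (not the SDK's exact command line; the temp-file handling and the sample program below are illustrative):

```python
import subprocess

# Instead of `python3 -c "<code>"`, write the program to a temporary file and
# run it from there, so tracebacks show source lines and inspect.getsource()
# works inside the executed program.
program_code = 'def add(a, b):\n    return a + b\n\nprint(add(1, 2))\n'
subprocess.run(
    [
        'sh', '-ec',
        'program_path="$(mktemp)"; '
        'printf "%s" "$0" > "$program_path"; '
        'python3 -u "$program_path"',
        program_code,  # becomes $0 inside the sh script
    ],
    check=True,
)
```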
* Prepare the SDK docs environment so it's easier to understand how to build the docs locally and keep them consistent with ReadTheDocs.
* Clean up docstrings for kfp.Client
* Add in updates to the docs for compiler and components
* Update components area to add in code references and make formatting a little more consistent.
* Clean up containers, add in custom CSS to ensure we do not overflow on inline code blocks
* Clean up containers, add in custom CSS to ensure we do not overflow on inline code blocks
* Remove unused kfp.notebook package links
* Clean up a few more errant references
* Clean up the DSL docs some more
* Update SDK docs for KFP extensions to follow Sphinx guidelines
* Clean up formatting of docstrings after Ark-Kun's comments
* Add kfp-container-builder sa
* Allow service account to be configurable
* Fix tests
* Fix test
* Use documentation for the service account to support compatibility with different types of installation
* updated doc
* clean up
* Update container_builder_test.py
* Update _build_image_api.py
* Update kustomization.yaml
* Add executable permission for the presubmit test script mkp.sh
* Script to set up workload identity for standalone deployment
* Migrate tests to run on standalone + workload identity
* Fix test script
* Switch to static GSAs for testing, because GSA names have a length limit
* Add workload identity binding for argo
* Fix argo workload identity bindings
* Remove user-gcp-sa from tests
* Remove use_gcp_secret from xgboost sample
* Allow debugging tests locally
* Wait for policies to take effect
* Update deploy-pipeline-lite.sh
* Update deploy-pipeline-lite.sh
* [WIP] test gcloud auth list with test-runner sa
* Add namespace
* test again
* Use new image builder
* test again
* Remove debug code
* Remove usages of use_gcp_secret
* Fix unit test and tensorboard pod template
* Add debug code again to test
* Try waiting until workload identity bindings are ready
* Fix some other samples
* Fix parameterized tfx oss sample
* Add retry to image building
* Try fixing tfx oss sample
* Fix compiled tfx oss sample
* Update all google/cloud-sdk to latest
* Try fixing parameterized tfx oss sample again
* Also verify pipeline-runner ksa is working
* Fix parameterized_tfx_oss sample
* Update gcp-workload-identity-setup.sh
* Revert unneeded change
* Pin to new google/cloud-sdk
* Remove wrongly committed binaries
* SDK - Refactoring - Split the K8sHelper class
One part was only used by the container builder and provided a higher-level API over the K8s client.
The other was used by the compiler and did not use the kubernetes library.
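A rough sketch of the resulting split (class and method names here are illustrative, not necessarily the ones used in the SDK):

```python
import re


class CompilerK8sHelper:
    """Helpers used by the compiler; no dependency on the kubernetes library."""

    @staticmethod
    def sanitize_k8s_name(name: str) -> str:
        # Lower-case and replace disallowed characters so the result is a
        # valid Kubernetes resource name.
        return re.sub('-+', '-', re.sub('[^-0-9a-z]+', '-', name.lower())).strip('-')


class ContainerBuilderK8sHelper:
    """Higher-level API over the kubernetes client, used by the container builder."""

    def __init__(self):
        # Requires the `kubernetes` package and a reachable cluster config.
        from kubernetes import client, config
        config.load_kube_config()
        self._core_api = client.CoreV1Api()
```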
* Updated the license year.
* SDK - Containers - Added support for container image cache
This change makes `build_image_from_working_dir` fast when the working directory has not changed between invocations.
We cache pushed container images, using a specially calculated hash of the context directory as the cache key.
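The gist of the cache-key computation, as a hedged sketch (the SDK's actual hashing and cache storage may differ in details; the function and variable names are illustrative):

```python
import hashlib
import os


def calculate_context_hash(context_dir: str) -> str:
    """Hash the relative paths and contents of all files in the build context."""
    digest = hashlib.sha256()
    for root, _dirs, files in sorted(os.walk(context_dir)):
        for name in sorted(files):
            path = os.path.join(root, name)
            digest.update(os.path.relpath(path, context_dir).encode())
            with open(path, 'rb') as f:
                digest.update(f.read())
    return digest.hexdigest()


# Illustrative usage: reuse the previously pushed image when the hash matches.
pushed_images = {}  # context hash -> image name (stand-in for the SDK's cache)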
* Moved the import to the top
* SDK - Moved the _container_builder from kfp.compiler to kfp.containers
This only moves the files. The imports remain the same for now.
* Simplified the imports.
* SDK - Containers - Build container image from current environment
* Removed the ability to capture the active python environment (as requested by @hongye-sun)
* Added the type hint and docstring for the return type.
* Renamed `build_image_from_env` function to `build_image_from_working_dir`
as requested by @hongye-sun
* Explained the function behavior in the documentation.
* Removed extra empty line
* Improved caching by copying python files only after installing python packages
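The layer-ordering trick, sketched as the kind of Dockerfile the builder could generate (illustrative only; the base image and file names are placeholders, not the SDK's actual output):

```python
# Installing requirements before copying the source tree means that editing
# .py files only invalidates the final COPY layer, not the pip-install layer.
dockerfile_text = '''\
FROM python:3.8
WORKDIR /workdir
COPY requirements.txt .
RUN python3 -m pip install -r requirements.txt
COPY . .
'''
```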
* Made test more portable
* Added support for specifying the base_image
`kfp.containers.default_base_image = ...`
The image can also be a callable returning the image name.
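For example (the image names below are placeholders):

```python
import kfp.containers

# Set a global default base image for image builds.
kfp.containers.default_base_image = 'gcr.io/my-project/my-base-image:latest'

# Per the note above, the value can also be a callable returning the image name.
kfp.containers.default_base_image = lambda: 'gcr.io/my-project/my-base-image:latest'
```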
* Renamed `get_python_image` to `get_python_image_for_current_version`
* Switched the default base image to the Google Deep Learning container image, as requested by @hongye-sun
The size of this image is 4.35 GB, which really concerns me. The GPU image is 6.45 GB.
* Stopped importing kfp.containers.* into kfp.*
* Fixed test
* Fixed the regex string
* Fixed the type annotation style
* Addressed @hongye-sun feedback
* Removed the container image size warning
* Fixed import failure