refactor `Components / Kubeflow Pipelines` section (#3063)

* refactor pipelines section

* fix links & move "introduction" to root

* revert rename of "TFX Compatibility Matrix"
Mathew Wicks 2021-11-24 11:37:47 +11:00 committed by GitHub
parent e7f5353024
commit 9397746ebe
49 changed files with 134 additions and 124 deletions


@@ -167,6 +167,16 @@ docs/started/requirements/ /docs/started/getting-started/
/docs/components/notebooks/troubleshoot /docs/components/notebooks/troubleshooting
/docs/components/notebooks/why-use-jupyter-notebook /docs/components/notebooks/overview
+# Refactor Pipelines section
+/docs/components/pipelines/caching /docs/components/pipelines/overview/caching
+/docs/components/pipelines/caching-v2 /docs/components/pipelines/overview/caching-v2
+/docs/components/pipelines/multi-user /docs/components/pipelines/overview/multi-user
+/docs/components/pipelines/pipeline-root /docs/components/pipelines/overview/pipeline-root
+/docs/components/pipelines/pipelines-overview /docs/components/pipelines/introduction
+/docs/components/pipelines/pipelines-quickstart /docs/components/pipelines/overview/quickstart
+/docs/components/pipelines/overview/concepts/* /docs/components/pipelines/concepts/:splat
+/docs/components/pipelines/sdk/v2/* /docs/components/pipelines/sdk-v2/:splat
# ===============
# IMPORTANT NOTE:
# Catch-all redirects should be added at the end of this file as redirects happen from top to bottom
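The ordering rule in this note can be made concrete with a small sketch. The resolver below is illustrative only (not the actual redirect engine) and uses two of the rules added in this commit; matching runs top to bottom, the first match wins, and `:splat` receives whatever `*` matched, which is why catch-alls must sit at the end of the file:

```python
# Toy first-match redirect resolver for an ordered rules file with
# "*" / ":splat" patterns. Illustrative only -- not the real engine.

RULES = [
    ("/docs/components/pipelines/pipelines-overview",
     "/docs/components/pipelines/introduction"),
    ("/docs/components/pipelines/overview/concepts/*",
     "/docs/components/pipelines/concepts/:splat"),
]

def resolve(path: str, rules=RULES) -> str:
    for pattern, target in rules:
        if pattern.endswith("/*"):
            prefix = pattern[:-1]          # keep the trailing slash
            if path.startswith(prefix):
                # ":splat" stands in for whatever "*" matched
                return target.replace(":splat", path[len(prefix):])
        elif path == pattern:
            return target
    return path                            # no rule matched: serve as-is

print(resolve("/docs/components/pipelines/overview/concepts/run/"))
```

A catch-all rule placed before the specific rules would match every path first and shadow everything after it.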


@@ -0,0 +1,5 @@
++++
+title = "Concepts"
+description = "Concepts used in Kubeflow Pipelines"
+weight = 30
++++


@@ -57,8 +57,8 @@ deserialize the data for use in the downstream component.
## Next steps
-* Read an [overview of Kubeflow Pipelines](/docs/components/pipelines/pipelines-overview/).
-* Follow the [pipelines quickstart guide](/docs/components/pipelines/pipelines-quickstart/)
+* Read an [overview of Kubeflow Pipelines](/docs/components/pipelines/introduction/).
+* Follow the [pipelines quickstart guide](/docs/components/pipelines/overview/quickstart/)
to deploy Kubeflow and run a sample pipeline directly from the Kubeflow
Pipelines UI.
* Build your own


@@ -8,11 +8,11 @@ weight = 40
An *experiment* is a workspace where you can try different configurations of
your pipelines. You can use experiments to organize your runs into logical
groups. Experiments can contain arbitrary runs, including
-[recurring runs](/docs/components/pipelines/overview/concepts/run/).
+[recurring runs](/docs/components/pipelines/concepts/run/).
## Next steps
-* Read an [overview of Kubeflow Pipelines](/docs/components/pipelines/overview/pipelines-overview/).
-* Follow the [pipelines quickstart guide](/docs/components/pipelines/pipelines-quickstart/)
+* Read an [overview of Kubeflow Pipelines](/docs/components/pipelines/introduction/).
+* Follow the [pipelines quickstart guide](/docs/components/pipelines/overview/quickstart/)
to deploy Kubeflow and run a sample pipeline directly from the Kubeflow
Pipelines UI.


@@ -24,7 +24,7 @@ parent contains a conditional clause.)
## Next steps
-* Read an [overview of Kubeflow Pipelines](/docs/components/pipelines/pipelines-overview/).
-* Follow the [pipelines quickstart guide](/docs/components/pipelines/pipelines-quickstart/)
+* Read an [overview of Kubeflow Pipelines](/docs/components/pipelines/introduction/).
+* Follow the [pipelines quickstart guide](/docs/components/pipelines/overview/quickstart/)
to deploy Kubeflow and run a sample pipeline directly from the Kubeflow
Pipelines UI.


@@ -15,8 +15,8 @@ data to rich interactive visualizations.
## Next steps
-* Read an [overview of Kubeflow Pipelines](/docs/components/pipelines/overview/pipelines-overview/).
-* Follow the [pipelines quickstart guide](/docs/components/pipelines/pipelines-quickstart/)
+* Read an [overview of Kubeflow Pipelines](/docs/components/pipelines/introduction/).
+* Follow the [pipelines quickstart guide](/docs/components/pipelines/overview/quickstart/)
to deploy Kubeflow and run a sample pipeline directly from the Kubeflow
Pipelines UI.
* Read more about the available


@@ -6,19 +6,19 @@ weight = 10
+++
A *pipeline* is a description of a machine learning (ML) workflow, including all
-of the [components](/docs/components/pipelines/overview/concepts/component/) in the workflow and how the components relate to each other in
-the form of a [graph](/docs/components/pipelines/overview/concepts/graph/). The pipeline
+of the [components](/docs/components/pipelines/concepts/component/) in the workflow and how the components relate to each other in
+the form of a [graph](/docs/components/pipelines/concepts/graph/). The pipeline
configuration includes the definition of the inputs (parameters) required to run
the pipeline and the inputs and outputs of each component.
When you run a pipeline, the system launches one or more Kubernetes Pods
-corresponding to the [steps](/docs/components/pipelines/overview/concepts/step/) (components) in your workflow (pipeline). The Pods
+corresponding to the [steps](/docs/components/pipelines/concepts/step/) (components) in your workflow (pipeline). The Pods
start Docker containers, and the containers in turn start your programs.
After developing your pipeline, you can upload your pipeline using the Kubeflow Pipelines UI or the Kubeflow Pipelines SDK.
## Next steps
-* Read an [overview of Kubeflow Pipelines](/docs/components/pipelines/overview/pipelines-overview/).
-* Follow the [pipelines quickstart guide](/docs/components/pipelines/pipelines-quickstart/)
+* Read an [overview of Kubeflow Pipelines](/docs/components/pipelines/introduction/).
+* Follow the [pipelines quickstart guide](/docs/components/pipelines/overview/quickstart/)
to deploy Kubeflow and run a sample pipeline directly from the Kubeflow
Pipelines UI.
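The graph semantics described on this page can be sketched with a toy scheduler: each step becomes runnable only after its upstream steps finish, which mirrors, at a very high level, how the backend launches one Pod per step in dependency order. The step names below are hypothetical and this is not the KFP engine:

```python
# Toy sketch of a pipeline as a dependency graph of steps.
# (Illustrative only -- a real system launches a container per step.)
from graphlib import TopologicalSorter

# Hypothetical pipeline: preprocess and validate both feed train,
# and evaluate depends on train.
graph = {
    "train": {"preprocess", "validate"},
    "evaluate": {"train"},
}

def run_step(name: str) -> str:
    # Stand-in for launching a Pod/container; we just record the call.
    return name

# static_order() yields every step after all of its predecessors.
order = list(TopologicalSorter(graph).static_order())
executed = [run_step(step) for step in order]
print(executed)
```

The order between `preprocess` and `validate` is unconstrained, just as independent steps in a real pipeline may run in parallel.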


@@ -15,7 +15,7 @@ available:
## Next steps
-* Read an [overview of Kubeflow Pipelines](/docs/components/pipelines/overview/pipelines-overview/).
-* Follow the [pipelines quickstart guide](/docs/components/pipelines/pipelines-quickstart/)
+* Read an [overview of Kubeflow Pipelines](/docs/components/pipelines/introduction/).
+* Follow the [pipelines quickstart guide](/docs/components/pipelines/overview/quickstart/)
to deploy Kubeflow and run a sample pipeline directly from the Kubeflow
Pipelines UI.


@@ -15,7 +15,7 @@ output artifacts, and logs for each step in the run.
A *recurring run*, or job in the Kubeflow Pipelines [backend APIs](https://github.com/kubeflow/pipelines/tree/06e4dc660498ce10793d566ca50b8d0425b39981/backend/api/go_http_client/job_client), is a repeatable run of
a pipeline. The configuration for a recurring run includes a copy of a pipeline
with all parameter values specified and a
-[run trigger](/docs/components/pipelines/overview/concepts/run-trigger/).
+[run trigger](/docs/components/pipelines/concepts/run-trigger/).
You can start a recurring run inside any experiment, and it will periodically
start a new copy of the run configuration. You can enable/disable the recurring
run from the Kubeflow Pipelines UI. You can also specify the maximum number of
@@ -25,7 +25,7 @@ triggered to run frequently.
## Next steps
-* Read an [overview of Kubeflow Pipelines](/docs/components/pipelines/pipelines-overview/).
-* Follow the [pipelines quickstart guide](/docs/components/pipelines/pipelines-quickstart/)
+* Read an [overview of Kubeflow Pipelines](/docs/components/pipelines/introduction/).
+* Follow the [pipelines quickstart guide](/docs/components/pipelines/overview/quickstart/)
to deploy Kubeflow and run a sample pipeline directly from the Kubeflow
Pipelines UI.


@@ -13,7 +13,7 @@ an if/else like clause in the pipeline code.
## Next steps
-* Read an [overview of Kubeflow Pipelines](/docs/components/pipelines/pipelines-overview/).
-* Follow the [pipelines quickstart guide](/docs/components/pipelines/pipelines-quickstart/)
+* Read an [overview of Kubeflow Pipelines](/docs/components/pipelines/introduction/).
+* Follow the [pipelines quickstart guide](/docs/components/pipelines/overview/quickstart/)
to deploy Kubeflow and run a sample pipeline directly from the Kubeflow
Pipelines UI.


@@ -1,5 +1,5 @@
+++
-title = "Installing Pipelines"
+title = "Installation"
description = "Options for installing Kubeflow Pipelines"
-weight = 15
+weight = 35
+++


@@ -1,7 +1,7 @@
+++
-title = "Argo Workflow Executors"
-description = "How to choose and configure the Argo Workflow Executor?"
-weight = 40
+title = "Choosing an Argo Workflows Executor"
+description = "How to choose an Argo Workflows Executor"
+weight = 80
+++
An Argo workflow executor is a process that conforms to a specific interface that allows Argo to perform certain actions like monitoring pod logs, collecting artifacts, managing container lifecycles, etc.
@@ -81,7 +81,7 @@ Pipelines test infrastructure has been running stably with the emissary executor
* Cannot escape the privileges of the pod's service account.
* Migration: `command` must be specified in [Kubeflow Pipelines component specification](https://www.kubeflow.org/docs/components/pipelines/reference/component-spec/).
-Note, the same migration requirement is required by [Kubeflow Pipelines v2 compatible mode](https://www.kubeflow.org/docs/components/pipelines/sdk/v2/v2-compatibility/), refer to
+Note, the same migration requirement is required by [Kubeflow Pipelines v2 compatible mode](https://www.kubeflow.org/docs/components/pipelines/sdk-v2/v2-compatibility/), refer to
[known caveats & breaking changes](https://github.com/kubeflow/pipelines/issues/6133).
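The `command` requirement above can be checked mechanically before migrating, since emissary cannot fall back to an image's default `ENTRYPOINT`. The sketch below is illustrative: the component specs are hypothetical dicts whose field layout follows the v1 component specification (`implementation.container.command`):

```python
# Toy pre-migration check: flag component specs whose container
# implementation does not set `command` explicitly.
# (Illustrative sketch, not a KFP tool; specs here are hypothetical.)

def missing_command(component_spec: dict) -> bool:
    container = component_spec.get("implementation", {}).get("container", {})
    return not container.get("command")

ok_spec = {
    "name": "Train model",  # hypothetical component
    "implementation": {"container": {"image": "org/train:1.0",
                                     "command": ["python", "train.py"]}},
}
bad_spec = {
    "name": "Legacy step",  # relies on the image ENTRYPOINT -- must migrate
    "implementation": {"container": {"image": "org/legacy:0.9"}},
}

print([missing_command(s) for s in (ok_spec, bad_spec)])
```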
#### Migrate to Emissary Executor


@@ -1,7 +1,7 @@
+++
title = "Compatibility Matrix"
description = "Kubeflow Pipelines compatibility matrix with TensorFlow Extended (TFX)"
-weight = 50
+weight = 100
+++
## Kubeflow Pipelines Backend and TFX compatibility


@@ -1,7 +1,7 @@
+++
-title = "Deploying Kubeflow Pipelines on a local cluster"
-description = "How to deploy Kubeflow Pipelines locally with kind, K3s, and K3ai for testing purposes"
-weight = 30
+title = "Local Deployment"
+description = "Information about local Deployment of Kubeflow Pipelines (kind, K3s, K3ai)"
+weight = 20
+++
This guide shows how to deploy Kubeflow Pipelines standalone on a local


@@ -1,5 +1,5 @@
+++
-title = "Installation Options for Kubeflow Pipelines"
+title = "Installation Options"
description = "Overview of the ways to deploy Kubeflow Pipelines"
weight = 10


@@ -1,7 +1,7 @@
+++
-title = "Kubeflow Pipelines Standalone Deployment"
-description = "Instructions to deploy Kubeflow Pipelines standalone to a cluster"
-weight = 20
+title = "Standalone Deployment"
+description = "Information about Standalone Deployment of Kubeflow Pipelines"
+weight = 30
+++
As an alternative to deploying Kubeflow Pipelines (KFP) as part of the


@@ -1,7 +1,7 @@
+++
title = "Upgrade Notes"
description = "Notices and breaking changes when you upgrade Kubeflow Pipelines Backend"
-weight = 35
+weight = 90
+++
This page introduces notices and breaking changes you need to know when upgrading Kubeflow Pipelines Backend.
@@ -24,4 +24,4 @@ For upgrade instructions, refer to distribution specific documentations:
For detailed configuration and migration instructions for both options, refer to [Argo Workflow Executors](https://www.kubeflow.org/docs/components/pipelines/installation/choose-executor/).
-* **Notice**: [Kubeflow Pipelines SDK v2 compatibility mode](/docs/components/pipelines/sdk/v2/v2-compatibility/) (Beta) was recently released. The new mode adds support for tracking pipeline runs and artifacts using ML Metadata. In v1.7 backend, complete UI support and caching capabilities for v2 compatibility mode are newly added. We welcome any [feedback](https://github.com/kubeflow/pipelines/issues/6451) on positive experiences or issues you encounter.
+* **Notice**: [Kubeflow Pipelines SDK v2 compatibility mode](/docs/components/pipelines/sdk-v2/v2-compatibility/) (Beta) was recently released. The new mode adds support for tracking pipeline runs and artifacts using ML Metadata. In v1.7 backend, complete UI support and caching capabilities for v2 compatibility mode are newly added. We welcome any [feedback](https://github.com/kubeflow/pipelines/issues/6451) on positive experiences or issues you encounter.


@@ -1,6 +1,6 @@
+++
-title = "Overview of Kubeflow Pipelines"
-description = "Understanding the goals and main concepts of Kubeflow Pipelines"
+title = "Introduction"
+description = "An introduction to the goals and main concepts of Kubeflow Pipelines"
weight = 10
+++
@@ -13,7 +13,7 @@ scalable machine learning (ML) workflows based on Docker containers.
## Quickstart
Run your first pipeline by following the
-[pipelines quickstart guide](/docs/components/pipelines/pipelines-quickstart).
+[pipelines quickstart guide](/docs/components/pipelines/overview/quickstart).
## What is Kubeflow Pipelines?
@@ -56,7 +56,7 @@ A _pipeline component_ is a self-contained set of user code, packaged as a
performs one step in the pipeline. For example, a component can be responsible
for data preprocessing, data transformation, model training, and so on.
-See the conceptual guides to [pipelines](/docs/components/pipelines/overview/concepts/pipeline/)
+See the conceptual guides to [pipelines](/docs/components/pipelines/concepts/pipeline/)
and [components](/docs/components/pipelines/concepts/component/).
## Example of a pipeline
@@ -275,7 +275,7 @@ At a high level, the execution of a pipeline proceeds as follows:
## Next steps
* Follow the
-[pipelines quickstart guide](/docs/components/pipelines/pipelines-quickstart) to
+[pipelines quickstart guide](/docs/components/pipelines/overview/quickstart) to
deploy Kubeflow and run a sample pipeline directly from the
Kubeflow Pipelines UI.
* Build machine-learning pipelines with the [Kubeflow Pipelines


@@ -1,5 +1,5 @@
+++
-title = "Understanding Pipelines"
-description = "Overview and concepts in Kubeflow Pipelines"
+title = "Overview"
+description = "Overview of Kubeflow Pipelines"
weight = 20
+++


@@ -1,13 +1,13 @@
+++
title = "Caching v2"
description = "Getting started with Kubeflow Pipelines caching v2"
-weight = 50
+weight = 41
+++
{{% beta-status
feedbacklink="https://github.com/kubeflow/pipelines/issues" %}}
-Starting from [Kubeflow Pipelines SDK v2](https://www.kubeflow.org/docs/components/pipelines/sdk/v2/) and Kubeflow Pipelines 1.7.0, Kubeflow Pipelines supports step caching capabilities in both [standalone deployment](https://www.kubeflow.org/docs/components/pipelines/installation/standalone-deployment/) and [AI platform Pipelines](https://cloud.google.com/ai-platform/pipelines/docs).
+Starting from [Kubeflow Pipelines SDK v2](https://www.kubeflow.org/docs/components/pipelines/sdk-v2/) and Kubeflow Pipelines 1.7.0, Kubeflow Pipelines supports step caching capabilities in both [standalone deployment](https://www.kubeflow.org/docs/components/pipelines/installation/standalone-deployment/) and [AI platform Pipelines](https://cloud.google.com/ai-platform/pipelines/docs).
## Before you start
This guide tells you the basic concepts of Kubeflow Pipelines caching and how to use it.
@@ -17,7 +17,7 @@ guide](/docs/components/pipelines/installation/) to deploy Kubeflow Pipelines.
## What is step caching?
Kubeflow Pipelines caching provides step-level output caching, a process that helps to reduce costs by skipping computations that were completed in a previous pipeline run.
-Caching is enabled by default for all tasks of pipelines built with [Kubeflow Pipelines SDK v2](https://www.kubeflow.org/docs/components/pipelines/sdk/v2/) using `kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE` mode.
+Caching is enabled by default for all tasks of pipelines built with [Kubeflow Pipelines SDK v2](https://www.kubeflow.org/docs/components/pipelines/sdk-v2/) using `kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE` mode.
When Kubeflow Pipeline runs a pipeline, it checks to see whether
an execution exists in Kubeflow Pipeline with the interface of each pipeline task.
The task's interface is defined as the combination of the pipeline task specification (base image, command, args), the pipeline task's inputs (the name and id of artifacts, the name and value of parameters),
@@ -33,6 +33,6 @@ If there is a matching execution in Kubeflow Pipelines, the outputs of that exec
## Disabling/enabling caching
-Cache is enabled by default with [Kubeflow Pipelines SDK v2](https://www.kubeflow.org/docs/components/pipelines/sdk/v2/) using `kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE` mode.
+Cache is enabled by default with [Kubeflow Pipelines SDK v2](https://www.kubeflow.org/docs/components/pipelines/sdk-v2/) using `kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE` mode.
You can turn off execution caching for pipeline runs that are created using Python. When you run a pipeline using [create_run_from_pipeline_func](https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.client.html#kfp.Client.create_run_from_pipeline_func) or [create_run_from_pipeline_package](https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.client.html#kfp.Client.create_run_from_pipeline_package) or [run_pipeline](https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.client.html#kfp.Client.run_pipeline,) you can use the `enable_caching` argument to specify that this pipeline run does not use caching.
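The matching rule described in this section (task specification plus inputs) can be sketched as a content-addressed lookup. This toy version fingerprints only image, command, args, and inputs; the real key also covers artifact ids and output specifications, and the real store lives in ML Metadata:

```python
# Toy sketch of step-level output caching: fingerprint the task's
# interface and reuse a prior result when the fingerprint matches.
# (Illustrative only -- not the KFP caching implementation.)
import hashlib, json

def cache_key(image, command, args, inputs):
    interface = {"image": image, "command": command,
                 "args": args, "inputs": inputs}
    blob = json.dumps(interface, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

cache = {}

def run_task(image, command, args, inputs):
    key = cache_key(image, command, args, inputs)
    if key in cache:                      # cache hit: skip the computation
        return cache[key], True
    result = f"output-of-{inputs['x']}"   # stand-in for running the container
    cache[key] = result
    return result, False

print(run_task("org/step:1.0", ["python", "step.py"], [], {"x": 1}))
print(run_task("org/step:1.0", ["python", "step.py"], [], {"x": 1}))
print(run_task("org/step:1.0", ["python", "step.py"], [], {"x": 2}))
```

The second call hits the cache because its interface is identical; changing any input produces a new key and a fresh execution.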


@@ -1,7 +1,7 @@
+++
title = "Caching"
description = "Getting started with Kubeflow Pipelines step caching"
-weight = 50
+weight = 40
+++
{{% alpha-status


@@ -1,5 +0,0 @@
-+++
-title = "Concepts"
-description = "Understand the terminology used in Kubeflow Pipelines"
-weight = 40
-+++


@@ -1,5 +1,5 @@
+++
-title = "Introduction to the Pipelines Interfaces"
+title = "Pipelines Interfaces"
description = "The ways you can interact with the Kubeflow Pipelines system"
weight = 20
@@ -25,15 +25,15 @@ From the Kubeflow Pipelines UI you can perform the following tasks:
that someone has shared with you.
* Create an *experiment* to group one or more of your pipeline runs.
See the [definition of an
-experiment](/docs/components/pipelines/overview/concepts/experiment/).
+experiment](/docs/components/pipelines/concepts/experiment/).
* Create and start a *run* within the experiment. A run is a single execution
of a pipeline. See the [definition of a
-run](/docs/components/pipelines/overview/concepts/run/).
+run](/docs/components/pipelines/concepts/run/).
* Explore the configuration, graph, and output of your pipeline run.
* Compare the results of one or more runs within an experiment.
* Schedule runs by creating a recurring run.
-See the [quickstart guide](/docs/components/pipelines/pipelines-quickstart/) for more
+See the [quickstart guide](/docs/components/pipelines/overview/quickstart/) for more
information about accessing the Kubeflow Pipelines UI and running the samples.
When building a pipeline component, you can write out information for display


@@ -1,7 +1,7 @@
+++
title = "Multi-user Isolation for Pipelines"
description = "Getting started with Kubeflow Pipelines multi-user isolation"
-weight = 35
+weight = 30
+++
Multi-user isolation for Kubeflow Pipelines is an integration to [Kubeflow multi-user isolation](/docs/components/multi-tenancy/).


@@ -7,7 +7,7 @@ weight = 50
{{% beta-status
feedbacklink="https://github.com/kubeflow/pipelines/issues" %}}
-Starting from [Kubeflow Pipelines SDK v2](https://www.kubeflow.org/docs/components/pipelines/sdk/v2/) and Kubeflow Pipelines 1.7.0, Kubeflow Pipelines supports a new intermediate artifact repository feature: pipeline root in both [standalone deployment](https://www.kubeflow.org/docs/components/pipelines/installation/standalone-deployment/) and [AI platform Pipelines](https://cloud.google.com/ai-platform/pipelines/docs).
+Starting from [Kubeflow Pipelines SDK v2](https://www.kubeflow.org/docs/components/pipelines/sdk-v2/) and Kubeflow Pipelines 1.7.0, Kubeflow Pipelines supports a new intermediate artifact repository feature: pipeline root in both [standalone deployment](https://www.kubeflow.org/docs/components/pipelines/installation/standalone-deployment/) and [AI platform Pipelines](https://cloud.google.com/ai-platform/pipelines/docs).
## Before you start
This guide tells you the basic concepts of Kubeflow Pipelines pipeline root and how to use it.
@@ -56,7 +56,7 @@ kubectl edit configMap kfp-launcher -n ${namespace}
This pipeline root will be the default pipeline root for all pipelines running in the Kubernetes namespace unless you override it using one of the following options:
#### Via Building Pipelines
-You can configure a pipeline root through the `kfp.dsl.pipeline` annotation when [building pipelines](https://www.kubeflow.org/docs/components/pipelines/sdk/v2/build-pipeline/#build-your-pipeline)
+You can configure a pipeline root through the `kfp.dsl.pipeline` annotation when [building pipelines](https://www.kubeflow.org/docs/components/pipelines/sdk-v2/build-pipeline/#build-your-pipeline)
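The override order in this section can be sketched as a simple precedence chain: the most specific setting wins, falling back to the namespace default from the `kfp-launcher` ConfigMap. This is an illustrative sketch with hypothetical values, and the exact precedence between the pipeline annotation and a submit-time argument is my reading of these docs rather than something they state outright:

```python
# Toy sketch of pipeline-root resolution: submit-time argument beats the
# pipeline definition's annotation, which beats the namespace default.
# (Illustrative only; values below are hypothetical.)

def effective_pipeline_root(namespace_default=None,
                            pipeline_annotation=None,
                            submit_argument=None):
    # Most specific wins; fall through to the broader settings.
    for value in (submit_argument, pipeline_annotation, namespace_default):
        if value:
            return value
    return None  # backend falls back to its own built-in default

print(effective_pipeline_root(namespace_default="minio://mlpipeline/v2",
                              pipeline_annotation="gs://team-bucket/root"))
```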
#### Via Submitting a Pipeline through SDK
You can configure pipeline root via `pipeline_root` argument when you submit a Pipeline using one of the following:


@@ -1,5 +1,5 @@
+++
-title = "Pipelines Quickstart"
+title = "Quickstart"
description = "Getting started with Kubeflow Pipelines"
weight = 10


@@ -1,5 +1,5 @@
+++
title = "Reference"
description = "Reference docs for Kubeflow Pipelines"
-weight = 70
+weight = 100
+++


@@ -0,0 +1,5 @@
++++
+title = "Pipelines SDK (v2)"
+description = "Information about the Kubeflow Pipelines SDK (v2)"
+weight = 41
++++


@@ -22,7 +22,7 @@
"\n",
"[Learn more about Pipelines SDK v2][kfpv2].\n",
"\n",
-"[kfpv2]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/v2-compatibility\n",
+"[kfpv2]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/v2-compatibility\n",
"\n",
"## Before you begin\n",
"\n",
@@ -270,9 +270,9 @@
"\n",
"[container-op]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.dsl.html#kfp.dsl.ContainerOp\n",
"[component-spec]: https://www.kubeflow.org/docs/components/pipelines/reference/component-spec/\n",
-"[python-function-component]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/python-function-components/\n",
-"[component-dev]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/component-development/\n",
-"[python-function-component-data-passing]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/python-function-components/#understanding-how-data-is-passed-between-components\n",
+"[python-function-component]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/python-function-components/\n",
+"[component-dev]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/component-development/\n",
+"[python-function-component-data-passing]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/python-function-components/#understanding-how-data-is-passed-between-components\n",
"[prebuilt-components]: https://www.kubeflow.org/docs/examples/shared-resources/\n"
]
},
@@ -412,7 +412,7 @@
"The following example shows the updated `merge_csv` function.\n",
"\n",
"[web-download-component]: https://github.com/kubeflow/pipelines/blob/master/components/web/Download/component.yaml\n",
-"[python-function-components]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/python-function-components/\n",
+"[python-function-components]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/python-function-components/\n",
"[input]: https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/dsl/io_types.py\n",
"[output]: https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/dsl/io_types.py\n",
"[dsl-component]: https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/dsl/_component.py"
@@ -557,7 +557,7 @@
"2. Upload and run your `pipeline.yaml` using the Kubeflow Pipelines user interface.\n",
"See the guide to [getting started with the UI][quickstart].\n",
"\n",
-"[quickstart]: https://www.kubeflow.org/docs/components/pipelines/pipelines-quickstart"
+"[quickstart]: https://www.kubeflow.org/docs/components/pipelines/overview/quickstart"
]
},
{


@@ -5,7 +5,7 @@ weight = 30
+++
<!--
-AUTOGENERATED FROM content/en/docs/components/pipelines/sdk/v2/build-pipeline.ipynb
+AUTOGENERATED FROM content/en/docs/components/pipelines/sdk-v2/build-pipeline.ipynb
PLEASE UPDATE THE JUPYTER NOTEBOOK AND REGENERATE THIS FILE USING scripts/nb_to_md.py.-->
<style>
@@ -26,8 +26,8 @@ background-position: left center;
}
</style>
<div class="notebook-links">
-<a class="colab-link" href="https://colab.research.google.com/github/kubeflow/website/blob/master/content/en/docs/components/pipelines/sdk/v2/build-pipeline.ipynb">Run in Google Colab</a>
-<a class="github-link" href="https://github.com/kubeflow/website/blob/master/content/en/docs/components/pipelines/sdk/v2/build-pipeline.ipynb">View source on GitHub</a>
+<a class="colab-link" href="https://colab.research.google.com/github/kubeflow/website/blob/master/content/en/docs/components/pipelines/sdk-v2/build-pipeline.ipynb">Run in Google Colab</a>
+<a class="github-link" href="https://github.com/kubeflow/website/blob/master/content/en/docs/components/pipelines/sdk-v2/build-pipeline.ipynb">View source on GitHub</a>
</div>
@@ -44,7 +44,7 @@ building and running pipelines that are compatible with the Pipelines SDK v2.
[Learn more about Pipelines SDK v2][kfpv2].
-[kfpv2]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/v2-compatibility
+[kfpv2]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/v2-compatibility
## Before you begin
@@ -266,9 +266,9 @@ when designing a pipeline.
[container-op]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.dsl.html#kfp.dsl.ContainerOp
[component-spec]: https://www.kubeflow.org/docs/components/pipelines/reference/component-spec/
-[python-function-component]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/python-function-components/
-[component-dev]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/component-development/
-[python-function-component-data-passing]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/python-function-components/#understanding-how-data-is-passed-between-components
+[python-function-component]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/python-function-components/
+[component-dev]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/component-development/
+[python-function-component-data-passing]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/python-function-components/#understanding-how-data-is-passed-between-components
[prebuilt-components]: https://www.kubeflow.org/docs/examples/shared-resources/
@@ -374,7 +374,7 @@ Learn more about [building Python function-based components][python-function-com
The following example shows the updated `merge_csv` function.
[web-download-component]: https://github.com/kubeflow/pipelines/blob/master/components/web/Download/component.yaml
-[python-function-components]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/python-function-components/
+[python-function-components]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/python-function-components/
[input]: https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/dsl/io_types.py
[output]: https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/dsl/io_types.py
[dsl-component]: https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/dsl/_component.py
@@ -470,7 +470,7 @@ kfp.compiler.Compiler(mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE).compile(
2. Upload and run your `pipeline.yaml` using the Kubeflow Pipelines user interface.
See the guide to [getting started with the UI][quickstart].
-[quickstart]: https://www.kubeflow.org/docs/components/pipelines/pipelines-quickstart
+[quickstart]: https://www.kubeflow.org/docs/components/pipelines/overview/quickstart
#### Option 2: run the pipeline using Kubeflow Pipelines SDK client
@@ -510,6 +510,6 @@ client.create_run_from_pipeline_func(
<div class="notebook-links">
-<a class="colab-link" href="https://colab.research.google.com/github/kubeflow/website/blob/master/content/en/docs/components/pipelines/sdk/v2/build-pipeline.ipynb">Run in Google Colab</a>
-<a class="github-link" href="https://github.com/kubeflow/website/blob/master/content/en/docs/components/pipelines/sdk/v2/build-pipeline.ipynb">View source on GitHub</a>
+<a class="colab-link" href="https://colab.research.google.com/github/kubeflow/website/blob/master/content/en/docs/components/pipelines/sdk-v2/build-pipeline.ipynb">Run in Google Colab</a>
+<a class="github-link" href="https://github.com/kubeflow/website/blob/master/content/en/docs/components/pipelines/sdk-v2/build-pipeline.ipynb">View source on GitHub</a>
</div>


@@ -14,7 +14,7 @@ building and running pipelines that are compatible with the Pipelines SDK v2.
[Learn more about Pipelines SDK v2][kfpv2].
-[kfpv2]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/v2-compatibility/
+[kfpv2]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/v2-compatibility/
## Before you begin
@@ -592,7 +592,7 @@ See this [sample component][org-sample] for a real-life component example.
[best practices](/docs/components/pipelines/sdk/best-practices) for designing and
writing components.
* For quick iteration,
-[build lightweight Python function-based components](/docs/components/pipelines/sdk/v2/python-function-components/)
+[build lightweight Python function-based components](/docs/components/pipelines/sdk-v2/python-function-components/)
directly from Python functions.
* Use SDK APIs to visualize pipeline result, follow
[Visualize Results in the Pipelines UI](/docs/components/pipelines/sdk/output-viewer/#v2-visualization)


@@ -29,7 +29,7 @@
"\n",
"[Learn more about Pipelines SDK v2][kfpv2].\n",
"\n",
-"[kfpv2]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/v2-compatibility/\n",
+"[kfpv2]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/v2-compatibility/\n",
"\n",
"[component-spec]: https://www.kubeflow.org/docs/components/pipelines/reference/component-spec/\n",
"\n",
@@ -131,7 +131,7 @@
"source": [
"2. Create and run your pipeline. [Learn more about creating and running pipelines][build-pipelines].\n",
"\n",
-"[build-pipelines]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/build-pipeline/#compile-and-run-your-pipeline"
+"[build-pipelines]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/build-pipeline/#compile-and-run-your-pipeline"
]
},
{
@@ -169,7 +169,7 @@
"source": [
"3. Compile and run your pipeline. [Learn more about compiling and running pipelines][build-pipelines].\n",
"\n",
-"[build-pipelines]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/build-pipeline/#compile-and-run-your-pipeline"
+"[build-pipelines]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/build-pipeline/#compile-and-run-your-pipeline"
]
},
{
@@ -574,7 +574,7 @@
"source": [
"3. Compile and run your pipeline. [Learn more about compiling and running pipelines][build-pipelines].\n",
"\n",
-"[build-pipelines]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/build-pipeline/#compile-and-run-your-pipeline"
+"[build-pipelines]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/build-pipeline/#compile-and-run-your-pipeline"
]
},
{


@ -5,7 +5,7 @@ weight = 50
+++
<!--
AUTOGENERATED FROM content/en/docs/components/pipelines/sdk/v2/python-function-components.ipynb
AUTOGENERATED FROM content/en/docs/components/pipelines/sdk-v2/python-function-components.ipynb
PLEASE UPDATE THE JUPYTER NOTEBOOK AND REGENERATE THIS FILE USING scripts/nb_to_md.py.-->
<style>
@ -26,8 +26,8 @@ background-position: left center;
}
</style>
<div class="notebook-links">
<a class="colab-link" href="https://colab.research.google.com/github/kubeflow/website/blob/master/content/en/docs/components/pipelines/sdk/v2/python-function-components.ipynb">Run in Google Colab</a>
<a class="github-link" href="https://github.com/kubeflow/website/blob/master/content/en/docs/components/pipelines/sdk/v2/python-function-components.ipynb">View source on GitHub</a>
<a class="colab-link" href="https://colab.research.google.com/github/kubeflow/website/blob/master/content/en/docs/components/pipelines/sdk-v2/python-function-components.ipynb">Run in Google Colab</a>
<a class="github-link" href="https://github.com/kubeflow/website/blob/master/content/en/docs/components/pipelines/sdk-v2/python-function-components.ipynb">View source on GitHub</a>
</div>
@@ -53,7 +53,7 @@ building and running pipelines that are compatible with the Pipelines SDK v2.
[Learn more about Pipelines SDK v2][kfpv2].
[kfpv2]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/v2-compatibility/
[kfpv2]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/v2-compatibility/
[component-spec]: https://www.kubeflow.org/docs/components/pipelines/reference/component-spec/
@@ -119,7 +119,7 @@ def add(a: float, b: float) -> float:
2. Create and run your pipeline. [Learn more about creating and running pipelines][build-pipelines].
[build-pipelines]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/build-pipeline/#compile-and-run-your-pipeline
[build-pipelines]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/build-pipeline/#compile-and-run-your-pipeline
```python
@@ -148,7 +148,7 @@ arguments = {'a': 7, 'b': 8}
3. Compile and run your pipeline. [Learn more about compiling and running pipelines][build-pipelines].
[build-pipelines]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/build-pipeline/#compile-and-run-your-pipeline
[build-pipelines]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/build-pipeline/#compile-and-run-your-pipeline
```python
@@ -504,7 +504,7 @@ def calc_pipeline(
3. Compile and run your pipeline. [Learn more about compiling and running pipelines][build-pipelines].
[build-pipelines]: https://www.kubeflow.org/docs/components/pipelines/sdk/v2/build-pipeline/#compile-and-run-your-pipeline
[build-pipelines]: https://www.kubeflow.org/docs/components/pipelines/sdk-v2/build-pipeline/#compile-and-run-your-pipeline
```python
@@ -520,6 +520,6 @@
<div class="notebook-links">
<a class="colab-link" href="https://colab.research.google.com/github/kubeflow/website/blob/master/content/en/docs/components/pipelines/sdk/v2/python-function-components.ipynb">Run in Google Colab</a>
<a class="github-link" href="https://github.com/kubeflow/website/blob/master/content/en/docs/components/pipelines/sdk/v2/python-function-components.ipynb">View source on GitHub</a>
<a class="colab-link" href="https://colab.research.google.com/github/kubeflow/website/blob/master/content/en/docs/components/pipelines/sdk-v2/python-function-components.ipynb">Run in Google Colab</a>
<a class="github-link" href="https://github.com/kubeflow/website/blob/master/content/en/docs/components/pipelines/sdk-v2/python-function-components.ipynb">View source on GitHub</a>
</div>
@@ -151,8 +151,8 @@ client.create_run_from_pipeline_func(
Kubeflow Pipelines v2 compatible mode is currently in Beta. It is under active development and some features may not be complete. For its current caveats, refer to [v2 compatible mode -- known caveats & breaking changes #6133](https://github.com/kubeflow/pipelines/issues/6133).
[build-pipeline]: /docs/components/pipelines/sdk/v2/build-pipeline/
[build-component]: /docs/components/pipelines/sdk/v2/component-development/
[python-component]: /docs/components/pipelines/sdk/v2/python-function-components/
[build-pipeline]: /docs/components/pipelines/sdk-v2/build-pipeline/
[build-component]: /docs/components/pipelines/sdk-v2/component-development/
[python-component]: /docs/components/pipelines/sdk-v2/python-function-components/
[kfp-client]: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.client.html#kfp.Client
[connect-api]: /docs/components/pipelines/sdk/connect-api/
@@ -1,5 +1,5 @@
+++
title = "Building Pipelines with the SDK"
description = "Use the Kubeflow Pipelines SDK to build components and pipelines"
weight = 30
title = "Pipelines SDK"
description = "Information about the Kubeflow Pipelines SDK"
weight = 40
+++
@@ -516,7 +516,7 @@
"2. Upload and run your `pipeline.yaml` using the Kubeflow Pipelines user interface.\n",
"See the guide to [getting started with the UI][quickstart].\n",
"\n",
"[quickstart]: https://www.kubeflow.org/docs/components/pipelines/pipelines-quickstart"
"[quickstart]: https://www.kubeflow.org/docs/components/pipelines/overview/quickstart"
]
},
{
@@ -415,7 +415,7 @@ kfp.compiler.Compiler().compile(
2. Upload and run your `pipeline.yaml` using the Kubeflow Pipelines user interface.
See the guide to [getting started with the UI][quickstart].
[quickstart]: https://www.kubeflow.org/docs/components/pipelines/pipelines-quickstart
[quickstart]: https://www.kubeflow.org/docs/components/pipelines/overview/quickstart
#### Option 2: run the pipeline using Kubeflow Pipelines SDK client
@@ -117,5 +117,5 @@ The response should be something like this:
* [See how to use the SDK](/docs/components/pipelines/sdk/sdk-overview/).
* [Build a component and a pipeline](/docs/components/pipelines/sdk/component-development/).
* [Get started with the UI](/docs/components/pipelines/pipelines-quickstart).
* [Get started with the UI](/docs/components/pipelines/overview/quickstart).
* [Understand pipeline concepts](/docs/components/pipelines/concepts/).
@@ -57,7 +57,7 @@ See the [sample description and links below](#example-source).
<a id="v2-visualization"></a>
## v2 SDK: Use SDK visualization APIs
For KFP [SDK v2 and v2 compatible mode](/docs/components/pipelines/sdk/v2/), you can use
For KFP [SDK v2 and v2 compatible mode](/docs/components/pipelines/sdk-v2/), you can use
convenient SDK APIs and system artifact types for metrics visualization. Currently KFP
supports ROC Curve, Confusion Matrix and Scalar Metrics formats. Full pipeline example
of all metrics visualizations can be found in [metrics_visualization_v2.py](https://github.com/kubeflow/pipelines/blob/master/samples/test/metrics_visualization_v2.py).
@@ -65,7 +65,7 @@ of all metrics visualizations can be found in [metrics_visualization_v2.py](http
### Requirements
* Use Kubeflow Pipelines v1.7.0 or above: [upgrade Kubeflow Pipelines](/docs/components/pipelines/installation/standalone-deployment/#upgrading-kubeflow-pipelines).
* Use `kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE` mode when you [compile and run your pipelines](/docs/components/pipelines/sdk/v2/build-pipeline/#compile-and-run-your-pipeline).
* Use `kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE` mode when you [compile and run your pipelines](/docs/components/pipelines/sdk-v2/build-pipeline/#compile-and-run-your-pipeline).
* Make sure to use the latest environment kustomize manifest [pipelines/manifests/kustomize/env/dev/kustomization.yaml](https://github.com/kubeflow/pipelines/blob/master/manifests/kustomize/env/dev/kustomization.yaml).
@@ -276,7 +276,7 @@ def html_visualization(html_artifact: Output[HTML]):
The metric visualization in V2 or V2 compatible mode depends on the SDK visualization APIs;
refer to [metrics_visualization_v2.py](https://github.com/kubeflow/pipelines/blob/master/samples/test/metrics_visualization_v2.py)
for a complete pipeline example. Follow the instructions in
[Compile and run your pipeline](/docs/components/pipelines/sdk/v2/build-pipeline/#compile-and-run-your-pipeline)
[Compile and run your pipeline](/docs/components/pipelines/sdk-v2/build-pipeline/#compile-and-run-your-pipeline)
to compile in V2 compatible mode.
## v1 SDK: Writing out metadata for the output viewers
@@ -735,7 +735,7 @@ pre-installed when you deploy Kubeflow.
You can run the sample by selecting
**[Sample] ML - TFX - Taxi Tip Prediction Model Trainer** from the
Kubeflow Pipelines UI. For help getting started with the UI, follow the
[Kubeflow Pipelines quickstart](/docs/components/pipelines/pipelines-quickstart/).
[Kubeflow Pipelines quickstart](/docs/components/pipelines/overview/quickstart/).
<!--- TODO: Will replace the tfx cab with tfx oss when it is ready.-->
The pipeline uses a number of prebuilt, reusable components, including:
@@ -1,5 +0,0 @@
+++
title = "Kubeflow Pipelines SDK v2"
description = "Learn how to get started with Kubeflow Pipelines SDK v2"
weight = 5
+++
@@ -1,7 +1,7 @@
+++
title = "Troubleshooting"
description = "Finding and fixing problems in your Kubeflow Pipelines deployment"
weight = 65
description = "Troubleshooting guide for Kubeflow Pipelines"
weight = 90
+++
@@ -1,5 +1,5 @@
+++
title = "Samples and Tutorials"
description = "Try the samples and follow detailed tutorials for Kubeflow Pipelines"
weight = 60
description = "Samples and tutorials for Kubeflow Pipelines"
weight = 90
+++
@@ -64,7 +64,7 @@ dsl-compile --py ${DIR}/sequential.py --output ${DIR}/sequential.tar.gz
### Deploy the pipeline
Upload the generated `.tar.gz` file through the Kubeflow Pipelines UI. See the
guide to [getting started with the UI](/docs/components/pipelines/pipelines-quickstart).
guide to [getting started with the UI](/docs/components/pipelines/overview/quickstart).
## Building a pipeline in a Jupyter notebook
@@ -105,7 +105,7 @@ You can find more details on workload identity in the [GKE documentation](https:
### Authentication from Kubeflow Pipelines
Starting from Kubeflow v1.1, Kubeflow Pipelines [supports multi-user isolation](/docs/components/pipelines/multi-user/). Therefore, pipeline runs are executed in user namespaces, also using the `default-editor` KSA.
Starting from Kubeflow v1.1, Kubeflow Pipelines [supports multi-user isolation](/docs/components/pipelines/overview/multi-user/). Therefore, pipeline runs are executed in user namespaces, also using the `default-editor` KSA.
Additionally, the Kubeflow Pipelines UI, visualization, and TensorBoard server instances are deployed in your user namespace using the `default-editor` KSA. Therefore, to [visualize results in the Pipelines UI](/docs/components/pipelines/sdk/output-viewer/), these instances can fetch artifacts from Google Cloud Storage using the permissions of the same GSA you configured for this namespace.
@@ -79,7 +79,7 @@ You can also continue to use `use_gcp_secret` in a cluster with Workload Identit
#### Cluster setup to use Workload Identity for Full Kubeflow
Starting from Kubeflow 1.1, Kubeflow Pipelines [supports multi-user isolation](/docs/components/pipelines/multi-user/). Therefore, pipeline runs are executed in user namespaces using the `default-editor` KSA. The `default-editor` KSA is auto-bound to the GSA specified in the user profile, which defaults to a shared GSA `${KFNAME}-user@${PROJECT}.iam.gserviceaccount.com`.
Starting from Kubeflow 1.1, Kubeflow Pipelines [supports multi-user isolation](/docs/components/pipelines/overview/multi-user/). Therefore, pipeline runs are executed in user namespaces using the `default-editor` KSA. The `default-editor` KSA is auto-bound to the GSA specified in the user profile, which defaults to a shared GSA `${KFNAME}-user@${PROJECT}.iam.gserviceaccount.com`.
If you want to bind the `default-editor` KSA with a different GSA for a specific namespace, refer to the [In-cluster authentication to Google Cloud](/docs/gke/authentication/#in-cluster-authentication) guide.
@@ -108,5 +108,5 @@ authentication through IAP.
However, it fails authorization checks for Kubeflow Pipelines with multi-user
isolation in the full Kubeflow deployment starting from Kubeflow 1.1.
Multi-user isolation requires all API access to authenticate as a user. Refer to [Kubeflow Pipelines Multi-user isolation documentation](/docs/components/pipelines/multi-user/#in-cluster-request-authentication)
Multi-user isolation requires all API access to authenticate as a user. Refer to [Kubeflow Pipelines Multi-user isolation documentation](/docs/components/pipelines/overview/multi-user/#in-cluster-request-authentication)
for more details.
@@ -39,9 +39,9 @@ Once Feast is installed within the same Kubernetes cluster as Kubeflow, users ca
Feast APIs can roughly be grouped into the following sections:
* __Feature definition and management__: Feast provides both a [Python SDK](https://docs.feast.dev/getting-started/quickstart) and [CLI](https://docs.feast.dev/reference/feast-cli-commands) for interacting with Feast Core. Feast Core allows users to define and register features and entities, along with their associated metadata and schemas. The Python SDK is typically used from within a Jupyter notebook by end users to administer Feast, but ML teams may opt to version control feature specifications to follow a GitOps-based approach.
* __Model training__: The Feast Python SDK can be used to trigger the [creation of training datasets](https://docs.feast.dev/how-to-guides/feast-gcp-aws/build-a-training-dataset). The most natural place to use this SDK is to create a training dataset as part of a [Kubeflow Pipeline](/docs/components/pipelines/overview/pipelines-overview) prior to model training.
* __Model training__: The Feast Python SDK can be used to trigger the [creation of training datasets](https://docs.feast.dev/how-to-guides/feast-gcp-aws/build-a-training-dataset). The most natural place to use this SDK is to create a training dataset as part of a [Kubeflow Pipeline](/docs/components/pipelines/introduction) prior to model training.
* __Model serving__: The Feast Python SDK can also be used for [online feature retrieval](https://docs.feast.dev/how-to-guides/feast-gcp-aws/read-features-from-the-online-store). This client is used to retrieve feature values for inference with [Model Serving](/docs/components/pipelines/overview/pipelines-overview) systems like KFServing, TFX, or Seldon.
* __Model serving__: The Feast Python SDK can also be used for [online feature retrieval](https://docs.feast.dev/how-to-guides/feast-gcp-aws/read-features-from-the-online-store). This client is used to retrieve feature values for inference with [Model Serving](/docs/components/pipelines/introduction) systems like KFServing, TFX, or Seldon.
## Examples
@@ -93,7 +93,7 @@ To learn more, read the following guides to the Kubeflow components:
[Jupyter notebooks](/docs/components/notebooks/). Use notebooks for interactive data
science and experimenting with ML workflows.
* [Kubeflow Pipelines](/docs/components/pipelines/overview/pipelines-overview/) is a platform for
* [Kubeflow Pipelines](/docs/components/pipelines/introduction/) is a platform for
building, deploying, and managing multi-step ML workflows based on Docker
containers.