Change bigger to big, fix issues introduced by PR 170 (#173)
parent 171b920163
commit 81f578a602
@@ -43,7 +43,7 @@
"\n",
"Some examples of typical types of small data are: number, URL, small string (e.g. column name).\n",
"\n",
"Small lists, dictionaries and JSON structures are fine, but keep an eye on the size and consider switching to file-based data passing methods taht are more suitable for bigger data (more than several kilobytes) or binary data.\n",
"Small lists, dictionaries and JSON structures are fine, but keep an eye on the size and consider switching to file-based data passing methods that are more suitable for big data (more than several kilobytes) or binary data.\n",
"\n",
"All small data outputs will be at some point serialized to strings and all small data input values will be at some point deserialized from strings (passed as command-line argumants). There are built-in serializers and deserializers for several common types (e.g. `str`, `int`, `float`, `bool`, `list`, `dict`). All other types of data need to be serialized manually before returning the data. Make sure to properly specify type annotations, otherwize there would be no automatic deserialization and the component function will receive strings instead of deserialized objects."
]
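
For illustration (not part of this commit): the serialization behaviour described in the cell above can be sketched with a toy component. Only `func_to_container_op` and the typed-parameter behaviour come from the notebook; the component and argument names below are made up.

# Illustrative sketch only -- hypothetical component, not from the diff.
# With type annotations, KFP deserializes the command-line strings back into
# Python values before calling the function; without annotations the function
# would receive plain strings, as the cell above warns.
from kfp.components import func_to_container_op

@func_to_container_op
def add_numbers(a: int, b: int = 5) -> int:
    '''Add two small integer inputs and return a small integer output.'''
    return a + b
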
@@ -181,7 +181,7 @@
"🌡️ \u001b[4;1mStatus\r\n",
"\u001b[0m\r\n",
"STARTED DURATION STATUS\r\n",
"-206 milliseconds ago 5 seconds \u001b[92mSucceeded\u001b[0m\r\n",
"206 milliseconds ago 5 seconds \u001b[92mSucceeded\u001b[0m\r\n",
"\r\n",
"📦 \u001b[4;1mResources\r\n",
"\u001b[0m\r\n",
@@ -195,7 +195,7 @@
"🗂 \u001b[4;1mTaskruns\r\n",
"\u001b[0m\r\n",
" NAME TASK NAME STARTED DURATION STATUS\r\n",
" ∙ pipeline-parameter-to-consumer-pipeline-run-print-small-t-mr7fl print-small-text -206 milliseconds ago 4 seconds \u001b[92mSucceeded\u001b[0m\r\n"
" ∙ pipeline-parameter-to-consumer-pipeline-run-print-small-t-mr7fl print-small-text 206 milliseconds ago 4 seconds \u001b[92mSucceeded\u001b[0m\r\n"
]
}
],
@@ -490,9 +490,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Bigger data (files)\n",
"## big data (files)\n",
"\n",
"Bigger data should be read from files and written to files.\n",
"big data should be read from files and written to files.\n",
"\n",
"The paths for the input and output files are chosen by the system and are passed into the function (as strings).\n",
"\n",
@@ -504,7 +504,7 @@
"\n",
"Note on input/output names: When the function is converted to component, the input and output names generally follow the parameter names, but the \"\\_path\" and \"\\_file\" suffixes are stripped from file/path inputs and outputs. E.g. the `number_file_path: InputPath(int)` parameter becomes the `number: int` input. This makes the argument passing look more natural: `number=42` instead of `number_file_path=42`.\n",
"\n",
"Notes: As we used 'workspaces' in tekton pipelines to handle the bigger data processing, the complier will generator the PVC definations,it needs the volume to store the data.\n",
"Notes: As we used 'workspaces' in Tekton pipelines to handle big data processing, the compiler will generate the PVC definitions and needs the volume to store the data.\n",
"User need to create volume manually, or enable dynamic volume provisioning, refer to the link of:\n",
"https://kubernetes.io/docs/concepts/storage/dynamic-provisioning"
]
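
For illustration (not part of this commit): a minimal sketch of the "_path"/"_file" suffix stripping described in the cell above. The `InputPath`/`OutputPath` usage mirrors the `number_file_path: InputPath(int)` example; the component and parameter names are hypothetical.

# Illustrative sketch only -- hypothetical names.
from kfp.components import func_to_container_op, InputPath, OutputPath

@func_to_container_op
def double_number(number_file_path: InputPath(int), result_path: OutputPath(int)):
    '''Read a serialized int from a file and write its double to another file.'''
    with open(number_file_path) as reader:
        number = int(reader.read())
    with open(result_path, 'w') as writer:
        writer.write(str(number * 2))

# In a pipeline the stripped names are used: double_number(number=42) rather
# than double_number(number_file_path=42), and the output is exposed as
# "result" rather than "result_path".
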
@@ -514,7 +514,7 @@
"metadata": {},
"source": [
"\n",
"### Writing and reading bigger data"
"### Writing and reading big data"
]
},
{
@@ -523,7 +523,7 @@
"metadata": {},
"outputs": [],
"source": [
"# Writing bigger data\n",
"# Writing big data\n",
"@func_to_container_op\n",
"def repeat_line(line: str, output_text_path: OutputPath(str), count: int = 10):\n",
" '''Repeat the line specified number of times'''\n",
@@ -532,7 +532,7 @@
" writer.write(line + '\\n')\n",
"\n",
"\n",
"# Reading bigger data\n",
"# Reading big data\n",
"@func_to_container_op\n",
"def print_text(text_path: InputPath()): # The \"text\" input is untyped so that any data can be printed\n",
" '''Print text'''\n",
@@ -612,7 +612,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Processing bigger data"
"### Processing big data"
]
},
{
@@ -712,7 +712,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Processing bigger data with pre-opened files"
"### Processing big data with pre-opened files"
]
},
{
@@ -43,6 +43,9 @@ spec:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
annotation:
tekton.dev/input_artifacts: '{}'
tekton.dev/output_artifacts: '{}'
name: constant-to-consumer-pipeline-run
spec:
pipelineRef:
@@ -51,6 +51,9 @@ spec:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
annotation:
tekton.dev/input_artifacts: '{}'
tekton.dev/output_artifacts: '{}'
name: pipeline-parameter-to-consumer-pipeline-run
spec:
params:
@@ -101,6 +101,9 @@ spec:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
annotation:
tekton.dev/input_artifacts: '{}'
tekton.dev/output_artifacts: '{}'
name: print-repeating-lines-pipeline-run
spec:
pipelineRef:
@@ -68,13 +68,13 @@ spec:
- -u
- -c
- "def get_item_from_list(list_of_strings , index ) :\n return list_of_strings[index]\n\
\ndef _serialize_str(str_value: str) -> str:\n if not isinstance(str_value,\
\nimport json\ndef _serialize_str(str_value: str) -> str:\n if not isinstance(str_value,\
\ str):\n raise TypeError('Value \"{}\" has type \"{}\" instead of str.'.format(str(str_value),\
\ str(type(str_value))))\n return str_value\n\nimport json\nimport argparse\n\
_parser = argparse.ArgumentParser(prog='Get item from list', description='')\n\
_parser.add_argument(\"--list-of-strings\", dest=\"list_of_strings\", type=json.loads,\
\ required=True, default=argparse.SUPPRESS)\n_parser.add_argument(\"--index\"\
, dest=\"index\", type=int, required=True, default=argparse.SUPPRESS)\n_parser.add_argument(\"\
\ str(type(str_value))))\n return str_value\n\nimport argparse\n_parser =\
\ argparse.ArgumentParser(prog='Get item from list', description='')\n_parser.add_argument(\"\
--list-of-strings\", dest=\"list_of_strings\", type=json.loads, required=True,\
\ default=argparse.SUPPRESS)\n_parser.add_argument(\"--index\", dest=\"index\"\
, type=int, required=True, default=argparse.SUPPRESS)\n_parser.add_argument(\"\
----output-paths\", dest=\"_output_paths\", type=str, nargs=1)\n_parsed_args\
\ = vars(_parser.parse_args())\n_output_files = _parsed_args.pop(\"_output_paths\"\
, [])\n\n_outputs = get_item_from_list(**_parsed_args)\n\n_outputs = [_outputs]\
@@ -150,6 +150,9 @@ spec:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
annotation:
tekton.dev/input_artifacts: '{}'
tekton.dev/output_artifacts: '{}'
name: processing-pipeline-run
spec:
params:
@@ -54,13 +54,13 @@ spec:
- -u
- -c
- "def produce_two_small_outputs() :\n return (\"data 1\", 42)\n\ndef\
\ _serialize_str(str_value: str) -> str:\n if not isinstance(str_value, str):\n\
\ raise TypeError('Value \"{}\" has type \"{}\" instead of str.'.format(str(str_value),\
\ str(type(str_value))))\n return str_value\n\ndef _serialize_int(int_value:\
\ int) -> str:\n if isinstance(int_value, str):\n return int_value\n\
\ if not isinstance(int_value, int):\n raise TypeError('Value \"{}\"\
\ has type \"{}\" instead of int.'.format(str(int_value), str(type(int_value))))\n\
\ return str(int_value)\n\nimport argparse\n_parser = argparse.ArgumentParser(prog='Produce\
\ _serialize_int(int_value: int) -> str:\n if isinstance(int_value, str):\n\
\ return int_value\n if not isinstance(int_value, int):\n raise\
\ TypeError('Value \"{}\" has type \"{}\" instead of int.'.format(str(int_value),\
\ str(type(int_value))))\n return str(int_value)\n\ndef _serialize_str(str_value:\
\ str) -> str:\n if not isinstance(str_value, str):\n raise TypeError('Value\
\ \"{}\" has type \"{}\" instead of str.'.format(str(str_value), str(type(str_value))))\n\
\ return str_value\n\nimport argparse\n_parser = argparse.ArgumentParser(prog='Produce\
\ two small outputs', description='')\n_parser.add_argument(\"----output-paths\"\
, dest=\"_output_paths\", type=str, nargs=2)\n_parsed_args = vars(_parser.parse_args())\n\
_output_files = _parsed_args.pop(\"_output_paths\", [])\n\n_outputs = produce_two_small_outputs(**_parsed_args)\n\
@@ -218,6 +218,9 @@ spec:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
annotation:
tekton.dev/input_artifacts: '{}'
tekton.dev/output_artifacts: '{}'
name: producers-to-consumers-pipeline-run
spec:
params:
@@ -191,6 +191,9 @@ spec:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
annotation:
tekton.dev/input_artifacts: '{}'
tekton.dev/output_artifacts: '{}'
name: sum-pipeline-run
spec:
params:
@@ -120,6 +120,9 @@ spec:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
annotation:
tekton.dev/input_artifacts: '{}'
tekton.dev/output_artifacts: '{}'
name: task-output-to-consumer-pipeline-run
spec:
pipelineRef:
@@ -177,6 +177,9 @@ spec:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
annotation:
tekton.dev/input_artifacts: '{}'
tekton.dev/output_artifacts: '{}'
name: text-splitting-pipeline-run
spec:
pipelineRef:
@@ -178,6 +178,9 @@ spec:
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
annotation:
tekton.dev/input_artifacts: '{}'
tekton.dev/output_artifacts: '{}'
name: text-splitting-pipeline2-run
spec:
pipelineRef:
@@ -61,9 +61,9 @@ def fix_big_data_passing(
3. Propagate the consumption information upstream to all inputs/outputs all
the way up to the data producers.
4. Convert the inputs, outputs based on how they're consumed downstream.
5. Use workspaces instead of result and params for bigger data passing.
6. Added workspaces to tasks, pipelines, pipelineruns, if the parmas is bigger data.
7. A PVC named with pipelinerun name will be created if bigger data passing, as workspaces need to use it.
5. Use workspaces instead of result and params for big data passing.
6. Added workspaces to tasks, pipelines, pipelineruns, if the parmas is big data.
7. A PVC named with pipelinerun name will be created if big data is passed, as workspaces need to use it.
User need to define proper volume or enable dynamic volume provisioning refer to the link of:
https://kubernetes.io/docs/concepts/storage/dynamic-provisioning
"""
@@ -72,7 +72,8 @@ def fix_big_data_passing(
resource_templates = []
for template in workflow:
resource_params = [
param.get('name') for param in template.get('spec', {}).get('params', [])
param.get('name')
for param in template.get('spec', {}).get('params', [])
if param.get('name') == 'action'
or param.get('name') == 'success-condition'
]
@@ -84,9 +85,8 @@ def fix_big_data_passing(
for template in resource_templates)

container_templates = [
template for template in workflow
if template['kind'] == 'Task' and template.get('metadata', {}).get(
'name') not in resource_template_names
template for template in workflow if template['kind'] == 'Task' and
template.get('metadata', {}).get('name') not in resource_template_names
]

pipeline_templates = [
@@ -339,7 +339,7 @@ def fix_big_data_passing(
outputs_consumed_as_parameters)

# 4. Convert the inputs, outputs and arguments based on how they're consumed downstream.
# Add workspaces to pipeline and pipeline task_ref if bigger data passing
# Add workspaces to pipeline and pipeline task_ref if big data passing
pipeline_workspaces = set()
pipelinerun_workspaces = set()
output_tasks_consumed_as_artifacts = {
@@ -353,24 +353,24 @@ def fix_big_data_passing(
pipeline, inputs_consumed_as_artifacts,
output_tasks_consumed_as_artifacts)

# Add workspaces to pipelinerun if bigger data passing
# Add workspaces to pipelinerun if big data passing
# Check whether pipelinerun was generated, through error if not.
if pipeline_workspaces:
if not pipelinerun_templates:
raise AssertionError(
'Found bigger data passing, please enable generate_pipelinerun for your complier'
'Found big data passing, please enable generate_pipelinerun for your complier'
)
for pipelinerun in pipelinerun_templates:
pipeline, pipelinerun_workspaces = big_data_passing_pipelinerun(
pipelinerun, pipeline_workspaces)

# Use workspaces to tasks if bigger data passing instead of 'results', 'copy-inputs'
# Use workspaces to tasks if big data passing instead of 'results', 'copy-inputs'
for task_template in container_templates:
task_template = big_data_passing_tasks(
task_template, inputs_consumed_as_artifacts,
outputs_consumed_as_artifacts)
task_template = big_data_passing_tasks(task_template,
inputs_consumed_as_artifacts,
outputs_consumed_as_artifacts)

# Create pvc for pipelinerun if bigger data passing.
# Create pvc for pipelinerun if big data passing.
# As we used workspaces in tekton pipelines which depends on it.
# User need to create PV manually, or enable dynamic volume provisioning, refer to the link of:
# https://kubernetes.io/docs/concepts/storage/dynamic-provisioning
@@ -478,8 +478,8 @@ def deconstruct_tekton_single_placeholder(s: str) -> List[str]:
return s.lstrip('$(').rstrip(')').split('.')


def replace_bigger_data_placeholder(template: dict, old_str: str,
new_str: str) -> dict:
def replace_big_data_placeholder(template: dict, old_str: str,
new_str: str) -> dict:
template_str = json.dumps(template)
template_str = template_str.replace(old_str, new_str)
template = json.loads(template_str)
@@ -487,7 +487,7 @@ def replace_bigger_data_placeholder(template: dict, old_str: str,


def big_data_passing_pipeline(template: dict, inputs_tasks: set(),
outputs_tasks: set):
outputs_tasks: set):
pipeline_workspaces = set()
pipeline_name = template.get('metadata', {}).get('name')
pipeline_spec = template.get('spec', {})
@@ -499,7 +499,7 @@ def big_data_passing_pipeline(template: dict, inputs_tasks: set(),
if (task.get('taskRef',
{}).get('name'), input_name) in inputs_tasks:
pipeline_workspaces.add(pipeline_name)
# Add workspaces instead of parmas, for tasks of bigger data inputs
# Add workspaces instead of parmas, for tasks of big data inputs
if not task.setdefault('workspaces', []):
task['workspaces'].append({
"name": task.get('name'),
@@ -516,7 +516,7 @@ def big_data_passing_pipeline(template: dict, inputs_tasks: set(),
task.setdefault('runAfter', [])
task['runAfter'].append(dependency_task)
if task.get('taskRef', {}).get('name') in outputs_tasks:
# Add workspaces for tasks of bigger data outputs
# Add workspaces for tasks of big data outputs
if not task.setdefault('workspaces', []):
task['workspaces'].append({
"name": task.get('name'),
@@ -546,7 +546,7 @@ def big_data_passing_pipelinerun(pr: dict, pw: set):


def big_data_passing_tasks(task: dict, inputs_tasks: set,
outputs_tasks: set) -> dict:
outputs_tasks: set) -> dict:
task_name = task.get('metadata', {}).get('name')
task_spec = task.get('spec', {})
# Data passing for the task outputs
@@ -561,7 +561,7 @@ def big_data_passing_tasks(task: dict, inputs_tasks: set,
placeholder = '$(results.%s.path)' % (task_output.get('name'))
workspaces_parameter = '$(workspaces.%s.path)/%s-%s' % (
task_name, task_name, task_output.get('name'))
task['spec'] = replace_bigger_data_placeholder(
task['spec'] = replace_big_data_placeholder(
task['spec'], placeholder, workspaces_parameter)

# Remove artifacts outputs from results
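
For illustration (not part of this commit): the effect of the placeholder rewrite above can be sketched with a toy run of the same json round-trip used by replace_big_data_placeholder earlier in this file; the task and output names below are made up.

# Illustrative only -- toy demonstration of the result-to-workspace rewrite.
import json

task_spec = {'steps': [{'args': ['--output-text', '$(results.text.path)']}]}
placeholder = '$(results.%s.path)' % 'text'
workspaces_parameter = '$(workspaces.%s.path)/%s-%s' % ('repeat-line', 'repeat-line', 'text')

rewritten = json.loads(json.dumps(task_spec).replace(placeholder, workspaces_parameter))
# rewritten['steps'][0]['args'][1] is now
# '$(workspaces.repeat-line.path)/repeat-line-text'
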
@@ -587,7 +587,7 @@ def big_data_passing_tasks(task: dict, inputs_tasks: set,
placeholder = task_artifact.get('path')
workspaces_parameter = '$(workspaces.%s.path)/%s' % (
task_name, task_parma.get('name'))
task['spec'] = replace_bigger_data_placeholder(
task['spec'] = replace_big_data_placeholder(
task_spec, placeholder, workspaces_parameter)
# Handle the case of input artifact without dependent the output of other tasks
for task_artifact in task_artifacts:
@@ -608,7 +608,7 @@ def big_data_passing_tasks(task: dict, inputs_tasks: set,
return task


# Create pvc for pipelinerun if bigger data passing.
# Create pvc for pipelinerun if using big data passing.
# As we used workspaces in tekton pipelines which depends on it.
# User need to create PV manually, or enable dynamic volume provisioning, refer to the link of:
# https://kubernetes.io/docs/concepts/storage/dynamic-provisioning
@@ -241,7 +241,7 @@ class TestTektonCompiler(unittest.TestCase):
from .testdata.input_artifact_raw_value import input_artifact_pipeline
self._test_pipeline_workflow(input_artifact_pipeline, 'input_artifact_raw_value.yaml')

def test_bigger_data_workflow(self):
def test_big_data_workflow(self):
"""
Test compiling a big data passing workflow.
"""
@@ -44,7 +44,7 @@ from kfp.components import func_to_container_op, InputPath, OutputPath
#
# Small lists, dictionaries and JSON structures are fine, but keep an eye on the size
# and consider switching to file-based data passing methods taht are more suitable for
# bigger data(more than several kilobytes) or binary data.
# big data(more than several kilobytes) or binary data.
#
# All small data outputs will be at some point serialized to strings
# and all small data input values will be at some point deserialized
@@ -56,9 +56,9 @@ from kfp.components import func_to_container_op, InputPath, OutputPath
# and the component function will receive strings instead of deserialized objects.

# %% [markdown]
# ## Bigger data (files)
# ## big data (files)
#
# Bigger data should be read from files and written to files.
# big data should be read from files and written to files.
#
# The paths for the input and output files are chosen by the system and are passed into the function (as strings).
#
@@ -86,11 +86,11 @@ from kfp.components import func_to_container_op, InputPath, OutputPath
# This makes the argument passing look more natural: `number=42` instead of `number_file_path=42`.
# %% [markdown]
#
# ### Writing and reading bigger data
# ### Writing and reading big data


# %%
# Writing bigger data
# Writing big data
@func_to_container_op
def repeat_line(line: str, output_text_path: OutputPath(str), count: int = 10):
'''Repeat the line specified number of times'''
@@ -99,7 +99,7 @@ def repeat_line(line: str, output_text_path: OutputPath(str), count: int = 10):
writer.write(line + '\n')


# Reading bigger data
# Reading big data
@func_to_container_op
def print_text(
text_path: InputPath()
@@ -116,7 +116,7 @@ def print_repeating_lines_pipeline():


# %% [markdown]
# ### Processing bigger data
# ### Processing big data


# %%