Discovered in #1721
```
./contrib/components/openvino/ovms-deployer/containers/evaluate.py:62:16: F821 undefined name 'e'
except e:
^
./contrib/components/openvino/ovms-deployer/containers/evaluate.py:63:50: F821 undefined name 'e'
print("Can not read the image file", e)
^
```
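The fix is the standard Python 3 `except ... as ...` form, which binds the exception object to a name before it is used. A minimal sketch of the corrected pattern (the `read_image` helper here is illustrative, not the actual `evaluate.py` code):

```python
def read_image(path):
    try:
        with open(path, "rb") as f:
            return f.read()
    except IOError as e:  # bind the exception to `e`; bare `except e:` is a NameError
        print("Can not read the image file", e)
        return None
```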
Your review please @Ark-kun
* Created extensible code editor based on react-ace
* Installed dependencies
* Updated unit tests for Editor.tsx to test placeholder and value in simplified manner
* Updated Editor unit tests to use snapshot testing where applicable
* Refactor presubmit-tests-with-pipeline-deployment.sh so that it can be run from a different project
* Simplify getting service account from cluster.
* Migrate presubmit-tests-with-pipeline-deployment.sh to use kfp
lightweight deployment.
* Add option to cache built images to make debugging faster.
* Fix cluster set up
* Copy image builder image instead of granting permission
* Add missed yes command
* fix stuff
* Let other usages of image-builder image become configurable
* let test workflow use image builder image
* Fix permission issue
* Hide irrelevant error logs
* Use shared service account key instead
* Move test manifest to test folder
* Move build-images.sh to a different script file
* Update README.md
* add cluster info dump
* Use the same cluster resources as kubeflow deployment
* Remove cluster info dump
* Add timing to test log
* cleaned up code
* fix tests
* address cr comments
* Address cr comments
* Enable image caching to improve retest speed
The data stored in artifact storage is usually small, so using multi-part upload is not strictly a requirement.
Change the default to true to better support more platforms out of the box.
Discovered in #1721
__xrange()__ was removed in Python 3 in favor of an improved version of __range()__. This PR ensures equivalent functionality in both Python 2 and Python 3.
```
./samples/contrib/ibm-samples/watson/source/model-source-code/tf-model/input_data.py:100:40: F821 undefined name 'xrange'
fake_image = [1.0 for _ in xrange(784)]
^
./samples/contrib/ibm-samples/watson/source/model-source-code/tf-model/input_data.py:102:41: F821 undefined name 'xrange'
return [fake_image for _ in xrange(batch_size)], [
^
./samples/contrib/ibm-samples/watson/source/model-source-code/tf-model/input_data.py:103:37: F821 undefined name 'xrange'
fake_label for _ in xrange(batch_size)]
^
```
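One common shim that keeps the code working under both interpreters is to alias `xrange` to `range` when it is missing, since Python 3's `range()` is already lazy like Python 2's `xrange()`. A sketch of that pattern applied to the lines flagged above:

```python
# Python 2/3 compatibility: Python 3 removed xrange(), but range() there
# already returns a lazy sequence, so aliasing preserves the semantics.
try:
    xrange
except NameError:
    xrange = range

fake_image = [1.0 for _ in xrange(784)]

def fake_batch(batch_size):
    return ([fake_image for _ in xrange(batch_size)],
            [0 for _ in xrange(batch_size)])
```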
@gaoning777 @Ark-kun Your reviews please.
* Included the 'core' folder in the parameters related to the On-Premise cluster
This update is required because this sample was migrated to the samples/core folder
* Update README.md
This change adds a base framework for cleaning up resources in a GCP project.
The resources to clean up are specified declaratively in a YAML file.
Per current requirements, this change only adds cleanup of GKE
clusters.
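The cleanup loop can be sketched as below. This is a hypothetical illustration: the spec's field names (`resources`, `type`, `zone`, `ttl_hours`) and the `list_clusters` callback are assumptions, not the actual schema used by the framework.

```python
import time

# Hypothetical parsed form of the declarative YAML spec described above.
spec = {
    "resources": [
        {"type": "gke-cluster", "zone": "us-central1-a", "ttl_hours": 24},
    ],
}

def resources_to_clean(spec, list_clusters, now=None):
    """Return names of GKE clusters older than their TTL.

    `list_clusters(zone)` is an injected callback (e.g. wrapping gcloud)
    yielding dicts with `name` and `created` (epoch seconds).
    """
    now = now if now is not None else time.time()
    expired = []
    for rule in spec["resources"]:
        if rule["type"] != "gke-cluster":
            continue  # per current requirements, only GKE clusters are handled
        for cluster in list_clusters(rule["zone"]):
            age_hours = (now - cluster["created"]) / 3600.0
            if age_hours > rule["ttl_hours"]:
                expired.append(cluster["name"])
    return expired
```

Keeping the spec declarative means new resource types can be supported later by adding handlers rather than new scripts.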
* Refactor presubmit-tests-with-pipeline-deployment.sh so that it can be run from a different project
* Simplify getting service account from cluster.
* Copy image builder image instead of granting permission
* Add missed yes command
* fix stuff
* Let other usages of image-builder image become configurable
* let test workflow use image builder image
* Adding a sample for serving component
* removed typo / updated based on PR feedback
* fixing the jupyter rendering issue
* adding pip3 for tensorflow
* Fixed spelling error in VERSION
* fix indentation based on review feedback
* SDK - Refactoring - Serialized PipelineParam does not need type
Only the types in non-serialized PipelineParams are ever used.
* SDK - Refactoring - Serialized PipelineParam does not need value
Default values are only relevant when a PipelineParam is used in the pipeline function signature, and even in that case the compiler captures them explicitly from the PipelineParam objects in the signature.
There are no other uses for them.
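The idea can be sketched with a simplified stand-in class. This is not the actual kfp.dsl implementation; the placeholder format below mirrors the KFP v1 serialization style but should be treated as illustrative.

```python
class PipelineParam:
    """Simplified stand-in for kfp.dsl.PipelineParam (illustrative only)."""

    def __init__(self, name, op_name=None, value=None, param_type=None):
        self.name = name
        self.op_name = op_name
        self.value = value            # only meaningful in pipeline signatures
        self.param_type = param_type  # only checked before serialization

    def __str__(self):
        # The serialized placeholder carries just enough to resolve the
        # reference later; type and default value are deliberately omitted.
        return '{{pipelineparam:op=%s;name=%s}}' % (self.op_name or '', self.name)
```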
* Regenerated run api for frontend
* Added support for reference name to resource reference API in frontend
* Revert "Regenerated run api for frontend"
* Addressed PR comments
* Removed extra if statement by setting default value of parameter
* Removed the whole comment
* Addressed PR feedback
* Addressed PR feedback
* Simplified logic after offline discussion
* Changed way visualization variables are passed from request to NotebookNode
Visualization variables are now saved to a json file and loaded by a NotebookNode upon execution.
* Updated roc_curve visualization to reflect changes made to dependency injection
* Fixed bug where checking whether is_generated is provided to the roc_curve visualization would crash the visualization
Also changed ' -> "
* Changed text_exporter to always sort variables by key for testing
* Addressed PR suggestions
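The JSON hand-off described above can be sketched as follows. The file name, variable keys, and helper names here are hypothetical, not the actual server code; `sort_keys=True` reflects the deterministic export used for testing.

```python
import json
import os
import tempfile

def save_variables(variables, path):
    # Server side: persist the visualization variables for the NotebookNode.
    with open(path, "w") as f:
        json.dump(variables, f, sort_keys=True)  # sorted so exported text is stable

def load_variables(path):
    # Notebook side: load the variables at execution time.
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "variables.json")
save_variables({"is_generated": True, "source": "gs://bucket/data.csv"}, path)
variables = load_variables(path)
```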
* Remove redundant import.
* Simplify sample_test.yaml by using withItem syntax.
* Simplify sample_test.yaml by using withItem syntax.
* Change dict to str in withItems.
* Add back coveralls.
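For context, Argo's `withItems` syntax expands one step template over a list of items, which is what lets near-identical steps in sample_test.yaml collapse into one. An illustrative fragment (not the actual sample_test.yaml; test names are examples), using plain strings as items per the "Change dict to str" commit:

```yaml
steps:
  - - name: run-sample-test
      template: sample-test
      arguments:
        parameters:
          - name: test-name
            value: "{{item}}"
      withItems:        # one step per item; {{item}} is substituted above
        - tfx-cab-classification
        - xgboost-training-cm
```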
* Skips calling getPipeline in RunList if the pipeline name is in the pipeline_spec
* Update fixed data to include pipeline names in pipeline specs
* Remove redundant getRuns call
This would improve the list-runs call, which filters on [ResourceType, ReferenceUUID, ReferenceType].
We've seen listing runs take a long time when the resource_references table is large.
```
SELECT
subq.*,
CONCAT("[", GROUP_CONCAT(r.Payload SEPARATOR ", "), "]") AS refs
FROM
(
SELECT
rd.*,
CONCAT("[", GROUP_CONCAT(m.Payload SEPARATOR ", "), "]") AS metrics
FROM
(
SELECT
UUID,
DisplayName,
Name,
StorageState,
Namespace,
Description,
CreatedAtInSec,
ScheduledAtInSec,
FinishedAtInSec,
Conditions,
PipelineId,
PipelineSpecManifest,
WorkflowSpecManifest,
Parameters,
pipelineRuntimeManifest,
WorkflowRuntimeManifest
FROM
run_details
WHERE
UUID in
(
SELECT
ResourceUUID
FROM
resource_references as rf
WHERE
(
rf.ResourceType = 'Run'
AND rf.ReferenceUUID = '488b0263-f4ee-4398-b7dc-768ffe967372'
AND rf.ReferenceType = 'Experiment'
)
)
AND StorageState <> 'STORAGESTATE_ARCHIVED'
ORDER BY
CreatedAtInSec DESC,
UUID DESC LIMIT 6
)
AS rd
LEFT JOIN
run_metrics AS m
ON rd.UUID = m.RunUUID
GROUP BY
rd.UUID
)
AS subq
LEFT JOIN
(
select
*
from
resource_references
where
ResourceType = 'Run'
)
AS r
ON subq.UUID = r.ResourceUUID
GROUP BY
subq.UUID
ORDER BY
CreatedAtInSec DESC,
UUID DESC
```
/assign @hongye-sun
Consistently getting the following error
```
/home/travis/build/kubeflow/pipelines/frontend/node_modules/coveralls/bin/coveralls.js:18
throw err;
^
Bad response: 405 <html>
<head><title>405 Not Allowed</title></head>
<body bgcolor="white">
<center><h1>405 Not Allowed</h1></center>
<hr><center>nginx</center>
</body>
</html>
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! pipelines-frontend@0.1.0 test:coveralls: `npm run test:coverage && cat ./coverage/lcov.info | ./node_modules/coveralls/bin/coveralls.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the pipelines-frontend@0.1.0 test:coveralls script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/travis/.npm/_logs/2019-08-13T06_28_02_279Z-debug.log
The command "npm run test:coveralls" exited with 1.
```
https://travis-ci.com/kubeflow/pipelines/jobs/224697449
* Remove redundant import.
* Simplify sample_test.yaml by using withItem syntax.
* Simplify sample_test.yaml by using withItem syntax.
* Change dict to str in withItems.
* Move tensorflow installation into notebooks.