# Benchmark on multiple devices
⚠️ To use this tool, you need to sign up for BrowserStack's Automate service.
The Multi-device benchmark tool can benchmark the performance (time, memory) of model inference on a collection of remote devices. Using this tool you will be able to:
- Select a collection of BrowserStack devices, based on the following fields:
  - OS
  - OS version
  - Browser
  - Browser version
  - Device
- Select a backend:
  - WASM
  - WebGL
  - CPU
- Set the number of rounds for model inference.
- Select a model to benchmark.
## Usage
1. Export your access key for BrowserStack's Automate service:

   ```sh
   export BROWSERSTACK_USERNAME=YOUR_USERNAME
   export BROWSERSTACK_ACCESS_KEY=YOUR_ACCESS_KEY
   ```

2. Download and run the tool:

   ```sh
   git clone https://github.com/tensorflow/tfjs.git
   cd tfjs/e2e/benchmarks/browserstack-benchmark
   yarn install
   node app.js
   ```

   You should then see `> Running socket on port: 8001` in your command-line interface.

3. Open http://localhost:8001/ and start benchmarking.
   3.1 If you want to benchmark a code snippet, update `benchmarkCodeSnippet` with your code snippet before running `node app.js`, and select `codeSnippet` in `model name`. A hypothetical example is sketched below.
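A minimal sketch of what such a snippet might look like, assuming `benchmarkCodeSnippet` is an async function and `tf` is the TensorFlow.js namespace loaded by the benchmark page (the exact wiring is defined in this tool's source):

```js
// Hypothetical snippet: benchmark a small matrix multiply.
const benchmarkCodeSnippet = async () => {
  const a = tf.randomNormal([64, 64]);
  const b = tf.randomNormal([64, 64]);
  const c = tf.matMul(a, b);
  await c.data();        // Force the backend to finish the computation.
  tf.dispose([a, b, c]); // Free tensor memory between runs.
};
```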
## Command Line Arguments

The following optional arguments are supported:
- `--benchmarks`
  Runs benchmarks from a user-specified, pre-configured JSON file.

  ```sh
  node app.js --benchmarks=relative_file_path.json
  ```

  A pre-configuration file consists of a JSON object with the following format:

  ```js
  {
    "benchmark": {
      "model": ["model_name"],     // List of one or more custom or official models to be benchmarked
      "numRuns": positive_integer,
      "backend": ["backend_name"]  // List of one or more backends to be benchmarked
    },
    "browsers": {
      "local": {},                 // Benchmark on your local device
      "unique_identifier_laptop_or_desktop": {
        "base": "BrowserStack",
        "browser": "browser_name",
        "browser_version": "browser_version",
        "os": "os_name",
        "os_version": "os_version",
        "device": null
      },
      "unique_identifier_mobile_device": {
        "base": "BrowserStack",
        "browser": "iphone_or_android",
        "browser_version": null,
        "os": "os_name",
        "os_version": "os_version",
        "device": "device_name"
      }
    }
  }
  ```

  Each model in the model list will be run on each backend in the backend list, and each model-backend combination will run on every browser. If you would like to test specific backends on specific models, the recommended method is to create multiple configuration files.

  For more examples and documentation, refer to the list of officially supported TFJS browsers (`browser_list.json`) and the example benchmark pre-configuration.
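  For instance, a minimal filled-in pre-configuration might look like the following (hypothetical values; valid model, browser, and OS names depend on the supported model list and `browser_list.json`):

  ```js
  {
    "benchmark": {
      "model": ["mobilenet_v2"],
      "numRuns": 20,
      "backend": ["webgl", "wasm"]
    },
    "browsers": {
      "windows_10_chrome": {
        "base": "BrowserStack",
        "browser": "chrome",
        "browser_version": "latest",
        "os": "Windows",
        "os_version": "10",
        "device": null
      }
    }
  }
  ```

  This would benchmark the one model on both backends in the single configured browser, producing two model-backend combinations.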
- `--cloud`
  Runs the GCP-compatible version of benchmarking by blocking the local server.

  ```sh
  node app.js --cloud
  ```

- `--firestore`
  Pushes successful benchmark results to a Firestore database.

  ```sh
  node app.js --firestore
  ```

- `--h`, `--help`
  Shows the help menu and all optional arguments in the shell window.

  ```sh
  node app.js --h
  node app.js --help
  ```

- `--period`
  Runs a subset of the models specified in the `--benchmarks` file in a cycle; the subset of models to run is determined by the date of the month. The value could be 1~31. This argument takes effect only if `--benchmarks` is set.

  ```sh
  node app.js --period=15
  ```

- `--date`
  Sets the date used for selecting models; this works only if `--period` is set. The value could be 1~31. If it is not declared, it defaults to the actual date at runtime.

  ```sh
  node app.js --period=15 --date=1
  ```

- `--maxBenchmarks`
  Sets the maximum number of benchmarks run in parallel. Expects a positive integer.

  ```sh
  node app.js --maxBenchmarks=positive_integer
  ```

- `--maxTries`
  Sets the maximum number of tries a given benchmark has to succeed. Expects a positive integer.

  ```sh
  node app.js --maxTries=positive_integer
  ```

- `--outfile`
  Writes results to an accessible external file, benchmark_results.js or benchmark_results.json. Expects 'html' or 'json'. If you set it to 'html', benchmark_results.js will be generated and you can review the benchmark results by opening the benchmark_results.html file.

  ```sh
  node app.js --outfile=html
  ```

- `--v`, `--version`
  Shows the node version in use.

  ```sh
  node app.js --v
  node app.js --version
  ```

- `--localBuild`
  Uses local build dependencies instead of public CDNs. When using localBuild targets, make sure you have built the targets you need (e.g. run `yarn build-individual-link-package tfjs-backend-webgl`).

  ```sh
  node app.js --localBuild=core,webgl,wasm,cpu,layers,converter,automl
  ```

- `--npmVersion`
  Specifies the npm version of the TFJS library to benchmark. By default, the latest version of TFJS is benchmarked.

  ```sh
  node app.js --npmVersion=4.4.0
  ```
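These flags can be combined. For example, the following command (with a hypothetical configuration file name) runs a pre-configured benchmark suite, retries each benchmark up to three times, and writes the results to benchmark_results.json:

```sh
node app.js --benchmarks=my_benchmarks.json --maxTries=3 --outfile=json
```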
## Custom model
Custom models are supported, with the following constraints:

- A URL path to the model is required; a model in the local file system is not supported.
- Currently only `tf.GraphModel` and `tf.LayersModel` are supported.

If you want to benchmark more complex models with customized input preprocessing logic, you need to add your model, with `load` and `predictFunc` methods, to tfjs/e2e/benchmarks/model_config.js, following this example PR. A hypothetical entry is sketched below.
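As a rough sketch, such an entry might look like the following (hypothetical model name, URL, and input shape; the exact signature must mirror the existing entries in model_config.js):

```js
// Hypothetical entry to add alongside the existing models in
// tfjs/e2e/benchmarks/model_config.js.
'my_custom_model': {
  load: async () => {
    // Models must be loaded from a URL; local file paths are not supported.
    return tf.loadGraphModel('https://example.com/path/to/model.json');
  },
  predictFunc: () => {
    // Hypothetical input shape; adjust to match your model's signature.
    const input = tf.randomNormal([1, 224, 224, 3]);
    return model => model.predict(input);
  },
},
```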
## About this tool
The tool contains:

- A test runner, Karma:
  - benchmark_models.js wraps all the benchmark logic into a Jasmine spec.
  - browser_list.json lists the supported BrowserStack combinations. If you want to add more combinations or refactor this list, you can follow this conversation.
- A Node server: app.js runs the test runner and sends the benchmark results back to the webpage.
- A webpage.
Thanks to BrowserStack for providing support.