TensorFlow's Fairness Evaluation and Visualization Toolkit

Fairness Indicators

Fairness Indicators is designed to support teams in evaluating, improving, and comparing models for fairness concerns, in partnership with the broader TensorFlow toolkit.

The tool is currently in active use internally across many of our products. We would love to partner with you to understand where Fairness Indicators is most useful, and where added functionality would be valuable. Please reach out at tfx@tensorflow.org. You can provide feedback and feature requests here.

What is Fairness Indicators?

Fairness Indicators enables easy computation of commonly-identified fairness metrics for binary and multiclass classifiers.
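To make "commonly-identified fairness metrics" concrete, here is a minimal pure-Python sketch of one such metric for a binary classifier: the false positive rate computed separately per group at a fixed decision threshold. This illustrates the kind of quantity Fairness Indicators reports; it is not the library's API, and the data below is made up.

```python
# Sketch: per-group false positive rate (FPR) for a binary classifier.
# Not the Fairness Indicators API -- just an illustration of the metric.
from collections import defaultdict

def fpr_by_group(labels, scores, groups, threshold=0.5):
    """Return {group: false positive rate} at the given threshold."""
    fp = defaultdict(int)  # negatives predicted positive, per group
    n = defaultdict(int)   # total negatives, per group
    for y, s, g in zip(labels, scores, groups):
        if y == 0:
            n[g] += 1
            if s >= threshold:
                fp[g] += 1
    return {g: fp[g] / n[g] for g in n if n[g]}

# Toy data: labels, model scores, and a (hypothetical) group feature.
labels = [0, 0, 1, 0, 1, 0, 0, 1]
scores = [0.9, 0.2, 0.8, 0.6, 0.7, 0.1, 0.4, 0.95]
groups = ['a', 'a', 'a', 'b', 'b', 'b', 'b', 'a']
rates = fpr_by_group(labels, scores, groups)
# A large gap between rates['a'] and rates['b'] would flag a
# potential fairness concern worth investigating.
```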

Many existing tools for evaluating fairness concerns don't work well on large-scale datasets and models. At Google, it is important for us to have tools that can work on billion-user systems. Fairness Indicators allows you to evaluate fairness metrics on use cases of any size.

In particular, Fairness Indicators includes the ability to:

  • Evaluate the distribution of datasets
  • Evaluate model performance, sliced across defined groups of users
    • Feel confident about your results with confidence intervals and evals at multiple thresholds
  • Dive deep into individual slices to explore root causes and opportunities for improvement
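The sliced, multi-threshold evaluation described above can be sketched in a few lines of plain Python (the slice key 'group' and the data are hypothetical; Fairness Indicators computes this over real evaluation data, with confidence intervals):

```python
# Toy sketch: evaluate a metric per slice at multiple thresholds.

def positive_rate(scores, threshold):
    """Fraction of examples predicted positive at `threshold`."""
    return sum(s >= threshold for s in scores) / len(scores)

examples = [
    {'group': 'a', 'score': 0.9},
    {'group': 'a', 'score': 0.3},
    {'group': 'b', 'score': 0.6},
    {'group': 'b', 'score': 0.2},
]

thresholds = [0.25, 0.5, 0.75]
by_slice = {}
for g in {e['group'] for e in examples}:
    scores = [e['score'] for e in examples if e['group'] == g]
    by_slice[g] = {t: positive_rate(scores, t) for t in thresholds}
# by_slice now maps each slice to its metric at every threshold,
# making it easy to spot slices that diverge at some operating points.
```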

This case study, complete with videos and programming exercises, demonstrates how Fairness Indicators can be used on one of your own products to evaluate fairness concerns over time.

Installation

pip install fairness-indicators

The pip package installs Fairness Indicators along with its major dependencies, such as TensorFlow Data Validation (TFDV) and TensorFlow Model Analysis (TFMA).

Nightly Packages

Fairness Indicators also hosts nightly packages at https://pypi-nightly.tensorflow.org on Google Cloud. To install the latest nightly package, please use the following command:

pip install --extra-index-url https://pypi-nightly.tensorflow.org/simple fairness-indicators

This will install the nightly packages for the major dependencies of Fairness Indicators, such as TensorFlow Data Validation (TFDV) and TensorFlow Model Analysis (TFMA).

How can I use Fairness Indicators?

TensorFlow Models

  • Access Fairness Indicators as part of the Evaluator component in TensorFlow Extended (TFX) [docs]
  • Access Fairness Indicators in TensorBoard when evaluating other real-time metrics [docs]

Not using existing TensorFlow tools? No worries!

  • Download the Fairness Indicators pip package, and use TensorFlow Model Analysis as a standalone tool [docs]
  • Model Agnostic TFMA enables you to compute Fairness Indicators based on the output of any model [docs]
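When using TFMA as a standalone tool, the evaluation is driven by an EvalConfig. A sketch of a config that enables the FairnessIndicators metric at several thresholds, sliced overall and by one feature (the label key and the 'gender' slice feature here are hypothetical placeholders for your own data):

```python
import tensorflow_model_analysis as tfma

# Sketch: an EvalConfig enabling the FairnessIndicators metric,
# computed overall and sliced by a (hypothetical) 'gender' feature.
eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='label')],
    metrics_specs=[
        tfma.MetricsSpec(metrics=[
            tfma.MetricConfig(
                class_name='FairnessIndicators',
                config='{"thresholds": [0.25, 0.5, 0.75]}'),
        ]),
    ],
    slicing_specs=[
        tfma.SlicingSpec(),                        # overall metrics
        tfma.SlicingSpec(feature_keys=['gender']), # per-slice metrics
    ],
)
```

A config like this is then passed to TFMA's evaluation entry point (for example, tfma.run_model_analysis) together with your model and evaluation data; see the TFMA docs linked above for the full workflow.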

The examples directory contains several examples.

More questions?

For more information on how to think about fairness evaluation in the context of your use case, see this link.

If you have found a bug in Fairness Indicators, please file a GitHub issue with as much supporting information as you can provide.

Compatible versions

The following table shows the package versions that are compatible with each other. This is determined by our testing framework, but other untested combinations may also work.

fairness-indicators   tensorflow          tensorflow-data-validation   tensorflow-model-analysis
GitHub master         nightly (1.x/2.x)   1.17.0                       0.48.0
v0.48.0               2.17                1.17.0                       0.48.0
v0.47.0               2.16                1.16.1                       0.47.1
v0.46.0               2.15                1.15.1                       0.46.0
v0.44.0               2.12                1.13.0                       0.44.0
v0.43.0               2.11                1.12.0                       0.43.0
v0.42.0               1.15.5 / 2.10       1.11.0                       0.42.0
v0.41.0               1.15.5 / 2.9        1.10.0                       0.41.0
v0.40.0               1.15.5 / 2.9        1.9.0                        0.40.0
v0.39.0               1.15.5 / 2.8        1.8.0                        0.39.0
v0.38.0               1.15.5 / 2.8        1.7.0                        0.38.0
v0.37.0               1.15.5 / 2.7        1.6.0                        0.37.0
v0.36.0               1.15.2 / 2.7        1.5.0                        0.36.0
v0.35.0               1.15.2 / 2.6        1.4.0                        0.35.0
v0.34.0               1.15.2 / 2.6        1.3.0                        0.34.0
v0.33.0               1.15.2 / 2.5        1.2.0                        0.33.0
v0.30.0               1.15.2 / 2.4        0.30.0                       0.30.0
v0.29.0               1.15.2 / 2.4        0.29.0                       0.29.0
v0.28.0               1.15.2 / 2.4        0.28.0                       0.28.0
v0.27.0               1.15.2 / 2.4        0.27.0                       0.27.0
v0.26.0               1.15.2 / 2.3        0.26.0                       0.26.0
v0.25.0               1.15.2 / 2.3        0.25.0                       0.25.0
v0.24.0               1.15.2 / 2.3        0.24.0                       0.24.0
v0.23.0               1.15.2 / 2.3        0.23.0                       0.23.0