{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "Tce3stUlHN0L"
},
"source": [
"##### Copyright 2023 The TensorFlow Authors."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "tuOe1ymfHZPu"
},
"outputs": [],
"source": [
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "36EdAGhThQov"
},
"source": [
"# Uplifting with Decision Forests\n",
"\n",
"\u003ctable class=\"tfo-notebook-buttons\" align=\"left\"\u003e\n",
" \u003ctd\u003e\n",
" \u003ca target=\"_blank\" href=\"https://www.tensorflow.org/decision_forests/tutorials/uplift_colab\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" /\u003eView on TensorFlow.org\u003c/a\u003e\n",
" \u003c/td\u003e\n",
" \u003ctd\u003e\n",
" \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/decision-forests/blob/main/documentation/tutorials/uplift_colab.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /\u003eRun in Google Colab\u003c/a\u003e\n",
" \u003c/td\u003e\n",
" \u003ctd\u003e\n",
" \u003ca target=\"_blank\" href=\"https://github.com/tensorflow/decision-forests/blob/main/documentation/tutorials/uplift_colab.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /\u003eView on GitHub\u003c/a\u003e\n",
" \u003c/td\u003e\n",
" \u003ctd\u003e\n",
" \u003ca href=\"https://storage.googleapis.com/tensorflow_docs/decision-forests/documentation/tutorials/uplift_colab.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/download_logo_32px.png\" /\u003eDownload notebook\u003c/a\u003e\n",
" \u003c/td\u003e\n",
"\u003c/table\u003e\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2j8GzKvfVvF8"
},
"source": [
"Welcome to the *Uplifting* Tutorial for TensorFlow Decision Forests (TF-DF). In this tutorial, you will learn what uplifting is, why it is so important, and how to do it in TF-DF.\n",
"\n",
"This tutorial assumes you are familiar with the fundaments of TF-DF, in particular the installation procedure. The [beginner tutorial](https://www.tensorflow.org/decision_forests/tutorials/beginner_colab) is a great place to start learning about TF-DF.\n",
"\n",
"In this colab, you will:\n",
"\n",
"- Learn what an uplift modeling is.\n",
"- Train a Uplift Random Forest model on the **Hillstrom Email Marketing** dataset.\n",
"- Evaluate the quality of this model.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MQIPhTQVW19g"
},
"source": [
"## Installing TensorFlow Decision Forests\n",
"\n",
"Install TF-DF by running the following cell.\n",
"\n",
"[Wurlitzer](https://pypi.org/project/wurlitzer/) is needed to display the detailed training logs in Colabs (when using `verbose=2` in the model constructor).\n",
"\n",
"Tensorflow Datasets is needed download the dataset used in this tutorial."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "oiz5HmMyWxgd"
},
"outputs": [],
"source": [
"pip install tensorflow_decision_forests wurlitzer tensorflow-datasets"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2LIE3UDMXeB4"
},
"source": [
"## Importing libraries"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ue7Q-ysiPOmG"
},
"outputs": [],
"source": [
"import tensorflow_decision_forests as tfdf\n",
"\n",
"import os\n",
"import numpy as np\n",
"import pandas as pd\n",
"import tensorflow as tf\n",
"import math\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "bN7quUfTXjaA"
},
"source": [
"The hidden code cell limits the output height in colab.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "nFP4KJ79Xl3J"
},
"outputs": [],
"source": [
"#@title\n",
"\n",
"from IPython.core.magic import register_line_magic\n",
"from IPython.display import Javascript\n",
"from IPython.display import display as ipy_display\n",
"\n",
"# Some of the model training logs can cover the full\n",
"# screen if not compressed to a smaller viewport.\n",
"# This magic allows setting a max height for a cell.\n",
"@register_line_magic\n",
"def set_cell_height(size):\n",
" ipy_display(\n",
" Javascript(\"google.colab.output.setIframeHeight(0, true, {maxHeight: \" +\n",
" str(size) + \"})\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "jnpiCdRKXvir"
},
"outputs": [],
"source": [
"# Check the version of TensorFlow Decision Forests\n",
"print(\"Found TensorFlow Decision Forests v\" + tfdf.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9SqXMEGLX0ry"
},
"source": [
"## What is uplift modeling?\n",
"\n",
"[Uplift modeling](https://en.wikipedia.org/wiki/Uplift_modelling) is a statistical modeling technique to predict the **incremental impact of an action** on a subject. The action is often referred to as a **treatment** that may or may not be applied.\n",
"\n",
"Uplift modeling is often used in targeted marketing campaigns to predict the increase in the likelihood of a person making a purchase (or any other desired action) based on the marketing exposition they receive.\n",
"\n",
"For example, uplift modeling can predict the **effect** of an email. The effect is defined as the **conditional probability**\n",
"\\begin{align}\n",
"\\text{effect}(\\text{email}) = \u0026\\Pr(\\text{outcome}=\\text{purchase}\\ \\vert\\ \\text{treatment}=\\text{with email})\\\\ \u0026- \\Pr(\\text{outcome}=\\text{purchase} \\ \\vert\\ \\text{treatment}=\\text{no email}),\n",
"\\end{align}\n",
"where $\\Pr(\\text{outcome}=\\text{purchase}\\ \\vert\\ ...)$\n",
"is the probability of purchase depending on the receiving or not an email.\n",
"\n",
"Compare this to a classification model: With a classification model, one can predict the probability of a purchase. However, customers with a high probability are likely to spend money in the store regardless of whether or not they received an email.\n",
"\n",
"Similarly, one can use **numerical uplifting** to predict the numerical **increase in spend** when receiving an email. In comparison, a regression model can only increase the expected spend, which is a less useful metric in many cases.\n",
"\n",
"### Defining uplift models in TF-DF\n",
"\n",
"TF-DF expects uplifting datasets to be presented in a \"flat\" format.\n",
"A dataset of customers might look like this\n",
"\n",
"treatment | outcome | feature_1 | feature_2\n",
"--------- | ------- | --------- | ---------\n",
"0 | 1 | 0.1 | blue \n",
"0 | 0 | 0.2 | blue \n",
"1 | 1 | 0.3 | blue \n",
"1 | 1 | 0.4 | blue \n",
"\n",
"\n",
"The **treatment** is a binary variable indicating whether or not the example has received treatment. In the above example, the treatment indicates if the customer has received an email or not. The **outcome** (label) indicates the status of the example after receiving the treatment (or not). TF-DF supports categorical outcomes for categorical uplifting and numerical outcomes for numerical uplifting.\n",
"\n",
"**Note**: Uplifting is also frequently used in medical contexts. Here the *treatment* can be a medical treatment (e.g. administering a vaccine), the label can be an indicator of quality of life (e.g. whether the patient got sick). This also explains the nomenclature of uplift modeling."
]
},
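{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the flat format and the effect definition concrete, the next cell builds a tiny **made-up** dataset with pandas and computes a naive effect estimate as the difference of conversion rates between the treated and control rows. The column names and values are purely illustrative; they are not part of the Hillstrom dataset used below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A minimal, hypothetical example of the \"flat\" uplifting format.\n",
"# All values are made up for illustration purposes.\n",
"toy = pd.DataFrame({\n",
"    'treatment': [0, 0, 1, 1, 1, 0],\n",
"    'outcome': [1, 0, 1, 1, 0, 0],\n",
"    'feature_1': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6],\n",
"    'feature_2': ['blue', 'blue', 'blue', 'red', 'red', 'red'],\n",
"})\n",
"\n",
"# Naive effect estimate: difference of conversion rates between treated and\n",
"# control rows. This is only meaningful when the treatment is randomized.\n",
"p_treatment = toy.loc[toy['treatment'] == 1, 'outcome'].mean()\n",
"p_control = toy.loc[toy['treatment'] == 0, 'outcome'].mean()\n",
"print('Pr(outcome=1 | treatment):', p_treatment)\n",
"print('Pr(outcome=1 | control):', p_control)\n",
"print('Naive effect estimate:', p_treatment - p_control)"
]
},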
{
"cell_type": "markdown",
"metadata": {
"id": "kVaDog4ldPEY"
},
"source": [
"## Training an uplifting model\n",
"\n",
"In this example, we will use the *Hillstrom Email Marketing dataset*.\n",
"\n",
"This dataset contains 64,000 customers who last purchased within twelve months. The customers were involved in an e-mail test:\n",
"\n",
"- 1/3 were randomly chosen to receive an e-mail campaign featuring Mens merchandise.\n",
"- 1/3 were randomly chosen to receive an e-mail campaign featuring Womens merchandise.\n",
"- 1/3 were randomly chosen to not receive an e-mail campaign.\n",
"\n",
"During a period of two weeks following the e-mail campaign, results were tracked. The task is to tell if the Mens or Womens e-mail campaign was successful.\n",
"\n",
"Read more about dataset [in its documentation]( https://blog.minethatdata.com/2008/03/minethatdata-e-mail-analytics-and-data.html). This tutorial uses the dataset as curated by [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/hillstrom)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1veZ9nJZPGsv"
},
"outputs": [],
"source": [
"# Load the dataset\n",
"import tensorflow_datasets as tfds\n",
"raw_train, raw_test = tfds.load('hillstrom', split=['train[:80%]', 'train[20%:]'])\n",
"\n",
"# Display the first 10 examples of the test fold.\n",
"pd.DataFrame(list(raw_test.batch(10).take(1))[0])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5stnFbyKaIgn"
},
"source": [
"### Dataset preprocessing\n",
"\n",
"Since TF-DF currently only supports binary treatments, combine the \"Men's Email\" and the \"Women's Email\" campaign. This tutorial uses the binary variable `conversion` as outcome. This means that the problem is a **Categorical Uplifting** problem. If we were using the numerical variable `spend`, the problem would be a **Numerical Uplifting** problem."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "dLpAw7jibIrh"
},
"outputs": [],
"source": [
"def prepare_dataset(example):\n",
" # Use a binary treatment class.\n",
" example['treatment'] = 1 if example['segment'] == b'Mens E-Mail' or example['segment'] == b'Womens E-Mail' else 0\n",
" outcome = example['conversion']\n",
" # Restrict the dataset to the input features.\n",
" input_features = ['channel', 'history', 'mens', 'womens', 'newbie', 'recency', 'zip_code', 'treatment']\n",
" example = {feature: example[feature] for feature in input_features}\n",
" return example, outcome\n",
"\n",
"train_ds = raw_train.map(prepare_dataset).batch(100)\n",
"test_ds = raw_test.map(prepare_dataset).batch(100)"
]
},
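{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional sanity check, the next cell looks at the treatment/control balance and at the raw conversion rate of each group on the training fold. Following the dataset description above, roughly two thirds of the customers should fall into the treatment group (they received either e-mail campaign); the exact numbers depend on the train/test split."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sanity check: treatment/control balance and raw conversion rates on the\n",
"# training fold. This simply iterates over the (already batched) dataset.\n",
"train_treatment = np.concatenate([example['treatment'].numpy() for example, _ in train_ds])\n",
"train_outcome = np.concatenate([outcome.numpy() for _, outcome in train_ds])\n",
"\n",
"print('Fraction of treated examples :', train_treatment.mean())\n",
"print('Conversion rate (treatment)  :', train_outcome[train_treatment == 1].mean())\n",
"print('Conversion rate (control)    :', train_outcome[train_treatment == 0].mean())"
]
},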
{
"cell_type": "markdown",
"metadata": {
"id": "Z-mtKmd-RoOu"
},
"source": [
"### Model training\n",
"\n",
"Finally, train and evaluate the model as usual. Note that TF-DF only supports Random Forest models for uplifting."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-OZN8t8LRn38"
},
"outputs": [],
"source": [
"%set_cell_height 300\n",
"\n",
"# Configure the model and its hyper-parameters.\n",
"model = tfdf.keras.RandomForestModel(\n",
" verbose=2,\n",
" task=tfdf.keras.Task.CATEGORICAL_UPLIFT,\n",
" uplift_treatment='treatment'\n",
")\n",
"\n",
"# Train the model.\n",
"model.fit(train_ds)"
]
},
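{
"cell_type": "markdown",
"metadata": {},
"source": [
"For an uplift task, the model outputs one value per example, which estimates the effect of the treatment defined above and is used later to rank clients from most to least likely to benefit from the e-mail. As a small illustration, the next cell prints a few raw predictions on the test set; the exact values depend on the training run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustration: look at a few raw predictions of the uplift model.\n",
"# Each value is the model's uplift estimate for one test example; larger values\n",
"# mean the model expects the e-mail to help more for that client.\n",
"sample_predictions = model.predict(test_ds.take(1))\n",
"print('Prediction shape:', sample_predictions.shape)\n",
"print('First 5 predictions:', sample_predictions[:5].flatten())"
]
},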
{
"cell_type": "markdown",
"metadata": {
"id": "XKhtZuLhGtv_"
},
"source": [
"# Evaluating Uplift models.\n",
"\n",
"## Metrics for Uplift models\n",
"\n",
"The two most important metrics for evaluating upift models are the **AUUC** (Area Under the Uplift Curve) metric and the **Qini** (Area Under the Qini Curve) metric. This is similar to the use of AUC and accuracy for classification problems. For both metrics, the larger they are, the better.\n",
"\n",
"Both AUUC and Qini are **not** normalized metrics. This means that the best possible value of the metric can vary from dataset to dataset. This is different from, for example, the AUC matric that always varies between 0 and 1.\n",
"\n",
"A formal definition of AUUC is below. For more information about these metrics, see [Guelman](https://diposit.ub.edu/dspace/bitstream/2445/65123/1/Leo%20Guelman_PhD_THESIS.pdf) and [Betlei et al.](https://arxiv.org/pdf/2012.09897.pdf)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "AMSpNTTZmuzv"
},
"source": [
"## Model Self-Evaluation\n",
"\n",
"TF-DF Random Forest models perform self-evaluation on the out-of-bag examples of the training dataset. For uplift models, they expose the AUUC and the Qini metric. You can directly retrieve the two metrics on the training dataset through the inspector\n",
"\n",
"Later, we are going to recompute the AUUC metric \"manually\" on the test dataset. Note that two metrics are not expected to be exactly equal (out-of-bag on train vs test) since the AUUC is not a normalized metric."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "OsN1R9mT_8T6"
},
"outputs": [],
"source": [
"# The self-evaluation is available through the model inspector\n",
"insp = model.make_inspector()\n",
"insp.evaluation()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WErGZZ27HWJN"
},
"source": [
"## Manually computing the AUUC\n",
"\n",
"In this section, we manually compute the AUUC and plot the uplift curves.\n",
"\n",
"The next few paragraphs explain the AUUC metric in more detail and may be skipped.\n",
"\n",
"### Computing the AUUC\n",
"\n",
"Suppose you have a labeled dataset with $|T|$ examples with treatment and $|C|$ examples without treatment, called *control* examples. For each example, the uplift model $f$ produces the conditional probability that a treatment on the example will yield a positive outcome.\n",
"\n",
"Suppose a decision-maker needs to decide which clients to send an email using an uplift model $f$. The model produces a (conditional) probability that the email will result in a conversion. The decision-maker might therefore just pick the number $k$ of emails to send and send those $k$ emails to the clients with the highest probability.\n",
"\n",
"Using a labeled test dataset, it is possible to study the impact of $k$ on the success of the campaign. First, we are interested in the ratio $\\frac{|C \\cap T|}{|T|}$ of clients that received an email that converted versus total number of clients that received an email. Here $C$ is the set of clients that received an email and converted and $T$ is the total number of clients that received an email. We plot this ratio against $k$.\n",
"\n",
"Ideally, we like to have this curve increase steeply. This would mean that the model prioritizes sending email to those clients that will generate a conversion when receiving an email."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xUGNWKkkkl-s"
},
"outputs": [],
"source": [
"# Compute all predictions on the test dataset\n",
"predictions = model.predict(test_ds).flatten()\n",
"# Extract outcomes and treatments\n",
"outcomes = np.concatenate([outcome.numpy() for _, outcome in test_ds])\n",
"treatment = np.concatenate([example['treatment'].numpy() for example,_ in test_ds])\n",
"control = 1 - treatment\n",
"\n",
"num_treatments = np.sum(treatment)\n",
"# Clients without treatment are called 'control' group\n",
"num_control = np.sum(control)\n",
"num_examples = len(predictions)\n",
"\n",
"# Sort labels and treatments according to predictions in descending order\n",
"prediction_order = predictions.argsort()[::-1]\n",
"outcomes_sorted = outcomes[prediction_order]\n",
"treatment_sorted = treatment[prediction_order]\n",
"control_sorted = control[prediction_order]\n",
"ratio_treatment = np.cumsum(np.multiply(outcomes_sorted, treatment_sorted), axis=0)/num_treatments\n",
"\n",
"fig, ax = plt.subplots()\n",
"ax.plot(ratio_treatment, label='Conversion ratio of treatment')\n",
"ax.set_xlabel('k')\n",
"ax.set_ylabel('Ratio of conversion')\n",
"ax.legend()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "97IFpq5epHsx"
},
"source": [
"Similarly, we can also compute and plot the conversion ratio of those not receiving an email, called the *control group*. Ideally, this curve is initially flat: This would mean that the model does not prioritize sending emails to clients that will generate a conversion despite **not** receiving a email"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "bIY-oA9alwzY"
},
"outputs": [],
"source": [
"ratio_control = np.cumsum(np.multiply(outcomes_sorted, control_sorted), axis=0)/num_control\n",
"ax.plot(ratio_control, label='Conversion ratio of control')\n",
"ax.legend()\n",
"fig"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "q9MopM5MnCK0"
},
"source": [
"The AUUC metric measures the area between these two curves, normalizing the y-axis between 0 and 1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "99XXGsq7nQgN"
},
"outputs": [],
"source": [
"x = np.linspace(0, 1, num_examples)\n",
"plt.plot(x,ratio_treatment, label='Conversion ratio of treatment')\n",
"plt.plot(x,ratio_control, label='Conversion ratio of control')\n",
"plt.fill_between(x, ratio_treatment, ratio_control, where=(ratio_treatment \u003e ratio_control), color='C0', alpha=0.3)\n",
"plt.fill_between(x, ratio_treatment, ratio_control, where=(ratio_treatment \u003c ratio_control), color='C1', alpha=0.3)\n",
"plt.xlabel('k')\n",
"plt.ylabel('Ratio of conversion')\n",
"plt.legend()\n",
"\n",
"# Approximate the integral of the difference between the two curves.\n",
"auuc = np.trapz(ratio_treatment-ratio_control, dx=1/num_examples)\n",
"print(f'The AUUC on the test dataset is {auuc}')"
]
}
],
"metadata": {
"colab": {
"name": "uplift_colab.ipynb",
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}