{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# TensorFlow Machine Learning with Financial Data on Google Cloud Platform"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This solution presents an accessible, non-trivial example of machine learning with financial time series on Google Cloud Platform (GCP).\n",
"\n",
"Time series are an essential part of financial analysis. Today, you have more data at your disposal than ever, more sources of data, and more frequent delivery of that data. New sources include new exchanges, social media outlets, and news sources. The frequency of delivery has increased from tens of messages per second 10 years ago, to hundreds of thousands of messages per second today. Naturally, more and different analysis techniques are being brought to bear as a result. Most of the modern analysis techniques aren't different in the sense of being new, and they all have their basis in statistics, but their applicability has closely followed the amount of computing power available. The growth in available computing power is faster than the growth in time series volumes, so it is possible to analyze time series today at scale in ways that weren't previously practical.\n",
"\n",
"In particular, machine learning techniques, especially deep learning, hold great promise for time series analysis. As time series become more dense and many time series overlap, machine learning offers a way to separate the signal from the noise, even when the noise can seem overwhelming. Deep learning holds great potential because it is often the best fit for the seemingly random nature of financial time series.\n",
"\n",
"In this solution, you will:\n",
"\n",
"* Obtain data for a number of financial markets.\n",
"* Munge that data into a usable format and perform exploratory data analysis in order to explore and validate a premise.\n",
"* Use TensorFlow to build, train and evaluate a number of models for predicting what will happen in financial markets\n",
"\n",
"**Important:** This solution is intended to illustrate the capabilities of GCP and TensorFlow for fast, interactive, iterative data analysis and machine learning. It does not offer any advice on financial markets or trading strategies. The scenario presented in the tutorial is an example. Don't use this code to make investment decisions."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# The premise"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The premise is straightforward: financial markets are increasingly global, and if you follow the sun from Asia to Europe to the US and so on, you can use information from an earlier time zone to your advantage in a later time zone.\n",
"\n",
"The following table shows a number of stock market indices from around the globe, their closing times in Eastern Standard Time (EST), and the delay in hours between the close that index and the close of the S&P 500 in New York. This makes EST the base time zone. For example, Australian markets close for the day 15 hours before US markets close. If the close of the All Ords in Australia is a useful predictor of the close of the S&P 500 for a given day we can use that information to guide our trading activity. Continuing our example of the Australian All Ords, if this index closes up and we think that means the S&P 500 will close up as well then we should either buy stocks that compose the S&P 500 or, more likely, an ETF that tracks the S&P 500. In reality, the situation is more complex because there are commissions and tax to account for. But as a first approximation, we'll assume an index closing up indicates a gain, and vice-versa.\n",
"\n",
"|Index|Country|Closing Time (EST)|Hours Before S&P Close|\n",
"|---|---|---|---|\n",
"|[All Ords](https://en.wikipedia.org/wiki/All_Ordinaries)|Australia|0100|15|\n",
"|[Nikkei 225](https://en.wikipedia.org/wiki/Nikkei_225)|Japan|0200|14|\n",
"|[Hang Seng](https://en.wikipedia.org/wiki/Hang_Seng_Index)|Hong Kong|0400|12|\n",
"|[DAX](https://en.wikipedia.org/wiki/DAX)|Germany|1130|4.5|\n",
"|[FTSE 100](https://en.wikipedia.org/wiki/FTSE_100_Index)|UK|1130|4.5|\n",
"|[NYSE Composite](https://en.wikipedia.org/wiki/NYSE_Composite)|US|1600|0|\n",
"|[Dow Jones Industrial Average](https://en.wikipedia.org/wiki/Dow_Jones_Industrial_Average)|US|1600|0|\n",
"|[S&P 500](https://en.wikipedia.org/wiki/S%26P_500_Index)|US|1600|0|"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Set up"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, install and import necessary libraries."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip3 install google-cloud-bigquery==1.6.0 pandas==0.23.4 matplotlib==3.0.3 scipy==1.2.1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"\n",
"# import google.datalab.bigquery as bq\n",
"from google.cloud import bigquery"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from pandas.plotting import autocorrelation_plot\n",
"from pandas.plotting import scatter_matrix"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Get the data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The data covers roughly the last 5 years, using the date range from 1/1/2010 to 10/1/2015. Data comes from the S&P 500 (S&P), NYSE, Dow Jones Industrial Average (DJIA), Nikkei 225 (Nikkei), Hang Seng, FTSE 100 (FTSE), DAX, and All Ordinaries (AORD) indices.\n",
"\n",
"This data is publicly available and is stored in BigQuery for convenience. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Instantiates a client\n",
"bigquery_client = bigquery.Client()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tickers = ['snp', 'nyse', 'djia', 'nikkei', 'hangseng', 'ftse', 'dax', 'aord']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"bq_query = {}\n",
"for ticker in tickers:\n",
" bq_query[ticker] = bigquery_client.query('SELECT Date, Close from `bingo-ml-1.market_data.{}`'.format(ticker))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"results = {}\n",
"for ticker in tickers:\n",
" results[ticker] = bq_query[ticker].result().to_dataframe().set_index('Date')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Munge the data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the first instance, munging the data is straightforward. The closing prices are of interest, so for convenience extract the closing prices for each of the indices into a single Pandas DataFrame, called closing_data. Because not all of the indices have the same number of values, mainly due to bank holidays, we'll forward-fill the gaps. This means that, if a value isn't available for day N, fill it with the value for another day, such as N-1 or N-2, so that it contains the latest available value."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"closing_data = pd.DataFrame()\n",
"\n",
"for ticker in tickers:\n",
" closing_data['{}_close'.format(ticker)] = results[ticker]['Close']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"closing_data.info()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Pandas includes a very convenient function for filling gaps in the data.\n",
"closing_data.sort_index(inplace=True)\n",
"closing_data = closing_data.fillna(method='ffill')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"closing_data.info()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At this point, you've sourced five years of time series for eight financial indices, combined the pertinent data into a single data structure, and harmonized the data to have the same number of entries, by using only the 20 lines of code in this notebook. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Exploratory data analysis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Exploratory Data Analysis (EDA) is foundational to working with machine learning, and any other sort of analysis. EDA means getting to know your data, getting your fingers dirty with your data, feeling it and seeing it. The end result is you know your data very well, so when you build models you build them based on an actual, practical, physical understanding of the data, not assumptions or vaguely held notions. You can still make assumptions of course, but EDA means you will understand your assumptions and why you're making those assumptions.\n",
"\n",
"First, take a look at the data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"closing_data.describe()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can see that the various indices operate on scales differing by orders of magnitude. It's best to scale the data so that, for example, operations involving multiple indices aren't unduly influenced by a single, massive index.\n",
"\n",
"Plot the data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# N.B. A super-useful trick-ette is to assign the return value of plot to _ \n",
"# so that you don't get text printed before the plot itself.\n",
"\n",
"_ = pd.concat([closing_data['{}_close'.format(ticker)] for ticker in tickers], axis=1).plot(figsize=(20, 15))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As expected, the structure isn't uniformly visible for the indices. Divide each value in an individual index by the maximum value for that index., and then replot. The maximum value of all indices will be 1."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for ticker in tickers:\n",
" closing_data['{}_close_scaled'.format(ticker)] = closing_data['{}_close'.format(ticker)]/max(closing_data['{}_close'.format(ticker)])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"_ = pd.concat([closing_data['{}_close_scaled'.format(ticker)] for ticker in tickers], axis=1).plot(figsize=(20, 15))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can see that, over the five-year period, these indices are correlated. Notice that sudden drops from economic events happened globally to all indices, and they otherwise exhibited general rises. This is an good start, though not the complete story. Next, plot autocorrelations for each of the indices. The autocorrelations determine correlations between current values of the index and lagged values of the same index. The goal is to determine whether the lagged values are reliable indicators of the current values. If they are, then we've identified a correlation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fig = plt.figure()\n",
"fig.set_figwidth(20)\n",
"fig.set_figheight(15)\n",
"\n",
"for ticker in tickers:\n",
" _ = autocorrelation_plot(closing_data['{}_close'.format(ticker)], label='{}_close'.format(ticker))\n",
"\n",
"_ = plt.legend(loc='upper right')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You should see strong autocorrelations, positive for around 500 lagged days, then going negative. This tells us something we should intuitively know: if an index is rising it tends to carry on rising, and vice-versa. It should be encouraging that what we see here conforms to what we know about financial markets.\n",
"\n",
"Next, look at a scatter matrix, showing everything plotted against everything, to see how indices are correlated with each other."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"_ = scatter_matrix(pd.concat([closing_data['{}_close_scaled'.format(ticker)] for ticker in tickers], axis=1), figsize=(20, 20), diagonal='kde')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can see significant correlations across the board, further evidence that the premise is workable and one market can be influenced by another.\n",
"\n",
"As an aside, this process of gradual, incremental experimentation and progress is the best approach and what you probably do normally. With a little patience, we'll get to some deeper understanding.\n",
"\n",
"The actual value of an index is not that useful for modeling. It can be a useful indicator, but to get to the heart of the matter, we need a time series that is stationary in the mean, thus having no trend in the data. There are various ways of doing that, but they all essentially look at the difference between values, rather than the absolute value. In the case of market data, the usual practice is to work with logged returns, calculated as the natural logarithm of the index today divided by the index yesterday:\n",
"```\n",
"ln(Vt/Vt-1)\n",
"```\n",
"There are more reasons why the log return is preferable to the percent return (for example the log is normally distributed and additive), but they don't matter much for this work. What matters is to get to a stationary time series.\n",
"\n",
"Calculate and plot the log returns in a new DataFrame."
]
},
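{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick, optional illustration of the additivity property mentioned above (a minimal sketch, not part of the core analysis): the log return over two days should equal the sum of the two daily log returns. The check below uses the first few rows of the snp_close column."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Quick check that log returns are additive over time:\n",
"# ln(V2/V0) equals ln(V1/V0) + ln(V2/V1) up to floating-point error.\n",
"v0, v1, v2 = closing_data['snp_close'].iloc[0:3]\n",
"two_day_return = np.log(v2 / v0)\n",
"sum_of_daily_returns = np.log(v1 / v0) + np.log(v2 / v1)\n",
"print(two_day_return, sum_of_daily_returns)"
]
},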
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"log_return_data = pd.DataFrame()\n",
"\n",
"for ticker in tickers:\n",
" log_return_data['{}_log_return'.format(ticker)] = np.log(closing_data['{}_close'.format(ticker)]/closing_data['{}_close'.format(ticker)].shift())\n",
" \n",
"log_return_data.describe()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Looking at the log returns, you should see that the mean, min, max are all similar. You could go further and center the series on zero, scale them, and normalize the standard deviation, but there's no need to do that at this point. Let's move forward with plotting the data, and iterate if necessary."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"_ = pd.concat([log_return_data['{}_log_return'.format(ticker)] for ticker in tickers], axis=1).plot(figsize=(20, 15))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can see from the plot that the log returns of our indices are similarly scaled and centered, with no visible trend in the data. It's looking good, so now look at autocorrelations."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fig = plt.figure()\n",
"fig.set_figwidth(20)\n",
"fig.set_figheight(15)\n",
"\n",
"for ticker in tickers:\n",
" _ = autocorrelation_plot(log_return_data['{}_log_return'.format(ticker)], label='{}_log_return'.format(ticker))\n",
"\n",
"_ = plt.legend(loc='upper right')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"No autocorrelations are visible in the plot, which is what we're looking for. Individual financial markets are Markov processes, knowledge of history doesn't allow you to predict the future.\n",
"\n",
"You now have time series for the indices, stationary in the mean, similarly centered and scaled. That's great! Now start to look for signals to try to predict the close of the S&P 500.\n",
"\n",
"Look at a scatterplot to see how the log return indices correlate with each other."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"_ = scatter_matrix(log_return_data, figsize=(20, 20), diagonal='kde')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The story with the previous scatter plot for log returns is more subtle and more interesting. The US indices are strongly correlated, as expected. The other indices, less so, which is also expected. But there is structure and signal there. Now let's move forward and start to quantify it so we can start to choose features for our model.\n",
"\n",
"First look at how the log returns for the closing value of the S&P 500 correlate with the closing values of other indices available on the same day. This essentially means to assume the indices that close before the S&P 500 (non-US indices) are available and the others (US indices) are not."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tmp = pd.DataFrame()\n",
"tmp['snp_0'] = log_return_data['snp_log_return']\n",
"tmp['nyse_1'] = log_return_data['nyse_log_return'].shift()\n",
"tmp['djia_1'] = log_return_data['djia_log_return'].shift()\n",
"tmp['ftse_0'] = log_return_data['ftse_log_return']\n",
"tmp['dax_0'] = log_return_data['dax_log_return']\n",
"tmp['hangseng_0'] = log_return_data['hangseng_log_return']\n",
"tmp['nikkei_0'] = log_return_data['nikkei_log_return']\n",
"tmp['aord_0'] = log_return_data['aord_log_return']\n",
"tmp.corr().iloc[:,0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here, we are directly working with the premise. We're correlating the close of the S&P 500 with signals available before the close of the S&P 500. And you can see that the S&P 500 close is correlated with European indices at around 0.65 for the FTSE and DAX, which is a strong correlation, and Asian/Oceanian indices at around 0.15-0.22, which is a significant correlation, but not with US indices. We have available signals from other indices and regions for our model.\n",
"\n",
"Now look at how the log returns for the S&P closing values correlate with index values from the previous day to see if they previous closing is predictive. Following from the premise that financial markets are Markov processes, there should be little or no value in historical values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tmp = pd.DataFrame()\n",
"tmp['snp_0'] = log_return_data['snp_log_return']\n",
"tmp['nyse_1'] = log_return_data['nyse_log_return'].shift(2)\n",
"tmp['djia_1'] = log_return_data['djia_log_return'].shift(2)\n",
"tmp['ftse_0'] = log_return_data['ftse_log_return'].shift()\n",
"tmp['dax_0'] = log_return_data['dax_log_return'].shift()\n",
"tmp['hangseng_0'] = log_return_data['hangseng_log_return'].shift()\n",
"tmp['nikkei_0'] = log_return_data['nikkei_log_return'].shift()\n",
"tmp['aord_0'] = log_return_data['aord_log_return'].shift()\n",
"tmp.corr().iloc[:,0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You should see little to no correlation in this data, meaning that yesterday's values are no practical help in predicting today's close. Let's go one step further and look at correlations between today and the the day before yesterday."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tmp = pd.DataFrame()\n",
"tmp['snp_0'] = log_return_data['snp_log_return']\n",
"tmp['nyse_1'] = log_return_data['nyse_log_return'].shift(3)\n",
"tmp['djia_1'] = log_return_data['djia_log_return'].shift(3)\n",
"tmp['ftse_0'] = log_return_data['ftse_log_return'].shift(2)\n",
"tmp['dax_0'] = log_return_data['dax_log_return'].shift(2)\n",
"tmp['hangseng_0'] = log_return_data['hangseng_log_return'].shift(2)\n",
"tmp['nikkei_0'] = log_return_data['nikkei_log_return'].shift(2)\n",
"tmp['aord_0'] = log_return_data['aord_log_return'].shift(2)\n",
"\n",
"tmp.corr().iloc[:,0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Again, there are little to no correlations."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Summing up the EDA"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At this point, you've done a good enough job of exploratory data analysis. You've visualized our data and come to know it better. You've transformed it into a form that is useful for modelling, log returns, and looked at how indices relate to each other. You've seen that indices from Europe strongly correlate with US indices, and that indices from Asia/Oceania significantly correlate with those same indices for a given day. You've also seen that if you look at historical values, they do not correlate with today's values. Summing up:\n",
"\n",
"* European indices from the same day were a strong predictor for the S&P 500 close.\n",
"* Asian/Oceanian indices from the same day were a significant predictor for the S&P 500 close.\n",
"* Indices from previous days were not good predictors for the S&P close.\n",
"\n",
"What should we think so far?\n",
"\n",
"JupyterHub is working great. With just a few lines of code, you were able to munge the data, visualize the changes, and make decisions. You could easily analyze and iterate. This is a common feature of iPython, but the advantage here is that JupyterHub is a managed service that you can simply click and use, so you can focus on your analysis."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Feature selection"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At this point, we can see a model:\n",
"\n",
"* We'll predict whether the S&P 500 close today will be higher or lower than yesterday.\n",
"* We'll use all our data sources: NYSE, DJIA, Nikkei, Hang Seng, FTSE, DAX, AORD.\n",
"* We'll use three sets of data points—T, T-1, and T-2—where we take the data available on day T or T-n, meaning today's non-US data and yesterday's US data.\n",
"\n",
"Predicting whether the log return of the S&P 500 is positive or negative is a classification problem. That is, we want to choose one option from a finite set of options, in this case positive or negative. This is the base case of classification where we have only two values to choose from, known as binary classification, or logistic regression.\n",
"\n",
"This uses the findings from of our exploratory data analysis, namely that log returns from other regions on a given day are strongly correlated with the log return of the S&P 500, and there are stronger correlations from those regions that are geographically closer with respect to time zones. However, our models also use data outside of those findings. For example, we use data from the past few days in addition to today. There are two reasons for using this additional data. First, we're adding additional features to our model for the purpose of this solution to see how things perform. which is not a good reason to add features outside of a tutorial setting. Second, machine learning models are very good at finding weak signals from data.\n",
"\n",
"In machine learning, as in most things, there are subtle tradeoffs happening, but in general good data is better than good algorithms, which are better than good frameworks. You need all three pillars but in that order of importance: data, algorithms, frameworks."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tensorflow"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[TensorFlow](https://tensorflow.org) is an open source software library, initiated by Google, for numerical computation using data flow graphs. TensorFlow is based on Google's machine learning expertise and is the next generation framework used internally at Google for tasks such as translation and image recognition. It's a wonderful framework for machine learning because it's expressive, efficient, and easy to use."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Feature engineering for Tensorflow"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From a training and testing perspective, time series data is easy. Training data should come from events that happened before test data events, and be contiguous in time. Otherwise, your model would be trained on events from \"the future\", at least as compared to the test data. It would then likely perform badly in practice, because you cant really have access to data from the future. That means random sampling or cross validation don't apply to time series data. Decide on a training-versus-testing split, and divide your data into training and test datasets.\n",
"\n",
"In this case, you'll create the features together with two additional columns:\n",
"\n",
"* snp_log_return_positive, which is 1 if the log return of the S&P 500 close is positive, and 0 otherwise. \n",
"* snp_log_return_negative, which is 1 if the log return of the S&P 500 close is negative, and 1 otherwise. \n",
"\n",
"Now, logically you could encode this information in one column, named snp_log_return, which is 1 if positive and 0 if negative, but that's not the way TensorFlow works for classification models. TensorFlow uses the general definition of classification, that there can be many different potential values to choose from, and a form or encoding for these options called one-hot encoding. One-hot encoding means that each choice is an entry in an array, and the actual value has an entry of 1 with all other values being 0. This encoding (i.e. a single 1 in an array of 0s) is for the input of the model, where you categorically know which value is correct. A variation of this is used for the output, where each entry in the array contains the probability of the answer being that choice. You can then choose the most likely value by choosing the highest probability, together with having a measure of the confidence you can place in that answer relative to other answers.\n",
"\n",
"We'll use 80% of our data for training and 20% for testing."
]
},
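{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal illustration of one-hot encoding (a sketch only, using hypothetical up/down labels rather than the project data): each class becomes one position in an array, and exactly one position holds a 1."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative only: one-hot encode a small series of up/down labels.\n",
"# 'up' -> [1, 0], 'down' -> [0, 1], mirroring the positive/negative columns below.\n",
"example_labels = pd.Series(['up', 'down', 'up', 'up'])\n",
"pd.get_dummies(example_labels)[['up', 'down']]"
]
},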
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"log_return_data['snp_log_return_positive'] = 0\n",
"log_return_data.ix[log_return_data['snp_log_return'] >= 0, 'snp_log_return_positive'] = 1\n",
"log_return_data['snp_log_return_negative'] = 0\n",
"log_return_data.ix[log_return_data['snp_log_return'] < 0, 'snp_log_return_negative'] = 1\n",
"\n",
"training_test_data = pd.DataFrame(\n",
" columns=[\n",
" 'snp_log_return_positive', 'snp_log_return_negative',\n",
" 'snp_log_return_1', 'snp_log_return_2', 'snp_log_return_3',\n",
" 'nyse_log_return_1', 'nyse_log_return_2', 'nyse_log_return_3',\n",
" 'djia_log_return_1', 'djia_log_return_2', 'djia_log_return_3',\n",
" 'nikkei_log_return_0', 'nikkei_log_return_1', 'nikkei_log_return_2',\n",
" 'hangseng_log_return_0', 'hangseng_log_return_1', 'hangseng_log_return_2',\n",
" 'ftse_log_return_0', 'ftse_log_return_1', 'ftse_log_return_2',\n",
" 'dax_log_return_0', 'dax_log_return_1', 'dax_log_return_2',\n",
" 'aord_log_return_0', 'aord_log_return_1', 'aord_log_return_2'])\n",
"\n",
"for i in range(7, len(log_return_data)):\n",
" snp_log_return_positive = log_return_data['snp_log_return_positive'].ix[i]\n",
" snp_log_return_negative = log_return_data['snp_log_return_negative'].ix[i]\n",
" snp_log_return_1 = log_return_data['snp_log_return'].ix[i-1]\n",
" snp_log_return_2 = log_return_data['snp_log_return'].ix[i-2]\n",
" snp_log_return_3 = log_return_data['snp_log_return'].ix[i-3]\n",
" nyse_log_return_1 = log_return_data['nyse_log_return'].ix[i-1]\n",
" nyse_log_return_2 = log_return_data['nyse_log_return'].ix[i-2]\n",
" nyse_log_return_3 = log_return_data['nyse_log_return'].ix[i-3]\n",
" djia_log_return_1 = log_return_data['djia_log_return'].ix[i-1]\n",
" djia_log_return_2 = log_return_data['djia_log_return'].ix[i-2]\n",
" djia_log_return_3 = log_return_data['djia_log_return'].ix[i-3]\n",
" nikkei_log_return_0 = log_return_data['nikkei_log_return'].ix[i]\n",
" nikkei_log_return_1 = log_return_data['nikkei_log_return'].ix[i-1]\n",
" nikkei_log_return_2 = log_return_data['nikkei_log_return'].ix[i-2]\n",
" hangseng_log_return_0 = log_return_data['hangseng_log_return'].ix[i]\n",
" hangseng_log_return_1 = log_return_data['hangseng_log_return'].ix[i-1]\n",
" hangseng_log_return_2 = log_return_data['hangseng_log_return'].ix[i-2]\n",
" ftse_log_return_0 = log_return_data['ftse_log_return'].ix[i]\n",
" ftse_log_return_1 = log_return_data['ftse_log_return'].ix[i-1]\n",
" ftse_log_return_2 = log_return_data['ftse_log_return'].ix[i-2]\n",
" dax_log_return_0 = log_return_data['dax_log_return'].ix[i]\n",
" dax_log_return_1 = log_return_data['dax_log_return'].ix[i-1]\n",
" dax_log_return_2 = log_return_data['dax_log_return'].ix[i-2]\n",
" aord_log_return_0 = log_return_data['aord_log_return'].ix[i]\n",
" aord_log_return_1 = log_return_data['aord_log_return'].ix[i-1]\n",
" aord_log_return_2 = log_return_data['aord_log_return'].ix[i-2]\n",
" training_test_data = training_test_data.append(\n",
" {'snp_log_return_positive':snp_log_return_positive,\n",
" 'snp_log_return_negative':snp_log_return_negative,\n",
" 'snp_log_return_1':snp_log_return_1,\n",
" 'snp_log_return_2':snp_log_return_2,\n",
" 'snp_log_return_3':snp_log_return_3,\n",
" 'nyse_log_return_1':nyse_log_return_1,\n",
" 'nyse_log_return_2':nyse_log_return_2,\n",
" 'nyse_log_return_3':nyse_log_return_3,\n",
" 'djia_log_return_1':djia_log_return_1,\n",
" 'djia_log_return_2':djia_log_return_2,\n",
" 'djia_log_return_3':djia_log_return_3,\n",
" 'nikkei_log_return_0':nikkei_log_return_0,\n",
" 'nikkei_log_return_1':nikkei_log_return_1,\n",
" 'nikkei_log_return_2':nikkei_log_return_2,\n",
" 'hangseng_log_return_0':hangseng_log_return_0,\n",
" 'hangseng_log_return_1':hangseng_log_return_1,\n",
" 'hangseng_log_return_2':hangseng_log_return_2,\n",
" 'ftse_log_return_0':ftse_log_return_0,\n",
" 'ftse_log_return_1':ftse_log_return_1,\n",
" 'ftse_log_return_2':ftse_log_return_2,\n",
" 'dax_log_return_0':dax_log_return_0,\n",
" 'dax_log_return_1':dax_log_return_1,\n",
" 'dax_log_return_2':dax_log_return_2,\n",
" 'aord_log_return_0':aord_log_return_0,\n",
" 'aord_log_return_1':aord_log_return_1,\n",
" 'aord_log_return_2':aord_log_return_2},\n",
" ignore_index=True)\n",
" \n",
"training_test_data.describe()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"log_return_data['snp_log_return_positive'].value_counts()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The odds are more in favor of s&p ending positive then negative (55% for positive, 45% for negative)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, create the training and test data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"predictors_tf = training_test_data[training_test_data.columns[2:]]\n",
"\n",
"classes_tf = training_test_data[training_test_data.columns[:2]]\n",
"\n",
"training_set_size = int(len(training_test_data) * 0.8)\n",
"test_set_size = len(training_test_data) - training_set_size\n",
"\n",
"training_predictors_tf = predictors_tf[:training_set_size]\n",
"training_classes_tf = classes_tf[:training_set_size]\n",
"test_predictors_tf = predictors_tf[training_set_size:]\n",
"test_classes_tf = classes_tf[training_set_size:]\n",
"\n",
"training_predictors_tf.describe()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_predictors_tf.describe()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Define some metrics here to evaluate the models.\n",
"\n",
"* [Precision](https://en.wikipedia.org/wiki/Precision_and_recall#Precision) - The ability of the classifier not to label as positive a sample that is negative.\n",
"* [Recall](https://en.wikipedia.org/wiki/Precision_and_recall#Recall) - The ability of the classifier to find all the positive samples.\n",
"* [F1 Score](https://en.wikipedia.org/wiki/F1_score) - A weighted average of the precision and recall, where an F1 score reaches its best value at 1 and worst score at 0.\n",
"* Accuracy - The percentage correctly predicted in the test data."
]
},
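{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, here are the standard formulas behind these metrics, in terms of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN); the helper function below computes them directly from these counts:\n",
"\n",
"$$\\text{precision} = \\frac{TP}{TP + FP} \\qquad \\text{recall} = \\frac{TP}{TP + FN} \\qquad F_1 = \\frac{2 \\cdot \\text{precision} \\cdot \\text{recall}}{\\text{precision} + \\text{recall}} \\qquad \\text{accuracy} = \\frac{TP + TN}{TP + FP + TN + FN}$$"
]
},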
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def tf_confusion_metrics(model, actual_classes, session, feed_dict):\n",
" predictions = tf.argmax(model, 1)\n",
" actuals = tf.argmax(actual_classes, 1)\n",
"\n",
" ones_like_actuals = tf.ones_like(actuals)\n",
" zeros_like_actuals = tf.zeros_like(actuals)\n",
" ones_like_predictions = tf.ones_like(predictions)\n",
" zeros_like_predictions = tf.zeros_like(predictions)\n",
"\n",
" tp_op = tf.reduce_sum(\n",
" tf.cast(\n",
" tf.logical_and(\n",
" tf.equal(actuals, ones_like_actuals), \n",
" tf.equal(predictions, ones_like_predictions)\n",
" ), \n",
" \"float\"\n",
" )\n",
" )\n",
"\n",
" tn_op = tf.reduce_sum(\n",
" tf.cast(\n",
" tf.logical_and(\n",
" tf.equal(actuals, zeros_like_actuals), \n",
" tf.equal(predictions, zeros_like_predictions)\n",
" ), \n",
" \"float\"\n",
" )\n",
" )\n",
"\n",
" fp_op = tf.reduce_sum(\n",
" tf.cast(\n",
" tf.logical_and(\n",
" tf.equal(actuals, zeros_like_actuals), \n",
" tf.equal(predictions, ones_like_predictions)\n",
" ), \n",
" \"float\"\n",
" )\n",
" )\n",
"\n",
" fn_op = tf.reduce_sum(\n",
" tf.cast(\n",
" tf.logical_and(\n",
" tf.equal(actuals, ones_like_actuals), \n",
" tf.equal(predictions, zeros_like_predictions)\n",
" ), \n",
" \"float\"\n",
" )\n",
" )\n",
"\n",
" tp, tn, fp, fn = \\\n",
" session.run(\n",
" [tp_op, tn_op, fp_op, fn_op], \n",
" feed_dict\n",
" )\n",
"\n",
" tpfn = float(tp) + float(fn)\n",
" tpr = 0 if tpfn == 0 else float(tp)/tpfn\n",
" fpr = 0 if tpfn == 0 else float(fp)/tpfn\n",
"\n",
" total = float(tp) + float(fp) + float(fn) + float(tn)\n",
" accuracy = 0 if total == 0 else (float(tp) + float(tn))/total\n",
"\n",
" recall = tpr\n",
" tpfp = float(tp) + float(fp)\n",
" precision = 0 if tpfp == 0 else float(tp)/tpfp\n",
" \n",
" f1_score = 0 if recall == 0 else (2 * (precision * recall)) / (precision + recall)\n",
" \n",
" print('Precision = ', precision)\n",
" print('Recall = ', recall)\n",
" print('F1 Score = ', f1_score)\n",
" print('Accuracy = ', accuracy)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Binary classification with Tensorflow"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, get some tensors flowing. The model is binary classification expressed in TensorFlow."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sess = tf.Session()\n",
"\n",
"# Define variables for the number of predictors and number of classes to remove magic numbers from our code.\n",
"num_predictors = len(training_predictors_tf.columns) # 24 in the default case\n",
"num_classes = len(training_classes_tf.columns) # 2 in the default case\n",
"\n",
"# Define placeholders for the data we feed into the process - feature data and actual classes.\n",
"feature_data = tf.placeholder(\"float\", [None, num_predictors])\n",
"actual_classes = tf.placeholder(\"float\", [None, num_classes])\n",
"\n",
"# Define a matrix of weights and initialize it with some small random values.\n",
"weights = tf.Variable(tf.truncated_normal([num_predictors, num_classes], stddev=0.0001))\n",
"biases = tf.Variable(tf.ones([num_classes]))\n",
"\n",
"# Define our model...\n",
"# Here we take a softmax regression of the product of our feature data and weights.\n",
"model = tf.nn.softmax(tf.matmul(feature_data, weights) + biases)\n",
"\n",
"# Define a cost function (we're using the cross entropy).\n",
"cost = -tf.reduce_sum(actual_classes*tf.log(model))\n",
"\n",
"# Define a training step...\n",
"# Here we use gradient descent with a learning rate of 0.01 using the cost function we just defined.\n",
"training_step = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost)\n",
"\n",
"init = tf.global_variables_initializer()\n",
"sess.run(init)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We'll train our model in the following snippet. The approach of TensorFlow to executing graph operations allows fine-grained control over the process. Any operation you provide to the session as part of the run operation will be executed and the results returned. You can provide a list of multiple operations.\n",
"\n",
"You'll train the model over 30,000 iterations using the full dataset each time. Every thousandth iteration we'll assess the accuracy of the model on the training data to assess progress."
]
},
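{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal illustration of running multiple operations in one call (a sketch using the tensors already defined above, before any training has happened): pass a list to the session's run method and it returns one result per operation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative only: evaluate the (still untrained) cost and model output in a single run call.\n",
"cost_value, model_output = sess.run(\n",
" [cost, model],\n",
" feed_dict={\n",
" feature_data: training_predictors_tf.values,\n",
" actual_classes: training_classes_tf.values.reshape(len(training_classes_tf.values), 2)\n",
" })\n",
"print(cost_value, model_output.shape)"
]
},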
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"correct_prediction = tf.equal(tf.argmax(model, 1), tf.argmax(actual_classes, 1))\n",
"accuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\"))\n",
"\n",
"for i in range(1, 30001):\n",
" sess.run(\n",
" training_step, \n",
" feed_dict={\n",
" feature_data: training_predictors_tf.values, \n",
" actual_classes: training_classes_tf.values.reshape(len(training_classes_tf.values), 2)\n",
" }\n",
" )\n",
" if i%5000 == 0:\n",
" print(i, sess.run(\n",
" accuracy,\n",
" feed_dict={\n",
" feature_data: training_predictors_tf.values, \n",
" actual_classes: training_classes_tf.values.reshape(len(training_classes_tf.values), 2)\n",
" }\n",
" ))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"An accuracy of 65% on the training data is fine, certainly better than random."
]
},
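{
"cell_type": "markdown",
"metadata": {},
"source": [
"To put that number in context, here is a quick sketch of the majority-class baseline: the accuracy you would get on the training data by always predicting positive (roughly 55%, given the class balance seen earlier)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fraction of training days where the S&P 500 log return was positive,\n",
"# i.e. the accuracy of a classifier that always predicts 'positive'.\n",
"training_classes_tf['snp_log_return_positive'].astype(float).mean()"
]
},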
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"feed_dict= {\n",
" feature_data: test_predictors_tf.values,\n",
" actual_classes: test_classes_tf.values.reshape(len(test_classes_tf.values), 2)\n",
"}\n",
"\n",
"tf_confusion_metrics(model, actual_classes, sess, feed_dict)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The metrics for this most simple of TensorFlow models are unimpressive, an F1 Score of 0.36 is not going to blow any light bulbs in the room. That's partly because of its simplicity and partly because It hasn't been tuned; selection of hyperparameters is very important in machine learning modelling."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Feed-forward neural network with two hidden layers"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You'll now build a proper feed-forward neural net with two hidden layers."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sess1 = tf.Session()\n",
"\n",
"num_predictors = len(training_predictors_tf.columns)\n",
"num_classes = len(training_classes_tf.columns)\n",
"\n",
"feature_data = tf.placeholder(\"float\", [None, num_predictors])\n",
"actual_classes = tf.placeholder(\"float\", [None, 2])\n",
"\n",
"weights1 = tf.Variable(tf.truncated_normal([24, 50], stddev=0.0001))\n",
"biases1 = tf.Variable(tf.ones([50]))\n",
"\n",
"weights2 = tf.Variable(tf.truncated_normal([50, 25], stddev=0.0001))\n",
"biases2 = tf.Variable(tf.ones([25]))\n",
" \n",
"weights3 = tf.Variable(tf.truncated_normal([25, 2], stddev=0.0001))\n",
"biases3 = tf.Variable(tf.ones([2]))\n",
"\n",
"hidden_layer_1 = tf.nn.relu(tf.matmul(feature_data, weights1) + biases1)\n",
"hidden_layer_2 = tf.nn.relu(tf.matmul(hidden_layer_1, weights2) + biases2)\n",
"model = tf.nn.softmax(tf.matmul(hidden_layer_2, weights3) + biases3)\n",
"\n",
"cost = -tf.reduce_sum(actual_classes*tf.log(model))\n",
"\n",
"train_op1 = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost)\n",
"\n",
"init = tf.global_variables_initializer()\n",
"sess1.run(init)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Again, you'll train the model over 30,000 iterations using the full dataset each time. Every thousandth iteration, you'll assess the accuracy of the model on the training data to assess progress."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"correct_prediction = tf.equal(tf.argmax(model, 1), tf.argmax(actual_classes, 1))\n",
"accuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\"))\n",
"\n",
"for i in range(1, 30001):\n",
" sess1.run(\n",
" train_op1, \n",
" feed_dict={\n",
" feature_data: training_predictors_tf.values, \n",
" actual_classes: training_classes_tf.values.reshape(len(training_classes_tf.values), 2)\n",
" }\n",
" )\n",
" if i%5000 == 0:\n",
" print(i, sess1.run(\n",
" accuracy,\n",
" feed_dict={\n",
" feature_data: training_predictors_tf.values, \n",
" actual_classes: training_classes_tf.values.reshape(len(training_classes_tf.values), 2)\n",
" }\n",
" ))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A significant improvement in accuracy with the training data shows that the hidden layers are adding additional capacity for learning to the model.\n",
"\n",
"Looking at precision, recall, and accuracy, you can see a measurable improvement in performance, but certainly not a [step function](https://wikipedia.org/wiki/Step_function). This indicates that we're likely reaching the limits of this relatively simple feature set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"feed_dict= {\n",
" feature_data: test_predictors_tf.values,\n",
" actual_classes: test_classes_tf.values.reshape(len(test_classes_tf.values), 2)\n",
"}\n",
"\n",
"tf_confusion_metrics(model, actual_classes, sess1, feed_dict)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Inference"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# run inference on first 5 samples in the test set\n",
"sess1.run(\n",
" [model, actual_classes], \n",
" feed_dict={\n",
" feature_data: test_predictors_tf.values[:5], \n",
" actual_classes: test_classes_tf.values[:5].reshape(len(test_classes_tf.values[:5]), 2)\n",
" }\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The model seems to be right for all five days (if treshold probability is set at 0.5)"
]
},
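{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of applying that 0.5 threshold (using the tensors and test slices already defined above): the first column of the softmax output is the probability of a positive log return, so compare it against 0.5 and check the result against the actual labels."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative only: threshold the 'positive' probability at 0.5 for the first 5 test samples.\n",
"probs, actuals = sess1.run(\n",
" [model, actual_classes],\n",
" feed_dict={\n",
" feature_data: test_predictors_tf.values[:5],\n",
" actual_classes: test_classes_tf.values[:5].reshape(len(test_classes_tf.values[:5]), 2)\n",
" })\n",
"predicted_positive = (probs[:, 0] >= 0.5).astype(int)\n",
"actual_positive = actuals[:, 0].astype(int)\n",
"print(predicted_positive)\n",
"print(actual_positive)"
]
},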
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# investigate the close values of the corresponding days\n",
"closing_data['snp_close'][training_set_size-1+7:training_set_size+7+5]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"index = closing_data.index.get_loc(\"2014-08-12\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"index_test_set = index - 7 - training_set_size"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"test_predictors_tf.values[index_test_set].shape"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sess1.run(\n",
" [tf.argmax(model, 1), tf.argmax(actual_classes,1)], \n",
" feed_dict={\n",
" feature_data: np.expand_dims(test_predictors_tf.values[index_test_set], axis=0), \n",
" actual_classes: np.expand_dims(test_classes_tf.values[index_test_set], axis=0)\n",
" }\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"we will re-use this example to unit test our serving component later on"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}