Building an AI financial market model – Lesson IV

In Lesson 3, I introduced the Sliding Window Validation operator to test how well we can forecast a trend in a time series. Our initial results were poor: we forecast the trend with an average accuracy of only 55.5%, which is fractionally better than a simple coin flip! In this updated lesson I will introduce Parameter Optimization in RapidMiner to see if we can forecast the trend better.

Parameter Optimization

We begin with the same process from Lesson 3, but we introduce a new operator called Optimize Parameters (Grid). We also do some housekeeping to get this process ready for production.

The Optimize Parameters (Grid) operator lets you do some amazing things: it varies, within limits you define, the parameter values of different operators. Any operator that you put inside this operator's subprocess can have its parameters automatically iterated over while the overall performance is measured. This is a great way to fine-tune and optimize models for your analysis and, ultimately, for production.

For our process, we want to vary the training window width, testing window width, and step width of the Sliding Window Validation operator; the C and gamma parameters of the SVM machine learning algorithm; and the forecasting horizon of the Forecast Trend Performance operator. We want to test all combinations and determine which one gives us the best-tuned trend prediction.

Note: I run a weekly optimization process for my volatility trend predictions. I've noticed that, depending on market activity, the training window width of the Sliding Window Validation operator needs to be tweaked between 8 and 12 weeks.
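
RapidMiner configures all of this through the GUI, but if you think in code, the idea behind the grid search is simple. Below is a rough Python sketch (scikit-learn, not RapidMiner) of the concept: enumerate every combination of window and SVM parameters, score each one with a sliding-window backtest, and keep the winner. The synthetic prices, the helper function, and the parameter ranges are made up for illustration only.

```python
import numpy as np
from itertools import product
from sklearn.svm import SVC

rng = np.random.default_rng(42)
prices = np.cumsum(rng.normal(0, 1, 400)) + 100   # synthetic price series stand-in

def sliding_window_accuracy(series, train_w, test_w, step, horizon, C, gamma, window=5):
    """Average directional accuracy over all sliding train/test windows."""
    # Lagged features (window) and an up/down label 'horizon' steps ahead.
    X = np.array([series[i:i + window] for i in range(len(series) - window - horizon)])
    y = (series[window + horizon:] > series[window:-horizon]).astype(int)
    scores, start = [], 0
    while start + train_w + test_w <= len(X):
        tr = slice(start, start + train_w)
        te = slice(start + train_w, start + train_w + test_w)
        if len(np.unique(y[tr])) > 1:              # SVC needs both classes in the training window
            model = SVC(C=C, gamma=gamma, kernel="rbf").fit(X[tr], y[tr])
            scores.append((model.predict(X[te]) == y[te]).mean())
        start += step
    return float(np.mean(scores)) if scores else 0.0

# the "grid": every combination of these values gets evaluated
grid = {
    "train_w": [20, 40], "test_w": [5, 10], "step": [4, 8],
    "horizon": [1, 3], "C": [0.1, 1, 10], "gamma": [0.01, 0.1],
}
best = max(
    (dict(zip(grid, combo)) for combo in product(*grid.values())),
    key=lambda p: sliding_window_accuracy(prices, **p),
)
print("best parameter combination:", best)
```

In RapidMiner, of course, you never write this loop yourself; you define the same ranges in the Edit Parameter Settings dialog and the operator does the enumeration for you.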

I also add a few Store operators to save the Performance and Weights from the Optimize Selection operator, and the Performance and Parameter Set from the Optimize Parameters (Grid) operator. We'll need this data for production.

Varying Parameters Automatically

Any operator you put inside the Optimize Parameters (Grid) operator can have its parameters varied automatically; you just have to select which ones and set minimum and maximum values. Click the Edit Parameter Settings button and you are presented with a list of available operators to vary. Select an operator and a list of its available parameters is shown. Then select the parameter you want and define its min/max values.

Note: If you select a lot of parameters to vary with very large max values, you could be optimizing for hours or even days. This operator consumes your computer's resources when you give it millions of combinations!

The Log File

The Log operator is handy in optimization because it lets us build a custom log that records the values of the parameters we're varying and the resulting forecast performance. You just name your column and select which operator and parameter should populate it.

Pro Tip: If you want to log the performance, make sure you select the Sliding Window Validation operator's performance port and NOT the Forecast Trend Performance operator. Why? Because the validation builds several models as it slides across the time series, and some of their individual performances are better than others. The Sliding Window Validation operator averages all of those results together, and that average is the measure you want!

This is a great way of seeing which initial parameter combinations are generating the best performance, and it can be used to visualize your best parameter combinations as well.
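
If you prefer to see the idea in code: the equivalent of the Log operator is simply collecting one row per parameter combination, with the averaged validation accuracy stored alongside the parameter values, so you can sort or chart it later. Here is a minimal pandas sketch; the accuracy numbers are random stand-ins for the real backtest scores.

```python
import random
from itertools import product
import pandas as pd

random.seed(0)
rows = []
for train_w, C, gamma in product([8, 10, 12], [0.1, 1, 10], [0.01, 0.1]):
    avg_accuracy = random.uniform(0.50, 0.62)      # stand-in for the sliding-window result
    rows.append({"train_w": train_w, "C": C, "gamma": gamma, "avg_accuracy": avg_accuracy})

log = pd.DataFrame(rows)
log.to_csv("optimization_log.csv", index=False)    # keep a record you can revisit or chart
print(log.sort_values("avg_accuracy", ascending=False).head())
```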

The Results

The results point to a parameter combination of:

  • Training Window Width: 10
  • Testing Window Width: 5
  • Step Width: 4
  • C: 0
  • Gamma: 0.1
  • Horizon: 3

This combination generates an average Forecast Trend accuracy of 61.5%, an improvement over the original 55.5%.

That’s the end of Lesson 4 for your first AI financial market model. You can download the above sample process here. To install it, just go to File > Import Process. Lesson 5 will be updated shortly.

This is an update to my original 2007 YALE tutorials, revised for RapidMiner v7+. In the original set of posts I used the term AI when I really meant Machine Learning.

Building an AI financial market model – Lesson III

In Lesson 2, I went over the concept of MultiObjective Feature Selection (MOFS). In this lesson we'll build on MOFS for our model, but we'll forecast the trend and measure its accuracy.

Revisiting MOFS

We learned in Lesson 2 that RapidMiner can simultaneously select the best features in your data set while maximizing the performance. We ran the process and the selected features are shown below.

[Image: mofs-list]

From here we want to feed the data into three new operators that are part of the Series extension. We will be using the Windowing, Sliding Window Validation, and Forecasting Performance operators.

These three operators are key to measuring the performance of your time series model. RapidMiner is really good at determining the directional accuracy of a time series but a bit rough when it comes to point forecasts. My personal observation is that it's futile to try to get a point forecast for an asset price; you have better luck with direction and volatility.

Our forecasting model will use a Support Vector Machine with an RBF kernel. Time series problems appear to benefit from this combination, and you can always check out these links for more info.


[Image: process]
Windowing the Data

RapidMiner allows you to do multivariate time series analysis, also known as a model-driven approach. This is different from a data-driven approach, such as ARIMA, and it allows you to use many different inputs to make a forecast. Of course, this means that point forecasting becomes very difficult when you have multiple inputs, but it makes directional forecasts more robust.

The model-driven approach in RapidMiner requires you to window your data. To do that you'll need the Windowing operator. This operator is often misunderstood, so I suggest you read my post in the community on how it works.

Tip: Another great reference on using RapidMiner for time series is here.

There are key parameters that you should be aware of, especially the window size, the step size, whether or not you create a label, and the horizon.

 

[Image: window-parameters]

When it comes to time series for the stock market, I usually choose a value of 5 for my window size. This can be 5 days if your data is daily, or 5 weeks if it's weekly. You can choose what you think is best.

The Step Size parameter tells the Windowing operator how far to move before creating the next window; by default it starts a new window at the very next example row it encounters. If it were set to two, the operator would move two examples ahead before making a new window.

Tip: The Series Representation parameter defaults to "encode_series_by_examples." You should leave this default if your time series data is arranged row by row. If each new value of your time series is in a new column (e.g. many columns and one row), then you should change it to "encode_series_by_attributes."
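
To make the windowing idea concrete, here is a small pandas sketch of what a windowed example set looks like: each row holds the last five observations plus a label taken one horizon step ahead. The column names and toy prices are illustrative; they are not what RapidMiner itself generates.

```python
import pandas as pd

window_size, step_size, horizon = 5, 1, 1
close = pd.Series([100, 101, 103, 102, 104, 106, 105, 107, 108, 110], name="close")

rows = []
for start in range(0, len(close) - window_size - horizon + 1, step_size):
    window = close.iloc[start:start + window_size]            # the last 5 observations
    label = close.iloc[start + window_size + horizon - 1]     # the value 'horizon' steps ahead
    rows.append(list(window) + [label])

cols = [f"close-{window_size - 1 - i}" for i in range(window_size)] + ["label"]
print(pd.DataFrame(rows, columns=cols))
```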

Sliding Validation

The Sliding Window Validation operator is what we use to backtest the time series. It operates differently from a Cross Validation because it creates a "time window" on your data, builds a model, and tests its performance before sliding to another time point in your time series.

[Image: sliding-window-parameters]

In our example we use a training and testing window width of 10 example rows, a step size of -1 (which means the step equals the size of the testing window), and a horizon of 1. The horizon is how far into the future we want to predict; in this case it's 1 example row.

There are some other interesting toggle parameters to choose from. The default is to average performances only, so your Forecast Trend Accuracy will be the average performance across all windows. If you toggle on "cumulative training," the Sliding Window Validation operator keeps adding the previous windows to the training set. This is handy if you want to see whether older time series data affects your performance going forward, BUT it makes training and testing very memory intensive.

Double-clicking on the Sliding Window Validation operator, we see the typical RapidMiner validation training and testing sides, where we embed our SVM, Apply Model, and Forecasting Performance operators. The Forecasting Performance operator is a special Series extension operator; you need it to forecast the trend on any time series problem.

 

[Image: sliding-window-guts]
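
For readers who think in code, here is a rough Python analogue of what the validation does with the embedded SVM: train an RBF-kernel SVM on one window of the windowed data, test it on the next block of rows, record the directional accuracy, slide forward, and average the results. The `cumulative` flag mimics the cumulative-training toggle by letting the training set grow instead of slide. The data and parameter values are stand-ins, not the lesson's actual process.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(0, 1, 300)) + 100    # synthetic price series stand-in
window, horizon = 5, 1

# windowed features and an up/down label 'horizon' steps after each window
X = np.array([prices[i:i + window] for i in range(len(prices) - window - horizon + 1)])
y = (prices[window + horizon - 1:] > prices[window - 1:len(prices) - horizon]).astype(int)

train_w, test_w, cumulative = 10, 10, False
scores, start = [], 0
while start + train_w + test_w <= len(X):
    tr_begin = 0 if cumulative else start          # cumulative training = expanding window
    train = slice(tr_begin, start + train_w)
    test = slice(start + train_w, start + train_w + test_w)
    if len(np.unique(y[train])) > 1:               # skip degenerate windows with only one class
        model = SVC(kernel="rbf", C=1.0, gamma=0.1).fit(X[train], y[train])
        scores.append((model.predict(X[test]) == y[test]).mean())
    start += test_w                                # step by the test width (step size -1 in the GUI)
print(f"average forecast trend accuracy: {np.mean(scores):.3f}")
```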

Forecast the Trend

Once we run the process and the analysis completes, we see that we have a 55.5% average accuracy at predicting the direction of the trend. Not great, but we can see whether optimizing the SVM parameters C and gamma gets better performance out of the model.

[Image: forecast-trend-accuracy]

In my next lesson I’ll go over how to do Optimization in RapidMiner to better forecast the trend.

That’s the end of Lesson 3 for your first AI financial market model. You can download the above sample process here. To install it, just go to File > Import Process. Lesson 4 will be updated shortly.

This is an update to my original 2007 YALE tutorials, revised for RapidMiner v7.0. In the original set of posts I used the term AI when I really meant Machine Learning.

Building an AI Financial Market Model – Lesson II

In this tutorial I want to show you how to use MultiObjective Feature Selection (MOFS) in RapidMiner. It's a great technique to simultaneously reduce your attribute set and maximize your performance (hence: MultiObjective). This feature selection process can be run over and over again for your AI Financial Market Model, should it begin to drift.

Load in the Process from Tutorial One

Start by reading the Building an AI Financial Market Model – Lesson 1 post. At the bottom of that post you can download the RapidMiner process.

Add an Optimize Selection (Evolutionary) operator

The data that we pass through the process contains the adjusted closing prices of the S&P 500, the 10 Year Bond Yield, and the Philadelphia Gold index. Feature Selection lets us choose which of these attributes contribute the most to the overall model performance, and which really don't matter at all.

To do that, we need to add an Optimize Selection (Evolutionary) operator.

[Image: ai2-process]

Why do you want to do MultiObjective Feature Selection? There are many reasons, but the most important is that a smaller data set speeds up your training by reducing the load on your computer's resources.

When we execute this process, you can see that the Optimize Selection (Evolutionary) operator starts evaluating each attribute. At first, it measures the performance of ALL attributes and it looks like it’s all over the map.

[Image: ai2-generation0]

It measures the performance with a Cross Validation operator embedded inside the subprocess.

[Image: ai2-crossvalidation]

[Image: ai2-gbt]

 

The Cross Validation operator uses a Gradient Boosted Trees algorithm to analyze the permuted inputs and measures their performance iteratively. Attributes are removed if they don't provide an increase in performance.

[Image: ai2-20generations]
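
As a code-level analogue (not the RapidMiner implementation), the sketch below shows a toy evolutionary selection loop in Python: each candidate is a true/false mask over the attributes, its fitness is the cross-validated accuracy of a gradient boosted tree trained on the selected columns, and the fittest masks are kept and mutated to form the next generation. The data set and parameter values are synthetic stand-ins.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, n_feats = 300, 8
X = rng.normal(size=(n, n_feats))
# only a couple of the synthetic attributes actually drive the label
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

def fitness(mask):
    """Cross-validated accuracy of a GBT on the attributes the mask keeps."""
    if mask.sum() == 0:
        return 0.0
    model = GradientBoostingClassifier(n_estimators=50, random_state=0)
    return cross_val_score(model, X[:, mask], y, cv=5).mean()

population = [rng.random(n_feats) < 0.5 for _ in range(8)]        # random initial masks
for generation in range(10):
    parents = sorted(population, key=fitness, reverse=True)[:4]   # keep the fittest masks
    children = []
    for p in parents:
        child = p.copy()
        flip = rng.integers(n_feats)               # mutation: toggle one attribute on/off
        child[flip] = not child[flip]
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print("selected attribute indices:", np.where(best)[0])
```

RapidMiner's operator is far more sophisticated, with crossover, selection schemes, and performance weighting, but the fitness idea, score each attribute subset with a cross-validated learner, is the same.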

MultiObjective Feature Selection Results

From running this process, we see that the following attributes provide the best performance over 25 iterations.

[Image: ai2-selectedfeatures]

[Image: ai2-parameters]

Note: We chose to have a minimum of 5 attributes returned in the parameter configuration. The selected ones have a weight of 1.

The resulting performance for this work is below.

[Image: ai2-performance]

The overall accuracy was 66%. In the end, predicting an UP trend worked pretty well, but the model was not so good at the DOWN trend.

A possible reason for this poor performance is that I purposely made a mistake here: I used a Cross Validation operator instead of a Sliding Window Validation operator.

The Sliding Window Validation operator is used to backtest and train a time series model in RapidMiner and we’ll explain the concepts of Windowing and Sliding Window Validation in the next Lesson.

Note: You can use the above method of MultiObjective Feature Selection for both time series and standard classification tasks.

That's the end of Lesson 2 for your first AI financial market model. You can download the above sample process here. To install it, just go to File > Import Process. Lesson 3 will be updated shortly.

This is an update to my original 2007 YALE tutorials, revised for RapidMiner v7.0. In the original set of posts I used the term AI when I really meant Machine Learning.