Building an AI Financial Market Model – Lesson II

In this tutorial I want to show you how to use MultiObjective Feature Selection (MOFS) in RapidMiner. It's a great technique for simultaneously reducing your attribute set and maximizing your performance (hence: MultiObjective). This feature selection process can be run over and over again for your AI Financial Market Model, should it begin to drift.

Load in the Process from Tutorial One

Start by reading the Building an AI Financial Market Model – Lesson 1 post. At the bottom of that post you can download the RapidMiner process.

Add an Optimize Selection (Evolutionary) operator

The data that we pass through the process contains the adjusted closing prices of the S&P500, 10 Year Bond Yield, and the Philadelphia Gold Index. Feature Selection lets us choose which of these attributes contribute the most to the overall model performance, and which really don't matter at all.

To do that, we need to add an Optimize Selection (Evolutionary) operator.


Why do you want to do MultiObjective Feature Selection? There are many reasons, but the most important is that a smaller data set reduces your training time and the consumption of your computer's resources.

When we execute this process, you can see that the Optimize Selection (Evolutionary) operator starts evaluating each attribute. At first, it measures the performance of ALL attributes and it looks like it’s all over the map.


Performance is measured with a Cross Validation operator embedded inside the subprocess.




The Cross Validation operator uses a Gradient Boosted Tree algorithm to analyze the permuted inputs and measure their performance in an iterative manner. Attributes are removed if they don't provide an increase in performance.
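Since RapidMiner is a visual tool there's no code behind this step to show, but the core idea is easy to sketch in Python with scikit-learn. This is an analogue, not RapidMiner's actual internals: the evolutionary search repeatedly scores candidate feature subsets, and the score for each subset is the cross-validated accuracy of a Gradient Boosted Tree trained on just those features. The data and feature count below are synthetic, chosen only to make the sketch runnable.

```python
# Sketch of the inner loop of evolutionary feature selection:
# score a candidate feature subset with a cross-validated
# Gradient Boosted Tree. Synthetic data, illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 6))             # 6 candidate attributes
y = (X[:, 0] + X[:, 2] > 0).astype(int)   # only features 0 and 2 matter

def score_subset(mask):
    """Mean cross-validated accuracy using only the features where mask is True."""
    cols = [i for i, keep in enumerate(mask) if keep]
    if not cols:
        return 0.0
    model = GradientBoostingClassifier(random_state=0)
    return cross_val_score(model, X[:, cols], y, cv=5).mean()

all_features = score_subset([True] * 6)
informative = score_subset([True, False, True, False, False, False])
print(f"all 6 features:        {all_features:.2f}")
print(f"features 0 and 2 only: {informative:.2f}")
```

An evolutionary search like Optimize Selection (Evolutionary) mutates and recombines these True/False masks over many generations, keeping the subsets that score best.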


MultiObjective Feature Selection Results

From running this process, we see that the following attributes provide the best performance over 25 iterations.



Note: We chose to have a minimum of 5 attributes returned in the parameter configuration. The selected ones have a weight of 1.

The resulting performance for this work is below.


The overall accuracy was 66%. In the end, predicting an UP trend was pretty decent, but predicting a DOWN trend was not so good.

A likely reason for this poor performance is a mistake I made here on purpose: I used a Cross Validation operator instead of a Sliding Window Validation operator.

The Sliding Window Validation operator is used to backtest and train a time series model in RapidMiner and we’ll explain the concepts of Windowing and Sliding Window Validation in the next Lesson.
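If you want a preview of why this matters, here is a hedged scikit-learn sketch (TimeSeriesSplit is a rough analogue of Sliding Window Validation, not RapidMiner's implementation). A sliding window always trains on the past and tests on the future; plain k-fold cross validation does not, which lets the model peek ahead on time series data.

```python
# Compare split behavior: TimeSeriesSplit (sliding-window style)
# versus plain KFold on rows ordered in time.
import numpy as np
from sklearn.model_selection import KFold, TimeSeriesSplit

weeks = np.arange(20)  # pretend row i is week i

ts_ok = all(
    train.max() < test.min()
    for train, test in TimeSeriesSplit(n_splits=4).split(weeks)
)
kf_ok = all(
    train.max() < test.min()
    for train, test in KFold(n_splits=4).split(weeks)
)
print(f"sliding window keeps train before test: {ts_ok}")  # True
print(f"plain k-fold keeps train before test:   {kf_ok}")  # False
```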

Note: You can use the above method of MultiObjective Feature Selection for both time series and standard classification tasks.

That's the end of Lesson 2 for your first AI financial market model. You can download the above sample process here. To install it, just go to File > Import Process. Lesson 3 will be updated shortly.

This post is an update to my original 2007 YALE tutorials, revised for RapidMiner v7.0. In the original set of posts I used the term AI when I really meant Machine Learning.


Building an AI financial market model – Lesson I

Before you can begin building your own AI Financial Market Model (machine learned), you have to decide what software to use. Since I wrote this article in 2007, many new advances have been made in machine learning. Notably, the Python module Scikit-Learn came out and Hadoop was released into the wild.

Since I'm not overly skilled in coding and programming (I know enough to get by), I settled on RapidMiner. RapidMiner is a very simple visual programming platform that lets you drag and drop "operators" onto a design canvas. Each operator performs a specific type of task related to ETL, modeling, scoring, or extending the features of RapidMiner.

There is a slight learning curve, but it's not hard if you follow along with this tutorial!

The AI Financial Market Model

First download RapidMiner Studio and then get your market data (OHLCV prices), merge it together, transform the dates, figure out the trends, and so forth. Originally these tutorials built a simple classification type of model that looked to see if your trend was classified as being in an "up-trend" or a "down-trend." The fallacy was that they didn't take into account the time series nature of the market data, and the resulting model was pretty bad.

For this revised tutorial we’re going to do a few things.

  1. Install the Finance and Economics, and Series Extensions
  2. Select the S&P500 weekly OHLCV data for a range of 5 years. We’ll visualize the closing prices and auto-generate a trend label (i.e. Up or Down)
  3. We’ll add in other market securities (i.e. Gold, Bonds, etc) and see if we can do some feature selection
  4. Then we'll build a forecasting model using some of the new algorithms included in RapidMiner v7.2

All processes will be shared and included in these tutorials. I welcome your feedback and comments.

The Data

We’re going to use the adjusted closing prices of the S&P500, 10 Year Bond Yield, and the Philadelphia Gold Index from September 30, 2011 through September 20, 2016.
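For readers who prefer code, here is a small pandas sketch of the same assembly step: three adjusted close series joined on their weekly dates. The tickers and prices below are invented stand-ins for illustration, not the actual downloaded data, and RapidMiner does this join visually rather than in Python.

```python
# Join three weekly adjusted-close series on their dates.
# Values are made up; column names mirror the renamed attributes.
import pandas as pd

dates = pd.date_range("2011-09-30", periods=4, freq="W-FRI")
sp500 = pd.DataFrame({"Date": dates, "GSPC_Adjusted_Close": [1131.4, 1155.5, 1224.6, 1238.3]})
bonds = pd.DataFrame({"Date": dates, "TNX_Adjusted_Close": [1.92, 2.08, 2.25, 2.22]})
gold = pd.DataFrame({"Date": dates, "XAU_Adjusted_Close": [181.0, 185.2, 190.7, 188.4]})

data = sp500.merge(bonds, on="Date").merge(gold, on="Date")
print(data.shape)  # (4, 4): one Date column plus three price columns
```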

The raw data looks like this:


We renamed the columns (attributes) to be more readable by removing the "^" character from the stock symbols.

Next we visualized the adjusted weekly closing price of the S&P500 using the built in visualization tools of RapidMiner.


The next step will be to transform the S&P500 adjusted closing price into Up and Down trend labels. To do this automatically we have to install the RapidMiner Series Extension and use the Classify by Trend operator. The Classify by Trend operator can only work if you set the SP500_Adjusted_Close column (attribute) to the Label role.

The Label role in RapidMiner is your target variable. In RapidMiner all data columns come in with "Regular" roles, and the "Label" role is considered a special role. It's special in the sense that it's the target you want the machine-learned model to learn. To achieve this you'll use the Set Role operator. In the sample process I share below, I also set the Date to the ID role. The ID role is just like a primary key; it's useful when looking up records but doesn't get built into the model.
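As a rough Python analogue of the Classify by Trend step (the actual operator is configurable and may behave differently), you can label each week by whether the adjusted close rose versus the previous week:

```python
# Label each week "Up" if the close rose versus the prior week,
# otherwise "Down". Prices are invented for illustration.
import pandas as pd

close = pd.Series([100.0, 102.5, 101.0, 104.0, 103.5])
label = (close.diff() > 0).map({True: "Up", False: "Down"})
label.iloc[0] = None  # first week has no previous close to compare
print(label.tolist())  # [None, 'Up', 'Down', 'Up', 'Down']
```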

The final data transformation looks like this:


The GSPC_Adjusted_Close column is now transformed and renamed to the label column.

The resulting process looks like this:


That’s the end of Lesson 1 for your first AI financial market model. You can download the above sample process here. To install it, just go to File > Import Process. Lesson 2 will be updated shortly.

This post is an update to my original 2007 YALE tutorials, revised for RapidMiner v7.0. In the original set of posts I used the term AI when I really meant Machine Learning.

Search Engine Optimization (SEO) and Data Mining

I posted about the power of Data Mining when analyzing your blog's traffic and how to maximize your Google Adwords advertising relative to your Adsense earnings, but I forgot to mention one critical thing! Search Engine Optimization (SEO)!

SEO is just a process of organizing your blog, or website, in such a way that you'll end up at the top whenever an Internet user searches for something that is relevant to your site. If you advertise your blog using a Pay Per Click method, like Google Adwords, then being ranked at the top of searches is really important, as Ms. Danielle points out!

It won’t come as a shock to readers of this blog that Data Mining can really help with your SEO! Techniques like associative analysis and cluster data mining are great ways to discover who’s clicking what on your site. Associative analysis is used to estimate the probability of whether a person will purchase a product given that they own a particular product or group of products.
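The arithmetic behind associative analysis is simple: the confidence of "A implies B" is P(B | A) = count(A and B) / count(A). Here is a toy sketch with invented cart data; the product names are illustrative, not from any real site log.

```python
# Confidence of the rule "camera => sd_card" over toy cart data.
carts = [
    {"camera", "sd_card"},
    {"camera", "tripod"},
    {"camera", "sd_card", "tripod"},
    {"tripod"},
]
has_camera = [c for c in carts if "camera" in c]
has_both = [c for c in has_camera if "sd_card" in c]
confidence = len(has_both) / len(has_camera)
print(f"P(sd_card | camera) = {confidence:.2f}")  # 0.67
```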

Cluster data mining, on the other hand, can identify the profile or group of customers that are associated with a particular type of Web site [via Data Mining and Business Productivity, by Stephan Kudyba]. These two techniques are critical if you want to maximize any e-business!

Now here's the caveat: before you can start data mining your site, you need to spend a few months gathering website statistics and data. However, this doesn't preclude you from starting to optimize your website for better web searching. Here are 5 tips that I've been using that have had a great traffic impact in my blog's short life.

5 SEO Tips:

  1. Write valuable content or offer a valuable service. I can’t stress this enough;
  2. If you run a blog, spend considerable time selecting the right categories; they help search engines index your site effectively. Over time I've modified my category list to create relevant descriptions for my blog posts;
  3. Create a Crawl List and XML sitemap for Google. Doing this lets the Google spider index your site easier and faster;
  4. Use Google Webmaster tool to manage your sitemap and clean out old URLs;
  5. Try to keep the size of your content under 30k so your site can load in under 8 seconds on a 56.6k modem.
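Tip 5 is just arithmetic; here is a quick back-of-the-envelope check (ignoring protocol overhead and latency):

```python
# A 30 KB page over a 56.6 kbit/s modem, idealized transfer time.
page_bytes = 30 * 1024
modem_bits_per_second = 56_600
seconds = page_bytes * 8 / modem_bits_per_second
print(f"{seconds:.1f} s")  # about 4.3 s, comfortably under 8 s
```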

Update: I now use a great Python package called PySEOAnalyzer to review how the content on my blog is working. It’s open source and can be downloaded here.