
Building an AI financial market model – Lesson I

Before you can begin building your own AI financial market model (machine learned, that is), you have to decide what software to use. Since I first wrote this article in 2007, many advances have been made in machine learning; notably, the Python module scikit-learn came out and Hadoop was released into the wild.

I'm not overly skilled in coding and programming (I know enough to get by), so I settled on RapidMiner. RapidMiner is a very simple visual programming platform that lets you drag and drop "operators" onto a design canvas. Each operator performs a specific task related to ETL, modeling, scoring, or extending RapidMiner's features.

There is a slight learning curve, but it's not hard to learn if you follow along with this tutorial!

The AI Financial Market Model

First download RapidMiner Studio, then get your market data (OHLCV prices), merge it together, transform the dates, figure out the trends, and so forth. Originally these tutorials built a simple classification model that looked to see whether your trend was classified as an "up-trend" or a "down-trend." The flaw was that they didn't take into account the time series nature of the market data, and the resulting model was pretty bad.

For this revised tutorial we’re going to do a few things.

  1. Install the Finance and Economics and the Series extensions
  2. Select the S&P500 weekly OHLCV data for a range of 5 years. We’ll visualize the closing prices and auto-generate a trend label (i.e. Up or Down)
  3. We’ll add in other market securities (i.e. Gold, Bonds, etc) and see if we can do some feature selection
  4. Then we'll build a forecasting model using some of the new H2O.ai algorithms included in RapidMiner v7.2

All processes will be shared and included in these tutorials. I welcome your feedback and comments.

The Data

We’re going to use the adjusted closing prices of the S&P500, 10 Year Bond Yield, and the Philadelphia Gold Index from September 30, 2011 through September 20, 2016.
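
If you'd rather pull the same data in Python instead of RapidMiner, here's a minimal sketch using the yfinance package. This is my substitution, not the original tutorial's method (that uses the Finance and Economics extension), and the Yahoo ticker symbols are my assumption for these three series.

```python
import yfinance as yf

# Assumed Yahoo Finance tickers: ^GSPC = S&P 500, ^TNX = 10-Year Treasury
# yield, ^XAU = Philadelphia Gold and Silver Index
tickers = ["^GSPC", "^TNX", "^XAU"]

# Weekly bars over the same date range used in this tutorial
data = yf.download(tickers, start="2011-09-30", end="2016-09-20",
                   interval="1wk", auto_adjust=False)["Adj Close"]

# Mimic the renaming step below: strip the "^" so the columns are readable
data.columns = [f"{t.lstrip('^')}_Adjusted_Close" for t in data.columns]

print(data.head())
```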

The raw data looks like this:

[Image: ai1-raw-data]

We renamed the columns (attributes) to something human-readable by removing the "^" character from the stock symbols.

Next we visualized the adjusted weekly closing price of the S&P500 using the built-in visualization tools of RapidMiner.

[Image: ai1-time-series]

The next step will be to transform the S&P500 adjusted closing price into Up and Down trend labels. To do this automatically, we have to install the RapidMiner Series extension and use the Classify by Trend operator. The Classify by Trend operator only works if you set the GSPC_Adjusted_Close column (attribute) to the Label role.

The Label role in RapidMiner is your target variable. In RapidMiner, all data columns come in with the "Regular" role, and the "Label" role is considered a special role. It's special in the sense that it's what you want the machine-learned model to learn to predict. To achieve this you use the Set Role operator. In the sample process I share below, I also set the Date column to the ID role. The ID role works just like a primary key: it's useful for looking up records but doesn't get built into the model.
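
For readers following along in Python, here's a rough approximation of what Classify by Trend plus Set Role accomplish, sketched with pandas. It assumes the `data` frame from the earlier snippet, with the Date as the index playing the ID role; it is my approximation, not the operator's exact internals.

```python
import pandas as pd

# Assuming `data` holds the weekly series from the snippet above,
# indexed by Date (analogous to RapidMiner's ID role)
close = data["GSPC_Adjusted_Close"]

# Label each week "Up" if the close rose versus the prior week, else "Down".
# (The first week has no prior close, so it falls into "Down" by default.)
data["label"] = (close.diff() > 0).map({True: "Up", False: "Down"})

print(data[["GSPC_Adjusted_Close", "label"]].head())
```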

The final data transformation looks like this:

[Image: ai1-transformed-data]

The GSPC_Adjusted_Close column is now transformed and renamed to the label column.

The resulting process looks like this:

[Image: ai1-process]

That’s the end of Lesson 1 for your first AI financial market model. You can download the above sample process here. To install it, just go to File > Import Process. Lesson 2 will be updated shortly.

This is an update to my original 2007 YALE tutorials, revised for RapidMiner v7.0. In the original set of posts I used the term AI when I really meant machine learning.

Search Engine Optimization (SEO) and Data Mining

I posted about the power of data mining when analyzing your blog's traffic and how to maximize your Google AdWords advertising relative to your AdSense earnings, but I forgot to mention one critical thing: Search Engine Optimization (SEO)!

SEO is just a process of organizing your blog, or website, in such a way that you end up at the top whenever an Internet user searches for something relevant to your site. If you advertise your blog using a pay-per-click method like Google AdWords, then being ranked at the top of searches is really important, as Ms. Danielle points out!

It won't come as a shock to readers of this blog that data mining can really help with your SEO! Techniques like association analysis and cluster data mining are great ways to discover who's clicking what on your site. Association analysis is used to estimate the probability that a person will purchase a product given that they own a particular product or group of products.

Cluster data mining, on the other hand, can identify the profile or group of customers that are associated with a particular type of Web site [via Data Mining and Business Productivity, by Stephan Kudyba]. These two techniques are critical if you want to maximize any e-business!
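
To make association analysis concrete, here's a toy sketch in Python. The page names and visit sets below are made up purely for illustration; it simply estimates the confidence of a rule "viewed A, so will view B" from co-occurrence counts.

```python
# Toy association analysis: estimate P(views B | viewed A) from visit logs.
# The page names and visits are hypothetical.
visits = [
    {"seo-tips", "data-mining"},
    {"seo-tips", "adwords"},
    {"data-mining", "neural-nets"},
    {"seo-tips", "data-mining", "neural-nets"},
]

a, b = "seo-tips", "data-mining"
has_a = [v for v in visits if a in v]
confidence = sum(1 for v in has_a if b in v) / len(has_a)

# Confidence of the rule a -> b, i.e. P(b | a)
print(f"P({b} | {a}) = {confidence:.2f}")  # 2 of 3 visits -> 0.67
```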

Now here's the caveat: before you can start data mining your site, you'll need to spend a few months gathering website statistics and data. However, this doesn't preclude you from starting to optimize your website for better web searching right away. Here are 5 tips that I've been using that have had a great impact on traffic in my blog's short life.

5 SEO Tips:

  1. Write valuable content or offer a valuable service. I can’t stress this enough;
  2. If you run a blog, spend considerable time selecting the right categories, those help search engines effectively index your site. Over time I’ve modified my category list to create relevant descriptions for my blog posts;
  3. Create a Crawl List and XML sitemap for Google. Doing this lets the Google spider index your site more easily and quickly (a minimal sitemap sketch follows this list);
  4. Use Google Webmaster tool to manage your sitemap and clean out old URLs;
  5. Try to keep the size of your content under 30k so your pages can load in under 8 seconds on 56.6k modems.
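
To illustrate tip 3: a sitemap is just a small XML file listing your URLs. Here's a minimal Python sketch that writes one; the URLs are placeholders for your own pages.

```python
# Minimal sitemap.xml generator; the URLs are placeholders for illustration.
from xml.sax.saxutils import escape

urls = [
    "https://www.example.com/",
    "https://www.example.com/blog/seo-and-data-mining/",
]

entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n"
    "</urlset>"
)

with open("sitemap.xml", "w") as f:
    f.write(sitemap)
```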

Update: I now use a great Python package called PySEOAnalyzer to review how the content on my blog is working. It’s open source and can be downloaded here.
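
Here's a rough usage sketch. The `seoanalyzer` module name and the `analyze()` call reflect recent versions of the package as I understand them, so check the project's README if yours differs.

```python
# Rough PySEOAnalyzer usage sketch; entry point may vary by version.
from seoanalyzer import analyze

results = analyze("https://www.neuralmarkettrends.com/")

# The result is a plain dict, so it's easy to inspect page-level findings
for page in results.get("pages", []):
    print(page.get("url"), page.get("title"))
```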

Blog Traffic Analysis Using Data Mining

Yes, you read correctly. You can data mine your blog's traffic using a simple web statistics data collector like Google AdWords. I'm doing it right now for Neural Market Trends, and I'm finding out some very interesting information.

I found out that:

  • Tuesdays are my busiest days.
  • The optimal number of posts per day is 2.
  • The most popular posts happen to be the ones about Forex and YALE.

A lot of the above information I gleaned from a Google AdWords data dump that I put into an Excel pivot chart report. There's no neural net magic behind that, and you could do it quite easily yourself. It's when I built a neural net and mined the data that I discovered some unique relationships. These relationships should help me tweak my content to better serve my readers.
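
If you'd rather skip Excel, the same pivot-chart step is easy to reproduce in pandas. The column names below are hypothetical, since the original AdWords dump isn't shown.

```python
import pandas as pd

# Hypothetical traffic export; a real AdWords dump has different columns
df = pd.DataFrame({
    "date": pd.to_datetime(["2007-06-04", "2007-06-05", "2007-06-05"]),
    "visits": [120, 210, 190],
})
df["weekday"] = df["date"].dt.day_name()

# Equivalent of the Excel pivot chart: total visits by day of week
pivot = df.pivot_table(values="visits", index="weekday", aggfunc="sum")
print(pivot)
```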

One of the things I learned from this little exercise is that I have a select group of readers on Saturdays. Welcome! 🙂