Mean Reversion Trading Process in RapidMiner

Lately I’ve been thinking about becoming more active in trading again. I was reviewing some strategies and found a mean reversion trading strategy that uses Python here, so I recreated it in RapidMiner.

The Process

The process is quite simple. You do the following:

1. Load in stock quote data via CSV;
2. Calculate daily returns;
3. Calculate a 20 day moving average;
4. Calculate a rolling 90 day standard deviation;
5. Generate trading criteria per the article;
6. Wrap it all together and compare Buy & Hold vs. the Buy Signals.

Mind you, this doesn’t include commission costs and slippage. I suspect that once I add those in, the Buy and Hold strategy will come out on top.
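The six steps above can be sketched in a few lines of pandas. Note that the exact buy criterion here (close dipping more than one rolling standard deviation below the 20-day MA) is my assumption standing in for the article’s precise rules, and the column names assume a standard Yahoo Finance CSV:

```python
import pandas as pd

def mean_reversion_signals(prices: pd.Series,
                           ma_window: int = 20,
                           std_window: int = 90) -> pd.DataFrame:
    """Steps 2-6 of the process: returns, moving average, rolling std, signals."""
    df = pd.DataFrame({"close": prices})
    df["returns"] = df["close"].pct_change()               # step 2: daily returns
    df["ma"] = df["close"].rolling(ma_window).mean()       # step 3: 20-day MA
    df["std"] = df["returns"].rolling(std_window).std()    # step 4: 90-day rolling std
    # Step 5 (assumed criterion): go long when the close dips more than one
    # rolling std below the moving average; otherwise stay out of the market
    df["signal"] = (df["close"] < df["ma"] * (1 - df["std"])).astype(int)
    # Step 6: strategy return uses yesterday's signal to avoid look-ahead bias,
    # so you can compare the cumulative strategy return against buy-and-hold
    df["strategy"] = df["signal"].shift(1) * df["returns"]
    return df
```

To run it against real data, load the Yahoo Finance download with something like `mean_reversion_signals(pd.read_csv("quotes.csv")["Close"])` and compare `(1 + df[["returns", "strategy"]]).cumprod()`.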

PS: to test this, just go to Yahoo Finance, download historical quote data for a stock, and update the file path in the Read CSV operator. Use at least a two year time period.

Next Steps

I still have several ‘kinks’ to work out, but I can definitely see opportunities for optimization here, such as:

  1. Why use a rolling 90 day window? Use parameter optimization to vary that value from 50 to 100.
  2. Why use a 20 day moving average? You could vary it between a 10 and a 30 day MA.
  3. Write a Python script to download EOD stock data and then have RapidMiner loop through it.
  4. Write a commission and slippage subprocess to see whether this method is really profitable or not.
  5. Offload the process to a RapidMiner Server and have it spit out trading recommendations on a daily basis.
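Items 1 and 2 could be prototyped as a brute-force grid search before wiring up RapidMiner’s parameter optimization operator. A sketch in pandas, assuming a simple dip-below-the-moving-average buy criterion stands in for the article’s exact rules:

```python
import itertools

import pandas as pd

def total_return(prices: pd.Series, ma_window: int, std_window: int) -> float:
    """Cumulative strategy return for one parameter combination (assumed
    criterion: buy when close < MA * (1 - rolling std of returns))."""
    returns = prices.pct_change()
    ma = prices.rolling(ma_window).mean()
    std = returns.rolling(std_window).std()
    signal = (prices < ma * (1 - std)).astype(int)
    # Lag the signal one day and compound the resulting daily returns
    return float((1 + signal.shift(1) * returns).prod())

def optimize(prices: pd.Series) -> tuple:
    """Vary the MA window over {10, 20, 30} and the std window over 50-100."""
    grid = itertools.product([10, 20, 30], range(50, 101, 10))
    results = {(m, s): total_return(prices, m, s) for m, s in grid}
    return max(results, key=results.get)  # best (ma_window, std_window) pair
```

A real optimization pass should score parameters on out-of-sample data, otherwise this just overfits the backtest window.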

Building a Machine Learning Framework from Scratch

Great article by Florian Cäsar on how his team developed a new machine learning framework. From scratch. In 491 steps!

He sums up the entire process in this great quote:

From images, text files, or your cat videos, bits are fed to the data pipeline that transforms them into usable data chunks and in turn to data sets,
which are then fed in small pieces to a trainer that manages all the training and passes it right on to the underlying neural network,
which consists of many underlying neural network layers connected through an arbitrarily linear or funky architecture,
which consist of many underlying neurons that form the smallest computational unit and are nudged in the right direction according to the trainer’s optimiser,
which takes the network and the transient training data in the shape of layer buffers, marks the parameters it can improve, runs every layer, and calculates a “how well did we do” score based on the calculated and correct answers from the supplied small pieces of the given dataset according to the optimiser’s settings, 
which computes the gradient of every parameter with respect to the score and then nudges the individual neurons correspondingly,
which then is run again and again until the optimiser reports results that are good enough as set in a rich criteria and hook system,
which is based on global and local nested parameter-identifier-registries that contain the shared parameters and distribute them safely to all workers
which are the actual workhorses of the training process that do as their operator says using individual and separate mathematical backends, 
which use the layer-defined placeholder computation graphs and put in the raw data and then execute it on their computational backend,
which are all also managed by the operator that distributes the worker’s work as needed and configured and also functions as a coordinator to the owning trainer,
which connects the network, the optimiser, the operator, the initialisers, 
which tell the trainer with which distribution to initialise what parameters, which work similar to hooks that act as a bridge between them all and communicate with external things using the Sigma environment,
which is the container and laid-back manager to everything that also supplies and runs these external things called monitors, 
which can be truly anything that makes use of the training data and
which finally display the learned funny cat image
… from the hooks from the workers from their operator from its assigned network from its dozens of layers from its millions of individual neurons derived from some data records from data chunks from data sets from data extractors.

In other words, they created new software called Sigma.Core.

Sigma.Core appears to be a Windows-based machine learning framework focused on deep learning. Its feature list is small but impressive:

  • Uses different deep learning layers (e.g. dropout, recurrent, etc.)
  • Uses both linear and nonlinear networks
  • Four (4) different optimizers
  • Has hooks for storing, restoring checkpoints, CPU and runtime metrics
  • Runs on single and multiple CPUs, plus CUDA GPUs
  • Native Windows GUI
  • Functional automatic differentiation

How long did it take?

According to Florian, it took about 700 hours of intro/research, 2,000 hours of development, and 2 souls sold to the devil. That’s over a full year of work for one person, assuming a standard 40 hour work week!

How StockTwits Uses Machine Learning

Fascinating behind-the-scenes interview with StockTwits’ Senior Data Scientist Garrett Hoffman. He shares great tidbits on how StockTwits uses machine learning for sentiment analysis. I’ve summarized the highlights below:

  • Idea generation is a huge barrier for active trading
  • Next gen of traders uses social media to make decisions
  • Garrett solves data problems and builds features for the StockTwits platform
  • This includes: production data science, product analytics, and insights research
  • Understanding social dynamics makes for a better user experience
  • Focus is to understand social dynamics of StockTwits (ST) community
  • Focuses on what’s happening inside the ST community
  • ST’s market sentiment model helps users with decision making
  • Users ‘tag’ content for bullish or bearish classes
  • Only 20 to 30% of content is tagged
  • Using ST’s market sentiment model increases coverage to 100%
  • For Data Science work, Python Stack is used
  • Use: Numpy, SciPy, Pandas, Scikit-Learn
  • Jupyter Notebooks for research and prototyping
  • Flask for API deployment
  • For Deep Learning, uses Tensorflow with AWS EC2 instances
  • Can spin up GPUs as needed
  • Deep Learning methods used are Recurrent Neural Nets, Word2Vec, and Autoencoders
  • Stays abreast of new machine learning techniques from blogs, conferences and Twitter
  • Follows Twitter accounts from Google, Spotify, Apple, and small tech companies
  • One area ST wants to improve on is DevOps around Data Science
  • Bridge the gap between research/prototype phase and embedding it into tech stack for deployment
  • Misconception that complex solutions are best
  • Complexity is only OK if it leads to deeper insight
  • Simple solutions are best
  • Future long-term ideas: use AI around natural language
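The tagged-content idea above (train on the 20–30% of user-tagged messages, then score the rest to push sentiment coverage toward 100%) can be sketched with scikit-learn. TF-IDF plus logistic regression is my stand-in here, not StockTwits’ actual unpublished model, and the sample messages are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical user-tagged messages (real StockTwits data is not public)
tagged = [
    ("$AAPL breaking out, loading up here", "bullish"),
    ("new highs coming for $TSLA", "bullish"),
    ("$SPY looks toppy, selling into strength", "bearish"),
    ("earnings miss incoming, shorting $NFLX", "bearish"),
]
texts, labels = zip(*tagged)

# Fit a simple text classifier on the tagged minority of messages
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(list(texts), list(labels))

# Score untagged messages so every message gets a bullish/bearish label
untagged = ["$AAPL loading up before earnings"]
predictions = model.predict(untagged)
```

In production, ST reportedly uses deeper methods (RNNs, Word2Vec, autoencoders via TensorFlow), but the workflow shape — learn from the tagged fraction, label everything else — is the same.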