In this tutorial I want to show you how to use MultiObjective Feature Selection (MOFS) in RapidMiner. It's a great technique for simultaneously reducing your attribute set and maximizing your performance (hence: MultiObjective). You can rerun this feature selection process for your AI Financial Market Model whenever it begins to drift.
Load in the Process from Tutorial One
Start by reading the Building an AI Financial Market Model – Lesson 1 post. At the bottom of that post you can download the RapidMiner process.
Add an Optimize Selection (Evolutionary) operator
The data that we pass through the process contains the adjusted closing prices of the S&P 500, the 10 Year Bond Yield, and the Philadelphia Gold index. Feature Selection lets us choose which of these attributes contribute the most to overall model performance, and which really don't matter at all.
To do that, we need to add an Optimize Selection (Evolutionary) operator.
Why do you want to do MultiObjective Feature Selection? There are many reasons, but most important of all is that a smaller data set shortens your training time and reduces consumption of your computer's resources.
When we execute this process, you can see that the Optimize Selection (Evolutionary) operator starts evaluating each attribute. At first, it measures the performance of ALL attributes and it looks like it’s all over the map.
It measures performance with a Cross Validation operator embedded inside the subprocess.
The Cross Validation operator uses a Gradient Boosted Trees algorithm to analyze the permuted attribute subsets and measures their performance in an iterative manner. Attributes are removed if they don't provide an increase in performance.
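To make the idea concrete, here is a minimal sketch of evolutionary feature selection in Python. It assumes scikit-learn's GradientBoostingClassifier as a stand-in for RapidMiner's Gradient Boosted Trees operator and uses synthetic toy data; the population size, mutation rate, and generation count are illustrative, not RapidMiner's defaults.

```python
# Minimal sketch: a genetic algorithm searches binary attribute masks,
# scoring each subset with cross-validated gradient-boosted-tree accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy data: 8 candidate attributes, only a few actually informative.
X, y = make_classification(n_samples=200, n_features=8, n_informative=3,
                           n_redundant=2, random_state=0)

def fitness(mask):
    """Cross-validated accuracy of the attribute subset given by a 0/1 mask."""
    if mask.sum() == 0:
        return 0.0
    model = GradientBoostingClassifier(n_estimators=20, random_state=0)
    return cross_val_score(model, X[:, mask.astype(bool)], y, cv=3).mean()

# Initial population: random attribute subsets.
pop = rng.integers(0, 2, size=(8, X.shape[1]))

for generation in range(5):
    scores = np.array([fitness(ind) for ind in pop])
    # Selection: keep the better half of the population.
    survivors = pop[np.argsort(scores)[::-1][:4]]
    # Crossover + mutation to refill the population.
    children = []
    for _ in range(4):
        a, b = survivors[rng.integers(0, 4, size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])       # one-point crossover
        flip = rng.random(X.shape[1]) < 0.1              # 10% mutation rate
        child = np.where(flip, 1 - child, child)
        children.append(child)
    pop = np.vstack([survivors, children])

# Best surviving attribute mask (1 = keep the attribute, 0 = drop it).
best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected attribute mask:", best)
```

Attributes whose mask entry stays at 0 across the best individuals are the ones that don't pull their weight, which is exactly what the Optimize Selection (Evolutionary) operator reports as zero-weight attributes.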
MultiObjective Feature Selection Results
From running this process, we see that the following attributes provide the best performance over 25 iterations.
Note: We chose to have a minimum of 5 attributes returned in the parameter configuration. The selected ones have a weight of 1.
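Applying those weights downstream is just a column filter: keep the weight-1 attributes, drop the rest. A tiny pandas sketch with hypothetical column names and made-up weight values:

```python
# Hypothetical example: filter a data set by attribute weights
# (1 = selected by the feature selection, 0 = dropped).
import pandas as pd

data = pd.DataFrame({
    "SP500_close": [2100.0, 2115.2, 2098.7],
    "Bond10Y_yield": [1.85, 1.88, 1.83],
    "PhillyGold_close": [78.2, 77.9, 79.1],
})
weights = {"SP500_close": 1, "Bond10Y_yield": 0, "PhillyGold_close": 1}  # illustrative
selected = data[[col for col, w in weights.items() if w == 1]]
print(list(selected.columns))  # -> ['SP500_close', 'PhillyGold_close']
```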
The resulting performance for this work is below.
The overall accuracy was 66%. In the end, predicting an UP trend was pretty decent, but predicting a DOWN trend was not so good.
A likely reason for this poor performance is a mistake I made on purpose: I used a Cross Validation operator instead of a Sliding Window Validation operator.
The Sliding Window Validation operator is used to backtest and train a time series model in RapidMiner and we’ll explain the concepts of Windowing and Sliding Window Validation in the next Lesson.
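The core difference is easy to see in code. Here is a short sketch using scikit-learn's TimeSeriesSplit as a stand-in for RapidMiner's Sliding Window Validation: every split trains only on the past and tests on the future, whereas plain k-fold cross validation shuffles examples across time and lets future information leak into training.

```python
# Sketch: time-ordered validation splits never test on data that
# precedes the training window, unlike shuffled k-fold splits.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

t = np.arange(12)  # 12 ordered time steps
splits = list(TimeSeriesSplit(n_splits=3).split(t))
for train_idx, test_idx in splits:
    # Every training index precedes every test index.
    assert train_idx.max() < test_idx.min()
    print("train:", train_idx, "test:", test_idx)
```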
Note: You can use the above method of MultiObjective Feature Selection for both time series and standard classification tasks.
_This is an update to my original 2007 YALE tutorials, revised for RapidMiner v7.0. In the original set of posts I used the term AI when I really meant Machine Learning._