Build Blog Traffic Using Excel and Data Mining

This past Saturday, I posted about using data mining to look for patterns in your blog traffic. I wrote that you can use something called an Excel Pivot Chart report to get a better feel for how your readers are interacting with your site. What I should’ve written was that you can use an Excel Pivot Table report; the chart is optional. Why should you build an Excel Pivot Table? Simple: it’s how you start to build blog traffic!

Building a report is a great way to see trends in your readership, and it’s easy to do. Once you see things happening on your site, you can start asking questions like, “What type of content drives the most traffic, and on what days?” The Excel Pivot Table report won’t be able to answer that question, but it can answer questions like “What’s my busiest day?”, “What’s my optimal number of posts per day?”, and “What’s my most popular category?”

Interested? Here’s how you do it in 5 easy steps.

Step 1 – Gather Data

If you use Google AdWords, or another site-statistics monitor, download your visitor data. You can choose whatever time frame you like; a good rule of thumb is about two months’ worth of data. You’ll need the number of hits and the date of each hit. Next, add this information to an Excel spreadsheet and add the following columns: Weekday, Number of Posts, and Category.

Step 2 – Transform the Data

Go through the dates covered by your data download and fill in the Weekday column, making sure to match each date with the right weekday. Next, fill in the Number of Posts column with, you guessed it, the number of posts you published that day.

Step 3 – Create a Category

When you get to the step of data mining your traffic, you’ll want to know what content drives your traffic and on what days. Adding a key of categories will help you accomplish that. I entered the number “1” if the post that day was about Forex, “2” if it was about stocks, and so on. You get the point. If you published more than one post on a given day and they covered more than one topic, you can add a second or third category column. You can get as detailed as you want; it’s really up to you.

Step 4 – Build the Excel Pivot Table

Once you have all your information, you can build the table. Go to Data > Pivot Table and follow the instructions. You can place the table in your existing worksheet or a new one (I usually go for a new worksheet). Select your data range to include the Weekday, # of Visitors, Posts Per Day, and Category.

Once you’ve done that, you’ll see your new worksheet with a little floating menu system. You can drag and drop the fields into your new table: drag # of Visitors into the Drop Data Items area, Weekday into the Drop Column Fields area, Posts per Day into the Drop Row Fields area, and lastly Category into the Drop Page Fields area.
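If you’d rather script this than click through Excel, the same report can be sketched in Python with pandas. This is a minimal illustration, not part of the original tutorial; the column names and sample numbers are made up to mirror the layout described above:

```python
import pandas as pd

# Hypothetical visitor log mirroring the columns described above
data = pd.DataFrame({
    "Weekday":  ["Mon", "Tue", "Mon", "Wed", "Tue"],
    "Visitors": [120, 80, 150, 60, 95],
    "Posts":    [2, 1, 2, 1, 1],
    "Category": [1, 2, 1, 1, 2],
})

# Rows = posts per day, columns = weekday, values = total visitors --
# the same layout as the Excel Pivot Table built above
report = data.pivot_table(
    values="Visitors",
    index="Posts",
    columns="Weekday",
    aggfunc="sum",
    fill_value=0,
)
print(report)
```

Reading the result is the same as reading the Excel table: each cell is the visitor total for that posts-per-day count on that weekday.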

Step 5 – Format the Table

Use Excel’s AutoFormat function in the Pivot Table wizard to select the style of table you’d like to see. When you’re all done, your spreadsheet should look something like this Blog Data example.

The first step before data mining your blog traffic is done! You can easily see what your busiest day of the week is, what’s a good number of posts per day (this is great from an efficiency standpoint), and what’s your most popular category. Just doing this simple Excel exercise can help you identify ways to build more traffic to your website, which could yield financial benefits if you’re using AdSense or some other monetization method.

As always, if you have a question please leave me a comment.

Building an AI financial market model – Lesson III

In Lesson 2, I went over the concept of MultiObjective Feature Selection (MOFS). In this lesson we’ll build on MOFS for our model, but we’ll also forecast the trend and measure its accuracy.

Revisiting MOFS

We learned in Lesson 2 that RapidMiner can simultaneously select the best features in your data set while maximizing performance. We ran the process, and the best features were selected below.


From here we want to feed the data into three new operators that are part of the Series extension: the Windowing, Sliding Window Validation, and Forecasting Performance operators.

These three operators are key to measuring the performance of your time series model. RapidMiner is really good at determining the directional accuracy of a time series and a bit rough when it comes to point forecasts. My personal observation is that it’s futile to try to get a point forecast for an asset price; you’ll have better luck with direction and volatility.
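To make “directional accuracy” concrete, here is a minimal sketch of my own (a hypothetical helper, not a RapidMiner API) that scores a forecast only on whether it got the direction of each move right, ignoring the size of the move:

```python
import numpy as np

def directional_accuracy(actual, predicted):
    """Fraction of periods where the forecast moved in the same
    direction (up/down) as the actual series."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    # Compare the sign of each period-over-period change
    actual_dir = np.sign(np.diff(actual))
    predicted_dir = np.sign(np.diff(predicted))
    return float(np.mean(actual_dir == predicted_dir))

# Made-up prices and a forecast that gets 2 of 3 moves right
actual    = [100, 102, 101, 103]   # moves: up, down, up
predicted = [100, 103, 102, 101]   # moves: up, down, down
print(directional_accuracy(actual, predicted))  # 2/3 of moves correct
```

This is why a forecast can look terrible on price error yet still be tradeable: only the sign of each move is scored.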

Our forecasting model will use a Support Vector Machine with an RBF kernel. Time series appear to benefit from this combination, and you can always check out these links for more info.

Windowing the Data

RapidMiner allows you to do multivariate time series analysis, also known as a model-driven approach. This is different from a data-driven approach, such as ARIMA, and allows you to use many different inputs to make a forecast. Of course, this means that point forecasting becomes very difficult when you have multiple inputs, but it makes directional forecasts more robust.

The model-driven approach in RapidMiner requires you to window your data. To do that you’ll need to use the Windowing operator. This operator is often misunderstood, so I suggest you read my post in the community on how it works.

Tip: Another great reference on using RapidMiner for time series is here.

There are key parameters that you should be aware of, especially the window size, the step size, whether or not you create a label, and the horizon.



When it comes to time series for the stock market, I usually choose a value of 5 for my window. This can be 5 days if your data is daily, or 5 weeks if it’s weekly. You can choose what you think is best.

The Step Size parameter tells the Windowing operator how far to move before creating a new window. If it were set to two, it would move two examples ahead and make a new window.

Tip: The Series Representation parameter is defaulted to “encode_series_by_examples.” You should leave this default if your time series data is row by row. If a new value of your time series data is in a new column (e.g. many columns and one row), then you should change it to “encode_series_by_attributes.”

Sliding Validation

The Sliding Window Validation operator is what you use to backtest your time series. It operates differently from a Cross Validation operator: it creates a “time window” on your data, builds a model, and tests its performance before sliding to another point in your time series.


In our example we create training and testing windows with a width of 10 example rows, our step size is −1 (which means the size of the last testing window), and our horizon is 1. The horizon is how far into the future we want to predict; in this case it’s 1 example row.

There are some other interesting toggle parameters to choose from. The default is to average performances only, so your Forecast Trend Accuracy will be your average performance. If you toggle on “cumulative training,” the Sliding Window Validation operator will keep adding the previous window to the training set. This is handy if you want to see whether older time series data might affect your performance going forward, BUT it makes training and testing very memory intensive.
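The sliding validation loop can be approximated in a few lines of Python. This is a hedged sketch of the idea, not the RapidMiner operator: it trains an RBF-kernel SVM on a fixed-width window, tests it on the following window, slides forward, and averages the fold accuracies. All the data here is synthetic, and the windowed features are simple lagged prices:

```python
import numpy as np
from sklearn.svm import SVC

def sliding_window_validation(X, y, width=10, horizon=1):
    """Walk-forward backtest: train on `width` rows, test on the next
    `width` rows (offset by `horizon`), then slide forward by the size
    of the testing window and average the accuracies."""
    scores = []
    start = 0
    while start + 2 * width + horizon <= len(X):
        train = slice(start, start + width)
        test = slice(start + width + horizon - 1,
                     start + width + horizon - 1 + width)
        if len(np.unique(y[train])) > 1:  # SVC needs both classes present
            model = SVC(kernel="rbf").fit(X[train], y[train])
            scores.append(model.score(X[test], y[test]))
        start += width  # slide by the size of the last testing window
    return float(np.mean(scores))

# Synthetic random-walk prices; features are 5 lagged prices,
# label is the direction (up = 1) of the next move
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0.1, 1.0, 120))
X = np.column_stack([np.roll(prices, k) for k in range(1, 6)])[5:]
y = (np.diff(prices)[4:] > 0).astype(int)
print(sliding_window_validation(X, y, width=10, horizon=1))
```

Because the training window always precedes the testing window in time, this avoids the look-ahead bias that an ordinary shuffled cross validation would introduce.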

Double-clicking on the Sliding Window Validation operator, we see the typical RapidMiner training and testing sides, where we can embed our SVM, Apply Model, and Forecasting Performance operators. The Forecasting Performance operator is a special Series extension operator; you need it to measure trend-forecast performance on any time series problem.



Forecast the Trend

Once we run the process and the analysis completes, we see that we have a 55.5% average accuracy in predicting the direction of the trend. Not great, but we can see whether optimizing the SVM parameters C and gamma gets better performance out of the model.


In my next lesson I’ll go over how to do Optimization in RapidMiner to better forecast the trend.

That’s the end of Lesson 3 for your first AI financial market model. You can download the above sample process here. To install it, just go to File > Import Process. Lesson 4 will be updated shortly.

This is an update to my original 2007 YALE tutorials, revised for RapidMiner v7.0. In the original set of posts I used the term AI when I really meant Machine Learning.

Building an AI Financial Market Model – Lesson II

In this tutorial I want to show you how to use MultiObjective Feature Selection (MOFS) in RapidMiner. It’s a great technique for simultaneously reducing your attribute set and maximizing your performance (hence: MultiObjective). This feature selection process can be run over and over again for your AI Financial Market Model, should it begin to drift.

Load in the Process from Tutorial One

Start by reading the Building an AI Financial Market Model – Lesson 1 post. At the bottom of that post you can download the RapidMiner process.

Add an Optimize Selection (Evolutionary) operator

The data that we pass through the process contains the adjusted closing prices of the S&P 500, the 10 Year Bond Yield, and the Philadelphia Gold. Feature Selection lets us choose which of these attributes contribute the most to the overall model performance, and which really don’t matter at all.

To do that, we need to add an Optimize Selection (Evolutionary) operator.


Why do you want to do MultiObjective Feature Selection? There are many reasons, but most important of all is that a smaller data set cuts your training time by reducing the load on your computer’s resources.

When we execute this process, you can see that the Optimize Selection (Evolutionary) operator starts evaluating each attribute. At first it measures the performance of ALL the attributes, and the results look all over the map.


Performance is measured with a Cross Validation operator embedded inside the subprocess.




The Cross Validation operator uses a Gradient Boosted Trees algorithm to analyze the permuted inputs and measures their performance in an iterative manner. Attributes are removed if they don’t provide an increase in performance.
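The fitness evaluation inside the optimizer can be sketched like this: for any candidate subset of attributes, cross-validate a gradient boosted tree on just those columns and return the mean accuracy. This is a simplified stand-in for RapidMiner’s embedded subprocess, written with scikit-learn and entirely synthetic data (only the first feature is informative by construction):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def subset_fitness(X, y, mask, cv=5):
    """Score one candidate feature subset (a boolean mask) with
    cross-validated accuracy -- the objective the evolutionary
    selector tries to maximize while shrinking the subset."""
    model = GradientBoostingClassifier(random_state=0)
    return cross_val_score(model, X[:, mask], y, cv=cv).mean()

# Tiny synthetic example: only column 0 actually predicts the label
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X[:, 0] > 0).astype(int)

all_features = np.array([True, True, True, True])
just_noise   = np.array([False, False, True, True])
print(subset_fitness(X, y, all_features))  # high: the signal is included
print(subset_fitness(X, y, just_noise))    # near chance: signal dropped
```

An evolutionary selector simply mutates and recombines these masks, keeping the ones whose fitness stays high with the fewest features switched on.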


MultiObjective Feature Selection Results

From running this process, we see that the following attributes provide the best performance over 25 iterations.



Note: We chose to have a minimum of 5 attributes returned in the parameter configuration. The selected ones have a weight of 1.

The resulting performance for this work is below.


The overall accuracy was 66%. In the end, the model was pretty decent at predicting an UP trend, but not so good at predicting the DOWN trend.
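The UP/DOWN asymmetry described here is just per-class recall. A quick sketch of how you would read it off a set of predictions (the labels below are made up for illustration, not the actual model output):

```python
def class_recall(actual, predicted, label):
    """Recall for one class: of all the rows that truly were `label`,
    what fraction did the model catch?"""
    hits = sum(1 for a, p in zip(actual, predicted)
               if a == label and p == label)
    total = sum(1 for a in actual if a == label)
    return hits / total if total else 0.0

# Hypothetical UP/DOWN predictions showing the same imbalance:
# every UP is caught, but most DOWNs are called UP
actual    = ["UP", "UP", "UP", "DOWN", "DOWN", "UP", "DOWN", "UP"]
predicted = ["UP", "UP", "UP", "UP",   "DOWN", "UP", "UP",   "UP"]
print(class_recall(actual, predicted, "UP"))    # perfect on UP
print(class_recall(actual, predicted, "DOWN"))  # weak on DOWN
```

A decent overall accuracy can hide exactly this pattern, which is why it pays to check each direction separately before trusting a trend model.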

A possible reason for this poor performance is that I purposely made a mistake here: I used a Cross Validation operator instead of a Sliding Window Validation operator.

The Sliding Window Validation operator is used to backtest and train a time series model in RapidMiner and we’ll explain the concepts of Windowing and Sliding Window Validation in the next Lesson.

Note: You can use the above method of MultiObjective Feature Selection for both time series and standard classification tasks.

That’s the end of Lesson 2 for your first AI financial market model. You can download the above sample process here. To install it, just go to File > Import Process. Lesson 3 will be updated shortly.

This is an update to my original 2007 YALE tutorials, revised for RapidMiner v7.0. In the original set of posts I used the term AI when I really meant Machine Learning.