What Works; What Doesn’t Work

The most important lesson I’ve learned while working at a startup is to do more of what works and jettison what doesn’t work, quickly. That’s the way to success; the rest is just noise and a waste of time. This lesson can be applied to everything in life.

Data is your friend

We generate data all the time, whether or not it’s captured in a database or spreadsheet; just by being alive you throw off data points. The trick is to take notice of it, capture it, and then do something with it.

It’s the “do something with it” part that determines your success or failure. Success can be anything that is of value to you: time, money, weight loss, stock trading, whatever. You just need to start capturing data, evaluating it, and taking action on it.

This is where you fail

Many people fail by taking no action on the data they’ve captured and evaluated. They hope that things are going to get better or that things are going to change. Maybe they will, maybe they won’t, but you must act on what the data is telling you now.

NOW!

My Examples: What Works / What Doesn’t Work

  1. My $100 Forex experiment worked really well for a time, then it started to flag. The data was telling me that my trading method was no longer working. Did I listen? Nope. I blew up that account. This didn’t work for me.
  2. Writing RapidMiner tutorials on this blog ended up getting me a great job with RapidMiner. This led to an amazing career in Data Science. Writing and taking an interest in things works.
  3. Day trading doesn’t work for me. I blow up all the time. What works for me is swing and trend trading. Do more of that and no day trading.

Keep it simple, stupid

Another thing I’ve learned working at a startup is to keep things simple, even stupid. You’re running so fast trying to make your quarter that you have no time for complex processes. Strip things down to their minimum and go as light as you can. That way you can adjust your strategy and make changes quickly; you can do more of what works and jettison what doesn’t.


Keras and NLTK

Lately I’ve been doing a lot more Python hacking, especially around text mining, using the deep learning library Keras along with NLTK. Normally I’d do most of my work in RapidMiner, but I wanted to do some grunt work and learn something along the way. It was really about educating myself on Recurrent Neural Networks (RNNs) and doing it the hard way, I guess.


As usual, I went to Google to do some sleuthing about how to text mine using an LSTM implementation in Keras, and boy did I find some goodies.

The best tutorials are easy to understand and follow along with. My introduction to Deep Learning with Keras was via Jason’s excellent tutorial, Text Generation with LSTM Recurrent Neural Networks in Python with Keras.

Jason took a very easy-to-follow approach: implement Keras to read in the Alice in Wonderland book character by character and then try to generate some text in the ‘style’ of what was written before. It was a great proof of concept, but fraught with some strange results. He acknowledges that and offers some additional guidance at the end of the tutorial, mainly removing punctuation and adding more training epochs.

The text processing is one thing, but the model optimization is another. Since I have a crappy laptop, I can just forget about optimizing a Keras script, so I went the text processing route and used NLTK.

Now that I’ve been around the text mining/processing block a bunch of times, the NLTK Python library makes more sense in this application. I much prefer the RapidMiner Text Processing implementation for 90% of what I do with text, but every so often you need something special and atypical.

Initial Results

The first results were terrible, as my tweet can attest to!

So I added a short function to Jason’s script that preprocesses a new file loaded with haikus. I removed all punctuation and stop words, with the express goal of generating haikus.
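For the curious, here’s a minimal sketch of what that kind of preprocessing might look like with NLTK. The file name and function name are placeholders of mine, not lifted from Jason’s script.

```python
# Minimal preprocessing sketch: lowercase the corpus, then strip punctuation
# and stop words. Requires nltk.download("punkt") and nltk.download("stopwords").
import string
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

def preprocess_corpus(path="haikus.txt"):
    """Load a text file and return it cleaned of punctuation and stop words."""
    with open(path, encoding="utf-8") as f:
        raw_text = f.read().lower()

    stop_words = set(stopwords.words("english"))
    tokens = word_tokenize(raw_text)
    cleaned = [t for t in tokens
               if t not in string.punctuation and t not in stop_words]
    return " ".join(cleaned)
```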

While this script was learning, I started to dig around the Internet for other interesting and related posts on LSTMs, NLTK, and text generation, until I found Click-O-Tron. That cracked me up. Leave it to us humans to take some cool piece of technology and implement it for lulz.

Implementation

I have grandiose dreams of using this script, so I would need to put it into production one day. This is where everything got to be a pain in the ass. My first thought was to run the training on a smaller machine and then use the trained weights to autogenerate new haikus in a separate script. This is not an atypical implementation. Right now I don’t care if this takes days to train.
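To make that split concrete, here’s a rough sketch of what the separate generation script could look like. The architecture has to mirror whatever the training script built, and the weight file name, sequence length, and vocabulary size below are assumptions for illustration, not the exact values I used.

```python
# Generation-only sketch: rebuild the network, load weights saved by the
# training run, and sample characters one at a time.
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout

SEQ_LENGTH = 100   # must match the training script
N_VOCAB = 45       # number of distinct characters seen during training

def build_model():
    # The layer stack must be identical to training, or the weights won't load.
    model = Sequential()
    model.add(LSTM(256, input_shape=(SEQ_LENGTH, 1)))
    model.add(Dropout(0.2))
    model.add(Dense(N_VOCAB, activation="softmax"))
    model.compile(loss="categorical_crossentropy", optimizer="adam")
    return model

model = build_model()
model.load_weights("weights-best.h5")   # produced by the training script

# Seed with a random integer-encoded character sequence, then generate.
pattern = list(np.random.randint(0, N_VOCAB, SEQ_LENGTH))
generated = []
for _ in range(200):
    x = np.reshape(pattern, (1, SEQ_LENGTH, 1)) / float(N_VOCAB)
    prediction = model.predict(x, verbose=0)
    index = int(np.argmax(prediction))
    generated.append(index)
    pattern = pattern[1:] + [index]
# Map 'generated' back to characters using the vocabulary from training.
```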

While Python is great in many ways, the libraries that work on one machine might behave differently on another machine and hardware, especially when dealing with GPUs and the like. It gets tricky and annoying, considering I work on many different workstations these days. I have a crappy little Acer laptop that I use to cron Python scripts for my Twitter-related work, and it happens to have an AMD processor.

I do most of my ‘hacking’ on a larger laptop that happens to have an Intel processor. To transfer my scripts from one machine to another, I always have to make sure that every single Python package is installed on each machine. PITA!

Despite these annoyances, I ended up learning A LOT about Deep Learning architectures, their applications, and their shortcomings. In the end, it’s another tool in the Data Science toolkit; just don’t expect it to be a miracle savior.

Additional reading list

  • http://h6o6.com/2013/03/using-python-and-the-nltk-to-find-haikus-in-the-public-twitter-stream/
  • https://github.com/fchollet/keras/blob/master/examples/lstm_text_generation.py

The Python Script
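Below is a minimal training sketch in the spirit of Jason’s character-level LSTM tutorial, with the haiku preprocessing bolted on. The file names, layer sizes, and epoch counts are illustrative; treat it as a starting point rather than the exact script.

```python
# Character-level LSTM training sketch (Keras), saving the best weights so a
# separate script can load them later for generation.
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout
from keras.callbacks import ModelCheckpoint
from keras.utils import to_categorical

# Assumed to be the output of the NLTK preprocessing step shown earlier.
with open("haikus_clean.txt", encoding="utf-8") as f:
    text = f.read()

chars = sorted(set(text))
char_to_int = {c: i for i, c in enumerate(chars)}

# Slice the corpus into fixed-length input sequences and next-character targets.
SEQ_LENGTH = 100
dataX, dataY = [], []
for i in range(len(text) - SEQ_LENGTH):
    dataX.append([char_to_int[c] for c in text[i:i + SEQ_LENGTH]])
    dataY.append(char_to_int[text[i + SEQ_LENGTH]])

X = np.reshape(dataX, (len(dataX), SEQ_LENGTH, 1)) / float(len(chars))
y = to_categorical(dataY)

model = Sequential()
model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2])))
model.add(Dropout(0.2))
model.add(Dense(y.shape[1], activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam")

# Keep only the best weights; the generation script picks this file up.
checkpoint = ModelCheckpoint("weights-best.h5", monitor="loss",
                             save_best_only=True, save_weights_only=True,
                             mode="min")
model.fit(X, y, epochs=20, batch_size=128, callbacks=[checkpoint])
```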


Is it Possible to Automate Data Science?

A few months ago I read about a programmer who automated his job down to the point where the coffee machine would make him lattes! Despite the ethical quandary, I thought it was pretty cool to automate your job with scripts. Then I wondered: is it possible to automate data science? Or at least parts of it? This general question proved to be a rabbit hole of exploration.
 
StackExchange has an ongoing discussion about another programmer’s automation of his tasks. He used scripts to prepare customer data into spreadsheets that other employees would use. The task used to take a month; he was able to cut that time down to 10 minutes. It did take him several months to figure out how to build the right scripts, but he now works only 1 to 2 hours a week and gets paid for 40.
Throw Data on the Wall
 
In my life at RapidMiner I interacted with potential customers who wanted to “throw data on the wall and see what sticks.” They wanted some automated way to use data science to tell them something novel. This usually raised a red flag in my mind and led me to ask more detailed questions like:
 
“Do you know the business objective you want to solve/meet?”
 
“Do you have a Data Science team or plan to hire a Data Scientist?”
 
“How do you do your data exploration and glean insight now?”
 
At this point I can ferret out the true reason for the call, or the lack of understanding of the true problem at hand. I’ve even had one potential customer reveal that he called us because he heard about this “data mining stuff” 6 months ago and wanted to get in on it quick.
 
I get it. If you have lots of data where do you begin to make sense of it?
Automate what?
 
The path to insight in your data starts with the data itself. It’s always going to be messy: missing values, wrong keystrokes, data in the wrong places. It’s in a database in one office but in Sally’s spreadsheet in another. You can’t get any insight until you start extracting the data, transforming it, and loading it for analysis. This is the standard ETL we all know and love to hate.
 
You can automate ETL completely, provided you know what format your data needs to be in. This is where tools like SQL and RapidMiner can help with your dirty work. If you haven’t automated your ETL, you’re behind the curve!
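As a small illustration of what automated ETL can look like outside of RapidMiner, here’s a sketch in Python with pandas. The database table, spreadsheet, and column names are made up; the extract-transform-load shape is the point.

```python
# Toy ETL job: pull data from a database and a spreadsheet, clean it up,
# and write one analysis-ready file. Schedule it with cron and it runs itself.
import sqlite3
import pandas as pd

def run_etl(db_path="office.db", spreadsheet="sallys_sheet.xlsx",
            out_path="analysis_ready.csv"):
    # Extract: the office database plus Sally's spreadsheet
    with sqlite3.connect(db_path) as conn:
        db_orders = pd.read_sql("SELECT * FROM orders", conn)
    sheet_orders = pd.read_excel(spreadsheet)

    # Transform: align column names, fix types, drop obvious junk
    sheet_orders = sheet_orders.rename(columns=str.lower)
    combined = pd.concat([db_orders, sheet_orders], ignore_index=True)
    combined["order_date"] = pd.to_datetime(combined["order_date"], errors="coerce")
    combined = combined.dropna(subset=["order_date", "amount"])

    # Load: one tidy file, ready for modeling
    combined.to_csv(out_path, index=False)

if __name__ == "__main__":
    run_etl()
```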
 
Once all the data is ready, you can model it and test your hypothesis. But which algorithm?
 
Here’s where the critical thinking comes in. You can’t automate the decision of which model to put into production, but you can automate the modeling and the evaluation of it. Once again, here’s where RapidMiner can help.
 
When working with a business group, the ubiquitous Decision Tree algorithm tends to come up. Why? Because businesses LOVE the pretty tree it makes, and they’ve always used it before.
 
While Decision Trees are a great algorithm, they’re notorious for overfitting. So then use Random Forests! Random Forests do help with the overfitting problem, but are they the right algorithm for your particular problem?
Automate Modeling and Evaluation
 
You can automate modeling and evaluation in RapidMiner. It’s easy to try many different algorithms within the same process and build ROC plots. You can output performance measures like LogLoss or AUC to rank which model performed the best. You can even create a leaderboard in RapidMiner Server to ‘automatically’ display which model performed the best!
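RapidMiner does this visually, but the same idea works as a short Python sketch: loop over several algorithms on one data set, score them all with the same metrics, and rank them. The data set below is a stand-in for your own.

```python
# Mini 'leaderboard': cross-validate several models and rank them by AUC,
# reporting LogLoss alongside.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

candidates = {
    "decision_tree": DecisionTreeClassifier(max_depth=5),
    "random_forest": RandomForestClassifier(n_estimators=100),
    "logistic_regression": LogisticRegression(max_iter=1000),
}

leaderboard = []
for name, model in candidates.items():
    scores = cross_validate(model, X, y, cv=5,
                            scoring=["roc_auc", "neg_log_loss"])
    leaderboard.append((name,
                        scores["test_roc_auc"].mean(),
                        -scores["test_neg_log_loss"].mean()))

# Highest AUC first
for name, auc, logloss in sorted(leaderboard, key=lambda row: -row[1]):
    print(f"{name:>20}  AUC={auc:.3f}  LogLoss={logloss:.3f}")
```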
 
I’ve worked with customers that do just that. They used RapidMiner to prototype, optimize, and deploy models in a week. Even when they need bits of Python or R to finish the job, they just automate everything.
 
Yet the question remains: should you do this? The answer depends on whether you know what you are doing. For example, feature generation is something I’d be very cautious to ‘automate’. Sure, you can create some simple calculations and add them as a new attribute, but in general feature generation requires a bit more thinking and less automation. That is, until you’ve figured out which features work.
In a nutshell, here’s what you can automate, with caveats:

 

  1. ETL: You bet, automate away, provided you know what format your data needs to be in.
  2. Model Building: Yes. Because of the no free lunch theorem, you should try multiple models on the same data set. Just be cautious about the algorithms you choose.
  3. Evaluation: Yes. Just compare each model’s results using the same set of performance metrics (e.g., LogLoss, AUC, Kappa).
  4. Feature Generation: Not at first. This is where your thinking comes in: how to include new data or manipulate the existing data to create new features your model can train on. After that, you can automate it.