
# Is it Possible to Automate Data Science?


A few months ago I read about a programmer who automated his job down to the point where the coffee machine would make him lattes! Despite the ethical quandary, I thought it was pretty cool to automate your job with scripts. Then I wondered, is it possible to automate data science? Or at least parts of it? This general question proved to be a rabbit hole of exploration.

StackExchange has an ongoing discussion about another programmer's automation of his tasks. He used scripts to prepare customer data into spreadsheets that other employees would use. The task used to take a month, but his scripts cut that time down to 10 minutes. It did take him several months to figure out how to build the right scripts, but he now works only 1 to 2 hours a week and gets paid for 40.

In my time at RapidMiner I interacted with potential customers who wanted to "throw data on the wall and see what sticks." They wanted to find some automated way to use data science to tell them something novel. This usually raises a red flag in my mind and leads me to ask more detailed questions like:

  • “Do you know the business objective you want to solve/meet?”

  • “Do you have a Data Science team or plan to hire a Data Scientist?”

  • “How do you do your data exploration and glean insights now?”

At this point I can ferret out the true reason for the call or the lack of understanding of the true problem at hand. I’ve even had one potential customer reveal that he called us because he heard of this “data mining stuff” 6 months ago and wanted to get in on it quick.

I get it. If you have lots of data where do you begin to make sense of it?

## Automate what?

The path to insight starts with the data, and the data is always going to be messy: missing values, keystroke errors, things in the wrong places. It’s in a database in one office but in Sally’s spreadsheet in another office. You can’t get any insight until you start extracting the data, transforming it, and loading it for analysis. This is the standard ETL we all know and love to hate.

You can automate ETL completely provided you know what format your data needs to be in. This is where tools like SQL and RapidMiner can help with your dirty work. If you haven’t automated your ETL, you’re behind the curve!
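In scripting terms, an automated ETL job can be as small as the sketch below. This is a minimal Python/pandas example, not a RapidMiner process; the file name, column names, and cleaning rules are all hypothetical stand-ins for whatever format your data needs to be in.

```python
# Minimal ETL sketch: extract a raw CSV export, clean it,
# and load an analysis-ready table into a local SQLite database.
# "customer_export.csv" and the column names are hypothetical.
import sqlite3

import pandas as pd

# Extract: pull the raw export (e.g. Sally's spreadsheet saved as CSV)
raw = pd.read_csv("customer_export.csv")

# Transform: normalize column names, drop duplicates, handle missing values
raw.columns = [c.strip().lower().replace(" ", "_") for c in raw.columns]
clean = raw.drop_duplicates()
clean = clean.fillna({"region": "unknown"})       # hypothetical column
clean = clean.dropna(subset=["customer_id"])      # hypothetical key column

# Load: write the cleaned table out for analysis
with sqlite3.connect("analytics.db") as conn:
    clean.to_sql("customers", conn, if_exists="replace", index=False)
```

Schedule that with cron (or RapidMiner Server) and the monthly spreadsheet chore becomes a ten-minute job.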

Once all the data is ready, you can model it and test your hypothesis, but which algorithm? Here’s where the critical thinking comes in. You can’t automate the decision of which model to put into production, but you can automate the modeling and the evaluation of it. Once again, here’s where RapidMiner can help.

When working with a business group, the ubiquitous Decision Tree algorithm tends to come up. Why? Because business people LOVE the pretty tree it makes and they’ve always used it before.

You can automate modeling and evaluation in RapidMiner. It’s easy to try many different algorithms within the same process and build ROC plots. You can output performance measures like LogLoss or AUC to rank which model performed the best. You can even create a leaderboard in RapidMiner Server to ‘automatically’ display which model performed the best. I’ve worked with customers that do just that. They used RapidMiner to prototype, optimize, and deploy models in a week. Even when they needed bits of Python or R to finish the job, they just automated everything.
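If those bits of Python are your tool of choice instead of RapidMiner, the same try-everything loop looks roughly like this. This is an illustrative scikit-learn sketch on synthetic data, not anyone’s production pipeline; note the Decision Tree sits in the lineup right next to its competitors.

```python
# Sketch of automated model building and evaluation:
# run several candidate algorithms on the same data set with the
# same cross-validation, then rank them by the same metrics.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data; swap in your own prepared table here
X, y = make_classification(n_samples=1000, random_state=42)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=5, random_state=42),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=42),
}

# Same cross-validation and same metrics for every model: a mini leaderboard
for name, model in candidates.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    logloss = -cross_val_score(model, X, y, cv=5, scoring="neg_log_loss").mean()
    print(f"{name:22s} AUC={auc:.3f}  LogLoss={logloss:.3f}")
```

The loop is fully automated; deciding which of the ranked models actually goes into production is the part that stays human.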

Yet still the question remains: should you do this? The answer depends on whether you know what you are doing. For example, feature generation is something I’d be very cautious to ‘automate’. Sure, you can create some simple calculations and add them as a new attribute, but in general feature generation requires a bit more thinking and less automation. That is, until you’ve figured out what features work.
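For the record, the kind of ‘simple calculation’ feature that is safe to automate is a one-liner. A tiny pandas sketch with hypothetical column names:

```python
import pandas as pd

# Hypothetical data: two existing attributes
df = pd.DataFrame({"revenue": [1200, 800, 950], "orders": [10, 8, 19]})

# New attribute derived from existing columns, added for the model to train on
df["revenue_per_order"] = df["revenue"] / df["orders"]
```

Deciding that revenue-per-order is the ratio worth computing in the first place is the thinking part you shouldn’t automate, at least not until you’ve validated it.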

In a nutshell, here’s what you can automate, with warnings:

  1. ETL: You bet, automate away if you know what format your data needs to be in
  2. Model Building: Yes. Because of the no free lunch theorem, you should try multiple models on the same data set. Just be cautious about the algorithms you choose
  3. Evaluation: Yes, just compare each model’s results using the same set of performance metrics (e.g. LogLoss, AUC, Kappa)
  4. Feature Generation: Not at first. This is where your thinking comes in on how to include new data or manipulate the existing data to create new features that your model can train on. After that, you can automate it