Driving Marketing Performance with H2O Driverless AI

I watched a great video of G5 explaining how they use H2O-3, AutoML, and Driverless AI to build an NLP model and put it into production. Really cool. The pipeline runs on the AWS stack with AWS Lambda. My summary notes are below:

  • G5 started with zero ML experience and built an ML pipeline in 3 months
  • G5 is a leader in marketing optimization for real estate marketing companies
  • They find leads for their customers
  • Owned/Paid/Earned media are breadcrumbs that site visitors leave
  • Clicks are not the most important interaction; a call is (90% of the time)
  • How to classify caller intent?
  • Build a training set from unstructured call data
  • Started with 110,000 unlabeled data set
  • Hired people to listen to them and hand score them
  • Problem: everyone scores things a bit differently
  • Built a questionnaire to identify workers who would score the data the same way
  • Every day took a sample and reviewed them for consistency
[Image: G5 – Getting Data to Prediction (Driverless AI, AutoML, Word2Vec)]
  • Experimented with H2O-3 for testing
  • Took the training set and ran it through H2O-3 to build a Word2Vec model
  • Used AutoML to understand the parameters of the Word2Vec model
  • Ended up with 500 features and enriched with metadata (day of the week, length of call, etc)
  • Took that processed training set and put it through Driverless AI
  • Driverless AI came up with a model with 95% accuracy, beating the 89% benchmark
  • Driverless AI made it simple to put the model in production
  • Results from Driverless AI’s feature interactions have G5 considering dropping the Word2Vec model and going completely to Driverless AI
  • DevOps needs to make sure the customers can use the results
  • Reliability / Reproducibility / Scalability / Iterability
  • A phone call comes in, it gets transcribed in AWS Lambda, then vectorized in Lambda with the same model used for training. This is done so you get the same feature set every time (for model scoring)
  • H2O-3 makes transitions between R and Python easy
  • This model saves 3 minutes per call versus a human listening; at 1 million calls a month, that is 50,000 hours saved
  • Best part, it reduces the scoring time by 99% vs competitors
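As a rough illustration of the vectorization step above, here is a minimal mean-pooling sketch in plain Python. The toy word vectors, vocabulary, and function name are illustrative assumptions, not G5's actual code; a real pipeline would use a trained H2O-3 Word2Vec model.

```python
# Hypothetical sketch of the featurization step: average a call transcript's
# word vectors and append call metadata, so every call (training or scoring)
# produces the same fixed-length feature set.

# Toy stand-in for a trained Word2Vec vocabulary (real vectors would come
# from an H2O-3 Word2Vec model, with ~500 dimensions rather than 3).
WORD_VECTORS = {
    "tour":  [0.9, 0.1, 0.0],
    "lease": [0.8, 0.2, 0.1],
    "price": [0.2, 0.9, 0.3],
    "wrong": [0.0, 0.1, 0.9],
}

def featurize_call(transcript, day_of_week, call_length_sec):
    """Mean-pool known word vectors, then append metadata features."""
    vectors = [WORD_VECTORS[w] for w in transcript.lower().split()
               if w in WORD_VECTORS]
    if vectors:
        pooled = [sum(dim) / len(vectors) for dim in zip(*vectors)]
    else:
        pooled = [0.0] * 3  # out-of-vocabulary call
    return pooled + [float(day_of_week), float(call_length_sec)]

features = featurize_call("I want to tour and ask about price", 2, 180)
print(features)  # 3 pooled dimensions + 2 metadata features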

Questions and Answers

  • Do you need to retrain all or part of your manual labels periodically to tackle model shift? Yes: hand scoring continues, and the retrained model is compared with the model in production to see if shift occurs
  • How do you maintain the models, and how often do you refresh them? Right now it’s a monthly cadence of review and update
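The retrain-and-compare loop described above can be sketched as a simple label-distribution check. The function name and the 10% tolerance are illustrative assumptions, not G5's actual monitoring setup.

```python
# Minimal sketch of a drift check: keep hand scoring a sample, then compare
# the production model's label mix against fresh hand-scored labels and
# flag a review when any class frequency diverges too far.
from collections import Counter

def label_shift(hand_labels, model_labels, tolerance=0.10):
    """Return True if any class frequency differs by more than `tolerance`."""
    n_hand, n_model = len(hand_labels), len(model_labels)
    hand, model = Counter(hand_labels), Counter(model_labels)
    return any(
        abs(hand[c] / n_hand - model[c] / n_model) > tolerance
        for c in set(hand) | set(model)
    )

hand = ["lead"] * 60 + ["service"] * 40
prod = ["lead"] * 45 + ["service"] * 55   # model's mix has drifted
print(label_shift(hand, prod))  # "lead" share moved 0.60 -> 0.45, so True
```

A real system would use a proper statistical test rather than a fixed tolerance, but the monthly cadence mentioned above pairs naturally with a cheap check like this.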

Automatic Feature Engineering with Driverless AI

Dmitry Larko, Kaggle Grandmaster, and Senior Data Scientist at H2O.ai goes into depth on how to apply feature engineering in general and in Driverless AI. This video is over a year old and the version of Driverless AI shown is in beta form. The current version is much more developed today.

This is by far one of the best videos I’ve seen on the topic of feature engineering, not because I work for H2O.ai, but because it approaches the concepts in an easy-to-understand manner. Plus Dmitry does an awesome job of helping viewers understand with great examples.

The question and answer part is also very good, especially the discussion on overfitting. My notes from the video are below.

  • Feature engineering is extremely important in model building
  • “Coming up with features is difficult, time-consuming, requires expert knowledge. ‘Applied machine learning’ is basically feature engineering.” – Andrew Ng
  • Common Machine Learning workflow (see image below)
[Image: Common Machine Learning workflow (Feature Engineering, Driverless AI)]
  • What is feature engineering? Example uses Polar coordinate conversions for linear classifications
  • Creating a target variable is NOT feature engineering
  • Removing duplicates/Missing values/Scaling/Normalization/Feature Selection IS NOT feature engineering
  • Feature Selection should be done AFTER feature engineering
  • Feature Engineering Cycle: Dataset > Hypothesis Set > Validate Hypothesis > Apply Hypothesis > Dataset
  • Domain knowledge is key, so is prior experience
  • EDA / ML model feedback is important
  • Validation set: use cross validation. Be aware of data leakage
  • Target encoding is powerful but can introduce leakage when applied wrong
  • Feature engineering is hard and very very time consuming
  • Feature engineering gives you better, simpler models
  • Transform predictor/response variables toward a normal distribution in some situations, e.g., with a log transform
  • Feature Encoding turns categorical features into numerical features
  • Label encoding and one hot encoding
  • Label encoding is bad; it implies an order, which is not preferred
  • One hot encoding transforms into binary (dummy coding)
  • One hot encoding creates a very sparse data set
  • Columns BLOW UP in size with one hot encoding
  • You can do frequency encoding instead of one hot encoding
  • Frequency Encoding is robust but what about balanced data sets?
  • Then you do Target Mean encoding. The downside is high cardinality features. This can cause leakage!
  • To avoid leakage, you can use a ‘leave one out’ scheme
  • Apply Bayesian smoothing: calculate a weighted average with the mean of the training set
  • What about numerical features? Feature encoding using: Binning with quantiles / PCA and SVD / Clustering
  • Great, then how do you find feature interactions?
  • Apply domain knowledge / genetic programming / ML model behavior (investigate model weights, etc.)
  • You could encode categorical features by stats (std dev, etc.)
  • Feature Extraction is the application of extracting value out of hidden features, like zip code
  • Zip code can give you state and city information
  • Day, week, holiday, etc. can be extracted from date-times
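A few of the encodings above, sketched in plain Python. The smoothing weight and the toy data are made up for illustration; this is one common weighting scheme for Bayesian smoothing, not necessarily the exact formula from the talk.

```python
# Sketch of frequency encoding and smoothed target mean encoding.
from collections import defaultdict

cats   = ["NY", "NY", "SF", "SF", "SF", "LA"]
target = [ 1,    0,    1,    1,    0,    1 ]

# Frequency encoding: category -> share of rows.
freq = {c: cats.count(c) / len(cats) for c in set(cats)}

# Smoothed target mean encoding: blend each category's mean with the
# global mean, weighted by category count (Bayesian smoothing).
def smoothed_target_means(cats, target, weight=2.0):
    global_mean = sum(target) / len(target)
    sums, counts = defaultdict(float), defaultdict(int)
    for c, y in zip(cats, target):
        sums[c] += y
        counts[c] += 1
    return {c: (sums[c] + weight * global_mean) / (counts[c] + weight)
            for c in counts}

enc = smoothed_target_means(cats, target)
# "LA" has a single row with target 1; smoothing pulls its encoding toward
# the global mean (4/6) instead of a leakage-prone raw mean of 1.0.
print(freq["SF"], round(enc["LA"], 3))
```

For a full leakage-safe version you would also exclude each row's own target when computing its category mean (the 'leave one out' scheme above).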

Update: The H2O.ai documentation on the feature transformations applied is here. Check it out, it’s pretty intense.

What’s new in Driverless AI?

Arno, H2O’s CTO, gave a great 1+ hour overview of what’s new in Driverless AI version 1.4.1. If you check back in a few weeks/months, it’ll be even better. In all honesty, I have never seen a company innovate this fast.

Below are my notes from the video:

  • H2O-3 is the open source product
  • Driverless AI is the commercial product
  • Does Feature Engineering for you
  • When you have Domain Knowledge, Feature Engineering can give you a huge lift
  • Salary, Job Title, Zip Code example
  • For people in this Zip Code with a given # of cars, generate the mean of salaries
  • Create out of fold estimates
  • Don’t take your own prediction feature for training
  • Written in Python, with CUDA and C++ under the hood, directed by Python
  • Able to create good models in an automated way
  • Driverless AI does not handle images
  • Handles strings, numbers, and categoricals
  • Datasets can be 100s of gigabytes
  • Creates 100s of models with 1,000s of new features
  • Creates an ensemble model after it’s done
  • Then creates an exportable model (Java runtime or Python)
  • C++ version is being worked on
  • All standalone models
  • Connect with Python client or via the web browser
  • Changelog is on docs.h2o.ai
  • Tests against Kaggle datasets
  • BNP Paribas Kaggle set, Driverless AI ranked in the top 10 out of the box
  • It took Driverless AI 2 hours, whereas it took Grandmasters 2 months
  • Discussed how Logloss is interpreted
  • Uses Reusable Holdout (RH) and subsamples of RH
  • Driverless AI uses unsupervised methods to make supervised models
  • Uses XGBoost, GLM, LightGBM, TensorFlow CNN, and Rule Fit
  • Feature engineering and munging use R’s data.table
  • Working on an open source version of R’s data.table in Python
  • Overview in how Driverless AI handles outliers (AutoViz)
  • AutoViz only plots what you should see, not 100’s of scatterplots like Tableau
  • Overview on the GUI, what you can do
  • Validation and Test sets. How to use them and when
  • Checks for data shift between the training and testing sets
  • Includes Machine Learning Interpretability suite
  • Does Time Series and NLP

And much more! Arno’s presentation style is excellent, and he makes Data Science simple to understand.