Automatic Feature Engineering with Driverless AI

Dmitry Larko, Kaggle Grandmaster and Senior Data Scientist at H2O.ai, goes into depth on how to apply feature engineering in general and in Driverless AI. This video is over a year old, and the version of Driverless AI shown was in beta; the current version is much more developed.

This is by far one of the best videos I’ve seen on the topic of feature engineering, not because I work for H2O.ai, but because it approaches the concepts in an easy-to-understand manner. Plus, Dmitry does an awesome job of helping viewers understand with great examples.

The question and answer part is also very good, especially the discussion on overfitting. My notes from the video are below.

  • Feature engineering is extremely important in model building
  • “Coming up with features is difficult, time-consuming, requires expert knowledge. ‘Applied machine learning’ is basically feature engineering.” – Andrew Ng
  • Common Machine Learning workflow (see image below)
[Image: common machine learning workflow (Feature Engineering, Driverless AI)]
  • What is feature engineering? The example converts Cartesian to polar coordinates so a linear classifier can separate the classes
  • Creating a target variable is NOT feature engineering
  • Removing duplicates, handling missing values, scaling, normalization, and feature selection are NOT feature engineering
  • Feature Selection should be done AFTER feature engineering
  • Feature Engineering Cycle: Dataset > Hypothesis Set > Validate Hypothesis > Apply Hypothesis > Dataset
  • Domain knowledge is key, so is prior experience
  • EDA / ML model feedback is important
  • Validation set: use cross-validation, and be aware of data leakage
  • Target encoding is powerful but can introduce leakage when applied incorrectly
  • Feature engineering is hard and very very time consuming
  • Feature engineering gives you better, simpler models
  • In some situations, transform predictor/response variables toward a normal distribution, e.g. with a log transform (a sketch follows this list)
  • Feature Encoding turns categorical features into numerical features
  • The two basic approaches are label encoding and one-hot encoding (see the encoding sketch after this list)
  • Label encoding is bad: it implies an ordering that usually isn’t there
  • One-hot encoding transforms each category into binary columns (dummy coding)
  • One-hot encoding creates a very sparse data set
  • Column counts BLOW UP with one-hot encoding
  • You can do frequency encoding instead of one-hot encoding (sketch after this list)
  • Frequency encoding is robust, but what about categories that occur equally often? They become indistinguishable
  • Then you do target mean encoding. Its downfall is high-cardinality features, and it can cause leakage!
  • To avoid leakage, you can use a ‘leave one out’ scheme
  • Apply Bayesian smoothing: take a weighted average of the category mean and the global mean of the training set (see the target-encoding sketch after this list)
  • What about numerical features? Encode them via binning with quantiles, PCA/SVD, or clustering (a binning sketch follows this list)
  • Great, then how do you find feature interactions?
  • Apply domain knowledge, apply genetic programming, or inspect ML model behavior (investigate model weights, etc.)
  • You can also encode categorical features by statistics (standard deviation, etc.), using the same groupby pattern as the target-encoding sketch below
  • Feature extraction means pulling value out of hidden structure in a feature, like a zip code
  • A zip code can give you state and city information
  • You can extract day, week, holiday, etc. from date-times (see the date-time sketch after this list)
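
To ground the log-transform bullet, here’s a minimal sketch with numpy and pandas; the `income` column is made up for illustration:

```python
import numpy as np
import pandas as pd

# Hypothetical right-skewed feature
df = pd.DataFrame({"income": [20_000, 35_000, 48_000, 1_200_000]})

# log1p handles zeros safely and pulls a long right tail
# toward a more normal-looking distribution
df["income_log"] = np.log1p(df["income"])
print(df)
```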
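
Here’s the encoding comparison as a sketch (the `color` column is hypothetical). Label encoding maps each category to an integer; one-hot encoding expands each category into its own binary column:

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# Label encoding: one integer per category; it implies an
# ordering (blue < green < red) that usually isn't real
df["color_label"] = LabelEncoder().fit_transform(df["color"])

# One-hot / dummy coding: one sparse binary column per category;
# with high-cardinality features the column count blows up
one_hot = pd.get_dummies(df["color"], prefix="color")
print(df.join(one_hot))
```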
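
Frequency encoding, as a sketch (hypothetical `city` column). Note the limitation from the bullet above: two categories that occur equally often get the same value and become indistinguishable:

```python
import pandas as pd

df = pd.DataFrame({"city": ["NYC", "NYC", "NYC", "Boston", "Boston", "Austin"]})

# Replace each category with its relative frequency in the training data
freq = df["city"].value_counts(normalize=True)
df["city_freq"] = df["city"].map(freq)
print(df)
```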
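
And a sketch of target mean encoding with Bayesian-style smoothing (the column names and the weight `k` are hypothetical choices). In practice you’d combine this with a leave-one-out or out-of-fold scheme to avoid leakage, and the same groupby pattern works for other statistics like standard deviation:

```python
import pandas as pd

df = pd.DataFrame({
    "city":   ["NYC", "NYC", "Boston", "Austin", "NYC", "Boston"],
    "target": [1, 0, 1, 1, 1, 0],
})

global_mean = df["target"].mean()
agg = df.groupby("city")["target"].agg(["mean", "count"])

# Weighted average of the category mean and the global mean;
# k controls how much evidence a category needs before its own
# mean dominates, which protects rare levels of a high-cardinality feature
k = 5
smoothed = (agg["count"] * agg["mean"] + k * global_mean) / (agg["count"] + k)
df["city_te"] = df["city"].map(smoothed)
print(df)
```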
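
For numerical features, quantile binning is the easiest of the three to sketch (PCA/SVD and clustering follow the same pattern with scikit-learn):

```python
import pandas as pd

df = pd.DataFrame({"age": [18, 22, 25, 31, 40, 47, 58, 66]})

# Quantile binning: split a numeric feature into roughly
# equal-sized ordinal buckets that models can treat as categories
df["age_bin"] = pd.qcut(df["age"], q=4, labels=False)
print(df)
```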
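
Finally, date-time extraction, assuming a hypothetical `ts` column (holiday flags would need a calendar lookup, e.g. the `holidays` package):

```python
import pandas as pd

df = pd.DataFrame({"ts": pd.to_datetime(["2018-12-24", "2018-12-25", "2019-01-01"])})

# Pull several hidden features out of a single date-time column
df["day"]        = df["ts"].dt.day
df["weekday"]    = df["ts"].dt.dayofweek
df["week"]       = df["ts"].dt.isocalendar().week
df["month"]      = df["ts"].dt.month
df["is_weekend"] = (df["ts"].dt.dayofweek >= 5).astype(int)
print(df)
```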

Update: The H2O.ai documentation on the feature transformations applied is here. Check it out, it’s pretty intense.

What’s new in Driverless AI?

Arno, H2O.ai’s CTO, gave a great 1+ hour overview of what’s new in Driverless AI version 1.4.1. If you check back in a few weeks or months, it’ll be even better. In all honesty, I have never seen a company innovate this fast.

Below are my notes from the video:

  • H2O-3 is the open source product
  • Driverless AI is the commercial product
  • Does the feature engineering for you
  • When you have domain knowledge, feature engineering can give you a huge lift
  • Salary, Job Title, Zip Code example
  • e.g., what about the people in this zip code with this number of cars? Generate the mean of their salaries
  • Create out-of-fold estimates (see the sketch after this list)
  • Don’t let a row’s own target leak into the features it’s trained on
  • Written in Python, with CUDA and C++ under the hood directed by the Python layer
  • Able to create good models in an automated way
  • Driverless AI does not handle images
  • Handles strings, numbers, and categoricals
  • Data sets can be hundreds of gigabytes
  • Creates hundreds of models with thousands of new features
  • Creates an ensemble model after it’s done
  • Then creates an exportable model (Java or Python runtime)
  • C++ version is being worked on
  • All standalone models
  • Connect with Python client or via the web browser
  • Changelog is on docs.h2o.ai
  • Tests against Kaggle datasets
  • On the BNP Paribas Kaggle data set, Driverless AI ranked in the top 10 out of the box
  • It took Driverless AI 2 hours, whereas it took Grandmasters 2 months
  • Discussed how Logloss is interpreted
  • Uses a reusable holdout (RH) and subsamples of the RH
  • Driverless AI uses unsupervised methods to make supervised models
  • Uses XGBoost, GLM, LightGBM, TensorFlow CNNs, and RuleFit
  • Feature engineering and munging are implemented on a data.table-style engine
  • H2O.ai is working on an open-source Python version of R’s data.table
  • Overview in how Driverless AI handles outliers (AutoViz)
  • AutoViz only plots what you should see, not hundreds of scatterplots like Tableau
  • Overview on the GUI, what you can do
  • Validation and Test sets. How to use them and when
  • Checks for data shift between the training and testing sets
  • Includes Machine Learning Interpretability suite
  • Does Time Series and NLP
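
The out-of-fold idea is worth a sketch. This is my own illustration, not Driverless AI’s internals: the zip/salary columns echo the example above, and the fold count is arbitrary. Each row’s encoded value comes only from the other folds, so no row ever sees its own target:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

df = pd.DataFrame({
    "zip":    ["10001", "10001", "02101", "73301", "10001", "02101"],
    "salary": [90, 85, 70, 60, 95, 72],
})

# Out-of-fold mean encoding of the target (salary) by zip code
df["zip_salary_oof"] = np.nan
for train_idx, val_idx in KFold(n_splits=3, shuffle=True, random_state=0).split(df):
    fold_means = df.iloc[train_idx].groupby("zip")["salary"].mean()
    df.loc[df.index[val_idx], "zip_salary_oof"] = df["zip"].iloc[val_idx].map(fold_means)

# Zip codes unseen in a fold's training part fall back to the global mean
df["zip_salary_oof"] = df["zip_salary_oof"].fillna(df["salary"].mean())
print(df)
```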

And much more! Arno’s presentation style is excellent, and he makes data science easy to understand.

Latest Writings Elsewhere – December 2018

I’m happy to announce my very first article went live on the H2O.ai blog! Writers gonna write! It’s been a long time since I contributed to my firm’s online content and it feels good to do it again.

The article is about seeking clarity in the Automated Machine Learning space. Why? Because it’s plain confusing. Everyone is coming out with their version of an auto modeling AI product. Some were the first, some are not there yet, and then there’s Driverless AI.

Automated modeling is just a fancy term for making life easier for data scientists, engineers, IT administrators, and anyone else in the analytics space by automating processes. The IT world has done an amazing job of automating server spin-ups, logins, and security, so why shouldn’t we in the AI field do the same?

H2O.ai Blog

I’ve been meaning to write this since H2O World in London, but work has been busy! I did find some time before Thanksgiving weekend to work on it, so I’d appreciate it if you gave it a read on the H2O.ai blog.