
Automatic Feature Engineering with Driverless AI


Dmitry Larko, Kaggle Grandmaster and Senior Data Scientist at H2O.ai, goes into depth on how to apply feature engineering in general and in Driverless AI. The video is over a year old and the version of Driverless AI shown is in beta form; the current version is much more developed today.

This is by far one of the best videos I’ve seen on the topic of feature engineering, not because I work for H2O.ai, but because it approaches the concepts in an easy-to-understand manner. Plus, Dmitry does an awesome job of helping viewers understand with great examples.

The question and answer part is also very good, especially the discussion on overfitting. My notes from the video are below.

  • Feature engineering is extremely important in model building
  • “Coming up with features is difficult, time-consuming, requires expert knowledge. ‘Applied machine learning’ is basically feature engineering.” – Andrew Ng
  • Common Machine Learning workflow (see image below)
    [Image: Common machine learning workflow (Feature Engineering, Driverless AI)]
  • What is feature engineering? The example uses a polar coordinate conversion to make data separable by a linear classifier
  • Creating a target variable is NOT feature engineering
  • Removing duplicates/Missing values/Scaling/Normalization/Feature Selection IS NOT feature engineering
  • Feature Selection should be done AFTER feature engineering
  • Feature Engineering Cycle: Dataset > Hypothesis Set > Validate Hypothesis > Apply Hypothesis > Dataset
  • Domain knowledge is key, so is prior experience
  • EDA / ML model feedback is important
  • Validation set: use cross validation. Be aware of data leakage
  • Target encoding is powerful but can introduce leakage when applied wrong
  • Feature engineering is hard and very very time consuming
  • Feature engineering makes your models better and simpler
  • Transform predictor/response variables toward a normal distribution in some situations, e.g., with a log transform
  • Feature Encoding turns categorical features into numerical features
  • Label encoding and one-hot encoding
  • Label encoding is bad: it implies an order, which is not preferred
  • One-hot encoding transforms categories into binary columns (dummy coding)
  • One-hot encoding creates a very sparse data set
  • Columns BLOW UP in number with one-hot encoding
  • You can do frequency encoding instead of one-hot encoding (see the encoding sketch after this list)
  • Frequency encoding is robust, but what about balanced data sets?
  • Then you do target mean encoding. Its downfall is high-cardinality features, and it can cause leakage!
  • To avoid leakage, you can use a ‘leave-one-out’ scheme
  • Apply Bayesian smoothing: calculate a weighted average of the category mean and the training-set mean (see the target-encoding sketch after this list)
  • What about numerical features? Feature encoding using: binning with quantiles / PCA and SVD / clustering (sketched after this list)
  • Great, then how do you find feature interactions?
  • Apply domain knowledge / apply genetic programming / study ML model behavior (investigate model weights, etc.)
  • You could encode categorical features by stats (std dev, etc.) (see the group-stats sketch after this list)
  • Feature extraction means pulling value out of hidden features, like a zip code
  • Zip code can give you state and city information
  • Day, week, holiday, etc. can be extracted from date-times (see the date-time sketch after this list)
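
To make the encoding notes concrete, here is a minimal sketch of label, one-hot, and frequency encoding with pandas. The city column and all of its values are made up for illustration:

    import pandas as pd

    # Toy dataset (hypothetical) with a single categorical feature.
    df = pd.DataFrame({"city": ["NYC", "SF", "NYC", "LA", "SF", "NYC"]})

    # Label encoding: maps each category to an integer. Simple, but it
    # implies an ordering (LA < NYC < SF) that usually does not exist.
    df["city_label"] = df["city"].astype("category").cat.codes

    # One-hot encoding: one binary column per category. No false ordering,
    # but the column count blows up on high-cardinality features.
    one_hot = pd.get_dummies(df["city"], prefix="city")

    # Frequency encoding: replace each category with how often it occurs.
    # Stays a single column at any cardinality, but categories that share
    # a frequency (e.g., in balanced data) become indistinguishable.
    freq = df["city"].value_counts(normalize=True)
    df["city_freq"] = df["city"].map(freq)

    print(pd.concat([df, one_hot], axis=1))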
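
Here is a sketch of the target mean encoding ideas: a leave-one-out version that keeps each row’s own label out of its feature, plus a simple shrinkage toward the global mean. The weight w is a made-up tuning knob, not a value from the talk:

    import pandas as pd

    df = pd.DataFrame({
        "city":   ["NYC", "SF", "NYC", "LA", "SF", "NYC", "LA", "SF"],
        "target": [1, 0, 1, 0, 1, 0, 1, 1],
    })

    grp = df.groupby("city")["target"]
    sums, counts = grp.transform("sum"), grp.transform("count")

    # Leave-one-out: each row gets the mean target of its category
    # computed over all *other* rows, so its own label never leaks in.
    df["city_te_loo"] = (sums - df["target"]) / (counts - 1)

    # Bayesian-style smoothing: blend the category mean with the global
    # mean, weighted by category size, to tame rare categories.
    w = 5.0
    global_mean = df["target"].mean()
    df["city_te_smooth"] = (grp.transform("mean") * counts + global_mean * w) / (counts + w)

    print(df)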
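
For the numerical-feature bullet, a quick scikit-learn sketch of quantile binning and PCA; the data is random and the parameter choices (4 bins, 2 components) are arbitrary:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import KBinsDiscretizer

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))  # stand-in numeric features

    # Quantile binning: each feature becomes an ordinal bin index, which
    # lets linear models pick up non-linear effects.
    binner = KBinsDiscretizer(n_bins=4, encode="ordinal", strategy="quantile")
    X_binned = binner.fit_transform(X)

    # PCA: project onto directions of maximal variance; the components can
    # be appended as extra features or replace correlated originals.
    X_pca = PCA(n_components=2).fit_transform(X)

    print(X_binned[:3])
    print(X_pca[:3])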
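
The “encode categorical features by stats” idea can be as simple as joining per-category statistics of a numeric column back on as new features. Again a toy pandas example with invented columns:

    import pandas as pd

    df = pd.DataFrame({
        "city":  ["NYC", "SF", "NYC", "LA", "SF", "NYC"],
        "price": [100.0, 250.0, 120.0, 90.0, 300.0, 110.0],
    })

    # Per-category stats of a numeric column, renamed and joined back in.
    stats = df.groupby("city")["price"].agg(["mean", "std", "min", "max"])
    stats.columns = [f"city_price_{c}" for c in stats.columns]
    df = df.join(stats, on="city")
    print(df)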
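
And a sketch of date-time feature extraction. Real holiday flags would need a calendar source, so this only derives the calendar parts pandas can compute directly:

    import pandas as pd

    df = pd.DataFrame({"ts": pd.to_datetime([
        "2019-01-01 09:30", "2019-07-04 18:00", "2019-12-25 08:15",
    ])})

    # Pull simple calendar features out of a timestamp column.
    df["year"] = df["ts"].dt.year
    df["month"] = df["ts"].dt.month
    df["day"] = df["ts"].dt.day
    df["dayofweek"] = df["ts"].dt.dayofweek
    df["week"] = df["ts"].dt.isocalendar().week
    df["hour"] = df["ts"].dt.hour
    df["is_weekend"] = df["ts"].dt.dayofweek >= 5
    print(df)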

Update: The H2O.ai documentation on the feature transformations applied is here. Check it out, it’s pretty intense.
