Driving Marketing Performance with H2O Driverless AI

I watched this great video of G5 explaining how they use H2O-3, AutoML, and Driverless AI to build an NLP model and put it into production. Really cool. It uses the AWS stack and AWS Lambda. My summary notes are below:

  • G5 started with zero in ML and in 3 months built an ML pipeline
  • G5 is a leader in marketing optimization for real estate marketing companies
  • They find leads for their customers
  • Owned/Paid/Earned media are breadcrumbs that site visitors leave
  • Clicks are not the most important interaction; a phone call is (90% of the time)
  • How to classify caller intent?
  • Build a training set from unstructured call data
  • Started with an unlabeled data set of 110,000 calls
  • Hired people to listen to the calls and hand-score them
  • Problem: everyone scores things a bit differently
  • Built a questionnaire to find the workers who would score the data the same way
  • Every day, took a sample and reviewed it for consistency

Driverless AI, AutoML, Word2Vec

  • Experimented with H2O-3 for testing
  • Took the training set, ran it through H2O-3, and built a Word2Vec model
  • Used AutoML to understand the Word2Vec model's parameters (see the sketch after this list)
  • Ended up with 500 features and enriched with metadata (day of the week, length of call, etc)
  • Took that processed training set and put it through Driverless AI
  • Driverless AI came up with a model at 95% accuracy, beating the 89% benchmark.
  • Driverless AI made it simple to put the model in production
  • Results from Driverless AI's feature interactions are making G5 consider dropping the Word2Vec model and going completely to Driverless AI
  • DevOps needs to make sure the customers can use the results
  • Reliability / Reproducibility / Scalability / Iterability
  • A phone call comes in, it gets transcribed in AWS Lambda, then vectorized with the same model used during training. This is done so you get the same feature set every time (for model scoring)
  • H2O-3 makes transitions between R and Python easy
  • This model saves 3 minutes per call vs. human listening; at 1 million calls a month, that is 50,000 hours saved
  • Best part: it reduces scoring time by 99% vs. competitors
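
To make the pipeline above concrete, here is a minimal sketch of a Word2Vec + AutoML workflow using H2O-3's Python API. This is not G5's actual code; the file name, column names ("transcript", "label"), and parameter values are assumptions for illustration.

```python
# Minimal sketch of a Word2Vec + AutoML pipeline in H2O-3 (Python).
# File name, column names, and parameter values are illustrative assumptions.
import h2o
from h2o.estimators.word2vec import H2OWord2vecEstimator
from h2o.automl import H2OAutoML

h2o.init()

# Assumed columns: "transcript" (call text) and "label" (hand-scored caller intent)
calls = h2o.import_file("call_transcripts.csv")
calls["label"] = calls["label"].asfactor()

# Tokenize the transcripts and train a Word2Vec model on the tokens
words = calls["transcript"].tokenize(" ")
w2v = H2OWord2vecEstimator(vec_size=100, epochs=10)
w2v.train(training_frame=words)

# Average the word vectors per call to get one fixed-length feature vector,
# then bind the vectors back onto the original frame (metadata columns and all)
call_vecs = w2v.transform(words, aggregate_method="AVERAGE")
train = calls.cbind(call_vecs)

# Let AutoML search models over the vectorized features plus metadata
aml = H2OAutoML(max_models=20, seed=1)
aml.train(y="label", training_frame=train)
print(aml.leaderboard)
```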

Questions and Answers

  • Do you need to retrain all or part of your manual labels periodically to tackle model shift? Yes, hand scoring continues, and retraining is done and compared with the model in production to see if shift occurs
  • How do you maintain the models and how often do you refresh them? Right now it's a monthly cadence of review and updates


Machine Learning and Data Munging in H2O Driverless AI with datatable

I missed this presentation at H2O World and I'm glad it was recorded. Pasha Stetsenko and Oleksii Kononenko give a great presentation on the Python version of R's data.table, simply called datatable.

H2O World San Francisco, 2019

I'm going to be trying this new package out in my next Python munging work. It looks incredibly fast. Just as I do with all my videos, I've added my notes for readers below.

Datatable Notes

  • Introduction to using the open source datatable
  • 9 million rows in 7 seconds??
  • Recently implemented Follow the Regularized Leader (FTRL) in Driverless AI:
    • Has a Python frontend with a C++ backend
    • Parallelized with OpenMP and Hogwild
    • Supports boolean, integer, real, and string features
    • Hashing trick based on Murmur hash function
    • Second-order feature interactions
    • One-vs-rest multinomial classification and regression targets (experimental)
  • As simple as 'import datatable as dt'
  • Use it because it's reliable and fast, datatable's FTRL is already on Kaggle, and it's open source!!!
  • Datatable comes from the popular R data.table package
  • When Driverless AI started, we knew Pandas was a problem
  • Pandas is memory hungry
  • Realized we needed a Python version of data.table
  • The first customer is Driverless AI
  • Wanted it to be multithreaded and efficient
  • Memory thrifty
  • Memory mapped on data sets (data set can live in memory or on disk)
  • Native C++ implementation
  • Open Source
  • Fread: A doorway to Driverless AI, reading in data
  • Next step in DAI is to save it to a binary format
  • The file is called '.jay'
  • Check it with '%%timeit'
  • Opening a .jay file is nearly instant
  • The syntax is very SQL-like; if you're familiar with R's data.table, then you can get this
  • See timestamp 16:00 for the basic syntax in use (a small sketch follows this list)
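
Here is a minimal sketch of the datatable calls mentioned above: fread for fast reading, the .jay binary format, and the SQL-like DT[i, j, by] syntax. The file and column names ("value", "group") are assumptions for illustration.

```python
# Minimal datatable sketch: fread, the .jay binary format, and DT[i, j, by] syntax.
# File name and column names ("value", "group") are illustrative assumptions.
import datatable as dt
from datatable import f, by

# fread: fast, multithreaded reader
DT = dt.fread("big_file.csv")

# Save to the binary .jay format; re-opening it is nearly instant (memory-mapped)
DT.to_jay("big_file.jay")
DT2 = dt.fread("big_file.jay")

# SQL-like DT[i, j, by] syntax
filtered = DT2[f.value > 0, :]                    # filter rows
summary = DT2[:, dt.sum(f.value), by(f.group)]    # aggregate by group
print(summary)
```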


Questions and Answers

  • Can you create a datatable from Redshift or some other DB? No; the suggestion is to connect with Pandas and then convert to datatable
  • Is Python datatable as fully featured as R's data.table, and if not, is there a plan to build it out? No, it's still being built out


Making AI Happen Without Getting Fired

I watched Mike Gualtieri's keynote presentation from H2O World San Francisco (2019) and found it very insightful in a non-technical, MBA type of way. The gist of the presentation is to really look at all the business connections to doing data science. It's not just about the problem at hand, but rather setting yourself up for success and, as he puts it, not getting fired!

My notes from the video are below (emphasis mine):

  • Set the proper expectations
  • There is a difference between Pure AI and Pragmatic AI
  • Pure AI is like what you see in movies (i.e. Ex Machina)
  • Pragmatic AI is machine learning. Highly specialized in one thing but does it really well
  • Choose more than one use case
  • The use case you choose could fail. Choose many different kinds
  • Drop the ones that don't work and optimize the ones that do
  • Ask for comprehensive data access
  • Data will be in silos
  • Get faster with AutoML
  • Data Scientists aren't expensive, they need better tools to be more efficient
  • Three segments of ML tools
    • Multimodal (drag and drop, like RapidMiner/KNIME)
    • Notebook-based (like Jupyter Notebook)
    • Automation-focused (like Driverless AI)
  • Use them to augment your work, go faster
  • Warning: data-savvy users can use these tools to build ML models. This can be dangerous, but they can help vet use cases
  • Know when to quit
  • Sometimes the use case won't work. There is no signal in the data and you must quit
  • Stop wasting time
  • Keep production models fresh
  • When code is written, it's written the same way and runs the same forever
  • ML Models decay, so you need to figure out how to do it at scale
  • Model staging, A/B testing, Monitoring
  • Model deployment via collaboration with DevOps
  • Get Business and IT engaged early
  • Hold meetings with business and IT; get your ducks in a row
  • Ask yourself how it is going to be deployed and how it will impact the business process
  • Ignore the model to protect the jewels
  • You don't have to do what the model tells you to do (i.e. false positives, etc.)
  • Knowledge Engineering: AI and Humans working together
  • Explainability is important


The Night before H2O World 2019

I'm in Mountain View this week for our annual Sales Kick-Off meeting and will be staying for H2O World this coming Monday and Tuesday. If our registration numbers are any indication, this H2O World will be our largest ever! As the 'hip kids' say, it's going to be lit!

https://twitter.com/neuralmarket/status/1090968396357722114

H2O World Schedule

The first day of H2O World is all about training. In London we had 500 people show up for training on the first day and almost twice that on the second day. I fully expect that number to be larger for San Francisco. There's going to be breakout training for AutoML, Driverless AI, and much more. I'll be stationed at the Driverless AI booth, so come and say hi!


The next day is all about presentations. I'm itching to see Tanya Berger-Wolf's presentation "AI and Humans Combatting Extinction Together." I'm also excited to see the "Women and Inclusion in Tech Panel" on Day 2. Then there's Dmitry's "Lessons Learned" presentation on the Kaggle Airbus Ship Detection Challenge.

There are some people I want to finally meet at H2O World, like Rueben Diaz from Vision Banco and Leland Wilkinson. Rueben did a Driverless AI deployment for Vision Banco in Paraguay and I want to hear how he did it on IBM's Power 9 box. I also want to finally meet Leland in person; I usually only hear him on the phone during company meetings. Leland leads Driverless AI's AutoViz development, has deep knowledge of visualization, and is the author of 'The Grammar of Graphics.'

There are always so many presentations, ideas, and awesome coffee talks that two days are never enough. All in all, I expect this to be a fun but intense few days. There's always such an amazing vibe when you go to H2O World: the ideas, the problem solving. It feels like changing the world for the better is possible.

Full H2O World schedule and speaker list here. Hope to see you there!


Automatic Feature Engineering with Driverless AI

Dmitry Larko, Kaggle Grandmaster and Senior Data Scientist at H2O.ai, goes into depth on how to apply feature engineering in general and in Driverless AI. This video is over a year old and the version of Driverless AI shown is in beta form; the current version is much more developed today.

This is by far one of the best videos I've seen on the topic of feature engineering, not because I work for H2O.ai, but because it approaches the concepts in an easy-to-understand manner. Plus, Dmitry does an awesome job of helping viewers understand with great examples.

The question and answer part is also very good, especially the discussion on overfitting. My notes from the video are below.

  • Feature engineering is extremely important in model building
  • "Coming up with features is difficult, time-consuming, requires expert knowledge. "Applied machine learning" is basically feature engineering" - Andrew Ng
  • Common Machine Learning workflow (see image below)
  • What is feature engineering? An example: converting to polar coordinates can make a classification problem linearly separable (see the first sketch after this list)
  • Creating a target variable is NOT feature engineering
  • Removing duplicates/Missing values/Scaling/Normalization/Feature Selection IS NOT feature engineering
  • Feature Selection should be done AFTER feature engineering
  • Feature Engineering Cycle: Dataset > Hypothesis Set > Validate Hypothesis > Apply Hypothesis > Dataset
  • Domain knowledge is key, so is prior experience
  • EDA / ML model feedback is important
  • Validation set: use cross validation. Be aware of data leakage
  • Target encoding is powerful but can introduce leakage when applied wrong
  • Feature engineering is hard and very very time consuming
  • Feature engineering makes your models better and simpler
  • Transform predictor/response variables toward a normal distribution in some situations (e.g., a log transform)
  • Feature Encoding turns categorical features into numerical features
  • Label encoding and one hot encoding
  • Label encoding is bad; it implies an order, which is not preferred
  • One hot encoding transforms categories into binary columns (dummy coding)
  • One hot encoding creates a very sparse data set
  • Columns BLOW UP in size with one hot encoding
  • You can do frequency encoding instead of one hot encoding
  • Frequency Encoding is robust but what about balanced data sets?
  • Then you do Target Mean encoding. The downfall is high-cardinality features. This can cause leakage!
  • To avoid leakage, you can use a 'leave one out' scheme
  • Apply Bayesian smoothing: calculate a weighted average with the mean of the training set (see the encoding sketch after this list)
  • What about numerical features? Feature encoding using: Binning with quantiles / PCA and SVD / Clustering
  • Great, then how do you find feature interactions?
  • Apply domain knowledge / Apply genetic programming / ML algorithm behavior (investigate model weights, etc.)
  • You can encode categorical features by statistics (std dev, etc.)
  • Feature Extraction is the application of extracting value out of hidden features, like zip code
  • Zip code can give you state and city information
  • Day, week, holiday, etc. can be extracted from date-times
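
Two small sketches follow. The first illustrates the polar-coordinate example from the talk: two concentric classes that no straight line separates in (x, y) become separable with a single threshold once you engineer the radius feature. The data here is synthetic, generated purely for illustration.

```python
# Polar-coordinate feature engineering sketch on synthetic concentric-circle data.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Class 0 lies near radius 1, class 1 near radius 3
radius = np.where(rng.random(n) < 0.5, 1.0, 3.0)
theta = rng.uniform(0, 2 * np.pi, n)
x = radius * np.cos(theta) + rng.normal(0, 0.1, n)
y = radius * np.sin(theta) + rng.normal(0, 0.1, n)
label = (radius > 2.0).astype(int)

# Raw (x, y) is not linearly separable, but the engineered feature
# r = sqrt(x^2 + y^2) separates the classes with one threshold
r = np.sqrt(x**2 + y**2)
accuracy = ((r > 2.0).astype(int) == label).mean()
print(f"Threshold on engineered radius feature: {accuracy:.3f} accuracy")  # ~1.0
```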
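
The second sketch shows frequency encoding and a smoothed (Bayesian-style) target-mean encoding in pandas. The column names ("city", "converted") and the smoothing weight are assumptions; in practice you would also compute the target means out-of-fold or leave-one-out to limit the leakage discussed above.

```python
# Frequency encoding and smoothed target-mean encoding sketch (pandas).
# Column names and the smoothing weight are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "city":      ["A", "A", "A", "B", "B", "C"],
    "converted": [ 1,   0,   1,   0,   0,   1 ],
})

# Frequency encoding: replace each category with how often it occurs
freq = df["city"].value_counts(normalize=True)
df["city_freq"] = df["city"].map(freq)

# Smoothed target-mean encoding: blend each category's mean with the global
# mean, weighted by category size (Bayesian-style smoothing)
global_mean = df["converted"].mean()
stats = df.groupby("city")["converted"].agg(["mean", "count"])
weight = 5.0  # assumed smoothing strength
smoothed = (stats["count"] * stats["mean"] + weight * global_mean) / (stats["count"] + weight)
df["city_target_enc"] = df["city"].map(smoothed)

print(df)
```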

Update: The H2O.ai documentation on the feature transformations applied is here. Check it out, it's pretty intense.
