Driving Marketing Performance with H2O Driverless AI

h2o.ai, blog, Driverless AI, H2O World, writers gonna write, datatable

I watched this great video of G5 explaining how they use H2O-3, AutoML, and Driverless AI to build an NLP model and put it in production. Really cool. It uses the AWS stack and AWS Lambda. My summary notes are below:

  • G5 started with zero ML experience and built an ML pipeline in 3 months
  • G5 is a leader in marketing optimization for real estate marketing companies
  • They find leads for their customers
  • Owned/Paid/Earned media are breadcrumbs that site visitors leave
  • Clicks are not the most important interaction; a call is (90% of the time)
  • How to classify caller intent?
  • Build a training set from unstructured call data
  • Started with a data set of 110,000 unlabeled calls
  • Hired people to listen to the calls and hand-score them
  • Problem: everyone scores things a bit differently
  • Built a questionnaire to identify workers who would score the data the same way
  • Every day, took a sample and reviewed it for consistency
Driverless AI, AutoML, Word2Vec
G5 – Getting Data to Prediction
  • Experimented with H2O-3 for testing
  • Took the training set, ran it through H2O-3, and built a Word2Vec model
  • Used AutoML to understand the parameters of the Word2Vec model
  • Ended up with 500 features, enriched with metadata (day of the week, length of call, etc.) (see the featurization sketch after this list)
  • Took that processed training set and put it through Driverless AI (see the experiment sketch after this list)
  • Driverless AI came up with a model with 95% accuracy, beating the 89% benchmark
  • Driverless AI made it simple to put the model in production
  • Results from Driverless AI's feature interactions are making G5 consider dropping the Word2Vec model and going completely with Driverless AI
  • DevOps needs to make sure the customers can use the results
  • Reliability / Reproducibility / Scalability / Iterability
  • A phone call comes in, it gets transcribed in AWS Lambda, and then vectorized with the same Word2Vec model used in training. This is done so scoring gets the same feature set every time (see the Lambda sketch after this list)
  • H2O-3 makes transitions between R and Python easy
  • This model saves 3 minutes per call vs. human listening; at 1 million calls a month, that is 50,000 hours saved
  • Best part: it reduces scoring time by 99% vs. competitors
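
A minimal sketch of the featurization steps above, using the H2O-3 Python API. It assumes a CSV of hand-scored call transcripts; the file name and column names (transcript, day_of_week, call_length, label) are placeholders, not details from the talk.

```python
import h2o
from h2o.estimators import H2OWord2vecEstimator
from h2o.automl import H2OAutoML

h2o.init()

# Hypothetical file of transcribed, hand-scored calls
calls = h2o.import_file("scored_calls.csv")
calls["transcript"] = calls["transcript"].ascharacter()
words = calls["transcript"].tokenize(" ")  # split each transcript into tokens

# Learn word embeddings over the call vocabulary
w2v = H2OWord2vecEstimator(vec_size=100, epochs=10)
w2v.train(training_frame=words)

# One averaged vector per call, then append metadata columns and the label
call_vecs = w2v.transform(words, aggregate_method="AVERAGE")
features = call_vecs.cbind(calls[["day_of_week", "call_length", "label"]])

# AutoML as a quick benchmark over the featurized frame
aml = H2OAutoML(max_models=20, seed=1)
aml.train(y="label", training_frame=features)
print(aml.leaderboard)
```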
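
Handing the featurized data to Driverless AI can also be scripted. This is a hedged sketch using the driverlessai Python client; the server address, credentials, and file name are placeholders, and the talk does not say how G5 actually launched their experiments.

```python
import driverlessai

# Connect to a running Driverless AI instance (address and credentials are placeholders)
dai = driverlessai.Client(
    address="http://dai.example.com:12345",
    username="user",
    password="password",
)

# Upload the featurized training set and launch a classification experiment
ds = dai.datasets.create(data="call_features.csv", data_source="upload")
exp = dai.experiments.create(
    train_dataset=ds,
    task="classification",
    target_column="label",
)
# The finished experiment exposes a downloadable scoring pipeline (MOJO)
# that can be deployed for production scoring.
```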
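
And a sketch of the Lambda scoring path. Everything here is illustrative: transcribe_call and score_pipeline are hypothetical stand-ins for the transcription step and the exported scoring pipeline, and the model path is made up. The point is that production code reuses the exact Word2Vec model from training so the feature space matches.

```python
import h2o

h2o.init()

# Load the *same* Word2Vec model that built the training features
w2v = h2o.load_model("/opt/models/call_word2vec")  # path is a placeholder

def handler(event, context):
    """Hypothetical AWS Lambda handler that scores one incoming call."""
    transcript = transcribe_call(event["recording_url"])  # hypothetical transcription helper
    frame = h2o.H2OFrame([[transcript]], column_names=["transcript"])
    words = frame["transcript"].ascharacter().tokenize(" ")
    features = w2v.transform(words, aggregate_method="AVERAGE")
    # Hand the vectorized call to the exported scoring pipeline (hypothetical helper)
    return {"intent": score_pipeline(features)}
```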

Questions and Answers

  • Do you need to redo all or part of your manual labeling periodically to tackle model drift? Yes, hand scoring continues; the model is retrained and compared with the model in production to see if drift occurs
  • How do you maintain the models, and how often do you refresh them? Right now it’s a monthly cadence of review and update