Tag: LIME

Posts: 3

Interpretable Machine Learning with RSparkling

Last evening my colleague Navdeep Gill (@Navdeep_Gill_) posted a link to his latest talk titled "Interpretable Machine Learning with RSparkling." Navdeep is part of our MLI team and has a wealth of experience to share about explaining black boxes with modern techniques like Shapley values and LIME.

Machine Learning Interpretability (MLI)

H2O Sparkling Water, R, RSparkling

H2O has this awesome open source Big Data software called Sparkling Water. It's similar to RapidMiner's Radoop but 1) open source, 2) more powerful, and 3) battle-tested by the masses. It's stable and runs on many Hadoop clusters with Spark. The neat thing about Sparkling Water is that you can take the H2O.ai algorithms and push them down to the cluster to train on your 'Big Data.' H2O-3, the current open source version of the algorithm suite, has quite a few powerful, fast, and accurate algorithms, and H2O.ai continues to develop it over time. Most recently they added Isolation Forests!
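
To give you a feel for the API, here's a rough sketch of training that Isolation Forest from the plain H2O-3 Python client. I'm using made-up data and skipping the Sparkling Water connection for brevity; on a real cluster you'd attach through Sparkling Water first and train on a distributed frame instead.

```python
# Minimal sketch: H2O-3's Isolation Forest from the Python client.
# Data and parameter values are invented for illustration only.
import h2o
import numpy as np
import pandas as pd
from h2o.estimators import H2OIsolationForestEstimator

h2o.init()  # start (or connect to) a local H2O instance

# Fake transaction-like data with a handful of injected outliers.
rng = np.random.default_rng(42)
normal = rng.normal(loc=100, scale=15, size=(1000, 3))
outliers = rng.normal(loc=300, scale=50, size=(10, 3))
df = pd.DataFrame(np.vstack([normal, outliers]),
                  columns=["amount", "balance", "tenure"])
frame = h2o.H2OFrame(df)

# Isolation Forest is unsupervised, so no response column is needed.
iso = H2OIsolationForestEstimator(ntrees=100, sample_size=256, seed=42)
iso.train(training_frame=frame)

# Rows that are easier to isolate (shorter average path length) score
# as more anomalous.
print(iso.predict(frame).head())
```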

H2O-3, Open Source, Algorithms

Surrogate Models/Shapley Values/RSparkling

Sparkling Water lets data scientists and Hadoop DevOps people connect and work how they want to. You can connect via Scala, R, and Python. In Navdeep's talk, he uses R to connect to Sparkling Water, hence RSparkling. His presentation goes into a few basics of H2O, Sparkling Water, and R but ends with the fascinating topic of Machine Learning Interpretability (MLI). He shows how simple interpretability can be done using H2O-3's GLM and CoxPH algorithms (H2O-3 is also open source).

While the GLM and CoxPH algos are part of a 'Surrogate Modeling' technique, they are NOT exact. Instead, they are approximations of the black box model's decision boundary. While this is fast, it might not be what you need. For a highly accurate method, you want to use Shapley Values.
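
To make the surrogate idea concrete, here's a rough sketch in Python (the talk itself uses R, and the data and settings below are made up): train a GBM as the black box, then fit a plain GLM on the same features against the GBM's predicted probabilities. The GLM's coefficients give you an interpretable approximation of the black box's decision boundary, which is exactly why they're fast but not exact.

```python
# Rough sketch of a global surrogate model with the H2O-3 Python client.
# The dataset and column names are invented for illustration.
import h2o
import numpy as np
import pandas as pd
from h2o.estimators import (H2OGradientBoostingEstimator,
                            H2OGeneralizedLinearEstimator)

h2o.init()

# Toy binary-classification data.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=2000) > 0)
df = pd.DataFrame(X, columns=["f1", "f2", "f3", "f4"])
df["default"] = y.astype(int)
frame = h2o.H2OFrame(df)
frame["default"] = frame["default"].asfactor()

# 1) The "black box": a gradient boosting model.
gbm = H2OGradientBoostingEstimator(ntrees=200, max_depth=5, seed=0)
gbm.train(x=["f1", "f2", "f3", "f4"], y="default", training_frame=frame)

# 2) The surrogate: a GLM fit to the black box's predicted probabilities.
surrogate_frame = frame[["f1", "f2", "f3", "f4"]]
surrogate_frame["p1"] = gbm.predict(frame)["p1"]  # probability of class 1
glm = H2OGeneralizedLinearEstimator(family="gaussian")
glm.train(x=["f1", "f2", "f3", "f4"], y="p1", training_frame=surrogate_frame)

# The coefficients are an interpretable *approximation* of the GBM,
# not an exact explanation of it.
print(glm.coef())
```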

H2O-3 allows you to use TreeSHAP for its XGBoost and GBM algorithms. These, of course, can be used with R and Sparkling Water, so now you can generate 'reason codes' for billions of rows of data right on your cluster.
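
Here's what pulling those reason codes looks like from the H2O-3 Python client, where the TreeSHAP contributions are exposed through predict_contributions (the R/Sparkling Water calls in the talk are analogous; the toy credit-style data below is invented):

```python
# Sketch: per-row TreeSHAP contributions ("reason codes") from an H2O-3 GBM.
# Synthetic credit-style data, for illustration only.
import h2o
import numpy as np
import pandas as pd
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(1000, 3)),
                  columns=["income", "utilization", "age"])
noise = rng.normal(scale=0.5, size=1000)
df["default"] = (df["utilization"] - 0.5 * df["income"] + noise > 0).astype(int)
frame = h2o.H2OFrame(df)
frame["default"] = frame["default"].asfactor()

gbm = H2OGradientBoostingEstimator(ntrees=100, seed=1)
gbm.train(x=["income", "utilization", "age"], y="default", training_frame=frame)

# One SHAP contribution per feature per row, plus a BiasTerm column;
# the contributions sum to the model's raw (margin) prediction for that row.
contribs = gbm.predict_contributions(frame)
print(contribs.head())
```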

More Information

Navdeep's presentation ends with a demo of a large credit card data set on Hadoop and a slide for more information. I suggest that you check out this free O'Reilly book that H2O.ai published and Navdeep co-authored. It goes into the heavy math of MLI but it's only 40 pages long, so it's a great read on a flight.


Interpreting Machine Learning Models

Shapley Values, MLI

I found this short 8 minute video from H2O World about Machine Learning Interpretability (MLI). It's given by Patrick Hall, the lead for building these capabilities in Driverless AI.

My notes from the video are below:

  • ML as an opaque black box is no longer the case
  • Cracking the black box with LIME and Shapley Values
  • Lloyd Shapley, whose work underpins Shapley Values, won the Nobel Prize in Economics in 2012
  • After a Driverless AI model runs, a dashboard is created
  • Shows the complex engineered features and the original features
  • Global Shapley Values are like feature importance and include negative and positive contributions
  • Quickly identify the important features in the dataset
  • Then go to Partial Dependence Plots, which show the average prediction of the model across different values of a feature (see the sketch after these notes)
  • Row by Row analysis of each feature can be done to understand interactions and generate reason codes
  • Shapley is accurate for feature contribution, LIME is an approximation
  • Done via stacked ensemble model
  • Can be deployed via Python Scoring pipeline
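
To show what that partial dependence bullet means in practice, here's a bare-bones sketch with a generic scikit-learn model and synthetic data. Driverless AI builds these plots for you automatically; this just spells out the underlying averaging trick:

```python
# Sketch of partial dependence: for each grid value of one feature,
# overwrite that feature for every row and average the model's predictions.
# Generic scikit-learn model and synthetic data, for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + np.sin(2 * X[:, 1]) + rng.normal(scale=0.3, size=2000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

def partial_dependence(model, X, feature, grid):
    """Average predicted probability of class 1 as `feature` sweeps over `grid`."""
    averages = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value          # force the feature to this value
        averages.append(model.predict_proba(X_mod)[:, 1].mean())
    return np.array(averages)

grid = np.linspace(X[:, 1].min(), X[:, 1].max(), 20)
pd_curve = partial_dependence(model, X, feature=1, grid=grid)
print(np.round(pd_curve, 3))  # nonlinear shape of feature 1's average effect
```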


Interpretable Machine Learning Using LIME Framework

I found this talk to be fascinating. I've been a big fan of LIME but never really understood the details of how it works under the hood. I understood that it works on an observation-by-observation basis, but I never knew that it permutes data, tests against the black box model, and then builds a simple linear model to explain it.

Really cool. My notes are below the video.

Notes

  • Input > black box > output; the problem is when we don't understand the black box, e.g., neural nets
  • Example, will the loan default?
  • Typical classification problem
  • Loan and applicant information relative to historical data
  • Linear relationships are easy
  • Nonlinear relationships via a Decision Tree can still be interpreted
  • Big data creates more complexity and dimensions
  • One way to overcome this: use feature importance
  • Feature importance doesn't tell us whether a relationship is linear or nonlinear
  • Gets better with partial dependence plots
  • Can't do partial dependence plots for neural nets
  • You can create Bayesian Networks, which show the dependencies among all variables, including the output variable, and the strength of the relationships
  • Bummer: Not as accurate as some other algorithms
  • Can give you global understanding but not detailed explanation
  • Accuracy vs. interpretability tradeoff. Does it exist?
  • Enter LIME! Local Interpretable Model-agnostic Explanations
  • At a local level, it uses a linear model to explain the prediction
  • Takes an observation, creates fake data (permutations), then calculates a similarity score between the fake and original data, then runs your black box algo (neural nets?) and tries different combinations of predictors
  • Takes those features with their similarity scores and fits a simple model to them to derive weights and scores that explain the prediction (see the sketch after these notes)
  • Without it, you don't know whether the model is picking up real signal or noise. You need LIME to verify!
  • Can apply to NLP/Text models
  • Why is it important? Trust / Predict / Improve
  • LIME helps non-ML practitioners with feature engineering
  • LIME can help comply with GDPR
  • Understanding our models can help protect vulnerable people
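
To cement the mechanism from those notes, here's a stripped-down sketch of the LIME idea in Python. It isn't the lime package itself, just the perturb, score, weight, and fit loop, with a random forest standing in for the black box and synthetic data throughout:

```python
# Stripped-down sketch of the LIME mechanism described in the notes:
# perturb one observation, score the fakes with the black box, weight them
# by similarity to the original row, and fit a small weighted linear model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] * X[:, 1] + X[:, 2] + rng.normal(scale=0.3, size=2000) > 0).astype(int)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

x0 = X[0]                                        # the row we want to explain

# 1) Create fake data by perturbing the row of interest.
perturbed = x0 + rng.normal(scale=X.std(axis=0), size=(5000, 5))

# 2) Ask the black box for predictions on the fakes.
p1 = black_box.predict_proba(perturbed)[:, 1]

# 3) Similarity score: fakes close to x0 get more weight (RBF kernel;
#    0.75 * sqrt(n_features) is a common kernel-width choice).
distances = np.linalg.norm(perturbed - x0, axis=1)
kernel_width = 0.75 * np.sqrt(5)
weights = np.exp(-(distances ** 2) / (kernel_width ** 2))

# 4) Fit a simple weighted linear model; its coefficients are the local explanation.
local_model = Ridge(alpha=1.0).fit(perturbed, p1, sample_weight=weights)
for name, coef in zip(["f1", "f2", "f3", "f4", "f5"], local_model.coef_):
    print(f"{name}: {coef:+.3f}")
```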


Neural Market Trends is the online home of Thomas Ott.