I found this talk to be fascinating. I’ve been a big fan of LIME but never really understood the details of how it works under the hood. I understood that it works on an observation-by-observation basis, but I never knew that it permutes the data, tests against the black box model, and then builds a simple linear model to explain it.
Really cool. My notes are below the video.
- Input > black box > output; applies when we don’t understand the black box, e.g. neural nets
- Example: will the loan default?
- Typical classification problem
- Loan and applicant information relative to historical data
- Linear relationships are easy
- Nonlinear relationships captured by a Decision Tree can still be interpreted
- Big data creates more complexity and dimensions
- One way to overcome this: use feature importance
- Feature importance doesn’t tell us whether a relationship is linear or nonlinear
- Gets better with partial dependence plots (see the scikit-learn sketch after these notes)
- Can’t do partial dependence plots for neural nets
- You can create Bayesian Networks, which show the dependencies among all variables (including the output variable) and the strength of each relationship
- Bummer: Not as accurate as some other algorithms
- Can give you a global understanding but not a detailed explanation
- Accuracy vs. interpretability tradeoff. Does it exist?
- Enter LIME! Local Interpretable Model-agnostic Explanations
- At a local level, it uses a linear model to explain the prediction
- Takes an observation, creates fake data (permutations of it), then calculates a similarity score between the fake and original data, then runs the fake data through your black box algo (e.g. a neural net), trying different combinations of predictors
- Fits a simple linear model to those features, weighted by the similarity scores, to produce the weights that explain the prediction (see the from-scratch sketch after these notes)
- Without it, you can’t know whether what the model picks up on is really signal or noise. You need LIME to verify!
- Can apply to NLP/text models (see the text example after these notes)
- Why is it important? Trust / Predict / Improve
- LIME enables feature engineering by non-ML practitioners
- LIME can help comply with GDPR
- Understanding our models can help protect vulnerable people
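
The feature-importance and partial-dependence points are easy to try for yourself. Below is a minimal scikit-learn sketch on a made-up loan-style dataset; the column names and data are invented purely for illustration.

```python
# Minimal sketch: feature importance plus a partial dependence plot with
# scikit-learn. The dataset and column names are made up for illustration.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
feature_names = ["income", "loan_amount", "credit_score", "age"]  # hypothetical
X = rng.normal(size=(500, 4))
y = (X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global feature importance: ranks features, but says nothing about the
# shape (linear vs. nonlinear) of each relationship.
for name, score in sorted(zip(feature_names, model.feature_importances_),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")

# Partial dependence shows the shape of the marginal relationship.
PartialDependenceDisplay.from_estimator(model, X, features=[1, 2],
                                        feature_names=feature_names)
plt.show()
```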
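And here is a from-scratch sketch of the local-explanation loop the talk describes: perturb one observation, weight the fake points by similarity, query the black box, then fit a weighted linear model. The real `lime` package does considerably more (discretization, feature selection), so treat this as an illustration of the idea, not the library’s implementation; the kernel width and sampling scheme are arbitrary choices.

```python
# From-scratch sketch of LIME's core loop for one tabular observation.
# Not the `lime` library's implementation; kernel width and Gaussian
# perturbations are arbitrary choices made for illustration.
import numpy as np
from sklearn.linear_model import Ridge

def explain_instance(predict_proba, x, n_samples=5000, kernel_width=0.75):
    rng = np.random.default_rng(0)
    # 1. Create fake data by perturbing the observation of interest.
    Z = x + rng.normal(size=(n_samples, x.shape[0]))
    # 2. Similarity score between each fake point and the original
    #    (an exponential kernel on Euclidean distance).
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 3. Ask the black box for predictions on the fake data.
    y = predict_proba(Z)[:, 1]
    # 4. Fit a simple linear model, weighted by similarity; its
    #    coefficients are the local explanation.
    local = Ridge(alpha=1.0).fit(Z, y, sample_weight=weights)
    return dict(zip(range(x.shape[0]), local.coef_))
```

With the random forest from the previous sketch, `explain_instance(model.predict_proba, X[0])` returns a per-feature weight for that single prediction.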
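For the NLP point, the `lime` package ships a text explainer. A hedged example below, assuming `pip install lime` and a toy scikit-learn pipeline; the training sentences and labels are invented.

```python
# Sketch of LIME on a text model, using the lime package's documented
# LimeTextExplainer API. The training data here is a toy stand-in.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["late payments and high debt", "steady income, paid on time",
         "missed payments, maxed cards", "long employment, low balance"]
labels = [1, 0, 1, 0]  # 1 = default (toy labels)

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["paid", "default"])
exp = explainer.explain_instance("late payments but steady income",
                                 pipeline.predict_proba, num_features=4)
print(exp.as_list())  # word-level weights for the "default" class
```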