What is Reusable Holdout?

Overfitting and introducing bias during model training are always big topics in data science. Typically you train a model using Cross Validation: you build a model on k-1 folds and test it on the remaining fold. That remaining fold is the holdout set, and it will usually give a reliable estimate if, and only if, the trained model is independent of the holdout set. Under normal circumstances this works well, but you might begin to leak information into the model as the test fold changes. The observations in the first test fold will be different in the second, BUT some observations from the first test set will now be in the training set. This creates an opportunity to ‘leak’ information into the model.
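To see that overlap concretely, here's a minimal sketch using scikit-learn's KFold (a toy stand-in, not anything from the article): the rows tested in the first split show up as training rows in the second split.

```python
# Minimal sketch: in k-fold Cross Validation the test rows of one split
# become training rows of the next split, which is where adaptive choices
# can leak information. Toy data, scikit-learn KFold.
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(-1, 1)            # 20 toy observations
splits = list(KFold(n_splits=5).split(X))   # (train_idx, test_idx) pairs

test_fold_1 = set(splits[0][1])             # rows tested in the first split
train_fold_2 = set(splits[1][0])            # rows trained on in the second split
print(sorted(test_fold_1 & train_fold_2))   # non-empty overlap: [0, 1, 2, 3]
```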

Ideally, the holdout score gives an accurate estimate of the true performance of the model on the underlying distribution from which the data were drawn. However, this is only the case when the model is independent of the holdout data! In contrast, in a competition the model generally incorporates previously observed feedback from the holdout set. Competitors work adaptively and iteratively with the feedback they receive. An improved score for one submission might convince the team to tweak their current approach, while a lower score might cause them to try out a different strategy. But the moment a team modifies their model based on a previously observed holdout score, they create a dependency between the model and the holdout data that invalidates the assumption of the classic holdout method. As a result, competitors may begin to overfit to the holdout data that supports the leaderboard. This means that their score on the public leaderboard continues to improve, while the true performance of the model does not. In fact, unreliable leaderboards are a widely observed phenomenon in machine learning competitions. (via Moritz Hardt)

What is Reusable Holdout?

Reusable Holdout is a tweak to the classic holdout method. It lets you use the same holdout test set over and over again during model building. You select this holdout set up front and then query it for every iteration of the model building you do, but the queries are answered by a special algorithm (described in the quote and sketch below) so that feedback from the holdout doesn't leak back into your model building.

Rather than limiting the analyst, our approach provides means of reliably verifying the results of an arbitrary adaptive data analysis. The key tool for doing so is what we call the reusable holdout method. As with the classic holdout method discussed above, the analyst is given unfettered access to the training data. What changes is that there is a new algorithm in charge of evaluating statistics on the holdout set. This algorithm ensures that the holdout set maintains the essential guarantees of fresh data over the course of many estimation steps.
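A minimal sketch of that mediating algorithm, assuming the "Thresholdout" variant described in the paper linked below: the analyst only ever sees the training estimate unless it drifts too far from a noisy version of the holdout estimate. The threshold, the noise scale, and the omitted budget accounting are simplified assumptions here, not the paper's exact algorithm.

```python
# Simplified Thresholdout-style sketch (assumptions noted above): answer a
# query with the training estimate unless it disagrees with the holdout by
# more than a noisy threshold, and even then only release a noisy answer.
import numpy as np

rng = np.random.default_rng(0)

def thresholdout(train_vals, holdout_vals, threshold=0.04, sigma=0.01):
    """train_vals / holdout_vals: per-observation values of the statistic
    being queried (e.g. 0/1 correctness of the current model) on each set."""
    train_mean = np.mean(train_vals)
    holdout_mean = np.mean(holdout_vals)
    # Compare against a noisy threshold so the analyst can't reverse-engineer
    # the holdout from repeated queries.
    if abs(train_mean - holdout_mean) > threshold + rng.laplace(scale=sigma):
        # The training estimate looks overfit: release a noisy holdout estimate.
        return holdout_mean + rng.laplace(scale=sigma)
    # Otherwise reveal nothing new about the holdout: return the training estimate.
    return train_mean
```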


If you want to dig deeper into this topic, check out this research paper here.

iML Package for Model Agnostic Interpretable Machine Learning

In this video the presenter goes over a new R package called ‘iML.’ The package is powerful for explaining both global and local feature importance. These explanations are critical, especially in the health field and if you're under GDPR regulations. Now, with the combination of Shapley values, LIME, and partial dependence plots, you can figure out how the model works and why.

I think we’ll see a lot of innovation in the ‘model interpretation’ space going forward.

Notes from the video:

  • IML R package
  • ML models have huge potential but are complex and hard to understand
  • In critical conditions (life vs death), you need to explain your decision
  • Current tools for Model Interpretation: Decision Trees, Rules, Linear Regressions
  • Needs a model agnostic method
  • Feature Importance @ interpreted level for the global model
  • Compute generalization error on dataset and model
  • Score the features: what is the effect of that feature on the fitted model?
  • Fit a surrogate model
  • Generate Partial Dependence Plots (visualize the feature importance)
  • For Local Interpretation, use LIME.
  • Now available in R as the iml package (written with R6?)
  • What’s in the iml package? Permutation Feature Importance / Feature Interactions / Partial Dependence Plots / LIME / Shapley Values / Tree Surrogates
  • Shows the bike data set example
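The notes above list permutation feature importance first. Here's a rough, hand-rolled illustration of that idea in Python (using scikit-learn's diabetes data as a stand-in dataset, not the iml package's actual API): shuffle one feature at a time and watch how much the model's error grows.

```python
# Hand-rolled permutation feature importance sketch (not the iml R package):
# shuffling an important feature should noticeably increase the model's error.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)  # any black box works
baseline = mean_absolute_error(y_test, model.predict(X_test))

rng = np.random.default_rng(0)
for col in X_test.columns:
    shuffled = X_test.copy()
    shuffled[col] = rng.permutation(shuffled[col].values)   # break the feature's link to y
    increase = mean_absolute_error(y_test, model.predict(shuffled)) - baseline
    print(f"{col:>6}: error increase = {increase:.2f}")     # bigger increase = more important
```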

Interpretable Machine Learning Using LIME Framework

I found this talk to be fascinating. I’ve been a big fan of LIME but never really understood the details of how it works under the hood. I knew that it works on an observation-by-observation basis, but I never knew that it permutes the data, tests it against the black box model, and then builds a simple linear model to explain it.

Really cool. My notes are below the video.

Notes

  • Input > black box > output; the problem is when we don’t understand the black box, like neural nets
  • Example, will the loan default?
  • Typical classification problem
  • Loan and applicant information relative to historical data
  • Linear relationships are easy
  • Nonlinear relationships via a Decision Tree can still be interpreted
  • Big data creates more complexity and dimensions
  • One way to overcome this: use feature importance
  • Feature importance doesn’t tell us whether the relationship is linear or nonlinear
  • Gets better with partial dependence plots
  • Can’t do partial dependence plots for neural nets
  • You can create Bayesian Networks / shows dependencies of all variables including output variable and strength of relationship
  • Bummer: Not as accurate as some other algorithms
  • Can give you global understanding but not detailed explanation
  • Accuracy vs. interpretability tradeoff. Does it exist?
  • Enter LIME! Local Interpretable Model-agnostic Explanations
  • At a local level, it uses a linear model to explain the prediction
  • Takes an observation, creates fake data (by permutation), then calculates a similarity score between the fake and original data, then runs your black box algo (neural nets?) against it, trying different combinations of predictors (a rough sketch of these steps follows the notes)
  • Takes those features, weighted by the similarity scores, and fits a simple model to them to get weights and scores that explain the prediction
  • Without it, you don’t know whether what the model picks up on is really signal or noise. You need LIME to verify!
  • Can apply to NLP/Text models
  • Why is it important? Trust / Predict / Improve
  • LIME helps with feature engineering by non-ML practitioners
  • LIME can help comply with GDPR
  • Understanding our models can help protect vulnerable people
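To make those steps concrete, here's a rough, hand-rolled sketch of the procedure the notes describe. It is not the lime library itself; the dataset, noise scale, and similarity kernel are placeholder assumptions.

```python
# Hand-rolled LIME-style sketch (not the lime library): perturb one observation,
# score the fake data with the black box, weight each fake row by its similarity
# to the original, and fit a simple weighted linear model whose coefficients
# act as the local explanation.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(random_state=0).fit(X, y)  # stand-in black box

x0 = X[0]                                    # the single observation to explain
rng = np.random.default_rng(0)

# 1. Create fake data by perturbing the observation (per-feature Gaussian noise).
perturbed = x0 + rng.normal(scale=X.std(axis=0) * 0.5, size=(500, X.shape[1]))

# 2. Ask the black box for its predictions on the fake data.
preds = black_box.predict_proba(perturbed)[:, 1]

# 3. Similarity score: fake rows closer to the original observation count more.
dist = np.linalg.norm((perturbed - x0) / X.std(axis=0), axis=1)
weights = np.exp(-(dist ** 2) / 2.0)

# 4. Fit a simple, similarity-weighted linear model; its coefficients explain
#    the black box's behaviour in the neighborhood of x0.
local = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
top = np.argsort(np.abs(local.coef_))[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:>25}: {local.coef_[i]:+.4f}")
```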