Below you will find pages that utilize the taxonomy term “H2Oai”
Last evening my colleague Navdeep Gill (@NavdeepGill) posted a link to his latest talk titled “Interpretable Machine Learning with RSparkling.” Navdeep is part of our MLI team and has a wealth of experience to share about explaining black boxes with modern techniques like Shapley values and LIME.
Machine Learning Interpretability (MLI)

H2O has this awesome open source Big Data software called Sparkling Water. It's similar to RapidMiner's Radoop, but 1) it's open source, 2) it's more powerful, and 3) it has been tested by the masses.
In this video the presenter goes over a new R package called 'iML.' This package is powerful for explaining both global and local feature importance. These explanations are critical, especially in the health field and when you're under GDPR regulations. Now, with the combination of Shapley values, LIME, and partial dependence plots, you can figure out how the model works and why it makes the predictions it does. I think we'll see a lot of innovation in the 'model interpretation' space going forward.
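To make the partial dependence idea concrete, here is a minimal sketch of a one-way partial dependence computation. This is not the iML package API; the model, data, and function names are made up for illustration.

```python
# Sketch of one-way partial dependence: for each grid value, force one
# feature to that value in every row and average the model's predictions.
# `model` is a stand-in black box; the data is invented for this example.

def model(row):
    # Toy "black box": prediction from two numeric features.
    x1, x2 = row
    return 2 * x1 + x2 * x2

data = [(0.0, 1.0), (1.0, 2.0), (2.0, 0.5), (3.0, 1.5)]

def partial_dependence(f, rows, feature_idx, grid):
    """Average prediction as one feature is swept over a grid,
    holding the joint distribution of the other features fixed."""
    pd = []
    for v in grid:
        preds = []
        for row in rows:
            row = list(row)
            row[feature_idx] = v          # overwrite the feature of interest
            preds.append(f(tuple(row)))
        pd.append(sum(preds) / len(preds))
    return pd

print(partial_dependence(model, data, feature_idx=0, grid=[0, 1, 2]))
# Each step of +1 in feature 0 raises the average prediction by 2,
# which recovers the 2 * x1 term of the toy model.
```

The resulting curve is what a partial dependence plot draws: average model output as a function of one feature.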
I found this talk to be fascinating. I've been a big fan of LIME but never really understood the details of how it works under the hood. I understood that it works on an observation-by-observation basis, but I never knew that it perturbs the data, tests it against the black-box model, and then fits a simple linear model to explain it. Really cool. My notes are below the video.
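The perturb-query-fit loop described above can be sketched in a few lines. This is only an illustration of the LIME idea, not the actual LIME library; the black-box function, kernel width, and sample count are all assumptions made for the example.

```python
import math
import random

def black_box(x):
    # Stand-in for an opaque model: some nonlinear function of one feature.
    return math.sin(3 * x) + x ** 2

def explain_locally(f, x0, n_samples=500, width=0.3, seed=0):
    """LIME-style local explanation: perturb around x0, query the
    black box, and fit a proximity-weighted linear model y ~ a + b*x."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, width) for _ in range(n_samples)]
    ys = [f(x) for x in xs]                       # query the black box
    # Proximity kernel: samples near x0 get more weight in the fit.
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]

    # Weighted least squares via the normal equations (2x2 system).
    sw = sum(ws)
    sx = sum(w * x for w, x in zip(ws, xs))
    sxx = sum(w * x * x for w, x in zip(ws, xs))
    sy = sum(w * y for w, y in zip(ws, ys))
    sxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    det = sw * sxx - sx * sx
    intercept = (sxx * sy - sx * sxy) / det
    slope = (sw * sxy - sx * sy) / det
    return intercept, slope

a, b = explain_locally(black_box, x0=0.5)
# The slope is the local explanation: it should sit near the true local
# derivative d/dx [sin(3x) + x^2] at 0.5, which is 3*cos(1.5) + 1 ≈ 1.21.
print(b)
```

The fitted slope is the "explanation" for this one observation: a simple linear stand-in that is only trusted in the neighborhood of the point being explained.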