Interpretable Machine Learning with RSparkling

Posted on Sat 01 June 2019 in Data Science • Tagged with H2O.ai, LIME, rSparkling • 2 min read

Last evening my colleague Navdeep Gill (@Navdeep_Gill_) posted a link to his latest talk titled "Interpretable Machine Learning with RSparkling." Navdeep is part of our MLI team and has a wealth of experience to share about explaining black boxes with modern techniques like Shapley values and LIME.

Machine Learning Interpretability …


Continue reading

Interpreting Machine Learning Models

Posted on Thu 09 May 2019 in Data Science • Tagged with H2O.ai, LIME, Shapley Values, MLI • 1 min read


I found this short 8-minute video from H2O World about Machine Learning Interpretability (MLI). It's given by Patrick Hall, the lead for building these capabilities in Driverless AI.

My notes from the video are below:

  • ML as an opaque black box is no longer the case
  • Cracking the black …

Continue reading

Interpretable Machine Learning Using LIME Framework

Posted on Fri 07 September 2018 in Data Science • Tagged with LIME, Machine Learning Interpretability • 2 min read

I found this talk fascinating. I've been a big fan of LIME but never really understood the details of how it works under the hood. I understood that it works on an observation-by-observation basis, but I never knew that it permutes data, tests against the black …
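The mechanics hinted at above can be sketched in a few lines. The following is a minimal, from-scratch illustration of the LIME idea (perturb the observation, query the black box, fit a locally weighted linear surrogate), not the talk's actual code; the iris data, random forest black box, and kernel width are illustrative assumptions.

```python
# Minimal sketch of the LIME idea, assuming scikit-learn and NumPy.
# The data set, black-box model, and kernel width are illustrative choices.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def explain_instance(x, n_samples=5000, kernel_width=0.75):
    """Explain one observation by perturbing it, querying the black box,
    and fitting a locally weighted linear surrogate."""
    rng = np.random.default_rng(0)
    # 1. Perturb the observation of interest.
    perturbed = x + rng.normal(scale=X.std(axis=0), size=(n_samples, X.shape[1]))
    # 2. Query the black box on the perturbed points (probability of class 1).
    preds = black_box.predict_proba(perturbed)[:, 1]
    # 3. Weight perturbed points by proximity to the original observation.
    dist = np.linalg.norm((perturbed - x) / X.std(axis=0), axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 4. Fit an interpretable linear surrogate; its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_

print(explain_instance(X[0]))
```

In practice you would reach for the lime package (in R or Python) rather than hand-rolling this, but the sketch shows why the surrogate's coefficients serve as a per-observation explanation.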


Continue reading