How Can Open Source Help with the Interpretability of Machine Learning Models?

  1. In deep learning models, the “knowledge” is created in the hidden layers. Understanding what the different hidden layers do is still a challenge, and that understanding is essential for interpreting a model.
  2. Segmenting a neural network into groups of interconnected neurons and understanding how each group works provides a simpler level of abstraction for reasoning about the network's functionality.
  3. To better interpret neural networks, it is essential to understand how they form individual concepts and how they assemble them into the final output.
  1. ELI5 (“Explain like I am a 5-year old”) is a Python library for debugging machine learning classifiers and explaining their predictions. The 5 refers to a five-year-old child, the implication being that the person requesting the explanation has only a limited or naive understanding of the model. (Minimal usage sketches for each of these tools follow the list.)
  2. LIME (Local Interpretable Model-agnostic Explanations) can explain the predictions of any black-box classifier with two or more classes.
  3. SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explaining the output of any machine learning model.
  4. Alibi focuses on providing high-quality implementations of black-box, white-box, local, and global explanation methods for classification and regression models.
  5. AI Explainability 360 is a toolkit recently open-sourced by IBM that provides state-of-the-art algorithms supporting the interpretability and explainability of machine learning models.
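
As a quick illustration of ELI5, the sketch below fits a scikit-learn classifier and asks ELI5 to describe the learned feature weights. The Iris dataset and logistic regression model are illustrative assumptions on my part, not from the article; `eli5.explain_weights` and `eli5.format_as_text` are part of ELI5's public API.

```python
# Minimal ELI5 sketch: explain the weights of a simple scikit-learn classifier.
# The dataset and model are illustrative choices, not taken from the article.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

import eli5

data = load_iris()
clf = LogisticRegression(max_iter=1000).fit(data.data, data.target)

# explain_weights inspects the fitted coefficients; format_as_text renders the
# resulting Explanation object as plain text (HTML output is also available).
explanation = eli5.explain_weights(clf, feature_names=data.feature_names)
print(eli5.format_as_text(explanation))
```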
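For LIME, a minimal sketch of explaining a single tabular prediction might look like the following. The random forest and Iris data are again illustrative assumptions; `LimeTabularExplainer` and `explain_instance` are LIME's documented tabular API.

```python
# Minimal LIME sketch: explain one prediction of a black-box classifier.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the instance and fits a simple local surrogate model around it,
# using only the black-box predict_proba function.
exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=4)
print(exp.as_list())
```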
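A small SHAP sketch is shown below. The diabetes regression dataset and random forest are illustrative assumptions; `shap.TreeExplainer` and `shap.summary_plot` are standard parts of the SHAP API for tree ensembles.

```python
# Minimal SHAP sketch: Shapley values for a tree-based regression model.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

import shap

data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(data.data, data.target)

# TreeExplainer computes Shapley values efficiently for tree ensembles;
# each value is one feature's contribution to one prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)  # shape: (n_samples, n_features)

# Global view: which features matter most across the dataset.
shap.summary_plot(shap_values, data.data, feature_names=data.feature_names)
```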
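For Alibi, the sketch below uses its AnchorTabular explainer, one of the black-box, local methods mentioned above: it searches for an “anchor” rule that keeps the prediction (almost) unchanged. The classifier and dataset are illustrative assumptions, and attribute names such as `explanation.anchor` follow recent Alibi releases.

```python
# Minimal Alibi sketch: anchor rules that locally "lock in" a prediction.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

from alibi.explainers import AnchorTabular

data = load_iris()
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

# AnchorTabular only needs a prediction function, so the model stays a black box.
explainer = AnchorTabular(clf.predict_proba, feature_names=list(data.feature_names))
explainer.fit(data.data)

explanation = explainer.explain(data.data[0], threshold=0.95)
print("Anchor:", " AND ".join(explanation.anchor))
print("Precision:", explanation.precision)
```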
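Finally, a hedged sketch of AI Explainability 360 (AIX360) using its ProtoDash algorithm, which summarizes a dataset with a small set of weighted prototypes. The import path and the `explain(X, Y, m=...)` signature follow the AIX360 documentation as I understand it, and the Iris data is purely illustrative, so treat the details as assumptions to check against your installed version.

```python
# Hedged AIX360 sketch: select weighted prototypes with ProtoDash.
# The explain() argument order is taken from the AIX360 docs; verify it
# against the version you have installed.
from sklearn.datasets import load_iris

from aix360.algorithms.protodash import ProtodashExplainer

X = load_iris().data.astype(float)

# ProtoDash picks a small set of prototypes (and weights) that best summarize
# the data; here prototypes for X are drawn from X itself.
explainer = ProtodashExplainer()
weights, prototype_indices, _ = explainer.explain(X, X, m=5)

print("Prototype rows:", prototype_indices)
print("Weights:", weights)
```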

Isabella Ferreira

I'm a PhD Candidate in Computer Engineering at PolyMTL. I'm passionate about Free/Libre and Open Source Software (FLOSS) development.