Creating explanations

These functions are the bread and butter of lime and are used to create an explainer from a model and apply it to an observation; a short usage sketch follows the list below.
- Create a model explanation function based on training data
- Explain model predictions
- Methods for extending lime's model support
- Indicate model type to lime
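A minimal sketch of this workflow, assuming a classifier fitted with caret on the iris data (the model choice and variable names here are purely illustrative): `lime()` builds an explainer from the model and its training data, and `explain()` applies it to new observations.

```r
library(caret)  # only used here to fit an example model
library(lime)

# Hold out a few observations to explain; train on the rest
iris_test  <- iris[1:5, 1:4]
iris_train <- iris[-(1:5), 1:4]
iris_lab   <- iris[[5]][-(1:5)]

# Fit any model lime supports out of the box (here a random forest via caret)
model <- train(iris_train, iris_lab, method = "rf")

# Create an explainer from the model and its training data
explainer <- lime(iris_train, model)

# Explain the predictions for the held-out observations
explanation <- explain(iris_test, explainer, n_labels = 1, n_features = 4)
head(explanation)
```

Support for model classes that lime does not know about can be added by supplying `model_type()` and `predict_model()` methods for the class in question.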
|
Investigating explanations

While an explanation can be inspected through its tabular output format, it is often much more powerful to explore it through different visualisations; see the sketch after the list below.
- Plot the features in an explanation
- Plot a condensed overview of all explanations
- Display image explanations as superpixel areas
- Plot text explanations
- Interactive explanations
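Continuing the sketch above (reusing the `explanation` object), the main plotting helpers give per-observation and aggregate views of the results.

```r
library(lime)

# One facet per explained observation, with bars showing each feature's
# weight for or against the predicted label
plot_features(explanation)

# Condensed heatmap-style overview of features across all explanations
plot_explanations(explanation)
```

For text and image models, `plot_text_explanations()` and `plot_image_explanation()` play the corresponding role, and `interactive_text_explanations()` provides an interactive version of the text view.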
Miscellaneous

This set of functions is sometimes needed in more specialised tasks; an illustrative sketch follows the list below.
- Default function to tokenize
- Test super pixel segmentation
- Stop words list
- Sentence corpus - test part
- Sentence corpus - train part
- lime: Local Interpretable Model-Agnostic Explanations
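A small illustrative sketch of the miscellaneous helpers and bundled data; the object names used here (`default_tokenize()`, `train_sentences`, `test_sentences`, `stop_words_sentences`) are assumed from the package's exports, so check the package index for the exact names.

```r
library(lime)

# Bundled sentence corpus, split into training and test parts
data(train_sentences)
data(test_sentences)
str(train_sentences)

# Stop word list used when preprocessing text
data(stop_words_sentences)
head(stop_words_sentences)

# The default tokenizer applied to text input before permutation
default_tokenize("Explaining the predictions of a text classifier")
```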