Once an explainer has been created using the
lime() function it can be used
to explain the result of the model on new observations. The
function takes new observations along with the explainer and returns a
data.frame with prediction explanations, one observation per row. The
returned explanations can then be visualised in a number of ways, e.g. with
plot_features().
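For orientation, here is a minimal end-to-end sketch (it mirrors the example at the bottom of this page and uses plot_features() as one possible visualisation):

# Train a model on all but the first iris observation
library(lime)
library(MASS)
iris_train <- iris[-1, 1:4]
model <- lda(iris_train, iris[[5]][-1])

# Build an explainer from the training data and the model
explainer <- lime(iris_train, model)

# Explain the held-out observation and visualise the explanation
explanation <- explain(iris[1, 1:4], explainer, n_labels = 1, n_features = 2)
plot_features(explanation)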
# S3 method for data.frame
explain(x, explainer, labels = NULL, n_labels = NULL, n_features,
  n_permutations = 5000, feature_select = "auto", dist_fun = "gower",
  kernel_width = NULL, gower_pow = 1, ...)

# S3 method for character
explain(x, explainer, labels = NULL, n_labels = NULL, n_features,
  n_permutations = 5000, feature_select = "auto",
  single_explanation = FALSE, ...)

explain(x, explainer, labels, n_labels = NULL, n_features,
  n_permutations = 5000, feature_select = "auto", ...)

# S3 method for imagefile
explain(x, explainer, labels = NULL, n_labels = NULL, n_features,
  n_permutations = 1000, feature_select = "auto", n_superpixels = 50,
  weight = 20, n_iter = 10, p_remove = 0.5, batch_size = 10,
  background = "grey", ...)
x: New observations to explain, of the same format as used when creating the explainer.

labels: The specific labels (classes) to explain in case the model is a classifier. For classifiers either this or n_labels must be given.

n_labels: The number of labels to explain. If this is given for classifiers the top n_labels predicted classes will be explained.

n_features: The number of features to use for each explanation.

n_permutations: The number of permutations to use for each explanation.

feature_select: The algorithm to use for selecting features. One of: "auto", "none", "forward_selection", "highest_weights", "lasso_path", "tree".

dist_fun: The distance function to use for calculating the distance from the observation to the permutations. If dist_fun = "gower" (default) it will use gower::gower_dist(). Otherwise it will be passed on to stats::dist(). See the sketch after this list.

kernel_width: The width of the exponential kernel that will be used to convert the distance to a similarity in case dist_fun is not "gower".

gower_pow: A modifier for gower distance. The calculated distance will be raised to the power of this value.

...: Parameters passed on to the predict_model() method.

single_explanation: A boolean indicating whether to pool all text in x into a single explanation.

n_superpixels: The number of segments an image should be split into.

weight: How high locality should be weighted compared to colour. High values lead to more compact superpixels, while low values follow the image structure more closely.

n_iter: The number of iterations the segmentation should run for.

p_remove: The probability that a superpixel will be removed in each permutation.

batch_size: The number of explanations to handle at a time.

background: The colour to use for blocked out superpixels.
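To build intuition for dist_fun, kernel_width and gower_pow, the sketch below reproduces the distance and similarity calculations by hand. This is an illustration of the general idea, not the package's internal code; the exact kernel form lime uses may differ.

library(gower)

obs   <- iris[1, 1:4]
perms <- iris[2:6, 1:4]  # stand-ins for the sampled permutations

# Default behaviour: gower distance, sharpened by raising it to gower_pow
gower_pow <- 2
d_gower <- gower_dist(obs, perms)^gower_pow

# With any other dist_fun the distances come from stats::dist() and are
# converted to similarities through an exponential kernel of this general
# form, controlled by kernel_width
kernel_width <- 0.75
d_eucl <- as.matrix(dist(rbind(obs, perms), method = "euclidean"))[1, -1]
similarity <- exp(-(d_eucl^2) / (kernel_width^2))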
A data.frame encoding the explanations, one row per feature per explained observation (see the post-processing sketch after the column list). The columns are:
model_type: The type of the model used for prediction.
case: The case being explained (the rowname in the data given to explain())
model_r2: The quality of the model used for the explanation
model_intercept: The intercept of the model used for the explanation
model_prediction: The prediction of the observation based on the model
used for the explanation.
feature: The feature used for the explanation
feature_value: The value of the feature used
feature_weight: The weight of the feature in the explanation
feature_desc: A human readable description of the feature importance.
data: Original data being explained
prediction: The original prediction from the model
Furthermore, classification explanations will also contain:
label: The label being explained
label_prob: The probability of label as predicted by the model
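Because the return value is an ordinary data.frame in long format, it can be filtered and summarised with standard tools. A small sketch, assuming explanation holds the output of explain() for a classifier:

library(dplyr)

# Keep the two strongest features per explained case and label
explanation %>%
  group_by(case, label) %>%
  slice_max(abs(feature_weight), n = 2) %>%
  select(case, label, label_prob, feature, feature_weight, feature_desc)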
# Explaining a model and an explainer for it
library(MASS)
iris_test <- iris[1, 1:4]
iris_train <- iris[-1, 1:4]
iris_lab <- iris[[5]][-1]
model <- lda(iris_train, iris_lab)
explanation <- lime(iris_train, model)

# This can now be used together with the explain method
explain(iris_test, explanation, n_labels = 1, n_features = 2)
#> # A tibble: 2 x 13
#>   model_type  case  label label_prob model_r2 model_intercept model_prediction
#>   <chr>       <chr> <chr>      <dbl>    <dbl>           <dbl>            <dbl>
#> 1 classific…  1     seto…          1    0.567          0.0982             1.00
#> 2 classific…  1     seto…          1    0.567          0.0982             1.00
#> # … with 6 more variables: feature <chr>, feature_value <dbl>,
#> #   feature_weight <dbl>, feature_desc <chr>, data <list>, prediction <list>