LIME Integration

Use LIME (Local Interpretable Model-agnostic Explanations) to understand individual predictions through locally faithful interpretable models.

What is LIME?

LIME explains an individual prediction by fitting an interpretable model locally around it. It perturbs the input, queries the black-box model on those perturbations, and fits a weighted linear model to the results (a sketch of this loop appears after the list below).

  • Model-agnostic: works with any black-box model
  • Provides local explanations for individual predictions
  • Produces sparse linear explanations that are easy to read
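
A minimal, self-contained sketch of that loop for tabular data is shown below. This is not blackbox_core's internal implementation: lime_sketch, the Gaussian perturbation scheme, and the Ridge surrogate are illustrative simplifications (real LIME implementations also normalize features and select a sparse feature subset).

python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

def lime_sketch(model, x, num_samples=5000, kernel_width=0.75):
    """Illustrative LIME loop for a single instance x (1D array)."""
    # 1. Perturb: sample points around x (Gaussian noise, for simplicity)
    Z = x + np.random.normal(scale=0.1, size=(num_samples, x.shape[0]))
    # 2. Query the black-box model on the perturbed samples
    preds = model.predict_proba(Z)[:, 1]  # probability of one class
    # 3. Weight samples by an exponential kernel over distance to x
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 4. Fit a weighted linear surrogate; its coefficients are the explanation
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, preds, sample_weight=weights)
    return surrogate.coef_

# Toy usage: explain one instance of a simple classifier
X = np.random.rand(200, 5)
y = (X[:, 0] > 0.5).astype(int)
clf = LogisticRegression().fit(X, y)
print(lime_sketch(clf, X[0]))  # feature 0 should dominate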

Basic LIME Usage

Initialize an explainer with the LIME method:

python
from blackbox_core import Explainer

# Initialize with LIME
explainer = Explainer(
    model=your_model,
    method='lime'
)

# Explain a single prediction
explanation = explainer.explain(
    data=X_test[0:1],
    feature_names=feature_names,
    num_features=10  # Number of features to show
)

# Visualize
explanation.plot()

Classification Example

Complete example with a classification model:

python
from sklearn.ensemble import RandomForestClassifier
from blackbox_core import Explainer
import numpy as np

# Train classifier
X_train = np.random.rand(1000, 20)
y_train = np.random.randint(0, 3, 1000)  # 3 classes

model = RandomForestClassifier()
model.fit(X_train, y_train)

# Initialize LIME explainer
explainer = Explainer(model, method='lime')

# Explain prediction for a test instance
X_test = np.random.rand(1, 20)
explanation = explainer.explain(
    data=X_test,
    feature_names=[f'feature_{i}' for i in range(20)],
    num_features=10
)

# Show which features contribute to each class
explanation.plot()
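
Because the training data above is random, the resulting explanation is essentially noise; on real data it is worth sanity-checking an explanation against the model's own output for the same instance, using standard scikit-learn calls:

python
# Sanity-check: the model's own prediction for the explained instance
proba = model.predict_proba(X_test)[0]
print(f"Predicted class: {proba.argmax()} (p = {proba.max():.2f})")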

Regression Example

Using LIME with regression models:

python
from sklearn.ensemble import GradientBoostingRegressor
from blackbox_core import Explainer
import numpy as np

# Synthetic regression data (10 features)
X_train = np.random.rand(1000, 10)
y_train = np.random.rand(1000)
X_test = np.random.rand(5, 10)
feature_names = [f'feature_{i}' for i in range(10)]

# Train regressor
model = GradientBoostingRegressor()
model.fit(X_train, y_train)

# Create LIME explainer
explainer = Explainer(model, method='lime')

# Explain a single prediction
explanation = explainer.explain(
    data=X_test[0:1],
    feature_names=feature_names
)

# Get feature weights
weights = explanation.to_dict()
print("Feature contributions:", weights)

Advanced Configuration

Customize LIME behavior with additional parameters:

python
# Advanced LIME configuration
explainer = Explainer(
    model=your_model,
    method='lime',
    num_samples=5000,  # Number of perturbed samples
    kernel_width=0.75   # Width of exponential kernel
)

# Explain with custom settings
explanation = explainer.explain(
    data=X_test[0:1],
    feature_names=feature_names,
    num_features=15,    # Show top 15 features
    top_labels=3        # Explain top 3 predicted classes
)
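
The kernel_width parameter controls how quickly a perturbed sample's influence decays with its distance from the explained instance; in the standard LIME formulation, a sample at distance D receives weight exp(-D^2 / kernel_width^2). The toy calculation below shows how the same distance is discounted under different widths:

python
import numpy as np

# Weight assigned to a perturbed sample at distance 1.0 from the instance
distance = 1.0
for width in (0.25, 0.75, 2.0):
    weight = np.exp(-(distance ** 2) / width ** 2)
    print(f"kernel_width={width}: weight={weight:.4f}")
# Smaller widths discount distant samples aggressively, giving more local
# (but potentially noisier) explanations; larger widths smooth them out.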

When to Use LIME

LIME is particularly useful in these scenarios:

  • When you need to explain individual predictions in detail
  • For model-agnostic explanations that work with any model (classifier or regressor)
  • When interpretability is more important than computational speed
  • For debugging specific predictions or edge cases

Best Practices

  • Use num_samples=5000 or higher for more stable explanations
  • Adjust kernel_width based on your feature space density
  • Compare LIME with SHAP to validate explanation consistency (see the sketch after this list)
  • Focus on explaining high-stakes or surprising predictions
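
As a starting point for the LIME-versus-SHAP comparison, the sketch below assumes that Explainer also accepts method='shap' (not documented on this page; verify against your installed version) and that to_dict() returns comparable {feature: weight} mappings:

python
# Assumption: method='shap' is available in your version of blackbox_core
lime_weights = Explainer(model, method='lime').explain(
    data=X_test[0:1], feature_names=feature_names).to_dict()
shap_weights = Explainer(model, method='shap').explain(
    data=X_test[0:1], feature_names=feature_names).to_dict()

# Compare per-feature attributions side by side
for name in feature_names:
    print(f"{name}: lime={lime_weights.get(name)}, shap={shap_weights.get(name)}")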