
Integrated Gradients and Embedding layer #37

@gladomat

Description

I would like to calculate feature importances using integrated gradients, hoping this will be faster than SHAP's KernelExplainer. Unfortunately, the embedding layer is non-differentiable with respect to its inputs, since it is a discrete lookup of integer category indices rather than a continuous operation. This causes the gradient calculation to fail.
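For reference, this is roughly the workaround I'm hoping is possible: run the integrated-gradients interpolation in embedding space instead of on the raw integer inputs. A minimal sketch only, assuming the network can be split into an embedding lookup and a differentiable head; `embed_fn` and `head_fn` are hypothetical callables, not deeptables API:

```python
import tensorflow as tf

def integrated_gradients_in_embedding_space(embed_fn, head_fn, x_cat,
                                            baseline_cat, steps=50):
    # embed_fn: integer category codes -> float embedding vectors
    # head_fn:  embedding vectors -> model output
    # (hypothetical split of the full network into "lookup" and "rest")
    emb = embed_fn(x_cat)
    emb_base = embed_fn(baseline_cat)

    grads = []
    for alpha in tf.linspace(0.0, 1.0, steps + 1):
        # Straight-line path between baseline and input, in embedding space,
        # where everything is continuous and differentiable.
        interp = emb_base + alpha * (emb - emb_base)
        with tf.GradientTape() as tape:
            tape.watch(interp)
            out = head_fn(interp)
        grads.append(tape.gradient(out, interp))

    # Riemann approximation of the path integral of the gradients.
    avg_grads = tf.reduce_mean(tf.stack(grads), axis=0)
    return (emb - emb_base) * avg_grads
```

This would give attributions per embedding dimension, which would still need to be summed per original categorical feature.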

I also tried embedding the categorical variables separately and then feeding the resulting all-numeric data into the model. In theory this avoids the embedding-layer gradient entirely, because no embedding layer remains in the graph. However, when every variable is numeric, the preprocessor discards any column whose values are all zero from top to bottom. This is not what I want, because I train in batches, and a column can easily be all zeros within a single batch.
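To illustrate that second attempt: embed the categorical columns offline, then concatenate them with the numeric columns into one all-numeric matrix. A minimal sketch with made-up shapes (one categorical column with 10 levels embedded to 4 dimensions, plus 5 numeric columns); deeptables is not involved until the resulting matrix is fed in:

```python
import numpy as np
import tensorflow as tf

embedding = tf.keras.layers.Embedding(input_dim=10, output_dim=4)

x_cat = np.array([[3], [7], [0]])               # integer codes for the categorical column
x_num = np.random.rand(3, 5).astype("float32")  # numeric columns

# Look up and flatten the embeddings: (3, 1, 4) -> (3, 4).
emb = tf.reshape(embedding(x_cat), (x_cat.shape[0], -1)).numpy()

# All-numeric design matrix. A column that happens to be constant zero
# (e.g. within one training batch) is what the preprocessor then drops.
X = np.concatenate([emb, x_num], axis=1)
```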

I have two questions:

  1. Is there a way to calculate integrated gradients with embedding layers?
  2. How can I stop the deeptables preprocessor from discarding variables that have non-unique (all-zero) values?

Thanks!
