I see in the 'code' section that the Python version has both global and local explanations. However, in the R 'interpret' package I'm only seeing "ebm_show", which appears to be a global explanation. I've tried searching around, but haven't had any luck.
Is there currently a way to get a local-level explanation? Will this feature be added to the R version soon?
I've tested EBM against existing ML algorithms we used on a recent project (an H2O AutoML stacked ensemble) and it performed comparably. I'd really love to be able to use it, as a big issue for the project is interpretability/explainability.
Look forward to your reply,
Brian
Hi @brianfhead -- We're happy to hear that EBMs are performing comparably on your datasets. You are correct that the current R implementation only supports global explanations. The R package has a subset of the functionality available in python in several areas, like this one. We do plan to improve this and eventually match the python package in terms of EBM functionality, but we don't have a timeline for that work yet. Improving EBMs is still an active area of research and we've been spending more of our time there recently, so we would welcome community contributions like local explanation graphs for R in the meantime.