More pipeline diagnostics #239
I agree, it could definitely be organized better. If you're just interested in the underlying tpot model, you can get it with:
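Roughly like this sketch, assuming the TPOT-backed learner; the attribute names (`learner`, `backend`, `fitted_pipeline_`) and the example dataset/target are my assumptions, not necessarily exactly how your pipe was built:

```python
from automatminer import MatPipe
from matminer.datasets import load_dataset

# fit a pipe on an example matminer dataset (any fitted MatPipe works)
df = load_dataset("elastic_tensor_2015")[["structure", "K_VRH"]]
pipe = MatPipe.from_preset("express")
pipe.fit(df, target="K_VRH")

tpot_obj = pipe.learner.backend          # the fitted TPOTRegressor/TPOTClassifier
best_model = tpot_obj.fitted_pipeline_   # TPOT's best internal sklearn Pipeline
print(best_model)
```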
If you're interested in the best "entire" pipeline in terms of going from material object to prediction (including featurization, cleaning, reduction, and learning), that is a bit more difficult, because the fitted matpipe is the best pipeline lol. My thought is to add another method which returns only the most important information: e.g., which featurizers were used, what the cleaning rules generally are, what the best autoML pipeline is, etc. (see the sketch below).
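For illustration, a rough sketch of what such a summary could pull together from a fitted pipe. `summarize_pipe` is a hypothetical helper, and the stage/attribute names are assumptions about the MatPipe layout that may not match every version, hence the defensive `getattr` calls:

```python
def summarize_pipe(pipe):
    """Collect the high-level choices made by a fitted MatPipe into one dict."""
    af = getattr(pipe, "autofeaturizer", None)
    cl = getattr(pipe, "cleaner", None)
    rd = getattr(pipe, "reducer", None)
    ml = getattr(pipe, "learner", None)
    return {
        "featurizers": getattr(af, "featurizers", None),      # which featurizers ran
        "max_na_frac": getattr(cl, "max_na_frac", None),       # cleaning rules
        "encoder": getattr(cl, "encoder", None),
        "reducers": getattr(rd, "reducers", None),             # feature reduction steps
        "best_automl_pipeline": getattr(ml, "best_pipeline", None),
    }

# usage, with a fitted pipe such as the one above:
# print(summarize_pipe(pipe))
```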
I think that would be nice! It took me some time to discover that on my own.
In the case of tpot pipelines that are saved and loaded, you are correct, because pickling tpot objects didn't work last time I checked (may have been updated though). The current behavior is to select the best pipeline from the tpot object and save that single sklearn Pipeline as the backend. Tl;dr: you can open up the best pipeline from a loaded (tpot-backend) pipe using:
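Something like this sketch, assuming `MatPipe.save`/`MatPipe.load` and that the reloaded learner's backend is a plain sklearn Pipeline (the filename here is made up):

```python
from automatminer import MatPipe

pipe = MatPipe.load("mat.pipe")        # a pipe previously stored with pipe.save("mat.pipe")
best_pipeline = pipe.learner.backend   # after loading, this should be a single sklearn Pipeline
print(best_pipeline.steps)             # inspect the individual sklearn steps
```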
Only the best pipeline is saved. I've opened an issue addressing this: #241
Related to #221
Besides #238, I think the diagnostics for a fitted pipe could be further improved. In particular, it's too difficult to determine which model actually performed best.