When deploying machine learning models, factors beyond the accuracy of the model need to be considered. Is the model biased? Can the decisions made by the model be explained? Is the model robust to adversarial attacks? IBM has developed three toolkits to help address these questions.
This session will provide a brief overview of the toolkits and one lab on each toolkit. In addition, a lab on Watson OpenScale, which monitors accuracy, bias, and model drift, is also included. Attendees will use Watson Studio to complete the labs.
Lab-0-Prerequisites - This lab will walk through the steps to create a Watson Studio project. A Watson Studio project is a way to organize your data and analytical assets for an analytics project.
Lab-1 - This lab will feature IBM's AI Fairness 360 (AIF360), a comprehensive open-source toolkit of metrics to check for unwanted bias in datasets and machine learning models, and state-of-the-art algorithms to mitigate such bias.
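To give a flavor of the workflow Lab-1 covers, the sketch below measures bias in a dataset with respect to a protected attribute and applies a pre-processing mitigation algorithm. The German credit dataset, the `age` attribute, and the Reweighing algorithm are illustrative choices for this example, not necessarily the assets used in the lab.

```python
# Minimal AIF360 sketch: measure bias, mitigate it, measure again.
# Note: GermanDataset expects the raw german.data file to be downloaded
# into aif360's data directory, as described in the AIF360 documentation.
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Load the dataset and declare 'age' as the protected attribute.
dataset = GermanDataset(
    protected_attribute_names=['age'],
    privileged_classes=[lambda x: x >= 25],
    features_to_drop=['personal_status', 'sex'])

privileged = [{'age': 1}]
unprivileged = [{'age': 0}]

# Fairness metric before mitigation.
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("Mean difference (before):", metric.mean_difference())

# Apply a pre-processing mitigation algorithm (Reweighing).
rw = Reweighing(unprivileged_groups=unprivileged,
                privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

# Fairness metric after mitigation.
metric_transf = BinaryLabelDatasetMetric(dataset_transf,
                                         unprivileged_groups=unprivileged,
                                         privileged_groups=privileged)
print("Mean difference (after):", metric_transf.mean_difference())
```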
Lab-2 - This lab will feature IBM's AI Explainability 360, a comprehensive open source toolkit of state-of-the-art algorithms that support the interpretability and explainability of machine learning models.
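As a preview of Lab-2, the sketch below produces a local, post-hoc explanation of a single prediction using the LIME wrapper included in AIX360, which mirrors the standalone lime package's API. The breast-cancer dataset and random-forest model are assumptions made for illustration, not the lab's actual data or model.

```python
# Minimal AIX360 sketch: explain one prediction of a black-box model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from aix360.algorithms.lime import LimeTabularExplainer

# Train an illustrative black-box model on a standard tabular dataset.
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# Build a local explainer over the training data.
explainer = LimeTabularExplainer(data.data,
                                 feature_names=list(data.feature_names),
                                 class_names=list(data.target_names),
                                 discretize_continuous=True)

# Explain a single prediction: which features pushed it toward each class?
explanation = explainer.explain_instance(data.data[0],
                                         model.predict_proba,
                                         num_features=5)
print(explanation.as_list())
```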
Lab-3 - This lab will feature IBM's Adversarial Robustness Toolbox (ART). ART is a library dedicated to adversarial machine learning. Its purpose is to allow rapid crafting and analysis of attacks and defense methods for machine learning models. ART provides implementations of many state-of-the-art methods for attacking and defending classifiers.
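To give a flavor of what Lab-3 involves, the sketch below wraps a trained classifier with ART, crafts adversarial examples with the Fast Gradient Method, and compares accuracy on clean versus adversarial inputs. The Iris data, logistic-regression model, and eps value are assumptions for this example, not the lab's actual setup.

```python
# Minimal ART sketch: attack a classifier and measure the damage.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train an illustrative model.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the trained model so ART's attacks can query it.
classifier = SklearnClassifier(model=model)

# Craft adversarial examples with the Fast Gradient Method.
attack = FastGradientMethod(classifier, eps=0.3)
X_adv = attack.generate(x=X)

# Compare accuracy on clean vs. adversarial inputs.
print("Clean accuracy:      ", np.mean(model.predict(X) == y))
print("Adversarial accuracy:", np.mean(model.predict(X_adv) == y))
```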
Lab-4 - This lab will feature Watson OpenScale. IBM Watson OpenScale is an open platform that helps remove barriers to enterprise-scale AI by supporting bias mitigation, accuracy, and explainability of outcomes among other features.