Lab-1: Fairness Toolkit (AIF360)

Introduction

Machine learning models are increasingly used to inform high-stakes decisions about people. Although machine learning, by its very nature, is always a form of statistical discrimination, the discrimination becomes objectionable when it places certain privileged groups at a systematic advantage and certain unprivileged groups at a systematic disadvantage. Biases in training data, due either to prejudice in labels or to under-/over-sampling, yield models with unwanted bias.

AI Fairness 360 (AIF360) is an open-source toolkit developed by IBM Research that can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. The goal of this lab is to introduce its bias-detection functionality and to mitigate bias in a machine learning model using the Adversarial Debiasing technique.

For more information, see the links below:

AIF360 Demo: https://aif360.mybluemix.net
AIF360 GitHub: https://github.com/IBM/AIF360
AIF360 API Docs: https://aif360.readthedocs.io/en/latest/

Objectives

In this notebook you will use AIF360 to detect and mitigate bias in the COMPAS dataset, which is used to assess the likelihood that a criminal defendant will reoffend.
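As a preview, the detection step boils down to wrapping the dataset in a metric object and reading off group-fairness statistics. Below is a minimal sketch, assuming aif360 is installed and the raw COMPAS CSV has been placed in the toolkit's data directory (CompasDataset prints download instructions if it is missing); the choice of race as the protected attribute is illustrative, not prescribed by the lab.

```python
# Minimal bias-detection sketch with AIF360 (illustrative, not the lab's full code).
from aif360.datasets import CompasDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Loads the COMPAS recidivism data; raises with download instructions
# if the raw CSV is not in aif360's data directory.
dataset = CompasDataset()

# Group encodings follow the toolkit's defaults (race: 1 = Caucasian).
privileged_groups = [{'race': 1}]
unprivileged_groups = [{'race': 0}]

metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged_groups,
                                  privileged_groups=privileged_groups)

# Difference in favorable-outcome rates between groups; 0 means parity.
print('Statistical parity difference:', metric.statistical_parity_difference())
# Ratio of favorable-outcome rates; 1 means parity.
print('Disparate impact:', metric.disparate_impact())
```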

Upon completing this lab you will have learned how to:

1. Load datasets from the toolkit package
2. Check a dataset for bias
3. Mitigate existing bias using the Adversarial Debiasing technique (sketched below)
4. Train on both the original and the corrected dataset and compare the results
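For orientation, the mitigation step in item 3 might look like the sketch below. It is a hedged, minimal example rather than the lab's exact code: AIF360's AdversarialDebiasing is written against the TensorFlow 1.x graph API, so a compat session is used, and the scope name and the 70/30 split are arbitrary illustrative choices.

```python
# Minimal adversarial-debiasing sketch (illustrative; assumes aif360 and
# a TF1-compatible TensorFlow install).
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

from aif360.datasets import CompasDataset
from aif360.algorithms.inprocessing import AdversarialDebiasing
from aif360.metrics import BinaryLabelDatasetMetric

dataset = CompasDataset()
train, test = dataset.split([0.7], shuffle=True)  # illustrative 70/30 split

privileged_groups = [{'race': 1}]     # toolkit default encoding: 1 = Caucasian
unprivileged_groups = [{'race': 0}]

sess = tf.Session()
model = AdversarialDebiasing(privileged_groups=privileged_groups,
                             unprivileged_groups=unprivileged_groups,
                             scope_name='debiased_classifier',  # arbitrary name
                             debias=True,   # set False to train a plain baseline
                             sess=sess)
model.fit(train)

# predict() returns a new dataset whose labels are the model's predictions.
test_pred = model.predict(test)
metric = BinaryLabelDatasetMetric(test_pred,
                                  unprivileged_groups=unprivileged_groups,
                                  privileged_groups=privileged_groups)
print('Statistical parity difference after debiasing:',
      metric.statistical_parity_difference())
sess.close()
```

Running the same code with debias=False trains the plain baseline classifier of item 4, so the two runs can be compared on both accuracy and the fairness metrics.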

Step 1. Please click on the link below to download the instructions to your machine.

Instructions.