Reading list for "The Shapley Value in Machine Learning" (IJCAI 2022)
Weighted Shapley Values and Weighted Confidence Intervals for Multiple Machine Learning Models and Stacked Ensembles
Using a Kaggle dataset, customer personalities were analysed on the basis of spending habits, income, education, and family size. K-Means clustering, XGBoost, and SHAP analysis were performed.
In this repository you will find explainability analyses of machine learning models.
Code for the EACL workshop paper "Can BERT eat RuCoLA? Topological Data Analysis to Explain"
📊🛰️ Data processing scripts, ML models, and explainable AI results created as part of my Master's Thesis @ Johns Hopkins
Measuring galaxy environmental distance scales with GNNs and explainable ML models
Android malware detection using machine learning.
Code for my thesis about SHAP. Implementations of decision-tree, SVM, and BERT models on two datasets, IMDb and Argument Mining.
Predicting NBA game outcomes using schedule-related information. This is an example of supervised learning: an XGBoost model was trained on 20 seasons' worth of NBA games, and SHAP values are used for model explainability.
This repository is associated with an interpretable/explainable ML model for liquefaction potential assessment of soils. The model is developed using XGBoost and SHAP.
Implementation of the algorithm described in the paper "An Imprecise SHAP as a Tool for Explaining the Class Probability Distributions under Limited Training Data"
Gradient-boosted regression and decision-tree models on behavioural animal data (PLOS Computational Biology, doi: https://doi.org/10.1371/journal.pcbi.1011985)
No-code Machine learning (Pre-alpha)
In this project, we predict credit card defaults using classification models.
Determining Feature Importance by Integrating Random Forest and SHAP in Python
XGB - SHAP XAI
An Analysis of Lassa Fever Outbreaks in Nigeria using Machine Learning Models and Shapley Values
The goal of SHAP is to explain the prediction of an instance x by computing the contribution of each feature to the prediction. The SHAP explanation method computes Shapley values from coalitional game theory. The feature values of a data instance act as players in a coalition.
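As a rough illustration of this idea, the sketch below uses the shap Python package together with XGBoost and scikit-learn (an illustrative setup, not the code of any specific repository listed above) to compute Shapley values for a small tree-ensemble classifier and summarise per-feature contributions.

```python
# Minimal sketch: Shapley-value explanations for a tree ensemble.
# Assumes shap, xgboost, and scikit-learn are installed; the dataset
# and model hyperparameters are placeholders for illustration only.
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

# Load a small tabular dataset and fit a simple XGBoost classifier.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

# TreeExplainer computes exact Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Each row of shap_values gives the contribution of every feature value
# (the "players" in the coalition) to that instance's prediction,
# relative to the expected model output over the background data.
shap.summary_plot(shap_values, X)
```

The same workflow applies to other model families by swapping the explainer (e.g. KernelExplainer for model-agnostic use), at the cost of slower, approximate Shapley value estimates.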