Code for "Self-supervised Multi-modal Training from Uncurated Image and Reports Enables Zero-shot Oversight Artificial Intelligence in Radiology"
Paper link: https://arxiv.org/abs/2208.05140
[Paper](https://arxiv.org/abs/2208.05140) | Official PyTorch code
Medical X-VL: Medical Cross-attention Vision-Language model
Medical X-VL is a vision-language model tailored to the properties of medical domain data. For the demo, we provide Python code with which you can train the vision-language model, perform zero-shot oversight AI, and visualize the cross-attention between words and visual semantics.
- Ubuntu 20.04
- Python 3.8 (tested on)
- Conda
- PyTorch 1.8.0 (tested on)
- CUDA version 11.3 (tested on)
- CPU or GPU that supports CUDA, cuDNN, and PyTorch 1.8.
- We tested on a GeForce RTX 3090.
- We recommend more than 32 GB of RAM.
- Install PyTorch and the other dependencies. They can be easily installed with the requirements.txt file.
> pip install -r requirements.txt
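If you use Conda, you can create and activate a fresh environment before running the command above (the environment name `medxvl` is only an example, not a name used by this repository):
> conda create -n medxvl python=3.8
> conda activate medxvl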
The open-source datasets used in the paper can be obtained from the following links.
- The MIMIC-CXR database is available at MIMIC.
- A subset of the CheXpert test data and the corresponding labels used to evaluate the model on zero-shot abnormality detection can be found at CheXpert.
- The COVIDx dataset used to evaluate the model on an unseen disease is available at COVIDx.
Other parts of the institutional data used in this study are not publicly available due to patient privacy obligations. Interested users can request access to these data for research purposes by contacting the corresponding author, J.C.Y. (jong.ye@kaist.ac.kr).
You can download the pretrained weights from the link below; they should be placed as follows.
Coming soon.
First, download the vision transformer (ViT-S/16) self-supervised on the MIMIC-CXR data from this [link](Coming Soon). We utilized this model as the uni-modal visual encoder.
> --config ./configs/Pretrain.yaml --output_dir ./output/
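Note that the arguments above omit the training entry point. A complete invocation would look like the following sketch, where the script name `Pretrain.py` is a hypothetical placeholder rather than a confirmed file in this repository:
> python Pretrain.py --config ./configs/Pretrain.yaml --output_dir ./output/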
Coming soon.
Starting from the VLP weights, the model can be fine-tuned for the report generation task as below. Coming soon.
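Until the fine-tuning instructions are released, a hypothetical invocation would likely follow the same pattern as pre-training; the script name `Generation.py` and the config file `Generation.yaml` below are assumptions, not confirmed files in this repository:
> python Generation.py --config ./configs/Generation.yaml --output_dir ./output/ --checkpoint /PATH/TO/PRETRAIN/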
Successful visualization will show the cross-attention between the words and the visual semantics (image patches), as below.
> --config ./configs/Pretrain.yaml --output_dir ./output/ --checkpoint /PATH/TO/PRETRAIN/ --evaluate
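For reference, the sketch below shows one way such word-to-patch cross-attention could be rendered; it is illustrative only, and none of the function or variable names come from this repository:

```python
# Illustrative sketch only: overlay one word's cross-attention weights on the image.
# These names are assumptions and do not come from the Medical X-VL codebase.
import numpy as np
import matplotlib.pyplot as plt

def show_cross_attention(image, patch_attention, word, patch_size=16):
    """image: (H, W, 3) array; patch_attention: 1-D weights over the image patches."""
    h, w = image.shape[0] // patch_size, image.shape[1] // patch_size
    attn_map = np.asarray(patch_attention).reshape(h, w)
    # Upsample the patch-level map to pixel resolution for the overlay.
    attn_map = np.kron(attn_map, np.ones((patch_size, patch_size)))
    plt.imshow(image)
    plt.imshow(attn_map, cmap="jet", alpha=0.4)
    plt.title(f'Cross-attention for "{word}"')
    plt.axis("off")
    plt.show()
```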