Official PyTorch implementation of the paper *MedSyn: Text-guided Anatomy-aware Synthesis of High-Fidelity 3D CT Images*, accepted by IEEE Transactions on Medical Imaging.
This code was developed by Yanwu Xu and Li Sun.
- Environment Setup
- Pretrained Checkpoint
- Pre-processing Data
- Training
- Inference
- Additional Scripts
- Generated Samples
- Citation
- License and Copyright
- Contact
## Environment Setup

Before training or running inference with our code, we highly recommend preparing at least two GPUs with 48 GB of memory each.
```bash
conda create -n medsyn python==3.9
```
In addition, you need to install several packages:

```bash
pip3 install -U xformers --index-url https://download.pytorch.org/whl/cu121
pip install monai==0.8.0
pip install accelerate
pip install einops
pip install einops_exts
```
Refer to the `src` folder.
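Since the memory requirement is substantial, it may help to verify your setup before starting. The following is a minimal sanity-check sketch using PyTorch's standard CUDA API; it is not part of this repository:

```python
import torch

# Sanity check for the recommended setup: at least two GPUs with
# roughly 48 GB of memory each.
assert torch.cuda.is_available(), "CUDA is required for training and inference."

for idx in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(idx)
    print(f"GPU {idx}: {props.name}, {props.total_memory / 1024**3:.1f} GB")

if torch.cuda.device_count() < 2:
    print("Warning: fewer than two GPUs detected; training may exceed memory.")
```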
## Pretrained Checkpoint

Our checkpoint for the pre-trained language model is available here. Our checkpoint for the model pre-trained on the UPMC dataset is available here (application required).
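As a rough illustration of how a downloaded checkpoint can be restored (the wrapping `"model"` key below is a guess, not the repository's actual format; the model classes themselves are defined in the `src` folder):

```python
import torch

def load_pretrained(model: torch.nn.Module, ckpt_path: str) -> torch.nn.Module:
    """Restore downloaded weights into a model constructed from the src folder.

    `ckpt_path` is whatever checkpoint file you downloaded.
    """
    state_dict = torch.load(ckpt_path, map_location="cpu")  # load on CPU, move later
    # Some checkpoints wrap the weights in an outer dict; unwrap if needed
    # (the "model" key here is hypothetical).
    if isinstance(state_dict, dict) and "model" in state_dict:
        state_dict = state_dict["model"]
    model.load_state_dict(state_dict)
    return model.eval()
```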
## Pre-processing Data

Refer to the `preprocess` folder.
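For orientation, a typical 3D CT preprocessing pipeline (loading, reorientation, resampling, and HU windowing) can be written with MONAI as below. The spacing and intensity values are illustrative only; the actual settings used by MedSyn are in the `preprocess` folder:

```python
from monai import transforms

# Illustrative CT preprocessing: the values below are NOT the repository's
# actual settings; see the preprocess folder for those.
preprocess = transforms.Compose([
    transforms.LoadImaged(keys=["image"]),
    transforms.AddChanneld(keys=["image"]),
    transforms.Orientationd(keys=["image"], axcodes="RAS"),
    transforms.Spacingd(keys=["image"], pixdim=(1.0, 1.0, 1.0), mode="bilinear"),
    transforms.ScaleIntensityRanged(
        keys=["image"],
        a_min=-1000, a_max=500,   # illustrative HU window
        b_min=-1.0, b_max=1.0,    # normalize to [-1, 1]
        clip=True,
    ),
])

sample = preprocess({"image": "path/to/ct_volume.nii.gz"})
print(sample["image"].shape)  # channel-first volume, e.g. (1, H, W, D)
```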
## Training

Refer to the `src` folder.
The following one-step script runs both the low-res and high-res stages, but each stage can also be trained independently:
```bash
sh run_train.sh
```
## Inference

Refer to the `src` folder.
```bash
sh run_inference.sh
```
We provide text-conditional generation in `prompt.ipynb` and segmentation-conditioned generation in `seg_conditional.ipynb`; a minimal sketch of the text-conditional workflow follows.
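As a purely illustrative outline of what the text-conditional notebook does, the sketch below follows the two-stage (low-res, then high-res) design described in the paper. `sample_low_res`, `sample_high_res`, and the text-encoder/tokenizer interfaces are hypothetical placeholders, not the repository's actual API:

```python
import torch

PROMPT = "No acute cardiopulmonary abnormality."  # example report text

@torch.no_grad()
def generate(model, text_encoder, tokenizer, prompt: str) -> torch.Tensor:
    # Encode the report text with the pre-trained language model.
    tokens = tokenizer(prompt, return_tensors="pt")
    text_emb = text_encoder(**tokens).last_hidden_state

    # Stage 1: synthesize a coarse low-resolution 3D volume from text.
    low_res = model.sample_low_res(text_emb)              # hypothetical method

    # Stage 2: refine to high resolution, conditioned on the stage-1 output.
    high_res = model.sample_high_res(low_res, text_emb)   # hypothetical method
    return high_res  # e.g. a (1, 1, D, H, W) CT volume

# Usage: volume = generate(model, text_encoder, tokenizer, PROMPT)
```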
## Generated Samples

*(Side-by-side table of generated Low-Res and High-Res sample volumes; see the repository for the images.)*
## Citation

```bibtex
@ARTICLE{medsyn2024,
  author={Xu, Yanwu and Sun, Li and Peng, Wei and Jia, Shuyue and Morrison, Katelyn and Perer, Adam and Zandifar, Afrooz and Visweswaran, Shyam and Eslami, Motahhare and Batmanghelich, Kayhan},
  journal={IEEE Transactions on Medical Imaging},
  title={MedSyn: Text-guided Anatomy-aware Synthesis of High-Fidelity 3D CT Images},
  year={2024},
  doi={10.1109/TMI.2024.3415032}}
```
## License and Copyright

This code is released under the CC-BY-NC license.
## Contact

Yanwu Xu [yanwuxu@bu.edu], Li Sun [lisun@bu.edu], Kayhan Batmanghelich [batman@bu.edu]