We present the Long Range Graph Benchmark (LRGB), a collection of 5 graph learning datasets that arguably require long-range reasoning to achieve strong performance on their respective tasks:
- PascalVOC-SP
- COCO-SP
- PCQM-Contact
- Peptides-func
- Peptides-struct
In this repo, we provide the source code to load the proposed datasets and run baseline experiments; a minimal loading sketch is shown after the tables below. The repo is based on GraphGPS, which is built using PyG and GraphGym from PyG 2.
Dataset | Domain | Task | Node Feat. (dim) | Edge Feat. (dim) | Perf. Metric |
---|---|---|---|---|---|
PascalVOC-SP | Computer Vision | Node Prediction | Pixel + Coord (14) | Edge Weight (1 or 2) | macro F1 |
COCO-SP | Computer Vision | Node Prediction | Pixel + Coord (14) | Edge Weight (1 or 2) | macro F1 |
PCQM-Contact | Quantum Chemistry | Link Prediction | Atom Encoder (9) | Bond Encoder (3) | Hits@K, MRR |
Peptides-func | Chemistry | Graph Classification | Atom Encoder (9) | Bond Encoder (3) | AP |
Peptides-struct | Chemistry | Graph Regression | Atom Encoder (9) | Bond Encoder (3) | MAE |
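The metrics above are standard; purely as a reference, here is a minimal sketch of how they can be computed with scikit-learn (installed in the environment below) on toy arrays. This only illustrates the metric definitions, not the repo's logging code, which goes through the GraphGym/GraphGPS loggers.

```python
# Minimal sketch of the tabulated metrics using scikit-learn on toy arrays
# (illustration only; the repo computes metrics inside its GraphGym loggers).
import numpy as np
from sklearn.metrics import average_precision_score, f1_score, mean_absolute_error

# macro F1 (PascalVOC-SP / COCO-SP): unweighted mean of per-class F1 over node labels.
y_true = np.array([0, 1, 2, 1, 0])
y_pred = np.array([0, 2, 2, 1, 0])
print("macro F1:", f1_score(y_true, y_pred, average="macro"))

# AP (Peptides-func): average precision per label, macro-averaged over the labels.
labels = np.array([[1, 0], [0, 1], [1, 1]])
scores = np.array([[0.9, 0.2], [0.1, 0.8], [0.7, 0.6]])
print("AP:", average_precision_score(labels, scores))

# MAE (Peptides-struct): mean absolute error over all regression targets.
targets = np.array([[0.5, 1.2], [0.3, 0.9]])
preds = np.array([[0.4, 1.0], [0.5, 1.1]])
print("MAE:", mean_absolute_error(targets, preds))

# Hits@K / MRR for PCQM-Contact are ranking metrics over candidate contact edges
# (OGB-style link prediction evaluation) and are not reproduced in this toy sketch.
```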
Dataset | # Graphs | Total # Nodes | Mean # Nodes | Mean Degree | Total # Edges | Mean # Edges | Mean Shortest Path | Mean Diameter |
---|---|---|---|---|---|---|---|---|
PascalVOC-SP | 11,355 | 5,443,545 | 479.40 | 5.65 | 30,777,444 | 2,710.48 | 10.74±0.51 | 27.62±2.13 |
COCO-SP | 123,286 | 58,793,216 | 476.88 | 5.65 | 332,091,902 | 2,693.67 | 10.66±0.55 | 27.39±2.14 |
PCQM-Contact | 529,434 | 15,955,687 | 30.14 | 2.03 | 32,341,644 | 61.09 | 4.63±0.63 | 9.86±1.79 |
Peptides-func | 15,535 | 2,344,859 | 150.94 | 2.04 | 4,773,974 | 307.30 | 20.89±9.79 | 56.99±28.72 |
Peptides-struct | 15,535 | 2,344,859 | 150.94 | 2.04 | 4,773,974 | 307.30 | 20.89±9.79 | 56.99±28.72 |
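If you only want to inspect the graphs outside this repo's GraphGym pipeline, recent PyG releases ship the benchmark as `torch_geometric.datasets.LRGBDataset`. A rough sketch, assuming a PyG version new enough to include that class (the environment below pins PyG 2.0.2, which predates it):

```python
# Rough sketch: load an LRGB dataset with PyG's built-in LRGBDataset class
# (requires a recent PyG release; this repo itself loads data via GraphGym loaders).
from torch_geometric.datasets import LRGBDataset

train_set = LRGBDataset(root="datasets", name="Peptides-func", split="train")
print(f"{len(train_set)} training graphs")

data = train_set[0]                      # a single peptide graph
print(data.num_nodes, data.num_edges)    # graph size
print(data.y)                            # multi-label functional classes
```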
```bash
conda create -n lrgb python=3.9
conda activate lrgb

conda install pytorch=1.9 torchvision torchaudio -c pytorch -c nvidia
conda install pyg=2.0.2 -c pyg -c conda-forge
conda install pandas scikit-learn

# RDKit is required for OGB-LSC PCQM4Mv2 and datasets derived from it.
conda install openbabel fsspec rdkit -c conda-forge

# Check https://www.dgl.ai/pages/start.html to install DGL based on your CUDA requirements
pip install dgl-cu111 dglgo -f https://data.dgl.ai/wheels/repo.html

pip install performer-pytorch
pip install torchmetrics==0.7.2
pip install ogb
pip install wandb

conda clean --all
```
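After installing, a quick sanity check that the core dependencies import and see the GPU (nothing repo-specific):

```python
# Quick sanity check of the installed environment.
import torch
import torch_geometric
import torchmetrics
import ogb

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torch_geometric:", torch_geometric.__version__)
print("torchmetrics:", torchmetrics.__version__)
print("ogb:", ogb.__version__)
```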
```bash
conda activate lrgb

# Running GCN baseline for Peptides-func.
python main.py --cfg configs/GCN/peptides-func-GCN.yaml wandb.use False

# Running SAN baseline for PascalVOC-SP.
python main.py --cfg configs/SAN/vocsuperpixels-SAN.yaml wandb.use False
```
The scripts for all experiments are located in the `run` directory.
To use W&B logging, set `wandb.use True` and have a `gtransformers` entity set up in your W&B account (or point `wandb.entity` to any entity you prefer).
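`main.py` accepts GraphGym-style `key value` overrides after `--cfg` (that is how `wandb.use False` works in the examples above), and the run scripts sweep seeds the same way. A hypothetical helper, not part of the repo, that launches one baseline over several seeds:

```python
# Hypothetical helper (not in the repo): run the GCN Peptides-func baseline over several
# seeds by passing GraphGym-style "key value" overrides after --cfg.
import subprocess

CFG = "configs/GCN/peptides-func-GCN.yaml"
for seed in (0, 1, 2, 3):
    subprocess.run(
        ["python", "main.py", "--cfg", CFG,
         "seed", str(seed),        # standard GraphGym config key
         "wandb.use", "False"],    # disable W&B logging, as in the examples above
        check=True,
    )
```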
Dataset | Derived from | Original License | LRGB Release License |
---|---|---|---|
PascalVOC-SP | Pascal VOC 2011 | Custom* | Custom* |
COCO-SP | MS COCO | CC BY 4.0 | CC BY 4.0 |
PCQM-Contact | PCQM4Mv2 | CC BY 4.0 | CC BY 4.0 |
Peptides-func | SATPdb | CC BY-NC 4.0 | CC BY-NC 4.0 |
Peptides-struct | SATPdb | CC BY-NC 4.0 | CC BY-NC 4.0 |
*Custom License for Pascal VOC 2011 (respecting Flickr terms of use)
Leaderboards tracking the performance of various models on the LRGB datasets are available at paperswithcode.
If you find this work useful, please cite our paper:
```bibtex
@article{dwivedi2022LRGB,
  title={Long Range Graph Benchmark},
  author={Dwivedi, Vijay Prakash and Rampášek, Ladislav and Galkin, Mikhail and Parviz, Ali and Wolf, Guy and Luu, Anh Tuan and Beaini, Dominique},
  journal={arXiv:2206.08164},
  year={2022}
}
```