
 BKDSNN: Enhancing the Performance of Learning-based Spiking Neural Networks Training with Blurred Knowledge Distillation

| Paper | Blog |


Contents

  • News
  • Introduction
  • Usage
  • Performance
  • Acknowledgement
  • Contact

News

  • [2024/7] Code of BKDSNN is released!
  • [2024/11] Checkpoints for ImageNet and CIFAR are released!

Introduction


This is the official project repository for BKDSNN: Enhancing the Performance of Learning-based Spiking Neural Networks Training with Blurred Knowledge Distillation, which has been accepted by ECCV 2024. If you find this repository helpful, please kindly cite:

@inproceedings{spikeziptf2024,
  title={BKDSNN: Enhancing the Performance of Learning-based Spiking Neural Networks Training with Blurred Knowledge Distillation},
  author={Zekai Xu and Kang You and Qinghai Guo and Xiang Wang and Zhezhi He},
  booktitle={18th European Conference on Computer Vision (ECCV)},
  year={2024}
}

Usage

Preparation

For the environment: for Spikingformer-CML, we follow the environment setup of Spikingformer-CML; for Spike-Element-Wise-ResNet, we follow the environment setup of Spike-Element-Wise-ResNet (note that we use the earlier release, spikingjelly==0.0.0.0.4).
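
A minimal setup sketch. The environment name and Python version are assumptions, and torch/torchvision/timm stand in for the upstream repositories' full requirements; only the spikingjelly==0.0.0.0.4 pin for the Spike-Element-Wise-ResNet branch is specified above.

conda create -n bkdsnn python=3.8 -y        # hypothetical environment name and Python version
conda activate bkdsnn
pip install torch torchvision timm          # core dependencies assumed from the upstream repos
pip install spikingjelly==0.0.0.0.4         # pinned older release used by the SEW-ResNet branch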

For datasets: prepare ImageNet-1K, CIFAR-10, CIFAR-100, and CIFAR10-DVS.
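
For ImageNet-1K, a standard torchvision-style ImageFolder layout is the usual expectation for training scripts like these (a sketch; the root directory is whatever you pass via --data-path):

/data/for/imagenet
├── train
│   ├── n01440764/
│   └── ...
└── val
    ├── n01440764/
    └── ...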

Train

For ImageNet-1K on CML + Spikingformer-8-768 with mixed distillation

cd Spikingformer-CML/imagenet/scripts/mixed
bash run_b.sh

For ImageNet-1K on SEW-ResNet50 with mixed distillation

cd Spike-Element-Wise-ResNet/imagenet
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --use_env train_distillation.py \
    --cos_lr_T 320 \
    --model sew_resnet50 -b 128 \
    --model_teacher resnet50 \
    --output-dir ./logs --tb --print-freq 64 \
    --amp --cache-dataset --connect_f ADD --T 4 \
    --lr 0.1 --epoch 320 --data-path /data/for/imagenet \
    --distill_type mixed --teacher_channel 512 --student_channel 512
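
For a quick single-GPU sanity run, the same entry point can be launched with one process (a sketch that reuses only the flags shown above; the smaller batch size is an assumption to fit a single GPU):

cd Spike-Element-Wise-ResNet/imagenet
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 --use_env train_distillation.py \
    --model sew_resnet50 -b 32 \
    --model_teacher resnet50 \
    --distill_type mixed --teacher_channel 512 --student_channel 512 \
    --connect_f ADD --T 4 --lr 0.1 --epoch 320 --cos_lr_T 320 \
    --amp --output-dir ./logs --data-path /data/for/imagenet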

Performance

Main results on ImageNet-1K (the ViT-Base teacher checkpoint is available here; for the ResNet teachers we use the pretrained checkpoints from torchvision).

| Student | Teacher | T | Param. | Top-1 Acc. | Download |
| --- | --- | --- | --- | --- | --- |
| CML + Spikingformer-8-384 (74.35) | ViT-Base (81.78) | 4 | 16.81M | 75.48 (+1.13) | - |
| CML + Spikingformer-8-512 (76.54) | ViT-Base (81.78) | 4 | 29.68M | 77.24 (+0.70) | here |
| CML + Spikingformer-8-768 (77.64) | ViT-Base (81.78) | 4 | 66.34M | 79.93 (+2.29) | here |
| SEW-ResNet18 (63.18) | ResNet18 (69.76) | 4 | 11.69M | 65.60 (+2.42) | - |
| SEW-ResNet34 (67.04) | ResNet34 (71.24) | 4 | 21.79M | 71.24 (+4.20) | here |
| SEW-ResNet50 (67.78) | ResNet50 (72.32) | 4 | 25.56M | 72.32 (+4.54) | here |

Main results on CIFAR-100 (teacher model available here)

| Student | Teacher | T | Param. | Top-1 Acc. | Download |
| --- | --- | --- | --- | --- | --- |
| CML + Spikingformer-4-256 (78.19) | ViT-Small (82.22) | 4 | 4.15M | 79.41 (+1.22) | here |
| CML + Spikingformer-2-384 (78.87) | ViT-Small (82.22) | 4 | 5.76M | 80.63 (+1.76) | here |
| CML + Spikingformer-4-384 (79.98) | ViT-Small (82.22) | 4 | 9.32M | 81.26 (+1.28) | here |

Main results on CIFAR-10 (teacher model available here)

| Student | Teacher | T | Param. | Top-1 Acc. | Download |
| --- | --- | --- | --- | --- | --- |
| CML + Spikingformer-4-256 (94.94) | ViT-Small (96.75) | 4 | 4.15M | 95.29 (+0.35) | here |
| CML + Spikingformer-2-384 (95.54) | ViT-Small (96.75) | 4 | 5.76M | 95.90 (+0.36) | here |
| CML + Spikingformer-4-384 (95.81) | ViT-Small (96.75) | 4 | 9.32M | 96.06 (+0.25) | here |

Acknowledgement

Related projects: Spikingformer-CML, Spike-Element-Wise-ResNet, and spikingjelly.

For help or issues using this repository, please submit a GitHub issue.

Contact

For other communications related to this repository, please contact sherlock.holmes.xu@sjtu.edu.cn.
