Abstract
We investigate the design aspects of feature distillation methods for network compression and propose a novel feature distillation method whose distillation loss is designed to create a synergy among several aspects: the teacher transform, the student transform, the distillation feature position, and the distance function. The proposed distillation loss combines a feature transform with a newly designed margin ReLU, a new distillation feature position, and a partial L2 distance function that skips redundant information which would adversely affect the compression of the student. On ImageNet, the proposed method achieves a top-1 error of 21.65% with ResNet50, outperforming the teacher network, ResNet152. The method is evaluated on various tasks such as image classification, object detection, and semantic segmentation, and achieves significant performance improvements on all of them. The code is available at link
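The sketch below illustrates the two loss components described above: a margin ReLU applied to the teacher's pre-ReLU feature and a partial L2 distance that skips positions where the student already lies below a negative teacher response. It is a minimal PyTorch sketch, not this repository's actual implementation; the function names margin_relu and partial_l2_loss are illustrative, NCHW feature maps are assumed, and the per-channel negative margin (the expectation of the teacher's negative responses, per the paper) is assumed to be precomputed elsewhere.

import torch

def margin_relu(feat_t: torch.Tensor, margin: torch.Tensor) -> torch.Tensor:
    # Margin ReLU teacher transform: keep positive teacher responses and
    # clamp negative ones to a per-channel negative margin m (m < 0).
    # `margin` has shape (C,) and is assumed precomputed from teacher statistics.
    return torch.max(feat_t, margin.view(1, -1, 1, 1))

def partial_l2_loss(feat_s: torch.Tensor, feat_t_transformed: torch.Tensor) -> torch.Tensor:
    # Partial L2 distance: positions where the transformed teacher value is
    # non-positive and the student value is already smaller are skipped, so
    # the student is not forced to mimic redundant negative responses.
    skip = (feat_t_transformed <= 0) & (feat_s <= feat_t_transformed)
    diff = (feat_s - feat_t_transformed) ** 2
    return diff.masked_fill(skip, 0.0).sum()

# Hypothetical usage: distill a student feature map against a teacher feature
# map taken just before the teacher's ReLU (the distillation position used in the paper).
feat_t = torch.randn(8, 64, 32, 32)   # teacher pre-ReLU feature
feat_s = torch.randn(8, 64, 32, 32)   # student feature (e.g. after a 1x1 regressor)
margin = -torch.rand(64)              # assumed per-channel negative margins
loss = partial_l2_loss(feat_s, margin_relu(feat_t, margin))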
Train
sh tools/slurm_train.sh $PARTITION $JOB_NAME \
configs/distill/mmcls/ofd/ofd_backbone_resnet50_resnet18_8xb16_cifar10.py \
$DISTILLATION_WORK_DIR
Test
sh tools/slurm_test.sh $PARTITION $JOB_NAME \
configs/distill/mmcls/ofd/ofd_backbone_resnet50_resnet18_8xb16_cifar10.py \
$DISTILLATION_WORK_DIR/latest.pth --eval $EVAL_SETTING
Citation
@inproceedings{heo2019overhaul,
title={A Comprehensive Overhaul of Feature Distillation},
author={Heo, Byeongho and Kim, Jeesoo and Yun, Sangdoo and Park, Hyojin and Kwak, Nojun and Choi, Jin Young},
booktitle={International Conference on Computer Vision (ICCV)},
year={2019}
}