- Paper: CondenseNet V2: Sparse Feature Reactivation for Deep Networks
- Origin Repo: jianghaojun/CondenseNetV2
- Code: cdnv2.py
- Evaluate Transforms:
```python
# backend: pil
# input_size: 224x224
# models: cdnv2_a and cdnv2_b
transforms = T.Compose([
    T.Resize(256, interpolation='bicubic'),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# backend: pil
# input_size: 224x224
# models: cdnv2_c
transforms = T.Compose([
    T.Resize(256, interpolation='bilinear'),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```
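A minimal end-to-end sketch of how these transforms might be used at inference time. It assumes `T` refers to `torchvision.transforms` (or an API-compatible transforms module) and that `cdnv2.py` exposes a constructor named after each model entry (e.g. `cdnv2_a`); both the import path and the constructor name are assumptions, not confirmed by this card. The interpolation is written with the explicit `InterpolationMode` enum, which is equivalent to the `'bicubic'` string above.

```python
import torch
import torchvision.transforms as T
from torchvision.transforms import InterpolationMode
from PIL import Image

from cdnv2 import cdnv2_a  # hypothetical constructor name, assumed from the model table

# Preprocessing for cdnv2_a / cdnv2_b (256 bicubic resize, 224 center crop).
transforms = T.Compose([
    T.Resize(256, interpolation=InterpolationMode.BICUBIC),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = cdnv2_a()  # architecture only; pretrained weights come from the Download links above
model.eval()

img = Image.open("example.jpg").convert("RGB")   # any RGB image
x = transforms(img).unsqueeze(0)                 # shape: [1, 3, 224, 224]
with torch.no_grad():
    logits = model(x)
top1_class = logits.argmax(dim=1)
```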
- Model Details:
| Model | Model Name | Params (M) | FLOPs (G) | Top-1 (%) | Top-5 (%) | Pretrained Model |
| --- | --- | --- | --- | --- | --- | --- |
| CondenseNetV2-A | cdnv2_a | 2.0 | 0.05 | 64.38 | 85.24 | Download |
| CondenseNetV2-B | cdnv2_b | 3.6 | 0.15 | 71.89 | 90.27 | Download |
| CondenseNetV2-C | cdnv2_c | 6.1 | 0.31 | 75.87 | 92.64 | Download |
- Citation:
```bibtex
@misc{yang2021condensenet,
    title={CondenseNet V2: Sparse Feature Reactivation for Deep Networks},
    author={Le Yang and Haojun Jiang and Ruojin Cai and Yulin Wang and Shiji Song and Gao Huang and Qi Tian},
    year={2021},
    eprint={2104.04382},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```