Commit 9166381

humu789, fpshuang, huangpengsheng, LKJacky and liukai authored
[Feature] Add MMRazor quantization (#513)
* [FEATURE] add quant algo `Learned Step Size Quantization` (#346) * update * Fix a bug in make_divisible. (#333) fix bug in make_divisible Co-authored-by: liukai <liukai@pjlab.org.cn> * [Fix] Fix counter mapping bug (#331) * fix counter mapping bug * move judgment into get_counter_type & update UT * [Docs]Add MMYOLO projects link (#334) * [Doc] fix typos in en/usr_guides (#299) * Update README.md * Update README_zh-CN.md Co-authored-by: Sheffield <49406546+SheffieldCao@users.noreply.github.com> * [Features]Support `MethodInputsRecorder` and `FunctionInputsRecorder` (#320) * support MethodInputsRecorder and FunctionInputsRecorder * fix bugs that the model can not be pickled * WIP: add pytest for ema model * fix bugs in recorder and delivery when ema_hook is used * don't register the DummyDataset * fix pytest * updated * retina loss & predict & tesnor DONE * [Feature] Add deit-base (#332) * WIP: support deit * WIP: add deithead * WIP: fix checkpoint hook * fix data preprocessor * fix cfg * WIP: add readme * reset single_teacher_distill * add metafile * add model to model-index * fix configs and readme * [Feature]Feature map visualization (#293) * WIP: vis * WIP: add visualization * WIP: add visualization hook * WIP: support razor visualizer * WIP * WIP: wrap draw_featmap * support feature map visualization * add a demo image for visualization * fix typos * change eps to 1e-6 * add pytest for visualization * fix vis hook * fix arguments' name * fix img path * support draw inference results * add visualization doc * fix figure url * move files Co-authored-by: weihan cao <HIT-cwh> * [Feature] Add kd examples (#305) * support kd for mbv2 and shufflenetv2 * WIP: fix ckpt path * WIP: fix kd r34-r18 * add metafile * fix metafile * delete * [Doc] add documents about pruning. (#313) * init * update user guide * update images * update * update How to prune your model * update how_to_use_config_tool_of_pruning.md * update doc * move location * update * update * update * add mutablechannels.md * add references Co-authored-by: liukai <liukai@pjlab.org.cn> Co-authored-by: jacky <jacky@xx.com> * [Feature] PyTorch version of `PKD: General Distillation Framework for Object Detectors via Pearson Correlation Coefficient`. 
(#304) * add pkd * add pytest for pkd * fix cfg * WIP: support fcos3d * WIP: support fcos3d pkd * support mmdet3d * fix cfgs * change eps to 1e-6 and add some comments * fix docstring * fix cfg * add assert * add type hint * WIP: add readme and metafile * fix readme * update metafiles and readme * fix metafile * fix pipeline figure * for RFC * Customed FX initialize * add UT init * [Refactor] Refactor Mutables and Mutators (#324) * refactor mutables * update load fix subnet * add DumpChosen Typehint * adapt UTs * fix lint * Add GroupMixin to ChannelMutator (temporarily) * fix type hints * add GroupMixin doc-string * modified by comments * fix type hits * update subnet format * fix channel group bugs and add UTs * fix doc string * fix comments * refactor diff module forward * fix error in channel mutator doc * fix comments Co-authored-by: liukai <liukai@pjlab.org.cn> * [Fix] Update readme (#341) * update kl readme * update dsnas readme * fix url * Bump version to 1.0.0rc1 (#338) update version * init demo * add customer_tracer * add quantizer * add fake_quant, loop, config * remove CPatcher in custome_tracer * demo_try * init version * modified base.py * pre-rebase * wip of adaround series * adaround experiment * trasfer to s2 * update api * point at sub_reconstruction * pre-checkout * export onnx * add customtracer * fix lint * move custom tracer * fix import * TDO: UTs * Successfully RUN * update loop * update loop docstrings * update quantizer docstrings * update qscheme docstrings * update qobserver docstrings * update tracer docstrings * update UTs init * update UTs init * fix review comments * fix CI * fix UTs * update torch requirements Co-authored-by: huangpengsheng <huangpengsheng@sensetime.com> Co-authored-by: LKJacky <108643365+LKJacky@users.noreply.github.com> Co-authored-by: liukai <liukai@pjlab.org.cn> Co-authored-by: Yang Gao <Gary1546308416AL@gmail.com> Co-authored-by: kitecats <90194592+kitecats@users.noreply.github.com> Co-authored-by: Sheffield <49406546+SheffieldCao@users.noreply.github.com> Co-authored-by: whcao <41630003+HIT-cwh@users.noreply.github.com> Co-authored-by: jacky <jacky@xx.com> Co-authored-by: pppppM <67539920+pppppM@users.noreply.github.com> Co-authored-by: humu789 <humu@pjlab.org.cn> * [Features]Quantize pipeline (#350) * init demo * add customer_tracer * add quantizer * add fake_quant, loop, config * remove CPatcher in custome_tracer * demo_try * init version * modified base.py * pre-rebase * wip of adaround series * adaround experiment * trasfer to s2 * update api * point at sub_reconstruction * pre-checkout * export onnx * add customtracer * fix lint * move custom tracer * fix import * update * updated * retina loss & predict & tesnor DONE * for RFC * Customed FX initialize * add UT init * TDO: UTs * Successfully RUN * update loop * update loop docstrings * update quantizer docstrings * update qscheme docstrings * update qobserver docstrings * update tracer docstrings * update UTs init * update UTs init * fix bugs * fix lsq * refactor quantize pipeline * fix quant * WIP: debug qat * fix lsq bugs * fix qat, docstring in progress * TDO: UTs * fix bugs * fix lsq * refactor quantize pipeline * fix quant * WIP: debug qat * fix lsq bugs * fix qat, docstring in progress * fixed DefaultQconfigs name * fix bugs * add comments and fix typos * delete useless codes * fix bugs and add comments * rename prepare_module_dict * update lsq config Co-authored-by: humu789 <humu@pjlab.org.cn> Co-authored-by: huangpengsheng 
<huangpengsheng@sensetime.com> Co-authored-by: FreakieHuang <frank0huang@foxmail.com> Co-authored-by: pppppM <gjf_mail@126.com> * [Feature] Add `prepare_for_mmdeploy` interface (#365) * remove useless code * fix build graph module import bug * refactor general quant * rename GeneralQuant to MMArchitectureQuant * fix some dtype bugs * add prepare_for_mmdeploy interface * update prepare for mmdeploy args * fix some comments Co-authored-by: humu789 <humu@pjlab.org.cn> * CodeCamp #132 add MinMaxFloorObserver (#376) * add minmaxfloor_observer.py * add MinMaxFloorObserver and normative docstring * add test for MinMaxFloorObserver * Quant go (#409) * add torch observer * add torch fakequant * refactor base quantizer * add QConfigHander and QSchemeHander & finish quantizer_refactor_beta * passed ptq_pipeline * tmp-commit * fix loop and algorithm * delete fakequant * refactor code structure * remove lsq * valid ptq pipeline * wip * fix del functions * fix * fix lint and pytest Co-authored-by: HIT-cwh <2892770585@qq.com> * [Refactor & Doc] Refactor graph_utils and add docstring and pytest (#420) * refactor graph_utils and add docstring and pytest * fix del fakequant * delete useless codes * Merge dev-1.x into quantize (#430) * Fix a bug in make_divisible. (#333) fix bug in make_divisible Co-authored-by: liukai <liukai@pjlab.org.cn> * [Fix] Fix counter mapping bug (#331) * fix counter mapping bug * move judgment into get_counter_type & update UT * [Docs]Add MMYOLO projects link (#334) * [Doc] fix typos in en/usr_guides (#299) * Update README.md * Update README_zh-CN.md Co-authored-by: Sheffield <49406546+SheffieldCao@users.noreply.github.com> * [Features]Support `MethodInputsRecorder` and `FunctionInputsRecorder` (#320) * support MethodInputsRecorder and FunctionInputsRecorder * fix bugs that the model can not be pickled * WIP: add pytest for ema model * fix bugs in recorder and delivery when ema_hook is used * don't register the DummyDataset * fix pytest * [Feature] Add deit-base (#332) * WIP: support deit * WIP: add deithead * WIP: fix checkpoint hook * fix data preprocessor * fix cfg * WIP: add readme * reset single_teacher_distill * add metafile * add model to model-index * fix configs and readme * [Feature]Feature map visualization (#293) * WIP: vis * WIP: add visualization * WIP: add visualization hook * WIP: support razor visualizer * WIP * WIP: wrap draw_featmap * support feature map visualization * add a demo image for visualization * fix typos * change eps to 1e-6 * add pytest for visualization * fix vis hook * fix arguments' name * fix img path * support draw inference results * add visualization doc * fix figure url * move files Co-authored-by: weihan cao <HIT-cwh> * [Feature] Add kd examples (#305) * support kd for mbv2 and shufflenetv2 * WIP: fix ckpt path * WIP: fix kd r34-r18 * add metafile * fix metafile * delete * [Doc] add documents about pruning. (#313) * init * update user guide * update images * update * update How to prune your model * update how_to_use_config_tool_of_pruning.md * update doc * move location * update * update * update * add mutablechannels.md * add references Co-authored-by: liukai <liukai@pjlab.org.cn> Co-authored-by: jacky <jacky@xx.com> * [Feature] PyTorch version of `PKD: General Distillation Framework for Object Detectors via Pearson Correlation Coefficient`. 
(#304) * add pkd * add pytest for pkd * fix cfg * WIP: support fcos3d * WIP: support fcos3d pkd * support mmdet3d * fix cfgs * change eps to 1e-6 and add some comments * fix docstring * fix cfg * add assert * add type hint * WIP: add readme and metafile * fix readme * update metafiles and readme * fix metafile * fix pipeline figure * [Refactor] Refactor Mutables and Mutators (#324) * refactor mutables * update load fix subnet * add DumpChosen Typehint * adapt UTs * fix lint * Add GroupMixin to ChannelMutator (temporarily) * fix type hints * add GroupMixin doc-string * modified by comments * fix type hits * update subnet format * fix channel group bugs and add UTs * fix doc string * fix comments * refactor diff module forward * fix error in channel mutator doc * fix comments Co-authored-by: liukai <liukai@pjlab.org.cn> * [Fix] Update readme (#341) * update kl readme * update dsnas readme * fix url * Bump version to 1.0.0rc1 (#338) update version * [Feature] Add Autoformer algorithm (#315) * update candidates * update subnet_sampler_loop * update candidate * add readme * rename variable * rename variable * clean * update * add doc string * Revert "[Improvement] Support for candidate multiple dimensional search constraints." * [Improvement] Update Candidate with multi-dim search constraints. (#322) * update doc * add support type * clean code * update candidates * clean * xx * set_resource -> set_score * fix ci bug * py36 lint * fix bug * fix check constrain * py36 ci * redesign candidate * fix pre-commit * update cfg * add build_resource_estimator * fix ci bug * remove runner.epoch in testcase * [Feature] Autoformer architecture and dynamicOPs (#327) * add DynamicSequential * dynamiclayernorm * add dynamic_pathchembed * add DynamicMultiheadAttention and DynamicRelativePosition2D * add channel-level dynamicOP * add autoformer algo * clean notes * adapt channel_mutator * vit fly * fix import * mutable init * remove annotation * add DynamicInputResizer * add unittest for mutables * add OneShotMutableChannelUnit_VIT * clean code * reset unit for vit * remove attr * add autoformer backbone UT * add valuemutator UT * clean code * add autoformer algo UT * update classifier UT * fix test error * ignore * make lint * update * fix lint * mutable_attrs * fix test * fix error * remove DynamicInputResizer * fix test ci * remove InputResizer * rename variables * modify type * Continued improvements of ChannelUnit * fix lint * fix lint * remove OneShotMutableChannelUnit * adjust derived type * combination mixins * clean code * fix sample subnet * search loop fly * more annotations * avoid counter warning and modify batch_augment cfg by gy * restore * source_value_mutables restriction * simply arch_setting api * update * clean * fix ut * [Feature] Add performance predictor (#306) * add predictor with 4 handlers * [Improvement] Update Candidate with multi-dim search constraints. (#322) * update doc * add support type * clean code * update candidates * clean * xx * set_resource -> set_score * fix ci bug * py36 lint * fix bug * fix check constrain * py36 ci * redesign candidate * fix pre-commit * update cfg * add build_resource_estimator * fix ci bug * remove runner.epoch in testcase * update metric_predictor: 1. update MetricPredictor; 2. add predictor config for searching; 3. add predictor in evolution_search_loop. 
* add UT for predictor * add MLPHandler * patch optional.txt for predictors * patch test_evolution_search_loop * refactor apis of predictor and handlers * fix ut and remove predictor_cfg in predictor * adapt new mutable & mutator design * fix ut * remove unness assert after rebase * move predictor-build in __init__ & simplify estimator-build Co-authored-by: Yue Sun <aptsunny@tongji.edu.cn> * [Feature] Add DCFF (#295) * add ChannelGroup (#250) * rebase new dev-1.x * modification for adding config_template * add docstring to channel_group.py * add docstring to mutable_channel_group.py * rm channel_group_cfg from Graph2ChannelGroups * change choice type of SequentialChannelGroup from float to int * add a warning about group-wise conv * restore __init__ of dynamic op * in_channel_mutable -> mutable_in_channel * rm abstractproperty * add a comment about VT * rm registry for ChannelGroup * MUTABLECHANNELGROUP -> ChannelGroupType * refine docstring of IndexDict * update docstring * update docstring * is_prunable -> is_mutable * update docstring * fix error in pre-commit * update unittest * add return type * unify init_xxx apit * add unitest about init of MutableChannelGroup * update according to reviews * sequential_channel_group -> sequential_mutable_channel_group Co-authored-by: liukai <liukai@pjlab.org.cn> * Add BaseChannelMutator and refactor Autoslim (#289) * add BaseChannelMutator * add autoslim * tmp * make SequentialMutableChannelGroup accpeted both of num and ratio as choice. and supports divisior * update OneShotMutableChannelGroup * pass supernet training of autoslim * refine autoslim * fix bug in OneShotMutableChannelGroup * refactor make_divisible * fix spell error: channl -> channel * init_using_backward_tracer -> init_from_backward_tracer init_from_fx_tracer -> init_from_fx_tracer * refine SequentialMutableChannelGroup * let mutator support models with dynamicop * support define search space in model * tracer_cfg -> parse_cfg * refine * using -> from * update docstring * update docstring Co-authored-by: liukai <liukai@pjlab.org.cn> * tmpsave * migrate ut * tmpsave2 * add loss collector * refactor slimmable and add l1-norm (#291) * refactor slimmable and add l1-norm * make l1-norm support convnd * update get_channel_groups * add l1-norm_resnet34_8xb32_in1k.py * add pretrained to resnet34-l1 * remove old channel mutator * BaseChannelMutator -> ChannelMutator * update according to reviews * add readme to l1-norm * MBV2_slimmable -> MBV2_slimmable_config Co-authored-by: liukai <liukai@pjlab.org.cn> * update config * fix md & pytorch support <1.9.0 in batchnorm init * Clean old codes. 
(#296) * remove old dynamic ops * move dynamic ops * clean old mutable_channels * rm OneShotMutableChannel * rm MutableChannel * refine * refine * use SquentialMutableChannel to replace OneshotMutableChannel * refactor dynamicops folder * let SquentialMutableChannel support float Co-authored-by: liukai <liukai@pjlab.org.cn> * fix ci * ci fix py3.6.x & add mmpose * ci fix py3.6.9 in utils/index_dict.py * fix mmpose * minimum_version_cpu=3.7 * fix ci 3.7.13 * fix pruning &meta ci * support python3.6.9 * fix py3.6 import caused by circular import patch in py3.7 * fix py3.6.9 * Add channel-flow (#301) * base_channel_mutator -> channel_mutator * init * update docstring * allow omitting redundant configs for channel * add register_mutable_channel_to_a_module to MutableChannelContainer * update according to reviews 1 * update according to reviews 2 * update according to reviews 3 * remove old docstring * fix error * using->from * update according to reviews * support self-define input channel number * update docstring * chanenl -> channel_elem Co-authored-by: liukai <liukai@pjlab.org.cn> Co-authored-by: jacky <jacky@xx.com> * support >=3.7 * support py3.6.9 * Rename: ChannelGroup -> ChannelUnit (#302) * refine repr of MutableChannelGroup * rename folder name * ChannelGroup -> ChannelUnit * filename in units folder * channel_group -> channel_unit * groups -> units * group -> unit * update * get_mutable_channel_groups -> get_mutable_channel_units * fix bug * refine docstring * fix ci * fix bug in tracer Co-authored-by: liukai <liukai@pjlab.org.cn> * update new channel config format * update pruning refactor * update merged pruning * update commit * fix dynamic_conv_mixin * update comments: readme&dynamic_conv_mixins.py * update readme * move kl softmax channel pooling to op by comments * fix comments: fix redundant & split README.md * dcff in ItePruneAlgorithm * partial dynamic params for fuseconv * add step_freq & prune_time check * update comments * update comments * update comments * fix ut * fix gpu ut & revise step_freq in ItePruneAlgorithm * update readme * revise ItePruneAlgorithm * fix docs * fix dynamic_conv attr * fix ci Co-authored-by: LKJacky <108643365+LKJacky@users.noreply.github.com> Co-authored-by: liukai <liukai@pjlab.org.cn> Co-authored-by: zengyi.vendor <zengyi.vendor@sensetime.com> Co-authored-by: jacky <jacky@xx.com> * [Fix] Fix optional requirements (#357) * fix optional requirements * fix dcff ut * fix import with get_placeholder * supplement the previous commit * [Fix] Fix configs of wrn models and ofd. (#361) * 1.revise the configs of wrn22, wrn24, and wrn40. 2.revise the data_preprocessor of ofd_backbone_resnet50_resnet18_8xb16_cifar10 * 1.Add README for vanilla-wrm. * 1.Revise readme of wrn Co-authored-by: zhangzhongyu <zhangzhongyu@pjlab.org.cn> * [Fix] Fix bug on mmrazor visualization, mismatch argument in define and use. (#356) fix bug on mmrazor visualization, mismatch argument in define and use. Co-authored-by: Xianpan Zhou <32625100+PanDaMeow@users.noreply.github.com> * fix bug in benchmark_test (#364) fix bug in configs Co-authored-by: Your Name <you@example.com> * [FIX] Fix wrn configs (#368) * fix wrn configs * fix wrn configs * update online wrn model weight * [Fix] fix bug on pkd config. Wrong import filename. 
(#373) * [CI] Update ci to torch1.13 (#380) update ci to torch1.13 * [Feature] Add BigNAS algorithm (#219) * add calibrate-bn-statistics * add test calibrate-bn-statistics * fix mixins * fix mixins * fix mixin tests * remove slimmable channel mutable and refactor dynamic op * refact dynamic batch norm * add progressive dynamic conv2d * add center crop dynamic conv2d * refactor dynamic directory * refactor dynamic sequential * rename length to depth in dynamic sequential * add test for derived mutable * refactor dynamic op * refactor api of dynamic op * add derive mutable mixin * addbignas algorithm * refactor bignas structure * add input resizer * add input resizer to bignas * move input resizer from algorithm into classifier * remove compnents * add attentive mobilenet * delete json file * nearly(less 0.2) align inference accuracy with gml * move mutate seperated in bignas mobilenet backbone * add zero_init_residual * add set_dropout * set dropout in bignas algorithm * fix registry * add subnet yaml and nearly align inference accuracy with gml * add rsb config for bignas * remove base in config * add gml bignas config * convert to iter based * bignas forward and backward fly * fix merge conflict * fix dynamicseq bug * fix bug and refactor bignas * arrange configs of bignas * fix typo * refactor attentive_mobilenet * fix channel mismatch due to registion of DerivedMutable * update bignas & fix se channel mismatch * add AutoAugmentV2 & remove unness configs * fix lint * recover channel assertion in channel unit * fix a group bug * fix comments * add docstring * add norm in dynamic_embed * fix search loop & other minor changes * fix se expansion * minor change * add ut for bignas & attentive_mobilenet * fix ut * update bignas readme * rm unness ut & supplement get_placeholder * fix lint * fix ut * add subnet deployment in downstream tasks. * minor change * update ofa backbone * minor fix * Continued improvements of searchable backbone * minor change * drop ratio in backbone * fix comments * fix ci test * fix test * add dynamic shortcut UT * modify strategy to fit bignas * fix test * fix bug in neck * fix error * fix error * fix yaml * save subnet ckpt * merge autoslim_val/test_loop into subnet_val_loop * move calibrate_bn_mixin to utils * fix bugs and add docstring * clean code * fix register bug * clean code * update Co-authored-by: wangshiguang <wangshiguang@sensetime.com> Co-authored-by: gaoyang07 <1546308416@qq.com> Co-authored-by: aptsunny <aptsunny@tongji.edu.cn> Co-authored-by: sunyue1 <sunyue1@sensetime.com> * [Bug] Fix ckpt (#372) fix ckpt * [Feature] Add tools to convert distill ckpt to student-only ckpt. (#381) * [Feature] Add tools to convert distill ckpt to student-only ckpt. * fix bug. * add --model-only to only save model. * Make changes accroding to PR review. * Enhance the Abilities of the Tracer for Pruning. (#371) * tmp * add new mmdet models * add docstring * pass test and pre-commit * rm razor tracer * update fx tracer, now it can automatically wrap methods and functions. 
* update tracer passed models * add warning for torch <1.12.0 fix bug for python3.6 update placeholder to support placeholder.XXX * fix bug * update docs * fix lint * fix parse_cfg in configs * restore mutablechannel * test ite prune algorithm when using dist * add get_model_from_path to MMModelLibrrary * add mm models to DefaultModelLibrary * add uts * fix bug * fix bug * add uts * add uts * add uts * add uts * fix bug * restore ite_prune_algorithm * update doc * PruneTracer -> ChannelAnalyzer * prune_tracer -> channel_analyzer * add test for fxtracer * fix bug * fix bug * PruneTracer -> ChannelAnalyzer refine * CustomFxTracer -> MMFxTracer * fix bug when test with torch<1.12 * update print log * fix lint * rm unuseful code Co-authored-by: liukai <liukai@pjlab.org.cn> Co-authored-by: jacky <jacky@xx.com> Co-authored-by: Your Name <you@example.com> Co-authored-by: liukai <your_email@abc.example> * fix bug in placer holder (#395) * fix bug in placer holder * remove redundent comment Co-authored-by: liukai <your_email@abc.example> * Add get_prune_config and a demo config_pruning (#389) * update tools and test * add demo * disable test doc * add switch for test tools and test_doc * fix bug * update doc * update tools name * mv get_channel_units Co-authored-by: liukai <your_email@abc.example> * [Improvement] Adapt OFA series with SearchableMobileNetV3 (#385) * fix mutable bug in AttentiveMobileNetV3 * remove unness code * update ATTENTIVE_SUBNET_A0-A6.yaml with optimized names * unify the sampling usage in sandwich_rule-based NAS * use alias to export subnet * update OFA configs * fix attr bug * fix comments * update convert_supernet2subnet.py * correct the way to dump DerivedMutable * fix convert index bug * update OFA configs & models * fix dynamic2static * generalize convert_ofa_ckpt.py * update input_resizer * update README.md * fix ut * update export_fix_subnet * update _dynamic_to_static * update fix_subnet UT & minor fix bugs * fix ut * add new autoaug compared to attentivenas * clean * fix act * fix act_cfg * update fix_subnet * fix lint * add docstring Co-authored-by: gaoyang07 <1546308416@qq.com> Co-authored-by: aptsunny <aptsunny@tongji.edu.cn> * [Fix]Dcff Deploy Revision (#383) * dcff deploy revision * tempsave * update fix_subnet * update mutator load * export/load_fix_subnet revision for mutator * update fix_subnet with dev-1.x * update comments * update docs * update registry * [Fix] Fix commands in README to adapt branch 1.x (#400) * update commands in README for 1.x * fix commands Co-authored-by: gaoyang07 <1546308416@qq.com> * Set requires_grad to False if the teacher is not trainable (#398) * add choice and mask of units to checkpoint (#397) * add choice and mask of units to checkpoint * update * fix bug * remove device operation * fix bug * fix circle ci error * fix error in numpy for circle ci * fix bug in requirements * restore * add a note * a new solution * save mutable_channel.mask as float for dist training * refine * mv meta file test Co-authored-by: liukai <your_email@abc.example> Co-authored-by: jacky <jacky@xx.com> * [Bug]Fix fpn teacher distill (#388) fix fpn distill * [CodeCamp #122] Support KD algorithm MGD for detection. (#377) * [Feature] Support KD algorithm MGD for detection. * use connector to beauty mgd. * fix typo, add unitest. * fix mgd loss unitest. * fix mgd connector unitest. * add model pth and log file. * add mAP. 
* update l1 config (#405) * add l1 config * update l1 config Co-authored-by: jacky <jacky@xx.com> * [Feature] Add greedy search for AutoSlim (#336) * WIP: add greedysearch * fix greedy search and add bn_training_mode to autoslim * fix cfg files * fix autoslim configs * fix bugs when converting dynamic bn to static bn * change to test loop * refactor greedy search * rebase and fix greedysearch * fix lint * fix and delete useless codes * fix pytest * fix pytest and add bn_training_mode * fix lint * add reference to AutoSlimGreedySearchLoop's docstring * sort candidate_choices * fix save subnet * delete useless codes in channel container * change files' name: convert greedy_search_loop to autoslim_greedy_search_loop * [Fix] Fix metafile (#422) * fix ckpt path in metafile and readme * fix darts file path * fix docstring in ConfigurableDistiller * fix darts * fix error * add darts of mmrazor version * delete py36 Co-authored-by: liukai <your_email@abc.example> * update bignas cfg (#412) * check attentivenas training * update ckpt link * update supernet log Co-authored-by: aptsunny <aptsunny@tongji.edu.cn> * Bump version to 1.0.0rc2 (#423) bump version to 1.0.0rc2 Co-authored-by: liukai <your_email@abc.example> * fix lint * fix ci * add tmp docstring for passed ci * add tmp docstring for passed ci * fix ci * add get_placeholder for quant * add skip for unittest * fix package placeholder bug * add version judgement in __init__ * update prev commit * update prev commit * update prev commit * update prev commit * update prev commit * update prev commit * update prev commit * update prev commit * update prev commit Co-authored-by: LKJacky <108643365+LKJacky@users.noreply.github.com> Co-authored-by: liukai <liukai@pjlab.org.cn> Co-authored-by: Yang Gao <Gary1546308416AL@gmail.com> Co-authored-by: kitecats <90194592+kitecats@users.noreply.github.com> Co-authored-by: Sheffield <49406546+SheffieldCao@users.noreply.github.com> Co-authored-by: whcao <41630003+HIT-cwh@users.noreply.github.com> Co-authored-by: jacky <jacky@xx.com> Co-authored-by: pppppM <67539920+pppppM@users.noreply.github.com> Co-authored-by: Yue Sun <aptsunny@tongji.edu.cn> Co-authored-by: zengyi <31244134+spynccat@users.noreply.github.com> Co-authored-by: zengyi.vendor <zengyi.vendor@sensetime.com> Co-authored-by: zhongyu zhang <43191879+wilxy@users.noreply.github.com> Co-authored-by: zhangzhongyu <zhangzhongyu@pjlab.org.cn> Co-authored-by: Xianpan Zhou <32625100+TinyTigerPan@users.noreply.github.com> Co-authored-by: Xianpan Zhou <32625100+PanDaMeow@users.noreply.github.com> Co-authored-by: Your Name <you@example.com> Co-authored-by: P.Huang <37200926+FreakieHuang@users.noreply.github.com> Co-authored-by: qiufeng <44188071+wutongshenqiu@users.noreply.github.com> Co-authored-by: wangshiguang <wangshiguang@sensetime.com> Co-authored-by: gaoyang07 <1546308416@qq.com> Co-authored-by: sunyue1 <sunyue1@sensetime.com> Co-authored-by: liukai <your_email@abc.example> Co-authored-by: Ming-Hsuan-Tu <qrnnis2623891@gmail.com> Co-authored-by: Yivona <120088893+yivona08@users.noreply.github.com> Co-authored-by: Yue Sun <aptsunny@alumni.tongji.edu.cn> * [Docs] Add docstring and unittest about backendconfig & observer & fakequant (#428) * add ut about backendconfig * add ut about observers and fakequants in torch * fix torch1.13 ci * [Docs] Add docstring for `MMArchitectureQuant` & `NativeQuantizer` (#425) * add docstring on mm_architecture& 
native_quantizer * add naive openvino r18 qat config & dist_ptq.sh * Added a more accurate description * unitest&doc * checkpoint url * unitest * passed_pre_commit * unitest on native_quantizer& fix bugs * remove dist_ptq * add get_placeholder&skipTest * complete arg descriptions * fix import bugs * fix pre-commit * add get_placeholder * add typehint and doctring * update docstring&typehint * update docstring * pre-commit * fix some problems * fix bug * [Docs] Add docstring and unitest about custom tracer (#427) * rename QConfigHandler and QSchemeHandler * add docstring about custom tracer * add ut about custom tracer * fix torch1.13 ci * fix lint * fix ci * fix ci * [Docs & Refactor] Add docstring and UT of other quantizers (#439) * add quantizer docstring and refactor the interface of AcademicQuantizer * add AcademicQuantizer unittest * add TensorRTQuantizer and OpenVINOQuantizer unittest & refactor prepare interface * adapt torch113 ci * fix import * fix lint * update some docstring * fix ci * [Feature&Doc]Modify ptq pipeline and support lsq (#435) * modify ptq pipeline and support lsq * use placeholder * fix lsq && quantloop * add lsq pytest * add quant loop pytest * test lsq observer * fix bug under pt13 * fix reset_min_max_vals * fix bugs under pt13 * fix configs * add get_qconfig_mapping * delete is_qat, add doc and fix pytest * delete useless codes in custom_tracer * skip pytest under pt13 * add todo: check freezebn * fix pytest bugs * fix pytest * fix pytest * fix pytest * [Docs] Add customize_quantization_tutorial (#440) * [Docs] Add quantization user guide (#441) * add quantization user guide * fix layout * fix layout * update README * [Bug] Fix del redundant fakequant (#447) fix del redundant fakequant * [Feature] Add onnx exporters (#475) * fix del redundant fakequant * add onnx exporters * fix onnx exporters and add docstring * fix comments * delete useless codes * fix export_onnx in native quantizer --------- Co-authored-by: pppppM <gjf_mail@126.com> * [Feature]Rewrite the origin model during prepare (#488) * add rewriter * add deploy_cfg arg * modify post_process_for_mmdeploy * fix bugs * add det config * [Feature] Using rewriter in mmrazor when building qmodels. 
(#490) * add rewriter * add deploy_cfg arg * modify post_process_for_mmdeploy * fix bugs * add det config * replace deepcopy * pop detectors' forward * [Feature] Quantization global optimization (#491) * add trtquantizer * unify all fakequant before deploy * move to aide * add yolox config * pre-rebase * add unittest * add a arg of post_process_for_deploy * test trt yolox deploy * opt quantizer interface * fix rebase * add trt r50 config * update trt setting * del redundant code * fix lint * fix ut of quantizers * del redundant file * fix lint * fix some comments * Fix code syntax in UT (#470) Co-authored-by: 王盟 <unicorn@MacBook-Pro.local> * passed lint and pytest * try to fix ci * [Bug] Try to fix CI (#502) fix lint * [Feature] Support lsq (#501) * support deploy_cfg=None * replace fakequant before load ckpt * add _load_from_state_dict to lsq fakequant * fix pre-commit * test lsq load state dict * change github ci: ubuntu 18.04 to ubuntu 20.04 * get_deploy_model order change back * sync before save ckpt * delete strict=False * test context rewriter * fix pre commit config * try to fix ci * [Bug] Try to fix CI (#502) fix lint --------- Co-authored-by: humu789 <humu@pjlab.org.cn> Co-authored-by: humu789 <88702197+humu789@users.noreply.github.com> * [Feature] Add exporter pytest (#504) * add exporter pytest * fix bugs * delete useless codes * handle onnx * delete useless codes * [Bug] Fix ci converage setting (#508) fix ci converage * [Bug] Fix codecov (#509) * remove codecov in requirements * try to fix ci * del adaround loss * [BUG] Fix quantization loop (#507) * fix quantization loop * fix quant loop * fix quant loop * fix qat configs * [Bug] Fix ci converage setting (#508) fix ci converage * [Bug] Fix codecov (#509) * remove codecov in requirements * try to fix ci * del adaround loss * add freeze_bn_begin to lsq * delete useless codes --------- Co-authored-by: humu789 <88702197+humu789@users.noreply.github.com> * add test ptq * opt ptq pipeline * refactor quant configs * update config path * add summary analyse tool * fix benchmark_test:detnas_frcnn_shufflenet_subnet_coco_1x.py * update quantization README.md * update quantization metafile, readme, config path * update quantization docs * update git main link in workflow * update benchmark_summary_analyse.py * del dmcp results * [Bug] fix a rebase error (#514) fix a rebase error * [Bug] Fix CI (#515) * fix ci * mmcv2.0 need torch1.8+ * Update CI config and Passed (#516) * test ci * update test.yml based on mmcv2.0.0 * [Docs] Fix cwd test accuary (#517) * test ci * update test.yml based on mmcv2.0.0 * update cwd_logits_pspnet result --------- Co-authored-by: P.Huang <37200926+FreakieHuang@users.noreply.github.com> Co-authored-by: huangpengsheng <huangpengsheng@sensetime.com> Co-authored-by: LKJacky <108643365+LKJacky@users.noreply.github.com> Co-authored-by: liukai <liukai@pjlab.org.cn> Co-authored-by: Yang Gao <Gary1546308416AL@gmail.com> Co-authored-by: kitecats <90194592+kitecats@users.noreply.github.com> Co-authored-by: Sheffield <49406546+SheffieldCao@users.noreply.github.com> Co-authored-by: whcao <41630003+HIT-cwh@users.noreply.github.com> Co-authored-by: jacky <jacky@xx.com> Co-authored-by: pppppM <67539920+pppppM@users.noreply.github.com> Co-authored-by: FreakieHuang <frank0huang@foxmail.com> Co-authored-by: pppppM <gjf_mail@126.com> Co-authored-by: L-Icarus <30308843+L-Icarus@users.noreply.github.com> Co-authored-by: HIT-cwh <2892770585@qq.com> 
Co-authored-by: Yue Sun <aptsunny@tongji.edu.cn> Co-authored-by: zengyi <31244134+spynccat@users.noreply.github.com> Co-authored-by: zengyi.vendor <zengyi.vendor@sensetime.com> Co-authored-by: zhongyu zhang <43191879+wilxy@users.noreply.github.com> Co-authored-by: zhangzhongyu <zhangzhongyu@pjlab.org.cn> Co-authored-by: Xianpan Zhou <32625100+TinyTigerPan@users.noreply.github.com> Co-authored-by: Xianpan Zhou <32625100+PanDaMeow@users.noreply.github.com> Co-authored-by: Your Name <you@example.com> Co-authored-by: qiufeng <44188071+wutongshenqiu@users.noreply.github.com> Co-authored-by: wangshiguang <wangshiguang@sensetime.com> Co-authored-by: gaoyang07 <1546308416@qq.com> Co-authored-by: sunyue1 <sunyue1@sensetime.com> Co-authored-by: liukai <your_email@abc.example> Co-authored-by: Ming-Hsuan-Tu <qrnnis2623891@gmail.com> Co-authored-by: Yivona <120088893+yivona08@users.noreply.github.com> Co-authored-by: Yue Sun <aptsunny@alumni.tongji.edu.cn> Co-authored-by: Ivan Zhang <51170394+415905716@users.noreply.github.com> Co-authored-by: wm901115nwpu <wmnwpu@gmail.com> Co-authored-by: 王盟 <unicorn@MacBook-Pro.local>
1 parent 677434e commit 9166381

113 files changed: +10084 -105 lines changed

.circleci/test.yml

+9 -9

@@ -103,9 +103,9 @@ jobs:
           name: Clone Repos
           command: |
             git clone -b main --depth 1 https://github.com/open-mmlab/mmengine.git /home/circleci/mmengine
-            git clone -b dev-3.x --depth 1 https://github.com/open-mmlab/mmdetection.git /home/circleci/mmdetection
-            git clone -b dev-1.x --depth 1 https://github.com/open-mmlab/mmclassification.git /home/circleci/mmclassification
-            git clone -b dev-1.x --depth 1 https://github.com/open-mmlab/mmsegmentation.git /home/circleci/mmsegmentation
+            git clone -b main --depth 1 https://github.com/open-mmlab/mmdetection.git /home/circleci/mmdetection
+            git clone -b 1.x --depth 1 https://github.com/open-mmlab/mmclassification.git /home/circleci/mmclassification
+            git clone -b main --depth 1 https://github.com/open-mmlab/mmsegmentation.git /home/circleci/mmsegmentation
       - run:
           name: Build Docker image
           command: |
@@ -153,15 +153,15 @@ workflows:
             - dev-1.x
       - build_cpu:
           name: minimum_version_cpu
-          torch: 1.6.0
-          torchvision: 0.7.0
-          python: 3.7.9
+          torch: 1.8.1
+          torchvision: 0.9.1
+          python: 3.7.4
           requires:
             - lint
       - build_cpu:
           name: maximum_version_cpu
-          torch: 1.12.1
-          torchvision: 0.13.1
+          torch: 1.13.1
+          torchvision: 0.14.1
           python: 3.9.0
           requires:
             - lint
@@ -183,7 +183,7 @@ workflows:
   jobs:
     - build_cuda:
         name: minimum_version_gpu
-        torch: 1.6.0
+        torch: 1.8.1
         # Use double quotation mark to explicitly specify its type
         # as string instead of number
         cuda: "10.1"

.dev_scripts/benchmark_summary_analyse.py

+67

@@ -0,0 +1,67 @@
+import argparse
+import os
+
+import mmengine
+
+
+def parse_args():
+    parser = argparse.ArgumentParser(
+        description='Analyse summary.yml generated by benchmark test')
+    parser.add_argument('file_path', help='Summary.yml path')
+    args = parser.parse_args()
+    return args
+
+
+metric_mapping = {
+    'Top 1 Accuracy': 'accuracy/top1',
+    'Top 5 Accuracy': 'accuracy/top5',
+    'box AP': 'coco/bbox_mAP',
+    'mIoU': 'mIoU'
+}
+
+
+def compare_metric(result, metric):
+    expect_val = result['expect'][metric]
+    actual_val = result['actual'].get(metric_mapping[metric], None)
+    if actual_val is None:
+        return None, None
+    if metric == 'box AP':
+        actual_val *= 100
+    decimal_bit = len(str(expect_val).split('.')[-1])
+    actual_val = round(actual_val, decimal_bit)
+    error = round(actual_val - expect_val, decimal_bit)
+    error_percent = round(abs(error) * 100 / expect_val, 3)
+    return error, error_percent
+
+
+def main():
+    args = parse_args()
+    file_path = args.file_path
+    results = mmengine.load(file_path, 'yml')
+    miss_models = dict()
+    sort_by_error = dict()
+    for k, v in results.items():
+        valid_keys = v['expect'].keys()
+        compare_res = dict()
+        for m in valid_keys:
+            error, error_percent = compare_metric(v, m)
+            if error is None:
+                continue
+            compare_res[m] = {'error': error, 'error_percent': error_percent}
+            if error != 0:
+                miss_models[k] = compare_res
+                sort_by_error[k] = error
+    sort_by_error = sorted(
+        sort_by_error.items(), key=lambda x: abs(x[1]), reverse=True)
+    miss_models_sort = dict()
+    miss_models_sort['total error models'] = len(sort_by_error)
+    for k_v in sort_by_error:
+        index = k_v[0]
+        miss_models_sort[index] = miss_models[index]
+    save_path = os.path.join(os.path.dirname(file_path), 'summary_error.yml')
+    mmengine.fileio.dump(miss_models_sort, save_path, sort_keys=False)
+    print(f'Summary analysis result saved in {save_path}')
+
+
+if __name__ == '__main__':
+    main()
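A quick, hypothetical illustration of how compare_metric handles a detection entry (the summary.yml values below are made up, not taken from the benchmark): the function rescales 'box AP' (logged as coco/bbox_mAP on a 0-1 scale) to percent and rounds it to the precision of the expected value before computing the error.

# Using compare_metric() and metric_mapping from the script above.
result = {
    'expect': {'box AP': 37.5},          # value recorded in the metafile
    'actual': {'coco/bbox_mAP': 0.373},  # value parsed from the benchmark log
}
error, error_percent = compare_metric(result, 'box AP')
print(error, error_percent)  # -0.2 0.533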

.dev_scripts/benchmark_test.py

+3 -2

@@ -24,9 +24,9 @@
 def parse_args():
     parser = argparse.ArgumentParser(
         description="Test all models' accuracy in model-index.yml")
-    parser.add_argument(
-        'partition', type=str, help='Cluster partition to use.')
     parser.add_argument('checkpoint_root', help='Checkpoint file root path.')
+    parser.add_argument(
+        '--partition', type=str, help='Cluster partition to use.')
     parser.add_argument(
         '--job-name',
         type=str,
@@ -148,6 +148,7 @@ def create_test_job_batch(commands, model_info, args, port):
     if exists:
         print(f'{checkpoint} already exists.')
     else:
+        print(f'start downloading {fname}')
         wget.download(model_info.weights, str(checkpoint))
         print(f'\nSaved in {checkpoint}.')
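In practice the reordered arguments mean that the checkpoint root stays a required positional argument while the cluster partition becomes an optional flag. A minimal sketch of the resulting behaviour (parser settings copied from the hunk above; the argument values are illustrative only):

import argparse

parser = argparse.ArgumentParser(
    description="Test all models' accuracy in model-index.yml")
parser.add_argument('checkpoint_root', help='Checkpoint file root path.')
parser.add_argument(
    '--partition', type=str, help='Cluster partition to use.')

# Without --partition, the attribute simply defaults to None.
args = parser.parse_args(['work_dirs/ckpt'])
assert args.partition is None
# An explicit partition can still be passed when running on a cluster.
args = parser.parse_args(['work_dirs/ckpt', '--partition', 'mm_model'])
assert args.partition == 'mm_model'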

.github/workflows/build.yml

+7 -17

@@ -25,22 +25,12 @@ concurrency:

 jobs:
   test_linux:
-    runs-on: ubuntu-18.04
+    runs-on: ubuntu-20.04
     strategy:
       matrix:
         python-version: [3.7]
-        torch: [1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.11.0, 1.12.0, 1.13.0]
+        torch: [1.8.0, 1.9.0, 1.10.0, 1.11.0, 1.12.0, 1.13.0]
         include:
-          - torch: 1.6.0
-            torch_version: 1.6
-            torchvision: 0.7.0
-          - torch: 1.7.0
-            torch_version: 1.7
-            torchvision: 0.8.1
-          - torch: 1.7.0
-            torch_version: 1.7
-            torchvision: 0.8.1
-            python-version: 3.8
           - torch: 1.8.0
             torch_version: 1.8
             torchvision: 0.9.0
@@ -103,11 +93,11 @@ jobs:
         pip install -U openmim
         mim install 'mmcv >= 2.0.0rc1'
       - name: Install MMCls
-        run: pip install git+https://github.com/open-mmlab/mmclassification.git@dev-1.x
+        run: pip install 'mmcls>=1.0.0rc0'
       - name: Install MMDet
-        run: pip install git+https://github.com/open-mmlab/mmdetection.git@dev-3.x
+        run: pip install git+https://github.com/open-mmlab/mmdetection.git@main
       - name: Install MMSeg
-        run: pip install git+https://github.com/open-mmlab/mmsegmentation.git@dev-1.x
+        run: pip install git+https://github.com/open-mmlab/mmsegmentation.git@main
       - name: Install other dependencies
         run: pip install -r requirements.txt
       - name: Build and install
@@ -119,8 +109,8 @@ jobs:
         coverage report -m
         # Upload coverage report for python3.8 && pytorch1.12.0 cpu
       - name: Upload coverage to Codecov
-        if: ${{matrix.torch == '1.12.0' && matrix.python-version == '3.8'}}
-        uses: codecov/codecov-action@v2
+        if: ${{matrix.torch == '1.13.0' && matrix.python-version == '3.8'}}
+        uses: codecov/codecov-action@v3
       with:
         file: ./coverage.xml
         flags: unittests

README.md

+7 -7

@@ -21,7 +21,7 @@
 <!--算法库 Badges-->

 [![PyPI](https://img.shields.io/pypi/v/mmrazor)](https://pypi.org/project/mmrazor)
-[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmrazor.readthedocs.io/en/dev-1.x/)
+[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmrazor.readthedocs.io/en/quantize/)
 [![badge](https://github.com/open-mmlab/mmrazor/workflows/build/badge.svg)](https://github.com/open-mmlab/mmrazor/actions)
 [![codecov](https://codecov.io/gh/open-mmlab/mmrazor/branch/master/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmrazor)
 [![license](https://img.shields.io/github/license/open-mmlab/mmrazor.svg)](https://github.com/open-mmlab/mmrazor/blob/master/LICENSE)
@@ -32,9 +32,9 @@

 <!--Note:请根据各算法库自身情况设置项目和链接-->

-[📘Documentation](https://mmrazor.readthedocs.io/en/dev-1.x/) |
-[🛠️Installation](https://mmrazor.readthedocs.io/en/dev-1.x/get_started/installation.html) |
-[👀Model Zoo](https://mmrazor.readthedocs.io/en/dev-1.x/get_started/model_zoo.html) |
+[📘Documentation](https://mmrazor.readthedocs.io/en/quantize/) |
+[🛠️Installation](https://mmrazor.readthedocs.io/en/quantize/get_started/installation.html) |
+[👀Model Zoo](https://mmrazor.readthedocs.io/en/quantize/get_started/model_zoo.html) |
 [🤔Reporting Issues](https://github.com/open-mmlab/mmrazor/issues/new/choose)

 </div>
@@ -68,7 +68,7 @@ MMRazor is a model compression toolkit for model slimming and AutoML, which incl
 - Neural Architecture Search (NAS)
 - Pruning
 - Knowledge Distillation (KD)
-- Quantization (come soon)
+- Quantization

 It is a part of the [OpenMMLab](https://openmmlab.com/) project.

@@ -86,7 +86,7 @@ Major features:

 With better modular design, developers can implement new model compression algorithms with only a few codes, or even by simply modifying config files.

-Below is an overview of MMRazor's design and implementation, please refer to [tutorials](https://mmrazor.readthedocs.io/en/dev-1.x/get_started/overview.html) for more details.
+Below is an overview of MMRazor's design and implementation, please refer to [tutorials](https://mmrazor.readthedocs.io/en/quantize/get_started/overview.html) for more details.

 <div align="center">
 <img src="resources/design_and_implement.png" style="zoom:100%"/>
@@ -164,7 +164,7 @@ Please refer to [installation.md](/docs/en/get_started/installation.md) for more

 ## Getting Started

-Please refer to [user guides](https://mmrazor.readthedocs.io/en/dev-1.x/user_guides/index.html) for the basic usage of MMRazor. There are also [advanced guides](https://mmrazor.readthedocs.io/en/dev-1.x/advanced_guides/index.html):
+Please refer to [user guides](https://mmrazor.readthedocs.io/en/quantize/user_guides/index.html) for the basic usage of MMRazor. There are also [advanced guides](https://mmrazor.readthedocs.io/en/quantize/advanced_guides/index.html):

 ## Contributing

configs/distill/mmcls/ofd/README.md

+7 -7

@@ -22,16 +22,16 @@ We investigate the design aspects of feature distillation methods achieving netw

 #### Vanilla

-| Dataset | Model | Top-1 (%) | Top-5 (%) | Download |
-| ------- | ----- | --------- | --------- | -------- |
-| CIFAR10 | [WRN16-2](../../../vanilla/mmcls/wide-resnet/wrn16-w2_b16x8_cifar10.py) | 93.43 | 99.75 | [model](https://download.openmmlab.com/mmrazor/v1/wide_resnet/wrn16_2_b16x8_cifar10_20220831_204709-446b466e.pth) \| [log](https://download.openmmlab.com/mmrazor/v1/wide_resnet/wrn16_2_b16x8_cifar10_20220831_204709-446b466e.json) |
-| CIFAR10 | [WRN28-4](../../../vanilla/mmcls/wide-resnet/wrn28-w4_b16x8_cifar10.py) | 95.49 | 99.81 | [model](https://download.openmmlab.com/mmrazor/v1/wide_resnet/wrn28_4_b16x8_cifar10_20220831_173536-d6f8725c.pth) \| [log](https://download.openmmlab.com/mmrazor/v1/wide_resnet/wrn28_4_b16x8_cifar10_20220831_173536-d6f8725c.json) |
+| Dataset | Model | Top-1 (%) | Download |
+| ------- | ----- | --------- | -------- |
+| CIFAR10 | [WRN16-2](../../../vanilla/mmcls/wide-resnet/wrn16-w2_b16x8_cifar10.py) | 93.43 | [model](https://download.openmmlab.com/mmrazor/v1/wide_resnet/wrn16_2_b16x8_cifar10_20220831_204709-446b466e.pth) \| [log](https://download.openmmlab.com/mmrazor/v1/wide_resnet/wrn16_2_b16x8_cifar10_20220831_204709-446b466e.json) |
+| CIFAR10 | [WRN28-4](../../../vanilla/mmcls/wide-resnet/wrn28-w4_b16x8_cifar10.py) | 95.49 | [model](https://download.openmmlab.com/mmrazor/v1/wide_resnet/wrn28_4_b16x8_cifar10_20220831_173536-d6f8725c.pth) \| [log](https://download.openmmlab.com/mmrazor/v1/wide_resnet/wrn28_4_b16x8_cifar10_20220831_173536-d6f8725c.json) |

 #### Distillation

-| Dataset | Model | Flops(M) | Teacher | Top-1 (%) | Top-5 (%) | Configs | Download |
-| ------- | ----- | -------- | ------- | --------- | --------- | ------- | -------- |
-| CIFAR10 | WRN16-2 | 101 | WRN28-4 | 95.23 | 99.79 | [config](./ofd_backbone_resnet50_resnet18_8xb16_cifar10.py) | [model](https://download.openmmlab.com/mmrazor/v1/overhaul/ofd_backbone_resnet50_resnet18_8xb16_cifar10_20220831_220553-f5d12e61.pth) \| [log](https://download.openmmlab.com/mmrazor/v1/overhaul/ofd_backbone_resnet50_resnet18_8xb16_cifar10_20220831_220553-f5d12e61.json) |
+| Dataset | Model | Flops(M) | Teacher | Top-1 (%) | Configs | Download |
+| ------- | ----- | -------- | ------- | --------- | ------- | -------- |
+| CIFAR10 | WRN16-2 | 101 | WRN28-4 | 94.21 | [config](./ofd_backbone_resnet50_resnet18_8xb16_cifar10.py) | [model](https://download.openmmlab.com/mmrazor/v1/overhaul/ofd_backbone_resnet50_resnet18_8xb16_cifar10_20230417_192216-ace2908f.pth) \| [log](https://download.openmmlab.com/mmrazor/v1/overhaul/ofd_backbone_resnet50_resnet18_8xb16_cifar10_20230417_192216-ace2908f.log) |

 ## Getting Started

configs/distill/mmcls/ofd/metafile.yml

+2 -2

@@ -33,6 +33,6 @@ Models:
     - Task: Image Classification
       Dataset: CIFAR-10
       Metrics:
-        Top 1 Accuracy: 95.4400
+        Top 1 Accuracy: 94.21
     Config: configs/distill/mmcls/ofd/ofd_backbone_resnet50_resnet18_8xb16_cifar10.py
-    Weights: https://download.openmmlab.com/mmrazor/v1/overhaul/ofd_backbone_resnet50_resnet18_8xb16_cifar10_20220831_220553-f5d12e61.pth
+    Weights: https://download.openmmlab.com/mmrazor/v1/overhaul/ofd_backbone_resnet50_resnet18_8xb16_cifar10_20230417_192216-ace2908f.pth

configs/nas/mmcls/darts/darts_subnet_1xb96_cifar10_2.0.py

+1 -1

@@ -37,7 +37,7 @@
     init_cfg=dict(
         type='Pretrained',
         checkpoint= # noqa: E251
-        'https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmrazor/v1/darts/darts_subnetnet_1xb96_cifar10_acc-97.27_20211222-17e42600_latest.pth', # noqa: E501
+        'https://download.openmmlab.com/mmrazor/v1/darts/darts_subnetnet_1xb96_cifar10_acc-97.27_20211222-17e42600_latest.pth', # noqa: E501
         prefix='architecture.'))

 model_wrapper_cfg = None

configs/nas/mmcls/darts/metafile.yml

+1 -1

@@ -25,4 +25,4 @@ Models:
       Top 1 Accuracy: 97.32
       Top 5 Accuracy: 99.94
     Config: configs/nas/mmcls/darts/darts_subnet_1xb96_cifar10_2.0.py
-    Weights: https://download.openmmlab.com/mmrazor/v1/darts/darts_subnetnet_1xb96_cifar10_acc-97.32_20211222-23ca1e10.pth
+    Weights: https://download.openmmlab.com/mmrazor/v1/darts/darts_subnetnet_1xb96_cifar10_acc-97.32_20211222-e5727921_latest.pth

configs/nas/mmdet/detnas/detnas_frcnn_shufflenet_subnet_coco_1x.py

+1 -1

@@ -9,7 +9,7 @@
     init_cfg=dict(
         type='Pretrained',
         checkpoint= # noqa: E251
-        'detnas_subnet_frcnn_shufflenetv2_fpn_1x_coco_bbox_backbone_flops-0.34M_mAP-37.5_20220715-61d2e900_v1.pth', # noqa: E501
+        'https://download.openmmlab.com/mmrazor/v1/detnas/detnas_subnet_frcnn_shufflenetv2_fpn_1x_coco_bbox_backbone_flops-0.34M_mAP-37.5_20220715-61d2e900_v1.pth', # noqa: E501
         prefix='architecture.'))

 find_unused_parameters = False
