
No module named 'mmselfsup.engine.runner' #745

Open
Syed05 opened this issue Apr 7, 2023 · 11 comments
@Syed05

Syed05 commented Apr 7, 2023

After I run the command below:

```
python tools/train.py configs/selfsup/base/mae_vit-base-p16_Test.py
```

I get the error below:

```
  return _bootstrap._gcd_import(name[level:], package, level)
  File "", line 1014, in _gcd_import
  File "", line 991, in _find_and_load
  File "", line 973, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'mmselfsup.engine.runner'
```

How can I solve it?

Thanks.

My config file is:

```python
_base_ = [
    '../_base_/models/mae_vit-base-p16.py',
    '../_base_/datasets/imagenet_mae.py',
    '../_base_/schedules/adamw_coslr-200e_in1k.py',
    '../_base_/default_runtime.py',
]

train_dataloader = dict(batch_size=2, num_workers=2)

optimizer = dict(
    type='AdamW', lr=1.5e-4 * 4096 / 256, betas=(0.9, 0.95), weight_decay=0.05)
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=optimizer,
    paramwise_cfg=dict(
        custom_keys={
            'ln': dict(decay_mult=0.0),
            'bias': dict(decay_mult=0.0),
            'pos_embed': dict(decay_mult=0.),
            'mask_token': dict(decay_mult=0.),
            'cls_token': dict(decay_mult=0.)
        }))

param_scheduler = [
    dict(
        type='LinearLR',
        start_factor=1e-4,
        by_epoch=True,
        begin=0,
        end=40,
        convert_to_iter_based=True),
    dict(
        type='CosineAnnealingLR',
        T_max=360,
        by_epoch=True,
        begin=40,
        end=400,
        convert_to_iter_based=True)
]

train_cfg = dict(max_epochs=400)
default_hooks = dict(
    logger=dict(type='LoggerHook', interval=100),
    # only keeps the latest 3 checkpoints
    checkpoint=dict(type='CheckpointHook', interval=1, max_keep_ckpts=3))

randomness = dict(seed=0, diff_rank_seed=True)
resume = True
```
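
As a quick self-check, a minimal sketch like the following can confirm whether the installed copy of mmselfsup actually ships the subpackage named in the traceback (it assumes only the standard library and an importable mmselfsup):

```python
# Diagnostic sketch: print which copy of mmselfsup Python picks up and whether
# the `engine.runner` subpackage exists in it.
import importlib.util

import mmselfsup

print(mmselfsup.__version__, mmselfsup.__file__)
print(importlib.util.find_spec('mmselfsup.engine.runner'))  # None -> subpackage missing
```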

@HelloSZS

HelloSZS commented Apr 9, 2023

I encountered this problem too.

@HelloSZS

HelloSZS commented Apr 9, 2023

I switched to 1.0.0rc6 and reinstalled, and the problem was fixed.

@fangyixiao18
Collaborator

> I switched to 1.0.0rc6 and reinstalled, and the problem was fixed.

The PR #746 will fix this problem.

@1234532314342

I also encountered the same problem; changing the version did not help. How can I solve it?

@HelloSZS

> I switched to 1.0.0rc6 and reinstalled, and the problem was fixed.
>
> The PR #746 will fix this problem.

I found that the contents of registry.py on the main branch
https://github.com/open-mmlab/mmselfsup/blob/main/mmselfsup/registry.py

are different from those on the v1.0.0rc6 branch
https://github.com/open-mmlab/mmselfsup/blob/v1.0.0rc6/mmselfsup/registry.py

@fangyixiao18
Collaborator

> > I switched to 1.0.0rc6 and reinstalled, and the problem was fixed.
> >
> > The PR #746 will fix this problem.
>
> I found that the contents of registry.py on the main branch https://github.com/open-mmlab/mmselfsup/blob/main/mmselfsup/registry.py are different from those on the v1.0.0rc6 branch https://github.com/open-mmlab/mmselfsup/blob/v1.0.0rc6/mmselfsup/registry.py

This is because mmengine updated some of the registry logic, so we need to update this part of the code.
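
For illustration, this is the kind of difference being described (the exact strings are an assumption, not taken from the actual diff): newer mmengine lets a Registry declare `locations` that it imports on demand, which surfaces as the `ModuleNotFoundError` above when the installed package does not ship that module, whereas the older style only delegates to the parent registry:

```python
# Illustrative sketch only; not the exact contents of either branch.
from mmengine.registry import RUNNERS as MMENGINE_RUNNERS
from mmengine.registry import Registry

# Newer style: mmengine imports the listed modules on demand, which raises
# ModuleNotFoundError if the installed mmselfsup does not ship them.
RUNNERS = Registry(
    'runner', parent=MMENGINE_RUNNERS, locations=['mmselfsup.engine.runner'])

# Older style: delegate to the parent registry only, no lazy locations.
# RUNNERS = Registry('runner', parent=MMENGINE_RUNNERS)
```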

@fangyixiao18
Collaborator

> I also encountered the same problem; changing the version did not help. How can I solve it?

Please `git pull` the main branch to update your code; the PR has been merged.

@Syed05
Author

Syed05 commented Apr 10, 2023

Thanks, after pulling the main branch it is working now. However, can anyone please let me know how to structure the custom data folder?

My folder structure is:

```
data
├── meta
│   └── train.txt   (contains the image paths)
└── train
    └── *.jpg images
```

When I run the config, I get the following error:

ValueError: class EpochBasedTrainLoop in mmengine/runner/loops.py: class CustomDataset in mmcls/datasets/custom.py: not enough values to unpack (expected 2, got 1)

My config file is:

```python
# dataset settings
dataset_type = 'mmcls.CustomDataset'
data_root = 'data/'
custom_imports = dict(imports='mmcls.datasets', allow_failed_imports=False)

view_pipeline = [
    dict(
        type='RandomResizedCrop', size=224, scale=(0.2, 1.), backend='pillow'),
    dict(
        type='RandomApply',
        transforms=[
            dict(
                type='ColorJitter',
                brightness=0.4,
                contrast=0.4,
                saturation=0.4,
                hue=0.1)
        ],
        prob=0.8),
    dict(
        type='RandomGrayscale',
        prob=0.2,
        keep_channels=True,
        channel_weights=(0.114, 0.587, 0.2989)),
    dict(type='RandomGaussianBlur', sigma_min=0.1, sigma_max=2.0, prob=0.5),
    dict(type='RandomFlip', prob=0.5),
]

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='MultiView', num_views=2, transforms=[view_pipeline]),
    dict(type='PackSelfSupInputs', meta_keys=['img_path'])
]

train_dataloader = dict(
    batch_size=32,
    num_workers=8,
    drop_last=True,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    collate_fn=dict(type='default_collate'),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='meta/train.txt',
        data_prefix=dict(img_path='train/'),
        pipeline=train_pipeline))

model = dict(
    type='MoCo',
    queue_len=65536,
    feat_dim=128,
    momentum=0.999,
    data_preprocessor=dict(
        mean=(123.675, 116.28, 103.53),
        std=(58.395, 57.12, 57.375),
        bgr_to_rgb=True),
    backbone=dict(
        type='ResNet',
        depth=50,
        in_channels=3,
        out_indices=[4],  # 0: conv-1, x: stage-x
        norm_cfg=dict(type='BN')),
    neck=dict(
        type='MoCoV2Neck',
        in_channels=2048,
        hid_channels=2048,
        out_channels=128,
        with_avg_pool=True),
    head=dict(
        type='ContrastiveHead',
        loss=dict(type='mmcls.CrossEntropyLoss'),
        temperature=0.2))

optimizer = dict(type='SGD', lr=0.03, weight_decay=1e-4, momentum=0.9)
optim_wrapper = dict(type='OptimWrapper', optimizer=optimizer)

param_scheduler = [
    dict(type='CosineAnnealingLR', T_max=200, by_epoch=True, begin=0, end=200)
]

train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=200)

default_scope = 'mmselfsup'

default_hooks = dict(
    runtime_info=dict(type='RuntimeInfoHook'),
    timer=dict(type='IterTimerHook'),
    logger=dict(type='LoggerHook', interval=50),
    param_scheduler=dict(type='ParamSchedulerHook'),
    checkpoint=dict(type='CheckpointHook', interval=10),
    sampler_seed=dict(type='DistSamplerSeedHook'),
)

env_cfg = dict(
    cudnn_benchmark=False,
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    dist_cfg=dict(backend='nccl'),
)

log_processor = dict(
    window_size=10,
    custom_cfg=[dict(data_src='', method='mean', windows_size='global')])

vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
    type='SelfSupVisualizer', vis_backends=vis_backends, name='visualizer')

log_level = 'INFO'
load_from = None
resume = False
```
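
A likely cause of the unpack error, stated here as an assumption rather than something confirmed in this thread: `mmcls.CustomDataset` reads `ann_file` as lines of "path label" pairs, so a `train.txt` that lists only image paths yields a single token per line, which matches "expected 2, got 1". Roughly:

```python
# Sketch of the assumed ann_file parsing that would produce the error above:
# each line of meta/train.txt is expected to look like "<relative/path> <label>".
with open('data/meta/train.txt') as f:  # path taken from the config above
    for line in f:
        # A path-only line yields one token here, i.e.
        # "not enough values to unpack (expected 2, got 1)".
        filename, gt_label = line.strip().rsplit(' ', 1)
```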

@kalikhademi

> Thanks, after pulling the main branch it is working now. However, can anyone please let me know how to structure the custom data folder? [...]
>
> ValueError: class EpochBasedTrainLoop in mmengine/runner/loops.py: class CustomDataset in mmcls/datasets/custom.py: not enough values to unpack (expected 2, got 1)

Did you find a solution for this? I am facing the same problem.

@leonidas-timefly

leonidas-timefly commented Jun 19, 2023

To sum up, the solution for this issue is:

Step 1: install mmselfsup version 1.0.0rc6:
`pip install 'mmselfsup>=1.0.0rc6'`

Step 2: go to the `mmselfsup/` directory and switch to the main branch:
`git branch` and `git checkout main`

Step 3: overwrite the old version with:
`git pull origin main`

Then this issue is solved.

However, if you still encounter this issue, try reinstalling mmselfsup from source first, then follow steps 2-3 above.

@hadhoryth

The problem is in the registry: for some reason the location of `mmselfsup.engine.runner` is not properly resolved. A quick fix is to edit the `registry.py` file and remove the `locations` argument:

```python
from mmengine.registry import (LOOPS as MMENGINE_LOOPS, RUNNERS as MMENGINE_RUNNERS,
                               RUNNER_CONSTRUCTORS as MMENGINE_RUNNER_CONSTRUCTORS, Registry)

RUNNERS = Registry("runner", parent=MMENGINE_RUNNERS)
RUNNER_CONSTRUCTORS = Registry("runner constructor", parent=MMENGINE_RUNNER_CONSTRUCTORS)
LOOPS = Registry("loop", parent=MMENGINE_LOOPS)
```
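
After editing, a rough way to verify the change (a sketch, not from this thread) is simply to import the registries again and check that no error is raised:

```python
# Sanity check after editing mmselfsup/registry.py: the edited module should
# import cleanly and expose the three registries.
from mmselfsup.registry import LOOPS, RUNNER_CONSTRUCTORS, RUNNERS

print(RUNNERS, RUNNER_CONSTRUCTORS, LOOPS)
```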
