Describe the bug
When training a VFNet model on a COCO-format dataset, I ran into a problem.
The output was as follows:
loading annotations into memory...
Done (t=0.13s)
creating index...
index created!
Traceback (most recent call last):
File "E:\mmdetection\mmdetection\tools\train.py", line 121, in <module>
main()
File "E:\mmdetection\mmdetection\tools\train.py", line 117, in main
runner.train()
File "E:\Anaconda\envs\openmmlab_330\lib\site-packages\mmengine\runner\runner.py", line 1728, in train
self._train_loop = self.build_train_loop(
File "E:\Anaconda\envs\openmmlab_330\lib\site-packages\mmengine\runner\runner.py", line 1520, in build_train_loop
loop = LOOPS.build(
File "E:\Anaconda\envs\openmmlab_330\lib\site-packages\mmengine\registry\registry.py", line 570, in build
return self.build_func(cfg, *args, **kwargs, registry=self)
File "E:\Anaconda\envs\openmmlab_330\lib\site-packages\mmengine\registry\build_functions.py", line 123, in build_from_cfg
obj = obj_cls(**args)  # type: ignore
File "E:\Anaconda\envs\openmmlab_330\lib\site-packages\mmengine\runner\loops.py", line 46, in __init__
super().__init__(runner, dataloader)
File "E:\Anaconda\envs\openmmlab_330\lib\site-packages\mmengine\runner\base_loop.py", line 26, in __init__
self.dataloader = runner.build_dataloader(
File "E:\Anaconda\envs\openmmlab_330\lib\site-packages\mmengine\runner\runner.py", line 1370, in build_dataloader
dataset = DATASETS.build(dataset_cfg)
File "E:\Anaconda\envs\openmmlab_330\lib\site-packages\mmengine\registry\registry.py", line 570, in build
return self.build_func(cfg, *args, **kwargs, registry=self)
File "E:\Anaconda\envs\openmmlab_330\lib\site-packages\mmengine\registry\build_functions.py", line 123, in build_from_cfg
obj = obj_cls(**args)  # type: ignore
File "E:\mmdetection\mmdetection\mmdet\datasets\base_det_dataset.py", line 51, in __init__
super().__init__(*args, **kwargs)
File "E:\Anaconda\envs\openmmlab_330\lib\site-packages\mmengine\dataset\base_dataset.py", line 247, in __init__
self.full_init()
File "E:\mmdetection\mmdetection\mmdet\datasets\base_det_dataset.py", line 89, in full_init
self.data_bytes, self.data_address = self._serialize_data()
File "E:\Anaconda\envs\openmmlab_330\lib\site-packages\mmengine\dataset\base_dataset.py", line 768, in _serialize_data
data_bytes = np.concatenate(data_list)
File "<__array_function__ internals>", line 200, in concatenate
ValueError: need at least one array to concatenate
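For context, the final frame of the traceback shows why this error means the dataset came up empty: mmengine's `BaseDataset._serialize_data()` pickles every entry of `data_list` and concatenates the resulting byte buffers, and `np.concatenate` refuses an empty sequence. A minimal sketch:

```python
import pickle

import numpy as np

# Mimic what mmengine's BaseDataset._serialize_data() does: pickle each
# loaded sample into a uint8 buffer, then concatenate the buffers.
# With zero samples loaded, the list is empty and np.concatenate raises
# the exact ValueError seen in the traceback above.
data_list = []  # an empty dataset: no images/annotations survived loading
serialized = [np.frombuffer(pickle.dumps(d), dtype=np.uint8) for d in data_list]
try:
    np.concatenate(serialized)
except ValueError as err:
    print(err)  # need at least one array to concatenate
```

So the failure is almost certainly that zero samples were loaded, not a bug in the serialization step itself.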
Did you make any modifications on the code or config? Did you understand what you have modified?
I didn't make any modifications except changing the dataset path in configs/_base_/datasets/coco_detection.py.
Yes, I understand what I have modified.
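Since only the dataset path was changed, one sanity check worth running (the annotation payload and class names below are placeholders, not taken from this report) is whether the category names in the annotation JSON match the classes the dataset is configured with. A mismatch silently filters out every sample and produces exactly this empty-dataset error:

```python
# Toy COCO-style annotation payload standing in for the real JSON file
# (in practice, load it with json.load(open(ann_file))).
coco = {
    "images": [{"id": 1, "file_name": "0001.jpg", "width": 640, "height": 480}],
    "annotations": [{"id": 1, "image_id": 1, "category_id": 1,
                     "bbox": [10, 10, 50, 50], "area": 2500, "iscrowd": 0}],
    "categories": [{"id": 1, "name": "defect"}],
}

# Class names as configured in the dataset's metainfo; a hypothetical
# wrong value here means every sample is dropped and data_list is empty.
configured_classes = ("person",)

ann_names = {c["name"] for c in coco["categories"]}
missing = ann_names - set(configured_classes)
print("categories in annotations:", sorted(ann_names))
print("not covered by configured classes:", sorted(missing))
```

If "not covered" is non-empty, the dataset will keep none of the annotations for those categories.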
What dataset did you use?
Environment
Please run python mmdet/utils/collect_env.py to collect necessary environment information and paste it here.
sys.platform: win32
Python: 3.8.20 (default, Oct 3 2024, 15:19:54) [MSC v.1929 64 bit (AMD64)]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce MX350
CUDA_HOME: E:\cuda_toolkit
NVCC: Cuda compilation tools, release 11.8, V11.8.89
GCC: n/a
PyTorch: 2.0.1+cu118
PyTorch compiling details: PyTorch built with:
C++ Version: 199711
MSVC 193431937
Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
TorchVision: 0.15.2+cu118
OpenCV: 4.11.0
MMEngine: 0.10.6
MMDetection: 3.2.0+cfd5d3a
Bug fix
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
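One likely culprit, offered as an assumption rather than a confirmed diagnosis: when the stock COCO config is pointed at a custom dataset without overriding `metainfo`, `CocoDataset` still filters annotations against the 80 default COCO class names, leaving an empty `data_list`. A hedged config sketch with placeholder paths and class names:

```python
# Hypothetical config fragment (not the confirmed fix for this issue):
# override metainfo so the dataset keeps custom categories instead of
# filtering everything out against the default COCO class names.
classes = ('defect',)  # assumption: your own category names

train_dataloader = dict(
    dataset=dict(
        type='CocoDataset',
        data_root='data/my_coco/',                    # assumption
        ann_file='annotations/instances_train.json',  # assumption
        data_prefix=dict(img='train/'),
        metainfo=dict(classes=classes),
    ))
```

Also worth double-checking that `ann_file` and `data_prefix` are relative to `data_root`, since an unreadable annotation path can fail in the same way.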
Hello, dear developers! I ran into this problem and need your help!