Error during training model with CIFAR10 #22
Comments
Hi @hotloo, sorry to bother you, but could you please help me with the following problem? Thank you very much. `ValueError: GpuElemwise. Input dimension mis-match. Input 2 (indices start at 0) has shape[2] == 6, but the output's size on that axis is 5.`
I have already solved the problem: simply delete the locally defined `pool_2d`, because a `pool_2d` is already imported. There is also no need to modify `z = pool_2d(z, ds=poolsize, st=poolstride)`. See the sketch below.
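For anyone hitting the same issue, here is a minimal sketch of the shadowing problem described above. The body of the local definition is hypothetical (the real one lives in the repository's `run.py`); the point is only that a local `def pool_2d` silently overrides the imported one:

```python
from theano.tensor.signal.pool import pool_2d  # the library's pool_2d

# A second, local definition like the (hypothetical) one below shadows
# the import, so every later call resolves to the local copy, which may
# handle pooling borders differently than the library version:
#
#     def pool_2d(z, ds, st=None):
#         ...
#
# Fix described above: delete the local definition and leave the call
# unchanged, so it uses the imported pool_2d:
#
#     z = pool_2d(z, ds=poolsize, st=poolstride)
```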
Hi Tongmuyuan: are you able to run the code with the CIFAR10 data? Thanks!
```
INFO:main.utils:e 0, i 0:V_C_class nan, V_E nan, V_C_de nan
ERROR:blocks.main_loop:Error occured during training.

Blocks will attempt to run `on_error` extensions, potentially saving data,
before exiting and reraising the error. Note that the usual `after_training`
extensions will not be run. The original error will be re-raised and also
stored in the training log. Press CTRL + C to halt Blocks immediately.

Traceback (most recent call last):
  File "run.py", line 660, in <module>
    if train(d) is None:
  File "run.py", line 509, in train
    main_loop.run()
  File "/home/julian/anaconda2/envs/ladder/lib/python2.7/site-packages/blocks/main_loop.py", line 197, in run
    reraise_as(e)
  File "/home/julian/anaconda2/envs/ladder/lib/python2.7/site-packages/blocks/utils/__init__.py", line 258, in reraise_as
    six.reraise(type(new_exc), new_exc, orig_exc_traceback)
  File "/home/julian/anaconda2/envs/ladder/lib/python2.7/site-packages/blocks/main_loop.py", line 183, in run
    while self._run_epoch():
  File "/home/julian/anaconda2/envs/ladder/lib/python2.7/site-packages/blocks/main_loop.py", line 232, in _run_epoch
    while self._run_iteration():
  File "/home/julian/anaconda2/envs/ladder/lib/python2.7/site-packages/blocks/main_loop.py", line 253, in _run_iteration
    self.algorithm.process_batch(batch)
  File "/home/julian/anaconda2/envs/ladder/lib/python2.7/site-packages/blocks/algorithms/__init__.py", line 287, in process_batch
    self._function(*ordered_batch)
  File "/home/julian/anaconda2/envs/ladder/lib/python2.7/site-packages/theano/compile/function_module.py", line 871, in __call__
    storage_map=getattr(self.fn, 'storage_map', None))
  File "/home/julian/anaconda2/envs/ladder/lib/python2.7/site-packages/theano/gof/link.py", line 314, in raise_with_op
    reraise(exc_type, exc_value, exc_trace)
  File "/home/julian/anaconda2/envs/ladder/lib/python2.7/site-packages/theano/compile/function_module.py", line 859, in __call__
    outputs = self.fn()
ValueError: GpuElemwise. Input dimension mis-match. Input 2 (indices start at 0) has shape[2] == 6, but the output's size on that axis is 5.
Apply node that caused the error: GpuElemwise{Composite{((i0 + (i1 * i2)) + i3)}}[(0, 0)](GpuJoin.0, CudaNdarrayConstant{[[[[ 0.30000001]]]]}, GpuReshape{4}.0, GpuDimShuffle{x,0,x,x}.0)
Toposort index: 1535
Inputs types: [CudaNdarrayType(float32, 4D), CudaNdarrayType(float32, (True, True, True, True)), CudaNdarrayType(float32, 4D), CudaNdarrayType(float32, (True, False, True, True))]
Inputs shapes: [(200, 192, 5, 5), (1, 1, 1, 1), (200, 192, 6, 6), (1, 192, 1, 1)]
Inputs strides: [(4800, 25, 5, 1), (0, 0, 0, 0), (6912, 36, 6, 1), (0, 1, 0, 0)]
Inputs values: ['not shown', CudaNdarray([[[[ 0.30000001]]]]), 'not shown', 'not shown']
Outputs clients: [[GpuElemwise{Composite{Switch(i0, i1, (i2 * i1))},no_inplace}(GpuElemwise{Composite{Cast{float32}(GT(i0, i1))},no_inplace}.0, GpuElemwise{Composite{((i0 + (i1 * i2)) + i3)}}[(0, 0)].0, CudaNdarrayConstant{[[[[ 0.1]]]]}), GpuElemwise{Composite{Cast{float32}(GT(i0, i1))},no_inplace}(GpuElemwise{Composite{((i0 + (i1 * i2)) + i3)}}[(0, 0)].0, CudaNdarrayConstant{[[[[ 0.]]]]})]]
HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.

Original exception:
ValueError: GpuElemwise. Input dimension mis-match. Input 2 (indices start at 0) has shape[2] == 6, but the output's size on that axis is 5.
Apply node that caused the error: GpuElemwise{Composite{((i0 + (i1 * i2)) + i3)}}[(0, 0)](GpuJoin.0, CudaNdarrayConstant{[[[[ 0.30000001]]]]}, GpuReshape{4}.0, GpuDimShuffle{x,0,x,x}.0)
Toposort index: 1535
Inputs types: [CudaNdarrayType(float32, 4D), CudaNdarrayType(float32, (True, True, True, True)), CudaNdarrayType(float32, 4D), CudaNdarrayType(float32, (True, False, True, True))]
Inputs shapes: [(200, 192, 5, 5), (1, 1, 1, 1), (200, 192, 6, 6), (1, 192, 1, 1)]
Inputs strides: [(4800, 25, 5, 1), (0, 0, 0, 0), (6912, 36, 6, 1), (0, 1, 0, 0)]
Inputs values: ['not shown', CudaNdarray([[[[ 0.30000001]]]]), 'not shown', 'not shown']
Outputs clients: [[GpuElemwise{Composite{Switch(i0, i1, (i2 * i1))},no_inplace}(GpuElemwise{Composite{Cast{float32}(GT(i0, i1))},no_inplace}.0, GpuElemwise{Composite{((i0 + (i1 * i2)) + i3)}}[(0, 0)].0, CudaNdarrayConstant{[[[[ 0.1]]]]}), GpuElemwise{Composite{Cast{float32}(GT(i0, i1))},no_inplace}(GpuElemwise{Composite{((i0 + (i1 * i2)) + i3)}}[(0, 0)].0, CudaNdarrayConstant{[[[[ 0.]]]]})]]
HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
```
There is an error message: `ValueError: GpuElemwise. Input dimension mis-match. Input 2 (indices start at 0) has shape[2] == 6, but the output's size on that axis is 5.`
Do you have any idea about it?
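For context on where a 5-vs-6 mismatch like this can come from: if the shadowing local `pool_2d` mentioned earlier handles pooling borders differently than the imported one, two branches of the graph can disagree by one on the same axis. A self-contained sketch of that effect (the 11-pixel axis length and 2x2/stride-2 pooling are illustrative assumptions, not values read from the model):

```python
# Illustrative only: how border handling in pooling yields 5 vs. 6
# outputs along the same axis.

def pooled_length(length, window, stride, ignore_border):
    """Number of pooling windows along one axis."""
    full = (length - window) // stride + 1  # windows that fit entirely
    if ignore_border:
        return full
    # Count a final partial window if any input remains uncovered.
    covered = (full - 1) * stride + window
    return full + (1 if covered < length else 0)

print(pooled_length(11, 2, 2, ignore_border=True))   # -> 5
print(pooled_length(11, 2, 2, ignore_border=False))  # -> 6
```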