Can mkl-dnn be used in a multithreaded program? #227
Hi,
I want to run the same NN model in multiple threads (each bound to a core) to achieve maximum speed. I can do so using the DNN API in MKL, but I doubt I can do so using mkl-dnn, because here an operation primitive is built on top of memory primitives, which definitely cannot be shared among threads. Am I right?
Thanks
Comments
Hi @kobe2000, you can use mkl-dnn in the multi-threaded environment you described, with a few restrictions: primitives are stateful, hence not thread-safe, so each thread should create its own primitives; read-only data such as weights can be shared between threads.
Please also go through issue #199, which is also devoted to MKL-DNN thread (un)safety.
Do you mean that weights are the only thing that can be shared? Do primitives consume much memory if I construct them in each thread?
Weights can definitely be shared (you just pass the same memory to different convolution primitives). When I said primitives are stateful, I meant that primitives might have internal buffers (like the one used for reduction) that make them not thread-safe. In general, primitives shouldn't consume too much memory, especially if we are talking about the forward pass only. One more note: to make MKL-DNN run sequentially, you might want to set the number of OMP threads to 1, so that the library would not even try to create a parallel region.
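To make this concrete, below is a minimal sketch of the pattern, written against the mkl-dnn v0.x C++ API (as in the library's simple_net example). The shapes, buffer contents, and the run_inference helper are hypothetical and only illustrate the idea: every thread constructs its own memory and convolution primitives, all of them wrap one shared read-only weights buffer, and each worker pins OMP to a single thread so mkl-dnn stays sequential inside it.

```cpp
// Sketch only: illustrates the sharing pattern discussed above.
#include <mkldnn.hpp>
#include <omp.h>

#include <thread>
#include <vector>

using namespace mkldnn;

// Each application thread builds its own memory and convolution primitives,
// but all of them wrap the same read-only weights buffer.
void run_inference(engine &eng, float *shared_weights,
                   std::vector<float> &src_buf, std::vector<float> &dst_buf) {
    // Keep the library sequential inside this worker.
    omp_set_num_threads(1);

    // Hypothetical shapes: 1x3x32x32 input, 16 3x3 filters, "same" padding.
    memory::dims src_tz = {1, 3, 32, 32};
    memory::dims wei_tz = {16, 3, 3, 3};
    memory::dims dst_tz = {1, 16, 32, 32};
    memory::dims strides = {1, 1}, padding = {1, 1};

    auto src_md = memory::desc(src_tz, memory::data_type::f32, memory::format::nchw);
    auto wei_md = memory::desc(wei_tz, memory::data_type::f32, memory::format::oihw);
    auto dst_md = memory::desc(dst_tz, memory::data_type::f32, memory::format::nchw);

    // Per-thread memory primitives; the weights one merely wraps shared data.
    auto src_mem = memory({src_md, eng}, src_buf.data());
    auto wei_mem = memory({wei_md, eng}, shared_weights);
    auto dst_mem = memory({dst_md, eng}, dst_buf.data());

    // Per-thread convolution primitive (primitives may hold internal state,
    // so they must not be shared across threads).
    auto conv_d = convolution_forward::desc(prop_kind::forward_inference,
            convolution_direct, src_md, wei_md, dst_md,
            strides, padding, padding, padding_kind::zero);
    auto conv_pd = convolution_forward::primitive_desc(conv_d, eng);
    auto conv = convolution_forward(conv_pd, src_mem, wei_mem, dst_mem);

    stream(stream::kind::eager).submit({conv}).wait();
}

int main() {
    engine eng(engine::cpu, 0);

    const int n_threads = 4;
    std::vector<float> weights(16 * 3 * 3 * 3, 0.1f); // shared, read-only
    std::vector<std::vector<float>> srcs(n_threads,
            std::vector<float>(1 * 3 * 32 * 32, 1.f));
    std::vector<std::vector<float>> dsts(n_threads,
            std::vector<float>(1 * 16 * 32 * 32));

    std::vector<std::thread> workers;
    for (int i = 0; i < n_threads; ++i)
        workers.emplace_back(run_inference, std::ref(eng), weights.data(),
                             std::ref(srcs[i]), std::ref(dsts[i]));
    for (auto &w : workers) w.join();
    return 0;
}
```

Setting OMP_NUM_THREADS=1 in the environment achieves the same sequential behavior for all workers, and may be more reliable than calling omp_set_num_threads from each native thread.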
It really helps, thank you!