
Can mkl-dnn be used in multithread program? #227

Closed
kobe2000 opened this issue Apr 25, 2018 · 4 comments


kobe2000 commented Apr 25, 2018

Hi,
I want to run the same NN model in multiple threads (each bound to a core) to achieve maximum speed. I can do so using the DNN API in MKL, but I doubt I can do so using mkl-dnn, because here an operation primitive is built upon memory primitives, which definitely cannot be shared among threads. Am I right?

Thanks

@emfomenk

Hi @kobe2000,

You can use mkl-dnn in the multi-threaded environment you described, with a few restrictions:

  • primitives are stateful, so if you want to run the same convolution in two different threads, you need to create two different primitives;

  • Winograd convolution uses a global scratchpad to reduce memory consumption, which makes it not thread-safe. If you want to use Winograd convolution in a multi-threaded setting, please define MKLDNN_ENABLE_CONCURRENT_EXEC (see details)

Please also go through issue #199, which is also devoted to MKL-DNN thread (un)safety.

@kobe2000 (Author)

You mean the weights are the only thing that can be shared? Do primitives consume much memory if I construct them in each thread?

@emfomenk

Weights can definitely be shared (you just pass the same memory to different convolution primitives).

When I said primitives are stateful, I meant that primitives might have internal buffers (like the one used for reduction) that make them not thread-safe.

In general, primitives shouldn't consume too much memory, especially if we are talking about the forward pass only.

One more note: to make MKL-DNN sequential, you might want to set the number of OMP threads to 1, so that the library does not even try to create a parallel region.

@kobe2000 (Author)

It really helps. Thank you!
