model training and prediction #80
Jet can trigger processing anywhere, on CPU or GPU. You can use …
Then Jet is just used as a trigger mechanism, which serves no purpose by itself; I could trigger the same thing with an HTTP REST service. The question is: assume the model is built using GPUs. I guess Jet cannot be used for training on, say, thousands of images, and if the trained model is deployed only on CPU-based machines for real-time image classification (without a GPU), then Jet serves no purpose except as a trigger mechanism for model prediction.
Jet is a stream processing engine. It moves data around in a distributed, fault-tolerant way and is very efficient at doing so. It connects to sources; can transform, regroup, split, join, enrich, and aggregate the data; and writes it to a destination. You define each of these steps in a pipeline. Image classification is a case of enrichment: an input image is classified and sent to the output together with the classification result. The classification itself is only triggered by Jet, as you said, and it can run on CPU, GPU, FPGA, or anywhere else. In your case you can use Jet to parallelize the classification across a cluster and to make it fault-tolerant. You can also use it to collect and prepare input for the classification from multiple sources. But the classification itself is done by an external library; Jet only calls it.
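To illustrate the enrichment pattern described above, here is a minimal plain-Java sketch using `java.util.stream` as a stand-in for a Jet pipeline. The `classify` method is a hypothetical placeholder for the external model call (e.g. a TensorFlow inference session), which could itself run on CPU or GPU; Jet's role would only be to route each input through it and forward the enriched result:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class EnrichmentSketch {

    // Hypothetical stand-in for an external classifier (e.g. a TensorFlow
    // model). A Jet pipeline would only invoke such a function per item;
    // the actual computation happens outside Jet, on whatever hardware
    // the library targets.
    static String classify(String image) {
        return image.contains("cat") ? "cat" : "unknown";
    }

    public static void main(String[] args) {
        // Source: a batch of incoming image identifiers.
        List<String> source = List.of("cat_01.jpg", "dog_07.jpg");

        // Enrichment step: pair each input with its classification result,
        // mirroring a map/enrich stage in a Jet pipeline.
        Map<String, String> sink = source.stream()
                .collect(Collectors.toMap(img -> img, EnrichmentSketch::classify));

        System.out.println(sink.get("cat_01.jpg")); // prints "cat"
    }
}
```

In a real Jet pipeline the per-item call would be parallelized across the cluster and restarted on failure; the sketch only shows the data flow, not those guarantees.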
Assume a deep learning model is built with Google TensorFlow for image classification, and a GPU is required for prediction. How can Hazelcast Jet be used for real-time image classification? The model runs in GPU memory address space, while Hazelcast Jet's in-memory processing works in normal CPU RAM?