This repository contains the full code for building a simple inference dashboard with Streamlit, Tensorflow, and OpenCV. The tutorial can be found here.
Python >= 3.7, <= 3.9
Tensorflow >= 2.5.0
To install necessary packages like Datature Hub, Streamlit, Tensorflow, and OpenCV, run the command below. It is recommended that you do this in a virtual environment of your choice (such as virtualenv or virtualenvwrapper).
pip install -r requirements.txt
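For example, one possible setup on Linux/macOS using Python's built-in venv module (the environment name `.venv` below is just a placeholder):

```sh
# Create and activate an isolated environment, then install the dependencies.
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```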
To start the app, run the command below.
streamlit run inference_dashboard.py
The dashboard should now be running on your local machine at http://localhost:8501. It currently supports inference with Tensorflow Object Detection models exported from Datature Nexus, and there are two ways to load a model into the dashboard.
We leverage the Datature SDK to export and download your model from Nexus. To do so, ensure that you have an exported Tensorflow artifact in your Nexus project. You can then enter your secret key in the Project Secret Key field provided. All exported Tensorflow artifacts will be listed in the Select Model dropdown. Select the model you wish to load, and it will automatically be downloaded and loaded into the dashboard.
Alternatively, you can load a model from a local directory. To do so, enter the path to your Tensorflow SavedModel directory (<DIR>/saved_model) in the Model Path field provided. Please also enter the path to the label map file (<DIR>/label_map.pbtxt) in the Label Map Path field.
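As a rough illustration of what loading from a local directory involves (this is a minimal sketch, not the dashboard's exact code; the simplified label map parser below assumes the usual id/name pbtxt layout):

```python
import re
import tensorflow as tf

def load_local_model(saved_model_dir, label_map_path):
    """Load a Tensorflow SavedModel and build a minimal id -> name mapping from a label map."""
    model = tf.saved_model.load(saved_model_dir)  # e.g. "<DIR>/saved_model"

    # Very simplified pbtxt parsing: pair up the "id" and "name" fields of each item.
    with open(label_map_path, "r") as f:
        text = f.read()
    ids = [int(m) for m in re.findall(r"id:\s*(\d+)", text)]
    names = re.findall(r"name:\s*['\"](.+?)['\"]", text)
    category_index = dict(zip(ids, names))

    return model, category_index
```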
For both methods, you will also need to include the model input size as a comma-separated string of WIDTH,HEIGHT in the Model Input Size field. For example, if your model input size is 640x640, you should enter 640,640 in the field. Once the model has been loaded successfully, you can then upload image(s) to run inference on and visualise or download the results.
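Under the hood, inference on an uploaded image typically looks something like the sketch below. The output keys (detection_boxes, detection_scores, detection_classes) follow the Tensorflow Object Detection API convention and are an assumption here; check your exported model's signature if they differ.

```python
import cv2
import numpy as np
import tensorflow as tf

def run_inference(model, image_path, input_width, input_height, score_threshold=0.5):
    """Resize an image to the model input size, run detection, and return results above a threshold."""
    image = cv2.imread(image_path)                      # BGR, original resolution
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (input_width, input_height))

    # Object detection SavedModels usually expect a uint8 batch of shape [1, H, W, 3].
    input_tensor = tf.convert_to_tensor(resized[np.newaxis, ...], dtype=tf.uint8)
    detections = model(input_tensor)

    scores = detections["detection_scores"][0].numpy()
    boxes = detections["detection_boxes"][0].numpy()    # normalised [ymin, xmin, ymax, xmax]
    classes = detections["detection_classes"][0].numpy().astype(int)

    keep = scores >= score_threshold
    return boxes[keep], classes[keep], scores[keep]
```

For a 640x640 model, this would be called as run_inference(model, "image.jpg", 640, 640), mirroring the 640,640 value entered in the Model Input Size field.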
To stop the app, press Ctrl+C in the terminal.