Date | What/Why |
---|---|
2022/11/16 | Initial draft |
2023/01/30 | Updated UI |
2023/05/26 | Added support for Local saving |
2023/06/01 | Changed device ID selection to device name selection |
2023/06/06 | Added support for Docker build |
2023/06/20 | Added a note that the playback order can be disturbed depending on the number of images |
2023/12/22 | Supported Console Developer Edition and Console Enterprise Edition |
2024/03/27 | Error when clicking Stop Upload button |
-
This document is the functional specification of "Visualization", an application for viewing inference results.
TypeScript is used as the development language.
Next.js is used as the application framework.
Terms/Abbreviations | Meaning |
---|---|
"Console Access Library" | SDK that provides access to the "Console for AITRIOS" |
"Console for AITRIOS" (hereafter referred to as "Console") | A cloud service that provides various functions (deployment, retraining, Edge Device management, etc.) to efficiently implement solutions from edge to cloud |
Inference result | AI-processed metadata among the outputs from "Edge Applications" |
Image | Image data captured by Edge Devices among the outputs from "Edge Applications" |
Azure Blob Storage | Object storage provided by Microsoft Corporation |
Local Storage | Storage on the device where the user runs the sample application |
Local HTTP Server | Server launched when saving the data output by an Edge Device to Local Storage |
Amazon S3 | Object storage provided by Amazon |
-
"Console Access Library" for TypeScript used by "Visualization"
-
Sample code to deserialize inference results used by "Visualization"
-
Next.js
-
Users can launch "Visualization" in the repository to see inference results uploaded by Edge Devices.
-
Realtime Mode
-
Get and visualize the latest data.
-
Press the Reload button to update the Device Name list.
-
Press the Start Upload button to start uploading inference source images/inference results and initiate the screen display.
-
If the selected AI task is classification, start displaying a list of inference results.
-
-
Press the Stop Upload button to stop uploading inference source images/inference results and stop the screen display.
-
If the selected AI task is classification, stop displaying a list of inference results.
-
-
-
History Mode
-
Historical data associated with a specified subdirectory can be retrieved, visualized, and saved.
-
Press the Reload button to update the Device Name list.
-
Press the Start Playing button to start getting the past inference source images/inference results and displaying them on the screen.
-
If the selected AI task is classification, start displaying a list of inference results.
-
-
Press the Stop Playing button to stop getting the past inference source images/inference results and displaying them on the screen.
-
If the selected AI task is classification, stop displaying a list of inference results.
-
-
Press the Save Data button to display the Save Data menu.
-
You can select and save the data you want to save from the Save Data menu.
-
-
-
It can be run on Codespaces.
-
The AI model supports Object Detection, Classification and Segmentation.
-
Add the following to "CONNECTION_DESTINATION" in src/common/settings.ts.
-
The following values can be set in CONNECTION_DESTINATION.
-
SERVICE.Console
-
SERVICE.Azure
-
SERVICE.AWS
-
SERVICE.Local
-
-
-
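A minimal sketch of what these settings might look like in src/common/settings.ts. Only the names SERVICE, CONNECTION_DESTINATION, and LOCAL_ROOT come from this specification; the member values are assumptions for illustration:

```typescript
// Hypothetical sketch of src/common/settings.ts; member values are assumed.
export const SERVICE = {
  Console: 'console',
  Azure: 'azure',
  AWS: 'aws',
  Local: 'local',
} as const

export type Service = (typeof SERVICE)[keyof typeof SERVICE]

// Select where "Visualization" fetches data from.
export const CONNECTION_DESTINATION: Service = SERVICE.Console

// Absolute path used only when CONNECTION_DESTINATION is SERVICE.Local.
export const LOCAL_ROOT = '/any_place'
```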
Launch the screen.
-
The main screen appears.
-
-
getDeviceInfo is called to get the Device Name list of Edge Devices registered in the "Console".
-
The retrieved Device Name list of Edge Devices is set in the drop-down list.
-
Import/export labels and set display labels for inference results.
-
Only the label file data is retrieved during label import. (The label file itself is not included.)
-
-
Open the Display Setting menu and set display setting on the AI task tab.
-
In case of Realtime Mode
-
Select the Device Name of the Edge Device from the Device Name drop-down list and press Start Upload.
-
startUploadInferenceResult is called to start uploading inference source images and inference results.
-
If the Edge Device is already running inference, the StartUploadInferenceResult API of the "Console" does not return a subdirectory.
In that case, the latest subdirectory is obtained instead.
-
-
getImage and getInference (or, when "SERVICE.Local" is specified for CONNECTION_DESTINATION, latestInference and specifiedImage) are called periodically to get and display the latest inference source image and inference results.
-
For "SERVICE.Local", the acquisition interval can be shortened, but a different API is used depending on the acquisition timing.
-
When a periodic call to getImage or getInference times out, no retry is performed and the upload is stopped as is.
-
The abstraction layer that selects the cloud service or SDK to be used determines where to fetch the data from, based on the connection information in src/common or the CONNECTION_DESTINATION setting in src/common/settings.ts.
-
When set to "SERVICE.Console", it fetches data from the "Console".
-
When set to "SERVICE.Azure", it fetches data from Azure Blob Storage.
-
When set to "SERVICE.AWS", it fetches data from Amazon S3.
-
When set to "SERVICE.Local", it fetches the data from the path set in "LOCAL_ROOT" of src/common/settings.ts.
-
-
If there is an error in the connection information or connection destination setting, an error is thrown while fetching the data.
-
If the selected AI task is a classification, a list of inference results is also displayed.
-
-
When the Stop Upload button is pressed, stopUploadInferenceResult is called and the upload/display of the inference source image and the inference result are stopped.
-
If data acquisition does not complete within the Polling Interval, timeout processing is performed.
-
If the selected AI task is a classification, the list of inference results is also stopped.
-
When the connection destination is Local, the file structure of the data saved by the Local HTTP Server is changed using the deviceId and subDirectory.
For the changed file structure, refer to the limitations section.
-
-
-
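The periodic acquisition and its no-retry timeout behavior described above can be sketched as follows. The helper names and signatures here are illustrative, not the actual implementation:

```typescript
// Hypothetical sketch of the Realtime Mode polling loop.
type Fetcher<T> = () => Promise<T>

// Run a fetch with a timeout; the pending timer is always cleared.
async function withTimeout<T>(fetchData: Fetcher<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error('polling timeout')), ms)
  })
  try {
    return await Promise.race([fetchData(), timeout])
  } finally {
    clearTimeout(timer)
  }
}

// Poll until a timeout occurs, mirroring "no retry is performed and
// the upload is stopped as is".
async function poll<T>(
  fetchData: Fetcher<T>,
  onData: (data: T) => void,
  intervalMs: number,
): Promise<void> {
  for (;;) {
    try {
      onData(await withTimeout(fetchData, intervalMs))
    } catch {
      return // stop polling on timeout, no retry
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs))
  }
}
```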
In case of History Mode
-
Select the Device Name of the Edge Device from the Device Name drop-down list.
-
getSubDirectoryList is called to get a list of subdirectories where the inference source images are stored.
-
The retrieved list of subdirectories is set in a drop-down list.
-
Select the subdirectory from the Sub Directory drop-down list.
-
getTotalImageCount is called to get the number of inference source images stored in the selected subdirectory.
-
Set the Image Selection/Interval Time and press Start Playing.
-
getImage and getInference are called periodically to get and display, one by one, all of the inference source images stored in the subdirectory and their inference results.
-
The abstraction layer that selects the cloud service or SDK to be used determines where to fetch the data from, based on the connection information in src/common or the CONNECTION_DESTINATION setting in src/common/settings.ts.
-
When set to "SERVICE.Console", it fetches data from the "Console".
-
When set to "SERVICE.Azure", it fetches data from Azure Blob Storage.
-
When set to "SERVICE.AWS", it fetches data from Amazon S3.
-
When set to "SERVICE.Local", it fetches the data from the path set in "LOCAL_ROOT" of src/common/settings.ts.
-
-
After getting and displaying the last inference source image and inference result of the subdirectory, start again with the first one.
-
If the selected AI task is a classification, a list of inference results is also displayed.
-
If data acquisition does not complete within the Interval Time, timeout processing is performed and the acquisition and display of the inference source image and inference result are stopped.
-
-
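The wrap-around playback described above (restarting from the first image after the last) can be sketched as a simple index function; the function name and signature are hypothetical:

```typescript
// Hypothetical helper: advance the playback index, wrapping back to the
// first image after the last one in the subdirectory.
function nextPlaybackIndex(current: number, totalImageCount: number): number {
  return (current + 1) % totalImageCount
}
```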
Pressing the Stop Playing button stops getting and displaying inference source images and inference results.
-
If the selected AI task is a classification, the list of inference results is also stopped.
-
-
Press the Save Data button to open the Save Data menu.
-
Select the data format you want to save from the Select Save Data radio button.
-
Move the Save Range slider to specify the range to save data.
-
Press the Save button.
-
The "Save File As" dialog appears.
-
The user can select the name and destination of the file to be saved.
-
-
When the user selects a filename and destination to save and presses the "Save" button, the Saving dialog appears, displaying the format of the data to be saved, the progress of saving it, and the save progress bar.
-
The save progress and save progress bar are updated as saving progresses.
-
The progress at the time of saving is as follows:
-
Starts saving: 0%
-
Working directory creation and initialization complete: 20%
-
Get and save inference source images complete: 50%
-
Generating overlaid images: 60%
-
Get, deserialize, and save inference results complete: 80%
-
Generating zip file and getting zip data are complete: 100%
-
-
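The milestones above could be kept as a constant map, sketched here with illustrative names that are not taken from the actual source:

```typescript
// Hypothetical progress milestones for the Saving dialog (values in %).
const SAVE_PROGRESS = {
  start: 0,                 // starts saving
  directoryInitialized: 20, // working directory created and initialized
  imagesSaved: 50,          // inference source images retrieved and saved
  overlaysGenerated: 60,    // overlaid images generated
  inferencesSaved: 80,      // inference results retrieved, deserialized, saved
  zipComplete: 100,         // zip file generated and zip data retrieved
} as const
```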
You can cancel the save by pressing the Cancel button.
-
-
initializeDirectory is called to create a working directory for data saving.
-
Creates the working directory under the name "work" under the src directory.
-
-
saveImage and saveInferences are called to save the specified range of inference source images and inference results to the working directory.
-
If you select Overlaid Image, createOverlaidImage is called to create and save images by overlaying the inference results/timestamp on the inference source images saved in the working directory. The overlaid images reflect the label and Display Setting values set when the Save Data button is pressed.
-
createZip is called to create a zip file containing the saved inference source images and inference results in the working directory.
-
getZipData is called to get the data of the created zip file.
-
The file structure of the zip file is as follows:
yyyyMMddHHmmssSSS (1)
∟yyyyMMddHHmmssSSS.jpg (2)
∟yyyyMMddHHmmssSSS.jpg (2)
∟yyyyMMddHHmmssSSS.json (3)
∟yyyyMMddHHmmssSSS.json (3)
(1) Inference start time
(2) Original Image: inference source image file; Overlaid Image: overlaid image file (the filename is the time at which the inference source image was output)
(3) Inference results file (the filename is the time at which the inference source image was output)
-
-
Once the data is retrieved, it is written to the user-specified destination file.
-
Once the data is written, the save progress is 100% and the save is complete.
-
The data saved in the working directory is deleted when the working directory is initialized after the next saving process is started.
-
The saved data that remains when the saving is canceled is similarly deleted when the working directory is initialized after the next saving process is started.
-
-
-
Have access to the "Console".
-
When using either the "Console" or a cloud service, prepare the corresponding connection information.
-
When using the "Console", check that console_access_settings.yaml exists in src/common and that the required connection information is set.
-
When using a cloud service other than the "Console", check that src/common contains [Service name in lower case]_access_settings.yaml and that the required connection information is set.
-
Example
azure_access_settings.yaml
-
-
When using Local Storage, the root of the browsing directory must be set in "LOCAL_ROOT" of src/common/settings.ts.
-
Example
export const LOCAL_ROOT = 'C:\\any_place\\...'
-
LOCAL_ROOT can only be specified as an absolute path.
-
-
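A hedged sketch of validating the absolute-path requirement above. The helper name is hypothetical; it accepts both Windows and POSIX absolute paths, since the example above uses a Windows path:

```typescript
import path from 'path'

// Hypothetical check that LOCAL_ROOT is an absolute path, as required.
function isValidLocalRoot(localRoot: string): boolean {
  return path.win32.isAbsolute(localRoot) || path.posix.isAbsolute(localRoot)
}
```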
-
A development environment with Node.js has been built.
-
Version of Node.js is v18.
-
-
A TypeScript development environment has been built.
-
TypeScript version is 4.7.
-
Required if you want to run it as is.
-
-
Docker is available.
-
Required if you want to build and use Docker containers.
-
-
An Edge Device is connected to the "Console" and ready to accept operations from the "Console".
-
GET
-
{base_url}/api/image/{deviceId}
-
{base_url}/api/inference/{deviceId}
-
{base_url}/api/subDirectoryList/{deviceId}
-
{base_url}/api/totalImageCount/{deviceId}
-
{base_url}/api/deviceInfo
-
{base_url}/api/getZipData
-
{base_url}/api/latestInference
-
{base_url}/api/specifiedImage
-
-
POST
-
{base_url}/api/startUploadInferenceResult/{deviceId}
-
{base_url}/api/stopUploadInferenceResult/{deviceId}
-
{base_url}/api/initializeDirectory/{deviceId}
-
{base_url}/api/saveImage/{deviceId}
-
{base_url}/api/saveInferences/{deviceId}
-
{base_url}/api/createZip
-
{base_url}/api/createOverlaidImage/{deviceId}
-
-
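The endpoints above are Next.js API routes. A hedged sketch of building their URLs; the helper name and signature are assumptions:

```typescript
// Hypothetical URL builder for the API routes listed above.
function apiUrl(
  baseUrl: string,
  route: string,
  deviceId?: string,
  query?: Record<string, string>,
): string {
  const path = deviceId ? `${route}/${encodeURIComponent(deviceId)}` : route
  const qs = query ? `?${new URLSearchParams(query).toString()}` : ''
  return `${baseUrl}/api/${path}${qs}`
}
```

For example, `apiUrl(baseUrl, 'deviceInfo')` builds the getDeviceInfo route, and `apiUrl(baseUrl, 'image', deviceId, {...})` builds the getImage route with query parameters.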
AI task tab
-
There is a tab for "Object Detection", a tab for "Classification" and a tab for "Segmentation"
-
-
Mode tab
-
There is a tab for "Realtime Mode" and a tab for "History Mode"
-
-
Display Setting menu
-
Sets display settings on the AI task tab
-
-
Timestamp of the image file
-
Inference source image
-
Inference results
-
Inference results raw data
-
Displayed in JSON format
-
-
Display label settings for inference results
-
Displayed in JSON format
-
Users can edit labels
-
-
Import Labels button
-
Imports any label file and displays it in the label settings for inference results
-
The format of the label file is described in the "Example for Object Detection/Classification" under "Assumption/Restriction"
-
-
Export Labels button
-
Exports data in the display label settings for inference results as a label file
-
Label file is saved anywhere on the user PC
-
The format of the label file is described in the "Example for Object Detection/Classification" under "Assumption/Restriction"
-
-
Display Setting menu
-
Sets display settings on the AI task tab
-
-
Timestamp of the image file
-
Inference source image
-
Inference results
-
The inference result with the highest score that is greater than or equal to the threshold value set in the Probability slider in the Display Setting menu is displayed
-
If more than one has the same highest score, the first of them is displayed
-
Does not display inference results if all scores are 0%
-
-
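The selection rule above (highest score at or above the Probability threshold, first on ties, nothing when every score is 0%) can be sketched as follows; the item shape and function name are assumptions for illustration:

```typescript
// Assumed shape of a classification inference result item.
interface InferenceItem { label: string; score: number }

// Hypothetical sketch of the Classification display rule.
function selectDisplayed(
  items: InferenceItem[],
  threshold: number,
): InferenceItem | undefined {
  if (items.every((i) => i.score === 0)) return undefined
  let best: InferenceItem | undefined
  for (const item of items) {
    if (item.score >= threshold && (!best || item.score > best.score)) {
      best = item // strict '>' keeps the first item on ties
    }
  }
  return best
}
```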
List of inference results
-
Inference results raw data
-
Displayed in JSON format
-
-
Display label settings for inference results
-
Displayed in JSON format
-
Users can edit labels
-
-
8 and 9 are similar to the Import Labels button/Export Labels button on the Object Detection tab
-
Display Setting menu
-
Sets display settings on the AI task tab
-
-
Timestamp of the image file
-
Inference source image
-
Inference source images of Semantic Segmentation are displayed at twice their original size
-
-
Inference results
-
Detects objects pixel by pixel and displays the detection in any color
-
-
Magnification of the inference source images
-
Displays magnification of Semantic Segmentation inference source images
-
-
Inference results raw data
-
Displayed in JSON format
-
-
Label table
-
Manages and displays inference result label information
-
Each label has one row of data in the label table
-
There are four items defined for label data: "Visible", "ID", "Label", "Color"
-
Labels are displayed in order of class ID
-
-
Checkbox to display/hide inference results ("Visible")
-
Displays/Hides the label
-
-
Class ID ("ID")
-
Displays the class ID of the displayed label
-
-
Label ("Label")
-
Users can edit labels
-
If the label is too long, the end is omitted with "…"
-
-
Color picker ("Color")
-
Changes the color of inference results
-
-
Label addition position drop-down
-
Specifies where to add a label to the label table
-
The setting range is 0 to the final ID of the label table + 1.
-
-
Add Label button
-
Adds a row (all columns) at a user-specified position in the label table
-
The initial value when adding a row is as follows:
-
"Visible" :Checked
-
"ID" :Number of the row added
-
"Label" :No initial value, "Label Name" is displayed as a placeholder
-
"Color" :#FFFFFF
-
-
-
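Adding a row with the defaults listed above can be sketched as follows. The LabelRow shape, function name, and the re-numbering of IDs after insertion are assumptions for illustration:

```typescript
// Assumed shape of one row in the label table.
interface LabelRow {
  visible: boolean
  id: number
  label: string // empty: the UI shows "Label Name" as a placeholder
  color: string
}

// Hypothetical sketch: insert a default row at the given position.
function addLabel(table: LabelRow[], position: number): LabelRow[] {
  const row: LabelRow = { visible: true, id: position, label: '', color: '#FFFFFF' }
  const next = [...table.slice(0, position), row, ...table.slice(position)]
  // Re-number so IDs stay in class-ID order after insertion (assumption).
  return next.map((r, i) => ({ ...r, id: i }))
}
```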
Label deletion position drop-down
-
Specifies where to delete a label in the label table
-
The setting range is 0 to the final ID of the label table.
-
-
Delete Label button
-
Deletes a row (all columns) at a user-specified position in the label table
-
-
Import Labels button
-
Imports any label file and sets it to a label table
-
The format of the label file is described in the "Example for Segmentation" under "Assumption/Restriction"
-
-
Export Labels button
-
Exports data currently set in the label table as a label file
-
Saves the label file anywhere on the user PC
-
The format of the label file is described in the "Example for Segmentation" under "Assumption/Restriction"
-
-
Device Name drop-down list
-
Selects the Device Name of the Edge Device registered in the connection destination specified by the user.
If the string exceeds the width of the Device Name text box, the excess is truncated with "…"
-
-
Reload button
-
Updates the Device Name list in the Device Name drop-down list to the Device Name list of the latest Edge Device registered in "Console"
-
After reload, the Device Name drop-down list is left with no Device Name selected
-
-
Polling Interval slider
-
Slider used to set the polling interval when fetching data from the destination connection location specified by the user
-
The polling interval is displayed numerically to the right of the slider
-
The available setting range and increments of the Polling Interval differ depending on whether CONNECTION_DESTINATION is "SERVICE.Local" or another value.
-
The upload interval of images and inference results can be set with UploadInterval in the Command Parameter File. In Realtime Mode, UploadInterval must be shorter than the Polling Interval.
-
The default value of UploadInterval is 30, in units of 1/30 second (that is, 1 second). If nothing is specified, the default value is used.
-
-
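Since UploadInterval is expressed in units of 1/30 second, converting it to seconds is a simple division; the helper name is illustrative:

```typescript
// UploadInterval is in units of 1/30 second; the default of 30 is 1 second.
function uploadIntervalToSeconds(uploadInterval: number): number {
  return uploadInterval / 30
}
```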
-
Start Upload/Stop Upload button
-
Starts/stops uploading images and inference results, and starts/stops fetching and displaying the latest images and inference results from the destination specified by the user
-
-
Device Name drop-down list
-
Selects the Device Name of the Edge Device registered in the "Console"
-
-
Reload button
-
Updates the Device Name list in the Device Name drop-down list to the Device Name list of the latest Edge Device registered in "Console"
-
After reload, the Device Name drop-down list is left with no Device Name selected
-
-
Sub Directory drop-down list
-
Selects the subdirectory of images stored in the destination connection location specified by the user
-
-
Image Selection slider
-
Sets the index of the inference source image to start displaying
-
The index is displayed numerically to the right of the slider
-
When the slider value is changed, the display is updated to the inference source image whose date and time are tied to that index
-
-
Interval Time slider
-
Sets the playing interval when updating inference source images
-
The playing interval is displayed numerically to the right of the slider
-
The available setting range and increments of the Playing Interval differ depending on whether CONNECTION_DESTINATION is "SERVICE.Local" or another value
-
-
Start Playing/Stop Playing button
-
Starts/Stops updating inference source images
-
-
Save Data button
-
Displays the Save Data menu
-
-
Button to close Display Setting menu
-
Probability slider
-
Sets the probability threshold to display on the AI task tab
-
The threshold is displayed numerically to the right of the slider
-
-
Display Timestamp button
-
Displays/Hides image file timestamps
-
-
1 to 3 are similar to the Display Setting menu in the Object Detection tab
-
Display Top Score drop-down list
-
Selects the number of items to display in the list of inference results
-
The maximum value is 20
-
-
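The "Display Top Score" list above (top N results by score, N at most 20) can be sketched as follows; the item shape and function name are assumptions:

```typescript
// Assumed shape of a classification inference result item.
interface InferenceItem { label: string; score: number }

// Hypothetical sketch: take the top N results by score, capped at 20.
function topScores(items: InferenceItem[], n: number): InferenceItem[] {
  const limit = Math.min(n, 20)
  return [...items].sort((a, b) => b.score - a.score).slice(0, limit)
}
```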
Overlay Inference Result button
-
Sets whether to display or hide information with the highest score of the inference result
-
Linked to display inference results in the Classification tab
-
-
Overlay Inference Result Color button
-
Sets the display color for information with the highest score of the inference result
-
Selects any color from the color picker
-
-
Button to close Display Setting menu
-
Transparency slider
-
Sets the transparency of the inference results to display
-
Transparency is indicated by % on the right side of the slider
-
-
Display Timestamp button
-
Displays/Hides image file timestamps
-
-
Save Data menu close button
-
Type radio button
-
Selects data to save
-
Original Image saves inference source images without inference results overlay, and inference result raw data
-
Overlaid Image saves inference results/timestamp-overlaid images and inference result raw data
-
-
-
Start index
-
Specifies the start index to save inference source images in a specified subdirectory
-
It works with the left knob of the range slider
-
Not displayed when there is only one image (including parentheses, letters in parentheses)
-
-
End index
-
Specifies the end index to save inference source images in a specified subdirectory
-
It works with the right knob of the range slider
-
Not displayed when there is only one image (including parentheses, letters in parentheses)
-
-
Range slider
-
Specifies the range to save inference source images in a specified subdirectory
-
If there is only one image, the Range slider is not displayed and the message "Only one item can be saved" is displayed
-
-
Save button
-
Starts saving specified data
-
Saves images and inference results from specified subdirectory in zip format anywhere on the user PC
-
-
Save data format
-
Displays save data format
-
Displays "Saving Original Image" if the data being saved is Original Image
-
Displays "Saving Overlaid Image" if the data being saved is Overlaid Image
-
-
-
Save progress
-
Displays save progress as a percentage
-
-
Save progress bar
-
Displays save progress in progress bar
-
-
Cancel button
-
Cancels saving data
-
-
Use "Visualization" in a Docker container
-
Developers open the "Visualization" repository from any browser and clone the repository.
-
In src/common, place the setting file containing the connection information.
-
Set the connection destination in "CONNECTION_DESTINATION" of src/common/settings.ts.
-
The following values can be set in CONNECTION_DESTINATION.

Setting value | Data acquisition destination |
---|---|
SERVICE.Console | "Console" |
SERVICE.Azure | Azure Blob Storage |
SERVICE.AWS | Amazon S3 |
SERVICE.Local | Path set to "LOCAL_ROOT" in src/common/settings.ts |
-
If the configuration file for the connection destination set in "CONNECTION_DESTINATION" does not exist in src/common, or if the required authentication information is not described in the configuration file, an error dialog is displayed when obtaining the pre-inference image/inference results.
-
-
Create a "Visualization" Docker image by executing the following command, using the Dockerfile directly under the repository root directory:
docker build . -t visualization-app
-
Launch "Visualization" by executing the following command:
docker run -p 3000:3000 -d visualization-app
-
-
How to launch in other environments
-
Launch terminal in Codespaces or a directory cloned from the "Visualization" repository.
-
Run the following command to install the packages needed for the cloned "Visualization". (Not needed in Codespaces, since the packages are installed automatically.)
-
If the error "npm ERR! gyp ERR! build error" occurs when running the following command, install the C++ compiler.
npm install
-
-
In src/common, place the setting file containing the connection information.
-
The following values can be set in CONNECTION_DESTINATION.

Setting value | Data acquisition destination |
---|---|
SERVICE.Console | "Console" |
SERVICE.Azure | Azure Blob Storage |
SERVICE.AWS | Amazon S3 |
SERVICE.Local | Path set to "LOCAL_ROOT" in src/common/settings.ts |
-
If the configuration file for the connection destination set in "CONNECTION_DESTINATION" does not exist in src/common, or if the required authentication information is not described in the configuration file, an error dialog is displayed when obtaining the pre-inference image/inference results.
-
-
Run the following command in the terminal to launch "Visualization":
npm run build
npm run start
-
-
After "Visualization" is launched, you are transferred to the main screen.
-
Specify any of the tabs [Object Detection]/[Classification]/[Segmentation] from the AI task tab.
-
If necessary, press the [Import Labels] button to import labels.
-
Specify either of the tabs [Realtime Mode]/[History Mode] from the Mode tab.
-
Select the AI task.
-
On the Object Detection tab
-
Open the [Display Setting] menu and set the [Probability] slider/[Display Timestamp].
-
The [Probability] slider still works while displaying the image and inference results.
-
If the inference source image is displayed, it works even when stopped.
-
-
Change the display label settings for inference results.
-
Timestamp of image file/inference source image/inference results/inference results raw data are displayed according to the operation of Mode tab.
-
The inference results are displayed in frames with the label name set on the inference source image.
-
-
-
On the Classification tab
-
Open the [Display Setting] menu and set the [Probability] slider/[Display Timestamp] button/[Display Top Score] drop-down list/[Overlay Inference Result] button/[Overlay Inference Result Color] button.
-
The [Probability] slider still works even while displaying images/inference results.
-
If the inference source image is displayed, it works even when stopped.
-
-
Change the display label settings for inference results.
-
Timestamp of image file/inference source image/inference results/list of inference results/inference results raw data are displayed according to the operation of Mode tab.
-
-
On the Segmentation tab
-
Open the [Display Setting] menu and set the [Transparency] slider/[Display Timestamp] button.
-
The [Transparency] slider still works while displaying images, inference results.
-
-
Change the display label settings for inference results from the label table.
-
Set whether to display or hide inference results in the [Visible].
-
Set any label in the [Label].
-
Set any display color of inference results in the [Color].
-
Select a label addition position from the label addition position drop-down and press the [Add Label] button to add label data.
-
If the inference source image is displayed, it works even when stopped.
-
-
Select a label deletion position from the label deletion position drop-down and press the [Delete Label] button to delete the label data.
-
If the inference source image is displayed, it works even when stopped.
-
-
-
Timestamp of image file/inference source image/inference results/inference results raw data are displayed according to the operation of Mode tab.
-
-
-
Select the mode
-
Realtime Mode tab
-
Set the [Device Name] drop-down list/[Polling Interval] slider and press the [Start Upload] button.
-
When updating the [Device Name] drop-down list, press the [Reload] button.
-
During upload, [Device Name] drop-down list/[Reload] button/[Polling Interval] slider does not function.
-
-
Starts getting the specified inference source images/inference results and displays them on the AI task tab.
-
Press the [Stop Upload] button.
-
-
History Mode tab
-
Set the [Device Name] drop-down list/[Sub Directory] drop-down list/[Image Selection] slider/[Interval Time] slider, and press the [Start Playing] button.
-
Press the [Reload] button to update [Device Name] drop-down list.
-
The [Device Name] drop-down list/[Reload] button/[Sub Directory] drop-down list/[Image Selection] slider/[Interval Time] slider don’t work during playing.
-
Changing the value of the [Image Selection] slider while playback is stopped does not update the inference source image.
-
-
Starts getting the specified inference source images/inference results and displays them on the AI task tab.
-
Press the [Stop Playing] button.
-
Press the [Save Data] button.
-
The Save Data menu appears.
-
While playing, the [Save Data] button does not work
-
-
Select the data to save from the [Select Save Data] radio button.
-
Specify the range to save inference source images in a subdirectory from the [Save Range] slider.
-
Press the [Save] button.
-
Enter a save filename to the "Save File As" dialog, specify any directory, and press the "Save" button.
-
The Saving dialog menu appears.
-
-
Format of the data being saved, save progress, save progress bar, and [Cancel] button appear in the Saving dialog.
-
Press the [Cancel] button to cancel saving.
-
-
-
-
If necessary, press the [Export Labels] button to export labels.
-
Enter a save filename to the "Save File As" dialog, specify any directory, and press the "Save" button.
-
-
getImage
URL : {base_url}/api/image/{deviceId}?{imagePath}&{numberOfImages}&{skip}&{orderBy}
-
Get and return images in the specified imagePath.
-
Query Parameter’s name | Meaning | Range of parameter |
---|---|---|
deviceId | Device ID of the Edge Device uploading inference source images | Not specified |
imagePath | Path to the storage containing the pre-inference image(s) to be fetched | Not specified |
numberOfImages | Number of inference source images to get | Not specified |
skip | Number of inference source images to skip getting | Not specified |
orderBy | Sort order by date and time the inference source image was created | ASC, DESC |

Return value | Meaning |
---|---|
buff | Binary data of the retrieved inference source image |
timestamp | Timestamp of the retrieved inference source image |
-
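A hedged example of building the getImage query string and calling the route from the UI. The response shape and helper names are assumptions based on the tables above:

```typescript
// Build the getImage query string (parameters per the table above).
function buildGetImageQuery(
  imagePath: string,
  numberOfImages: number,
  skip: number,
  orderBy: 'ASC' | 'DESC',
): string {
  return new URLSearchParams({
    imagePath,
    numberOfImages: String(numberOfImages),
    skip: String(skip),
    orderBy,
  }).toString()
}

// Hypothetical wrapper around the route (response shape assumed).
async function getImage(baseUrl: string, deviceId: string, imagePath: string) {
  const qs = buildGetImageQuery(imagePath, 1, 0, 'DESC') // newest image first
  const res = await fetch(`${baseUrl}/api/image/${deviceId}?${qs}`)
  if (!res.ok) throw new Error(`getImage failed: ${res.status}`)
  return (await res.json()) as { buff: string; timestamp: string }
}
```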
getInference
URL : {base_url}/api/inference/{deviceId}?{timestamp}&{aiTask}
-
Get a list of inference results linked to inference source images.
-
This API also deserializes inference results.
-
-
Query Parameter’s name | Meaning | Range of parameter |
---|---|---|
deviceId | Device ID of the Edge Device uploading inference results | Not specified |
timestamp | Timestamp taken when saving the inference source image | Not specified |
aiTask | Type of AI task selected | ObjectDetection, Classification, Segmentation |

Return value | Meaning |
---|---|
inferences list | List of inference results linked to inference source images |
-
getSubDirectoryList
URL : {base_url}/api/subDirectoryList/{deviceId}
-
Get and return a list of subdirectories where inference source images are stored.
-
Query Parameter’s name | Meaning | Range of parameter |
---|---|---|
deviceId | Device ID of the Edge Device uploading inference source images | Not specified |

Return value | Meaning |
---|---|
subDirectory list | List of subdirectories where inference source images are stored |
-
getTotalImageCount
URL : {base_url}/api/totalImageCount/{deviceId}?{subDirectory}
-
Get and return the number of inference source images stored in the selected subdirectory.
-
Query Parameter’s name | Meaning | Range of parameter |
---|---|---|
deviceId | Device ID of the Edge Device uploading inference source images | Not specified |
subDirectory | Subdirectory selected from the list | Not specified |

Return value | Meaning |
---|---|
total image count | Number of inference source images stored in the selected subdirectory |
-
getDeviceInfo
URL : {base_url}/api/deviceInfo
-
Get the Device Name and Device ID list of Edge Devices registered in the "Console".
-
Query Parameter’s name | Meaning | Range of parameter |
---|---|---|
None | | |

Return value | Meaning |
---|---|
device list | Device Name and Device ID list of Edge Devices |
-
getZipData
URL : {base_url}/api/getZipData
-
Get the zip file data generated by createZip.
-
Query Parameter’s name | Meaning | Range of parameter |
---|---|---|
None | | |

Return value | Meaning |
---|---|
zipData | zip file data to save |
-
latestInference
URL : {base_url}/api/latestInference?{aiTask}
-
Obtains and returns the latest inference result saved in Local Storage.
-
Query Parameter’s name | Meaning | Range of parameter |
---|---|---|
aiTask | Type of AI task selected | ObjectDetection, Classification, Segmentation |

Return value | Meaning |
---|---|
deserializedRawData | Inference result for display |
inference | Latest inference result saved in Local Storage |
-
specifiedImage
URL : {base_url}/api/specifiedImage?{timestamp}
-
Gets the image from Local Storage whose file name matches the Timestamp.
-
Query Parameter’s name | Meaning | Range of parameter |
---|---|---|
timestamp |
Timestamp (file name) of inference image obtained |
Not specified |
Return value | Meaning |
---|---|
buff |
File name saved in Local Storage is binary data of Timestamp image |
-
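A sketch of fetching the image binary from the specifiedImage endpoint above and converting it for display. The `timestamp=` query form, the helper names, and the data-URL conversion are assumptions, not spec:

```typescript
// Builds the request URL for the specifiedImage API shown above.
function buildSpecifiedImageUrl(baseUrl: string, timestamp: string): string {
  return `${baseUrl}/api/specifiedImage?timestamp=${encodeURIComponent(timestamp)}`
}

// Illustrative usage: fetch the binary image data (`buff`) and return it
// as a base64 data URL suitable for an <img> src attribute.
async function fetchSpecifiedImage(baseUrl: string, timestamp: string): Promise<string> {
  const res = await fetch(buildSpecifiedImageUrl(baseUrl, timestamp))
  if (!res.ok) throw new Error(`specifiedImage failed: ${res.status}`)
  const buff = Buffer.from(await res.arrayBuffer())
  return `data:image/jpeg;base64,${buff.toString('base64')}`
}
```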
startUploadInferenceResult
URL : {base_url}/api/startUploadInferenceResult/{deviceId}
Instructs the Edge Device whose Device ID corresponds to the selected Device Name to start uploading inference source images and inference results.
-
This API also gets a list of command parameter files.
-
| Query Parameter’s name | Meaning | Range of parameter |
|---|---|---|
| deviceId | Device ID of the Edge Device to start uploading inference source images and inference results | Not specified |

| Return value | Meaning |
|---|---|
| result | SUCCESS or ERROR string |
| outputSubDir | Directory where images are uploaded |
-
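A sketch of handling the two return values of the startUploadInferenceResult endpoint above. The response type mirrors the table; the POST method and helper names are assumptions, since the specification does not state the HTTP method:

```typescript
// Response shape mirroring the return-value table above.
type StartUploadResponse = { result: 'SUCCESS' | 'ERROR'; outputSubDir?: string }

// Extracts the upload subdirectory, throwing when the API reports ERROR.
function outputSubDirOf(body: StartUploadResponse): string {
  if (body.result !== 'SUCCESS' || !body.outputSubDir) {
    throw new Error('startUploadInferenceResult returned ERROR')
  }
  return body.outputSubDir
}

// Illustrative call (POST is an assumption):
async function startUpload(baseUrl: string, deviceId: string): Promise<string> {
  const res = await fetch(`${baseUrl}/api/startUploadInferenceResult/${deviceId}`, { method: 'POST' })
  return outputSubDirOf((await res.json()) as StartUploadResponse)
}
```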
stopUploadInferenceResult
URL : {base_url}/api/stopUploadInferenceResult/{deviceId}
Instructs the Edge Device whose Device ID corresponds to the selected Device Name to stop uploading inference source images and inference results.
-
If the save destination is Local, the saved data is reorganized into the Local Storage file structure.
-
| Query Parameter’s name | Meaning | Range of parameter |
|---|---|---|
| deviceId | Device ID of the Edge Device to stop uploading inference source images and inference results | Not specified |

| Request Body’s name | Meaning | Range of parameter |
|---|---|---|
| subDirectory | Path storing the images or inference results | Not specified |

| Return value | Meaning |
|---|---|
| result | SUCCESS or ERROR string |
-
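A sketch of sending `subDirectory` in the request body of the stopUploadInferenceResult endpoint above. Encoding the body as JSON and using POST are assumptions, since the specification does not state them:

```typescript
// Builds the fetch options; subDirectory travels in the request body
// as shown in the table above (JSON encoding is an assumption).
function stopUploadRequest(subDirectory: string): { method: string; headers: Record<string, string>; body: string } {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ subDirectory }),
  }
}

// Illustrative usage:
async function stopUpload(baseUrl: string, deviceId: string, subDirectory: string): Promise<void> {
  const res = await fetch(`${baseUrl}/api/stopUploadInferenceResult/${deviceId}`, stopUploadRequest(subDirectory))
  const body = await res.json()
  if (body.result !== 'SUCCESS') throw new Error('stopUploadInferenceResult returned ERROR')
}
```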
initializeDirectory
URL : {base_url}/api/initializeDirectory/{deviceId}?{subDirectory}
Creates and initializes a working directory to save data.
-
| Query Parameter’s name | Meaning | Range of parameter |
|---|---|---|
| deviceId | Device ID of the Edge Device uploading inference source images | Not specified |

| Request Body’s name | Meaning | Range of parameter |
|---|---|---|
| subDirectory | Subdirectory where data is stored | Not specified |

| Return value | Meaning |
|---|---|
| None | |
-
saveImage
URL : {base_url}/api/saveImage/{deviceId}?{subDirectory}?{startIndex}?{endIndex}
Saves images from the subdirectory selected by the user.
-
| Query Parameter’s name | Meaning | Range of parameter |
|---|---|---|
| deviceId | Device ID of the Edge Device uploading inference source images | Not specified |

| Request Body’s name | Meaning | Range of parameter |
|---|---|---|
| subDirectory | Subdirectory where data is stored | Not specified |
| startIndex | Start index of the inference source images to save in the subdirectory | Not specified |
| endIndex | End index of the inference source images to save in the subdirectory | Not specified |

| Return value | Meaning |
|---|---|
| None | |
-
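Since saveImage takes a start and end index, a client could split a large subdirectory (see getTotalImageCount) into several calls. The batch size and the inclusive end index below are illustrative assumptions, not spec:

```typescript
// Splits [0, totalImageCount) into (startIndex, endIndex) pairs so that
// a large subdirectory can be saved through several saveImage calls.
// endIndex is treated as inclusive here, which is an assumption.
function imageIndexBatches(totalImageCount: number, batchSize: number): Array<[number, number]> {
  const batches: Array<[number, number]> = []
  for (let start = 0; start < totalImageCount; start += batchSize) {
    batches.push([start, Math.min(start + batchSize, totalImageCount) - 1])
  }
  return batches
}
```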
saveInferences
URL : {base_url}/api/saveInferences/{deviceId}?{subDirectory}?{aiTask}
Saves the inference results associated with the images in the subdirectory selected by the user.
-
| Query Parameter’s name | Meaning | Range of parameter |
|---|---|---|
| deviceId | Device ID of the Edge Device uploading inference source images | Not specified |

| Request Body’s name | Meaning | Range of parameter |
|---|---|---|
| subDirectory | Subdirectory where data is stored | Not specified |
| aiTask | Type of AI task selected | ObjectDetection, Classification, Segmentation |

| Return value | Meaning |
|---|---|
| None | |
-
createZip
URL : {base_url}/api/createZip
Creates a zip file containing the inference source images and inference results stored in the working directory.
-
| Query Parameter’s name | Meaning | Range of parameter |
|---|---|---|
| None | | |

| Return value | Meaning |
|---|---|
| None | |
-
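createZip and getZipData form a two-step download flow: build the archive, then fetch its bytes. A minimal sketch; the POST method, the local file-name helper, and saving to disk are all assumptions, not part of this specification:

```typescript
import { writeFile } from 'node:fs/promises'

// Hypothetical helper for the local file name; not part of the spec.
function zipFileName(deviceId: string, subDirectory: string): string {
  return `${deviceId}_${subDirectory}.zip`
}

// Illustrative flow: createZip builds the archive in the working
// directory, then getZipData returns its bytes to save locally.
async function downloadZip(baseUrl: string, deviceId: string, subDirectory: string): Promise<void> {
  await fetch(`${baseUrl}/api/createZip`, { method: 'POST' }) // POST is an assumption
  const res = await fetch(`${baseUrl}/api/getZipData`)
  if (!res.ok) throw new Error(`getZipData failed: ${res.status}`)
  const zipData = Buffer.from(await res.arrayBuffer())
  await writeFile(zipFileName(deviceId, subDirectory), zipData)
}
```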
createOverlaidImage
URL : {base_url}/api/createOverlaidImage/{deviceId}
Saves images by overlaying inference results/timestamp on the inference source images saved in the working directory.
-
| Query Parameter’s name | Meaning | Range of parameter |
|---|---|---|
| deviceId | Device ID of the Edge Device uploading inference source images | Not specified |

| Request Body’s name | Meaning | Range of parameter |
|---|---|---|
| subDirectory | Subdirectory where data is stored | Not specified |
| aiTask | Type of AI task selected | ObjectDetection, Classification, Segmentation |
| isDisplayTs | Displays/Hides timestamps | true, false |
| labelData | Inference results display label set | Not specified |
| probability | Probability threshold to display | 0~100 |
| isOverlayIR | Displays/Hides information with the highest score of the inference result | true, false |
| overlayIRC | Display color for information with the highest score of the inference result | Not specified |
| labelListData | List of inference results display label sets | Not specified |
| transparency | Transparency of inference results | 0~100 |

| Return value | Meaning |
|---|---|
| None | |
-
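The request body above mixes booleans and 0~100 percentage fields; a type for it plus a client-side clamp keeps out-of-range values from reaching the API. The type and the clamping are illustrative safeguards, not spec behaviour:

```typescript
// Request-body shape mirroring the createOverlaidImage table above.
type OverlayRequest = {
  subDirectory: string
  aiTask: 'ObjectDetection' | 'Classification' | 'Segmentation'
  isDisplayTs: boolean
  labelData?: unknown
  probability: number // 0~100
  isOverlayIR: boolean
  overlayIRC?: string
  labelListData?: unknown
  transparency: number // 0~100
}

// Clamps a value into the 0~100 range used by probability/transparency.
function clampPercent(value: number): number {
  return Math.min(100, Math.max(0, value))
}
```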
UI response time is 1.2 seconds or less.
-
If processing takes more than 5 seconds, the display can be updated sequentially while processing continues.
-
Semantic Segmentation is the only type of Segmentation supported at this time.
-
Display label settings for inference results use the following JSON format (see the examples below).
-
If the key name is different from the example, an error occurs during import.
-
Labels must be in the order of class ID as detected by the AI model.
-
Example for Object Detection/Classification
{ "label": [ "Label_1", "Label_2", "Label_3" ] }
-
Example for Segmentation
{
  "labelList": [
    { "isVisible": true, "label": "Label_1", "color": "#000000" },
    { "isVisible": false, "label": "Label_2", "color": "#0000ff" },
    { "isVisible": true, "label": "Label_3", "color": "#ff0000" }
  ]
}
-
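Because a wrong key name causes an error during import, a client can validate the label file before use. A sketch of a type guard for the Object Detection/Classification format shown above (the function name is illustrative):

```typescript
// Returns true only when the data matches the { "label": [...] } format
// above: a "label" key holding an array of strings. A different key name
// (e.g. "labels") fails, matching the import error the spec warns about.
function isValidLabelData(data: unknown): data is { label: string[] } {
  return (
    typeof data === 'object' &&
    data !== null &&
    Array.isArray((data as { label?: unknown }).label) &&
    (data as { label: unknown[] }).label.every((l) => typeof l === 'string')
  )
}
```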
-
Go to the "Console" UI and set the Command Parameter File to following setting: (as of 2023/12/04)
-
Mode=1(Image&Inference Result)
-
FileFormat="JPG"
-
NumberOfInferencesPerMessage=1
-
When using Realtime Mode, UploadInterval must be smaller than the polling interval.
-
Other parameters need to be changed depending on the AI model and application content.
-
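The settings above correspond to entries in the Command Parameter File. The following is only a rough sketch of such a file: the `commands`/`command_name` wrapper, the `UploadInterval` value, and any keys not listed above are assumptions that depend on the Edge Device firmware, so confirm the exact schema against the "Console" documentation.

```json
{
  "commands": [
    {
      "command_name": "StartUploadInferenceData",
      "parameters": {
        "Mode": 1,
        "FileFormat": "JPG",
        "NumberOfInferencesPerMessage": 1,
        "UploadInterval": 30
      }
    }
  ]
}
```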
-
If you select an Edge Device that does not have an AI model or application deployed, it will not work properly at runtime.
-
If you select an AI task that differs from the output data of the AI model, it may result in an error or incorrect data.
-
If there are more than 1000 images in a subdirectory in History Mode or Realtime Mode, the playback order of the images may be disturbed.
-
The file structure when Local HTTP Server saves data in Local Storage is as follows.
Device ID
∟image
 ∟yyyyMMddHHmmssfff (1)
  ∟yyyyMMddHHmmssfff.jpg (2)
  ∟yyyyMMddHHmmssfff.jpg (2)
∟meta
 ∟yyyyMMddHHmmssfff (1)
  ∟yyyyMMddHHmmssfff.txt (3)
  ∟yyyyMMddHHmmssfff.txt (3)

(1) Inference start time
(2) Pre-inference image file (file name is the pre-inference image output time)
(3) Inference results file (file name is the inference results output time)
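Both directory and file names follow the yyyyMMddHHmmssfff timestamp convention described above, so a client can recover the capture time from a file name. A sketch (the function name is illustrative):

```typescript
// Parses a yyyyMMddHHmmssfff name (e.g. "20231222103015123") into a
// JavaScript Date in local time, per the naming convention above.
function parseTimestampFileName(name: string): Date {
  const m = /^(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})(\d{3})$/.exec(name)
  if (!m) throw new Error(`not a yyyyMMddHHmmssfff name: ${name}`)
  const [, y, mo, d, h, mi, s, ms] = m
  return new Date(Number(y), Number(mo) - 1, Number(d), Number(h), Number(mi), Number(s), Number(ms))
}
```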
-
The data file structure when using Azure Blob Storage/Amazon S3 is as follows.
Device ID
∟image
 ∟yyyyMMddHHmmssfff (1)
  ∟yyyyMMddHHmmssfff.jpg (2)
  ∟yyyyMMddHHmmssfff.jpg (2)
∟metadata
 ∟yyyyMMddHHmmssfff (1)
  ∟yyyyMMddHHmmssfff.txt (3)
  ∟yyyyMMddHHmmssfff.txt (3)

(1) Inference start time
(2) Pre-inference image file (file name is the pre-inference image output time)
(3) Inference results file (file name is the inference results output time)
-
If an error occurs when the [Stop Upload] button is clicked while using Local Storage, the process that transfers images and inference results is not executed.
If the [Start Upload] button is then clicked in this state, the previously uploaded data and the data to be fetched next are mixed together, so the following action is required:
Either transfer or delete the images and inference results in LOCAL_ROOT.
-
-
When "SERVICE.Console" is specified for CONNECTION_DESTINATION, only images and inference results up to 10 hours after the inference start time can be acquired in History Mode.
-
If the [Stop Upload] button becomes unavailable, the inference stop operation must be performed from the "Console" UI.
-
Image uploads from Edge Devices to the "Console" can experience delays of up to several minutes.