Apply suggestions from code review
ad-daniel authored Apr 27, 2022
1 parent 46b50a0 commit 6379098
Showing 7 changed files with 24 additions and 19 deletions.
3 changes: 2 additions & 1 deletion docs/reference/object-detection-2d-centernet.md
@@ -234,7 +234,8 @@ In terms of speed, the performance of CenterNet is summarized in the table below
|---------|----------|-----|-----|
| CenterNet | 88 | 19 | 14 |

Apart from the inference speed, we also report the memory usage, as well as energy consumption on a reference platform in the Table below. The measurement was made on a Jetson TX2 module.
Apart from the inference speed, we also report the memory usage, as well as energy consumption on a reference platform in the Table below.
The measurement was made on a Jetson TX2 module.
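
As a rough illustration of how such figures can be collected on a Jetson board, the sketch below samples `tegrastats` while the detector runs and converts the mean power draw into energy per inference. This is only a sketch of an assumed setup: the reported field names (`RAM`, `VDD_IN`) vary across Jetson modules and JetPack versions, and the FPS value is a placeholder rather than a number from the tables.

```python
import re
import subprocess
import time

def sample_tegrastats(duration_s=10.0, interval_ms=500):
    """Sample RAM usage (MB) and total power draw (mW) via `tegrastats`.

    Illustrative sketch only: assumes the tool prints "RAM <used>/<total>MB"
    and an input-power rail such as "VDD_IN <cur>/<avg>" in milliwatts;
    the exact field names differ between Jetson modules and JetPack versions.
    """
    proc = subprocess.Popen(["tegrastats", "--interval", str(interval_ms)],
                            stdout=subprocess.PIPE, text=True)
    ram_mb, power_mw = [], []
    t_end = time.time() + duration_s
    for line in proc.stdout:
        ram = re.search(r"RAM (\d+)/\d+MB", line)
        pwr = re.search(r"VDD_IN (\d+)/\d+", line)
        if ram:
            ram_mb.append(int(ram.group(1)))
        if pwr:
            power_mw.append(int(pwr.group(1)))
        if time.time() >= t_end:
            break
    proc.terminate()
    return max(ram_mb), sum(power_mw) / len(power_mw)

if __name__ == "__main__":
    # Run while the detector is performing inference in another process.
    peak_ram_mb, mean_power_mw = sample_tegrastats()
    fps = 15.0  # placeholder: measured FPS of the detector on the same device
    energy_per_inference_j = (mean_power_mw / 1000.0) / fps  # watts * seconds per frame
    print(f"Peak RAM: {peak_ram_mb} MB, energy per inference: {energy_per_inference_j:.2f} J")
```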

| Method | Memory (MB) | Energy (Joules) - Total per inference |
|-------------------|---------|-------|
3 changes: 2 additions & 1 deletion docs/reference/object-detection-2d-ssd.md
@@ -219,7 +219,8 @@ In terms of speed, the performance of SSD is summarized in the table below (in FPS)
|---------|----------|-----|-----|
| SSD | 85 | 16 | 27 |

Apart from the inference speed, we also report the memory usage, as well as energy consumption on a reference platform in the Table below. The measurement was made on a Jetson TX2 module.
Apart from the inference speed, we also report the memory usage, as well as energy consumption on a reference platform in the Table below.
The measurement was made on a Jetson TX2 module.

| Method | Memory (MB) | Energy (Joules) - Total per inference |
|-------------------|---------|-------|
3 changes: 2 additions & 1 deletion docs/reference/object-detection-2d-yolov3.md
@@ -234,7 +234,8 @@ In terms of speed, the performance of YOLOv3 is summarized in the table below (in FPS)
|---------|----------|-----|-----|
| YOLOv3 | 50 | 9 | 16 |

Apart from the inference speed, we also report the memory usage, as well as energy consumption on a reference platform in the Table below. The measurement was made on a Jetson TX2 module.
Apart from the inference speed, we also report the memory usage, as well as energy consumption on a reference platform in the Table below.
The measurement was made on a Jetson TX2 module.

| Method | Memory (MB) | Energy (Joules) - Total per inference |
|-------------------|---------|-------|
14 changes: 8 additions & 6 deletions docs/reference/object-tracking-2d-deep-sort.md
@@ -360,12 +360,14 @@ Parameters:
#### Performance Evaluation

The tests were conducted on the following computational devices:
- **Intel(R) Xeon(R) Gold 6230R CPU on server**
- **Nvidia Jetson TX2**
- **Nvidia Jetson Xavier AGX**
- **Nvidia RTX 2080 Ti GPU on server with Intel Xeon Gold processors**

Inference time is measured as the time taken to transfer the input to the model (e.g., from CPU to GPU), run inference using the algorithm, and return results to CPU. Inner FPS refers to the speed of the model when the data is ready. We report FPS (single sample per inference) as the mean of 100 runs.
- Intel(R) Xeon(R) Gold 6230R CPU on server
- Nvidia Jetson TX2
- Nvidia Jetson Xavier AGX
- Nvidia RTX 2080 Ti GPU on server with Intel Xeon Gold processors

Inference time is measured as the time taken to transfer the input to the model (e.g., from CPU to GPU), run inference using the algorithm, and return results to CPU.
Inner FPS refers to the speed of the model when the data is ready.
We report FPS (single sample per inference) as the mean of 100 runs.
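
A minimal sketch of this timing protocol, assuming a PyTorch model that returns a single tensor (the `model` and `sample_cpu` names are placeholders, not toolkit API), could look like the following:

```python
import time
import torch

@torch.no_grad()
def benchmark(model, sample_cpu, device="cuda", runs=100):
    """Sketch of the protocol above: 'full' time covers the CPU->GPU transfer,
    the forward pass and the copy of the results back to the CPU; 'inner' time
    covers only the forward pass once the data is already on the device."""
    def sync():
        if str(device).startswith("cuda"):
            torch.cuda.synchronize()          # wait for queued GPU work to finish

    model = model.to(device).eval()
    full_s, inner_s = [], []
    for _ in range(runs):
        t0 = time.perf_counter()
        sample = sample_cpu.to(device)        # transfer the input to the device
        sync()
        t1 = time.perf_counter()
        out = model(sample)                   # inference with the data already on the device
        sync()
        t2 = time.perf_counter()
        result = out.cpu()                    # return the results to the CPU
        t3 = time.perf_counter()
        full_s.append(t3 - t0)
        inner_s.append(t2 - t1)
    # FPS and inner FPS from the mean latency over all runs (single sample per run)
    return runs / sum(full_s), runs / sum(inner_s)
```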

Full FPS Evaluation of DeepSORT and FairMOT on MOT20 dataset
| Model | TX2 (FPS) | Xavier (FPS) | RTX 2080 Ti (FPS) |
8 changes: 4 additions & 4 deletions docs/reference/object-tracking-2d-fair-mot.md
@@ -418,10 +418,10 @@ Parameters:
#### Performance Evaluation

The tests were conducted on the following computational devices:
- **Intel(R) Xeon(R) Gold 6230R CPU on server**
- **Nvidia Jetson TX2**
- **Nvidia Jetson Xavier AGX**
- **Nvidia RTX 2080 Ti GPU on server with Intel Xeon Gold processors**
- Intel(R) Xeon(R) Gold 6230R CPU on server
- Nvidia Jetson TX2
- Nvidia Jetson Xavier AGX
- Nvidia RTX 2080 Ti GPU on server with Intel Xeon Gold processors

Inference time is measured as the time taken to transfer the input to the model (e.g., from CPU to GPU), run inference using the algorithm, and return results to CPU.
Inner FPS refers to the speed of the model when the data is ready.
9 changes: 4 additions & 5 deletions docs/reference/object-tracking-3d-ab3dmot.md
@@ -119,10 +119,10 @@ Parameters:
#### Performance Evaluation

The tests were conducted on the following computational devices:
- **Intel(R) Xeon(R) Gold 6230R CPU on server**
- **Nvidia Jetson TX2**
- **Nvidia Jetson Xavier AGX**
- **Nvidia RTX 2080 Ti GPU on server with Intel Xeon Gold processors**
- Intel(R) Xeon(R) Gold 6230R CPU on server
- Nvidia Jetson TX2
- Nvidia Jetson Xavier AGX
- Nvidia RTX 2080 Ti GPU on server with Intel Xeon Gold processors

Inference time is measured as the time taken to transfer the input to the model (e.g., from CPU to GPU), run inference using the algorithm, and return results to CPU. Inner FPS refers to the speed of the model when the data is ready. We report FPS (single sample per inference) as the mean of 100 runs.

@@ -148,7 +148,6 @@ AB3DMOT platform compatibility evaluation.
| NVIDIA Jetson Xavier AGX | Pass |



#### References
<a name="#object-tracking-3d-1" href="https://arxiv.org/abs/2008.08063">[1]</a> AB3DMOT: A Baseline for 3D Multi-Object Tracking and New Evaluation Metrics,
[arXiv](https://arxiv.org/abs/2008.08063).
3 changes: 2 additions & 1 deletion docs/reference/semantic-segmentation.md
@@ -234,7 +234,8 @@ In terms of speed, the performance of BiseNet for different input sizes is summarized in the table below
|1024x1024 |49.11 |3.03 |5.78 |11.02|
|1024x2048 |25.07 |1.50 |2.77 |5.44 |

Apart from the inference speed, we also report the memory usage, as well as energy consumption on a reference platform in the Table below. The measurement was made on a Jetson TX2 module.
Apart from the inference speed, we also report the memory usage, as well as energy consumption on a reference platform in the Table below.
The measurement was made on a Jetson TX2 module.

| Method | Memory (MB) | Energy (Joules) |
|---------|-------------|-----------------|