
Fall and wave detection ROS nodes #423

Merged: 44 commits into `develop` from `fall-wave-ros-nodes`, Apr 20, 2023

Commits (44 total; changes shown from 27 commits)
0e8b30e
Overhauled fall detection node to be able to run on pose messages
tsampazk Mar 14, 2023
070f71b
Added to_ros_box in bridge, needed for overhauled fall detection node
tsampazk Mar 14, 2023
0b64bb8
Updated fall detection node section
tsampazk Mar 14, 2023
7e358d4
Minor fix ros1 fall node
tsampazk Mar 28, 2023
cc5af60
Added to_rox_box in ros2 bridge
tsampazk Mar 28, 2023
6fb3148
Updated ros2 fall detection node
tsampazk Mar 28, 2023
15526b4
Initial version of ros1 wave detection node
tsampazk Mar 28, 2023
4159ee1
Renamed class
tsampazk Mar 28, 2023
91f57ad
Added performance to ros1 wave detection node
tsampazk Mar 29, 2023
16bf750
Minor fixes in fall_detection_node.py
tsampazk Mar 29, 2023
9ab7b8b
Refactored wave_detection_node.py to work similar to fall detection node
tsampazk Mar 29, 2023
1c4af42
Applied minor fixes to ros2 fall_detection_node.py
tsampazk Mar 29, 2023
62e7a9b
Removed unused import
tsampazk Mar 29, 2023
cc65aa7
Fall detection ros1 - visualization mode now publishes bboxes too
tsampazk Apr 4, 2023
73838b7
Fall detection ros2 - visualization mode now publishes bboxes too, fi…
tsampazk Apr 4, 2023
3a031c8
Fall detection ros1 doc minor updates
tsampazk Apr 4, 2023
c217c17
Fall detection ros2 doc updated for newly updated node
tsampazk Apr 4, 2023
ad6f1cb
Wave detection ros1, added missing docstring and wave detection messa…
tsampazk Apr 4, 2023
d9ab291
Added wave detection entry in node index list
tsampazk Apr 4, 2023
8601a18
Added wave detection section entry and fixed minor thing in fall dete…
tsampazk Apr 4, 2023
c86eaa0
Fixed broken link
tsampazk Apr 4, 2023
799ba1d
Added ros2 wave detection node entry in setup.py
tsampazk Apr 4, 2023
831afb8
Added new ros2 wave detection node
tsampazk Apr 4, 2023
82bb00e
Fixed broken link
tsampazk Apr 4, 2023
88efb31
Added wave detection entry in node index list
tsampazk Apr 4, 2023
460e88b
Added wave detection section entry and fixed minor thing in fall dete…
tsampazk Apr 4, 2023
44067ee
Removed unused import
tsampazk Apr 4, 2023
fce47fb
Fixed default ctor argument and simplified if/else as suggested by re…
tsampazk Apr 11, 2023
b75d464
Fixed default ctor argument as suggested by review
tsampazk Apr 11, 2023
e1288bb
Fixed default ctor argument as suggested by review
tsampazk Apr 11, 2023
695f1a1
Fixes as suggested by review
tsampazk Apr 11, 2023
2fdab8e
Re-arranged docstring to match actual order of arguments
tsampazk Apr 11, 2023
e61de5c
Added performance to fall detection ROS1 node
tsampazk Apr 11, 2023
4da20fe
Fixed performance topic name
tsampazk Apr 11, 2023
61ec270
Added performance topic arg entries for wave and fall detection nodes
tsampazk Apr 11, 2023
2aca60c
Re-arranged docstring to match actual order of arguments
tsampazk Apr 11, 2023
96fb809
Added performance to fall detection ROS2 node
tsampazk Apr 11, 2023
b60cb26
Added performance topic arg entries for wave and fall detection nodes
tsampazk Apr 11, 2023
41713e6
Fixed wrong publisher argument in ros2 wave/fall nodes
tsampazk Apr 12, 2023
f365f42
Merge branch 'develop' into fall-wave-ros-nodes
tsampazk Apr 12, 2023
1901f10
Fixed fall_detection_node.py performance measurement
tsampazk Apr 18, 2023
25059a8
Fixed wave_detection_node.py performance measurement
tsampazk Apr 18, 2023
620ac58
Fixed ROS2 fall_detection_node.py performance measurement
tsampazk Apr 18, 2023
e5e7754
Fixed ROS2 wave_detection_node.py performance measurement
tsampazk Apr 18, 2023
25 changes: 13 additions & 12 deletions projects/opendr_ws/README.md
@@ -69,18 +69,19 @@ Currently, apart from tools, opendr_ws contains the following ROS nodes (categor
1. [Pose Estimation](src/opendr_perception/README.md#pose-estimation-ros-node)
2. [High Resolution Pose Estimation](src/opendr_perception/README.md#high-resolution-pose-estimation-ros-node)
3. [Fall Detection](src/opendr_perception/README.md#fall-detection-ros-node)
4. [Wave Detection](src/opendr_perception/README.md#wave-detection-ros-node)
5. [Face Detection](src/opendr_perception/README.md#face-detection-ros-node)
6. [Face Recognition](src/opendr_perception/README.md#face-recognition-ros-node)
7. [2D Object Detection](src/opendr_perception/README.md#2d-object-detection-ros-nodes)
8. [2D Single Object Tracking](src/opendr_perception/README.md#2d-single-object-tracking-ros-node)
9. [2D Object Tracking](src/opendr_perception/README.md#2d-object-tracking-ros-nodes)
10. [Vision Based Panoptic Segmentation](src/opendr_perception/README.md#vision-based-panoptic-segmentation-ros-node)
11. [Semantic Segmentation](src/opendr_perception/README.md#semantic-segmentation-ros-node)
12. [Binary High Resolution](src/opendr_perception/README.md#binary-high-resolution-ros-node)
13. [Image-based Facial Emotion Estimation](src/opendr_perception/README.md#image-based-facial-emotion-estimation-ros-node)
14. [Landmark-based Facial Expression Recognition](src/opendr_perception/README.md#landmark-based-facial-expression-recognition-ros-node)
15. [Skeleton-based Human Action Recognition](src/opendr_perception/README.md#skeleton-based-human-action-recognition-ros-nodes)
16. [Video Human Activity Recognition](src/opendr_perception/README.md#video-human-activity-recognition-ros-node)

## RGB + Infrared input
1. [End-to-End Multi-Modal Object Detection (GEM)](src/opendr_perception/README.md#2d-object-detection-gem-ros-node)
@@ -196,6 +196,28 @@ def from_ros_face(self, ros_hypothesis):
confidence=ros_hypothesis.score)
return category

def to_ros_box(self, box):
"""
Converts an OpenDR BoundingBox into a Detection2D that can carry the same information.
The bounding box is represented by its center coordinates as well as its width/height dimensions.
:param box: OpenDR bounding box to be converted
:type box: engine.target.BoundingBox
:return: ROS message with the Detection2D including the bounding box
:rtype: vision_msgs.msg.Detection2D
"""
ros_box = Detection2D()
ros_box.bbox = BoundingBox2D()
ros_box.results.append(ObjectHypothesisWithPose())
ros_box.bbox.center = Pose2D()
ros_box.bbox.center.x = box.left + box.width / 2.
ros_box.bbox.center.y = box.top + box.height / 2.
ros_box.bbox.size_x = box.width
ros_box.bbox.size_y = box.height
ros_box.results[0].id = int(box.name)
if box.confidence:
ros_box.results[0].score = box.confidence
return ros_box

def to_ros_boxes(self, box_list):
"""
Converts an OpenDR BoundingBoxList into a Detection2DArray msg that can carry the same information.
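The geometry that `to_ros_box` performs — converting a top-left/width/height box into the center/size representation that `Detection2D.bbox` expects — can be sketched independently of ROS. The `Box` class below is a hypothetical stand-in for `engine.target.BoundingBox`, kept only to make the arithmetic runnable:

```python
# Hypothetical stand-in for engine.target.BoundingBox:
# a box given by its top-left corner plus width/height.
class Box:
    def __init__(self, left, top, width, height):
        self.left, self.top = left, top
        self.width, self.height = width, height


def to_center_form(box):
    """Return (center_x, center_y, size_x, size_y), mirroring how
    to_ros_box fills Detection2D.bbox.center and bbox.size_x/size_y."""
    center_x = box.left + box.width / 2.0
    center_y = box.top + box.height / 2.0
    return center_x, center_y, box.width, box.height


# A 100x50 box with its top-left corner at (10, 20) is centered at (60, 45).
print(to_center_form(Box(10, 20, 100, 50)))  # (60.0, 45.0, 100, 50)
```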
1 change: 1 addition & 0 deletions projects/opendr_ws/src/opendr_perception/CMakeLists.txt
@@ -31,6 +31,7 @@ catkin_install_python(PROGRAMS
scripts/pose_estimation_node.py
scripts/hr_pose_estimation_node.py
scripts/fall_detection_node.py
scripts/wave_detection_node.py
scripts/object_detection_2d_nanodet_node.py
scripts/object_detection_2d_yolov5_node.py
scripts/object_detection_2d_detr_node.py
111 changes: 99 additions & 12 deletions projects/opendr_ws/src/opendr_perception/README.md
@@ -120,32 +120,119 @@ The node publishes the detected poses in [OpenDR's 2D pose message format](../op

You can find the fall detection ROS node python script [here](./scripts/fall_detection_node.py) to inspect the code and modify it as you wish to fit your needs.
The node makes use of the toolkit's [fall detection tool](../../../../src/opendr/perception/fall_detection/fall_detector_learner.py) whose documentation can be found [here](../../../../docs/reference/fall-detection.md).
Fall detection is rule-based and works on top of pose estimation.

This node normally runs in `detection mode`, where it subscribes to a topic of OpenDR poses and detects whether the corresponding persons have fallen.
By providing an input image topic, the node runs in `visualization mode`: it also receives images, performs pose estimation internally and visualizes the output on an output image topic.
Note that when an image topic is provided, the node is significantly slower, because it runs pose estimation internally.

- #### Instructions for basic usage in `detection mode`:

1. Start the node responsible for publishing poses. Refer to the [pose estimation node above](#pose-estimation-ros-node).

2. You are then ready to start the fall detection node:

```shell
rosrun opendr_perception fall_detection_node.py
```
The following optional arguments are available and relevant for running fall detection on pose messages only:
- `-h or --help`: show a help message and exit
- `-ip or --input_pose_topic INPUT_POSE_TOPIC`: topic name for input pose, `None` to stop the node from running detections on pose messages (default=`/opendr/poses`)
- `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages (default=`/opendr/fallen`)

3. Detections are published on the `detections_topic`
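Since fall detection is rule-based on top of pose estimation, the idea can be illustrated with a toy heuristic. This is illustrative only and is not the actual rule implemented by the toolkit's fall detection tool; the keypoints used and the angle threshold are assumptions:

```python
# Illustrative only: a naive rule in the spirit of pose-based fall detection.
# The toolkit's FallDetectorLearner uses its own, more involved rule; this
# sketch simply flags a pose whose torso is closer to horizontal than vertical.
import math


def torso_angle(neck, hip):
    """Angle of the neck->hip segment from the vertical image axis, in degrees.

    Keypoints are (x, y) in image coordinates, where y grows downwards.
    """
    dx, dy = hip[0] - neck[0], hip[1] - neck[1]
    return abs(math.degrees(math.atan2(dx, dy)))


def is_fallen(neck, hip, threshold_deg=45.0):
    """Treat a torso tilted more than threshold_deg from vertical as fallen."""
    return torso_angle(neck, hip) > threshold_deg


standing = is_fallen(neck=(100, 50), hip=(102, 150))  # near-vertical torso
fallen = is_fallen(neck=(50, 100), hip=(150, 110))    # near-horizontal torso
print(standing, fallen)  # False True
```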

- #### Instructions for `visualization mode`:

1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).

2. You are then ready to start the fall detection node in `visualization mode`, which needs an input image topic to be provided:

```shell
rosrun opendr_perception fall_detection_node.py -ii /usb_cam/image_raw
```
The following optional arguments are available and relevant for running fall detection on images. Note that the
`input_rgb_image_topic` is required for running in `visualization mode`:
- `-h or --help`: show a help message and exit
- `-ii or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`None`)
- `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image (default=`/opendr/image_fallen_annotated`)
- `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages (default=`/opendr/fallen`)
- `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
- `--accelerate`: acceleration flag that makes the internally run pose estimation faster, at the cost of some accuracy

- Default output topics:
- Detection messages: `/opendr/fallen`
- Output images: `/opendr/image_fallen_annotated`

For viewing the output, refer to the [notes above.](#notes)

**Notes**

Note that in the default `detection mode` the node is significantly faster than when it is provided with an input image topic; however, pose estimation then needs to be performed externally by another node that publishes poses.
When an input image topic is provided and the node runs in `visualization mode`, it performs pose estimation internally, so it is recommended to use this mode only for testing and not to run other pose estimation nodes in parallel.
The node can run in both modes in parallel or in only one of the two. To run the node in `visualization mode` only, provide the argument `-ip None` to disable `detection mode`. Detection messages on the `detections_topic` are published in both modes.

### Wave Detection ROS Node

You can find the wave detection ROS node python script [here](./scripts/wave_detection_node.py) to inspect the code and modify it as you wish to fit your needs.
The node is based on a [wave detection demo of the Lightweight OpenPose tool](../../../../projects/python/perception/pose_estimation/lightweight_open_pose/demos/wave_detection_demo.py).
Wave detection is rule-based and works on top of pose estimation.

This node normally runs in `detection mode`, where it subscribes to a topic of OpenDR poses and detects whether the corresponding persons are waving.
By providing an input image topic, the node runs in `visualization mode`: it also receives images, performs pose estimation internally and visualizes the output on an output image topic.
Note that when an image topic is provided, the node is significantly slower, because it runs pose estimation internally.

- #### Instructions for basic usage in `detection mode`:

1. Start the node responsible for publishing poses. Refer to the [pose estimation node above](#pose-estimation-ros-node).

2. You are then ready to start the wave detection node:

```shell
rosrun opendr_perception wave_detection_node.py
```
The following optional arguments are available and relevant for running wave detection on pose messages only:
- `-h or --help`: show a help message and exit
- `-ip or --input_pose_topic INPUT_POSE_TOPIC`: topic name for input pose, `None` to stop the node from running detections on pose messages (default=`/opendr/poses`)
- `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages (default=`/opendr/wave`)

3. Detections are published on the `detections_topic`
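As with fall detection, the rule-based idea behind wave detection can be sketched with a toy heuristic. This is illustrative only and not the actual implementation of the Lightweight OpenPose wave detection demo; the keypoints tracked and the direction-change criterion are assumptions:

```python
# Illustrative only: a naive waving rule. A hand is considered waving when the
# wrist stays above the shoulder and its x coordinate changes direction
# several times over the recent frames of a tracked pose.
def is_waving(wrist_xy_history, shoulder_y, min_direction_changes=2):
    """wrist_xy_history: list of (x, y) wrist positions over recent frames.

    Image coordinates: smaller y means higher in the frame.
    """
    if any(y >= shoulder_y for _, y in wrist_xy_history):
        return False  # wrist dropped below the shoulder at some point
    xs = [x for x, _ in wrist_xy_history]
    # Horizontal movement between consecutive frames, ignoring still frames.
    deltas = [b - a for a, b in zip(xs, xs[1:]) if b != a]
    # Count sign flips, i.e. left<->right direction changes.
    changes = sum(1 for a, b in zip(deltas, deltas[1:]) if (a > 0) != (b > 0))
    return changes >= min_direction_changes


# Wrist kept above the shoulder (y=120) while swinging left-right-left.
track = [(100, 80), (130, 82), (110, 81), (140, 79), (115, 80)]
print(is_waving(track, shoulder_y=120))  # True
```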

- #### Instructions for `visualization mode`:

1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).

2. You are then ready to start the wave detection node in `visualization mode`, which needs an input image topic to be provided:

```shell
rosrun opendr_perception wave_detection_node.py -ii /usb_cam/image_raw
```
The following optional arguments are available and relevant for running wave detection on images. Note that the
`input_rgb_image_topic` is required for running in `visualization mode`:
- `-h or --help`: show a help message and exit
- `-ii or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`None`)
- `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image (default=`/opendr/image_wave_annotated`)
- `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages (default=`/opendr/wave`)
- `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
- `--accelerate`: acceleration flag that makes the internally run pose estimation faster, at the cost of some accuracy

- Default output topics:
- Detection messages: `/opendr/wave`
- Output images: `/opendr/image_wave_annotated`

For viewing the output, refer to the [notes above.](#notes)

**Notes**

Note that in the default `detection mode` the node is significantly faster than when it is provided with an input image topic; however, pose estimation then needs to be performed externally by another node that publishes poses.
When an input image topic is provided and the node runs in `visualization mode`, it performs pose estimation internally, so it is recommended to use this mode only for testing and not to run other pose estimation nodes in parallel.
The node can run in both modes in parallel or in only one of the two. To run the node in `visualization mode` only, provide the argument `-ip None` to disable `detection mode`. Detection messages on the `detections_topic` are published in both modes.

### Face Detection ROS Node
