It uses argparse like the node in the link you provided in 1. Through the arguments, the input image topic, the output image topic, as well as the detection message topic can be modified, along with the device (`cpu`, `cuda`). Specific to the pose estimation node there is also an `accelerate` option that makes the node run faster but less accurately.
This can serve as a template for the other ROS1/2 nodes: use argparse with specific argument names and default values for the input and output topics, with minor modifications depending on the node, while keeping consistency across all of them.
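To make the template concrete, here is a hypothetical sketch of the argparse pattern described above. The argument names, default topic names, and node structure are illustrative assumptions, not the actual OpenDR code; a real node would initialize rospy and wire the parsed topics into subscribers/publishers.

```python
# Hypothetical sketch of the argparse template for an OpenDR-style ROS node.
# All argument names and default topic values here are illustrative.
import argparse


def build_parser():
    parser = argparse.ArgumentParser(description="Example OpenDR-style ROS node")
    parser.add_argument("--input_image_topic", type=str, default="/usb_cam/image_raw",
                        help="Topic to subscribe to for input images")
    parser.add_argument("--output_image_topic", type=str, default="/opendr/image_annotated",
                        help="Topic on which annotated images are published")
    parser.add_argument("--detections_topic", type=str, default="/opendr/detections",
                        help="Topic on which detection messages are published")
    parser.add_argument("--device", type=str, default="cuda", choices=["cpu", "cuda"],
                        help="Device to run inference on")
    # Node-specific extras keep the same style, e.g. the pose estimation node's flag:
    parser.add_argument("--accelerate", action="store_true",
                        help="Run faster at the cost of accuracy (pose estimation only)")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    # A real node would now call rospy.init_node(...), subscribe to
    # args.input_image_topic, and publish on the two output topics.
    print(args.device)
```

Keeping the parser construction in a function makes the defaults easy to inspect and test, and each node only adds its own extra flags on top of the shared ones.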
To make the toolkit more homogeneous we would need to choose between these two options:
Pass the input topics as remapping arguments to rosrun, e.g. `rosrun perception test.py input_topic:=/input`. This option would also require creating a roslaunch file in which the input topics are set.
For example, this node uses this method: https://github.com/opendr-eu/opendr/blob/tx2_install/projects/opendr_ws/src/perception/scripts/speech_command_recognition.py
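For the first option, the roslaunch file would look roughly like the sketch below. The package, node, and topic names are illustrative assumptions; the point is that users edit the `<remap>` entries (or override them on the rosrun command line) rather than the node's source.

```xml
<!-- Hypothetical launch file illustrating option 1 (topic remapping).
     Package, node, and topic names are examples, not the actual OpenDR ones. -->
<launch>
  <node pkg="perception" type="speech_command_recognition.py"
        name="speech_command_recognition" output="screen">
    <!-- Remap the node's default input topic to the one used on this robot -->
    <remap from="input_topic" to="/audio/audio"/>
  </node>
</launch>
```

The same effect can be had ad hoc without the launch file, e.g. `rosrun perception speech_command_recognition.py input_topic:=/audio/audio`.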
Use ROS parameters: each node would read its input and output topics from rosparams. This option would involve creating a config.yaml file once, containing all the parameters needed to set up the ROS params.
For example, this node uses rosparam: https://github.com/opendr-eu/opendr/blob/tx2_install/projects/opendr_ws/src/perception/scripts/object_detection_2d_centernet.py
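For the second option, the config.yaml could look like the sketch below. The namespace and parameter names are illustrative assumptions; the file would be loaded once from a launch file via `<rosparam command="load" file="..."/>`, and each node would then read its values with `rospy.get_param()`.

```yaml
# Hypothetical config.yaml sketch for option 2 (ROS parameters).
# Namespaces and parameter names are illustrative, not the actual OpenDR ones.
object_detection_2d_centernet:
  input_image_topic: /usb_cam/image_raw
  output_image_topic: /opendr/image_boxes_annotated
  detections_topic: /opendr/objects
  device: cuda
```

Inside the node, a private parameter lookup such as `rospy.get_param("~input_image_topic", "/usb_cam/image_raw")` keeps a sensible default while letting users override everything from one file.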
For now, the ROS nodes in OpenDR use a mix of both of those options.
The second option would be a bit better.