Third assignment of Research Track I, regarding the implementation of a robot capable of moving, autonomously or with manual commands, inside a room while avoiding obstacles.

marcomacchia99/SLAM_Robot

 
 


Assignment 3 (final assignment) - Research Track 1


Introduction

The final assignment deals with a robot moving in an initially unknown environment. Depending on the selected mode, the robot can drive autonomously to a given goal, be driven by the user, or be driven by the user with assistance to avoid collisions.

Installing and running

The simulator requires ROS (Robot Operating System) to be installed on the machine. In particular, the Noetic release of ROS was used.

This particular simulation requires a few extra elements: the slam_gmapping package, the ROS navigation stack, and xterm.

If you don't already have them, you can run the following commands:

$ git clone https://github.com/CarmineD8/slam_gmapping.git
$ sudo apt-get install ros-<your_ros_distro>-navigation
$ sudo apt install xterm

After doing that, you are ready to launch the simulation! A launch file, called launcher.launch, is provided to run all the required nodes.

Here is its structure:

<launch>
    <include file="$(find RT1_Assignment3)/launch/simulation_gmapping.launch"/>
    <include file="$(find RT1_Assignment3)/launch/move_base.launch"/>
    <node pkg="RT1_Assignment3" type="mainController" name="mainController" output="screen" required="true" launch-prefix="xterm -fa 'Monospace' -fs 11 -e"/>
</launch>

Notice that the launch-prefix of the mainController node sets the font family and the font size of the xterm console; you are completely free to change it!

Simulation environment

After launching the simulation using the provided commands two programs will open, Gazebo and Rviz.

Gazebo is an open-source 3D robot simulator. Here's the simulation view from Gazebo:

[Gazebo simulation view]

ROS generates the environment based on the file house.world, stored in the world folder.

Initially the robot knows only what it can see. Here is an image showing its initial known map:

[Initial map in Rviz]

After some time the robot has explored and mapped all the surrounding walls using its laser scanner.

We can now see the full map in Rviz, as shown below:

[Full map in Rviz]

MainController node

The mainController node is the first node, spawned by the launcher.launch file. It simply prints some instructions in the xterm console, then it detects and interprets the user's input.

The user can:

  • 1 - Autonomously reach a given position
  • 2 - Drive the robot with the keyboard
  • 3 - Drive the robot with the keyboard with automatic collision avoidance
  • 4 - Reset simulation
  • 0 - Exit from the program

This simulation includes a non-blocking getchar function, which speeds up the program execution and improves the user experience.

I found this function in the teleop_twist_keyboard_cpp repository; it temporarily edits the terminal settings in order to immediately capture what the user types.

Remember that the termios.h library is required, so don't remove it!
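As a rough sketch of how such a non-blocking getchar typically works (makeRaw and getch are illustrative names, not necessarily those used in the repository):

```cpp
#include <cstdio>
#include <termios.h>
#include <unistd.h>

// Hypothetical helper: return a copy of `t` with canonical mode and echo
// disabled, so a read returns as soon as a single key is pressed.
termios makeRaw(termios t) {
    t.c_lflag &= ~(ICANON | ECHO); // no line buffering, no echo
    t.c_cc[VMIN] = 1;              // read returns after 1 byte
    t.c_cc[VTIME] = 0;             // no inter-byte timeout
    return t;
}

// Sketch of the non-blocking getchar: save the terminal state, switch to
// raw mode, read one character, then restore the original settings.
int getch() {
    termios oldt;
    tcgetattr(STDIN_FILENO, &oldt);
    termios newt = makeRaw(oldt);
    tcsetattr(STDIN_FILENO, TCSANOW, &newt);
    int c = getchar();
    tcsetattr(STDIN_FILENO, TCSANOW, &oldt); // always restore the terminal
    return c;
}
```

Restoring the saved settings afterwards is what makes the change "temporary": if the program exits without doing so, the shell is left in raw mode.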

Finally, based on the input received, the mainController node runs the selected node using the system() function.

For example, if the number 1 is pressed, this command is executed:

system("rosrun RT1_Assignment3 reachPoint");
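The whole dispatch can be sketched as a simple switch (commandFor is a hypothetical helper; the node names for options 2 and 3 are taken from the sections below, while the reset command is an assumption):

```cpp
#include <cstdlib>
#include <string>

// Hypothetical helper mapping the pressed key to the command that
// mainController would pass to system(). Unknown keys yield an empty string.
std::string commandFor(char choice) {
    switch (choice) {
        case '1': return "rosrun RT1_Assignment3 reachPoint";
        case '2': return "rosrun RT1_Assignment3 driveWithKeyboard";
        case '3': return "rosrun RT1_Assignment3 driveWithKeyboardAssisted";
        // Assumption: the reset option calls the Gazebo reset service.
        case '4': return "rosservice call /gazebo/reset_simulation";
        default:  return "";
    }
}

// In the main loop, a non-empty command is then executed in a child shell:
//   if (!cmd.empty()) system(cmd.c_str());
```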

ReachPoint node

The reachPoint node implements the first required feature: it sets a new goal for the robot according to the user's request.

At startup, the node requests the x and y coordinates of the goal, then it generates a new message of type move_base_msgs/MoveBaseActionGoal. The message is then published on the /move_base/goal topic.

When the message is published, the robot starts looking for a valid path leading to the goal, and then follows it.

During the navigation, the user can at any time:

  • stop the navigation by pressing the q key, or
  • exit the node by pressing CTRL-C

If one of these keys is pressed, a message of type actionlib_msgs/GoalID is generated and published on the /move_base/cancel topic. In particular, every goal is tracked by the node with an id, randomly generated by the node itself using the rand() function, so sending the goal cancel message is quite easy.
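A minimal sketch of this id bookkeeping, using a plain stand-in struct instead of the real actionlib_msgs/GoalID message (the "goal_" id format and the helper names are illustrative):

```cpp
#include <cstdlib>
#include <string>

// Stand-in for the single field of actionlib_msgs/GoalID used here.
struct GoalID { std::string id; };

// The node tracks each goal with a randomly generated id (via rand()).
std::string makeGoalId() {
    return "goal_" + std::to_string(rand());
}

// Cancelling is just publishing a GoalID carrying the same id
// on the /move_base/cancel topic.
GoalID cancelFor(const std::string& goal_id) {
    GoalID cancel;
    cancel.id = goal_id;
    return cancel;
}
```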

In order to know whether the robot has reached the goal or cannot reach it, a /move_base/status message handler is implemented. It continuously checks the messages published on that topic, looking in particular for the status code.

Initially the status code is 1, meaning that the robot is following its path. When the robot stops there are two possibilities: if the code equals 3 (SUCCEEDED), the goal has been successfully reached; otherwise, if the robot can't reach the goal, the status code is set to 4 (ABORTED).

Based on the status code, the node displays the result in the console, then it asks whether the user wants to select a new goal or exit the node.
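The status values above come from actionlib_msgs/GoalStatus; a hypothetical helper translating them into the console messages could look like this (the message strings are illustrative):

```cpp
#include <string>

// Status codes from actionlib_msgs/GoalStatus inspected on
// /move_base/status (only the ones used here are listed).
enum GoalStatus { ACTIVE = 1, SUCCEEDED = 3, ABORTED = 4 };

// Hypothetical helper turning the code into the message shown to the user.
std::string describeStatus(int status) {
    switch (status) {
        case ACTIVE:    return "Robot is following its path";
        case SUCCEEDED: return "Goal successfully reached!";
        case ABORTED:   return "The goal can't be reached";
        default:        return "Unknown status";
    }
}
```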

DriveWithKeyboard node

The driveWithKeyboard node lets the user drive the robot using the keyboard.

Here, I decided to edit the teleop_twist_keyboard node, starting from the .cpp version which I found in the teleop_twist_keyboard_cpp repository.

In particular, I cleaned up the user interface and added the possibility to reset the linear and angular speed. There is also a new option to safely quit the execution using the well-known CTRL-C combination.

The node simply checks the user's input against the instructions printed in the console, and it publishes the new speed on the /cmd_vel topic.

The speed is computed as the base speed multiplied by the selected direction, an integer in {-1, 0, 1}: 1 means go forward (or turn left), -1 means go backward (or turn right), 0 means stop.

Here are the computations in terms of code:

// define variables for velocity direction
int lin = 0; // linear direction
int ang = 0; // angular direction

vel.angular.z = turn_speed * ang;
vel.linear.x = speed * lin;

The user can use a 3x3 block of keys as a joystick:

|             | Turn left | Don't turn | Turn right |
|-------------|-----------|------------|------------|
| Go forward  | u         | i          | o          |
| Don't go    | j         | k          | l          |
| Go backward | m         | ,          | .          |
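This key layout can be mapped to the (linear, angular) direction pair with a small helper (directionFor is a hypothetical name; the sign convention assumes positive angular z means turning left, as in teleop_twist_keyboard):

```cpp
#include <utility>

// Hypothetical helper mapping a movement key to the (linear, angular)
// direction pair in {-1, 0, 1}, following the 3x3 layout above.
std::pair<int, int> directionFor(char key) {
    switch (key) {
        case 'u': return { 1,  1};   // forward + turn left
        case 'i': return { 1,  0};   // forward
        case 'o': return { 1, -1};   // forward + turn right
        case 'j': return { 0,  1};   // turn left in place
        case 'k': return { 0,  0};   // stop
        case 'l': return { 0, -1};   // turn right in place
        case 'm': return {-1,  1};   // backward + turn left
        case ',': return {-1,  0};   // backward
        case '.': return {-1, -1};   // backward + turn right
        default:  return { 0,  0};   // unknown keys stop the robot
    }
}
```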

Also, the user can set the robot's linear and angular speed, using this set of commands:

|          | Change linear and angular | Change linear only | Change angular only |
|----------|---------------------------|--------------------|---------------------|
| Increase | q                         | w                  | e                   |
| Reset    | a                         | s                  | d                   |
| Decrease | z                         | x                  | c                   |

DriveWithKeyboardAssisted node

The driveWithKeyboardAssisted node, similarly to the node above, lets the user drive the robot using the keyboard, assisting them during the navigation.

In particular, the node reads the same user inputs as the driveWithKeyboard node, but it also checks what the robot's laser scanner sees. To do so, the node subscribes to the /scan topic and uses the received messages to detect walls that are too close to the robot. Each message contains 720 ranges holding all the detected distances; the sensor covers the angles from -90 to 90 degrees, so each range spans a quarter of a degree.

After a message from /scan is received, the node enters the checkWalls function, which filters the ranges, keeping only those from:

  • -90° to -55°, referring to the walls on the right,
  • -17.5° to 17.5°, referring to the walls in front of the robot,
  • 55° to 90°, referring to the walls on the left.

The function then checks the minimum distance inside these ranges, and if a wall is closer than wall_th = 1 (meter) it prevents the robot from getting too close to it. In particular, if the front wall is too close the robot can't advance, while if a wall on the left or on the right is too close the robot can't turn in that direction.
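Under the assumption that index i of the 720-element ranges array covers the angle -90 + i * 0.25 degrees, the sector bounds translate into index ranges as sketched below (minRange and the exact indices are illustrative, not copied from the repository):

```cpp
#include <algorithm>
#include <vector>

const double WALL_TH = 1.0; // meters

// Minimum distance in the inclusive index range [from, to] of the
// 720-element ranges array (index i covers -90 + i * 0.25 degrees).
double minRange(const std::vector<double>& r, int from, int to) {
    return *std::min_element(r.begin() + from, r.begin() + to + 1);
}

// Sketch of checkWalls: zero out the forbidden directions in place.
// lin/ang follow the convention above (1 = forward/left, -1 = backward/right).
void checkWalls(const std::vector<double>& ranges, int& lin, int& ang) {
    double right = minRange(ranges, 0, 140);    // -90.0 .. -55.0 deg
    double front = minRange(ranges, 290, 430);  // -17.5 .. +17.5 deg
    double left  = minRange(ranges, 580, 719);  //  55.0 .. +90.0 deg

    if (front < WALL_TH && lin > 0) lin = 0; // can't advance
    if (left  < WALL_TH && ang > 0) ang = 0; // can't turn left
    if (right < WALL_TH && ang < 0) ang = 0; // can't turn right
}
```

For example, a wall half a meter ahead zeroes the linear direction while still allowing the robot to turn away from it.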

To enforce this safety feature, the function simply edits the linear and angular directions according to the rules above, setting them to 0 when required.

Finally, a red danger warning string is printed for the user.

Flowchart

[Flowchart]

Project graph

Here is the project graph, which shows the relationships among the nodes. Keep in mind that this graph was generated while forcing the execution of all three nodes at the same time, just to get a complete graph; during normal execution this doesn't happen.

The graph can be generated using this command:

$ rqt_graph

[rqt_graph output]

Conclusion and future improvements

I'm happy with the result I obtained, especially because the project was quite challenging.

Unfortunately, I could only test the simulation in Rviz and not in Gazebo, because VirtualBox struggled to run Gazebo smoothly: the VM could only render the 3D simulation at 2 or 3 FPS.

Regarding the future improvements:

  • The simulation uses the feedback provided by /move_base/status to check whether the robot has reached the goal. However, this topic takes quite a lot of time to detect that the robot has arrived, so the user receives the response only after some seconds. A possible improvement would check the robot's actual position and compute whether it has reached the goal, so that the result could be reported instantly.

  • In the first navigation mode, any goal's x and y coordinates are accepted, and only afterwards does the simulation check whether the robot is actually capable of reaching that position. A control function could be implemented, comparing the goal's coordinates with a given set of acceptable coordinates inside the map.
