UGV Beast PI ROS2 4. 2D mapping based on lidar
4. 2D Mapping Based on LiDAR
4.1 2D Mapping based on Gmapping
4.1.1 Gmapping introduction
Gmapping is a simultaneous localization and mapping (SLAM) algorithm used for robot navigation and map construction. It is based on the RBPF (Rao-Blackwellized Particle Filter) algorithm, which lets a robot localize itself and build a map in an unknown environment by separating localization from mapping: localization is performed first, and the map is then built on top of it. The particle filter is an early mapping approach. Its basic principle is that the robot continuously gathers information about its surroundings through motion and observation, gradually reducing the uncertainty of its own pose until an accurate estimate is obtained. At each step, the map and motion model from the previous moment are used to predict the current pose; the particle weights are then computed, the particles are resampled, the per-particle maps are updated, and the cycle repeats.
Advantages of Gmapping:
- Indoor maps can be built in real time; for small scenes the computational load is low and the accuracy is high;
- Handles sensor noise through particle filtering, giving it strong robustness;
- Suitable for indoor robot navigation with a flat environment.
Disadvantages of Gmapping:
- Limited support for large-scale or highly dynamic environments: the memory and computation required grow as the map gets larger, so it is not well suited to building large scene maps;
- There is no loop closure detection, so the map may become misaligned when a loop is closed. Increasing the number of particles can help close the map correctly, but at the cost of more computation and memory.
- Gmapping is a 2D SLAM and cannot handle complex 3D scenes.
4.1.2 Start Gmapping mapping node
Before starting the mapping node, it is assumed that you have already started the main program and connected remotely to the Docker container as described in Chapter 1, UGV Beast PI ROS2 1. Preparation.
To open a new Docker container terminal, click the "⭐" symbol in the left sidebar, double-click to open Docker's remote terminal, then log in with username root and password ws.
In the container, go to the workspace of the ROS 2 project for that product:
cd /home/ws/ugv_ws
Place the robot in the room where you need to build the map, and start the mapping node:
ros2 launch ugv_slam gmapping.launch.py use_rviz:=true
At this time, the map displayed on the RViz interface will only show the area scanned by the lidar. If there are still unscanned areas that need to be mapped, you can control the movement of the robot to scan and map.
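If no map appears in RViz at all, you can check from another Docker container terminal that the mapping node is publishing the occupancy grid. The /map topic name below is the conventional one and is an assumption here:
# List active topics and look for the occupancy grid topic
ros2 topic list | grep map
# Inspect the topic; the message type should be nav_msgs/msg/OccupancyGrid
ros2 topic info /map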
In a new Docker container terminal, run either the joystick control or keyboard control node:
# Joystick control (make sure the joystick receiver is plugged into the Raspberry Pi)
ros2 launch ugv_tools teleop_twist_joy.launch.py
# Keyboard control (keep the terminal running the keyboard control node active)
ros2 run ugv_tools keyboard_ctrl
In this way, you can control the movement of the chassis to realize the mapping of the surrounding environment.
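Alternatively, if neither a joystick nor the keyboard node is convenient, you can publish velocity commands directly from the command line. This is only a sketch and assumes the chassis subscribes to the conventional /cmd_vel topic:
# Publish a slow forward command at 10 Hz (press Ctrl+C to stop the robot)
ros2 topic pub --rate 10 /cmd_vel geometry_msgs/msg/Twist "{linear: {x: 0.1, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.0}}"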
When controlling the movement of the chassis, if the robot has to leave your line of sight during mapping, you can use the OAK camera's video feed to monitor it while driving and prevent collisions once it is out of sight.
In a new Docker container terminal, enable the OAK camera:
ros2 launch ugv_vision oak_d_lite.launch.py
As shown in the figure, the OAK camera is successfully enabled.
Then click Add in the lower-left corner of the RViz interface, select By topic, find /oak, select the Image entry under /oak/rgb/image_rect, and click OK. The OAK camera view will then appear in the lower-left corner of the RViz window.
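Besides viewing the image in RViz, you can also confirm from a terminal that camera frames are being published on the topic used above:
# Print the publishing rate of the rectified RGB image stream
ros2 topic hz /oak/rgb/image_rect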
4.1.3 Save map
After the map is constructed, keep the mapping node running and save the map from a new Docker container terminal: click the "⭐" symbol in the left sidebar, double-click to open Docker's remote terminal, then log in with username root and password ws.
In the container, go to the workspace of the ROS 2 project for that product:
cd /home/ws/ugv_ws
Add executable permissions to the map saving script:
chmod +x ./save_2d_gmapping_map.sh
Then run the map saving script as shown below; once it finishes, the map has been saved:
./save_2d_gmapping_map.sh
The details in this script are as follows:
cd /home/ws/ugv_ws/src/ugv_main/ugv_nav/maps
ros2 run nav2_map_server map_saver_cli -f ./map
After executing the above script file, a 2D raster map named map will be saved in the /home/ws/ugv_ws/src/ugv_main/ugv_nav/maps directory. Two files are generated there: map.pgm and map.yaml.
- map.pgm: This is a raster image of the map (usually a grayscale image file);
- map.yaml: This is the configuration file of the map.
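For reference, the map.yaml written by map_saver_cli typically looks something like the following; the exact values depend on your map and on the map saver defaults, so treat this only as an illustration:
image: map.pgm               # raster image file of the occupancy grid
mode: trinary                # how pixel values are interpreted (occupied / free / unknown)
resolution: 0.05             # map resolution in meters per pixel
origin: [-10.0, -10.0, 0.0]  # pose of the lower-left pixel in the map frame: [x, y, yaw]
negate: 0                    # whether the occupancy interpretation of pixel values is inverted
occupied_thresh: 0.65        # occupancy probability above which a cell is considered occupied
free_thresh: 0.25            # occupancy probability below which a cell is considered free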
Then the Gmapping mapping node can be closed via Ctrl+C.
4.2 2D Mapping based on Cartographer
4.2.1 Cartographer introduction
Cartographer is a set of SLAM algorithms based on graph optimization released by Google. The main goal of the algorithm is to achieve real-time SLAM with limited computing resource consumption. It allows the robot to build a 2D or 3D map of the environment in real time while tracking the robot's pose.
Advantages of Cartographer:
- High-precision mapping: Cartographer has high accuracy when processing 2D and 3D maps, especially in complex environments. Its accuracy mainly benefits from sensor combination and global optimization techniques, such as loop closure detection.
- Strong sensor fusion capability: It can process data from multiple sensors simultaneously, such as LiDAR, IMU (Inertial Measurement Unit), and GPS. By fusing the data from different sensors, the accuracy of positioning and map construction is improved.
- Global optimization: Through loop closure detection and global optimization, errors accumulated during long-term operation are effectively reduced, ensuring the overall consistency and accuracy of the map.
- Real-time: It can generate maps and track robot pose in real time while the robot is moving, and is suitable for real-time SLAM applications in dynamic environments.
Disadvantages of Cartographer:
- High computing resource consumption: High requirements on computing resources, especially when processing 3D SLAM. Its multi-sensor combination and global optimization algorithms require a lot of CPU and memory resources and may not be suitable for devices with low computing resources.
- Complex configuration: There are many configuration options, requiring users to deeply understand the meaning of each parameter and make fine adjustments to ensure optimal performance in a specific environment.
4.2.2 Start Cartographer mapping node
In the container, go to the workspace of the ROS 2 project for that product:
cd /home/ws/ugv_ws
Place the robot in the room where you need to build the map, and start the mapping node:
ros2 launch ugv_slam cartographer.launch.py use_rviz:=true
At this time, the map displayed on the RViz interface will only show the area scanned by the lidar. If there are still unscanned areas that need to be mapped, you can control the movement of the robot to scan and map.
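If nothing appears in RViz, you can check from another Docker container terminal that Cartographer is running and that it exposes the /write_state service used later by the map saving script; the node name filter below is only an assumption about how the node is named:
# Confirm the Cartographer node is running
ros2 node list | grep -i cartographer
# Confirm the state-serialization service used when saving the map is available
ros2 service list | grep write_state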
In a new Docker container terminal, run either the joystick control or keyboard control node:
# Joystick control (make sure the joystick receiver is plugged into the Raspberry Pi)
ros2 launch ugv_tools teleop_twist_joy.launch.py
# Keyboard control (keep the terminal running the keyboard control node active)
ros2 run ugv_tools keyboard_ctrl
In this way, you can control the movement of the chassis to realize the mapping of the surrounding environment.
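If the robot does not respond to the joystick or keyboard, you can check whether velocity commands are actually being published while you drive, assuming the conventional /cmd_vel topic is used:
# Echo velocity commands; messages should appear while you operate the joystick or keyboard
ros2 topic echo /cmd_vel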
When controlling the movement of the chassis, if the robot has to leave your line of sight during mapping, you can use the OAK camera's video feed to monitor it while driving and prevent collisions once it is out of sight.
In a new Docker container terminal, enable the OAK camera:
ros2 launch ugv_vision oak_d_lite.launch.py
As shown in the figure, the OAK camera is successfully enabled.
Then click Add in the lower-left corner of the RViz interface, select By topic, find /oak, select the Image entry under /oak/rgb/image_rect, and click OK. The OAK camera view will then appear in the lower-left corner of the RViz window.
4.2.3 Save map
After the map is constructed, keep the mapping node running and save the map from a new Docker container terminal: click the "⭐" symbol in the left sidebar, double-click to open Docker's remote terminal, then log in with username root and password ws.
In the container, go to the workspace of the ROS 2 project for that product:
cd /home/ws/ugv_ws
Add executable permissions to the map saving script:
chmod +x ./save_2d_cartographer_map.sh
Then run the map saving script as shown below; once it finishes, the map has been saved:
./save_2d_cartographer_map.sh
The details in this script are as follows:
cd /home/ws/ugv_ws/src/ugv_main/ugv_nav/maps
ros2 run nav2_map_server map_saver_cli -f ./map && ros2 service call /write_state cartographer_ros_msgs/srv/WriteState "{filename:'/home/ws/ugv_ws/src/ugv_main/ugv_nav/maps/map.pbstream'}"
After executing the above script file, a map named map will be saved in /home/ws/ugv_ws/src/ugv_main/ugv_nav/maps. Three files are generated in this directory: map.pgm, map.yaml, and map.pbstream (the .pbstream file holds Cartographer's serialized SLAM state).
Note: If you have previously used Gmapping to create a map, the map created here will overwrite it. If you want to keep a separate map, change the map name used by the script: in save_2d_cartographer_map.sh, replace map in the ./map and map.pbstream paths with the name you want. For example, to create a map named room, change the second line of instructions in the script to:
ros2 run nav2_map_server map_saver_cli -f ./room && ros2 service call /write_state cartographer_ros_msgs/srv/WriteState "{filename:'/home/ws/ugv_ws/src/ugv_main/ugv_nav/maps/room.pbstream'}"
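Put together, the modified script body could then look like the following sketch; the shebang line is an assumption, and the commands are the ones from the original script with only the map name changed:
#!/bin/bash
# Save the Cartographer map under the name "room" instead of "map"
cd /home/ws/ugv_ws/src/ugv_main/ugv_nav/maps
ros2 run nav2_map_server map_saver_cli -f ./room && ros2 service call /write_state cartographer_ros_msgs/srv/WriteState "{filename:'/home/ws/ugv_ws/src/ugv_main/ugv_nav/maps/room.pbstream'}"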