UGV Beast PI ROS2 11. Gazebo Simulation Debugging
11. Gazebo Simulation Debugging
This chapter introduces the simulation and debugging of the robot. When no physical robot is at hand, robot algorithms, architecture, etc. can be verified in Gazebo, a three-dimensional robot physics simulation platform. We provide Gazebo robot models and complete function packages for simulation and debugging on a virtual machine, to help users conduct system verification and testing in the early stages of development.
11.1 Gazebo introduction
Gazebo is a 3D dynamic simulator that accurately and efficiently simulates groups of robots in complex indoor and outdoor environments. Although similar to game engines, Gazebo offers higher-fidelity physics simulation, supporting complex environment and sensor simulations. Users can create 3D models, set up environments, and simulate various sensors (such as lidar, cameras, etc.).
Key aspects of the Gazebo simulation platform include:
- Physics engine: Gazebo uses multiple physics engines (such as ODE, Bullet, DART, etc.) to provide accurate dynamics simulation and can handle physical phenomena such as collision, friction, and gravity.
- 3D modeling: Users can use existing model libraries or create custom 3D models through software such as Blender. These models can be robots, obstacles, or environmental elements.
- Environment settings: Gazebo allows users to design complex simulation environments, including cities, indoor scenes, etc., and can freely configure terrain and lighting.
- Sensor simulation: Gazebo supports multiple sensor types, such as lidar, camera, IMU, etc., and can provide real-time data streams to facilitate algorithm testing.
- ROS integration: The combination with ROS allows users to easily use Gazebo as a simulation environment for algorithm development and testing, with support for ROS topics and services.
- User interface: Gazebo provides an intuitive graphical user interface. Users can monitor the simulation process and adjust parameters in real time through visual tools.
11.2 Load virtual machine image
We provide Ubuntu images configured with Gazebo simulation, robot models and complete feature packages for users to use directly. This tutorial is suitable for using a virtual machine on a Windows computer to load the Ubuntu image for Gazebo simulation debugging.
11.2.1 Download Ubuntu image with software configured
With the preconfigured Ubuntu image there is no need to install and configure the Gazebo simulation environment yourself; you can carry out the product's simulation tests by following the subsequent tutorials.
- Download link:
Download and decompress the image; all of the extracted files are parts of the disk image. Because the file system used by some virtual machines does not support single files larger than 4 GB, the configured Ubuntu image is split into multiple files.
11.2.2 Install Oracle VM VirtualBox virtual machine
Download and install Oracle VM VirtualBox, a free virtualization application that allows you to run a virtual operating system on your own computer. We run the virtual machine on a Windows computer to install the Ubuntu operating system, and then install and configure ROS 2 on Ubuntu to control the robot.
It should be noted that although ROS 2 has a Windows version, there is relatively little documentation for it, so by default we provide a virtual machine solution for running ROS 2.
Open the official Oracle VM VirtualBox download link and install it; the installation process is very simple, just keep clicking Next. If it is already installed, skip this step.
11.3 Load image into virtual machine software
- On the left toolbar, click New.
- Set the name, set the type to Linux and the version to Ubuntu (64-bit), then click Next.
- Set the memory size to 4096 MB and the number of processors to 2, then click Next.
- Select Do not add a virtual hard disk and click Next. The configuration summary of the new virtual machine is displayed; click Finish, and when a warning pops up, click Continue.
- Select the virtual machine you just created and click Settings.
- Select Storage and click the + sign on the far right of the controller to add a virtual hard disk.
- Click Register, add the ws.vmdk image file decompressed earlier, click Select in the lower right corner, and confirm to save.
- Then select Display, check Enable 3D Acceleration and click OK. Double-click the virtual machine you just created on the left to run it.
After the virtual machine is running successfully, you can learn how to use the Gazebo simulation to control the robot by following the content below.
11.4 Enter Docker container
In the host terminal, first allow unauthorized users to access the graphical interface and enter the command:
xhost +
Note: This step must be performed after every restart of the virtual machine, otherwise visualization cannot be opened from inside the Docker container.
Then execute the script that enters the Docker container:
. ros_humble.sh
Enter 1 to enter the Docker container, and the username will change to root, as shown in the following figure.
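If you want to confirm that the environment inside the container is ready, an optional quick check (the ugv_* package names are those used later in this tutorial) is:
printenv ROS_DISTRO
# Should print: humble
ros2 pkg list | grep ugv
# Should list the ugv packages, e.g. ugv_gazebo and ugv_tools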
11.5 Load Gazebo robot model
The image defaults to the four-wheel-drive, six-wheel UGV Rover model, so we can directly load the Gazebo simulation environment together with the UGV Rover robot model and start the corresponding ROS 2 node:
ros2 launch ugv_gazebo bringup.launch.py
Startup takes a short while. When the screen shown below appears, the startup has succeeded. This command needs to keep running during the subsequent steps.
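As an optional sanity check, you can list the running nodes and topics from another container terminal. The exact names depend on the model configuration; for a differential-drive robot with a lidar you would typically expect topics such as /cmd_vel, /scan and /odom:
ros2 node list
ros2 topic list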
11.6 Use Joystick or Keyboard Control
11.6.1 Joystick control
Plug the joystick receiver into your computer. In the Oracle VM VirtualBox menu bar, click Devices → USB → the device whose name contains Xbox. When a check mark (√) appears in front of the device name, the controller is connected to the virtual machine.
Press Ctrl+Alt+T to open a new terminal window, execute the script that goes into the Docker container, and enter 1 to enter the Docker container:
. ros_humble.sh
Run the joystick control node in the container:
ros2 launch ugv_tools teleop_twist_joy.launch.py
Then turn on the switch on the back of the joystick; when the red light on the joystick is on, you can control the movement of the robot model. Note: There are three relevant controls on the joystick: the key below R locks or unlocks control, the left stick drives forward or backward, and the right stick turns left or right.
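If the robot does not respond, an optional way to check whether joystick input is reaching ROS 2 is to echo the joystick topic in another container terminal (/joy is the usual default topic of the joy driver used by teleop_twist_joy and is assumed here); press Ctrl+C to stop echoing:
ros2 topic echo /joy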
You can close the joystick control node by pressing Ctrl+C.
11.6.2 Keyboard control
Close the joystick control node, and then run the keyboard control node in the container terminal window:
ros2 run ugv_tools keyboard_ctrl
Keep this window active (that is, make sure you are in the terminal window interface when operating the keys), and control the movement of the robot model through the following keys:
| Keyboard key | Operation description | Keyboard key | Operation description | Keyboard key | Operation description |
| --- | --- | --- | --- | --- | --- |
| Letter U | Left forward | Letter I | Straight ahead | Letter O | Right forward |
| Letter J | Turn left | Letter K | Stop | Letter L | Turn right |
| Letter M | Left backward | Symbol , | Straight backward | Symbol . | Right backward |
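While pressing the keys, you can optionally verify in another container terminal that velocity commands are being published (/cmd_vel is the conventional topic name for such commands and is assumed here):
ros2 topic echo /cmd_vel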
You can close the keyboard control node by pressing Ctrl+C.
11.7 Mapping
11.7.1 2D Mapping
1. 2D Mapping based on Gmapping
Keep the node that loads the Gazebo robot model running. Press Ctrl+Alt+T to open a new terminal window, execute the script that enters the Docker container, and enter 1 to enter the Docker container:
. ros_humble.sh
Launch the mapping node in the container:
ros2 launch ugv_gazebo gmapping.launch.py
At this time, the map displayed on the RViz interface will only show the area scanned by the lidar in the Gazebo simulation map. If there are still unscanned areas that need to be mapped, you can use the joystick or keyboard to control the movement of the robot to scan and map.
In a new terminal window, execute the script that goes into the Docker container, enter 1 to enter the Docker container, and run either the joystick control or keyboard control node:
# Joystick control (make sure the joystick receiver is connected to the virtual machine)
ros2 launch ugv_tools teleop_twist_joy.launch.py
# Keyboard control (keep the terminal window running the keyboard control node active)
ros2 run ugv_tools keyboard_ctrl
In this way, you can control the movement of the chassis to realize the mapping of the surrounding environment.
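To confirm that the map is actually being built, you can optionally check the publishing rate of the map topic in another container terminal (/map is the conventional topic name published by mapping nodes and is assumed here):
ros2 topic hz /map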
After the mapping is completed, keep the mapping node running. In a new terminal window, execute the script that enters the Docker container, enter 1 to enter the Docker container, and add executable permissions to the map saving script:
cd ugv_ws/
chmod +x ./save_2d_gmapping_map_gazebo.sh
Then run the map saving script, as shown below, the map is saved successfully:
./save_2d_gmapping_map_gazebo.sh
The details in this script are as follows:
cd /home/ws/ugv_ws/src/ugv_main/ugv_gazebo/maps
ros2 run nav2_map_server map_saver_cli -f ./map
After executing the above script file, a 2D raster map named map will be saved. The map is saved in the /home/ws/ugv_ws/src/ugv_main/ugv_gazebo/maps directory. You can see that two files are generated in the above directory, one is map.pgm and the other is map.yaml.
- map.pgm: This is a raster image of the map (usually a grayscale image file);
- map.yaml: This is the configuration file of the map.
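As a rough illustration, the contents of map.yaml usually look like the following; the field values shown in the comments are only examples, and your generated file will differ:
cat /home/ws/ugv_ws/src/ugv_main/ugv_gazebo/maps/map.yaml
# image: map.pgm
# mode: trinary
# resolution: 0.05
# origin: [-10.0, -10.0, 0]
# negate: 0
# occupied_thresh: 0.65
# free_thresh: 0.25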
Then the Gmapping mapping node can be closed via Ctrl+C.
2. 2D Mapping based on Cartographer
Keep the node that loads the Gazebo robot model running. Press Ctrl+Alt+T to open a new terminal window, execute the script that enters the Docker container, and enter 1 to enter the Docker container:
. ros_humble.sh
Launch the mapping node in the container:
ros2 launch ugv_gazebo cartographer.launch.py
At this time, the map displayed on the RViz interface will only show the area scanned by the lidar in the Gazebo simulation map. If there are still unscanned areas that need to be mapped, you can use the joystick or keyboard to control the movement of the robot to scan and map.
In a new terminal window, execute the script that goes into the Docker container, enter 1 to enter the Docker container, and run either the joystick control or keyboard control node:
# Joystick control (make sure the joystick receiver is connected to the virtual machine)
ros2 launch ugv_tools teleop_twist_joy.launch.py
# Keyboard control (keep the terminal window running the keyboard control node active)
ros2 run ugv_tools keyboard_ctrl
In this way, you can control the movement of the chassis to realize the mapping of the surrounding environment.
After the mapping is completed, keep the mapping node running. In a new terminal window, execute the script that enters the Docker container, enter 1 to enter the Docker container, and add executable permissions to the map saving script:
cd ugv_ws/
chmod +x ./save_2d_cartographer_map_gazebo.sh
Then run the map saving script, as shown below, the map is saved successfully:
./save_2d_cartographer_map_gazebo.sh
The details in this script are as follows:
cd /home/ws/ugv_ws/src/ugv_main/ugv_gazebo/maps
ros2 run nav2_map_server map_saver_cli -f ./map && ros2 service call /write_state cartographer_ros_msgs/srv/WriteState "{filename: '/home/ws/ugv_ws/src/ugv_main/ugv_gazebo/maps/map.pbstream'}"
After executing the above script file, a 2D raster map named map will be saved. The map is saved in the /home/ws/ugv_ws/src/ugv_main/ugv_gazebo/maps directory. You can see that three files are generated in the directory: map.pgm, map.yaml and map.pbstream.
Then the Cartographer mapping node can be closed via Ctrl+C.
11.7.2 3D Mapping
1. Visualizing in RTAB-Map
RTAB-Map (Real-Time Appearance-Based Mapping) is an open source algorithm for Simultaneous Localization and Mapping (SLAM), which is widely used in robot navigation, autonomous vehicles, drones and other fields. It uses data from visual and lidar sensors to build an environment map and perform positioning. It is a SLAM method based on loop closure detection.
Keep the loaded Gazebo robot model running and start the visualization node of RTAB-Map in the container:
ros2 launch ugv_gazebo rtabmap_rgbd.launch.py
In a new Docker container terminal, run either the joystick control or keyboard control node:
# Joystick control (make sure the joystick receiver is connected to the virtual machine)
ros2 launch ugv_tools teleop_twist_joy.launch.py
# Keyboard control (keep the terminal window running the keyboard control node active)
ros2 run ugv_tools keyboard_ctrl
In this way, you can control the movement of the chassis to realize the mapping of the surrounding environment. After the mapping is completed, press Ctrl+C to exit the mapping node, and the system will automatically save the map. The default saving path of the map is ~/.ros/rtabmap.db.
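If you want to inspect the saved database, RTAB-Map ships with a standalone viewer. Assuming the rtabmap tools are available inside the container, you can open it with:
rtabmap-databaseViewer ~/.ros/rtabmap.db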
2. Visualizing in RViz
Keep the loaded Gazebo robot model running and start the visualization node of RTAB-Map in the container:
ros2 launch ugv_gazebo rtabmap_rgbd.launch.py use_rviz:=true
In a new Docker container terminal, run either the joystick control or keyboard control node:
# Joystick control (make sure the joystick receiver is connected to the virtual machine)
ros2 launch ugv_tools teleop_twist_joy.launch.py
# Keyboard control (keep the terminal window running the keyboard control node active)
ros2 run ugv_tools keyboard_ctrl
In this way, you can control the movement of the chassis to realize the mapping of the surrounding environment. After the mapping is completed, press Ctrl+C to exit the mapping node, and the system will automatically save the map. The default saving path of the map is ~/.ros/rtabmap.db.
11.8 Navigation
Before you start navigating, make sure you have built a map of the environment named map. If you have not done so yet, build one by following the previous tutorial.
After mapping is completed, start navigation. We provide several autonomous navigation modes; choose one of the following for robot navigation.
- AMCL algorithm
Adaptive Monte Carlo Localization (AMCL) is a particle-filter-based localization algorithm in ROS 2 that uses a 2D lidar to estimate the position and orientation (i.e. pose) of the robot in a given known map. AMCL is mainly used for mobile robot navigation. It matches the existing map against data from laser sensors (such as lidar) to calculate the robot's position and orientation in the map. The core idea is to represent the robot's possible positions with a large number of particles and to gradually update these particles to reduce the uncertainty of the robot's pose.
Advantages of AMCL:
- Adaptive particle number: AMCL will dynamically adjust the number of particles based on the uncertainty of the robot's position.
- Suitable for dynamic environments: While AMCL assumes a static environment, it can handle a small number of dynamic obstacles, such as pedestrians and other moving objects, to a certain extent, which makes it more flexible in practical applications.
- Reliable positioning capability: AMCL's positioning effect in known maps is very reliable. Even if the robot's pose is initially uncertain, it can gradually converge to the correct pose.
AMCL assumes that the map is known; it cannot build the map itself. It also relies on a high-quality static map that matches the sensor data. If there is a large difference between the map and the real environment, positioning accuracy will suffer. AMCL is often used for autonomous navigation of mobile robots: during navigation the robot determines its own pose through AMCL and relies on the known map for path planning and obstacle avoidance.
In the container, start navigation based on the AMCL algorithm. After successful startup, you can see the RViz screen of the previously built map:
ros2 launch ugv_gazebo nav.launch.py use_localization:=amcl
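As an optional check that the localization node has come up, you can look for it in the node list (the node name amcl is the usual Nav2 default and is an assumption here):
ros2 node list | grep amcl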
Then, you can determine the initial position of the robot according to the subsequent tutorials.
- EMCL algorithm
EMCL is an alternative Monte Carlo localization (MCL) package to AMCL. Unlike AMCL, KLD sampling and adaptive MCL are not implemented. Instead, extended resets and other features are implemented. EMCL does not rely entirely on adaptive particle filtering, but introduces methods such as extended reset to improve positioning performance. EMCL implements the extended reset strategy, a technique for improving the quality of particle sets to better handle uncertainty and drift in positioning.
Start the navigation based on the EMCL algorithm. After successful startup, you can see the RViz screen of the previously built map:
ros2 launch ugv_gazebo nav.launch.py use_localization:=emcl
Then, you can determine the initial position of the robot according to the subsequent tutorials.
- Pure positioning based on Cartographer
Cartographer is an open-source Google system that provides real-time simultaneous localization and mapping (SLAM) in 2D and 3D across multiple platforms and sensor configurations.
Cartographer system architecture overview: the optional inputs on the left include depth information, odometry, IMU data, and fixed frame pose.
For more tutorials, please refer to the official Cartographer documentation and project repository.
Start pure positioning based on Cartographer. After successful startup, you can see the RViz screen of the previously built map:
Note: The navigation mode based on Cartographer's pure positioning can only be used after using Cartographer to build the map.
ros2 launch ugv_gazebo nav.launch.py use_localization:=cartographer
Then, you can determine the initial position of the robot according to the subsequent tutorials.
- DWA algorithm
The Dynamic Window Approach (DWA) is a suboptimal method based on predictive control theory. It can safely and effectively avoid obstacles in unknown environments, requires little computation, responds quickly, and is easy to operate. DWA is a local path planning algorithm.
The core idea of the algorithm is to sample, in the velocity space (v, ω), a set of velocities that satisfy the mobile robot's hardware constraints based on its current position and velocity, simulate the trajectory the robot would follow over a short time period at each sampled velocity, score these trajectories with an evaluation function, and finally select the velocity corresponding to the best-scoring trajectory as the robot's motion command. This cycle repeats until the mobile robot reaches the target point.
Start the navigation based on the DWA algorithm. After successful startup, you can see the RViz screen of the previously built map:
ros2 launch ugv_gazebo nav.launch.py use_localplan:=dwa
Then, you can determine the initial position of the robot according to the subsequent tutorials.
- TEB algorithm
TEB stands for Timed Elastic Band Local Planner. This method performs subsequent corrections on the initial global trajectory generated by the global path planner to optimize the robot's motion trajectory, and belongs to local path planning. During trajectory optimization, the algorithm considers a variety of optimization objectives, including but not limited to: overall path length, trajectory execution time, distance from obstacles, passing through intermediate waypoints, and compliance with the robot's dynamics, kinematics, and geometric constraints.
Start the navigation based on the TEB algorithm. After successful startup, you can see the RViz screen of the previously built map:
ros2 launch ugv_gazebo nav.launch.py use_localplan:=teb
Then, you can determine the initial position of the robot according to the subsequent tutorials.
The navigation modes introduced above are based on the 2D map built with LiDAR. For the 3D map built in the previous tutorial, refer to the navigation startup method in this subsection.
Start the positioning node for navigation:
ros2 launch ugv_gazebo rtabmap_localization_launch.py
You need to wait a while for the 3D data to be loaded; then you can start navigation as shown in the figure below.
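An optional way to confirm that the localization node is up before starting navigation (the node name rtabmap is the package default and is an assumption here):
ros2 node list | grep rtabmap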
In a new terminal, turn on navigation and choose one of the two navigation modes:
- DWA algorithm
ros2 launch ugv_gazebo nav_rtabmap.launch.py use_localplan:=dwa
- TEB algorithm
ros2 launch ugv_gazebo nav_rtabmap.launch.py use_localplan:=teb
Choose a navigation mode based on the map created above to start the navigation, then proceed with the following content.
11.8.2 Initialize the robot's position
By default, when navigation is started, the robot has no idea where it is on the map and waits for you to provide an approximate starting location.
First, find the robot's location on the map and check the actual location of your robot, then manually set the robot's initial pose in RViz: click the 2D Pose Estimate button and indicate the robot's location on the map. The direction of the green arrow is the direction the robot's pan-tilt is facing.
Keep the navigation terminal running and set the robot's approximate initial pose, making sure it roughly corresponds to the robot's actual position in the environment. You can also control the robot through the keyboard in a new terminal, moving and rotating it slightly to assist initial positioning.
ros2 run ugv_tools keyboard_ctrl
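If you prefer to set the initial pose from the command line instead of RViz, a minimal sketch is to publish once to the /initialpose topic, which is the standard topic used by the 2D Pose Estimate tool; the coordinates below are placeholders that you should replace with your robot's approximate pose:
ros2 topic pub --once /initialpose geometry_msgs/msg/PoseWithCovarianceStamped "{header: {frame_id: 'map'}, pose: {pose: {position: {x: 0.0, y: 0.0, z: 0.0}, orientation: {w: 1.0}}}}"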
11.8.3 Send target pose
Select a target location for the robot on the map. You can use the Nav2 Goal tool to send the target position and orientation to the robot: indicate on the RViz map the location (target point) the robot should navigate to automatically. The direction of the green arrow is the direction the robot's pan-tilt will face.
Once the target pose is set, the navigation stack finds a global path and begins navigating, moving the robot to the target pose on the map. You can now see the robot moving towards the target location.
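A goal can also be sent from the command line instead of the Nav2 Goal tool by publishing once to the /goal_pose topic, which the RViz tool publishes to by default; the coordinates below are placeholders:
ros2 topic pub --once /goal_pose geometry_msgs/msg/PoseStamped "{header: {frame_id: 'map'}, pose: {position: {x: 1.0, y: 0.5, z: 0.0}, orientation: {w: 1.0}}}"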
In the lower left corner of the RViz interface there is a Nav2 RViz plugin with a [Waypoint/Nav Through Poses Mode] button that switches the navigation mode. Click the [Waypoint/Nav Through Poses Mode] button to switch to multi-point navigation mode.
Then use Nav2 Goal in the RViz toolbar to set multiple target points. After setting them, click [Start Waypoint Following] in the lower left corner to start path planning and navigation. The robot will move through the selected target points in order: after reaching the first target point, it automatically continues to the next one without any further operation, and stops when it reaches the last target point.