21 Line Following Autonomous Driving with OpenCV

In this tutorial, we'll use basic OpenCV functions to detect yellow lines (the default color) in the image and determine how the chassis should steer based on the position of these lines. Please note that in this example the chassis won't actually move; we only demonstrate the OpenCV-based detection on the image. For safety reasons, we don't integrate motion control in this tutorial, as it is heavily influenced by external factors. Users should fully understand how the code works before adding the corresponding motion control features.

If you want to control the robot's movement through this example, please refer to the "Python Chassis Motion Control" section to add the relevant motion control functions (our open-source example is located in robot_ctrl.py).
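If you do wire in motion control, the speed and turning values computed in the example can be sent to the chassis with the JSON command that is already present (commented out) in the code below. The following is a minimal sketch, assuming the BaseController interface used in the "Python Chassis Motion Control" section; the serial port name and baud rate are assumptions and should be checked against your own setup.

from base_ctrl import BaseController  # Assumed import, as in the Python Chassis Motion Control example

# '/dev/serial0' and 115200 are assumptions; use the port and baud rate of your robot
base = BaseController('/dev/serial0', 115200)

def drive(input_speed, input_turning):
    # T:13 is the speed/turning command used elsewhere in these examples:
    # X is the linear speed and Z is the turning rate
    base.base_json_ctrl({"T": 13, "X": input_speed, "Z": input_turning})

Inside the control loop of the example below, you would then call drive(input_speed, input_turning) in place of the commented-out base.base_json_ctrl(...) line.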

Preparation

Since the product automatically runs the main program at startup, which occupies the camera resource, this tutorial cannot be used while that program is running. You need to terminate the main program or disable its automatic startup, then reboot the robot.
It's worth noting that because the robot's main program uses multi-threading and is configured to start automatically via crontab, the usual sudo killall python method typically doesn't work. We therefore describe how to disable the automatic startup of the main program instead.
If you have already disabled the automatic startup of the robot's main demo, you can skip the Terminate the Main Demo section.

Terminate the Main Demo

1. Click the "+" icon next to the tab for this page to open a new tab called "Launcher."
2. Click on "Terminal" under "Other" to open a terminal window.
3. Type bash into the terminal window and press Enter.
4. Now you can use the Bash Shell to control the robot.
5. Enter the command: crontab -e.
6. If prompted to choose an editor, enter 1 and press Enter to select nano.
7. After opening the crontab configuration file, you'll see the following two lines:
@reboot ~/ugv_pt_rpi/ugv-env/bin/python ~/ugv_pt_rpi/app.py >> ~/ugv.log 2>&1
@reboot /bin/bash ~/ugv_pt_rpi/start_jupyter.sh >> ~/jupyter_log.log 2>&1
8. Add a # character at the beginning of the line with ……app.py >> …… to comment out this line.
#@reboot ~/ugv_pt_rpi/ugv-env/bin/python ~/ugv_pt_rpi/app.py >> ~/ugv.log 2>&1
@reboot /bin/bash ~/ugv_pt_rpi/start_jupyter.sh >> ~/jupyter_log.log 2>&1
9. Press Ctrl + X in the terminal window to exit. It will ask you Save modified buffer? Enter Y and press Enter to save the changes.
10. Reboot the device. Note that this process will temporarily close the current Jupyter Lab session. If you didn't comment out ……start_jupyter.sh >>…… in the previous step, you can still use Jupyter Lab normally after the robot reboots (JupyterLab and the robot's main program app.py run independently). You may need to refresh the page.
11. Note that because the lower machine keeps communicating with the host over the serial port, the continuously changing serial-port levels may prevent the upper machine from booting properly during the restart. If the upper machine is a Raspberry Pi and, after shutdown, its green LED stays constantly on without blinking, turn the robot's power switch off and then back on; the robot will then restart normally.
12. Enter the reboot command: sudo reboot.
13. Wait for the device to restart (the Raspberry Pi's green LED blinks during boot; once the blinking slows down or stops, startup is complete), then refresh the page and continue with the rest of this tutorial.

Example

The following code block can be run directly:

1. Select the code block below.
2. Press Shift + Enter to run the code block.
3. Watch the real-time video window.
4. Press STOP to close the real-time video and release the camera resources.

If you cannot see the real-time camera feed when running:

  • Click on Kernel -> Shut down all kernels above.
  • Close the current section tab and open it again.
  • Click STOP to release the camera resources, then run the code block again.
  • Reboot the device.

Note

If you use a USB camera, you need to uncomment frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB).

Features of this Section

After running the following code block, place a piece of yellow tape in front of the camera and check whether its outline appears against the black background. The example detects the tape along two target detection (sampling) lines.
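Each target detection line is simply one row of the binary mask produced by the color filter: the leftmost and rightmost white pixels in that row mark the edges of the tape, and their midpoint gives the line center used for steering. Below is a minimal, standalone sketch of that row-sampling step on a synthetic mask (no camera needed), so you can see what the main example computes for each sampling line.

import numpy as np

# Synthetic 480x640 binary mask with a 60-pixel-wide "tape" around column 300
mask = np.zeros((480, 640), dtype=np.uint8)
mask[:, 270:330] = 255

sampling_h = int(480 * 0.6)      # Row index of the upper sampling line
row = mask[sampling_h]           # One row of the mask
idx = np.where(row == 255)[0]    # Column indices of white (tape) pixels

if idx.size:                     # Tape detected on this row
    left, right = idx[0], idx[-1]
    center = (left + right) // 2
    print(left, right, center)   # Prints: 270 329 299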

import cv2  # OpenCV library for image processing
import imutils, math  # Libraries to aid image processing and mathematical operations
from picamera2 import Picamera2  # Library to access the Raspberry Pi camera
import numpy as np
from IPython.display import display, Image  # Display images in Jupyter Notebook
import ipywidgets as widgets  # Widgets for creating interactive interfaces, such as buttons
import threading  # Used to create new threads for asynchronous execution of tasks

# Stop button
# ================
stopButton = widgets.ToggleButton(
    value=False,
    description='Stop',
    disabled=False,
    button_style='danger', # 'success', 'info', 'warning', 'danger' or ''
    tooltip='Description',
    icon='square' # (FontAwesome names without the `fa-` prefix)
)


# findline autodrive

# Upper sampling line; 0.6 is its vertical position as a fraction of the frame height (larger values are lower in the frame)
sampling_line_1 = 0.6

# Lower sampling line; this value needs to be greater than sampling_line_1 and less than 1
sampling_line_2 = 0.9

# Impact of the detected line slope on steering
slope_impact = 1.5

# The effect of line position detected by the lower detection line on turning
base_impact = 0.005

# The speed impact on turning 
speed_impact = 0.5

# Line tracking speed
line_track_speed = 0.3

# Effect of slope on patrol speed
slope_on_speed = 0.1

# Color of target line, HSV color space
line_lower = np.array([25, 150, 70])
line_upper = np.array([42, 255, 255])

def view(button):
    # This example reads frames from a USB camera via cv2.VideoCapture.
    # If you are using a CSI camera, uncomment the picam2 code below and comment out the cv2.VideoCapture code instead.
    # Since the latest versions of OpenCV (e.g. 4.9.0.80) no longer support CSI cameras, you need to use picamera2 to get the camera feed.
    # picam2 = Picamera2() # Create a Picamera2 instance
    # Set the camera parameters, video format, and size
    # picam2.configure(picam2.create_video_configuration(main={"format": 'XRGB8888', "size": (640, 480)}))
    # picam2.start() # Start the camera

    camera = cv2.VideoCapture(-1)  # Create a camera instance (USB camera)
    # Set the resolution
    camera.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    camera.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    
    display_handle=display(None, display_id=True)
    
    while True:
        # img = picam2.capture_array()
        _, img = camera.read()  # Capture one frame from the camera
        # img = cv2.flip(img, 1)  # Uncomment if your camera image is mirrored

        height, width = img.shape[:2]
        center_x, center_y = width // 2, height // 2
        # Image preprocessing includes color space conversion, Gaussian blur, and color range filtering, etc.
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

        line_mask = cv2.inRange(hsv, line_lower, line_upper)  # Filter out target lines based on color ranges
        line_mask = cv2.erode(line_mask, None, iterations=1)  # Eroding operation to remove noise
        line_mask = cv2.dilate(line_mask, None, iterations=1)  # Expansion operation enhances the target line

        # Detect the target line based on the positions of the upper and lower sampling lines, and calculate steering and velocity control signals according to the detection results
        sampling_h1 = int(height * sampling_line_1)
        sampling_h2 = int(height * sampling_line_2)
        
        get_sampling_1 = line_mask[sampling_h1]
        get_sampling_2 = line_mask[sampling_h2]

        # Calculate the width of the target line at the upper and lower sampling lines
        sampling_width_1 = np.sum(get_sampling_1 == 255)
        sampling_width_2 = np.sum(get_sampling_2 == 255)

        # Whether the target line is detected at each sampling line
        sam_1 = sampling_width_1 > 0
        sam_2 = sampling_width_2 > 0

        # Get the edge index of the target line at the upper and lower sampling lines
        line_index_1 = np.where(get_sampling_1 == 255)
        line_index_2 = np.where(get_sampling_2 == 255)

        # If the target line is detected at the upper sampling line, calculate the center position of the target line
        if sam_1:
            sampling_1_left  = line_index_1[0][0]  # Leftmost pixel index of the target line at the upper sampling line
            sampling_1_right = line_index_1[0][sampling_width_1 - 1]  # Rightmost pixel index of the target line at the upper sampling line
            sampling_1_center = int((sampling_1_left + sampling_1_right) / 2)  # Center index of the target line at the upper sampling line
        # If a target line is detected at the lower sampling line, calculate the target line center position
        if sam_2:
            sampling_2_left  = line_index_2[0][0]
            sampling_2_right = line_index_2[0][sampling_width_2 - 1]
            sampling_2_center= int((sampling_2_left + sampling_2_right) / 2)

        # Initialize steering and speed control signals
        line_slope = 0
        input_speed = 0
        input_turning = 0
        
        # If the target line is detected at both sampling lines, calculate the slope of the line, and then calculate velocity and steering control signals based on the slope and the position of the target line.
        if sam_1 and sam_2:
            line_slope = (sampling_1_center - sampling_2_center) / abs(sampling_h1 - sampling_h2) # Calculate the slope of the line
            impact_by_slope = slope_on_speed * abs(line_slope) # Calculate the effect on velocity based on the slope
            input_speed = line_track_speed - impact_by_slope # Calculated speed control signal
            input_turning = -(line_slope * slope_impact + (sampling_2_center - center_x) * base_impact) #+ (speed_impact * input_speed) # Calculating steering control signals
        elif not sam_1 and sam_2: # If the target line is detected only at the lower sampling line
            input_speed = 0 # Set speed to 0
            input_turning = (sampling_2_center - center_x) * base_impact # Calculating steering control signals
        elif sam_1 and not sam_2: # If the target line is detected only at the upper sample line
            input_speed = (line_track_speed / 3) # slow down
            input_turning = 0 # No steering
        else: # If neither sampling line detects the target line
            input_speed = - (line_track_speed / 3) # backward
            input_turning = 0 # No turning

        # Uncomment the following line only after adding chassis motion control (see the Python Chassis Motion Control section); it sends the computed speed (X) and turning (Z) to the chassis
        # base.base_json_ctrl({"T":13,"X":input_speed,"Z":input_turning})

        cv2.putText(line_mask, f'X: {input_speed:.2f}, Z: {input_turning:.2f}', (center_x+50, center_y+0), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
        # Visualization operations include drawing lines at the positions of the sampling lines, marking the sampling results, and displaying steering and velocity control signals
        cv2.line(line_mask, (0, sampling_h1), (img.shape[1], sampling_h1), (255, 0, 0), 2)
        cv2.line(line_mask, (0, sampling_h2), (img.shape[1], sampling_h2), (255, 0, 0), 2)

        if sam_1:
            # Draw green marker lines at the ends of the target line at the upper sample line
            cv2.line(line_mask, (sampling_1_left, sampling_h1+20), (sampling_1_left, sampling_h1-20), (0, 255, 0), 2)
            cv2.line(line_mask, (sampling_1_right, sampling_h1+20), (sampling_1_right, sampling_h1-20), (0, 255, 0), 2)
        if sam_2:
            # Draw green marker lines at the ends of the target line at the lower sampling line
            cv2.line(line_mask, (sampling_2_left, sampling_h2+20), (sampling_2_left, sampling_h2-20), (0, 255, 0), 2)
            cv2.line(line_mask, (sampling_2_right, sampling_h2+20), (sampling_2_right, sampling_h2-20), (0, 255, 0), 2)
        if sam_1 and sam_2:
            # If the target line is detected at both the upper and lower sample lines, draw a red line from the center of the upper sample line to the center of the lower sample line.
            cv2.line(line_mask, (sampling_1_center, sampling_h1), (sampling_2_center, sampling_h2), (255, 0, 0), 2)
        
        _, frame = cv2.imencode('.jpeg', line_mask)
        display_handle.update(Image(data=frame.tobytes()))
        if stopButton.value == True:
            # picam2.close()  # If you are using the CSI camera, close it here instead
            camera.release()  # Release the camera resource
            display_handle.update(None)
            break  # Exit the loop once the Stop button is pressed


# Display the "Stop" button and start the thread that runs the view function
# ================
display(stopButton)
thread = threading.Thread(target=view, args=(stopButton,))
thread.start()
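
The default line_lower/line_upper values cover yellow in HSV space. If you want to follow tape of a different color, one way to choose a new range is to convert a sample BGR value of your tape to HSV and build a band around its hue. Below is a minimal sketch; the sample color and margins are illustrative and will need tuning for your tape and lighting.

import cv2
import numpy as np

# Illustrative BGR sample of the tape color; measure it from one of your own frames
sample_bgr = np.uint8([[[0, 200, 200]]])  # a yellow-ish pixel, BGR order
h, s, v = cv2.cvtColor(sample_bgr, cv2.COLOR_BGR2HSV)[0][0]

# Rough starting band around the measured hue; widen or narrow as needed
line_lower = np.array([max(int(h) - 10, 0), 100, 60])
line_upper = np.array([min(int(h) + 10, 179), 255, 255])
print(line_lower, line_upper)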