30 OpenCV Color Tracking


In this chapter, we add functions for controlling peripheral interfaces to OpenCV. The camera's pan-tilt will move, so please keep your hands and other fragile objects away from its rotation radius.

Preparation

The product runs the main demo by default, and the main demo occupies the camera resources; while it is running, this tutorial cannot be used. Please terminate the main demo, or disable its auto-running and reboot the robot.

It's worth noting that because the robot's main demo uses multi-threading and is configured to run automatically at startup through crontab, the usual sudo killall python approach typically doesn't work. We therefore introduce how to disable the main program's automatic startup instead.

If you have already disabled the boot auto-run of the robot's main program, you can skip the Terminate Main Demo section below.
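The comment-out performed in the Terminate Main Demo steps can also be sketched non-interactively. The sed pattern below is our own illustration (not part of the official instructions); it simulates the edit on the two crontab lines from this tutorial:

```shell
# Simulate the crontab edit: comment out only the app.py line,
# leaving the start_jupyter.sh line untouched.
printf '%s\n' \
  '@reboot ~/ugv_pt_rpi/ugv-env/bin/python ~/ugv_pt_rpi/app.py >> ~/ugv.log 2>&1' \
  '@reboot /bin/bash ~/ugv_pt_rpi/start_jupyter.sh >> ~/jupyter_log.log 2>&1' \
  | sed '/app\.py/ s|^@reboot|# @reboot|'
```

On the robot itself, the equivalent would be piping `crontab -l` through the same sed command and back into `crontab -`, but editing interactively with Nano, as described below, is safer.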

Terminate Main Demo

1. Click the "+" icon next to the tab for this page to open a new tab called "Launcher."

2. Click on "Terminal" in Other, and open the terminal window.

3. Input bash in the terminal window and press Enter.

4. Now you can use "Bash Shell" to control the robot.

5. Input the command: crontab -e

6. If you are asked which editor to use, input 1 and press Enter to select Nano.

7. Open "crontab" config file, and you can see:

@reboot ~/ugv_pt_rpi/ugv-env/bin/python ~/ugv_pt_rpi/app.py >> ~/ugv.log 2>&1
@reboot /bin/bash ~/ugv_pt_rpi/start_jupyter.sh >> ~/jupyter_log.log 2>&1

8. Add # in front of the ……app.py >> …… line to comment it out.

# @reboot ~/ugv_pt_rpi/ugv-env/bin/python ~/ugv_pt_rpi/app.py >> ~/ugv.log 2>&1
@reboot /bin/bash ~/ugv_pt_rpi/start_jupyter.sh >> ~/jupyter_log.log 2>&1

9. In the terminal, press "Ctrl + X" to exit; when Nano asks Save modified buffer?, press Y and then Enter to save the modification.

10. Reboot the device. Note that this process will temporarily close the current JupyterLab session. If you did not comment out the ……start_jupyter.sh >>…… line in the previous step, you can still use JupyterLab normally after the robot reboots (JupyterLab and the robot's main program app.py run independently). You may need to refresh the page.

11. Note that since the lower-level machine continues to communicate with the host over the serial port, the host may fail to boot during the restart due to the constantly changing serial port levels. Taking a Raspberry Pi host as an example: if, after the Raspberry Pi powers down, its green LED stays constantly on and never blinks, turn the robot's power switch off and then on again, and the robot will restart normally.

12. Input the command to reboot: sudo reboot

13. After waiting for the device to restart (during the restart, the Raspberry Pi's green LED blinks; when the blinking slows down or stops, startup has succeeded), refresh the page and continue with the rest of this tutorial.

Demo

Directly run the following demo:

1. Select the demo code block below.

2. Run it with Shift + Enter.

3. View the real-time video window.

4. Press STOP to stop the real-time video and release the camera resources.

If you cannot see the real-time camera feed when running:

  • Click on Kernel -> Shut down all kernels above.
  • Close the current section tab and open it again.
  • Click STOP to release the camera resources, then run the code block again.
  • Reboot the device.

Run the Demo

In this chapter of the tutorial, the camera pan-tilt will rotate; make sure your hands and other fragile objects are away from its rotation radius.
The demo detects a blue ball by default. Please make sure there are no blue objects in the background that could interfere with color recognition; you can also change the detection color (in the HSV color space) through secondary development.
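To pick bounds for a different color, it helps to know where the target lands on OpenCV's HSV scale (hue 0–179, saturation/value 0–255). Here is a minimal standalone sketch; the helper names `opencv_hsv` and `hsv_range` are ours (not part of the demo), and it reuses the saturation/value floors from the demo's bounds:

```python
import colorsys

def opencv_hsv(r, g, b):
    """Convert an 8-bit RGB color to OpenCV's HSV scale (H: 0-179, S/V: 0-255)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return round(h * 180) % 180, round(s * 255), round(v * 255)

def hsv_range(r, g, b, h_margin=15, s_min=120, v_min=90):
    """Build lower/upper bounds for cv2.inRange() around a target color,
    reusing the saturation/value floors from the demo's blue-ball bounds."""
    h, _, _ = opencv_hsv(r, g, b)
    lower = (max(h - h_margin, 0), s_min, v_min)
    upper = (min(h + h_margin, 179), 255, 255)
    return lower, upper

# Pure blue lands at H=120 on OpenCV's scale, inside the demo's 90-120 hue band.
print(hsv_range(0, 0, 255))   # ((105, 120, 90), (135, 255, 255))
```

Feed the resulting tuples into np.array() in place of color_lower and color_upper in the demo, then verify the mask visually, since real objects rarely match their nominal RGB color exactly.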

import matplotlib.pyplot as plt
import cv2
from picamera2 import Picamera2
import numpy as np
from IPython.display import display, Image
import ipywidgets as widgets
import threading
import math     # used by gimbal_track() for the distance calculation
import imutils  # used to unpack the cv2.findContours() result
from base_ctrl import BaseController  # the robot's serial control library

# Serial connection to the lower-level controller; the device path and
# baud rate follow the other chapters of this tutorial -- adjust them
# if your installation differs.
base = BaseController('/dev/serial0', 115200)

# Stop button
# ================
stopButton = widgets.ToggleButton(
    value=False,
    description='Stop',
    disabled=False,
    button_style='danger', # 'success', 'info', 'warning', 'danger' or ''
    tooltip='Description',
    icon='square' # (FontAwesome names without the `fa-` prefix)
)


# Gimbal tracking state and tuning (tune the rates to taste)
gimbal_x = 0          # current pan angle (degrees)
gimbal_y = 0          # current tilt angle (degrees)
track_spd_rate = 60   # scales servo speed with distance from frame center
track_acc_rate = 0.4  # scales servo acceleration with distance from frame center
CMD_GIMBAL = 133      # gimbal control command ID; check your robot's JSON protocol docs

def gimbal_track(fx, fy, gx, gy, iterate):
    global gimbal_x, gimbal_y
    # Distance (in pixels) between the frame center and the target
    distance = math.sqrt((fx - gx) ** 2 + (gy - fy) ** 2)
    # Proportional update of the pan/tilt angles, clamped to mechanical limits
    gimbal_x += (gx - fx) * iterate
    gimbal_y += (fy - gy) * iterate
    if gimbal_x > 180:
        gimbal_x = 180
    elif gimbal_x < -180:
        gimbal_x = -180
    if gimbal_y > 90:
        gimbal_y = 90
    elif gimbal_y < -30:
        gimbal_y = -30
    # Speed and acceleration grow with the distance to the target
    gimbal_spd = int(distance * track_spd_rate)
    gimbal_acc = int(distance * track_acc_rate)
    if gimbal_acc < 1:
        gimbal_acc = 1
    if gimbal_spd < 1:
        gimbal_spd = 1
    base.base_json_ctrl({"T":CMD_GIMBAL,"X":gimbal_x,"Y":gimbal_y,"SPD":gimbal_spd,"ACC":gimbal_acc})
    return distance


# Display function
# ================
def view(button):
    picam2 = Picamera2()
    picam2.configure(picam2.create_video_configuration(main={"format": 'XRGB8888', "size": (640, 480)}))
    picam2.start()
    display_handle=display(None, display_id=True)

    color_upper = np.array([120, 255, 220])
    color_lower = np.array([ 90, 120,  90])
    min_radius = 12
    track_color_iterate = 0.023
    
    while True:
        frame = picam2.capture_array()
        # frame = cv2.flip(frame, 1) # if your camera reverses your image

        # uncomment this line if you are using USB camera
        # frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

        img = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
        blurred = cv2.GaussianBlur(img, (11, 11), 0)
        hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, color_lower, color_upper)
        mask = cv2.erode(mask, None, iterations=5)
        mask = cv2.dilate(mask, None, iterations=5)

        cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL,
            cv2.CHAIN_APPROX_SIMPLE)
        cnts = imutils.grab_contours(cnts)
        center = None

        height, width = img.shape[:2]
        center_x, center_y = width // 2, height // 2

        if len(cnts) > 0:
            # find the largest contour in the mask, then use
            # it to compute the minimum enclosing circle and
            # centroid
            c = max(cnts, key=cv2.contourArea)
            ((x, y), radius) = cv2.minEnclosingCircle(c)
            M = cv2.moments(c)
            center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

            # only proceed if the radius meets a minimum size
            if radius > min_radius:
                distance = gimbal_track(center_x, center_y, center[0], center[1], track_color_iterate)
                # Draw the detected circle on the frame that is displayed
                cv2.circle(frame, (int(x), int(y)), int(radius), (128, 255, 255), 1)
        
        
        _, frame = cv2.imencode('.jpeg', frame)
        display_handle.update(Image(data=frame.tobytes()))
        if stopButton.value==True:
            picam2.close()
            display_handle.update(None)
            break  # exit the loop so the thread ends and the camera is released
            
            
# Run
# ================
display(stopButton)
thread = threading.Thread(target=view, args=(stopButton,))
thread.start()
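The update inside gimbal_track is a simple proportional controller, and its core math can be exercised off-robot. The sketch below (standalone, no serial connection; the function name track_step is ours) mirrors the angle update and the clamping to the gimbal's limits:

```python
import math

def track_step(fx, fy, gx, gy, gain, pan=0.0, tilt=0.0):
    """One proportional tracking update: nudge pan/tilt toward the target
    and clamp to the gimbal's limits (pan +/-180, tilt -30..90 degrees)."""
    pan = max(-180.0, min(180.0, pan + (gx - fx) * gain))
    tilt = max(-30.0, min(90.0, tilt + (fy - gy) * gain))
    distance = math.hypot(fx - gx, fy - gy)
    return pan, tilt, distance

# A target 100 px right of a 640x480 frame's center pans right, tilt unchanged
print(track_step(320, 240, 420, 240, 0.023))
```

Because the step size is proportional to the pixel error, the gimbal moves quickly when the ball is far from the frame center and settles gently as the error shrinks; the gain (track_color_iterate in the demo, 0.023) trades responsiveness against oscillation.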