Jetson 19 Color Recognition Based on OpenCV
In this tutorial, we combine several OpenCV image-processing functions, such as blurring, color space conversion, erosion, and dilation, to recognize objects of a specific color in the camera feed.
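For orientation, here is a minimal sketch of these operations applied to a single still image. The file name test.jpg is a placeholder; the kernel size, iteration counts, and HSV bounds mirror the values used in the full demo below.

import cv2
import numpy as np

img = cv2.imread('test.jpg')                    # Placeholder input image
blurred = cv2.GaussianBlur(img, (11, 11), 0)    # Blurring: suppress noise
hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)  # Color space conversion: BGR -> HSV
mask = cv2.inRange(hsv,
                   np.array([90, 120, 90]),     # Lower HSV bound
                   np.array([120, 255, 220]))   # Upper HSV bound
mask = cv2.erode(mask, None, iterations=5)      # Erosion: remove small white specks
mask = cv2.dilate(mask, None, iterations=5)     # Dilation: restore the remaining regions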
Preparation
The product automatically runs the main program at startup, and that program occupies the camera resource. While it is running, this tutorial cannot be used. You need to terminate the main program or disable its automatic startup before restarting the robot.
It's worth noting that because the robot's main program uses multi-threading and is configured to run automatically at startup through crontab, the usual method sudo killall python typically doesn't work. Therefore, we'll introduce the method of disabling the automatic startup of the main program here.
If you have already disabled the automatic startup of the robot's main demo, you do not need to proceed with the section on Terminate the Main Demo.
Terminate the Main Demo
- 1. Click the + icon next to the tab for this page to open a new tab called "Launcher."
- 2. Click on Terminal under Other to open a terminal window.
- 3. Type bash into the terminal window and press Enter.
- 4. Now you can use the Bash Shell to control the robot.
- 5. Enter the command: sudo killall -9 python.
Demo
The following code block can be run directly:
- 1. Select the code block below.
- 2. Press Shift + Enter to run the code block.
- 3. Watch the real-time video window.
- 4. Press STOP to close the real-time video and release the camera resources.
If you cannot see the real-time camera feed when running the code block, try the following:
- Click Kernel -> Shut Down All Kernels in the menu bar above.
- Close the current section tab and open it again.
- Click STOP to release the camera resources, then run the code block again.
- Reboot the device.
Execution
By default, the example detects a blue ball. Make sure there are no blue objects in the background that could interfere with color recognition. You can also change the target color (defined in the HSV color space) through secondary development.
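If you want to detect a different color, one common approach is to convert a sample BGR value of the target color to HSV and build a band around the resulting hue. This is a sketch only; the ±10 hue margin and the saturation/value limits are illustrative, not tuned values from the demo.

import cv2
import numpy as np

# Convert a one-pixel BGR sample of the target color (here, pure red) to HSV.
sample_bgr = np.uint8([[[0, 0, 255]]])
h, s, v = cv2.cvtColor(sample_bgr, cv2.COLOR_BGR2HSV)[0][0]

# Build lower/upper bounds around the sample hue (OpenCV hue range is 0-179).
color_lower = np.array([max(int(h) - 10, 0), 120, 90])
color_upper = np.array([min(int(h) + 10, 179), 255, 220])
print(color_lower, color_upper)

Replace color_lower and color_upper in the code block below with the printed values to track the new color.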
import cv2
import imutils
from picamera2 import Picamera2  # Library for accessing the Raspberry Pi Camera
import numpy as np  # Library for mathematical calculations
from IPython.display import display, Image  # Library for displaying images in Jupyter Notebook
import ipywidgets as widgets  # Library for creating interactive widgets such as buttons
import threading  # Library for creating new threads to execute tasks asynchronously

# Create a "Stop" button that users can click to stop the video stream
# ================================================================
stopButton = widgets.ToggleButton(
    value=False,
    description='Stop',
    disabled=False,
    button_style='danger',  # 'success', 'info', 'warning', 'danger' or ''
    tooltip='Description',
    icon='square'  # (FontAwesome names without the `fa-` prefix)
)

# Define the display function to process video frames and recognize objects of specific colors
def view(button):
    # If you are using a CSI camera, uncomment the picam2 code below and comment out the
    # USB camera (cv2.VideoCapture) code. Since the latest versions of OpenCV (4.9.0.80)
    # no longer support CSI cameras, you need to use picamera2 to get the camera footage.
    # picam2 = Picamera2()  # Create an instance of Picamera2
    # Configure camera parameters, set the format and size of the video
    # picam2.configure(picam2.create_video_configuration(main={"format": 'XRGB8888', "size": (640, 480)}))
    # picam2.start()  # Start the camera

    camera = cv2.VideoCapture(-1)  # Create a camera instance
    # Set the resolution
    camera.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    camera.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

    display_handle = display(None, display_id=True)  # Create a display handle to update displayed images

    # Define the color range to be detected (HSV)
    color_upper = np.array([120, 255, 220])
    color_lower = np.array([90, 120, 90])
    min_radius = 12  # Define the minimum radius for detecting objects

    while True:
        # img = picam2.capture_array()  # Capture a frame from the CSI camera
        _, img = camera.read()  # Capture a frame from the USB camera
        # img = cv2.flip(img, 1)  # Uncomment if your camera mirrors the image

        blurred = cv2.GaussianBlur(img, (11, 11), 0)  # Apply Gaussian blur to the image to remove noise
        hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)  # Convert the image from BGR to HSV color space
        mask = cv2.inRange(hsv, color_lower, color_upper)  # Create a mask to retain only objects within the color range
        mask = cv2.erode(mask, None, iterations=5)  # Apply erosion to the mask to remove small white spots
        mask = cv2.dilate(mask, None, iterations=5)  # Apply dilation to the mask to highlight the object regions

        # Find contours in the mask
        cnts = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        cnts = imutils.grab_contours(cnts)  # Extract contours
        center = None  # Initialize the center of the object

        if len(cnts) > 0:
            # Find the largest contour in the mask, then use it to compute
            # the minimum enclosing circle and centroid
            c = max(cnts, key=cv2.contourArea)  # Find the largest contour
            ((x, y), radius) = cv2.minEnclosingCircle(c)  # Compute the minimum enclosing circle of the contour
            M = cv2.moments(c)  # Compute the moments of the contour
            center = (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))  # Compute the center of the contour from the moments

            if radius > min_radius:
                # If the radius of the minimum enclosing circle is greater than the predefined
                # minimum radius, draw the enclosing circle and the center point
                cv2.circle(img, (int(x), int(y)), int(radius), (128, 255, 255), 1)  # Draw the minimum enclosing circle
                cv2.circle(img, center, 3, (128, 255, 255), -1)  # Draw the center point

        _, frame = cv2.imencode('.jpeg', img)  # Encode the frame to JPEG format
        display_handle.update(Image(data=frame.tobytes()))  # Update the displayed image

        if stopButton.value == True:  # Check if the "Stop" button has been pressed
            # picam2.close()  # If yes, close the CSI camera
            camera.release()  # If yes, release the USB camera
            display_handle.update(None)  # Clear the displayed content
            break  # Exit the loop so the thread can finish

# Display the "Stop" button and start a thread to execute the display function
# ================================================================
display(stopButton)
thread = threading.Thread(target=view, args=(stopButton,))
thread.start()
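For secondary development it is often more useful to obtain the detected coordinates than to only draw them. The helper below is a sketch that repackages the same processing steps into a standalone function returning the center and radius of the largest matching blob; find_color_center is a hypothetical name and is not part of the product's own code.

import cv2
import imutils
import numpy as np

def find_color_center(img, color_lower, color_upper, min_radius=12):
    # Return ((x, y), radius) of the largest blob in the given HSV range, or (None, 0).
    blurred = cv2.GaussianBlur(img, (11, 11), 0)
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, color_lower, color_upper)
    mask = cv2.erode(mask, None, iterations=5)
    mask = cv2.dilate(mask, None, iterations=5)
    cnts = imutils.grab_contours(
        cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE))
    if not cnts:
        return None, 0
    c = max(cnts, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(c)
    if radius < min_radius:
        return None, 0
    return (int(x), int(y)), int(radius)

# Example: grab a single frame from the USB camera and print the detected center.
camera = cv2.VideoCapture(-1)
ok, frame = camera.read()
camera.release()
if ok:
    center, radius = find_color_center(frame,
                                       np.array([90, 120, 90]),
                                       np.array([120, 255, 220]))
    print("center:", center, "radius:", radius)

The returned center can then be fed into your own logic, for example computing its offset from the middle of the frame to drive a tracking behavior.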