Jetson 17 Face Recognition Based on OpenCV
This chapter introduces how to use OpenCV with a feature database file to achieve face recognition. Although this method is not as efficient as MediaPipe's solution, it can detect other objects as well: simply swap in a different feature database file.
Preparation
Since the product automatically runs the main program at startup, which occupies the camera resource, this tutorial cannot be used while the main program is running. You need to terminate the main program or disable its automatic startup, then restart the robot.
It's worth noting that because the robot's main program uses multi-threading and is configured to start automatically via crontab, the usual sudo killall python command typically doesn't work. Therefore, we introduce here how to disable the automatic startup of the main program.
If you have already disabled the automatic startup of the robot's main demo, you do not need to proceed with the section on Terminate the Main Demo.
Terminate the Main Demo
- 1. Click the "+" icon next to the tab for this page to open a new tab called "Launcher."
- 2. Click on "Terminal" under "Other" to open a terminal window.
- 3. Type bash into the terminal window and press Enter.
- 4. Now you can use the Bash Shell to control the robot.
- 5. Enter the command: sudo killall -9 python.
Example
The following code block can be run directly:
- 1. Select the code block below.
- 2. Press Shift + Enter to run the code block.
- 3. Watch the real-time video window.
- 4. Press STOP to close the real-time video and release the camera resources.
If you cannot see the real-time camera feed when running the code block, try the following:
- Click Kernel -> Shut Down All Kernels in the menu bar above.
- Close the current section tab and open it again.
- Click STOP to release the camera resources, then run the code block again.
- Reboot the device.
Features of This Chapter
The face feature database file is located in the same directory as this .ipynb file. You can change the faceCascade variable to modify what is detected by replacing the current haarcascade_frontalface_default.xml file with a different feature file.
When the code block runs successfully, you can position the robot's camera on a face, and the area containing the face will be automatically highlighted on the screen.
```python
import cv2  # Import the OpenCV library for image processing
from picamera2 import Picamera2  # Library for accessing the Raspberry Pi Camera
import numpy as np  # Library for mathematical calculations
from IPython.display import display, Image  # Library for displaying images in Jupyter Notebook
import ipywidgets as widgets  # Library for creating interactive widgets like buttons
import threading  # Library for creating new threads to execute tasks asynchronously

# Load the Haar cascade classifier for face detection
faceCascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# Create a "Stop" button so the user can stop the video stream by clicking it
# ================
stopButton = widgets.ToggleButton(
    value=False,
    description='Stop',
    disabled=False,
    button_style='danger',  # 'success', 'info', 'warning', 'danger' or ''
    tooltip='Description',
    icon='square'  # (FontAwesome names without the `fa-` prefix)
)

# Define a display function to process video frames and perform face detection
# ================
def view(button):
    # If you are using a CSI camera, uncomment the picam2 lines and comment out
    # the cv2.VideoCapture lines. Recent versions of OpenCV (e.g. 4.9.0.80) no
    # longer support CSI cameras, so picamera2 is used to capture the feed instead.
    # picam2 = Picamera2()  # Create a Picamera2 instance
    # picam2.configure(picam2.create_video_configuration(main={"format": 'XRGB8888', "size": (640, 480)}))  # Configure camera parameters
    # picam2.start()  # Start the camera

    camera = cv2.VideoCapture(-1)  # Create a camera instance
    # Set the resolution
    camera.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    camera.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

    display_handle = display(None, display_id=True)  # Display handle used to update the shown image
    while True:
        # frame = picam2.capture_array()
        _, frame = camera.read()  # Capture one frame from the camera
        # frame = cv2.flip(frame, 1)  # Uncomment if your camera mirrors the image

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # Face detection is performed on grayscale images

        # Perform face detection using the cascade classifier
        faces = faceCascade.detectMultiScale(
            gray,
            scaleFactor=1.2,
            minNeighbors=5,
            minSize=(20, 20)
        )

        for (x, y, w, h) in faces:  # Loop through all detected faces
            cv2.rectangle(frame, (x, y), (x + w, y + h), (64, 128, 255), 1)  # Draw a rectangle around the detected face

        _, jpeg = cv2.imencode('.jpeg', frame)  # Encode the frame as JPEG format
        display_handle.update(Image(data=jpeg.tobytes()))  # Update the displayed image

        if stopButton.value == True:  # Check whether the "Stop" button was pressed
            # picam2.close()  # If yes, close the CSI camera
            camera.release()  # If yes, release the camera resource
            display_handle.update(None)
            break  # Exit the loop

# Display the "Stop" button and start a thread to execute the display function
# ================
display(stopButton)
thread = threading.Thread(target=view, args=(stopButton,))
thread.start()
```