Jetson 15 OpenCV Motion Detection

From Waveshare Wiki

This tutorial uses OpenCV to detect changes in the scene. You can set a threshold for how much change counts as motion; adjusting this threshold modifies the sensitivity of the motion detection.

This chapter requires an understanding of the preceding chapters.

Preparation

Because the product automatically runs its main program at startup, which occupies the camera resource, this tutorial cannot be used while that program is running. You need to terminate the main program, or disable its automatic startup and then restart the robot.
It is worth noting that because the robot's main program uses multi-threading and is configured to start automatically at boot through crontab, the usual sudo killall python method typically does not work. Therefore, we introduce the method of disabling the main program's automatic startup here.
If you have already disabled the automatic startup of the robot's main demo, you can skip the Terminate the Main Demo section.

Terminate the Main Demo

1. Click the "+" icon next to the tab for this page to open a new tab called "Launcher."
2. Click on "Terminal" under "Other" to open a terminal window.
3. Type bash into the terminal window and press Enter.
4. Now you can use the Bash Shell to control the robot.
5. Enter the command: sudo killall -9 python.

Example

The following code block can be run directly:

1. Select the code block below.
2. Press Shift + Enter to run the code block.
3. Watch the real-time video window.
4. Press STOP to close the real-time video and release the camera resources.

If you cannot see the real-time camera feed when running:

  • Click Kernel -> Shut Down All Kernels in the menu above.
  • Close the current section tab and open it again.
  • Click STOP to release the camera resources, then run the code block again.
  • Reboot the device.

Notes

If you are using a USB camera, you need to uncomment the line frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB).

Features of This Chapter

You can adjust the threshold parameter to control how sensitive OpenCV is to changes in the scene. The threshold is the minimum contour area (in pixels) that counts as motion: the lower the threshold value, the smaller the changes that are reported, and thus the more sensitive the detection.

Running

When you run the code block, you can see the real-time feed from the camera. You can wave your hand in front of the camera, and the program will automatically outline the areas of change with green boxes.

import cv2
from picamera2 import Picamera2
import numpy as np
from IPython.display import display, Image
import ipywidgets as widgets
import threading

import imutils  # Library for simplifying image processing tasks

threshold = 2000  # Set the threshold for motion detection

# Create a "Stop" button to control the process
# ===================================================
stopButton = widgets.ToggleButton(
    value=False,
    description='Stop',
    disabled=False,
    button_style='danger',  # 'success', 'info', 'warning', 'danger' or ''
    tooltip='Description',
    icon='square'  # Button icon (FontAwesome name without the `fa-` prefix)
)


# Display function definition, used to capture and process video frames, while performing motion detection
# ===================================================
def view(button):
    # If you are using a CSI camera, uncomment the picam2 lines below and comment out the cv2.VideoCapture lines
    # Recent OpenCV releases (e.g. 4.9.0.80) no longer support CSI cameras directly, so picamera2 is used to get the camera feed
    # picam2 = Picamera2()  # Create a Picamera2 instance
    # picam2.configure(picam2.create_video_configuration(main={"format": 'XRGB8888', "size": (640, 480)}))  # Configure camera parameters
    # picam2.start()  # Start the camera
    
    camera = cv2.VideoCapture(-1)  # Create a camera instance (-1 selects the first available device)
    # Set the capture resolution
    camera.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    camera.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    
    display_handle = display(None, display_id=True)

    avg = None  # Used to store the average frame

    while True:
        # frame = picam2.capture_array()  # Capture a frame from the camera
        # frame = cv2.flip(frame, 1) # if your camera reverses your image
        ret, frame = camera.read()  # Capture a frame from the camera
        if not ret:  # Skip this iteration if the frame could not be read
            continue

        img = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)  # Convert frame color from RGB to BGR
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # Convert the frame to grayscale
        gray = cv2.GaussianBlur(gray, (21, 21), 0)  # Apply Gaussian blur to the grayscale image
        if avg is None:  # If the average frame does not exist, create it
            avg = gray.copy().astype("float")
            continue

        try:
            cv2.accumulateWeighted(gray, avg, 0.5)  # Update the average frame
        except cv2.error:  # Skip this frame if the accumulation fails (e.g. frame size mismatch)
            continue

        frameDelta = cv2.absdiff(gray, cv2.convertScaleAbs(avg))  # Calculate the difference between the current frame and the average frame

        # Apply a threshold to find contours in the difference image
        thresh = cv2.threshold(frameDelta, 5, 255, cv2.THRESH_BINARY)[1]
        thresh = cv2.dilate(thresh, None, iterations=2)
        cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        cnts = imutils.grab_contours(cnts)
        # Iterate through contours
        for c in cnts:
            # Ignore contours that are too small
            if cv2.contourArea(c) < threshold:
                continue
            # Calculate the bounding box of the contour and draw a rectangle around it
            (mov_x, mov_y, mov_w, mov_h) = cv2.boundingRect(c)
            cv2.rectangle(frame, (mov_x, mov_y), (mov_x + mov_w, mov_y + mov_h), (128, 255, 0), 1)  # Draw a rectangle around the moving area

        _, frame = cv2.imencode('.jpeg', frame)  # Encode the processed frame in JPEG format
        display_handle.update(Image(data=frame.tobytes()))  # Update the displayed image
        if stopButton.value:  # Check if the "Stop" button is pressed
            # picam2.close()  # If using a CSI camera, close it here instead
            camera.release()  # Release the camera resource
            display_handle.update(None)  # Clear the displayed image
            break  # Exit the loop so the thread can finish


# Display the stop button and start the video stream display thread
# ===================================================
display(stopButton)
thread = threading.Thread(target=view, args=(stopButton,))
thread.start()