1. Examples
Video Capture
This is a simple reproduction of the camera feed. You can use the Open button to restart the video capture and the Release button to stop it. This example is a nice starting point if you wish to process live video.
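The scene itself is driven through the extension's Godot bindings, but the underlying flow can be sketched with plain OpenCV in Python. This is a minimal sketch; the camera index, window name, and 'q' quit key are illustrative assumptions, not part of the example.

```python
import cv2

# Open the default camera (index 0 is an assumption; any valid index or file path works).
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()  # grab the next frame of the live feed
    if not ok:
        break
    cv2.imshow("Video Capture", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop
        break

cap.release()  # equivalent of the example's release button
cv2.destroyAllWindows()
```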
Core
This scene has four live feeds. It grabs a comparison frame at the start and combines it with the current frame to show some of the core array operations; a sketch of the equivalent OpenCV calls follows the list. You can reset the comparison frame with the "Reset comparison" button. From left to right and top to bottom they show:
- Current frame transposed
- The bitwise AND of the comparison frame and the current frame
- The current frame subtracted from the comparison frame
- The element-wise maximum of the current frame and the comparison frame
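A minimal sketch of the four operations with plain OpenCV in Python, assuming a webcam at index 0 and using the 'r' key in place of the "Reset comparison" button:

```python
import cv2

cap = cv2.VideoCapture(0)
ok, comparison = cap.read()  # comparison frame captured at the start

while True:
    ok, frame = cap.read()
    if not ok:
        break

    transposed = cv2.transpose(frame)                # current frame transposed
    anded = cv2.bitwise_and(comparison, frame)       # bitwise AND of the two frames
    subtracted = cv2.subtract(comparison, frame)     # current frame subtracted from the comparison frame
    maximum = cv2.max(frame, comparison)             # element-wise maximum

    cv2.imshow("Transpose", transposed)
    cv2.imshow("Bitwise AND", anded)
    cv2.imshow("Subtract", subtracted)
    cv2.imshow("Max", maximum)

    key = cv2.waitKey(1) & 0xFF
    if key == ord("r"):          # 'r' plays the role of the "Reset comparison" button
        comparison = frame.copy()
    elif key == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```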
Cascade classifier
This scene searches for an object based on the pre-trained XML file it loads on ready. It draws a box around each detected object.
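A minimal sketch of the same idea with plain OpenCV in Python, assuming the frontal-face Haar cascade that ships with opencv-python in place of whatever XML the scene actually loads:

```python
import cv2

# Pre-trained cascade; the bundled frontal-face cascade is an assumption standing in
# for the XML file the scene loads on ready.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns one (x, y, w, h) box per detected object
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Cascade classifier", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```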
Threshold and Morphology
This scene has four live feeds used to show thresholding and morphological operations. The video is converted to grayscale, binarized with thresholding, and modified with morphological operations. You can change the threshold type, the threshold value (when the threshold is not adaptive), and the size of the kernels used in the morphological operations; a sketch of the equivalent OpenCV calls follows the list. From left to right and top to bottom they show:
- Current frame
- Threshold
- Morphological closing
- Morphological gradient
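A minimal Python sketch of the corresponding OpenCV calls, assuming a fixed binary threshold and a 5x5 kernel in place of the scene's adjustable controls:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
kernel = np.ones((5, 5), np.uint8)  # kernel size is adjustable in the scene

while True:
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Fixed binary threshold; the scene also offers other types and an adaptive mode.
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    closing = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)      # morphological closing
    gradient = cv2.morphologyEx(binary, cv2.MORPH_GRADIENT, kernel)  # morphological gradient

    cv2.imshow("Current frame", frame)
    cv2.imshow("Threshold", binary)
    cv2.imshow("Closing", closing)
    cv2.imshow("Gradient", gradient)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```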
Noise Filtering
This scene has four live feeds used to show different options for filtering noise. You can change the "force" of the bilateral filter and the kernel sizes for the Gaussian and median blurs; a sketch of the equivalent OpenCV calls follows the list. From left to right and top to bottom they show:
- Current frame
- Median blur
- Gaussian blur
- Bilateral filter
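A minimal Python sketch of the three filters with plain OpenCV; the kernel sizes and bilateral-filter parameters below are stand-ins for the scene's sliders:

```python
import cv2

cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    median = cv2.medianBlur(frame, 5)              # 5x5 median kernel (adjustable in the scene)
    gaussian = cv2.GaussianBlur(frame, (5, 5), 0)  # 5x5 Gaussian kernel, sigma derived from it
    # d=9 and sigmaColor/sigmaSpace=75 stand in for the scene's "force" control
    bilateral = cv2.bilateralFilter(frame, 9, 75, 75)

    cv2.imshow("Current frame", frame)
    cv2.imshow("Median blur", median)
    cv2.imshow("Gaussian blur", gaussian)
    cv2.imshow("Bilateral filter", bilateral)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```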
Background Subtractor
An example of the KNN (K-nearest neighbors) and MOG2 (Gaussian Mixture v2) algorithms for background/foreground segmentation. These are video-analysis methods used to separate the pixels of an image that change more often from those that don't, and they can be used to get a clean plate from a scene with a variety of moving objects. The controls in this scene let you turn the background calculation on and off, switch between KNN and MOG2, and change the output to the background generated by the subtractor.
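A minimal Python sketch of both subtractors with plain OpenCV; the 's' key stands in for the scene's KNN/MOG2 switch, and keeping both instances alive is just for illustration:

```python
import cv2

cap = cv2.VideoCapture(0)
# Either subtractor can be used; the scene lets you switch between the two.
knn = cv2.createBackgroundSubtractorKNN()
mog2 = cv2.createBackgroundSubtractorMOG2()
subtractor = knn

while True:
    ok, frame = cap.read()
    if not ok:
        break

    mask = subtractor.apply(frame)                # foreground mask, updated every frame
    background = subtractor.getBackgroundImage()  # the "clean plate" accumulated so far

    cv2.imshow("Foreground mask", mask)
    if background is not None:
        cv2.imshow("Background", background)

    key = cv2.waitKey(1) & 0xFF
    if key == ord("s"):  # 's' stands in for the scene's KNN/MOG2 switch
        subtractor = mog2 if subtractor is knn else knn
    elif key == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```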
Face Detector and Recognition
An example of two deep-neural-network implementations: YuNet face detection and SFace face recognition. This scene instantiates a text input asking for a name next to each face it recognizes as different enough from the known ones. It doesn't save the faces in any persistent way. A feature is generated from each detected face and compared to the saved ones: if it matches one, the corresponding card is moved based on the face's position; if it's different enough, the feature is added to the saved ones and a new card is instantiated.
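The same pipeline can be sketched with OpenCV's FaceDetectorYN and FaceRecognizerSF classes in Python. This is a minimal sketch: the model file names, the 0.363 cosine-similarity threshold, and the in-memory feature list are assumptions standing in for the scene's cards and text inputs.

```python
import cv2

# Model paths are assumptions: the YuNet and SFace ONNX files from the OpenCV model zoo.
detector = cv2.FaceDetectorYN.create("face_detection_yunet_2023mar.onnx", "", (320, 320))
recognizer = cv2.FaceRecognizerSF.create("face_recognition_sface_2021dec.onnx", "")

known_features = []  # features kept only in memory, mirroring the non-persistent scene

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break

    h, w = frame.shape[:2]
    detector.setInputSize((w, h))
    _, faces = detector.detect(frame)  # each row holds a box, landmarks and a score

    for face in (faces if faces is not None else []):
        aligned = recognizer.alignCrop(frame, face)  # crop and align using the landmarks
        feature = recognizer.feature(aligned)        # embedding for this face

        # Compare against every stored feature using cosine similarity.
        scores = [recognizer.match(feature, f, cv2.FaceRecognizerSF_FR_COSINE)
                  for f in known_features]
        if not scores or max(scores) < 0.363:  # 0.363: commonly used cosine threshold for SFace
            known_features.append(feature)     # new identity (the scene spawns a card here)

        x, y, bw, bh = face[:4].astype(int)
        cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)

    cv2.imshow("Face recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```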
Trackers
Trackers are used to update the position of a bounding box based on the movement of a region of an image. This example implements the KCF (Kernelized Correlation Filter) and CSRT (Correlation Filter with Channel and Spatial Reliability) contrib trackers. You can select a region at any time by clicking and dragging, and the resulting bounding box will be updated. The controls let you turn each tracker on and off individually.
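A minimal Python sketch of the two trackers with plain OpenCV (the tracking module from opencv-contrib-python); cv2.selectROI stands in for the scene's click-and-drag selection:

```python
import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()

# Select the initial region by clicking and dragging, as in the scene.
bbox = cv2.selectROI("Trackers", frame)

# KCF and CSRT trackers, both initialized on the same box.
kcf = cv2.TrackerKCF_create()
csrt = cv2.TrackerCSRT_create()
kcf.init(frame, bbox)
csrt.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    for name, tracker, color in (("KCF", kcf, (0, 255, 0)), ("CSRT", csrt, (0, 0, 255))):
        found, box = tracker.update(frame)  # update the box from the region's movement
        if found:
            x, y, w, h = map(int, box)
            cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
            cv2.putText(frame, name, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)

    cv2.imshow("Trackers", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```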