In this project we address the problem of detecting and tracking moving objects in video sequences captured in complex scenarios. The ideal goal of segmentation is to identify the semantically meaningful components of an image and to group the pixels belonging to those components. While fully automatic segmentation of static objects in a single image is not yet feasible, it is more practical to segment moving objects from a dynamic scene with the aid of the motion information it contains. The aim of the segmentation stage is to partition each video frame into background, foreground objects, and sub-objects with different characteristics, so that the resulting mesh can represent the motion of the objects accurately.
IMAGE SEGMENTATION BASED DETECTION
~Temporal Multiscale Decomposition
STEPS IN THE PROJECT—PROJECT FLOW
~Read the video sequence
~Perform segmentation-based detection using Temporal Multiscale Decomposition.
~Perform post-processing on the segmented sequence.
~Track the detected objects using the EM algorithm.
~Store the output images.
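The flow above can be sketched as a minimal pipeline. This is an illustrative skeleton only: the function names (`segment`, `postprocess`, `track`) are placeholders standing in for the project's actual segmentation, post-processing, and EM tracking stages.

```python
def run_pipeline(frames, segment, postprocess, track):
    """Apply segmentation, post-processing, and tracking to each frame.

    segment     -- frame -> mask (e.g. temporal multiscale decomposition)
    postprocess -- mask -> cleaned mask (noise removal)
    track       -- (state, mask) -> updated tracker state (e.g. EM update)
    """
    state = None
    outputs = []
    for frame in frames:
        mask = segment(frame)        # detection on the current frame
        mask = postprocess(mask)     # clean up the detection mask
        state = track(state, mask)   # update the tracker with the new mask
        outputs.append((mask, state))
    return outputs                   # stored per-frame results
```

Each stage is passed in as a callable, so different combinations of detection and tracking algorithms can be swapped in without changing the pipeline itself.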
The prime focus of this project is to detect and track moving objects in video at low to moderate resolution and frame rate. A flexible tracking pipeline is used that allows different combinations of foreground extraction, feature extraction, and motion correspondence algorithms to be investigated. In feature extraction, features such as the centroid, the size, and the average pixel intensity of each moving object are extracted. These are then used in tracking algorithms such as Kalman filters to track the objects from one frame to the next. Additional logic handles track maintenance: it determines how many frames an object must be tracked successfully before it is assigned a track, and how many frames may pass with no object assigned to an existing track before that track is dropped.
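The per-object features named above (centroid, size, average intensity) can be computed directly from the pixel coordinates of a detected region. A minimal sketch, assuming the region is given as a list of (row, col) pairs and the frame as a 2-D list of grayscale values:

```python
def region_features(pixels, image):
    """Compute centroid, size, and mean intensity of one moving region.

    pixels -- list of (row, col) coordinates belonging to the region
    image  -- 2-D list of grayscale intensities
    """
    n = len(pixels)
    centroid_row = sum(r for r, _ in pixels) / n
    centroid_col = sum(c for _, c in pixels) / n
    mean_intensity = sum(image[r][c] for r, c in pixels) / n
    return (centroid_row, centroid_col), n, mean_intensity
```

The centroid feeds the motion-correspondence step (e.g. as the measurement for a Kalman filter), while size and mean intensity help match the same object across frames.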
The algorithm is implemented in four steps, as illustrated below:
(1) In the first step, any apparent camera motion is estimated and compensated using an 8-parameter motion model.
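A common choice of 8-parameter motion model is the projective (perspective) transform, whose eight coefficients map a pixel coordinate in one frame to its position in the next. The sketch below shows only how such a model warps a single point; the parameter values are illustrative, and the source does not specify which 8-parameter model the project uses.

```python
def warp_point(x, y, p):
    """Map (x, y) through an 8-parameter projective motion model.

    p = (a1..a8):  x' = (a1*x + a2*y + a3) / (a7*x + a8*y + 1)
                   y' = (a4*x + a5*y + a6) / (a7*x + a8*y + 1)
    """
    a1, a2, a3, a4, a5, a6, a7, a8 = p
    d = a7 * x + a8 * y + 1.0       # projective denominator
    return (a1 * x + a2 * y + a3) / d, (a4 * x + a5 * y + a6) / d
```

Compensating camera motion means warping the previous frame with the estimated parameters before any frame-to-frame comparison, so that only true object motion remains.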
(2) In the second step, a scene cut or strong camera pan is detected by evaluating the mean-squared error (MSE) between two successive frames, considering only the background regions of the previous frame. In the case of a scene cut, the algorithm is reset.
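The background-restricted MSE test can be sketched as follows; the threshold value is a tunable parameter, not one specified by the source.

```python
def background_mse(prev, curr, bg_mask, threshold):
    """MSE between two frames over background pixels only.

    prev, curr -- 2-D lists of grayscale intensities
    bg_mask    -- 2-D list of booleans, True where the previous frame
                  was classified as background
    Returns (mse, is_scene_cut).
    """
    diffs = [(prev[r][c] - curr[r][c]) ** 2
             for r in range(len(prev))
             for c in range(len(prev[0]))
             if bg_mask[r][c]]
    mse = sum(diffs) / len(diffs)
    return mse, mse > threshold     # large background MSE => cut/pan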
(3) The third step is the change detection mask (CDM) module. First, an initial CDM (CDMi) between two successive frames is generated by a relaxation technique, using local thresholds that consider the state of neighboring pixels. In order to obtain temporally stable object regions, a memory of change detection masks is then applied to make use of the previous segmentation results. The updated CDM (CDMu) is then simplified with a morphological closing operator to generate the final CDM for object detection.
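The neighbour-aware thresholding and mask memory can be sketched as below. This is a crude stand-in for the actual relaxation technique: a pixel is marked changed if its frame difference exceeds a high threshold, or exceeds a lower threshold while it (or a 4-neighbour) was already marked in the previous mask. The two threshold values are illustrative.

```python
def change_detection_mask(prev, curr, prev_cdm, t_high=20, t_low=10):
    """Sketch of CDM generation with local thresholds and mask memory.

    prev, curr -- 2-D lists of grayscale intensities
    prev_cdm   -- previous frame's CDM (2-D list of booleans)
    """
    rows, cols = len(curr), len(curr[0])
    cdm = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            d = abs(curr[r][c] - prev[r][c])
            # lower the threshold near pixels already marked as changed
            near = any(prev_cdm[r + dr][c + dc]
                       for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                       if 0 <= r + dr < rows and 0 <= c + dc < cols)
            memory = prev_cdm[r][c]          # one-frame mask memory
            cdm[r][c] = d > t_high or (d > t_low and (near or memory))
    return cdm
```

The memory term keeps object regions temporally stable: a pixel that was part of an object last frame stays marked even when its current frame difference is small, and the subsequent morphological closing fills remaining holes.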
(4) In the fourth step, the same relaxation technique as described above is used to obtain an initial moving-object mask (OMi) from the CDM. This mask is then adapted to the luminance edges of the corresponding frame, resulting in the final object region.
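One way to turn a CDM into candidate object regions, before any edge adaptation, is 4-connected component labelling with a minimum-size filter; the sketch below uses that approach, though the source does not state which grouping method the project applies, and `min_size` is an illustrative parameter.

```python
def object_mask_regions(cdm, min_size=2):
    """Extract initial object regions from a CDM by 4-connected
    labelling, discarding regions smaller than min_size pixels."""
    rows, cols = len(cdm), len(cdm[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r in range(rows):
        for c in range(cols):
            if cdm[r][c] and not seen[r][c]:
                stack, region = [(r, c)], []
                seen[r][c] = True
                while stack:                 # flood fill one region
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and cdm[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(region) >= min_size:  # drop isolated noise pixels
                    regions.append(region)
    return regions
```

The resulting region boundaries would then be refined toward nearby luminance edges of the frame to produce the final object mask.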
Post-processing techniques are applied to reduce noise. The background elimination method uses the concept of least squares to compare the accuracy of the current algorithm with that of existing algorithms. The background registration method uses background subtraction; it improves on the adaptive background mixture model, allowing the system to learn faster and more accurately, and to adapt effectively to changing environments.
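A simple adaptive background model, shown here as a running average rather than the full mixture model the text refers to, illustrates how background subtraction adapts to a changing environment. The learning rate `alpha` and `threshold` are illustrative values.

```python
def update_background(bg, frame, alpha=0.05, threshold=25):
    """Running-average background model.

    bg, frame -- 2-D lists of grayscale intensities
    Returns (updated background, foreground mask).
    """
    rows, cols = len(frame), len(frame[0])
    # pixels far from the background model are foreground
    fg = [[abs(frame[r][c] - bg[r][c]) > threshold for c in range(cols)]
          for r in range(rows)]
    # slowly blend the current frame into the background model
    new_bg = [[(1 - alpha) * bg[r][c] + alpha * frame[r][c]
               for c in range(cols)] for r in range(rows)]
    return new_bg, fg
```

A larger `alpha` adapts faster to lighting changes but risks absorbing slow-moving objects into the background, which is the trade-off the adaptive mixture model is designed to manage per pixel.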