Occlusion-aware 3-D multiple object tracking for visual surveillance

2013
Topçu, Osman
This thesis presents an occlusion-aware particle filter framework for online tracking of multiple people with observations from multiple cameras with overlapping fields of view for surveillance applications. The surveillance problem involves inferring people's motives from their actions, which are deduced from their trajectories. Visual tracking is required to obtain these trajectories, and it is a challenging problem due to motion model variations, size and illumination changes and, especially, occlusions between moving objects. At the expense of an increased number of cameras, tracking in 3D world coordinates is preferred over its 2D counterpart because of reduced viewpoint dependency and better occlusion handling through ordering of the targets. The contributions of this novel algorithm are: (1) the observation error is increased in accordance with the estimated occlusion probability, so that the uncertainty in object location during an occlusion is taken into consideration; (2) the increased observation error causes the particle cloud to spread, which in turn widens the association gate, so that the location uncertainty during occlusion is covered by the gate and the chance of losing the object is reduced; (3) the observation error is allowed to scale with the dimensions of the associated silhouette, so that distant and close objects can be tracked without parameter adjustment, while the particles are also saved from degeneracy when silhouettes merge during occlusion. The improved performance of the proposed tracker is demonstrated in comparison with other state-of-the-art trackers on the PETS 2009, EPFL and PETS 2006 datasets. Furthermore, the proposed algorithm is extended to surveillance systems whose cameras are not synchronized in time. The time difference between cameras is estimated by a Gaussian mixture kernel density estimator and compensated for by the particle filter trackers. The experiments indicate that the proposed algorithm is able to work when the time difference between cameras is within one second. Finally, the position and velocity states of the proposed algorithm are divided into nonlinear and linear parts in a Rao-Blackwellized fashion. In this formulation, the velocity is marginalized out by a Kalman filter, while the object position is filtered by a particle filter. It is shown that the resulting algorithm performs competitively when the number of particles is reduced, without performance degradation.
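
As an illustration of the first and third contributions, the sketch below shows how a particle filter's measurement update might inflate its observation noise with the estimated occlusion probability and scale it with the associated silhouette size, which implicitly widens the association gate. The function name, the particular scaling law and all parameters are illustrative assumptions rather than the thesis' exact formulation.

```python
import numpy as np

def occlusion_aware_weights(particles, weights, observation,
                            base_sigma, occlusion_prob,
                            silhouette_size, ref_size=1.0):
    """Re-weight particles with an occlusion- and size-dependent noise.

    particles:       (N, 3) array of 3D position hypotheses.
    weights:         (N,) current particle weights.
    observation:     (3,) observed 3D point (e.g. triangulated centroid).
    base_sigma:      nominal observation std. dev. for a fully visible target.
    occlusion_prob:  estimated probability that the target is occluded.
    silhouette_size: size of the associated silhouette (e.g. its height).
    ref_size:        silhouette size for which base_sigma was tuned (assumed).
    """
    # Inflate the observation error with the occlusion probability and let it
    # scale with the silhouette dimension (illustrative scaling law).
    sigma = base_sigma * (1.0 + occlusion_prob) * (silhouette_size / ref_size)

    # Isotropic Gaussian likelihood of the observation under each particle.
    d2 = np.sum((particles - observation) ** 2, axis=1)
    likelihood = np.exp(-0.5 * d2 / sigma ** 2)

    # A larger sigma keeps distant observations plausible, which effectively
    # widens the association gate during occlusion.
    weights = weights * likelihood
    return weights / (weights.sum() + 1e-12), sigma
```

In such a scheme, when silhouettes merge and the occlusion probability rises, the inflated sigma keeps the likelihood from collapsing onto a few particles, which is one way to read the degeneracy argument in the abstract.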
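The Rao-Blackwellized extension can be pictured with the minimal per-particle step below, assuming a constant-velocity motion model with position observations: the position is propagated by sampling, while the conditionally linear velocity is marginalized by a Kalman filter. The model, variable names and noise parameters are assumptions for illustration, not the thesis' implementation.

```python
import numpy as np

def rbpf_particle_step(pos, vel_mean, vel_cov, z, dt,
                       q_pos, q_vel, r_obs, rng):
    """One Rao-Blackwellized step for a single particle.

    pos:      sampled (nonlinear) 3D position of this particle.
    vel_mean: Kalman mean of the marginalized 3D velocity.
    vel_cov:  Kalman covariance (3x3) of the velocity.
    z:        observed 3D position for this object.
    q_pos, q_vel, r_obs: process / observation noise variances (assumed).
    rng:      numpy random Generator.
    """
    I3 = np.eye(3)

    # Predictive covariance of the position increment: dt*v plus process noise.
    S = (dt ** 2) * vel_cov + q_pos * I3

    # Sample the new position (the nonlinear, particle-filtered state).
    new_pos = pos + dt * vel_mean + rng.multivariate_normal(np.zeros(3), S)

    # The sampled increment is a linear observation of the velocity:
    # (new_pos - pos) = dt * v + noise, so update the Kalman velocity estimate.
    innovation = (new_pos - pos) - dt * vel_mean
    K = dt * vel_cov @ np.linalg.inv(S)
    vel_mean = vel_mean + K @ innovation
    vel_cov = vel_cov - K @ (dt * vel_cov)

    # Velocity time update (random-walk velocity model).
    vel_cov = vel_cov + q_vel * I3

    # Particle weight from the position observation (isotropic Gaussian).
    weight = np.exp(-0.5 * np.sum((z - new_pos) ** 2) / r_obs)
    return new_pos, vel_mean, vel_cov, weight
```

Because the velocity never has to be represented by particles, fewer particles are needed to cover the position uncertainty, which is the usual motivation for the Rao-Blackwellized formulation mentioned in the abstract.
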

Suggestions

Visual-inertial sensor fusion for 3D urban modeling
Sırtkaya, Salim; Alatan, Abdullah Aydın; Department of Electrical and Electronics Engineering (2013)
In this dissertation, a real-time, autonomous and geo-registered approach is presented to tackle the large-scale 3D urban modeling problem using a camera and inertial sensors. The proposed approach exploits the special structures of urban areas and visual-inertial sensor fusion. The buildings in urban areas are assumed to have planar facades that are perpendicular to the local level. A sparse 3D point cloud of the imaged scene is obtained from visual feature matches using camera pose estimates, and planar ...
Multi-camera video surveillance: detection, occlusion handling, tracking and event recognition
Akman, Oytun; Alatan, Abdullah Aydın; Department of Electrical and Electronics Engineering (2007)
In this thesis, novel methods for background modeling, tracking, occlusion handling and event recognition via multi-camera configurations are presented. As the initial step, the building blocks of typical single-camera surveillance systems, namely moving object detection, tracking and event recognition, are discussed, and various widely accepted methods for these building blocks are tested to assess their performance. Next, for the multi-camera surveillance systems, background modeling, occlusion handling, tr...
Occlusion-aware 3D multiple object tracker with two cameras for visual surveillance
Topcu, Osman; Alatan, Abdullah Aydın; Ercan, Ali Özer (2014-08-29)
An occlusion-aware multiple deformable object tracker for visual surveillance from two cameras is presented. Each object is tracked by a separate particle filter tracker, which is initiated upon detection of a new person and terminated when s/he leaves the scene. Objects are considered as 3D points at their centres of mass, as if their mass density were uniform. Point objects and corresponding silhouette centroids in two views, together with the epipolar geometry they satisfy, result in a practical tracking m...
Robust Automatic Target Recognition in FLIR imagery
Soyman, Yusuf (2012-04-24)
In this paper, a robust automatic target recognition algorithm in FLIR imagery is proposed. The target is first segmented out from the background using a wavelet transform. The segmentation process is accomplished by a parametric Gabor wavelet transformation. Invariant features that belong to the target, which is segmented out from the background, are then extracted via moments. Higher-order moments, while providing better quality for identifying the image, are more sensitive to noise. A trade-off study is then perform...
3D Face Reconstruction Using Stereo Images and Structured Light
OZTURK, Ahmet Oguz; Halıcı, Uğur; ULUSOY PARNAS, İLKAY; AKAGUNDUZ, Erdem (2008-04-22)
In this paper, the 3D face scanner that we developed using stereo cameras together with structured light is presented. Structured light with a pattern of vertical lines is used to create feature points and to match them easily. The 3D point cloud obtained by stereo analysis is post-processed to obtain the 3D model in OBJ format.
Citation Formats
O. Topçu, “Occlusion-aware 3-D multiple object tracking for visual surveillance,” Ph.D. - Doctoral Program, Middle East Technical University, 2013.