Started in: 2009
Contact person: Ralf Reulke
Staff involved: Dominik Rueß, Kristian Manthey
Cameras are omnipresent in public areas and facilities, but they are mainly monitored by human staff. Cost reduction often means that a single person has to analyze a multitude of screens, and the supporting motion detectors produce many false alarms. Currently, there are no systems able to identify objects and distinguish different events, which would allow for efficient surveillance, fewer false alarms, and lower personnel costs. The goal of this project is to develop a system able to reconstruct a 3D scene from at least two cameras with a large stereo baseline. This reconstruction can, in turn, be used to extract the location and motion of objects and eventually to recognize certain “critical” situations. The tracking of objects and persons is also to be implemented, allowing scene and event descriptions.
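As a minimal sketch of the triangulation step underlying such a two-camera reconstruction (assuming OpenCV; the projection matrices and image points below are hypothetical placeholders, not values from this project), a 3D point can be recovered from one correspondence between two calibrated views:

```python
import numpy as np
import cv2

# Hypothetical 3x4 projection matrices of two calibrated cameras with
# identity intrinsics; real values would come from camera calibration.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # first camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # second camera shifted 1 unit along x (the stereo baseline)

# One corresponding image point in each view (normalized coordinates),
# given as 2xN arrays as expected by cv2.triangulatePoints.
pts1 = np.array([[0.0], [0.0]])
pts2 = np.array([[-0.1], [0.0]])

# Linear triangulation returns homogeneous 4xN coordinates.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).T   # dehomogenize to Nx3 Euclidean points
print(X)                   # approximately (0, 0, 10)
```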
Figure 1: Two views of a traffic intersection with a wide stereo baseline (right). The left picture shows the reconstruction of the 3D features matched between the two input images, together with the trajectory of a pedestrian.
To extract 3D points and scenes, temporal and spatial correspondences have to be established. For wide-baseline scenarios, suitable methods to extract and describe salient points (“features”) are to be investigated (among others: SIFT, SURF, ASIFT, SUSAN, MSER, Harris corners). A significant reduction of the transferred data can be achieved by computing these features within the cameras themselves.
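As an illustration of this feature extraction and matching step, the following sketch uses SIFT, one of the detectors named above, via OpenCV; the image file names are hypothetical:

```python
import cv2

# Load the two wide-baseline views (hypothetical file names).
img1 = cv2.imread("cam_left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("cam_right.png", cv2.IMREAD_GRAYSCALE)

# Detect salient points and compute descriptors with SIFT.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn if m.distance < 0.75 * n.distance]

print(f"{len(good)} putative correspondences between the two views")
```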
Figure 2: Additional epipolar constraints. With the ellipse tangents as epipolar lines, the region-based feature matching can be improved.
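One common way to exploit such epipolar constraints, sketched here as an assumption rather than as the project's actual implementation, is to robustly estimate the fundamental matrix from the putative matches and keep only those consistent with it (the point coordinates below are dummy values):

```python
import numpy as np
import cv2

# Putative correspondences from the matching step (Nx2 pixel coordinates);
# filled with dummy values here for illustration.
pts1 = np.float32([[100, 120], [200, 80], [150, 200], [300, 220],
                   [50, 60], [250, 140], [180, 90], [90, 210]])
pts2 = np.float32([[110, 118], [210, 82], [160, 198], [310, 219],
                   [60, 62], [262, 139], [190, 92], [98, 208]])

# Robustly estimate the fundamental matrix; the mask flags inlier matches
# that satisfy the epipolar constraint within the RANSAC threshold.
# F may be None if the estimation fails on degenerate input.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
inliers1 = pts1[mask.ravel() == 1]
inliers2 = pts2[mask.ravel() == 1]

# Epipolar lines in the second image for the inlier points of the first.
lines2 = cv2.computeCorrespondEpilines(inliers1.reshape(-1, 1, 2), 1, F)
print(f"{len(inliers1)} matches consistent with the epipolar geometry")
```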
Based on the 3D scene and the resulting object segmentation, trajectories are to be extracted. Kalman or particle filters will be employed to smooth noisy data and for prediction purposes. The tracking of features within single cameras is also examined. The extracted trajectories allow for a classification of the current scene, i.e. whether there are suspicious activities or not.
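As a sketch of this filtering step, the following snippet sets up a constant-velocity Kalman filter with OpenCV; the noise covariances and the measurement sequence are illustrative assumptions, not tuned project values:

```python
import numpy as np
import cv2

# Constant-velocity Kalman filter for a 2D trajectory:
# state = (x, y, vx, vy), measurement = (x, y).
kf = cv2.KalmanFilter(4, 2)
dt = 1.0  # time step between frames
kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                [0, 1, 0, dt],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)

# Noisy position measurements of a tracked object (dummy data).
measurements = [(10.0, 5.0), (11.2, 5.9), (11.9, 7.1), (13.1, 8.0)]
for mx, my in measurements:
    prediction = kf.predict()                       # predict the next state
    kf.correct(np.array([[mx], [my]], np.float32))  # update with the measurement

print("filtered state (x, y, vx, vy):", kf.statePost.ravel())
```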