A bottom-up approach for human behaviour understanding is presented, using a multi-camera system.
The proposed methodology classifies behaviour as normal or abnormal by treating short-term behaviour classification and trajectory classification as two separate problems.
On this basis, a set of computed features provides input to two one-class classifiers: a Support Vector Machine and a continuous Hidden Markov Model treated as a one-class classifier.
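As a minimal sketch of the one-class SVM side of this setup, the snippet below trains scikit-learn's `OneClassSVM` on feature vectors drawn from "normal" behaviour only and then flags deviating samples. The two-dimensional features and the parameter values (`nu`, `gamma`) are illustrative assumptions, not the feature set or settings used in the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Hypothetical short-term behaviour features (e.g. speed, direction change),
# standing in for the actual feature set computed from the camera views.
normal_features = rng.normal(loc=0.0, scale=1.0, size=(200, 2))

# Train the one-class SVM on "normal" examples only; nu bounds the
# fraction of training points allowed outside the learned boundary.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
clf.fit(normal_features)

# predict() returns +1 for normal samples and -1 for abnormal ones.
preds = clf.predict(np.array([[0.1, -0.2], [8.0, 8.0]]))
print(preds)
```

A sample near the training distribution is labelled +1, while a far outlier such as `[8.0, 8.0]` is labelled -1; the HMM branch of the method would score temporal sequences in an analogous one-class fashion.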
The superposition of foreground-object projections on a common plane may create artifacts that can seriously mislead a human detector by producing false positives. We present a method that eliminates these artifacts using only geometric information, thus contributing to robust human detection across multiple views.
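A simplified illustration of this purely geometric filtering is to project a candidate detection from each view onto the common ground plane via a homography and keep it only when the views agree. The homography matrices, image points, and tolerance below are toy values for illustration, not calibrated data from the system.

```python
import numpy as np

def to_ground(H, pt):
    """Map an image point (x, y) to ground-plane coordinates via homography H."""
    q = H @ np.array([pt[0], pt[1], 1.0])
    return q[:2] / q[2]

def is_consistent(homographies, image_points, tol=0.2):
    """Keep a candidate ground position only if its projections from all
    views land at (nearly) the same spot on the common plane."""
    ground = [to_ground(H, p) for H, p in zip(homographies, image_points)]
    ref = ground[0]
    return all(np.linalg.norm(g - ref) < tol for g in ground[1:])

# Toy homographies for two views (illustrative, not calibrated values):
# view 1 maps image coordinates directly; view 2 sees everything shifted by +1 in x.
H1 = np.eye(3)
H2 = np.array([[1.0, 0.0, -1.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])

# A real person: both views agree on the ground position.
print(is_consistent([H1, H2], [(2.0, 3.0), (3.0, 3.0)]))   # True

# An artifact: the projections disagree, so the candidate is rejected.
print(is_consistent([H1, H2], [(2.0, 3.0), (5.0, 3.0)]))   # False
```

Superposition artifacts arise where silhouette projections from different views happen to overlap without a person being present; a consistency test of this kind rejects them without any appearance information.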
The views from three cameras are combined to accurately extract the position on the floor and hence the trajectory. Simultaneously, the short-term human behaviour is extracted (walking, running, abrupt movement, standing, etc.).
The positions of three persons are extracted using three cameras (their positions shown in blue). The silhouettes, the bounding rectangles, and the respective positions on a common coordinate frame are extracted in real time and displayed. No colour or other appearance information is used for tracking.
The videos from three cameras, the foreground regions, and the feet positions are shown. The feet positions are correctly computed despite the ambiguities of the projection.