Doctoral Dissertation: Dynamic Scenes and Appearance Modeling for Robust Object Detection and Matching Based on Co-occurrence Probability

Liang, Dong (梁, 棟)

2015-03-25
Description
Detecting moving objects plays a crucial role in an intelligent surveillance system. Object detection is often integrated with various tasks, such as tracking objects, recognizing their behaviours and alerting when abnormal events occur. However, it suffers from non-stationary background in surveillance scenes, especially in two potentially dynamic cases: (1) sudden illumination variation, such as outdoor sunlight changes and indoor lights turning on/off; (2) burst physical motion, such as the motion of indoor artificial backgrounds (fans, escalators and auto-doors) and of natural backgrounds (fountains, ripples on water surfaces and swaying trees). If the actual background combines any of these factors, it becomes much more difficult to detect objects. Traditional algorithms, such as the Gaussian Mixture Model (GMM) and Kernel Density Estimation (KDE), handle gradual illumination changes by building statistical background models progressively from long-term learning frames.
In practice, however, this kind of independent pixel-wise model often fails to avoid mistakenly integrating foreground elements into the background, and it is difficult to adapt to sudden illumination changes and burst motion. On the other hand, spatial-dependence models, such as Grayscale Arranging Pairs (GAP) and the Statistical Reach Feature (SRF), show promising performance under illumination change and other dynamic backgrounds. This study proposes a novel framework for building a background model for object detection, which evolves from the GAP and SRF methods. It is brightness-invariant and able to tolerate burst motion. We name it Co-occurrence Probability based Pixel Pairs (CP3). To model the dynamic background, spatial pixel pairs with high temporal co-occurrence probability are employed to represent each other, using the stable intensity differential increment between a pixel pair, which is much more reliable than the intensity of a single pixel, especially when the intensity of a single pixel changes dramatically over time. The model performs robust detection in extreme outdoor and indoor environments.
Compared with independent pixel-wise background modelling methods, CP3 determines stable co-occurrence pixel pairs instead of building a parameterized/non-parameterized model for a single pixel. These pixel pairs maintain a reliable background model, which can be used to capture structural background motion and cope with local and global illumination changes. As a spatial-dependence method, CP3 does not predefine or assume any local operator, subspace or block for an observed pixel; instead, it makes its best effort to select those qualified supporting pixels that maintain a reliable linear relationship with the target pixel. Moreover, based on a single Gaussian model of the differential value of each pixel pair, it provides an accurate detection criterion even when the gray-scale dynamic range is compressed under weak illumination. The proposed method can also be used to model the appearance of an image to realize image matching. Theoretically speaking, both object detection and image matching can be seen as model matching problems. The difference between the two tasks is that object detection seeks the regions of interest (ROI) that violate/mismatch the background model, while image matching seeks the ROI that optimally matches the image model.
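The pair-differential criterion just described can be sketched in a few lines. This is a minimal illustration under assumptions: the helper names (`learn_pair_model`, `is_foreground`) and the 3-sigma threshold are hypothetical, and the thesis's actual pair-selection and modelling procedure is considerably richer.

```python
import numpy as np

def learn_pair_model(frames, p, q):
    """Fit a single Gaussian (mean, std) to the intensity differential
    I[p] - I[q] over the training frames.  A small epsilon keeps the
    std usable when the differential is perfectly stable."""
    diffs = np.array([float(f[p]) - float(f[q]) for f in frames])
    return diffs.mean(), diffs.std() + 1e-6

def is_foreground(frame, p, q, mu, sigma, k=3.0):
    """Flag pixel p as foreground when the observed pair differential
    deviates from the learned mean by more than k standard deviations."""
    d = float(frame[p]) - float(frame[q])
    return abs(d - mu) > k * sigma

# Training frames: a global illumination ramp shifts both pixels equally,
# so the pair differential stays constant although each intensity varies.
train = []
for offset in range(0, 50, 5):
    f = np.zeros((4, 4))
    f[1, 1] = 100 + offset   # target pixel p
    f[2, 2] = 80 + offset    # supporting pixel q
    train.append(f)
mu, sigma = learn_pair_model(train, (1, 1), (2, 2))

bright = np.zeros((4, 4))
bright[1, 1], bright[2, 2] = 160, 140    # sudden global illumination change
occluded = np.zeros((4, 4))
occluded[1, 1], occluded[2, 2] = 200, 85  # an object covers p only

print(is_foreground(bright, (1, 1), (2, 2), mu, sigma))    # False: still background
print(is_foreground(occluded, (1, 1), (2, 2), mu, sigma))  # True: foreground object
```

The toy example makes the key point of the abstract concrete: a sudden global change leaves the pair differential untouched, while an object covering only one pixel of the pair breaks it.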
Therefore, in this study, we further extend the use of CP3 to the robust image matching task.

This thesis is organized into the following chapters. Chapter 1 introduces related work in object detection and image matching; some general problems are involved and discussed, and the motivations and contributions of this research are described.

Chapter 2 presents the details of the CP3 background model based on co-occurrence pixel pairs for object detection. We test it on several surveillance video datasets for both qualitative and quantitative analyses. Experiments using several challenging datasets (Heavy fog, PETS-2001, AIST-INDOOR, Wallflower and a supermarket surveillance application) demonstrate robust and competitive object detection performance in various indoor and outdoor environments. For quantitative analysis, Precision (also known as positive predictive value), Recall (also known as sensitivity) and the F-measure (a weighted harmonic mean of Precision and Recall) are utilized; these three evaluation metrics measure the exactness, fidelity and completeness of the detected foreground. We compare our algorithm with three methods: (1) the GMM method, a standardized method among independent pixel-wise models; (2) Sheikh's KDE method, a representative method among spatially dependent models; (3) our previous GAP method. In addition, we also propose an accelerated version of CP3, which effectively reduces the time cost of the background modelling stage.

Chapter 3 proposes the framework of CP3 for modelling the appearance of an image to realize image matching. We detail the learning phase, present the similarity measure procedure, and present the experimental results. Although an additional learning stage is necessary, the experimental results show that the proposed method is robust in several imaging cases and also outperforms SRF.

Chapter 4 presents the discussions of the proposed methods, concludes the main contributions of our study, and outlines the future work of this study.
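The quantitative metrics named in the Chapter 2 summary have standard definitions; a minimal sketch follows, where the per-pixel counts (`tp` true positives, `fp` false alarms, `fn` misses) are illustrative values, not results from the thesis.

```python
def precision_recall_f(tp, fp, fn, beta=1.0):
    """Exactness (Precision), completeness (Recall) and their weighted
    harmonic mean (F-measure) from per-pixel detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
    return precision, recall, f

# e.g. 80 foreground pixels detected correctly, 20 false alarms, 20 misses
p, r, f = precision_recall_f(tp=80, fp=20, fn=20)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.8 0.8 0.8
```

With `beta=1.0` this reduces to the usual F1 score, the unweighted harmonic mean of Precision and Recall.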
4, 6, 75p
Hokkaido University (北海道大学). Doctor of Information Science (博士(情報科学))
Read the full text

http://eprints.lib.hokudai.ac.jp/dspace/bitstream/2115/58976/1/Liang_Dong.pdf
