International Journal of Image, Graphics and Signal Processing (IJIGSP)

ISSN: 2074-9074 (Print), ISSN: 2074-9082 (Online)

Published By: MECS Press

IJIGSP Vol.11, No.2, Feb. 2019

EFF-ViBE: An Efficient and Improved Background Subtraction Approach based on ViBE




Elie Tagne Fute, Lionel L. Sop Deffo, Emmanuel Tonye

Index Terms

Background subtraction; ViBE; adaptive radius; cumulative mean; pixel counting; ghost


Background subtraction plays an important role in intelligent video surveillance, since it is one of the most widely used tools in motion detection. While scientific progress has produced sophisticated equipment for this task, the algorithms themselves must be improved as well. Over the past decade, a background subtraction technique called ViBE has been gaining ground. However, the original algorithm has two main drawbacks. The first is the ghost phenomenon, which appears when the initial frame contains a moving object or when the background changes suddenly. The second is that it performs poorly against complicated backgrounds. This paper presents an efficient background subtraction approach based on ViBE that addresses both problems: an adaptive radius copes with complex backgrounds, while a cumulative mean and a pixel-counting mechanism quickly eliminate the ghost phenomenon and adapt the model to sudden changes in the background.
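The abstract names the three mechanisms (adaptive radius, cumulative mean, pixel counting) without giving formulas. The per-pixel sketch below illustrates how a ViBE-style sample model with an adaptive matching radius and a pixel-counting ghost absorber might look; only the sample count, minimum-match count, base radius, and 1/16 update rate come from the original ViBE, while the deviation-based radius rule, the `k` factor, and the counter threshold are hypothetical stand-ins, not the paper's actual parameters.

```python
import random

N_SAMPLES = 20      # samples kept per pixel (the original ViBE default)
MIN_MATCHES = 2     # matches required to label a pixel as background
BASE_RADIUS = 20    # fixed matching radius used by the original ViBE

def adaptive_radius(samples, base=BASE_RADIUS, k=0.5):
    """Hypothetical adaptive rule: widen the radius where the pixel's
    history is noisy (base plus a fraction of the mean absolute
    deviation of the stored samples)."""
    mean = sum(samples) / len(samples)
    mad = sum(abs(s - mean) for s in samples) / len(samples)
    return base + k * mad

def classify(pixel, samples):
    """ViBE-style test: background if enough stored samples lie within
    the (here adaptive) radius of the current value."""
    r = adaptive_radius(samples)
    matches = sum(1 for s in samples if abs(pixel - s) < r)
    return "background" if matches >= MIN_MATCHES else "foreground"

def update(pixel, samples, rate=16):
    """Conservative random update from the original ViBE: a pixel judged
    background replaces one stored sample with probability 1/rate."""
    if random.randrange(rate) == 0:
        samples[random.randrange(len(samples))] = pixel

def absorb_ghost(pixel, samples, counter, threshold=50):
    """Pixel-counting heuristic (hypothetical threshold): a pixel that
    stays foreground for `threshold` consecutive frames is forced into
    the model, removing ghosts left by objects in the first frame."""
    if classify(pixel, samples) == "foreground":
        counter += 1
        if counter >= threshold:
            samples[random.randrange(len(samples))] = pixel
            counter = 0
    else:
        counter = 0
    return counter

# Demo on a single grey-level pixel whose history sits around 100
history = [100 + random.randint(-3, 3) for _ in range(N_SAMPLES)]
print(classify(102, history))   # close to the model -> background
print(classify(200, history))   # far from every sample -> foreground
```

In a full implementation these per-pixel operations would run over every pixel of the frame, and the random, memoryless replacement is what gives ViBE its smooth exponential decay of old samples.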

Cite This Paper

Elie Tagne Fute, Lionel L. Sop Deffo, Emmanuel Tonye, "EFF-ViBE: An Efficient and Improved Background Subtraction Approach based on ViBE", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol.11, No.2, pp. 1-14, 2019. DOI: 10.5815/ijigsp.2019.02.01


[1]S. S. Hossain, S. Khalid, C. Nabendu, Moving Object Detection Using Background Subtraction, SpringerBriefs in Computer Science, Springer, Cham, 2014.

[2]G. Yao, T. Lei, J. Zhong, P. Jiang, W. Jia, Comparative evaluation of background subtraction algorithms in remote scene videos captured by MWIR sensors, Sensors (2017).

[3]F. Zeng, G. Zhang, J. Jiang, Text image with complex background filtering method based on Harris corner-point detection, Journal of Software 8 (8) (2013).

[4]D. Li, Y. Li, F. He, S. Wang, Object detection in image with complex background, Third International Conference on Multimedia Technology (2013).

[5]O. Barnich, M. V. Droogenbroeck, ViBE: A powerful random technique to estimate the background in video sequences, International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (2009) 19–24.

[6]C. Stauffer, E. Grimson, Adaptive background mixture models for real-time tracking, Computer Vision and Pattern Recognition (1999) 246–252.

[7]B. Han, X. Lin, Update the GMMs via adaptive Kalman filtering, International Society for Optical Engineering (2005) 1506–1515.

[8]Y. Hong, Y. Tan, J. Tian, J. Liu, Accurate dynamic scene model for moving object detection, International Conference on Image Processing (ICIP) (2007) 157–160.

[9]W. Zhang, X. Fang, X. Yang, J. Wu, Spatio-temporal Gaussian mixture model to detect moving objects in dynamic scenes, Journal of Electronic Imaging (2007).

[10]P. Tang, L. Gao, Z. Liu, Salient moving object detection using stochastic approach filtering, Fourth International Conference on Image and Graphics (ICIG) (2007) 530–535.

[11]M. Harville, A framework for high-level feedback to adaptive, per-pixel, mixture-of-Gaussian background models, 7th European Conference on Computer Vision (ECCV) (2002) 543–560.

[12]M. Cristani, V. Murino, A spatial sampling mechanism for effective background subtraction, 2nd International Conference on Computer Vision Theory and Applications (VISAPP) (2007) 403–410.

[13]A. Elgammal, D. Harwood, L. Davis, Non-parametric model for background subtraction, European Conference on Computer Vision (2000).

[14]A. Elgammal, R. Duraiswami, D. Harwood, L. Davis, Background and foreground modeling using nonparametric kernel density estimation for visual surveillance, Proceedings of the IEEE 90 (2002) 1151–1163.

[15]Park, C. Lee, Bayesian rule-based complex background modeling and foreground detection, Optical Engineering (2010).

[16]P. Angelov, P. S. Tehran, R. Ramezani, An approach to automatic real-time novelty detection, object identification, and tracking in video streams based on recursive density estimation and evolving Takagi-Sugeno Fuzzy systems, International Journal of Intelligent Systems (2011) 189–205.

[17]K. Kim, T. H. Chalidabhongse, D. Harwood, L. Davis, Real-time foreground-background segmentation using codebook model, Real-Time Imaging 11 (2005) 172–185.

[18]N. M. Oliver, B. Rosario, A. P. Pentland, A Bayesian computer vision system for modeling human interactions, IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (2000) 831–843.

[19]D. Kuzin, O. Isupova, L. Mihaylova, Compressive sensing approaches for autonomous object detection in video sequences, Sensor Data Fusion: Trends, Solutions, Applications (SDF) (2015).

[20]V. Cevher, A. Sankaranarayanan, M. F. Duarte, D. Reddy, R. G. Baraniuk, R. Chellappa, Compressive sensing for background subtraction, European Conference on Computer Vision (2008) 155–168.

[21]V. Cevher, A. Sankaranarayanan, M. F., D. Dikpal, R. R. G., B. R. Chellappa, Background subtraction using spatio-temporal group sparsity recovery, European Conference on Computer Vision (2008) 155–168.

[22]A. Azeroual, K. Afdel, Background subtraction based on low-rank and structured sparse decomposition, IEEE Transactions on Image Processing 24 (8) (2017) 2502–2514.

[23]A. Azeroual, K. Afdel, Fast image edge detection based on Faber-Schauder wavelet and Otsu threshold, Heliyon 3 (2017), doi:10.1016/j.heliyon.2017.e00485.

[24]M. Nishio, C. Nagashima, S. Hirabayashi, A. Ohnishi, K. Sasaki, T. Sagawa, M. Hamada, T. Yamashita, Convolutional auto-encoder for image denoising of ultra-low-dose CT, Heliyon 3 (2017), doi:10.1016/j.heliyon.2017.e00393.

[25]M. Braham, M. V. Droogenbroeck, Deep background subtraction with scene-specific convolutional neural networks, International Conference on Systems, Signals and Image Processing (2016).

[26]Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE 86 (1998) 2278–2324.

[27]O. Barnich, M. V. Droogenbroeck, ViBE: A universal background subtraction algorithm for video sequences, IEEE Transactions on Image Processing (2011) 1709–1724.

[28]A. Elgammal, D. Harwood, L. Davis, Non-parametric model for background subtraction, 6th European Conference on Computer Vision, Part II, Springer 1843 (2000) 751–767.

[29]H. Wang, D. Suter, A consensus-based method for tracking: Modeling background scenario and foreground appearance, Pattern Recognition 40 (2007) 1091–1105.

[30]H. Batao, Y. Shaohua, An improved background subtraction method based on ViBE, Chinese Conference on Pattern Recognition, Springer 662 (2016) 356–368.

[31]L. Chang, Zhenghua, Y. Ren, Improved adaptive ViBE and the application for segmentation of complex background, Mathematical Problems in Engineering, Hindawi Publishing Corporation (2016).

[32]Itseez, Open Source Computer Vision Library (2015).

[33]Y. Wang, P.-M. Jodoin, F. Porikli, J. Konrad, Y. Benezeth, P. Ishwar, CDnet 2014: An expanded change detection benchmark dataset, 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops (2014) 16–21.

[34]B. Laugraud, ViBE sources: original implementation in C/C++ with an example for OpenCV, University of Liège, Belgium, /bitstream/2268/145853/5/vibe- (2014).

[35]A. Sobral, BGSLibrary: An OpenCV C++ background subtraction library, In Proceedings of the 2013 IX Workshop de Visão Computacional, Rio de Janeiro, Brazil (2013) 3–5.

[36]M. Hofmann, P. Tiefenbacher, G. Rigoll, Background segmentation with feedback: The pixel-based adaptive segmenter, In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Providence, RI, USA (2012).

[37]St-Charles, G. A. Bilodeau, R. Bergevin, Flexible background subtraction with self-balanced local sensitivity, In Proceedings of the 27th IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA (2014) 414–419.