Wednesday, February 19, 2014

Static background subtraction using OpenCV

With background subtraction you can eliminate the background and focus on the actual object for further processing (detection, recognition, ...). Some algorithms provided in the OpenCV library "learn" the background over time. The problem is that if an object doesn't move, it eventually becomes part of the background as well. If you have a fixed camera - and therefore always the same background - you can simply calculate the difference between two images.
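For comparison, one of those learning-based subtractors in OpenCV 2.4 is cv::BackgroundSubtractorMOG2. Here is a minimal sketch of how it could be applied to a live camera stream (the camera index, window name and wait time are just placeholder assumptions, not part of my example below):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/background_segm.hpp>

using namespace cv;

int main()
{
    VideoCapture cap(0);             // first attached camera (placeholder index)
    if (!cap.isOpened()) return -1;

    BackgroundSubtractorMOG2 bg;     // Gaussian-mixture background model
    Mat frame, fgmask;

    for (;;)
    {
        cap >> frame;
        if (frame.empty()) break;

        bg(frame, fgmask);           // update the model and get the foreground mask

        imshow("foreground", fgmask);
        if (waitKey(30) >= 0) break; // any key quits
    }
    return 0;
}

With a model like this, a stationary object slowly fades into the learned background - exactly the behaviour the static approach below avoids.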

Here's the code I used (you can download it here):
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

using namespace cv;

int main( int argc, char** argv )
{
    // Read the two images (1: color, 0: grayscale)
    Mat src = imread( "/users/christian/documents/programming/other/imgs/background.jpg", 0);
    Mat dst = imread( "/users/christian/documents/programming/other/imgs/backtest.jpg", 0);

    // Background subtraction: absolute per-pixel difference
    Mat diff;
    absdiff(src, dst, diff);
    threshold(diff, diff, 10, 255, CV_THRESH_BINARY); // grayscale input needed

    // Show the images in windows
    imshow("original", src);
    imshow("new", dst);
    imshow("diff", diff);

    // Wait until the user presses a key
    waitKey();
    return 0;
}
All it does is load two images, calculate the difference, apply a threshold to highlight the object, and display the three images.
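If you want to go one step further towards the "further processing" mentioned at the start, the white blobs in the thresholded mask can be turned into bounding boxes with cv::findContours. This is just a sketch using the same images as above; the minimum blob area of 100 pixels is an arbitrary assumption:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

using namespace cv;

int main()
{
    // Same images as in the example above
    Mat src = imread("/users/christian/documents/programming/other/imgs/background.jpg", 0);
    Mat dst = imread("/users/christian/documents/programming/other/imgs/backtest.jpg", 0);

    Mat diff;
    absdiff(src, dst, diff);
    threshold(diff, diff, 10, 255, CV_THRESH_BINARY);

    // findContours modifies its input, so work on a copy of the mask
    std::vector<std::vector<Point> > contours;
    findContours(diff.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    // Draw a bounding box around every blob larger than an arbitrary minimum area
    Mat result = dst.clone();
    for (size_t i = 0; i < contours.size(); i++)
    {
        Rect box = boundingRect(contours[i]);
        if (box.area() > 100)
            rectangle(result, box, Scalar(255), 2);
    }

    imshow("detected objects", result);
    waitKey();
    return 0;
}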


This is just a theoretical example. Because I use a static image to calculate the difference, this method is very sensitive to lighting: a change in lighting also changes the background, which then no longer matches the stored background image exactly.
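One possible way to soften this (not part of my example above, just a sketch) is to keep a slowly updated running average of the background with cv::accumulateWeighted, so that gradual lighting changes are absorbed into the stored background while fast-moving objects still show up in the difference. The camera index and the update factor of 0.01 are arbitrary assumptions:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened()) return -1;

    Mat frame, gray, background, diff;
    cap >> frame;
    cvtColor(frame, gray, CV_BGR2GRAY);
    gray.convertTo(background, CV_32F);  // the running average needs a float image

    for (;;)
    {
        cap >> frame;
        if (frame.empty()) break;
        cvtColor(frame, gray, CV_BGR2GRAY);

        // Slowly blend the current frame into the stored background so that
        // gradual lighting changes are absorbed (update factor chosen arbitrarily)
        accumulateWeighted(gray, background, 0.01);

        Mat bg8u;
        background.convertTo(bg8u, CV_8U);
        absdiff(gray, bg8u, diff);
        threshold(diff, diff, 10, 255, CV_THRESH_BINARY);

        imshow("diff", diff);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}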

Program versions:
OS: Mac OS X 10.9.1
Xcode: 5.0.2
OpenCV: 2.4.8.0
