And here we are, our very first demo:
The video analysis simply triggers keyboard shortcuts to interact with the OS.
Canard worked on a Windows/Linux keystroke system while I created a naive gesture detection algorithm. We then just had to plug the two together.
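If you're wondering what the glue between the two parts looks like, it boils down to mapping a coarse motion direction to a shortcut. Here is a hypothetical Python sketch, not our actual code: the direction buckets, the xdotool call, and the shortcut choices are all illustrative.

```python
import subprocess

# Map a coarse direction to a keyboard shortcut (choices are illustrative).
SHORTCUTS = {
    "right": "alt+Right",  # e.g. browser "forward"
    "up":    "Page_Up",
    "left":  "alt+Left",   # e.g. browser "back"
    "down":  "Page_Down",
}

def bucket(angle_deg):
    """Reduce a global motion angle (degrees) to one of four directions.
    Note: image coordinates have y pointing down, so "up"/"down" may need
    swapping depending on the angle convention used upstream."""
    angle_deg %= 360
    if angle_deg < 45 or angle_deg >= 315:
        return "right"
    if angle_deg < 135:
        return "up"
    if angle_deg < 225:
        return "left"
    return "down"

def send_shortcut(keys):
    """Linux-only stand-in using xdotool; Windows needs another backend."""
    subprocess.run(["xdotool", "key", keys], check=True)

def on_gesture(angle_deg):
    send_shortcut(SHORTCUTS[bucket(angle_deg)])
```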
Another day: serious work during the day, hacking in the evening.
We now have an image of the movement from which we will be able to extract descriptors. (We will use those descriptors to look for known patterns.)
Of course, everything is done in real time at a pretty good frame rate.
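For the curious, here is a minimal Python sketch of the motion history step. It assumes opencv-contrib-python, which ships the motion template functions under cv2.motempl; the threshold and duration values are guesses, not our tuned ones.

```python
import time

import cv2
import numpy as np

MHI_DURATION = 0.5  # seconds a motion trace stays visible (a guess)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
mhi = np.zeros(prev.shape, np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # The silhouette is just a thresholded frame difference.
    diff = cv2.absdiff(gray, prev)
    _, silhouette = cv2.threshold(diff, 32, 1, cv2.THRESH_BINARY)
    prev = gray
    # Stamp the silhouette into the motion history image; old motion fades.
    timestamp = time.monotonic()
    cv2.motempl.updateMotionHistory(silhouette, mhi, timestamp, MHI_DURATION)
    # Scale for display: recent motion is bright, older motion darker.
    vis = np.uint8(
        np.clip((mhi - (timestamp - MHI_DURATION)) / MHI_DURATION, 0, 1) * 255)
    cv2.imshow("motion history", vis)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```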
Displaying the movement and computing the global direction
Note the small line at the top left of the visualization window: it shows the global direction of the movement. In this sequence, I moved my arm toward my head.
We used methods from OpenCV that implement this paper. You can always check out our code on GitHub.
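Concretely, the calls involved are cv2.motempl.calcMotionGradient and cv2.motempl.calcGlobalOrientation (again assuming opencv-contrib-python). A small self-contained sketch, with illustrative parameter values rather than the ones in our repo:

```python
import cv2
import numpy as np

MHI_DURATION = 0.5     # must match the duration used to build the MHI
MIN_TIME_DELTA = 0.05  # gradient thresholds: guesses, not our tuned values
MAX_TIME_DELTA = 0.25

def global_direction(mhi, timestamp):
    """Overall motion angle, in degrees, of a float32 motion history image."""
    mask, orientation = cv2.motempl.calcMotionGradient(
        mhi, MAX_TIME_DELTA, MIN_TIME_DELTA, apertureSize=3)
    return cv2.motempl.calcGlobalOrientation(
        orientation, mask, mhi, timestamp, MHI_DURATION)

def draw_direction(vis, angle_deg, center=(30, 30), radius=25):
    """Draw the small indicator line in the top-left corner of the display."""
    dx = int(radius * np.cos(np.radians(angle_deg)))
    dy = int(radius * np.sin(np.radians(angle_deg)))
    cv2.circle(vis, center, radius, 255, 1)
    cv2.line(vis, center, (center[0] + dx, center[1] + dy), 255, 2)
```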
Canard took the time to do some code refactoring and successfully built it on Windows.
For months, I’ve been talking with my friend Canard about a project for interacting with a computer without touching it. We are convinced that it can be done with a simple webcam and that “depth” information is not needed for simple interactions. (So no need to buy a Kinect or two webcams.)
Yesterday we started hacking on this. We are using OpenCV. We first discussed the different steps involved, but as we explored OpenCV we changed some of those early decisions because we found built-in functions that already did the work.
After investigation, we decided that the first part of our system would use a method named “Real-time Motion Template Gradients” by James Davis and Gary Bradski (you can read the paper here).
The very first step was to compute the difference between two frames of the webcam video. What you can see in the next image is the movement: black means no movement, colored means there was a difference between the two frames:
Visualization of the difference between frames of a real-time video
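If you want to try this step yourself, here is a minimal Python sketch using opencv-python (cv2); our actual code may differ.

```python
import cv2

cap = cv2.VideoCapture(0)  # default webcam

ok, prev = cap.read()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Black where nothing changed, colored where the two frames differ.
    diff = cv2.absdiff(frame, prev)
    cv2.imshow("frame difference", diff)
    prev = frame
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```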
You can check out the code (and fork us) on our GitHub repository.
Stay tuned to follow our progress.