The initialization code
To detect motion in a video we need to compare at least two frames. We will use typed arrays to store the lightness data of the previous frames.
We want two frame buffers: a single one results in a heavily flickering motion video, while the more frames we store, the more motion blur we will see. Two seems like a good value for demonstration purposes.
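A minimal sketch of what that buffer setup could look like (the variable names and canvas dimensions here are assumptions, not the demo's actual identifiers):

```javascript
// One lightness value per pixel of the canvas, one buffer per stored frame.
const width = 640;   // assumed canvas width
const height = 480;  // assumed canvas height
const historySize = 2; // two previous frames, as discussed above

// Uint8ClampedArray is a good fit: lightness fits in the 0-255 range.
const lightnessBuffers = [];
for (let i = 0; i < historySize; i++) {
  lightnessBuffers.push(new Uint8ClampedArray(width * height));
}
```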
Illustrating lightness changes
The main draw() function from part 1 did not change, except that we now call markLightnessChanges() for every frame. It is probably the most interesting function of the whole demo.
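A rough sketch of such a draw loop, based on the description above (the identifiers are assumptions, not the demo's actual code):

```javascript
// Paint the current video frame onto the canvas, then mark motion.
// `video`, `canvas` and `context` are assumed to be set up elsewhere,
// as in part 1 of this series.
function draw() {
  requestAnimationFrame(draw);
  context.drawImage(video, 0, 0, canvas.width, canvas.height);
  markLightnessChanges();
}
```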
We determine the lightness value of every pixel in the canvas and compare it to its values in the previously captured frames. If the difference from one of those buffers exceeds a specific threshold, the pixel is painted black; if not, it becomes transparent.
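The per-pixel logic could be sketched like this, written as a pure function over RGBA pixel data so it is easy to follow. All names and the exact threshold handling are assumptions here, not necessarily the demo's actual implementation:

```javascript
// Lightness (the L of HSL): the average of the largest and smallest
// RGB channel, kept in the 0-255 range for convenience.
function lightness(r, g, b) {
  return (Math.max(r, g, b) + Math.min(r, g, b)) / 2;
}

// pixels:    Uint8ClampedArray of RGBA data for the current frame.
// buffers:   array of Uint8ClampedArray lightness buffers (previous frames).
// threshold: minimum lightness difference that counts as motion.
function markLightnessChanges(pixels, buffers, threshold) {
  for (let i = 0, j = 0; i < pixels.length; i += 4, j++) {
    const current = lightness(pixels[i], pixels[i + 1], pixels[i + 2]);
    // A large difference to ANY stored frame counts as motion.
    const changed = buffers.some(b => Math.abs(current - b[j]) >= threshold);
    // Black where motion was detected, transparent elsewhere.
    pixels[i] = pixels[i + 1] = pixels[i + 2] = 0;
    pixels[i + 3] = changed ? 255 : 0;
    // Store the current lightness in the oldest buffer.
    buffers[buffers.length - 1][j] = current;
  }
  // Rotate the buffers so the freshly written one is compared first next time.
  buffers.unshift(buffers.pop());
}
```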
Blend mode difference
The simple method we use to detect motion is called a blend mode difference. That is a fancy way of saying: we compare two images (also called layers or frames) by putting them on top of each other and subtracting the bottom from the top layer. In this example we do it for every pixel's L-value of the HSL color model.
If the current frame is identical to the previous one, the lightness difference will be exactly zero for all pixels. If the frames differ because something in the picture has moved, there is a good chance that lightness values change where motion occurred. A small threshold ensures that we ignore noise in the signal.
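For a single channel, the difference blend mode boils down to the absolute difference of the two layers' values. A tiny worked example (the threshold of 30 is an arbitrary illustration, not a value from the demo):

```javascript
// Difference blend mode for one channel: |top - bottom|.
function difference(top, bottom) {
  return Math.abs(top - bottom);
}

const threshold = 30; // arbitrary noise threshold for illustration

difference(200, 195); // 5  - below the threshold, treated as noise
difference(200, 40);  // 160 - well above the threshold, likely motion
```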
Demo and screencast
That is all! Take a look at the live demo or watch the screencast below:
You can create some really great demos with this simple technique. Here is a neat one: a xylophone you can play by waving your hands (which unfortunately does not work in Firefox).
Whatever your ideas may be, I encourage you to fiddle around with the small demos I provided in my three getUserMedia() examples so far, and let me know if you build something amazing!