Traffic uses computer vision to identify the motion of “blobs”: points or regions in the camera’s frame that differ in color and brightness from their surroundings. In any given image, even one of a white wall, the computer can detect many blobs, usually producing an excess of data that is redundant or noisy. Traffic filters this through several layers of computer vision and motion analysis to find the 5 most prominent blobs (people) in the frame, which are then delivered to Max/MSP/Jitter and beyond.
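Traffic itself is a Max/MSP/Jitter patch built on cv.jit, but the core idea can be sketched in plain Python: threshold a grayscale frame, group the remaining pixels into connected regions, and keep only the few largest. Everything below (the `find_blobs` function, the threshold value, the synthetic frame) is an illustrative assumption, not Traffic's actual implementation.

```python
# Sketch of "blob" extraction: threshold a grayscale frame, group bright
# pixels into connected regions, keep the 5 largest. Illustrative only --
# Traffic does this with cv.jit inside Max/MSP/Jitter.

def find_blobs(frame, threshold=128, keep=5):
    """Return the `keep` largest connected bright regions as
    (size, (centroid_row, centroid_col)) tuples, largest first."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # Flood-fill this region (4-connected neighbours).
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                size = len(pixels)
                cy = sum(p[0] for p in pixels) / size
                cx = sum(p[1] for p in pixels) / size
                blobs.append((size, (cy, cx)))
    # Discard redundant/noisy detections: keep only the most prominent.
    return sorted(blobs, reverse=True)[:keep]

# Tiny synthetic 8x8 frame: two bright regions on a dark background.
frame = [[0] * 8 for _ in range(8)]
for r, c in [(1, 1), (1, 2), (2, 1), (2, 2)]:   # 4-pixel blob
    frame[r][c] = 200
frame[6][6] = 255                                # 1-pixel blob
print(find_blobs(frame))  # -> [(4, (1.5, 1.5)), (1, (6.0, 6.0))]
```

Ranking by size and truncating to the top few is one simple way to turn “an excess of data” into a handful of trackable people; a real patch would also filter by motion over time.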
Out of the box, Traffic can be used to set up a multi-track sound installation. For “Sonic Geography” I used 4 channels of audio played on 4 speakers, one placed in each corner of the installation. As visitors moved around the room, they could modulate and explore the sounds being played.
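One simple way to realize that corner mapping is bilinear weighting: a visitor's normalized position (x, y) in the unit square sets the gain of the speaker in each corner. The `corner_gains` function and the bilinear scheme are assumptions for illustration, not necessarily how “Sonic Geography” was patched.

```python
# Hedged sketch: map a tracked position in the room (normalized to the
# unit square) to gains for 4 corner speakers via bilinear weighting.

def corner_gains(x, y):
    """Gains for speakers at corners (0,0), (1,0), (0,1), (1,1).
    The four gains always sum to 1, so overall level stays constant."""
    return ((1 - x) * (1 - y),  # corner (0, 0)
            x * (1 - y),        # corner (1, 0)
            (1 - x) * y,        # corner (0, 1)
            x * y)              # corner (1, 1)

# Standing in the middle of the room: all four channels equal.
print(corner_gains(0.5, 0.5))   # -> (0.25, 0.25, 0.25, 0.25)
# Standing in a corner: that speaker gets full gain.
print(corner_gains(0.0, 0.0))   # -> (1.0, 0.0, 0.0, 0.0)
```

Feeding each blob's centroid through a mapping like this is what lets movement around the room modulate the four audio channels.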
Traffic has several prerequisites to work successfully:
– An external camera, preferably lightweight, to be mounted overhead.
– A computer running Max/MSP/Jitter with JM Pelletier’s cv.jit library installed.
– A clear floor space, so that objects on the floor aren’t mistaken for people.