MotionScan captures an actor's face using 32 cameras arranged as stereo pairs. The video streams from these camera pairs are processed using stereo vision techniques to produce a 3D model for each frame from the combined streams. The data also includes a composite texture map built from all the views, plus a normal map. Processing runs in the background to the capture process, and various parameters can be tuned depending on the quality and resolution the customer requires. Once these parameters have been selected, the data is placed in a server queue for processing. The speed at which the data can be completed is a function of the number of processors available. Depth Analysis currently uses a 64-blade cluster to process data, but more blades can be added if the customer requires a quicker turnaround.
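The core idea behind stereo vision reconstruction is triangulation: a point seen by both cameras of a pair appears shifted (the disparity) between the two images, and that shift determines its depth. MotionScan's actual pipeline and parameters are proprietary, so the sketch below is only a generic illustration of the depth-from-disparity relation, with hypothetical focal length and baseline values:

```python
def depth_from_disparity(disparity_row, focal_length_px, baseline_m):
    """Convert per-pixel disparities (pixels) to depths (metres).

    Standard stereo triangulation: depth = f * B / d.
    A disparity of zero means the match failed, so depth is unknown (0.0).
    """
    return [focal_length_px * baseline_m / d if d > 0 else 0.0
            for d in disparity_row]

# Hypothetical values for one row of pixels from a single stereo pair
disparities = [40.0, 50.0, 0.0, 80.0]
depths = depth_from_disparity(disparities,
                              focal_length_px=1200.0,  # assumed lens/sensor
                              baseline_m=0.06)         # assumed camera spacing
# depths -> [1.8, 1.44, 0.0, 0.9]
```

Note how larger disparities map to smaller depths: nearby features shift more between the two views. A real system computes these disparities per pixel by matching image patches between the left and right cameras, then fuses the depth maps from all pairs into a single mesh.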
No user intervention is required during the processing stage, so the data is neither adulterated nor interpreted in any way.