Apple Research Labs
Williams sets out to create a system for animating a face from video input. The work builds on that of Parke and Waters by attempting to map texture and expression using continuous motion as input. In the author's opinion, current technology allows both human features and human performance to be acquired, edited, and abstracted with sufficient detail and precision to serve dramatic purposes.
To create the model, a real head was sculpted in plaster and photographed from several angles. The scanned data, together with the photographic information, was used to derive a warping rule for texture mapping. The result was a cylindrical texture map that could be wrapped around the 3-D facial model. Points on the face that could not be seen were filled in by interpolating between the surrounding points. To obtain an even more accurate texture, the model stood on a turntable and a peripheral photograph was taken as she rotated through 360 degrees. The final model can be stretched in unrealistic ways, resembling a latex mask in some respects.
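The synopsis gives no implementation detail, but the cylindrical mapping step can be illustrated with a short sketch. The function below (the name, axis convention, and parameters are assumptions, not from the paper) projects a 3-D head vertex into the (u, v) coordinates of a cylindrical texture map, assuming the head is centered on the vertical y axis.

```python
import math

def cylindrical_uv(x, y, z, y_min, y_max):
    """Map a 3-D head vertex to cylindrical texture coordinates.

    Assumes the head is centered on the vertical (y) axis, as in a
    cylindrical scan: u sweeps once around the head, v runs from the
    bottom (y_min) to the top (y_max) of the scanned region.
    """
    u = (math.atan2(z, x) + math.pi) / (2.0 * math.pi)  # angle -> [0, 1)
    v = (y - y_min) / (y_max - y_min)                   # height -> [0, 1]
    return u, v
```

Texels that no photograph covers (for example, under the chin) would show up as gaps in this map; the interpolation between surrounding visible points that the synopsis describes fills them in.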
To drive the animation, small dots were placed on the model's skin and tracked as she performed a series of facial expressions. Using this tracking data as input, the computer-generated face reproduced each change of expression: the reference points on the live model were duplicated on the computer-generated face, and the measured movement of each reference point was translated into a corresponding deformation of the facial mesh (a sketch of such a warp follows this synopsis). This work is a proof of concept and will be continued and expanded in the future. [Synopsis by Valarie Hall]
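Williams' exact warping functions are not reproduced in the synopsis; the sketch below stands in with an inverse-distance (Shepard) blend of the tracked displacements, which captures the core idea that each mesh vertex moves with its nearby reference points. The function name, parameters, and weighting scheme are all assumptions for illustration.

```python
import numpy as np

def warp_mesh(vertices, controls, displacements, power=2.0, eps=1e-8):
    """Displace mesh vertices from tracked control-point motion.

    vertices:      (N, 3) rest positions of the facial mesh
    controls:      (M, 3) rest positions of the tracked dots
    displacements: (M, 3) measured motion of each dot for this frame

    The inverse-distance weighting is an illustrative stand-in for
    Williams' influence functions, not the paper's exact scheme.
    """
    # Distance from every vertex to every control point: shape (N, M)
    d = np.linalg.norm(vertices[:, None, :] - controls[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)          # nearer dots dominate
    w /= w.sum(axis=1, keepdims=True)     # normalize weights per vertex
    return vertices + w @ displacements   # blended displacement per vertex
```

Applied once per video frame with that frame's measured dot displacements, this reproduces the paper's basic loop: track the dots, then warp the textured mesh to match.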