Facial motion capture
Facial motion capture, or facial MoCap, is a subset of the motion capture field, frequently used for machine vision systems. A user's face is viewed, usually full on, and relative head motion is filtered out. The individual motions of the cheeks, lips, chin, eyes and eyebrows are studied and used for facial expression recognition.
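The pipeline described above can be sketched in a few lines: cancel the rigid head motion, then measure how each facial feature has moved relative to a neutral pose. The landmark names, coordinates and anchor choice below are illustrative assumptions, not a standard.

```python
# Minimal sketch of the facial-MoCap pipeline: filter out rigid head
# motion, then measure per-feature movement against a neutral pose.
# Landmark names and values are invented for illustration.

NEUTRAL = {"nose": (0.0, 0.0), "lip_corner": (1.0, -1.0), "brow": (0.5, 2.0)}

def remove_head_motion(frame, anchor="nose"):
    """Express every landmark relative to a stable anchor point,
    cancelling whole-head translation between frames."""
    ax, ay = frame[anchor]
    return {k: (x - ax, y - ay) for k, (x, y) in frame.items()}

def feature_displacements(frame, neutral=NEUTRAL):
    """Per-landmark movement relative to the neutral pose, with
    head motion already filtered out of both."""
    rel = remove_head_motion(frame)
    base = remove_head_motion(neutral)
    return {k: (rel[k][0] - base[k][0], rel[k][1] - base[k][1]) for k in rel}

# A head shifted 3 units to the right with a raised brow: the head
# translation cancels, leaving only the brow displacement.
frame = {"nose": (3.0, 0.0), "lip_corner": (4.0, -1.0), "brow": (3.5, 2.6)}
moves = feature_displacements(frame)
```

A real system would use dozens of tracked landmarks and a full rigid-body transform (rotation as well as translation), but the principle is the same: only the residual motion after head-pose removal carries expression information.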
Below, we offer a selection of links from our resource databases which may match this term.
Related Dictionary Entries for Facial motion capture:
Resources in our database matching the Term Facial motion capture:
BBC article about how adding authenticity to VR goes beyond graphics, also encompassing extensive use of motion capture to catalogue how stance, gait and the tiny movements of facial muscles combine when people display different emotions.
Horses and other Animals in Motion is a collection of, as the title says, 45 sets of photographs of horses hauling, walking, trotting and so on, plus sequences of donkeys, an ox, a pig, a dog, a cat, deer and other animals, capturing details of anatomy and movement. These images were taken by the definitive expert in the field, Eadweard Muybridge.
An expressive face is a work of art: constantly moving and changing, with lips, brows and frown lines each in constant motion. Stop Staring analyses facial structures and movements, then shows animators how to bring life to the faces of their characters.
Work by QuinteQ on real-time motion capture without excessive hardware holds promise for MoCap use in public VR.
MoCap - Motion Capture - for all its impressive abilities, has definite limitations in terms of sensory fidelity, and in the expense and bulk of the rig. Gesture control is cheap and captures every little movement, but is easily overwhelmed. Is a hybrid system possible?
Motion sensors are starting to creep into a whole plethora of applications. They are the linchpins of haptics, of 3D pointers, of stress-based sensor networks and locomotive VR interfaces. Yet there's a problem. Small, discreet motion sensors, tiny enough to be built into larger devices the size, say, of a Wii remote or a six-ounce HMD, are extremely difficult and expensive to produce.
We have known for some time that different cultures perceive different facial expressions as conveying different emotional states, and likewise that different cultures make different facial expressions. Rather than keeping ream after ream of optional facial expression sequence files, might there be a far better way to handle such regional differences when recognising avatar-based visual emotional states?
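One simple alternative to per-region sequence files would be a single lookup keyed by both culture and expression, so one data structure resolves the locally perceived emotion. This is only a sketch of that idea; every culture, expression and emotion entry below is an invented placeholder.

```python
# Hedged sketch: resolve an avatar expression to the emotion a viewer
# from a given culture would perceive, from one shared mapping rather
# than separate expression files per region. All entries are
# illustrative placeholders, not empirical data.

EMOTION_MAP = {
    ("culture_a", "smile_broad"): "joy",
    ("culture_b", "smile_broad"): "politeness",
    ("culture_a", "brow_raise"): "surprise",
}

def perceived_emotion(culture, expression, default="neutral"):
    """Look up the locally perceived emotion for an expression,
    falling back to a neutral reading when no mapping exists."""
    return EMOTION_MAP.get((culture, expression), default)
```

The design point is that adding a new region means adding rows to one table, rather than authoring and shipping a whole parallel set of sequence files.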
The problem with photofit and sketch artists is that human memory is not geared to remember fine facial features, even of people we know well. How, then, to take advantage of facial recognition when looking for a suspect's identity?
Industry News containing the Term Facial motion capture:
A recent patent filing by defense contractor Lockheed Martin gives us a peek into a portable virtual reality simulator the company is cooking up.
The patent application is titled: "Portable immersive environment using motio...
(Press Release) The inertial motion capture suit Moven developed by Xsens Technologies B.V. has won the Overijssel Innovation Award 2007.
The suit is based on Xsens' inertial sensor technology allowing total freedom of move...
The CAPTECH2004 Workshop on Modelling and Motion Capture Techniques for Virtual Environments takes place on 9, 10 & 11 December 2004, in Zermatt, Switzerland.
An international workshop to stimulate discussion on the current an...
(Press Release) Xsens Technologies, creator of Moven, a leading camera-less motion capture solution, has announced that Sony Picture Imageworks and independent console videogames developer Insomniac Games are new customers of the technology...
Flashing a wink and a smirk might be second nature for some people, but computer animators can be hard-pressed to depict such an expression realistically. Now scientists at Disney Research, Pittsburgh, and Carnegie Mellon University's Robo...