Marker-based Facial Motion Capture
Old-school facial motion capture systems were marker-based. Several hundred points were laboriously tagged onto a user's face, which could then be tracked via cameras. The main disadvantages of this method were the sheer number of hours it took to set up, and the loss of all fine detail, since the markers effectively covered the face.
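The core software problem in marker-based capture is re-identifying each marker from frame to frame once the camera has detected it. Below is an illustrative sketch (not from any particular mocap system) of the simplest approach: nearest-neighbour matching of 2D marker centroids between consecutive frames, assuming marker detection has already been done.

```python
import math

def track_markers(prev, curr, max_dist=10.0):
    """Match each known marker to its nearest detection in the
    current frame; markers with no detection within max_dist
    pixels are treated as lost (occluded or fallen off)."""
    matches = {}
    unused = list(curr)
    for marker_id, (px, py) in prev.items():
        best, best_d = None, max_dist
        for point in unused:
            d = math.hypot(point[0] - px, point[1] - py)
            if d < best_d:
                best, best_d = point, d
        if best is not None:
            matches[marker_id] = best
            unused.remove(best)  # each detection matches at most one marker
    return matches

# Hypothetical example: three facial markers drifting slightly between frames.
frame_a = {"brow_l": (120.0, 80.0), "brow_r": (180.0, 80.0), "lip": (150.0, 160.0)}
frame_b = [(121.5, 81.0), (179.0, 82.0), (150.5, 163.0)]
print(track_markers(frame_a, frame_b))
```

Real systems use many cameras and triangulate each matched marker into 3D, but the frame-to-frame identity problem above is the same, which is why markers must be spaced far enough apart not to be confused — another reason fine detail between markers is lost.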
Below, we offer a selection of links from our resource databases which may match this term.
Related Dictionary Entries for Marker-based Facial Motion Capture:
Resources in our database matching the Term Marker-based Facial Motion Capture:
BBC article about how adding authenticity to VR goes beyond graphics, also encompassing extensive use of motion capture to catalogue how stance, gait and the tiny movements of facial muscles combine when people display different emotions.
Horses and other Animals in Motion is, as the title says, a collection of 45 sets of photographs of horses hauling, walking, trotting and so on, plus sequences of donkeys, an ox, a pig, a dog, a cat, deer and other animals, capturing details of anatomy and movement. These images were taken by the definitive expert in the field, Eadweard Muybridge.
An expressive face is a work of art, constantly moving and changing. Lips, brows, frown lines: each is in constant motion. Stop Staring analyses facial structures and movements, then shows animators how to bring life to the faces of their characters.
Work by QuinteQ on real-time motion capture without excessive hardware holds promise for MoCap use in public VR.
Motion sensors are starting to creep into a whole plethora of applications. They are the linchpins of haptics, of 3D pointers, of stress-based sensor networks and locomotive VR interfaces. Yet there's a problem: small, discreet motion sensors, tiny enough to be built into larger devices the size of, say, a Wii Remote or a six-ounce HMD, are extremely difficult and expensive to produce.
We have known for some time that different cultures perceive the same facial expressions as conveying different emotional states, and likewise that different cultures make different facial expressions in the first place. Rather than maintaining ream after ream of facial expression sequence files for every region, might there be a far better way to handle such regional differences in recognising avatar-based visual emotional states?
MoCap - motion capture - for all its impressive abilities, has definite limitations: sensory fidelity, and the expense and bulk of the rig. Gesture control is cheap and captures every little movement, but is easily overwhelmed. Is a hybrid system possible?
Industry News containing the Term Marker-based Facial Motion Capture:
The CAPTECH2004 Workshop on Modelling and Motion Capture Techniques for Virtual Environments takes place on 9, 10 & 11 December 2004, in Zermatt, Switzerland.
An international workshop to stimulate discussion on the current an...
(Press Release) The inertial motion capture suit Moven, developed by Xsens Technologies B.V., has won the Overijssel Innovation Award 2007.
The suit is based on Xsens' inertial sensor technology allowing total freedom of move...
A recent patent filing by defense contractor Lockheed Martin gives us a peek into a portable virtual reality simulator the company is cooking up.
The patent application is titled: "Portable immersive environment using motio...
Flashing a wink and a smirk might be second nature for some people, but computer animators can be hard-pressed to depict such an expression realistically. Now scientists at Disney Research, Pittsburgh, and Carnegie Mellon University's Robo...
It is well known that people use head motion during conversation to convey a range of meanings and emotions, and that women use more active head motion when conversing with each other than men use when they talk with each other.