Setting up Faceware’s tools in a voice booth is a simple process, and the setup used by Formosa will work in 99% of similar booths. Below, performance capture supervisor Christopher Jones describes the process and approach.
How do you set up a Faceware ProHD headcam?
“The ProHD headcam arrives in a Pelican case and can be unpacked and set up within 15 minutes. Fitting an actor is an intuitive process, and helmet sizing is simple: with three helmet sizes and three thicknesses of padding for the front, top, and back, a secure fit can be found by trying combinations until the actor finds one that is comfortable.
“Next, an adjustable belt holding the battery pack is put on the actor, and finally the camera is attached to the helmet. An operator frames up the camera directly in front of the actor’s face and sets focus. The directors and producers can then view the real-time HD feed from the camera at full resolution in the control room to gauge the nuance of the performance.”
What are the benefits of simultaneous voice and face capture?
“It’s a trend we’ve seen growing for years. From a creative standpoint, the actor knows their emotional expression will remain cohesive and true to their intended performance. That includes really hitting the lip sync and bringing more of their true intent to their character’s animation.
“From a practical standpoint, there will be certain jobs that cannot be won without that capacity for simultaneous capture. It’s also a possible service revenue stream: a VO studio can bill clients for capture equipment, labor, post work, and media management.”
How do you sync the performance with Pro Tools software?
“This process is straightforward and can be accomplished in many ways, thanks in part to the HD-SDI video signal produced by the ProHD headcam. Most often that signal is routed into a digital disk recorder that accepts audio and timecode from Pro Tools. At Formosa, the deck is set up to record automatically whenever Pro Tools records, using the timecode to trigger recording and keep everything perfectly in sync for post.”
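The chase-sync behavior described above can be sketched as simple logic: a deck that starts capturing the moment incoming timecode begins to advance and stops when it freezes. This is a minimal illustrative sketch, not any real recorder's control API; the `ChaseRecorder` class and its method names are invented for illustration.

```python
def tc_to_frames(tc: str, fps: int = 30) -> int:
    """Convert an HH:MM:SS:FF timecode string to an absolute frame count."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

class ChaseRecorder:
    """Hypothetical deck that records whenever incoming timecode is rolling."""

    def __init__(self):
        self.recording = False
        self.last_frame = None
        self.clips = []  # recorded spans as [start_frame, end_frame] pairs

    def on_timecode(self, tc: str):
        """Handle one incoming timecode frame from the audio workstation."""
        frame = tc_to_frames(tc)
        if self.last_frame is not None and frame > self.last_frame:
            if not self.recording:
                # Timecode started rolling: begin a new clip.
                self.recording = True
                self.clips.append([self.last_frame, frame])
            else:
                # Timecode still rolling: extend the current clip.
                self.clips[-1][1] = frame
        elif self.last_frame is not None and frame == self.last_frame:
            # Timecode parked: stop recording.
            self.recording = False
        self.last_frame = frame
```

Because both the deck and Pro Tools key off the same timecode stream, every recorded clip carries start and end frames that line up exactly with the audio session, which is what makes the post sync trivial.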
How is the data then processed in post using Faceware’s Analyzer and Retargeter?
“It’s in post that the Faceware Creative Suite Pipeline plays its part. Once the performance is captured, the data is run through Faceware’s Analyzer software to track even the most minute details of an actor’s facial performance. This is done quickly and easily via an intuitive workflow; technicians can easily read and trim their imported footage by timecode, frame, or job length, streamlining the animation pipeline.
“The Autotrack feature furthers this efficiency by tracking the performance with one button, or the technician can create a custom Tracking Model to refine and pinpoint the exact performance desired. These global tracking models can be shared between users for greater efficiency across an entire animation team.
“The data is then exported to Faceware’s Retargeter – a plugin for Autodesk Maya, MotionBuilder, and 3ds Max. Retargeter uses the tracking data exported from Analyzer to produce high-quality, lifelike facial animation.
“Retargeter operates with any character or rig; if you can keyframe it, Retargeter can drive it. Users can easily teach Retargeter how they want their rig to work with a simple Character Setup process. Just load your Analyzer performance data and use the intuitive pose-based workflow to achieve realistic, high-quality animation. The Shared Pose Libraries can further increase consistency and speed. If time is short, the AutoSolve feature can automate the animation process with command-line access and batch commands.”
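As a rough illustration of the kind of batch automation described above, the sketch below builds one solver command per performance file in a takes folder. The executable name `autosolve`, its flags, and the `.xml` extension are placeholder assumptions for illustration, not Faceware's actual command-line interface.

```python
import subprocess
from pathlib import Path

def batch_solve(takes_dir: str, out_dir: str, tool: str = "autosolve") -> list:
    """Build (and optionally run) one solver command per take file.

    `tool` and its --input/--output flags are hypothetical placeholders.
    Returns the list of commands so a pipeline script can inspect or queue them.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    cmds = []
    for take in sorted(Path(takes_dir).glob("*.xml")):
        cmd = [
            tool,
            "--input", str(take),
            "--output", str(out / (take.stem + "_anim.xml")),
        ]
        cmds.append(cmd)
        # subprocess.run(cmd, check=True)  # uncomment to actually execute
    return cmds
```

A wrapper like this is where the time savings of command-line access show up: an overnight batch can solve every take from a session without an animator touching each file.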