A framework for locally retargeting and rendering facial performance

Ko Yun Liu*, Wan Chun Ma, Chun Fa Chang, Chuan Chang Wang, Paul Debevec

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)


We present a facial motion retargeting method that enables a blendshape rig to be controlled by marker-based motion capture data. The main purpose of the proposed technique is to allow a blendshape rig to create the facial expressions that best conform to the current motion capture input, regardless of the underlying blendshape poses. In other words, even if all of the blendshape poses comprise only symmetrical facial expressions, our method can still create asymmetrical expressions without physically splitting any of them into more localized blendshape poses. An automatic segmentation technique based on an analysis of facial motion is introduced to create facial regions for local retargeting. We also show that normal maps can be blended for rendering within the same framework. Rendering with the blended normal map significantly improves surface appearance and detail.
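The core ideas in the abstract — fitting blendshape weights to captured marker positions and reusing those weights to blend per-blendshape normal maps — can be illustrated with a minimal sketch. This is not the authors' solver: the paper performs the fit per automatically segmented facial region, while the sketch below assumes a single global region, a regularized linear least-squares solve, and clipping of weights to a plausible [0, 1] activation range. All function and parameter names are illustrative.

```python
import numpy as np

def solve_blend_weights(neutral, blend_deltas, target, reg=1e-3):
    """Fit blendshape weights so the rig's markers match captured markers.

    neutral      : (M, 3) neutral-pose marker positions
    blend_deltas : (K, M, 3) per-blendshape marker displacements from neutral
    target       : (M, 3) motion-captured marker positions
    """
    K = blend_deltas.shape[0]
    A = blend_deltas.reshape(K, -1).T            # (3M, K) blendshape basis
    b = (target - neutral).ravel()               # (3M,) observed displacement
    # Tikhonov regularization keeps weights stable under marker noise.
    A_reg = np.vstack([A, np.sqrt(reg) * np.eye(K)])
    b_reg = np.concatenate([b, np.zeros(K)])
    w, *_ = np.linalg.lstsq(A_reg, b_reg, rcond=None)
    return np.clip(w, 0.0, 1.0)                  # clamp to activation range

def blend_normal_maps(normal_maps, weights):
    """Weighted blend of per-blendshape normal maps, renormalized per texel.

    normal_maps : (K, H, W, 3) tangent-space normals per blendshape
    weights     : (K,) blendshape weights from the retargeting solve
    """
    n = np.tensordot(weights, normal_maps, axes=1)       # (H, W, 3)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```

In a local-retargeting setting, the same solve would be repeated once per facial region (with markers assigned to regions by the paper's motion-based segmentation), letting the left and right halves of the face activate the same symmetric blendshape with different weights.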

Original language: English
Pages (from-to): 159-167
Number of pages: 9
Journal: Computer Animation and Virtual Worlds
Issue number: 2-3
Publication status: Published - 2011 Apr


Keywords

  • expression synthesis
  • facial retargeting
  • normal map blending

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design


