A framework for locally retargeting and rendering facial performance

Ko Yun Liu*, Wan Chun Ma, Chun Fa Chang, Chuan Chang Wang, Paul Debevec

*Corresponding author for this work

Research output: Contribution to journal › Article › Peer-reviewed

6 citations (Scopus)

Abstract

We present a facial motion retargeting method that enables the control of a blendshape rig from marker-based motion capture data. The main purpose of the proposed technique is to allow a blendshape rig to create facial expressions that conform best to the current motion capture input, regardless of the underlying blendshape poses. In other words, even if all of the blendshape poses comprise only symmetrical facial expressions, our method can still create asymmetrical expressions without physically splitting any of them into more localized blendshape poses. An automatic segmentation technique based on the analysis of facial motion is introduced to create facial regions for local retargeting. We also show that normal maps can be blended for rendering within the same framework. Rendering with the blended normal map significantly improves surface appearance and detail.
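The core idea of local retargeting — solving blendshape weights independently per facial region so that symmetric poses can still combine into asymmetric expressions — can be sketched roughly as below. This is a minimal illustration under assumed conventions (marker displacements stacked into vectors, a simple regularized least-squares solve per region), not the paper's actual implementation; all names and the toy data are hypothetical.

```python
import numpy as np

def solve_region_weights(basis, target, l2=1e-3):
    """Least-squares blendshape weights for one facial region.

    basis  : (3m, k) matrix; column j holds blendshape pose j's marker
             displacements restricted to this region (hypothetical layout).
    target : (3m,) captured marker displacements for the same region.
    l2     : small ridge term that keeps the solve well conditioned.
    """
    k = basis.shape[1]
    A = basis.T @ basis + l2 * np.eye(k)  # regularized normal equations
    b = basis.T @ target
    return np.linalg.solve(A, b)

def retarget(regions, capture):
    """Solve each region separately; symmetric blendshape poses can then
    combine into an asymmetric expression overall."""
    return {name: solve_region_weights(B, capture[name])
            for name, B in regions.items()}

# Toy example: two mirrored regions sharing one symmetric "smile" pose.
left_basis = np.array([[1.0], [0.5]])     # smile pose, left-side markers
right_basis = np.array([[1.0], [0.5]])    # smile pose, right-side markers
capture = {"left": np.array([1.0, 0.5]),   # full smile captured on the left
           "right": np.array([0.0, 0.0])}  # neutral captured on the right
weights = retarget({"left": left_basis, "right": right_basis}, capture)
# The left region activates the pose while the right stays near zero:
# an asymmetric expression recovered from a single symmetric blendshape.
```

A global solve over all markers would force both sides to share one weight; splitting the solve per region is what permits the asymmetry described in the abstract. Blended normal maps could reuse the same per-region weights to mix per-pose normal textures at render time.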

Original language: English
Pages (from-to): 159-167
Number of pages: 9
Journal: Computer Animation and Virtual Worlds
Volume: 22
Issue number: 2-3
DOIs
Publication status: Published - 1 April 2011

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design
