
A Component-based Face Synthesizing Method

Files in This Item:
MA-L1-4.pdf (1.4 MB, PDF)
Please use this identifier to cite or link to this item: http://hdl.handle.net/2115/39638

Title: A Component-based Face Synthesizing Method
Authors: Chiang, Cheng-Chin
Chen, Zhih-Wei
Yang, Chen-Ning
Issue Date: 4-Oct-2009
Publisher: Asia-Pacific Signal and Information Processing Association, 2009 Annual Summit and Conference, International Organizing Committee
Journal Title: Proceedings : APSIPA ASC 2009 : Asia-Pacific Signal and Information Processing Association, 2009 Annual Summit and Conference
Start Page: 24
End Page: 30
Abstract: The active appearance model (AAM) is a popular tool for object tracking. An AAM is characterized by its integrated modeling of deformations in both shape and texture; beyond object tracking, it therefore also serves as a good visual synthesizer. Another strength of the AAM is its compact representation of the geometries and textures of synthesized objects. Trained with principal component analysis, the AAM parameterizes the shape and texture of each synthesized object simply as linear combinations of eigen-shapes and eigen-textures, respectively. This paper presents a novel video-driven face synthesizing method that tracks a person's face across video frames and synthesizes novel faces using the geometries of individual facial components, such as the eyes, nose, and mouth, of other persons. To this end, we propose component-based active shape models (ASMs) for synthesizing each facial component. One prominent feature of the proposed method is that a rich variety of novel facial expressions can be synthesized by combining facial components from different persons on the synthesized faces; no retraining of the AAMs or ASMs is required to synthesize these novel expressions. The experimental results show that the proposed method accomplishes interesting and vivid facial synthesis and exhibits high potential for many practical applications.
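As a rough illustration of the PCA-based shape parameterization the abstract describes (not the authors' implementation), the sketch below builds eigen-shapes from a set of aligned landmark vectors and synthesizes a new shape as the mean shape plus a weighted combination of eigen-shapes. All data, dimensions, and the number of retained modes are hypothetical.

```python
# Minimal sketch of a PCA shape model: s = s_bar + P^T b,
# where rows of P are eigen-shapes and b are shape parameters.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 50 aligned face shapes, each with 68
# landmarks flattened into a 136-dimensional vector.
shapes = rng.normal(size=(50, 136))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape

# Eigen-shapes via SVD of the centered data (equivalent to PCA).
_, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
k = 10                      # number of retained deformation modes (assumed)
eigen_shapes = vt[:k]       # each row is one eigen-shape

# Synthesize a novel shape: perturb the mean shape along the
# eigen-shapes, scaling each mode by its training-set variation.
b = rng.normal(size=k) * (singular_values[:k] / np.sqrt(len(shapes)))
novel_shape = mean_shape + eigen_shapes.T @ b
print(novel_shape.shape)    # (136,) -> 68 synthesized landmarks
```

In the same spirit, the paper's component-based ASMs would fit one such model per facial component (eyes, nose, mouth), so that components from different persons can be recombined without retraining the full model.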
Description: APSIPA ASC 2009: Asia-Pacific Signal and Information Processing Association, 2009 Annual Summit and Conference. 4-7 October 2009. Sapporo, Japan. Oral session: 3D Synthesis and Expression (5 October 2009).
Conference Name: APSIPA ASC 2009: Asia-Pacific Signal and Information Processing Association, 2009 Annual Summit and Conference
Conference Place: Sapporo
Type: proceedings
URI: http://hdl.handle.net/2115/39638
Appears in Collections: Hokkaido University Sustainability Weeks 2009 (Sustainability Weeks 2009) > 2009 APSIPA Annual Summit and Conference
