ITE Transactions on Media Technology and Applications
Online ISSN: 2186-7364
ISSN-L: 2186-7364
Regular Section
[Paper] Speech-driven Face Reenactment for a Video Sequence
Yuta Nakashima, Takaaki Yasui, Leon Nguyen, Noboru Babaguchi
Free Access

2020, Volume 8, Issue 1, pp. 60-68

Abstract

We present a system for reenacting a person's face driven by speech. Given a video sequence with the corresponding audio track of a person giving a speech, together with another audio track containing different speech from the same person, we reconstruct a 3D mesh of the face in each frame of the video sequence to match the speech in the second audio track. Audio features are extracted from both audio tracks. Assuming that the appearance of the mouth is highly correlated with these speech features, we extract the mouth region of the 3D face mesh from frames of the video sequence whose speech features are close to those of the second audio track. These extracted mouth regions then replace the original mouth regions in the video sequence while retaining temporal consistency, synthesizing a reenactment video in which the person seemingly gives the speech from the second audio track. Our system, coined S2TH (speech to talking head), does not require any special hardware to capture the 3D geometry of faces but instead uses a state-of-the-art method for facial geometry regression. We demonstrate the quality of reenactment visually and through subjective evaluation.
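The retrieval step described above can be illustrated with a minimal sketch. The snippet below assumes MFCCs as the per-frame audio features, nearest-neighbour matching under Euclidean distance, and a sliding-median filter over the retrieved frame indices for temporal consistency; these concrete choices, and all function and file names, are illustrative assumptions rather than the features or smoothing scheme actually used by S2TH.

# A minimal sketch of audio-feature-based mouth-frame retrieval.
# Assumptions (not from the paper): MFCC features, Euclidean nearest
# neighbours, and median-filter smoothing of the index sequence.
import numpy as np
import librosa

def frame_features(audio_path, fps=25, sr=16000, n_mfcc=13):
    """Return one MFCC vector per video frame of the given audio track."""
    y, sr = librosa.load(audio_path, sr=sr)
    hop = sr // fps  # one feature frame per video frame
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, hop_length=hop)
    return mfcc.T  # shape: (num_frames, n_mfcc)

def match_mouth_frames(video_feats, target_feats, smooth=5):
    """For each target frame, pick the source video frame whose audio
    features are closest, then median-filter the index sequence so the
    transplanted mouth meshes stay temporally consistent."""
    # Pairwise squared Euclidean distances: (targets x sources).
    d = ((target_feats[:, None, :] - video_feats[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)
    # Crude temporal smoothing: sliding median over the index sequence.
    half = smooth // 2
    padded = np.pad(idx, half, mode="edge")
    return np.array([int(np.median(padded[i:i + smooth]))
                     for i in range(len(idx))])

# Hypothetical usage: indices of source video frames whose mouth regions
# would be transplanted to reenact the second audio track.
src = frame_features("speech_video.wav")
tgt = frame_features("new_speech.wav")
frame_indices = match_mouth_frames(src, tgt)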

© 2020 The Institute of Image Information and Television Engineers