Teruhisa Hochin (寶珍 輝尚 / ホウチン テルヒサ)
Affiliation: Otemon Gakuin University, Faculty of Science and Engineering, Department of Information Engineering
Position: Professor
Language | English
Publication date | 2020/09
Type | Paper
Peer review | Refereed
Title | Speech-Driven Facial Animation with Transfer Learning
Authorship | Co-authored or co-edited (excluding lead editorship)
Published in | Proceedings - 2020 9th International Congress on Advanced Applied Informatics, IIAI-AAI 2020
Publication region | International
Publisher | IEEE
Volume/Issue/Pages | pp. 49-54
Authors | Masaya Ohura, Teruhisa Hochin, Hiroki Nomiya
Abstract | Animating a CG model with the conventional approach requires moving it by hand, a task that is extremely laborious and time-consuming. One alternative is to generate CG animation from facial feature points obtained by face tracking. Face tracking, however, has its own problems: tracking fails when the face is occluded, and high-accuracy tracking requires large-scale equipment. To address these issues, research has been conducted on generating facial expressions through machine learning from speech, which is closely linked to mouth movement. These studies, however, do not take the user's individuality into account during learning. In this paper, we propose a method that applies transfer learning to a trained model in order to generate animations that reflect the user's individuality. Verification experiments evaluate the proposed method by comparing data generated with and without transfer learning.
DOI | 10.1109/IIAI-AAI50415.2020.00020 |
DBLP ID | conf/iiaiaai/OhuraHN20 |
PermalinkURL | https://dblp.uni-trier.de/rec/conf/iiaiaai/2020 |
researchmap URL | https://dblp.uni-trier.de/db/conf/iiaiaai/iiaiaai2020.html#OhuraHN20
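The transfer-learning step described in the abstract can be illustrated roughly as follows. This is a minimal sketch in PyTorch, not the paper's actual implementation: the network architecture, checkpoint name, and user dataset below are hypothetical, and the sketch only shows the generic pattern of freezing a pretrained speech encoder and fine-tuning an output layer on a small amount of user-specific data.

```python
# Minimal sketch of the transfer-learning idea: fine-tune a pretrained
# speech-to-facial-animation model on user-specific data so the generated
# animation reflects that user's individuality. All names are hypothetical.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class SpeechToFaceNet(nn.Module):
    """Hypothetical network mapping speech features to facial animation parameters."""
    def __init__(self, n_speech_feats=40, n_face_params=52):
        super().__init__()
        self.encoder = nn.Sequential(              # generic speech encoder
            nn.Linear(n_speech_feats, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128, n_face_params)  # layer adapted to the new user

    def forward(self, x):
        return self.head(self.encoder(x))

# 1. Start from a model already trained on a large, generic corpus.
model = SpeechToFaceNet()
# model.load_state_dict(torch.load("pretrained_generic.pt"))  # hypothetical checkpoint

# 2. Freeze the generic encoder; only the head is adapted to the new user.
for p in model.encoder.parameters():
    p.requires_grad = False

# 3. Fine-tune on a small user-specific dataset (speech features -> face parameters).
user_x = torch.randn(200, 40)   # placeholder user speech features
user_y = torch.randn(200, 52)   # placeholder target face parameters
loader = DataLoader(TensorDataset(user_x, user_y), batch_size=16, shuffle=True)

optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

model.train()
for epoch in range(10):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```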