Off-line automatic virtual director for lecture video

Di Wei Huang, Yu Tzu Lin*, Greg C. Lee

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This research proposed an automatic mechanism for refining lecture video by composing meaningful video clips from multiple cameras. To maximize the captured video information and produce a lecture video suitable for learners, the video content is first analysed using both visual and audio information. Meaningful events are then detected by extracting the lecturer's and learners' behaviours according to teaching and learning principles in class. An event-driven camera switching strategy, based on a finite state machine, is derived to change the camera view to a meaningful one. The final lecture video is then produced by composing all meaningful video clips. The experimental results show that learners felt interested and comfortable while watching the lecture video, and also agreed on the meaningfulness of the selected video clips.
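The event-driven camera switching strategy described above can be sketched as a small finite state machine. The states, event names, and transitions below are illustrative assumptions for exposition only; they are not taken from the paper's actual design.

```python
# Illustrative FSM sketch of event-driven camera switching.
# States, events, and transitions are assumed, not from the paper.

class CameraDirector:
    """Switches between camera views when meaningful events are detected."""

    # Transition table: (current_state, event) -> next_state
    TRANSITIONS = {
        ("lecturer_view", "lecturer_writes_on_board"): "board_view",
        ("lecturer_view", "learner_asks_question"): "audience_view",
        ("board_view", "lecturer_faces_class"): "lecturer_view",
        ("audience_view", "lecturer_answers"): "lecturer_view",
    }

    def __init__(self, initial_state="lecturer_view"):
        self.state = initial_state

    def on_event(self, event):
        """Switch the active camera if the event triggers a transition;
        otherwise stay on the current view."""
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state


director = CameraDirector()
print(director.on_event("lecturer_writes_on_board"))  # board_view
print(director.on_event("lecturer_faces_class"))      # lecturer_view
```

Offline composition would then replay the detected event timeline through such a machine and cut the final video at each state change.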

Original language: English
Title of host publication: Advanced Technologies, Embedded and Multimedia for Human-Centric Computing, HumanCom and EMC 2013
Publisher: Springer Verlag
Number of pages: 1
ISBN (Print): 9789400772618
DOIs
Publication status: Published - 2014 Jan 1
Event: Advanced Technologies, Embedded and Multimedia for Human-Centric Computing, HumanCom and EMC 2013 - Taiwan
Duration: 2013 Aug 23 - 2013 Aug 25

Publication series

Name: Lecture Notes in Electrical Engineering
Volume: 260
ISSN (Print): 1876-1100
ISSN (Electronic): 1876-1119

Conference

Conference: Advanced Technologies, Embedded and Multimedia for Human-Centric Computing, HumanCom and EMC 2013
Country/Territory: Taiwan
Period: 2013/08/23 - 2013/08/25

ASJC Scopus subject areas

  • Industrial and Manufacturing Engineering
