Teachers’ Vocal Expressions and Student Engagement in Asynchronous Video Learning

Hung Yue Suen, Yu Sheng Su*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Asynchronous video learning, including massive open online courses (MOOCs), offers flexibility but often fails to elicit students’ affective engagement. This study examines how teachers’ verbal and nonverbal vocal emotive expressions influence students’ self-reported affective engagement. Using computational acoustic and sentiment analysis, valence and arousal scores were extracted from teachers’ verbal vocal expressions, and nonverbal vocal emotions were classified into six categories: anger, fear, happiness, neutral, sadness, and surprise. Data from 210 video lectures across four MOOC platforms, together with post-class feedback from 738 students, were analyzed. Results revealed that teachers’ verbal emotive expressions, even with positive valence and high arousal, did not significantly affect engagement. Conversely, nonverbal vocal expressions with positive valence and high arousal (e.g., happiness, surprise) enhanced engagement, while negative high-arousal emotions (e.g., anger) reduced it. These findings offer practical insights for instructional video creators, teachers, and influencers seeking to foster emotional engagement in asynchronous video learning.
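The valence–arousal framing in the abstract can be illustrated with a minimal sketch. The thresholds and quadrant-to-emotion mapping below are assumptions for illustration only; the study used a trained six-category speech-emotion classifier on acoustic features, not a rule like this:

```python
# Illustrative mapping from valence/arousal scores (circumplex model)
# to coarse emotion groups. Thresholds and labels are hypothetical,
# chosen to mirror the six categories named in the abstract.

def quadrant_emotion(valence: float, arousal: float) -> str:
    """Map valence/arousal scores in [-1, 1] to a coarse emotion label."""
    if abs(valence) < 0.1 and abs(arousal) < 0.1:
        return "neutral"                      # near the origin
    if valence >= 0:
        # Positive valence: high arousal maps to the engagement-enhancing
        # group found in the study; low arousal to a calm state.
        return "happiness/surprise" if arousal >= 0 else "calm"
    # Negative valence: high arousal maps to the engagement-reducing group.
    return "anger/fear" if arousal >= 0 else "sadness"

print(quadrant_emotion(0.6, 0.7))   # positive valence, high arousal
print(quadrant_emotion(-0.5, 0.8))  # negative valence, high arousal
```

The quadrant view makes the reported contrast concrete: the two high-arousal quadrants differ only in valence, yet the study found opposite effects on engagement.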

Original language: English
Journal: International Journal of Human-Computer Interaction
DOIs
Publication status: Accepted/In press - 2025

Keywords

  • Acoustic analysis
  • machine learning
  • natural language processing
  • pedagogy
  • sentiment analysis
  • speech emotion

ASJC Scopus subject areas

  • Human Factors and Ergonomics
  • Human-Computer Interaction
  • Computer Science Applications

