Abstract
Asynchronous video learning, including massive open online courses (MOOCs), offers flexibility but often fails to elicit students’ affective engagement. This study examines how teachers’ verbal and nonverbal vocal emotive expressions influence students’ self-reported affective engagement. Using computational acoustic and sentiment analysis, valence and arousal scores were extracted from teachers’ verbal vocal expressions, and nonverbal vocal emotions were classified into six categories: anger, fear, happiness, neutral, sadness, and surprise. The analysis drew on 210 video lectures across four MOOC platforms and post-class feedback from 738 students. Results revealed that teachers’ verbal emotive expressions, even those with positive valence and high arousal, did not significantly affect engagement. Conversely, nonverbal vocal expressions with positive valence and high arousal (e.g., happiness, surprise) enhanced engagement, while negative high-arousal emotions (e.g., anger) reduced it. These findings offer practical insights for instructional video creators, teachers, and influencers seeking to foster emotional engagement in asynchronous video learning.
| Original language | English |
|---|---|
| Pages (from-to) | 13483-13494 |
| Number of pages | 12 |
| Journal | International Journal of Human-Computer Interaction |
| Volume | 41 |
| Issue number | 21 |
| DOIs | |
| Publication status | Published - 2025 |
Keywords
- Acoustic analysis
- machine learning
- natural language processing
- pedagogy
- sentiment analysis
- speech emotion
ASJC Scopus subject areas
- Human Factors and Ergonomics
- Human-Computer Interaction
- Computer Science Applications