The modality effect in a mobile learning environment: Learning from spoken text and real objects

Tzu Chien Liu*, Yi Chun Lin, Yuan Gao, Fred Paas

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

5 Citations (Scopus)

Abstract

The finding that under split-attention conditions students learn more from a picture and spoken text than from a picture and written text (i.e., the modality effect) has consistently been found in many types of computer-assisted multimedia learning environments. Using 58 fifth- and sixth-grade elementary school children as participants, we investigated whether the modality effect can also be found in a mobile learning environment (MLE) on plants' leaf morphology, in which students had to learn by integrating information from text and real plants in the physical environment. A single-factor experimental design was used to examine the hypothesis that students in a mixed-mode condition with real plants and spoken text (STP condition) would pay more attention to the real plants and achieve higher performance on retention, comprehension, and transfer tests than students in a single-mode condition with real plants and written text (WTP condition). Although participants in the STP condition paid more attention to observing the plants and achieved a higher score on the transfer test than participants in the WTP condition, no differences were found between the conditions in retention and comprehension test performance.

Original language: English
Pages (from-to): 574-586
Number of pages: 13
Journal: British Journal of Educational Technology
Volume: 50
Issue number: 2
DOIs
Publication status: Published - March 2019

ASJC Scopus subject areas

  • Education
