The modality effect in a mobile learning environment: Learning from spoken text and real objects

Tzu Chien Liu*, Yi Chun Lin, Yuan Gao, Fred Paas

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

The finding that under split-attention conditions students learn more from a picture and spoken text than from a picture and written text (i.e., the modality effect) has consistently been found in many types of computer-assisted multimedia learning environments. Using 58 fifth-grade and sixth-grade elementary school children as participants, we investigated whether the modality effect can also be found in a mobile learning environment (MLE) on plants' leaf morphology, in which students had to learn by integrating information from text and real plants in the physical environment. A single-factor experimental design was used to examine the hypothesis that students in a mixed-mode condition with real plants and spoken text (STP condition) would pay more attention to the real plants, and achieve higher performance on retention, comprehension, and transfer tests, than students in the single-mode condition with real plants and written text (WTP condition). Whereas we found that participants in the STP condition paid more attention to observing the plants and achieved a higher score on the transfer test than participants in the WTP condition, no differences were found between the conditions for retention and comprehension test performance.

Original language: English
Pages (from-to): 574-586
Number of pages: 13
Journal: British Journal of Educational Technology
Volume: 50
Issue number: 2
DOIs
Publication status: Published - March 2019

ASJC Scopus subject areas

  • Education

