Exploring the effectiveness of media on learning, psychologist Richard Mayer offers an interesting and well-examined viewpoint.
Mayer’s View on How People Learn
Cognitive psychologist Richard Mayer has done considerable work exploring the link between multimedia exposure and learning (Medina, 2008, p. 208). Mayer (2009, p. 57) states that “multimedia messages that are designed in light of how the human mind works are more likely to lead to meaningful learning than those that are not.” The challenge is that “designing multimedia messages is always informed by the designer’s conception of how the human mind works” (Mayer, 2009, p. 60). Mayer asserts that, for the most part, the “designer’s underlying conception is that human learners possess a single-channel, unlimited-capacity, passive-processing system.” The problem is that this conflicts with what is actually known about how people learn (Mayer, 2009, p. 61).
Mayer (2009, p. 61) has developed a cognitive model of multimedia learning intended to represent the human information processing system. See Figure 1.0.
Figure 1.0 Cognitive theory of multimedia learning (Mayer, 2009, p. 61)
Figure 1.0 depicts the two channels humans possess for processing visual and auditory information. Information is processed in working memory and then transferred to long-term memory. Mayer’s cognitive theory of multimedia learning rests on three assumptions: dual-channel processing, the limited capacity of working memory, and active processing (Mayer, 2009, p. 82).
The limited capacity of working memory is significant because poorly designed instructional sessions can actually hinder learning. With this in mind, Mayer (2009, p. 57) stresses the importance of reducing extraneous processing, managing essential processing, and fostering generative processing. Extraneous processing is cognitive processing that does not serve the instructional goal, and it can lead to excessive cognitive load. Cognitive load refers to the strain placed on working memory, also known as short-term memory (STM), by the processing requirements of a learning task (Driscoll, 2005, p. 136). Essential processing is the cognitive processing necessary to represent the media in working memory. Generative processing is deep cognitive processing, including managing and integrating the media (Mayer, 2009, p. 57).
Humans engage in active learning by attending to relevant incoming information (essential processing), organizing the selected information into coherent mental representations in working memory, and integrating those representations with knowledge already stored in long-term memory (generative processing) (Mayer, 2009, p. 63). This is accomplished through the affordances that the media provide.
The Concept of Affordances
Some refer to media characteristics as attributes, while others use the term affordances. In the traditional sense, an affordance is a property of an object that intuitively suggests how the object can be used. An example is the handle of a hammer.
The primary affordance of media is the ability to convey information; beyond that, each type of media may have one or more additional affordances. In the media sense, according to Clark (1994, p. 23), “television conveys realistic, real time, documentary information.”
Another example would be the affordance of discovering the layout of a plant area through an interactive visual. See image 2.0.
Image 2.0 Static view of an interactive overview
Mayer (2009, p. 22) asserts that well-designed multimedia instructional messages can promote active cognitive processing in learners even when the learners appear behaviorally inactive. This counters the common designer belief that hands-on, experiential learning guarantees meaningful learning.
Mayer (2009, p. 22) identifies two types of active learning:
- Cognitive, and
- Behavioral.
Mayer (2009, p. 22) explains “behavioral activity per se does not guarantee cognitively active learning; it is possible to engage in hands-on activities that do not promote active cognitive processing – such as in the case of people playing some highly interactive computer games.”
To this end, Mayer (2009, p. 52) has identified twelve features of multimedia instructional methods that aid design. These twelve features are:
- Coherence – Do people learn better when extraneous material is excluded (concise method) rather than included (elaborated method)?
- Signalling – Do people learn better when essential material is highlighted (signalled method) rather than not highlighted (non-signalled method)?
- Redundancy – Do people learn better from animation and narration (nonredundant method) rather than from animation, narration, and on-screen text (redundant method)?
- Spatial Contiguity – Do people learn better when corresponding graphics and printed text are placed near each other (integrated method) rather than far from each other (separated method)?
- Temporal Contiguity – Do people learn better when graphics and spoken text are presented at the same time (simultaneous method) rather than in succession (successive method)?
- Segmenting – Do people learn better when multimedia instruction is presented in learner-paced segments (segmented method) rather than as a continuous presentation (continuous method)?
- Pre-training – Do people learn better when they receive pre-training in the names and characteristics of key components (pre-training method) rather than without pre-training (no-pre-training method)?
- Modality – Do people learn better from graphics and narration (narration method) than from graphics and printed text (text method)?
- Multimedia – Do people learn better from words and pictures (multimedia method) than from words alone (single-medium method)?
- Personalization – Do people learn better from a multimedia lesson when the words are in conversational style (personalized method) rather than in formal style (nonpersonalized method)?
- Voice – Do people learn better when the words in a multimedia lesson are spoken by a human voice (human-voice method) rather than a machine voice (machine-voice method)?
- Image – Do people learn better from a multimedia lesson when the speaker’s image is on the screen (image method) rather than not on the screen (no-image method)?
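The twelve features above can be read as a design-review checklist. As a minimal sketch of that idea, the snippet below encodes the principle names from the list in a Python dictionary and flags which ones a draft lesson has not yet addressed. The lesson fields and the `review` function are illustrative assumptions, not tooling from Mayer's work.

```python
# Hypothetical sketch: Mayer's twelve features as a design-review checklist.
# The principle names come from the list above; the lesson dict and the
# review() helper are invented for illustration.

PRINCIPLES = {
    "coherence": "extraneous material excluded",
    "signalling": "essential material highlighted",
    "redundancy": "narration not duplicated as on-screen text",
    "spatial_contiguity": "related graphics and text placed near each other",
    "temporal_contiguity": "graphics and narration presented simultaneously",
    "segmenting": "lesson split into learner-paced segments",
    "pre_training": "key components introduced before the lesson",
    "modality": "narration preferred over printed text with graphics",
    "multimedia": "words combined with pictures",
    "personalization": "conversational rather than formal style",
    "voice": "human rather than machine voice",
    "image": "speaker's image shown on screen",
}

def review(lesson: dict) -> list:
    """Return the names of principles the lesson design does not yet satisfy."""
    return [name for name in PRINCIPLES if not lesson.get(name, False)]

# Example: a draft lesson that so far addresses only three principles.
draft = {"multimedia": True, "modality": True, "segmenting": True}
missing = review(draft)
print(len(missing))  # → 9
```

A checklist like this is only a reminder of the questions, not a measure of learning; each principle still has to be judged against the actual instructional content.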
Although Mayer is a proponent of leveraging media to enhance instruction, he recognizes that the cause of learning is the instructional method. This is evident in his acknowledgment (Mayer, 2009, p. 53) that Clark has eloquently “argued instructional methods cause learning, but instructional media do not cause learning.” Mayer substantiates this by stating (2009, p. 53) that, similarly, he has shown “that the same instructional methods have the same effects on learning regardless of whether the medium is a desktop computer, non-immersive virtual reality, or immersive virtual reality.”
A Case for Multisensory Instructional Methods
Medina (2008, p. 210) states that Mayer’s twelve principles “are relevant only to combinations of two senses: hearing and vision.” Medina (2008, p. 210) emphasizes that humans have three other senses capable of contributing to learning: touch, smell, and taste.
Neuroscientists have made considerable advances in understanding how the human brain interprets media, and how media are integrally related and work together to convey information.
For example, one of the affordances of sound is that it can enhance a visual experience. Medina (2008, p. 207) states, “multiple senses affect our ability to detect stimuli. Most people have a very difficult time seeing a flickering light if the intensity of the light is gradually decreased.” According to Medina (2008, p. 207), when researchers tested this threshold by precisely coordinating a short burst of sound with the light flickering off, the presence of the sound actually changed the threshold: subjects were able to see the light well beyond their normal threshold when the sound was part of the experience. Medina (2008, p. 207) goes on to say that “these data show off the brain’s powerful integrative instincts.”
Smell and Emotional Memory
Medina (2008, p. 211) asserts what scientists have apparently known for years: that smell can evoke memory, a phenomenon known as the Proust effect. The following story, which Medina (2008, p. 217) tells in his book Brain Rules, emphasizes the importance of multisensory learning.
I occasionally teach a molecular biology class for engineers, and one time I decided to do my own little Proust experiment. (There was nothing rigorous about this little parlor investigation; it was simply an informal inquiry.) Every time I taught one section on an enzyme (called RNA polymerase II), I prepped the room by squirting the perfume Brut on one wall. In an identical class in another building, I taught the same material, but I did not squirt Brut when describing the enzyme. Then I tested everybody, squirting the perfume into both classrooms. Every time I did this experiment, I got the same result. The people who were exposed to the perfume during learning did better on subject matter pertaining to the enzyme – sometimes dramatically better – than those who were not.
Medina (2008, p. 208, p. 219) asserts that the brain’s ability to learn is optimized as the environment becomes more multisensory, and is less optimized for unisensory media. He recommends stimulating more of the senses at the same time. Here’s why (Medina, 2008, p. 219):
- Humans absorb information about events through our senses, translate it into electrical signals (some for sight, others for sound, etc.), disperse those signals to separate parts of the brain, then reconstruct what happened, eventually perceiving the event as a whole.
- The brain seems to rely partly on past experience in deciding how to combine these signals, so two people can perceive the same event very differently.
- Our senses evolved to work together – hearing influencing vision, for example – which means that we learn best if we stimulate several senses at once.
- Smell has an unusual power to bring back memories, maybe because smell signals bypass the thalamus and head straight to their destinations, which include that supervisor of emotions known as the amygdala.
I support the view that media have unique teaching and learning capabilities, or affordances, and that instructional methods that use media and a multisensory approach yield the best results. I have supported this position by referencing two well-known authorities on how people learn: Richard E. Mayer and John Medina. Mayer is a cognitive psychologist, and Medina is a molecular biologist who has spent the past twenty-plus years studying how the brain works.