Posted by rferguson on November 10, 2010 in Elearning
An interesting discussion titled “Do we really need narration?” over at Cathy Moore’s blog (http://blog.cathy-moore.com/2010/09/do-we-really-need-narration/) has prompted me to write about the muting and un-muting of audio. One reliable way to tell professional developers from amateurs is to observe how a developer has programmed a screen to present its content when the audio is muted. So let’s begin with a question: what should happen when the narration is muted?
In the following two screenshots, we have seven bullets synchronized to the visual presentation. In Screenshot A, the audio is not muted: each bullet is highlighted in synchronization with the narration and the presentation of its corresponding image. When the presentation has finished, learners are given onscreen navigation that lets them move back and forth between the images, and because each bullet highlights along with the image it correlates to, they can easily see how the images and bullets correspond. In Screenshot B, those same navigational controls become available the instant the audio is muted. Typically, in courses where this functionality is not well developed, no onscreen navigation appears when the audio is muted, and the playback time remains exactly the same as when the narration is playing. This frustrates learners, especially when they have to wait for each piece of text to appear.
So what is a good “rule of thumb” to follow when you’re programming each screen of a self-paced course that has accompanying narration? Ensure that when the audio is muted, controls are in place that allow learners to move through the presentation at their preferred pace. In practice, this means programming each screen so that it functions differently when the audio is muted than when it is not.
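The rule of thumb above boils down to one small piece of state logic, which can be sketched as follows. This is a minimal, hypothetical sketch (the names are my own, not from any particular authoring tool): navigation controls appear the moment the audio is muted, or once the narration has finished playing.

```typescript
// Hypothetical sketch: when should a screen expose its navigation controls?
// Rule: the moment the audio is muted, hand control to the learner;
// otherwise, wait until the timed narration has run to its end.
interface ScreenState {
  audioMuted: boolean;        // learner has muted the narration
  narrationFinished: boolean; // the synchronized presentation has ended
}

function navControlsVisible(state: ScreenState): boolean {
  return state.audioMuted || state.narrationFinished;
}

// While unmuted and mid-narration, no navigation yet:
console.log(navControlsVisible({ audioMuted: false, narrationFinished: false })); // false
// Muting immediately reveals the controls:
console.log(navControlsVisible({ audioMuted: true, narrationFinished: false })); // true
```

Calling this check whenever the mute button is toggled (rather than only when the narration ends) is what separates the behavior of Screenshot B from the frustrating courses described above.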
However, there will occasionally be a few screens which, by design, require no change in functionality when the audio is muted. An example is shown below in Screenshots C & D. In Screenshot C, the audio is not muted and the bulleted list is presented in synchronization with cue points in a Flash video. When the audio is muted, as shown in Screenshot D, no change to the onscreen functionality is required: the navigational controls already allow learners to “scrub” through the video, and as they do, each bullet highlights according to the position of the scrubber.
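The cue-point behavior in Screenshots C and D can be sketched the same way. Assuming each bullet has a cue time (in seconds) at which it becomes current, the bullet to highlight is simply the last cue point at or before the scrubber position (the function and parameter names here are illustrative, not from any real video-player API):

```typescript
// Hypothetical sketch: map the scrubber position to the bullet to highlight.
// cueTimes must be sorted ascending; each entry is the time (in seconds)
// at which the corresponding bullet becomes the current one.
function highlightedBullet(cueTimes: number[], scrubberTime: number): number {
  let current = -1; // -1 means no bullet is highlighted yet
  for (let i = 0; i < cueTimes.length; i++) {
    if (cueTimes[i] <= scrubberTime) {
      current = i; // this bullet's cue has been reached
    } else {
      break; // later cues haven't been reached; stop scanning
    }
  }
  return current;
}

// With cue points at 0s, 12s, and 30s, a scrubber position of 15s
// highlights the second bullet (index 1):
console.log(highlightedBullet([0, 12, 30], 15)); // 1
```

Because this mapping depends only on the scrubber position, it behaves identically whether the audio is playing or muted, which is exactly why these screens need no special-case functionality.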
By building this type of functionality into the presentation of your content, you put learners in the driver’s seat, giving them control of the playback when the audio is muted. You also distinguish yourself as a professional developer and set yourself apart from the amateurs in the industry.