
Essential Update #3: Multimodal Learning Design. Past and Future UX.

In this module, Profs. Cope and Kalantzis speak to the ability to create, communicate, receive, and interpret meaning in a variety of modes: not only written text but also video, images, graphs, and sound. Modern technology allows learners to participate in all of these processes in a variety of ways, either simultaneously or across space and time. Synesthesia, medically defined as a neurological condition, if not a disorder, becomes here a more holistic way of processing data, while the term literacy expands from the comprehension of a single graphic system to literacies, the ability to grasp and integrate data in different formats.

I'd like to address this topic from two points of view: one historic, the other futuristic. As regards the first, I'll speak to medieval Spain, my erstwhile research specialty. First, we have the 11th-century Andalusian jarchas, fragments of oral Romance love songs transcribed as the conclusion of classical Arabic or Hebrew poems [1], in such a way that the last verse of these poems does not make sense unless the sonic value of each graphic character is vocalized. Second, to pick up on Prof. Cope's references to early printed texts, we find inline woodcuts labeled with content summaries and character names. (Admittedly, printing houses often re-used their woodcuts, so the labels served to distinguish among images as much as to aid reader comprehension):

Comedia de Calixto y Melibea, o Celestina (Burgos, 1499?)

I include these references not to diminish the radical changes that modern technology affords in learning modalities, but simply to underscore that people have received content multimodally for centuries.

As regards the future, I think that educators and educational designers who seek to incorporate multimodality in learning materials must look to experts in UI (user interface) and UX (user experience) design. One difficulty that we teachers have had in trying to integrate new technology into our classes is the fact that modern technology -- as Prof. Cope mentions in the introductory video to this series -- is not designed to meet the needs of teachers and students. We are, as he says, jerry-rigging business software to do entirely different tasks.

Yet educational software and apps are not much better. Learning Management Systems (LMSs) like Canvas and D2L are hoarding spaces for every conceivable app, tool, or feature, and they are so crowded and confusing that many professors avoid using them altogether. Here's a screenshot of the standard, non-customized LMS dashboard for one of my own courses:

A standard LMS course instructor dashboard

In response to so-called teacher aids like this one, I produced a learning module in Articulate Rise entitled _Kondo Your Canvas_, using organizational guru Marie Kondo's methods to help instructors streamline their LMS. (Rise, by the way, is an outstanding educational software tool. But my free trial has expired and I can't afford it!)

Our textbooks, too, are a pastiche of in-person activities, text readings, video clips, and stock images. I joke that when I taught my first language class, I used four textbooks: one for grammar, one for culture, one for literature, and one for homework. Now, two decades later, I only use four: a digital textbook, a digital workbook, a digital reproduction of the old textbook, and my LMS. And you've seen my LMS!

Teachers will not fully embrace modern educational technology until it approaches teachers much the way successful apps approach any other kind of user. Think of how complex the many functions of the Google Suite are, and yet how simple, even rudimentary, the first Google interface a user sees is: a single search bar on a white field. No one lectures the user that they need more skills training, or must learn to adapt, or have to acquire this or that certificate before they can require Google to carry out a series of complex algorithms across billions of web pages based on their specific input.

I was intrigued by the promise of learning design and technology in this week's Affordance, and so I investigated the professors' New Learning Online website, especially its sections on Multiliteracies and Learning by Design. I was disheartened, then, to find that the theoretical basis for new learning design is rooted in semiotics, or the meaning of signs, rather than in design itself.

Once we start to familiarize ourselves with best practices in multimodal design, multimodal learning will become much easier. We won't have to constantly resize header text in slide decks, for one thing. But we will also learn that users tend to reject icons as incomprehensible when they are not accompanied by a text label [3]; that users set out to accomplish tasks thinking in nouns rather than verbs [4]; that users overwhelmingly prefer to intuit how to perform a given task, then consult simple text instructions, and only then view an instructional video [5]; and that the Golden Rule of UX is Steve Krug's _Don't Make Me Think_.

Synesthesia may be the most natural way for human beings to incorporate knowledge, but without a solid understanding of regular old design, new learning design is unlikely to take hold.