e-Learning Ecologies MOOC’s Updates
Multimodal Meaning (Admin Update 5)
Multimodal Meaning—using new media resources. Today’s learners need to be able to use digital media to juxtapose and link text, diagram, table, dataset, video documentation, audio recording and other media. Across all subject areas, meaning making and knowledge representations are supported and enhanced today by digital production skills and technologies.
- Video 3a: What's New About Digital Technologies?
- Video 3b: Multiliteracies and Synesthesia
- Multimodal Meaning in Scholar
All Levels of Participation: Make a comment below this update about the ways in which the multimodal affordances of new media can change the nature of learning. Respond to others' comments with @name.
Additional Introductory and Advanced Participation: Make an update introducing a multimodal meaning concept on the community page (not your personal page - because only peers will see that!). Define the concept and provide at least one example of the concept in practice. Be sure to add links or other references, and images or other media to illustrate your point. If possible, select a concept that nobody has addressed yet so we get a well-balanced view of multimodal meaning. Also, comment on at least three or four updates by other participants. Multimodal meaning concepts might include:
- Multiliteracies
- New Media
- Digital Media
- Multimodal knowledge representations
- Visual learning
- Video learning
- Simulations
- Learning games

Suggest a subconcept in need of definition!
Multimodal learning concepts: Learning games
Learning through play has been referred to by various terms, including ludogogy, cognitively developmental play (Piaget's description) and serious games, which seems to be the preferred term for learning games aimed at adults: http://en.wikipedia.org/wiki/Serious_game.
My own interpretation of learning games is that they can provide authentic learning in a safe environment; this allows learners to learn from their mistakes without the fear that would accompany a 'real-world' scenario.
Multimodal learning for conscious learners
I'd like to continue the discussion on the importance of adapting our teaching-learning environments to students who can benefit from their creativity, their autonomous learning skills and their collaboration with others. I would also add the affordance of u-learning, which implies that they may be working in class, online or anywhere, and at any time (not just within the timetable).
This implies that students will need to create learning products (ideally through a variety of tasks that address different intelligences), moving from autonomous work to peer-to-peer and collaborative learning of the content studied. They need to research, plan and develop the product (project) in order to present it to their peers and the teacher.
Finally, the learning cycle requires that these learners receive and give feedback on what they have done and on what they could do to improve their work. It is essential for them to keep in mind the goals of the project, or a rubric that guides their aims.
I have used this multimodal framework in EFL and ESP in Higher Education and it has worked quite well. Students who were initially reluctant to enrol in yet another compulsory English subject have become active participants, committed to achieving the goals of the course. They have even enjoyed taking part in the creation of tasks such as videos.
Having said that, it takes time to prepare the learning scenario well: course aims, evaluation criteria (summative assessment included), in-class and digital tasks to be done by learners (together with tools), and rubrics that facilitate feedback. But the results have been mostly positive and rewarding, for them and for me.
Thanks and have a good day, everyone!
Programming is an insightful example of multimodal learning, as the learning process is, in my experience, largely influenced by the programming language and environment. A couple of years ago I started to learn the R programming language, which illustrates this point well.
I started to use R as a replacement for Excel (when files were too big for my computer), then more comprehensively. This was made possible by the fact that R is designed to accomplish simple tasks easily, bringing immediate utility which, in turn, creates opportunities for deeper learning. In addition, as an interpreted language, R provides immediate feedback on any line of code typed, so learning it is highly interactive. In a word, the R language is approachable.
Soon, I started to use R to learn and apply statistics and, more generally, for data analysis. This was made easier because, as a functional language, R has a syntax close to mathematical expressions; combined with immediate feedback, this makes writing mathematics (including statistics) an interactive process. In addition, plotting data in various ways is relatively straightforward, which makes the exploration of data and mathematical expressions an iterative process, going back and forth between code and visualization. Thus, R has the ability to transform learning and using mathematics into a multimodal experience.
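As a small illustration of the workflow described above (my own sketch, not part of the original post), an R session can move from fitting a model to visualizing it in a few interactive lines, using only the mtcars dataset that ships with base R:

```r
# Fit a simple linear model: fuel efficiency (mpg) as a function of car weight (wt).
# mtcars is built into base R, so this runs in any R session with no extra packages.
fit <- lm(mpg ~ wt, data = mtcars)

summary(fit)                  # immediate feedback: coefficients, R-squared, p-values

plot(mpg ~ wt, data = mtcars) # explore the data visually...
abline(fit)                   # ...then overlay the fitted line and iterate
```

Note the formula syntax `mpg ~ wt`, which reads almost like the mathematical statement "mpg as a function of wt"; this is the closeness to mathematical notation mentioned above.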
Later on, I started to explore different, and often more elegant, ways to solve problems using R. This was facilitated by the fact that R functionality can be extended with packages, and there often exist several packages, each with its own logic and syntax, to accomplish a given task. In addition, R benefits from an active and enthusiastic community, which provides help and advice. Exchanges are facilitated by markup languages allowing the intermingling of text, code and interactive displays, as well as web services to share documents produced in this way. Altogether, this creates a learning ecology where learners are exposed to diverse ways of thinking. As the same tools are available to learners and researchers, this also tends to abolish the boundary between learning and producing knowledge.
An important aspect is that R has evolved to be a multimodal learning environment as much as a programming language, creating a shift in the way programming and data analysis (in a broad sense) can be learned. A detailed comparison of learning practice with other programming languages would certainly be informative. In turn, this may serve as an inspiration to shift learning practice in other fields by developing new multimodal learning environments.
Once our way of representing meaning changes, thanks to the many possibilities now easily available, our ways of thinking and understanding change as a consequence. For example, if one is studying English and reads a book explaining the topic being studied, one meaning will be conveyed. If, on the other hand, another student had access to the same content in the book but also to a video about the topic, then the whole process of meaning making changes... And this has consequences for the learning process. After all, we learn differently, and those “aha” moments take place differently, depending on the individual.
Besides, critical literacy is fundamental if any change is desired in the way we see and understand learning. What I mean is: in order to develop a broader understanding of Multiliteracies, developing critical literacy is imperative, so that we can choose which of the various modes available will best help us understand what we are trying to learn.
In addition, if we want synesthesia to become something that can enter our educational system, then teacher education ought to take the discussion on multimodal affordances seriously into consideration. After all, when it comes to our teaching practices, written language has been privileged over the range of modes mentioned by Dr. Cope when referring to the Multiliteracies theory: oral language, image, sound, gesture and tactile communication. And I think this connects to what @Nikolaos Moropoulos said concerning shifts in our everyday life, in terms of time and, I would add, multitasking. It is true that those multimodal forms of representation are here now as part of our lives, dramatically changing the way we are and, therefore, the way we learn.
I think the shift from literacy to multimodality is not only redefining societies and cultures; it is also changing modes of corporeality/embodiment among people and communities. In other words, corporeality is neither standard nor linear. Like other ecologies, it is naturally multimodal and non-linear. Multimodality of meaning is allowing people to experience life beyond conventional corporeality (bodies); they are experiencing life as an inter-corporeal process. The new modes of corporeality/embodiment are changing how people might experience body organs and senses (such as eye, heart, ear, hand and foot). Synesthesia is a new normal. By doing and undoing new and old corporealities, multimodality of meaning is enabling people to see more, hear more and achieve more meaningful things for themselves and others. It allows people to live in extended modes of existence such as sign, language and artifacts.
Over the past 15 years or so there has been a big switch towards using multiple modes of communication at work, certainly in the commercial world.
Plans, reports and various other types of documents which were once lengthy formatted documents are now commonly produced in PowerPoint format. Bullet points have replaced full sentences; slide headings have replaced paragraph subtitles; website links have replaced book citations; there are relatively few words; and we have images, sound and video. They are delivered by face-to-face or online presentation using the slides, sometimes recorded for later access, or the slide deck is simply distributed to those who could not attend. And things continue to move forward with technology. Take Prezi (www.prezi.com), for example.
People need to be able to use these tools effectively, whether they learn them at “school” or “on the job”.
One question that intrigues me is what role do educators play? Making use of students’ multiliteracies in learning is not the same as teaching students how to be multiliterate and use all the tools available to them. A geography specialist helping students learn to make a video? I suppose students will mostly learn by doing but I suspect that this (multi)literacy aspect to learning will need to be accommodated (in terms of time and effort).
I think it’s an interesting question to consider, what level of knowledge and skill of the ever-advancing new technologies do you assume? Will some courses require a baseline level of skill that students need to have reached to be able to participate effectively?
I have always been fascinated and intrigued by the neurological condition of synaesthesia, so I was surprised to see it appear in the context of multimodal learning. Generally, I am a bit dubious when a term with a specific meaning in one field is co-opted to perform duties in another field, because it can lead to confusion and misunderstanding. Consequently, I was curious to see whether this application of synaesthesia to literacy discourse is warranted.
Synaesthesia, in the neurological context, refers to the condition in which individuals have their senses mixed up, cross-wired or blended in some way. For example, some people with synaesthesia will actually see colours when they hear sounds or particular words. The BBC documentary, Derek Tastes Like Earwax, includes the case of a man who literally experiences specific tastes for particular words (luckily his name isn't Derek).
Now, in the multimodal sense, synaesthesia is used to describe "the process of shifting between modes and re-representing the same thing from one mode to another" (Cope & Kalantzis, 'Multiliteracies': New Literacies, New Learning, p. 13). In this article, our professors also go into more detail than the Coursera video about the "representational parallels [which] make synaesthesia possible" (p. 13), referring to the way different modes, say, oral, written and visual, share similar conventions that allow the representation of a thing in each mode to be understood in a similar way. Furthermore, knowledge of how one mode operates to generate meaning gives cues for generating meaning from another mode. For example, the narrative structure of a story (its sequence) can be understood and interpreted for meaning whether it is told (oral mode), read (written mode), or presented visually as a comic strip (visual mode). It seems intuitively correct that accessing the same concept, represented via multiple modes, leads to better understanding (to demonstrate, just take a look at this TED-Ed video that seeks to help us conceptualise the size of atoms using a blend of animations, written text, analogies, visual symbolism etc., all built around a narrative-like structure).
Along with parallelism, Cope and Kalantzis also draw attention to the other, seemingly paradoxical, aspect of multimodalities: that "their representational potentials ... are unique unto themselves" (p. 13). That is, while each mode may be able to represent the same thing, "the meaning is never quite the same" (p. 13) because of the inherent features of, and responses we have to, each mode.
To me, this uniqueness of modes bears some connection to our bodily senses that get mixed up by neurological synaesthesia. Even so, the communication modes are still distinct from each other as we shift between them. In contrast, neurological synaesthesia involves one sense behaving like, or being layered on top of another.
As for the parallelism described in multimodal synaesthesia, this doesn't seem to have an equivalent in neurological synaesthesia. That is, our sensations of taste, smell, hearing etc. appear to be interpreted according to completely different processes, even though, as Cope and Kalantzis point out, they are "holistically integrated" (p. 13). There seems to be little connection between the word Derek and the taste of earwax that the man in the documentary is basing his perception upon.
Also, neurological synaesthesia appears to be a form of neurological dysfunction of a biological process, while multimodal synaesthesia appears to be a learned or conditioned cultural process that is a sign of well-developed literacy across a number of modes.
So, is the term synaesthesia, as applied to multimodalities, helpful? My initial impression is that it is not, especially when a terminological option already exists that serves a similar purpose. This is the prefix inter-, which is used to mean shifting between items of the same category. In the world of literacy it is often seen in the form of intertextuality, which refers to the ways texts we have already been exposed to influence our response to, and the meaning we make from, new texts (whether as producer or consumer of the new text). So intertextuality is similar to intermodality (synaesthesia) in that knowledge of one text/mode impacts the response we have to another. The inter- prefix also allows multiple ways to express different aspects of intertextuality (noun): intertextual (adjective), intertextually (adverb), intertextualize (verb). So while you can have synaesthetic (adjective), you can't do synaesthesia (noun) in the same way you read/view intertextually and even, dare I say, intermodally.
Personally, I think (neurological) synaesthesia is such a fascinating condition that those with the condition should have the term all to themselves.
In my view the key change that ICT brought to learning is in the way we experience time as we shift through various modalities.
The modalities existed before, in one way or another. What ICT technologies have dramatically changed is how quickly we can switch from one modality to the next, how frequently, and so on.
This change has brought about a new kind of creativity and/or imagination, as well as a new way of being. It is not only learning that has changed, but also the way we are, the way we lead our lives.
Simple everyday things are different. In the old days, when I took a photo I was proud of, I had to wait for it to be printed, and only then could I share it. Today I can email a photo a millisecond after taking it.
It is therefore also the new way of living with multiple modalities that affects learning with multiple modalities.
Living and learning become inseparable.