Multimodal Literacies MOOC’s Updates

Blurred Lines on LinkedIn

This update is a submission for the Essential Peer Reviewed Update #1 for the Multimodal Literacies course on Coursera. (I hope I've posted in the right place.) The prompt was "Make an update of 300 words or more: Describe an important site of multimodal communication in your life, or your students' lives. How might a multimodal analysis of meaning prove useful? How does this compare with traditional notions of literacy?"


Social media is one of the richest displays of multimodal communication. Using the professional networking site LinkedIn as an example, you can peel back layer upon layer of communication in multiple modalities. Examining those layers for this assignment, I came to two conclusions. First, the current use of different modalities results from the desire of both users and the platform's designers to generate the greatest possible level of activity (in terms of views, likes, shares, and comments). Second, in such an environment, users move fluidly between being consumers and producers of meaning, and sometimes the roles are not clear at all.

The platform fosters effective meaning making through the way it displays shared content:

Choices in the formatting and layout of text and visual images (weight, color, font size) influence readability and engagement.

Previews and thumbnails give users a textual or visual glimpse of a post or article before they commit to reading further.

Social elements (likes, comments) let users see a piece of content's influence on their network before they have even read it.

Videos play automatically as they appear on screen.

A translate function renders foreign-language posts in the reader's own language.

Speaking of videos: as on other social media platforms recently, videos play automatically when you reach them in your activity stream. Perhaps so as not to disturb others nearby, LinkedIn and other services mute the audio by default. A recent change I have noticed is that videos on LinkedIn have begun to include more text, in the form of captions. While captions provide accessibility for deaf and hard-of-hearing users, the added text also transmits meaning to someone scrolling through their activity stream with the audio off.

Screenshot source: Barrett, J. [Jason]. (n.d.). Posts [LinkedIn page]. Retrieved February 23, 2019, from https://www.linkedin.com/in/mr-jason-barrett/detail/recent-activity/

In a case where the action in the video is not compelling enough for someone to stop and click/tap the speaker button to unmute, the words may get the job done. If my suspicion is correct, when deprived of one modality (video losing its audio component), users have adapted by transmitting through an alternative modality (written words). This, then, would be an example of how the platform designers' decisions produced an effect on, and a subsequent response from, the users.

The translate function removes yet another barrier to communication, as meaning is now even less bound by language differences. One moment you feel shut out by a foreign language; with a single tap, the entire meaning of the message becomes available to you (limited only by the accuracy of the translation tool).

Screenshot source: Bigwood, G. [Guy]. (n.d.). Posts [LinkedIn page]. Retrieved February 23, 2019, from https://www.linkedin.com/feed/update/urn:li:activity:6501823187515371520/

On the user end of the ecosystem, users produce various kinds of original content (brief text posts, lengthy articles, videos, photos, and images). Beyond the production of meaning through posts, we can also examine the consumption of meaning, visible through the social interactions:

response comments composed of text and emojis (yet another distinct modality)

acts of “liking” or “resharing” someone else's content (could this be seen as a virtual gesture?): you are a consumer, and yet you send forth meaning as you express your interest

In this sense, any preexisting lines between producers and consumers of meaning are blurred (or resonate back and forth).

Other blurred lines are those between the modalities themselves. Even within videos there can be images, text, and sound: typically voice (e.g., an instructional video with narration) or music (as the backdrop to some compelling or intriguing footage).

Even from this brief analysis of the interactions (platform–user, user–user, modality–modality) on the LinkedIn platform, we can see how multimodality (alongside the social aspects of the technology) has changed the power dynamics, offering numerous new, multi-directional avenues for communication. Thank you in advance for feedback.

References

Barrett, J. [Jason]. (n.d.). Posts [LinkedIn page]. Retrieved February 23, 2019, from https://www.linkedin.com/in/mr-jason-barrett/detail/recent-activity/

Bigwood, G. [Guy]. (n.d.). Posts [LinkedIn page]. Retrieved February 23, 2019, from https://www.linkedin.com/feed/update/urn:li:activity:6501823187515371520/
