Abstract
Mehan (2022) defines AI as “intelligence exhibited by machines, where a machine can learn from information (data) and then use that learned knowledge to do something” (p. 10). We interact with many forms of AI in our daily digital and social lives, such as Google Search, email spam filters, Apple’s Siri, Amazon’s Alexa, facial recognition, fingerprint authentication, and AI in health apps. However, AI systems designed to perform at or above human capability have raised serious human rights violations and ethical concerns because of algorithmic bias and unintended consequences. This paper takes up (a) the use of artificial intelligence in classrooms in China and (b) the use of AI-powered weapons by the Israeli Army as two distinct case studies for a critical, ethical examination of AI. It examines how children have been used as guinea pigs in classroom AI experiments; what happens when AI-powered guns, designed to keep people under constant surveillance, recognize those carrying weapons, and shoot them immediately, fail to distinguish between armed and unarmed persons; and whether it is ethical to keep people in constant fear of such systems. This paper employs the digital method to collect data and the argumentative method (the method of an argumentative essay) to analyze them.
Presenters
Dilli Bikram Edingo, Ph.D. Candidate, Communication and Culture, York University, Ontario, Canada
Details
Presentation Type
Paper Presentation in a Themed Session
Keywords
Artificial Intelligence, AI-powered Weapons, Algorithm Bias, Ethical Perspectives, AI Experiments