
Module 8, Lesson 15: Future Concerns -- Virtual Reality, Robot Ethics, and the Infosphere

There are a number of emerging technologies that we need to spend some time thinking about as we near the end of this course.  The technologies you use are often designed years before they actually reach the public, which means that decisions are being made right now in research and design labs around the world that you know nothing about, yet the ethical choices made by the designers and builders of these emerging technologies will deeply affect your life in the near future.

Let's begin by reading section 2.4 and all of section 3 here: http://plato.stanford.edu/entries/it-moral-values/.  We can see from this reading that whatever specific technology we are talking about, its rate of change is likely to be accelerating.  We are used to innovating on technology, but information technology has allowed us to innovate on the processes of innovation themselves.  The consequence is that we do not have as much time to adjust to new technologies as we might have in the past.  Where it took about seventy-five years for landline telephones to reach ninety percent market saturation in the US, it took cell phones about fifteen years to do the same thing.  Additionally, mobile smartphones are going to bring internet technologies to people who have been left on the wrong side of the digital divide.

We now have world-changing technologies coming at us faster, which gives us less time to think through their effects on our society before the next one comes along.  We will now look at three broad categories of information technologies that seem poised to bring real change to our lives in the foreseeable future.  Each one of these could have an entire course devoted to it, so a quick overview will have to suffice for now, but we all need to watch the development of these technologies over the next few years and contribute, when we can, to their ethical development.

Virtual Reality

Virtual reality is all hype, until it is not.  This technology has been a staple of science fiction books, movies, and games since the middle of the last century.  Unfortunately, it has had a slow birth as a real technology.  There was a brief period in the 1990s when the technology seemed ready for implementation, but it died out as a fad.  Today we are seeing another resurgence of interest, with big-money investment from companies like Facebook in the Oculus Rift headset.  The idea is that this technology will have initial payoffs in gaming but will expand well beyond that as a new interface for social networking and business applications.

Included in this technology would be things like blended reality and augmented reality, where one might use a technology like Google Glass to see virtual images superimposed on one's visual field.  You could use data and information superimposed over what you are looking at to help make decisions.  Perhaps you are looking at a store shelf and comparative prices flash before your eyes next to the item you are considering, helping you make a more informed buying choice.  You could be looking at a historical ruin and also see the building as it would have looked when new.  An acquaintance stops you to say hi but you have forgotten their name; no worries, your augmented reality system has already brought up their Facebook page, and you are able to use their correct name and strike up a conversation about the new pet you just saw on their newsfeed, as if they were an old friend.

There are some easy-to-see ethical issues involved with this technology, such as the nauseating effect it has on some people, which might limit their ability to partake in new virtual wonders.  These technologies play with our sense of reality, and while they open up the virtual world for us, at this point that seems to come at the cost of our awareness of the space around us.  We do not want to increase accidents in the real world while pursuing fun in the virtual world.  Augmented reality might also be used to unfairly alter our buying choices if we are not careful with this technology.

A harder-to-see concern is what some philosophers call "hyperreality" (please read this article). Have you ever been to a live event and wished that you were instead watching it at home because it would be so much easier to follow the action?  Our ability to create televised images is so good now that it is in many ways better than the real thing.  Hyperreality is the next step in this direction, and it will occur when we either can't tell, or don't care about, the difference between what is real and what is fiction, what is artificial and what is not.  We might greatly prefer to live in our virtual worlds and spend less and less time in reality.  Some studies have shown that children spend less time outdoors than earlier generations did, preferring to stay indoors consuming media.  If that is the kind of effect that old-style media can have, just imagine the results with virtual, blended, and augmented reality.

Robot Ethics

Robotics is another field that seems ripe for innovation and consumer adoption.  Robots have, of course, been a staple of science fiction since at least the 1920s, if not before.  The idea may be old, but the enabling technologies have been lacking.  It now seems, though, that the miniaturized electronics and computers in our smartphones, along with mobile networks and cloud computing assets, are combining to make a number of robot applications feasible. Let's take a quick look at the ethical impacts of these new robotics technologies.  This subfield of computer ethics has picked up the name "roboethics," and at this time it focuses on the following areas:

The following are excerpts from: John P. Sullins (guest ed.), "Open Questions in Roboethics," Philosophy and Technology (Luciano Floridi, ed.), vol. 24, no. 3, September 2011, pp. 233-238.

Military applications

This is by far the most important of the subfields of roboethics.  It would have been preferable to work through all the problems of programming a robot to think and act ethically before we had robots making life-and-death decisions, but it looks like that is not to be.  While teleoperated weapons systems have been used experimentally since the Second World War, there are now thousands of robotic weapons systems deployed all over the world by every advanced military organization, and in an ad hoc way by rebel forces in the Middle East.  Some of the primary ethical issues to be addressed here revolve around the application of just war theory.  Can these weapons be used ethically by programming the rules of warfare, the law of war, and just war theory into the machine itself?  Perhaps machines so programmed would make the battlefield a much more ethically constrained space? How should they be built and programmed to help war fighters make sound and ethical decisions on the battlefield? Do they lower the bar to entering a conflict too far?  Will politicians see them as easy ways to wage covert wars on a nearly continuous basis?  In an effort to keep soldiers away from harm, will we in fact bring the war to our own front door as soldiers telecommute to the battlefield?  What happens as these systems become more autonomous? Is it reasonable to claim that humans will always be "in" or "on the loop" as a robot decides to use lethal force?

Privacy

Robots need data to operate.  In the course of collecting that data they will gather some that people may not want shared but that the machine nonetheless needs in order to operate.  There will be many tricky conundrums to solve as more and more home robotics applications evolve.  For instance, if we imagine a general-purpose household robot of the reasonably near future, how much data about the family's day-to-day life should it store? Who owns that data? Might that data be used in divorce or custody settlements? Will the robot be another entry point for directed marketing into the home?

Robotic ethical awareness

How does a machine determine whether it is in an ethically charged situation?  And assuming it can deal with that problem, which ethical system should it use to help make its decision?  We are sorely lacking the specifics needed to make answers to these questions anything more than theoretical.  Programmers and engineers are wonderfully opportunistic and do not tend to have emotional commitments to this or that school of thought in ethics.  What we see occurring today, therefore, is that they tend to make a pastiche of the ethical theories on offer in philosophy, picking and choosing the aspects of each theory that seem to work and deliver real results.

Affective robotics

Personal robots need to be able to act in a friendly and inviting way.  This field is often called social robotics, or sociable robotics, and was largely the brainchild of Cynthia Breazeal of the MIT Media Lab.  The interesting ethical question here is: if your robot acts like your friend, is it really your friend? Perhaps that distinction doesn't even matter?  With sociable robotics, the machine looks for subtle cues gathered from facial expression, body language, perhaps heat signatures or other biometrics, and uses this data to ascertain the user's emotional state.  The machine then alters its behavior to suit the emotional situation and, hopefully, make the user feel more comfortable with it.  If we come to accept this simulacrum of friendship, will it degrade our ability to form friendships with other humans?  We might begin to prefer the company of machines.
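To make that sense-infer-adapt loop a little more concrete, here is a minimal, purely illustrative sketch in Python.  Nothing in it corresponds to any real sociable-robot software: the sensor readings, emotion labels, thresholds, and behavior descriptions are all invented for the example.  It only shows the shape of the loop the paragraph describes, namely reading biometric cues, guessing at the user's emotional state, and adjusting the robot's behavior in response.

```python
# Illustrative sketch only: a hypothetical sociable-robot loop that reads
# biometric cues, estimates the user's emotional state, and adapts behavior.
# All sensors, labels, and thresholds below are invented for this example.

from dataclasses import dataclass


@dataclass
class BiometricReading:
    smile_intensity: float   # 0.0 (none) to 1.0 (broad smile), from face tracking
    voice_pitch_var: float   # variance in vocal pitch, a rough arousal cue
    posture_openness: float  # 0.0 (turned away) to 1.0 (open, leaning in)


def infer_emotion(reading: BiometricReading) -> str:
    """Very crude stand-in for an emotion classifier."""
    if reading.smile_intensity > 0.6 and reading.posture_openness > 0.5:
        return "happy"
    if reading.voice_pitch_var > 0.7:
        return "agitated"
    return "neutral"


def choose_behavior(emotion: str) -> str:
    """Map the inferred state to a friendlier robot response."""
    return {
        "happy": "mirror enthusiasm: brighter voice, more animated gestures",
        "agitated": "de-escalate: slower speech, calm tone, give the user space",
        "neutral": "stay engaged: occasional eye contact and small talk",
    }[emotion]


if __name__ == "__main__":
    sample = BiometricReading(smile_intensity=0.8, voice_pitch_var=0.3, posture_openness=0.7)
    emotion = infer_emotion(sample)
    print(f"Inferred emotion: {emotion}")
    print(f"Behavior adjustment: {choose_behavior(emotion)}")
```

Even this toy version makes the ethical stakes visible: the "friendliness" is produced by continuously collecting intimate data about the user and optimizing the machine's responses against it.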

Sex Robots

It seems strange, but it is true that there are already semi-responsive sex dolls that count as a minor type of robot.  These machines are such a tantalizing dream for some roboticists that there is little doubt this industry will continue to grow.  This category of robotics supercharges the worries raised by affective robotics and adds a few more.  Sociable robots examine the user's biometrics so the robot can elicit friendly relations; here the robot examines biometrics to elicit illicit relations.  A sex robot manipulates very strong emotions, and if we thought video games were addictive, imagine the behavior a game console one could have sex with might produce.  These machines are likely to remain on the fringe of society for some time, but the roboticist David Levy has argued that since this technology can fulfill so many of our dreams and desires, it is inevitable that it will make deep market penetration and eventually become widespread in our society.  This will result in many situations that run the spectrum from tragic, to sad, to humorous.  The key question here is whether these machines can really be filled with love and grace or whether we are just fooling ourselves with incredibly expensive and expressive love dolls.  I can easily grant that engineers can build a machine many would like to have sex with, but can they build a machine that delivers the erotic in a philosophical sense?  Can they build a machine that can make us better people for having made love to it?

Carebots

Somewhat related to the above is the carebot.  These machines are meant to provide primary or secondary care to children, the elderly, and medical patients.  There are already a number of these machines, such as the Paro robot, in service around the world.  On one end of the scale you have something like Paro, which is meant to provide something like pet therapy for its users.  Toward the middle of the scale you would have machines built to assist medical caregivers in lifting and moving patients, helping to monitor their medications, or simply checking in with patients during their stay.  At the far end of the scale you would have autonomous or semi-autonomous machines with nearly full responsibility for looking after children or the elderly in a home setting.  Here again we have some of the same issues raised by social robotics, along with the concomitant privacy issues.  But in addition to those, there is the troubling question of why other humans are not taking care of their own children and elderly relatives.  What kind of society are we creating when we wish to outsource these important human relations to a machine?

Robot Surgery

These are robots that assist in surgery and other life-and-death medical practices such as administering medication.  Often the surgeons using these machines are close by, but this technology could also be used to allow a surgeon to work on a patient many thousands of miles away, perhaps a wounded soldier or a patient with a serious condition living in a remote or economically depressed part of the world.  This technology puts a new wrinkle on many of the standard medical ethics issues, and we need more medical ethicists to study this phenomenon in depth.

Autonomous vehicles

Our roadways are soon to change in a very radical way.  Autos and large transportation vehicles may soon have no human driver.  Already many of our vehicles can be seen as robots of a sort: some luxury vehicles will take over in emergency braking situations or when you fall asleep at the wheel, and a number of autos will park themselves completely autonomously.  The vast majority of the issues involved here will be legal, but there will also be social upheaval and resistance.  Imagine the damage autonomous cars will do to the ego of the American male who largely bases his personality on his vehicle.  More importantly, can one trust a vehicle to make the right decisions when those decisions affect the lives of you, your family, and everyone around you?  There have already been deaths caused by faulty automatic navigation services, because people robotically follow the robotic voice no matter what it says, even when it gives incorrect directions that lead them out into the middle of Death Valley.  The latter problem arises because map services use proprietary data programmed in by people with no experience of the territory they are mapping, and the services do not share changes to road conditions that might save the lives of users of a competitor's service.

Attribution of moral blame

This is one of the biggest conundrums in roboethics.  Nearly all moral systems have some way of assessing which moral agent in a system is to blame when things go wrong. Most humans respond to blame and punishment and will modify their behavior to avoid them when possible.  But how does one blame a machine?  Will people use robots as proxies for their bad behavior in order to remove themselves from blame?  When a military robot kills innocent civilians, who is to blame?  If you are asleep in your robotic car and it runs down a pedestrian, did you commit manslaughter or are you just an innocent bystander?

Environmental Robotics

There are two ways to look at the environmental ethics impacts of robotics.  One is to look at the impact of the manufacture, use, and disposal of robots.  Currently there is no green robotics movement that I am aware of, and we should push for one to be developed.  The second is the interesting idea that robotics could provide an invaluable tool for gathering data about environmental change.  The very same robots that are used to monitor enemy troops and scour the ocean floor for enemy activity could easily be re-tasked to monitor forests and ocean ecosystems, protect whales and dolphins, or take on any number of environmental tasks that unaided humans find difficult.

Infosphere

The infosphere is a word coined to refer to the new information environment we are creating that layers over the natural environment.  Think of it like the ecosphere, but for machines.  Machines use information technology to work together, and as these networks continue to form and evolve, the idea is that they will reach a level of complexity that rivals the natural world.  Watch this short video that explains how the philosopher Luciano Floridi uses the term infosphere. Another way to think about it: if we were to meld virtual reality and robotics together with future information technologies and networks, we would have created the infosphere.  The ethical challenges of this would be very large, as what we are doing is creating a new kind of environment along with the organisms that would inhabit it.  No humans have ever faced this kind of challenge before, and meeting it correctly will require equally innovative thought in ethics and morality.  One new form of ethics that attempts to do this is called information ethics, and it has a number of subfields that focus on specific challenges of the growth of information technology and the infosphere.

Assignment 22, Writing Reflection (200 to 400 words), to be posted in the comments section below.  We have covered a wide territory here.  Go back and pick something that you found particularly interesting or challenging, and describe how an understanding of the ethical theories we have looked at in this class can help us make better choices in the development of emerging technologies.
