Robots are everywhere. We think of them as mere tools under our control, while in fact, like any other technology, they contribute to shaping our behavior by slowly influencing the way we think.
Indeed, we love to think that robots are mere technological tools we use to relieve ourselves of what we call “4D tasks,” i.e. dangerous, dull, dirty, or dumb tasks. Many people believe that technology is meant to serve humans, while in fact our addiction to technological toys slowly but surely turns us into small cogs within a huge system.
Technology is not socially neutral
On that matter, two philosophies traditionally confront each other. According to Andrew Feenberg, a philosopher of technology at Simon Fraser University, the impact of technology on society can be approached from two perspectives.
The first one, called instrumental theory, posits that technological artefacts are tools serving predetermined purposes and that technology is perfectly neutral toward the ends it has been assigned and its socio-political environment. In other words, technology serves humans without impacting them.
The second one, the substantive theory, rejects that neutrality and asserts conversely that technology shapes the social world giving birth to a brand new cultural system. In short, the substantive theory suggests that technological tools foster the emergence of new ideas and perspectives that will model our perceptions and subsequently our behaviors.
Transferring humaneness to robots
A quick look at our attitudes toward technology shows how much we have already been affected and how technology has influenced the way we behave toward it. Kids can regard robot toys as living creatures, the same way they attribute humaneness to cuddly toys. Some individuals establish an emotional relationship with their cars or motorcycles, sometimes giving them a name. Others develop friendly ties with their computers or their video game consoles.
Conferring humaneness on robots will be made easier by the appearance we give them. Anthropomorphism accentuates the feeling of humaneness, allowing us to project our perceptions onto human-like machines. Combined with the facial expressions, speech abilities, and artificial intelligence that robots are, or will be, fitted with, the distinction between humans and robots is slowly but irremediably fading away. Robots are more and more seen as pets, and our relation to them follows the same pattern as the one we now have with domestic animals.
Warbots, the new brothers in arms
In the military, where individuals face physical risks, the use of robots to avoid human casualties has contributed to the forging of deep emotional bonds between soldiers and their robotic brethren to the point where they not only name robots (sometimes after someone they love), but where they put their lives at risk in order to save or retrieve them on the battlefield, and even consider awarding them medals.
In doing so, soldiers recognize the merits of robots in protecting their lives, and invest their mechanical brothers in arms with humaneness, the same way other people invest their pets with some kind of humanity.
Consequently, as demonstrated by Dr. Julie Carpenter in a book on “human-robot interactions in militarized spaces”, introducing robots in our lives definitely has a strong impact on our perceptions and associated behaviors.
The humanity of things, robots included, ultimately depends on our considering them as endowed with some level of humaneness. This brings us back to the philosophy of intersubjectivity, which postulates that we are human only as long as we are seen as such by our peers.
Loving robots like humans
These are only some examples that clearly invite us to question our relation and emotional involvement with technology. Our behaviors are shaped not only by robots, but also by popular culture such as sci-fi movies and TV series.
Thus, to mention but a few instances, John Connor develops a son/father relationship with the Terminator; William is sentimentally attracted to the intelligent android Dolores in Westworld; in the Swedish fiction series Real Humans (Äkta Människor), young Tobias Engman falls in unreciprocated love with the hubot Anita; Rick Deckard (maybe not human himself?) falls in love with the replicant Rachael in Blade Runner; and in Her, Theodore Twombly develops an intimate platonic relationship with the artificial intelligence system Samantha.
Pop culture has a strong impact on our perceptions, and being told, even through fiction, that robot-human romance is possible, if not desirable, slowly penetrates our consciousness.
What does all that teach us?
All this deeply questions our paradoxical intimacy with technology, our capacity to transfer humaneness to machines, our relation to ourselves and to others, and even the very sense of what it is to be human.
At the end of the day, it seems clear that robots are not neutral. They are definitely not simple tools. They shape us by changing the way we think. They invasively influence our perceptions and consequently our behaviors.
Emmanuel R. Goffi is Director of the Observatoire Ethique & Intelligence Artificielle at the Institut Sapiens. He is a specialist in political science and ethics. He served for 25 years in the French Air Force. Holding a PhD in political science from Sciences Po Paris, Emmanuel is also a professor of ethics in international relations at ILERI and a research fellow at the Centre for Defence and Security Studies at the University of Manitoba, in Winnipeg, Canada.
Emmanuel has taught and conducted research at numerous academic institutions in France and Canada. He regularly speaks at conferences and in the media. He has published numerous articles and book chapters, is the author of Les armées françaises face à la morale : une réflexion au cœur des conflits modernes (Paris: L’Harmattan, 2011), and edited a reference work on drones, Les drones aériens : passé, présent et avenir. Approche globale (Paris: La Documentation française, coll. Stratégie aérospatiale, 2013).
His research focuses mainly on ethics applied to robotics and artificial intelligence, particularly in the field of defense.