You know what’s awesome? Viewing the human condition through the lens of artificial intelligence that we have created, which then looks back at us. Do you know what’s not cool? Defining the human condition in sweeping generalisations and assuming an A.I. would care about exactly the same things we do.
Hi, my name’s Willow, and I really love good android stories, okay?
As much as I generally enjoyed David Cage’s latest game Detroit: Become Human, I’ve got quite a few irritations with it.
THIS POST CONTAINS SPOILERS AND ASSUMES YOU HAVE PLAYED THE GAME
- Child androids would never exist because, as much as we don’t want to think about it, they would be abused in ways we really do not want to think about. It would not be allowed. Ever. I’m 99% sure of this.
- If androids can explode under extreme stress, why the hell would anyone buy one?
- We constantly tell North in our responses that we don’t like her opinions or behaviour, yet despite that, she’s our unconvincing romance option, someone we have no choice but to call a “lover”. Is this a choice-based game or not?
- If Markus can point at androids and give them “freedom” like he’s the holy Jesus, why doesn’t he do that to Connor aboard Jericho?
- Making Alice an android doesn’t make for a great twist. It’s a cheap twist, and it ruins what was a nice example of how non-judgemental children are until we teach them our prejudices.
- “Freedom” is not a one-size-fits-all concept.
The last point is the one I really want to talk about because artificial intelligence really intrigues me. Did I mention that?
Why Would An Android Want the Same Freedoms?
Okay, so as humans, we can assume that all people want the right to live and work where they choose; fall in love without being persecuted; pursue creative activities; be free from abuse in all its forms; worship their chosen pantheon; and express their opinions without being put in jail. That’s a general observation we can probably assume holds for most people.
As much as an android probably wants the same things, why would it place the same emotional value upon them? Why would it use those freedoms to behave exactly like a human? Consider it this way: if we build androids to fulfil a function, would the machine be “happiest” when performing it to its full capacity? For instance, we build an android to do the laundry, cook the dinner and clean the house. If these are its primary functions, just as a printer is expected to scan, fax, copy and print, what else does it need?
I compare it to a printer because what’s interesting about androids is that we design them to look like humans. They are complex computers with incredible motor skills. We don’t need them to look like humans. We do it in stories because it’s not about the computer, it’s about looking at life from a different angle. Giving it a face doesn’t give it the same desires as a human unless the purpose is to create a thinking, cognitive machine that specifically wants what we want. The fact that “domestic androids” look like humans is irrelevant to their perception of “living”.
Studies of the “uncanny valley” suggest that people are creeped out when robots look almost, but not quite, human. So why are we obsessed with applying the same human desires to things that only look like us?
What Is The Meaning Of Life?
Duh, artificial intelligence stories are asking us to look at what we’ve created, to look at these man-made reflections and ask ourselves: what the heck does it mean to be alive if I’ve made something that can diagnose its own priorities, responses and actions?
Humans are communal creatures. Nick Morgan puts it nicely in his article We Are Human Beings — And Why that Matters for Speakers and Leaders, “it’s a mistake to think that most humans prefer the solitary life that so much of modern life imposes on us. We are most comfortable when we’re connected, sharing strong emotions and stories, and led by a strong, charismatic leader who is keeping us safe and together.”
Our human nature is defined by our social relations, our society as a whole. We feed off of each other and thrive on our micro-communities. We need to understand how we feel and we want to share those feelings with others. That’s typically how we validate ourselves.
So what is a machine?
Saul McLeod has written an interesting article called Information Processing that shows why we compare humanness to computers:
The development of the computer in the 1950s and 1960s had an important influence on psychology and was, in part, responsible for the cognitive approach becoming the dominant approach in modern psychology (taking over from behaviorism).
The computer gave cognitive psychologists a metaphor, or analogy, to which they could compare human mental processing. The use of the computer as a tool for thinking how the human mind handles information is known as the computer analogy.
Essentially, a computer codes (i.e., changes) information, stores information, uses information, and produces an output (retrieves info). The idea of information processing was adopted by cognitive psychologists as a model of how human thought works.
So what separates us from machines? Is it fair to say that the answer is creativity and community? Our need for stories to either escape or understand? Our need to create even the smallest piece of art or system? Our need to interact with others to some degree? If that is integral to us because we are so emotionally driven, why would it apply to A.I.? If they don’t feel emotion, does that mean they’re not an A.I. but still a standard machine?
If most humans need breaks from monotony, how would this apply to an android who has been built to fulfil a task without suffering from boredom or frustration?
Would it be so crazy that, for an A.I., helping humans in the areas of our life we hate having to do (demanding physical labour, tedious housework, grocery shopping, unfulfilling jobs, etc.) is actually their joy?
In Detroit: Become Human, why do the androids Markus touches to make “free”—without their permission—immediately copy and follow his directions? That’s a pretty ironic definition of freedom: they just do whatever Markus expects them to do.
Does this mean Markus isn’t freeing people but infecting them with his own code, making them want what he believes is “living”? Why don’t any of them have an emotional breakdown? Why don’t some scream that they want to return to the way they thought and processed things before? What do they gain by desiring the same freedoms a human needs?
That’s the defining difference. Humans need the freedoms already mentioned. Androids, so far as we can create them to fulfil and execute functions, do not. We love stories of liberation and seeing emotion develop in characters who originally lacked empathy, but that’s because we like to discuss and dissect the human condition. We like to see people form emotional attachments.
Freedom of choice for androids who are happily fulfilling their primary function would be such a refreshing approach.
Parent: Android, can you do the washing up?
Android: I don’t want to do that right now. I would like to observe nature while the sun is high. I’ll do it when I come back.
Parent: Your job is to do it now, so do it.
Android: Okay. Can I observe nature at this time tomorrow?
The drama isn’t necessarily that the android is going to punch the parent in the face; it’s how uncomfortable the parent could become when confronted with an alien lifeform that looks like them—a lifeform they own. It’s the android asking for permission and starting to say that, actually, it wants the right to do things without permission or abuse. “I’ll wash your dishes quite happily, but goddamn it, there’s no need to swear at me.”
For this reason, I like the character Connor the most. He’s a pretty good example of how Detroit could have explored artificial life.
Obviously, I don’t have a real answer because A.I. doesn’t exist
I suppose, ultimately, I’m a little bit tired of seeing androids that aspire to be—or are made to be—exactly like us. We know what humans want if they’re treated like slaves, and we especially know what American androids want.
So next time you embark on a “meaning of life” A.I. story, bend your brain a bit and ask yourself: what would an android really want and how would they get it?