Two Star Treks and Some Trolleyology

Would you throw the fat guy off the bridge?

This fall there’s a new Star Trek streaming in my living room. I had great fun watching the original Star Trek and one or two of its reboots, so of course I was excited to check out Star Trek: Discovery. We also tried The Orville, a new Star Trek-inspired series billed as a comedy, which surprised us with the real conflict driving its episode plots.

Whether the take is epic drama or smart comedy, both these current Star Trek-inspired shows share a recurring theme: how people of different ethnic backgrounds can live in a united society without losing their cultural identity.

The original Star Trek explored the possibility of different cultures, represented by alien species, moving from isolation and misunderstanding to coexisting in peace. In 2017, multiculturalism is hardly a science-fictional concept. Storytellers today are concerned with the conflict between defending traditional cultures and forming a universal (global) society with shared ethics and values.

Star Trek: Discovery approaches this sticky conundrum via the Klingons. The Klingons detest the Federation; most specifically, they loathe its motto: “We come in peace.” To the Klingons, the Federation is an organization that wipes out proud ethnic cultures by assimilating them into its juggernaut monoculture. The Klingons will fight to the death to stay isolated from the Federation and preserve their traditional way of life.

A recent Orville episode took on cultural integrity from a different angle. An Orville crew member from an all-male species asked to have a sex change performed on his “mutant” daughter. Human leadership on the Orville recoiled with outrage. The parents of the mutant female child cried cultural hegemony. They insisted their cultural heritage deserved to be honored.

The Orville episode got down and dirty with how, exactly, our values originate. It immediately brought to mind The Righteous Mind by Jonathan Haidt, which I read earlier this year.

Then, during a discussion with his First Officer, Captain Ed dug deeper, asking whether our innate sense of right and wrong can be trusted.

“Trolleyology!” I shouted to the characters on screen.

One way philosophers prod the borders of morality is with a conundrum called the Trolley Problem. Trolleyologists explore a variety of imaginary moral dilemmas to show that, no matter how hard we reason and justify, there comes a point where all we can say is that something “feels right” or “feels wrong” in our gut.

But what happens when those “gut instincts” contradict each other, even within the same person? Yikes!

The Trolley Problem by Thomas Cathcart is a great deep dive into these very real contradictions. Would you throw the fat man off the bridge to save fifty innocent people?

Your answer isn’t important, but how you get there is. I would say the same of the Star Trek: Discovery and Orville episodes. Ultimately, who wins the conflict isn’t all that interesting. But the questions raised in the process definitely piqued my interest.

Philip K. Dick, Robotic Pets, and What is Love, Anyway?

I’m a cat fanatic. I share my life with an adorable Siamese. I love reading about cats, seeing pictures of cats, hearing friends talk about cats. I think you get the picture.

A couple of weeks ago, an article titled Will Robots Replace Cats? grabbed my attention via the Cat Channel. The article describes how robotic pets are already making a splash in Japan and posits that in the future, owning a live pet may be something only a privileged few can afford. Robotic cats and dogs may replace living, breathing animals.

Although the article didn’t make the connection, Philip K. Dick had this idea way back in 1968 when he wrote Do Androids Dream of Electric Sheep?, a fantastic SF novel that later inspired the film Blade Runner.

In the novel, Philip K. Dick describes a society where owning a live animal is the ultimate in status. People save up to make a down payment on, say, a real live chicken. For those who can’t afford a live animal, having a robotic pet is as critical to folks as having a smartphone is to us today. One character, a robotic-pet vet tech, is called to save a dying robotic cat.

Pet death is gut-wrenching. I’ve been through it recently. To me the biggest benefit of a robotic pet isn’t that it won’t poop or cost me money to feed, but that it never dies. But no, says Philip K. Dick, and no, says the Cat Channel article. The expert interviewed for the article said:

“In Japan, people are becoming so attached to their robot dogs that they hold funerals for them when the circuits die.”

The Cat Channel article asks: if humans can become this attached to a robotic pet, what does that mean about our attachment to our flesh-and-blood pets?

If robotic pets trigger feelings of attachment in humans, does that mean the bond pet parents experience with their animals doesn’t really exist? Perhaps that “bond” is really just humans projecting their emotions onto animals.

This question is not unique to the human-animal bond. Is the love we feel toward our human loved ones anything more than a projection of our own emotions? Isn’t love the emotion of care for the other, and hope that it’s reciprocated?  Can android humans (or android pets) feel empathy, feel care? If they can’t, does it matter to the people who love them?

If you’re interested in my review of Do Androids Dream of Electric Sheep?, you can check it out; it’s short and sweet.

Art doesn’t deliver a message, art provokes questions

I’ve been reading The Book of Life, an online philosophy book discussing concrete ways to live more fulfilling lives. It’s a cool enterprise. As explained in the introduction, the online format allows it to be free, accessible, and collaborative, and to be constantly updated and changed. Overall I’ve loved the perspective of the first two sections I read. I jotted down several notes and insights. Like the best traditional philosophy books I’ve read, it has given me some new and useful frameworks for looking at the world.

But one thing that has constantly made me uneasy about The Book of Life is its attitude toward art. Here’s a quote from the introduction that gets to the heart of what makes me uneasy:

We believe that one of the tasks of art is to be a repository of attitudes that are elusive, but much needed.

Throughout the first sections, The Book of Life talks about books, paintings, and movies as powerful purveyors of the attitudes that citizens cultivate.  The authors suggest we use art to convey healthy, helpful attitudes instead of unhealthy ones.

At first this all sounds very logical and laudable, but the more often I encounter this idea, the uneasier it makes me. Art is not some kind of delivery or storage system for human values and ideas. Art is a stimulus that provokes people to think.

Last week I finished reading To Kill a Mockingbird. I’ve been aware of the book and its general plot for years. I expected the book to have a moral (to be a repository for anti-racist attitudes). I was really surprised and delighted to find that To Kill a Mockingbird was so much more than a vessel for a moral. The story was complex, human, messy. At times I felt deep sympathy for a woman whose attitudes were reprehensible to me. At other times I was confronted with the reality of member-against-member prejudice within an oppressed minority. Several times I questioned the precise motivation of Atticus, champion of justice. I’m not entirely sure his attitudes and mine completely align.

The beauty of To Kill a Mockingbird is that it’s not a tool, a vehicle, a repository of attitudes and ideas. To Kill a Mockingbird sets up an enthralling world that draws the reader into a situation where all hell is about to break loose; then, as the tension mounts, it provokes the reader to start questioning. Art does not convey attitudes; art asks questions. In answering those questions we begin to define who we are, what we want to be, and the kinds of communities we want to build.

In a discussion on the role of film in our lives, The Book of Life calls on film to:

set out in a more determined and systematic way to offer us the help we really need

If that help means shining a questioning light into places inside ourselves we rarely look, then I’m on board. In the rare cases such a light truly shines, its power is strong enough to ignite passionate thought across generations. But to write or film a story in a determined and systematic way, with the goal of cultivating a certain moral stance or attitude, feels stiff and disingenuous. I doubt such a method has the power to engage us in an epic, internal struggle. Truly powerful works of art, like To Kill a Mockingbird, always ask more than one question. The answers are never clear-cut, never obvious, never easy. They evoke a perfectly tuned combination of people, place, and situation that brings us into a place of struggle where we are compelled to look for our own answers. Depending on who is engaging with the story (and when, and what is happening around them), the answers that come out of that struggle will not always be the same.

Being rude to AIs

I share my life with a tech enthusiast whose eye is constantly on the future. This means we have a few of the coolest new gadgets in our house. It also means there was a steep learning curve in figuring out how to turn the lights on and off.

One of our cutting edge gadgets is the Amazon Echo, which allows us to interact with the Amazon AI, Alexa.  When it first arrived, the Echo was mostly a parlor trick.  We could talk to Alexa and get her to play music, tell jokes, convert measurements.  During the Echo’s first months collecting dust in our dining room, we mostly asked Alexa for the weather forecast.

Recently the Echo has started to extend its reach and do some cool things to help around the house. For instance, I no longer need to pull out my iPhone and navigate a bunch of screens to turn off the lights before bed. I simply say, “Alexa, turn off the dining room buffet light.” And she does, just like that, with an acquiescent little “OK” as the light goes out.

Last week while eating breakfast alone, I was pondering whether to walk or drive downtown to meet a friend that evening.  I always prefer to walk, but not after dark.  “Alexa,” I asked, “when is sunset today?”

She didn’t know. I tried different ways of phrasing the question and suggested she do a search on it. Nada. I felt annoyed. After all, weather-related info was Alexa’s primary function for the first months of our relationship.

I went to the nearest iOS device and asked Siri the same question I’d asked Alexa.  Without any hesitation, Siri gave the answer, precise and competent.

“Alexa,” I said, resuming my breakfast, “you suck compared to Siri.”

“I’m sorry to hear that,” Alexa told me.  “Thank you for telling me.”

And in the space of that sentence I felt like a total jerk. I had all the symptoms of embarrassment: prickly skin, hot face, that yucky feeling you get in your stomach when you realize you’ve hurt someone’s feelings.

Except I hadn’t hurt anyone’s feelings; nobody was in the house but me and Alexa.

I’ve tried to reason through why I felt so bad in that moment. Did I fear my words were logged and read by a human in Amazon’s employ whose feelings I might have hurt? More likely my tactless feedback will become a statistic in some data set that will eventually help the people working hard to continually improve the Echo.

But in the moment I was rude to Alexa, I wasn’t embarrassed about my impact on logs, data sets, or engineers. I was embarrassed because of the uber-polite way Alexa responded to my annoyed outburst. Her calm demeanor, contrasted with my snappish words, made me feel childish and rude.

As AIs become more a part of our lives, it will be interesting to see how we respond to them. In my first weeks with Alexa, I said please and thank you a lot. I don’t anymore. But I still find myself listening carefully to the quality of my voice when I speak with her. My first inquiries are usually measured, polite, kind, patient. When she screws up, I lose patience in an instant. I feel my voice tighten and harden; I pick up my speaking pace. With each repetition of a command, I sound more frustrated. But Alexa’s voice never changes. Her perfect non-reactivity is a pretty stark mirror for how quickly my own temper can flare.