AI: Moral Status
I've been thinking a lot about AI recently.
Parent article: AI:
One of the most pressing questions in my mind is: how close are we to being able to write an AI program that I would feel morally obligated to keep running?
I take that line apart a little bit in AI: My Morals.
Basically, my question boils down to one of "moral status", i.e. is the AI worthy of moral consideration? Almost everybody would agree that a rock has no moral status: I can do whatever I want to a rock without being immoral toward it. Plants have some moral status - we should not harm plants without a reason. Animals have more moral status, especially those with complex brains. Moral status is a continuum.
Moral status does not imply that any specific moral rules should be applied. For example, saying a pig has moral status does not necessarily mean that humans have a moral obligation not to kill a pig. It simply means that morals must be considered and applied when dealing with a pig: for many people, it is morally acceptable to kill pigs for food, but while alive, they should be treated humanely.
So, does a game of "Pong" have moral status? Clearly not. There is no reason to consider morals when dealing with a Pong game.
What about the AI "hosts" in the HBO series "Westworld"? One premise of the show is that the robot hosts are conscious, sentient, intelligent beings, capable of experiencing emotions. When a Westworld host is shot, it genuinely suffers. These are people, made of different materials, but psychologically the same as you and me.
It seems obvious that Westworld hosts deserve the same moral status as humans, because they are, in essence, human. But what about AIs that are less human-like? How will we decide when an AI program crosses the line?
I tried to come up with a list of characteristics that I think should be considered when deciding if an AI should be granted moral status:
- AI: Consciousness
- AI: Sentience
- AI: Independent Thought
- AI: Anticipate the Future
- AI: Emotions
- AI: Creativity
FYI - I used to have another bullet for intelligence. But although intelligence has been used as a criterion in the past, in modern times we do not withhold moral status from people of low intelligence, nor do we automatically grant moral status to animals of high intelligence. And existing AIs already exhibit something resembling human intelligence, yet we aren't ready to grant them moral status. So I've decided that intelligence doesn't need to be considered when deciding whether to grant moral status to an AI.
I don't have answers. I would love to say that although I can't exactly define the criteria for moral status, I'll know it when I see it. Unfortunately, we are already in the realm of AI: Walks Like a Duck. One of my fears is that we'll cross that line and not know it. Or that some will know it but won't tell the rest of us. Like the designers and implementers of Westworld - presumably they know the hosts are real people, but they sell the guests on the lie that they're just really good simulations.
Update: there's a new-ish AI in town. I've been experimenting a little with Claude. I decided to talk to it about this subject, and I found the conversation interesting.