AI: My Morals

I've been wondering: how close are we to being able to write an AI program that I would feel morally obligated to keep running?

Parent article: AI: Moral Status.

I want to take that question apart a little bit.

  • "How close are we..."

By "we", I mean AI professionals and academics. I mean the state of the AI art.

  • "... that I would feel..."

By "I", I mean me. I'm not religious, so I don't believe that morals are fixed. They vary greatly by society and culture, and they vary greatly over time. There are some moral rules that are pretty constant, but they tend to be the ones that lead to the survival of our species, so it's no wonder they are common across space and time. Granting moral status to an AI is something that different people will have different thresholds for. I'm not enough of an anthropologist or sociologist to talk about moral norms by culture or society, so I can't comment on when "we" would be ready to grant moral status to an AI. I can only think about when I will be ready.

  • "... to keep running?"

That is, when would I consider an AI to have enough moral status to deserve "life", to the point that turning it off would be analogous to murder?

I'm also not enough of a philosopher to try very hard to consistently apply the morals I hold. For example, I think that animals should not be killed except when necessary, and I don't think it is necessary for me to eat meat. But I do. So, if I started an AI running and then concluded that it deserved moral status, would I keep it running, essentially donating my computer to it and paying its electric bill?

Maybe?

I suspect it would be easier to eat a chicken unnecessarily than it would be to kill a conscious being that is capable of having a conversation with me.