AI: Emotions

I've been wondering: how close are we to being able to write an AI program that I would feel morally obligated to keep running?

Parent article: AI: Moral Status

I think the capacity to experience emotions is a characteristic that should be considered.

If a person fears dying and experiences anguish when threatened with death, we feel a moral obligation not to kill that person. But what if there were a person who had no fear of dying - would we feel OK about killing them?

I don't think so. I think an entity's having genuine emotions makes us feel our moral obligations more acutely, but I'm not sure we should make emotions a gating factor when considering whether an entity has moral status. The Star Trek character Data has no emotions for much of the series, but we consider him worthy of moral status. Ditto Spock, although calling Spock non-emotional is a stretch.

We spend a lot of time debating whether an AI is experiencing emotions or just simulating them, but I don't think that is an especially interesting question. I can't think of a set of characteristics that falls just short of deserving moral status and that would be pushed over the edge by adding emotions.