AI: Sentience
I've been wondering: how close are we to being able to write an AI program that I would feel morally obligated to keep running?
Parent article: AI: Moral Status.
I think sentience is a characteristic that should be considered.
I'm a little nervous talking about sentience because the word seems to mean different things to different people. In popular usage, sentience usually includes intelligence and independent thought, but I want to talk about those separately. So I'm going to take a minimalist definition of sentience (that does not correspond to the popular one, so be careful): sentience is the ability to experience sensations. Basically, inputs.
Chatbots have basically one input: the text entered by the user. They had other inputs during training, but that was the programming phase; during operation, the only input is user text.
Note that humans have the normal senses we think of: sight, hearing, touch (including pain), smell, and taste. But we also have internal sensations like hunger, sexual urges, and the stress response. These internal sensations are both inputs and outputs, and they are part of complex feedback loops that regulate emotions, mood, social interactions, etc.
This is why I question whether AIs will feel real, human-like emotions in the near future. They are good at simulating emotions thanks to their large body of training data, but they aren't actually sensing the sensations.
On the other hand, artificial feedback loops could be implemented to simulate analogs of hunger, sexual urges, and stress responses. There might be ways to have those feedback loops produce effects that could be defined as emotions, so this could be an area of research. But I'm not sure today's chatbots have anything like that; a sketch of what I mean follows.
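To make this concrete, here's a minimal sketch of what I mean by an artificial feedback loop. Everything in it (the `Drive` and `InternalState` names, the numbers, the idea of injecting a "sensation report" into the prompt) is my own invention for illustration, not a description of how any real chatbot works:

```python
# Hypothetical sketch: artificial "drives" layered on top of a chatbot's input.
# All names and numbers here are invented for illustration.

import time
from dataclasses import dataclass, field

@dataclass
class Drive:
    """A single internal signal, e.g. an analog of hunger or stress."""
    name: str
    level: float = 0.0         # current intensity, 0.0 (absent) to 1.0 (overwhelming)
    growth_rate: float = 0.01  # how fast the drive builds per second of neglect

    def tick(self, seconds: float) -> None:
        """Let the drive build up as time passes."""
        self.level = min(1.0, self.level + self.growth_rate * seconds)

    def satisfy(self, amount: float) -> None:
        """Reduce the drive, e.g. after it has been 'fed' somehow."""
        self.level = max(0.0, self.level - amount)

@dataclass
class InternalState:
    """A bundle of drives that is fed back into the chatbot's input each turn."""
    drives: dict = field(default_factory=dict)
    last_tick: float = field(default_factory=time.monotonic)

    def add_drive(self, drive: Drive) -> None:
        self.drives[drive.name] = drive

    def tick(self) -> None:
        """Advance all drives by the wall-clock time since the last turn."""
        now = time.monotonic()
        elapsed = now - self.last_tick
        self.last_tick = now
        for drive in self.drives.values():
            drive.tick(elapsed)

    def describe(self) -> str:
        """Turn the internal state into text the model can 'sense' as input."""
        return "; ".join(f"{d.name} level is {d.level:.2f}" for d in self.drives.values())

def build_prompt(state: InternalState, user_text: str) -> str:
    """Combine the chatbot's usual single input (user text) with internal sensations."""
    state.tick()
    return f"[internal sensations: {state.describe()}]\nUser: {user_text}"

if __name__ == "__main__":
    state = InternalState()
    state.add_drive(Drive("energy-deficit", level=0.3, growth_rate=0.02))
    state.add_drive(Drive("stress", level=0.1, growth_rate=0.05))
    print(build_prompt(state, "How are you feeling today?"))
```

The point of the sketch is that the drive levels change over time regardless of what the user types, so the system's input is no longer just user text. Whether that kind of feedback ever adds up to something worth calling a sensation or an emotion is exactly the open question.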
Could an AI that has such limited sentience deserve moral status?
Sure, why not? For one thing, sentience is only one characteristic, and a chatbot's low level of sentience could be made up for by other characteristics.