AI: Anticipate the Future
I've been wondering: how close are we to being able to write an AI program that I would feel morally obligated to keep running?
Parent article: AI: Moral Status.
I think anticipating the future is a characteristic that should be considered when determining moral status.
DANGER, WILL ROBINSON: AI: I Don't Know What I'm Talking About! This is especially true regarding the nature of thinking - I am ignorant of pretty much all research on the neurology, psychology, biology, and philosophy of the human mind. I fear that my ideas on thinking are similar to those who think computers are all about blinking lights and tape drives. But hey, when has that ever stopped me from having opinions?
Humans have a highly-developed sense of the past and especially the future. We consciously plan for it, worry about it, simultaneously crave and fear it. In contrast, many animal psychologists claim that animals live much more in the moment. They don't reminisce about yesterday, and they don't worry about tomorrow.
But neither of those claims is entirely true. Animals DO learn from the past, and anybody who says they don't has never offered a treat to a friendly dog and become that dog's friend for life. And animals DO plan for the future and execute those plans. The counterargument is that these are instinctual behaviors, not the result of intellectual reasoning. But so what? Let's not ask whether animals anticipate the future the same way humans do. They do anticipate it, just in less sophisticated ways.
I do not believe that a different form of anticipation of the future, or even a total lack of it, necessarily prevents assigning moral status to a being. I believe that moral status is a continuum, and that a more highly developed anticipation of the future merely moves moral status upward somewhat. There are human cultures that anticipate the future in radically different ways from one another. Children must be taught to anticipate the future. A human baby anticipates the future much as a dog or cat does - only in instinctive ways; we don't think that denies the baby moral status.
So how does this apply to AI?
Existing LLM AIs do not hesitate to tell their users that they are still learning and will become better in the future. Some of those claims are obviously boilerplate text that the AIs are simply programmed to spout, sometimes as an excuse when they are caught AI: Hallucinating. But isn't that boilerplate similar to instinctive behavior? Couldn't we say that existing LLMs have already shown something analogous to an instinctive anticipation of the future?
All that said, I do believe that a special level of moral status is granted when a being can imagine the future and imagine its place in it. Having hopes and dreams. Some of this starts to bleed into AI: Emotions and AI: Consciousness territory, but even if an AI felt no emotion yet still made specific plans for the future and executed them, I think that would move the moral status needle significantly.