AI: Consciousness
I am wondering: how close are we to being able to write an AI program that I would feel morally obligated to keep running?
Parent article: AI: Moral Status
In my opinion, one component of an AI having moral status is that it has consciousness. But I have a problem with this measure: we don't know exactly what consciousness is. How can we figure out how to implement consciousness if we don't know what it is?
I think some people are concerned that we might accidentally make a conscious AI and not realize it. I'm skeptical about this; I suspect we would have to program it intentionally, although without knowing what consciousness is, I can't really argue in favor of that opinion.
I think consciousness, like moral status, is a continuum. Without nitpicking definitions, I think most people would agree that apes and dolphins are conscious. Dogs and cats too. Are mice? Less so. Ants? Yeah, a little. Trees? Some research suggests that trees have non-zero consciousness. Amoebas? You would have to really stretch the definition of consciousness.
The degree of moral status we give to an entity depends partly on that entity's degree of consciousness. Do we have moral obligations to apes and dolphins? Yes, although arguably fewer than we have to other humans. As we slide down the consciousness scale, our moral obligations tend to decrease.
One concern I have about AIs is that we will apply a binary "yes/no" measure of consciousness to them, where an AI only gets a "yes" if it has near-human consciousness. Under that measure, an AI with, say, dog-level consciousness would just be considered a glorified game of Pong, with no moral status at all.