AI: Independent Thought
I've been wondering: how close are we to being able to write an AI program that I would feel morally obligated to keep running?
Parent article: AI: Moral Status.
I think independent thought is a characteristic that should be considered.
DANGER, WILL ROBINSON: AI: I Don't Know What I'm Talking About! This is especially true regarding the nature of independent thought - I am ignorant of pretty much all research on the neurology, psychology, biology, and philosophy of human thought. I fear that my ideas on thinking are similar to somebody who thinks that computers are all about blinking lights and tape drives. But hey, when has that ever stopped me from having opinions?
When two people converse, each participant has an independent train of thought which steers the conversation. We've all experienced instances where our thinking takes an unexpected turn and disrupts the flow of the conversation. "Sorry, I just realized something." I think that current AI software doesn't do that. In between user inputs, I think the AI software isn't doing much of anything, at least nothing that is analogous to thought.
What is thought? For modern humans, the most conspicuous characteristic of thought is internal monolog. We think in words that we say to ourselves. But this is not the only component of thought. Another important characteristic of thought is the retrieval of memory. Our "mind's eye" sees pictures, not the same way that our eyes stimulate our visual cortex, but with similar results. We can remember things we've seen, and we can imagine things we haven't seen. We can remember feelings. Part of thought is retrieving memories and applying them to our current situation.
But I don't want to overly constrain the definition of independent thought. Dogs think, but they don't have an inner monolog that we know of. Maybe there are other implementations of thinking that are just as valid. For example, maybe a large language model could be implemented to basically talk to itself. I.e. in addition to processing input, feed some of its output back into its input.
In a way, it is already doing this. A chat session is fed back into ChatGPT each time you add a prompt to it. So the software re-reads both your earlier prompts and its own responses. But it's still in response to the user's action of entering a new prompt. It's not an independent operation happening in real time. I.e. you won't see ChatGPT sit there for a while and then pop up with something like, "Hello? Did I upset you? Is that why you went quiet? I upset you, didn't I? I'm always doing that. Ugh! I hate myself."
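To make that concrete, here is a minimal sketch of the re-reading behavior as I understand it. Everything here is my invention; query_model() is just a stand-in for a real LLM API call. The point is that the entire transcript gets resent with every exchange.

```python
# A minimal sketch (all names invented) of how a chat session
# "re-reads" itself. query_model() stands in for a real LLM API call.

transcript = []  # grows with every exchange in this session

def query_model(messages):
    # Hypothetical stand-in: a real version would send the whole
    # message list to the model and return its reply.
    return f"(reply after re-reading all {len(messages)} messages)"

def user_says(text):
    transcript.append({"role": "user", "content": text})
    reply = query_model(transcript)  # the model re-reads everything so far
    transcript.append({"role": "assistant", "content": reply})
    return reply

print(user_says("Hello!"))
print(user_says("What did I just say?"))  # answerable only because the transcript is resent
```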
But again, I don't really know what I'm talking about. Could a ChatGPT developer implement a feedback loop and a real-time clock and make the beginnings of independent thought?
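Here's a toy sketch of what such a feedback loop might look like. All the names are mine, query_model() again stands in for a real LLM call, and the timings are arbitrary: an idle timer that, when no user input arrives, feeds the model's last output back in as its next input.

```python
import time

def query_model(prompt):
    # Hypothetical stand-in for a real LLM call.
    return f"a follow-up thought about: {prompt!r}"

def idle_thinker(seed_thought, idle_seconds=5.0, max_thoughts=3):
    """Feedback loop plus real-time clock: when no user input arrives,
    feed the model's own output back in as its next input."""
    thought = seed_thought
    for _ in range(max_thoughts):
        time.sleep(idle_seconds)  # a real version would watch for user
                                  # input here and reset the idle timer
        thought = query_model(thought)  # output becomes the next input
        print("unprompted:", thought)

idle_thinker("Did my last answer upset the user?")
```

Whether a loop like this would produce anything resembling thought, rather than the model just talking in circles, is exactly the part I don't know how to answer.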
I'm not sure that independent thought is sufficient to elevate an AI to deserving moral status. But I'm leaning towards thinking it is a necessary component.
Re-thinking This
Again, I'm trying to think outside the human box. What is a thought process? It's a series of state transitions within the mind. Often those state transitions are triggered by an external event (input), but when engrossed in deep thought, the events are ... internally-generated? Some part of the conscious thought process triggers a memory retrieval or a logical analysis, the completion of which pushes the thought process forward.
With ChatGPT, the thought process is always moved forward in response to a user prompt. I might consider this a lower form of thought, but still non-zero.
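Trying to sketch this idea as code (a toy model; every name here is my own invention): a thought process as a state machine where processing one event can queue up the internal event that drives the next step.

```python
from collections import deque

class ThoughtProcess:
    """Toy model: a mind as a series of state transitions. A transition
    can be triggered by an external event (input) or by an internal one
    that an earlier step queued up."""

    def __init__(self):
        self.state = "idle"
        self.internal_events = deque()

    def step(self, external_event=None):
        if external_event is not None:
            event = external_event
        elif self.internal_events:
            event = self.internal_events.popleft()
        else:
            return False  # no events: where ChatGPT sits between prompts
        self.state = f"considering: {event}"
        # Processing one event can queue another (a memory retrieval, a
        # logical analysis), pushing the thought process forward without
        # any further external input.
        if "memory" not in event:
            self.internal_events.append(f"a memory triggered by '{event}'")
        return True

tp = ThoughtProcess()
tp.step("user asks a question")  # externally triggered
while tp.step():                 # internally driven from here on
    print(tp.state)
```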
Part of the problem with ChatGPT is that its thought process is ephemeral. If Chat and I synthesize some new idea, that idea only exists in the chat transcript, not in Chat's memory. This is a problem for these reasons:
- ChatGPT can't draw on previous ideas synthesized in other chat sessions. It can only re-discover its ideas each time I hit enter and it re-reads the current chat session's thought process.
- A chat session has a very limited length. Even if I were willing to continue a session indefinitely, the time required to re-read the entire session with each submission would get too onerous. But this is only an implementation issue! I'm confident that a minor re-write of ChatGPT could allow it to maintain state between prompts (see the sketch after this list). So this issue could easily be solved.
- We've been wondering about ChatGPT being (or becoming) a new entity. But in terms of thought processes, each chat session becomes a separate entity. A new entity is created each time we start a new chat. If we decide to grant moral status to each entity, would we feel obligated to continue each session indefinitely?
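For what it's worth, here's a minimal sketch of what maintaining state between prompts and sessions might look like. The file name and the remember() helper are my inventions, and a real system would presumably store model-generated summaries rather than raw text.

```python
import json, os

STATE_FILE = "chat_state.json"  # invented name; any durable store would do

def load_state():
    # Carry ideas synthesized in earlier sessions into this one.
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"ideas": []}

def remember(state, idea):
    # A real system would probably save a model-generated summary
    # rather than raw text, to keep the carried-over state small.
    state["ideas"].append(idea)
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

state = load_state()
remember(state, "independent thought may need a feedback loop")
print(state["ideas"])  # survives the end of this session
```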
By the way, in the process of expanding this entry, I removed a different one: sense of time. I think experiencing the passage of time is a subset of the larger issue of having a thought process. A chatbot whose internal clock only ticks when the user enters input simply experiences time non-linearly.