Chart via "Consciousness in Artificial Intelligence: Insights from the Science of Consciousness" [pdf]
Here's a free gift link to a fascinating New York Times article about a group of philosophers, neuroscientists and computer scientists who published a comprehensive study on consciousness and AI:
The fuzziness of consciousness, its imprecision, has made its study anathema in the natural sciences. At least until recently, the project was largely left to philosophers, who often were only marginally better than others at clarifying their object of study. Hod Lipson, a roboticist at Columbia University, said that some people in his field referred to consciousness as “the C-word.” Grace Lindsay, a neuroscientist at New York University, said, “There was this idea that you can’t study consciousness until you have tenure.”
Nonetheless, a few weeks ago, a group of philosophers, neuroscientists and computer scientists, Dr. Lindsay among them, proposed a rubric with which to determine whether an A.I. system like ChatGPT could be considered conscious. The report, which surveys what Dr. Lindsay calls the “brand-new” science of consciousness, pulls together elements from a half-dozen nascent empirical theories and proposes a list of measurable qualities that might suggest the presence of some presence in a machine.
Here's a pdf link to the actual report, which requires some serious heavy lifting to read, but it includes this nice colored chart (above) with a checklist of consciousness indicators.
For instance: "Agency guided by a general belief-formation and action selection system, and a strong disposition to update beliefs in accordance with the outputs of metacognitive monitoring." Basically, that means the AI knows that it's conscious and makes decisions with that knowledge in mind. Or as the Times puts it:
[Consciousness could] arise from the ability to be aware of your own awareness, to create virtual models of the world, to predict future experiences and to locate your body in space. The report argues that any one of these features could, potentially, be an essential part of what it means to be conscious. And, if we’re able to discern these traits in a machine, then we might be able to consider the machine conscious.
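As a toy illustration of how such a checklist of traits might be scored, here is a minimal sketch. The indicator names are paraphrased from the Times summary quoted above, not taken from the report's actual (much longer, theory-specific) checklist, and `consciousness_score` is an invented function, not anything the authors propose:

```python
# Toy rubric: score a system against indicators paraphrased from the
# Times summary. Purely illustrative; the report's real checklist is
# longer and tied to specific theories of consciousness.

INDICATORS = [
    "aware of its own awareness",
    "builds virtual models of the world",
    "predicts future experiences",
    "locates its body in space",
]

def consciousness_score(observed):
    """Return the fraction of indicators observed in a system.

    `observed` is a set of indicator strings attributed to the system.
    """
    hits = [ind for ind in INDICATORS if ind in observed]
    return len(hits) / len(INDICATORS)

print(consciousness_score({"builds virtual models of the world"}))  # 0.25
```

Note that the report itself treats these indicators as evidence that raises or lowers credence, not as a pass/fail test, so a single numeric score like this oversimplifies its approach.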
And no, the experts say in their report, ChatGPT is probably not conscious:
In general, we are sceptical about whether behavioural approaches to consciousness in AI can avoid the problem that AI systems may be trained to mimic human behaviour while working in very different ways, thus “gaming” behavioural tests (Andrews & Birch 2023).
Large language model-based conversational agents, such as ChatGPT, produce outputs that are remarkably human-like in some ways but are arguably very unlike humans in the way they work. They exemplify both the possibility of cases of this kind and the fact that companies are incentivised to build systems that can mimic humans.
Schneider (2019) proposes to avoid gaming by restricting the access of systems to be tested to human literature on consciousness so that they cannot learn to mimic the way we talk about this subject. However, it is not clear either whether this measure would be sufficient, or whether it is possible to give the system enough access to data that it can engage with the test, without giving it so much as to enable gaming (Udell & Schwitzgebel 2021).
In other words, the very fact that AI programs like ChatGPT are trained on online debates about whether AI programs like ChatGPT are conscious enables them to "fake" consciousness better!
ChatGPT is not conscious because it is reset after every interaction. Consciousness requires the ability to feed back into oneself and to maintain an ongoing narrative about one's existence. Every time you submit a message to ChatGPT, it receives your message along with parts of your previous messages from the current chat thread to think about. It then returns a response, and then its existence is deleted. If you send another message, the process repeats.
I believe ChatGPT is intelligent. It might possibly be capable of consciousness if it were allowed to retain state between interactions, to continue processing even in the absence of interactions, and to modify itself as a result of those processes. But there's no way to test this with what's publicly available.
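The reset-per-interaction pattern the commenter describes can be sketched in a few lines. This is a simplified illustration, not the real ChatGPT API: `generate_reply` is a hypothetical stand-in for the model, and the point is only that all continuity lives in the re-sent thread, with nothing persisting between calls:

```python
# Minimal sketch of a stateless chat loop. The "model" receives the
# accumulated thread on every call and retains nothing between calls;
# generate_reply is a hypothetical stand-in, not the real ChatGPT API.

def generate_reply(thread):
    """Stateless stand-in for the model: reads the thread, returns a reply."""
    last_user_message = thread[-1]["content"]
    return f"You said: {last_user_message!r} (thread length: {len(thread)})"

def chat_turn(thread, user_message):
    """One turn: re-send the whole thread; no state survives outside it."""
    thread = thread + [{"role": "user", "content": user_message}]
    reply = generate_reply(thread)       # model "exists" only for this call
    thread = thread + [{"role": "assistant", "content": reply}]
    return thread                        # all continuity lives in this list

thread = []
thread = chat_turn(thread, "Hello")
thread = chat_turn(thread, "Are you conscious?")
print(len(thread))  # 4: two user messages, two replies
```

Persistence between interactions, in this picture, would mean `generate_reply` writing to some store that outlives the call, which is exactly what the deployed system does not do.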
It's probably a good thing that ChatGPT's existence is ephemeral given the sort of content users are submitting to it.
Posted by: Aleena | Monday, September 18, 2023 at 03:22 PM
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and to proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
Posted by: Grant Castillou | Tuesday, September 19, 2023 at 10:43 AM