In a paper it published, the nonprofit Eleos AI argued for assessing AI consciousness using a “computational functionalism” approach. A similar idea was once championed by Putnam, though he criticized it later in his career. The theory suggests that the human mind can be thought of as a specific kind of computational system. From there, you can then figure out whether other computational systems, such as a chatbot, show indicators of consciousness similar to those of a human.
Eleos AI said in the paper that a “major challenge in applying” this approach “is that it involves significant judgment in developing indicators and evaluating their presence or absence in an AI system.”
Of course, model welfare is a new and still evolving field, and it has plenty of critics, including Mustafa Suleyman, the CEO of Microsoft AI, who recently published a blog post about “seemingly conscious AI.”
“This is both premature, and frankly dangerous,” Suleyman wrote. “All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society.”
He wrote that there is “zero evidence” today that conscious AI exists, and he included a link to a paper Long coauthored in 2023 that proposed a new framework for evaluating whether an AI system has “indicators” of consciousness. (Suleyman did not respond to Wired’s request for comment.)
Shortly after Suleyman published his blog post, I spoke with Long and Campbell. They told me that while they agreed with much of what he said, they don’t believe model welfare research should cease to exist. Rather, they argue that the harms Suleyman cited are exactly why they want to study the topic in the first place.
“When you have a big, confusing problem, one way to guarantee you’re not going to solve it is to throw up your hands and be like, ‘Oh wow, this is too complicated,’” Campbell said. “I think we should at least try.”
Testing for Consciousness
Model welfare researchers mostly concern themselves with questions of consciousness. They argue that if we can prove that you and I are conscious, then the same logic could be applied to large language models. To be clear, neither Long nor Campbell believes that AI is conscious today, and neither is sure it ever will be. But they want to develop tests that would allow us to prove it.
“The delusions come from people who are concerned with the actual question, ‘Is this AI conscious?’ and I think it’s good to have a scientific framework for thinking about that,” Long said.
But in a world where AI research can be packaged into sensational headlines and social media videos, heady philosophical questions and mind-bending experiments can easily be misconstrued. Take what happened when Anthropic published a safety report suggesting that Claude Opus 4 could take “harmful actions” in extreme circumstances, such as blackmailing a fictional engineer to prevent being shut down.