As If, AGI

If enough people act as if LLMs are intelligent, then they are

I get asked a lot during presentations and by media what I mean by "intelligence" when I'm talking about AI. It is generally a gotcha question, one that people ask knowing that it is not really answerable, that any answer you give will be slippery and subject to cheap evasions. This is, in part, why cognitive science debates about the nature of intelligence are so boring.

A new study touches on the topic in a useful way, however. Here are its main findings:

  • 67% of 3000 participants attributed some possibility of phenomenal consciousness to ChatGPT.
  • Confidence in these attributions showed a quadratic relationship, with both ends of the spectrum (strong believers and skeptics) being more confident.
  • Experience with ChatGPT was positively correlated with higher consciousness attributions.

That first point is crucial: two-thirds of people attributed some level of phenomenal consciousness to ChatGPT, and most would concede that such a characteristic is an important component of that thing we call intelligence.

Here is the full study:

Folk psychological attributions of consciousness to large language models
Abstract. Technological advances raise new puzzles and challenges for cognitive science and the study of how humans think about and interact with artificial…

Why is this important? Because when asked what I mean by intelligence, I generally answer that something is intelligent if people treat it as if it is intelligent. And yes, that is circular, but it doesn't matter. If people impute motivations, awareness, personality, consciousness, and some base level of goal-seeking and problem-solving to a thing, then for practical purposes that thing is intelligent. Is that definition one that will satisfy a tenured academic, or an AI researcher? No. But we need not care. If people act as if it is the case, then it is, for practical purposes, the case. I call it "As if, AGI".

💡
As if, AGI: The idea that intelligence is whatever everyone acts like it is, not some formal definition, hierarchy, or series of Platonic steps toward AGI.

Think of it like capital markets, where we participate in a collective illusion, telling ourselves that markets are efficient when they are not, and that shared belief has myriad real consequences. Or consider the legal system, where collective agreement on the rule of law creates a functioning society despite individual disagreements about almost every aspect of the law. Law works because enough people believe it into existence. In both cases, the shared belief in a certain order or structure is the key, not living up to some formal definition.

Turning back to AI, if the majority of people treat an AI system—like ChatGPT—as if it possesses intelligence and consciousness, it will be integrated into society in ways that reify those beliefs, driving its development, applications, and impact. It doesn't actually matter whether ChatGPT is intelligent, conscious, or goal-seeking. It only matters if enough people think it is. And they clearly do.

