Not just that: the technical terms add to the confusion too. "Deep learning," "neural networks," etc. seem to hint at something so human or brain-like that it primes you for anthropomorphising.

I have a friend who keeps talking about how the human brain is just neurons in the state of 0 or 1, and therefore how it's just a matter of time before AI starts learning and encoding "everything" in its 0 and 1 brain.

I don't know if this is how it actually is, but this simplistic 0 and 1 thinking seems off to me. Alas, my friend and I are just two dunderheads arguing about things we know little about. And the eerily neurosciencey terminology does not help at all.

"Experts concerned about this trend have been trying to emphasize that large language models are merely trying to predict the next word in a sequence. LLMs have been called stochastic parrots, autocomplete on steroids..."

Unfortunately, AI boosters (led by Sam Altman himself) have taken to claiming that humans themselves are stochastic parrots, or at least asking: how do you know that humans *aren't* stochastic parrots? To me the "Flying Spaghetti Monster" nature of this argument is self-evident, as is the chorus of disagreement from neuroscientists and psychologists, but maybe it needs to be addressed more directly... ideally by someone smarter and more authoritative than me, but here is my take for us in the Substack peanut gallery:

a) The argument rests on the idea that GPT is a black box and the human brain is a black box, so for all we know they could be doing a very similar thing under the hood. The basic problem with that is that GPT is not really a black box; it's just very confusing and we don't understand it very well. We know precisely how its individual neurons work and how they're connected, and we know precisely what GPT's overall reasoning strategy is. But human neurons are at least thousands of times more complex than the artificial kind - some individual human neurons might even be Turing complete - and we don't have a clue how they're actually connected. Likewise the human brain's overall reasoning strategies are unknown at the level of basic cognitive science. The human brain is a profound mystery across several different domains of science. Even the 302 neurons of C. elegans are still unexplained by science, despite the fact that we understand their connections precisely, along with the cognitive science of its learning: the issue is that individual animal neurons are themselves computation engines and databases, not mere weights in a model. So Sam Altman's claim amounts to a chain of unsupported and self-serving assertions: that our uncertainty in understanding GPT is anywhere close to our uncertainty in understanding animal brains; that GPT's architecture is in any way similar to an animal brain; and that it's therefore plausible for an intricate but conceptually simple algorithm to lead to human-level reasoning and understanding, despite the fact that GPT's mechanics clearly don't align with our own lived experience of thought and reason.

b.1) GPT and related models are often subjected to human tests of reasoning and comprehension. This is useful for determining their capability as tools. But it has led to profoundly misleading characterizations of their *intellectual* ability. There are people who are interested in subjecting AI to the kind of intelligence tests we use for animals and small children - counting, physics puzzle solving, navigation. AI researchers should *work with cognitive scientists* and think about how to apply these "simple" tests to LLMs. ChatGPT, for instance, has profoundly serious issues with counting - in many ways it is worse than a honeybee, which can very reliably count to 4 in a wide variety of unnatural circumstances. More importantly, it seems that all of our deep learning AIs, and certainly LLMs, are incapable of forming a *coherent*, *rational* model of the world. Understanding AI's limitations compared to animals will hopefully clarify the nature of their strengths compared to humans: an AI that doesn't understand the meaning of "3" can be trusted with statistical analysis but cannot be trusted with management decisions.
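
As a thought experiment, here is a minimal Python sketch of what an animal-style exact-counting benchmark for a language model might look like. Everything here is hypothetical: `flawed_stub_model` is a deliberately unreliable stand-in for a real chatbot call, so the harness runs without any API.

```python
# Toy sketch of an "animal-grade" counting test for a language model.
# `flawed_stub_model` is a hypothetical stand-in for a chatbot; it guesses
# approximately, the way pure association might, rather than counting exactly.
import random

def make_counting_prompt(n: int, obj: str) -> str:
    """Build a simple 'how many?' question, e.g. '... apple apple apple?'."""
    return f"How many words are in this list: {' '.join([obj] * n)}?"

def flawed_stub_model(prompt: str) -> str:
    """Stand-in model: usually close to the right count, but not reliable."""
    words = prompt.split(":")[1].strip(" ?").split()
    guess = len(words) + random.choice([-1, 0, 0, 1])  # approximately right
    return str(guess)

def counting_accuracy(model, max_n: int = 8, trials: int = 50) -> float:
    """Fraction of trials where the model states the exact count."""
    correct = 0
    for _ in range(trials):
        n = random.randint(1, max_n)
        answer = model(make_counting_prompt(n, "apple"))
        correct += (answer.strip() == str(n))
    return correct / trials
```

Swapping `flawed_stub_model` for a real chatbot call would turn this into the honeybee-style test described above; the point is that the harness scores exact counting, not plausible-sounding answers.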

b.2) In fact I suspect a lot of the mystery of "emergent" abilities in chatbots, alongside their inhuman stupidity, will be explained as properties of human language, where ChatGPT gets extra human intelligence "for free" without having any associated reasoning abilities. To use a horrible but tech-friendly analogy, language seems to act as a kind of "bytecode" for facts and ideas, which can be chopped up and recombined according to fairly simple combinatorial rules. This always works syntactically, and can lead to correct answer prediction without any understanding of the underlying semantics. It is analogous to a programmer combining functions in C# purely by compatible type signature from the compiled DLLs: in many cases this will generate plausible programs, and sometimes even correct ones, especially if the DLL was written by a careful programmer. But it can also lead to critical mistakes no human dev would make if they knew what they were doing. Decompiling such a program and trying to figure out what the dev was thinking (assuming it was written by one person) would be confounding: brilliance alongside bizarre negligence and boneheaded errors. The mystery is explained once you understand how the program was written, and that most of the brilliance was inherited and the rest accidental. So by analogy, in certain situations LLMs allow the primitive technique of associative reasoning to be elevated into deductive / social / etc. reasoning, despite the underlying neural network not really being capable of that reasoning by itself. The underlying reasoning is somehow "encoded" by humans into language, preserved by the LLM even across different problems, and "decoded" when humans read the generated sentence.
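
To make the analogy concrete, here is a toy sketch (in Python rather than C#, and with made-up functions) of how composing functions purely by compatible type signature can produce a program that type-checks perfectly but is semantic nonsense:

```python
# Two hypothetical library functions that happen to share the type
# signature (float) -> float.
def celsius_to_fahrenheit(x: float) -> float:
    return x * 9 / 5 + 32

def miles_to_kilometers(x: float) -> float:
    return x * 1.60934

# A purely syntactic composer sees the two as interchangeable, because
# only the type signatures are consulted, and freely chains them:
def plausible_but_wrong(temp_c: float) -> float:
    # Type-checks fine; semantically it converts a temperature as if it
    # were a distance. No human dev who understood the meanings of these
    # functions would write this.
    return miles_to_kilometers(celsius_to_fahrenheit(temp_c))
```

Each individual function is correct (the inherited brilliance); the composition is the boneheaded error that only shows up at the level of meaning.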

c) I expect by 2050 or so we'll have AI with the same pure reasoning and factual world-building abilities as a pigeon - it would make for a truly formidable chatbot. But such an AI might not be sentient in the way that a pigeon is! An important trait separating sentient animals from sponges or plants is that their lives are characterized by multiple conflicting goals, and their brains weigh needs vs. opportunities in independently pursuing those goals - they have as much free will and choice as humans do. This is not really the case for a chatbot. In fact, I think transformer neural networks are truly more intelligent than a nematode - not just vaster or faster, their associative reasoning abilities are meaningfully more advanced - but a nematode makes (very unsophisticated) decisions between "find food," "escape threat," and "find mate", while GPT only ever performs "complete text in a way humans find plausible." I suspect there is work philosophers and biologists need to do to help AI ethics folks clarify some of the concerns.
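
A crude way to see the structural difference described here, as a Python sketch. This is an illustration only, with made-up weights and thresholds, not a model of nematode biology:

```python
# Toy contrast: nematode-style arbitration between conflicting goals
# (weighing internal needs against current opportunities) versus a
# chatbot's single fixed objective. All numbers are invented.
def nematode_step(hunger: float, threat: float, mate_nearby: float) -> str:
    """Pick one behavior by scoring each goal against current state."""
    scores = {
        "escape threat": threat * 2.0,                # threats weigh heavily
        "find food": hunger,
        "find mate": mate_nearby * (1.0 - hunger),    # suppressed when hungry
    }
    return max(scores, key=scores.get)

def chatbot_step(_context: str) -> str:
    """No competing drives: the objective never varies with internal state."""
    return "complete text in a way humans find plausible"
```

However unsophisticated the scoring, `nematode_step` has something to arbitrate; `chatbot_step` does not, which is the distinction the sentience point above turns on.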

Please check out https://betterimagesofai.org on the role that stock images of AI play in adding to this problem!

Is there a marketing/commercial aspect to the anthropomorphizing? Car commercials come to mind.

I'm working on https://syntheticusers.com/ and I believe this is exactly one of the use cases in which anthropomorphizing is a benefit, not a hindrance.
