Now we hardly dare suggest milestones like these anymore. Maybe if an AI can write a publishable scientific paper all on its own? But Sakana can write crappy not-quite-publishable papers. And surely in a few years it will get a little better, and one of its products will sneak over a real journal’s publication threshold, and nobody will be convinced of anything. If an AI can invent a new technology? Someone will train AI on past technologies, have it generate a million new ideas, have some kind of filter that selects them, and produce a slightly better jet engine, and everyone will say this is meaningless. If the same AI can do poetry and chess and math and music at the same time? I think this might have already happened, I can’t even keep track.
So what? Here are some possibilities:
First, maybe we’ve learned that it’s unexpectedly easy to mimic intelligence without having it. This seems closest to ELIZA, which was obviously a cheap trick.
Second, maybe we’ve learned that our ego is so fragile that we’ll always refuse to accord intelligence to mere machines.
Third, maybe we’ve learned that intelligence is a meaningless concept, always enacted on levels that don’t themselves seem intelligent. Once we pull away the veil and learn what’s going on, it always looks like search, statistics, or pattern matching. The only difference is between intelligences we understand deeply (which seem boring) and intelligences we don’t understand enough to grasp the tricks (which seem like magical Actual Intelligence).
I endorse all three of these. The micro level — a single advance considered in isolation — tends to feel more like a cheap trick. The macro level, where you look at many advances together and see all the impressive things they can do, tends to feel more like culpable moving of goalposts. And when I think about the whole arc as soberly as I can, I suspect it’s the last one, where we’ve deconstructed intelligence into unintelligent parts.
— Scott Alexander, Sakana, Strawberry, and Scary AI
Astral Codex Ten, 18 September 2024
For what it’s worth, I don’t think it’s actually true that intelligence tout court is a meaningless concept, or that the predicament he describes in his third response to the moving-goalposts problem really amounts to discovering that it might be one.
(It might show that it’s a complex concept — I would go further and argue that it’s a complex, fuzzy-boundaried family resemblance concept — and one whose component parts just don’t always fractally exhibit the critically distinguishing features of the whole. But, well, big deal; there are lots of concepts like that, and it’s interesting to find out that a concept is one of them, but it doesn’t mean you’ve found out the concept is meaningless.) But I do think Scott’s right that the moving-goalposts problem is a big problem, and one that ought to provoke more thought amongst people proposing critical tests for what intelligence is or where it can and cannot be found.