There is nothing out-of-distribution
AI turned out to be very simple if you think about it. You just make a model that works with something generic and feed it as much training data as you can. The generic data I mean here is text. People have had writing for thousands of years, and the whole world is built on it: we write, we speak, we read, and we listen our entire lives. Text is so deep in our brains that it's hard to think of anything that cannot be described in it.
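To make the recipe concrete, here is a minimal sketch of my own (not anything from the post): a character-level next-token predictor trained by counting which character follows which. Real LLMs replace the count table with a transformer and the toy corpus with trillions of tokens, but the loop has the same shape: read text, learn a distribution over the next token given context, then sample.

```python
# A toy "generic model fed with text": a character-level bigram predictor.
# Illustration only; the corpus and function names are made up for this sketch.
from collections import Counter, defaultdict
import random

corpus = "we write, we speak, we read, and we listen our entire lives. "

# "Training": count how often each character follows each other character.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(prev: str) -> str:
    """Sample the next character in proportion to how often it followed `prev` in training."""
    chars, weights = zip(*counts[prev].items())
    return random.choices(chars, weights=weights)[0]

# "Inference": generate text one token at a time, the same way an LLM decodes.
text = "w"
for _ in range(60):
    text += sample_next(text[-1])
print(text)
```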
Some AI pessimists love to say that "LLMs do not generalize out-of-distribution", but if they are built to understand and write text, they can generalize to anything that can be described as text, which is pretty much everything. What people are actually pointing at when they bring up generalization problems is intelligence.
Why couldn't LLMs solve simple puzzles two years ago if they can generalize? The reason is simple: LLMs were stupid. Bad data, bad models, and bad training produced models with very limited intelligence. They simply lacked the IQ to generalize well. Nothing has changed conceptually at a fundamental level in the past couple of years; we just gathered better data, designed better architectures, and improved training algorithms.
The things LLMs fail at right now usually aren't "boolean" in nature. If a model solves only 1% of some class of tasks today, that means it can solve them sometimes, and the next generations of models will solve more. And it's hard to find a task whose simplest version an LLM couldn't complete today.
Failing to solve 100% of a given kind of task right away just means a lack of intelligence. To generalize over a problem, you need the intelligence to understand and solve that problem well. There is no data limitation in LLMs in the sense of generalization ability, only an intelligence limitation, and that is improving rapidly.
I believe LLMs are one of the valid paths to ASI. There could be other paths, even better and more general ones, but LLMs can get there too. ASI is not far away, and judging by what we'll see in 2026, we won't have to change much to reach a superhuman level of general intelligence. I don't see anything that could stop LLMs from progressing further at the current exponential pace.