I wouldn’t be so skeptical. There’s a quote about reading books: “It’s as if someone else is thinking for you. You experience someone else’s thinking process.” LLMs have read many books and have learned to emulate the process of writing them to some extent.
In this way, LLMs seem like fixed thinking pathways: it feels as if they've captured something about the thinking process itself. What makes it even more interesting is that their outputs are suitable as inputs for the next iteration, so they can, in a sense, loop on themselves. However, they don't actually learn, and looping on your own output without external feedback isn't enough.
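To make that looping concrete, here's a minimal sketch. The model name and prompt are placeholders, and any completion API would do; OpenAI's Python client is shown as one option:

```python
from openai import OpenAI  # any LLM client works; OpenAI shown as one option

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Each output becomes the next input: that's the loop. Note that no
# external feedback ever enters it. The model only sees its own prior
# output, and its weights never change, so it drifts rather than learns.
thought = "What would it take for a language model to learn in real time?"
for _ in range(5):
    thought = generate("Continue this line of thinking:\n" + thought)
```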
So they're somewhat offline. The question is: what would it take to make them learn in real time?
Currently, some applications give them an external knowledge store and use the model as static cognition that works with and updates that store. They also feed it real-time contextual feedback.
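A minimal sketch of that pattern, reusing `generate` from above. The plain list and keyword matching are stand-ins for the embeddings and vector store a real system would use:

```python
# The frozen LLM reads from and writes to an external memory. Whatever
# "learning" happens lives in this store; the model's weights never change.
memory: list[str] = []

def step(observation: str) -> str:
    # Retrieve: naive keyword overlap stands in for embedding search.
    relevant = [m for m in memory if any(w in m for w in observation.split())]
    prompt = (
        "Relevant notes:\n" + "\n".join(relevant[-5:]) +
        "\n\nNew observation: " + observation +
        "\nWhat should be done, and what is worth remembering?"
    )
    answer = generate(prompt)
    # Write back: this is the real-time contextual feedback loop, since
    # each observation updates the store the next step will read from.
    memory.append("Observed: " + observation + " -> " + answer[:200])
    return answer
```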
Some of the most interesting examples in recent months are Minecraft bots driven by code-writing LLMs and the LLM-powered village simulation. These showcase the ability of LLMs to function as learning, adapting agents when combined with other systems. It's just that the LLM plays the role of intuition, random creativity, the subconscious: a part that sadly does not learn on its own and requires constant input from the outside world.
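The Minecraft bots (presumably projects along the lines of Voyager) close the loop with genuinely external feedback: the LLM drafts code, the environment executes it, and errors flow back into the next prompt. A rough sketch, with `exec` standing in for the actual game API and `memory` reused from above:

```python
# The LLM drafts code, the environment runs it, and failures flow back
# into the next prompt. Working code is kept as a reusable "skill";
# once more, the learning lives outside the model itself.
def try_skill(task: str, attempts: int = 3) -> str | None:
    feedback = ""
    for _ in range(attempts):
        code = generate("Write Python code for this task: " + task + "\n" + feedback)
        try:
            exec(code, {})       # stand-in for the real game API
            memory.append(code)  # store working code as a skill
            return code
        except Exception as err:  # the outside world talks back
            feedback = "Previous attempt failed with: " + repr(err) + ". Fix it."
    return None
```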