We are starting to understand the limits of transformer-based LLMs, and the current state of the art behind the doors of the AI giants is very likely no longer pure transformers. You can read as much from recent announcements.
Very large, effectively unbounded context length. Short-term, long-term, and meta memory. World models with diverse sensory input from images, audio, and text. The capacity to act physically (robots) or electronically (MCP and A2A). New learning algorithms trained on synthetic data generated in simulation. All of this is changing the AI landscape.
Those who control these foundation models are the superpowers of tomorrow, the ones who will decide the future of humanity.
Where is Europe's place in all of this? I don't think regulations will protect us.
In this race, only fierce competition will prevail. I truly hope our regulators will think about how to invest in and advance fundamental research, instead of putting chains and locks on whatever innovation may still happen here.