AI cannot reason, says an Apple study. Well, they're right, but they say nothing new.
Reasoning models are an artefact of long compute cycles combined with a good prompt. The "reasoning" is baked into the system prompt and into the model itself. Certainly, training and long context play their part in making us believe the model can reason.
But what really is reasoning?
Isn't it the ability to understand relations and to deduce from facts? To have a sense of cause and effect? To abstract, apply common sense, and distinguish right from wrong?
Reasoning is an acquired skill, and I argue machines will achieve it with larger contexts (memory) and a better understanding of our world.
After all, do we even have a good understanding of reasoning in humans? Does the ability to solve intricate mathematical or logical problems really constitute the ability to reason?