I often test new LLMs with the following question:

Given the following conditions, how many ways can Professor Y assign six different books to four different students?

  • The most expensive book must be assigned to student X.
  • Each student must receive at least one book.
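
For reference, the intended count is small enough to verify by brute force. Below is a minimal Python sketch of that check (the integer encoding of the books and students is mine and is not part of the prompt itself):

    from itertools import product

    # Books 0-5, where book 0 is the most expensive;
    # students 0-3, where student 0 is student X.
    books, students = range(6), range(4)

    count = 0
    for assignment in product(students, repeat=len(books)):
        if assignment[0] != 0:
            continue  # the most expensive book must go to student X
        if set(assignment) == set(students):
            count += 1  # every student receives at least one book

    print(count)  # 390

The enumeration agrees with the inclusion-exclusion count 4^5 − 3·3^5 + 3·2^5 − 1·1^5 = 390.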

The first model that could solve this question was OpenAI’s o1, followed by Google’s Gemini 2.0 Flash Thinking. More such thinking models have since emerged, including DeepSeek’s R1 and Anthropic’s Claude 3.7 (with extended thinking mode), all of which handle the problem flawlessly.

Unlike earlier models that produce answers based primarily on pattern recognition, these thinking models employ a “chain-of-thought” approach, deliberately working through problems step by step. This methodical process enables them to tackle complex tasks in science, coding, and mathematics more effectively, though it requires more time and computational resources.

However, I was surprised when DeepSeek’s new non-thinking model, DeepSeek-V3-0324, also solved this problem correctly. This suggests that either my test problem has made its way into the model’s training data or standard, non-thinking models are advancing significantly as well.

I remain skeptical that these models can achieve truly groundbreaking discoveries independently. Yet, I cannot deny that most routine intellectual tasks, including much of what constitutes university-level education, could soon be competently handled by these LLMs.

Reflecting on this, I often recall a passage from Andrew Hodges’s biography Alan Turing: The Enigma:

Perhaps this was the most surprising thing about Alan Turing. Despite all he had done in the war, and all the struggles with stupidity, he still did not think of intellectuals or scientists as forming a superior class. The intelligent machine, taking over the role of the ‘masters’, would be a development which would cut the intellectual expert down to size. As Victorian technology had mechanised the work of the artisans, the computer of the future would automate the trade of intelligent thinking. The craft jealousy displayed by human experts only delighted him. In this way he was an anti-technocrat, subversively diminishing the authority of the new priests and magicians of the world. He wanted to make intellectuals into ordinary people. This was not at all calculated to please Sir Charles Darwin.

Turing’s perspective seems remarkably prophetic in our current AI era. As machines increasingly master intellectual tasks we once considered uniquely human, they challenge our tendency to elevate intellectual prowess above other human qualities. Perhaps Turing would view today’s AI advancements not as threats, but as fulfillments of his vision—tools that democratize intellectual work and remind us that our human value extends far beyond our capacity for calculation or logical reasoning. In this way, AI may ultimately serve to humble intellectual elitism and redirect our focus toward uniquely human virtues that machines cannot replicate.