It takes these very simple-minded instructions - 'Go fetch a number, add it to this number, put the result there, perceive if it's greater than this other number' - but executes them at a rate of, let's say, 1,000,000 per second. At 1,000,000 per second, the results appear to be magic. — Steve Jobs
Claude Code feels like magic because it is iterative. In principle, any problem can be solved by random search: you just iterate through the whole space of possible solutions until you find one that works.
Here, let me illustrate:
intelligence = heuristic * attempts
If your attempts are purely random, you need roughly as many attempts as there are points in the search space before you find a solution. A good heuristic cuts that number down dramatically, and a heuristic is essentially what an LLM is.
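To make that concrete, here is a toy comparison in Python: pure random guessing over a search space versus a heuristic that narrows it, with binary search standing in for the LLM's prior. The setup and numbers are illustrative only.

```python
import random

# Toy search: find a hidden value in a space of 10,000 candidates.
SPACE = list(range(10_000))
TARGET = 4_271

def random_search() -> int:
    """Pure random guessing: expected attempts ~ half the search space."""
    attempts = 0
    candidates = SPACE[:]
    random.shuffle(candidates)
    for guess in candidates:
        attempts += 1
        if guess == TARGET:
            break
    return attempts

def heuristic_search() -> int:
    """A heuristic ('higher or lower' feedback, i.e. binary search)
    collapses the space logarithmically: ~14 attempts instead of ~5,000."""
    attempts, lo, hi = 0, 0, len(SPACE) - 1
    while lo <= hi:
        attempts += 1
        mid = (lo + hi) // 2
        if mid == TARGET:
            break
        if mid < TARGET:
            lo = mid + 1
        else:
            hi = mid - 1
    return attempts

print("random:   ", random_search(), "attempts")
print("heuristic:", heuristic_search(), "attempts")
```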
Claude Code uses the same models that are available through the API or the web interface. Yet users feel a boost in intelligence. The model didn't get smarter, but because Claude Code can make several attempts on its own, its effective intelligence increases for the end user.
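A minimal sketch of what "making several attempts on its own" might look like. This is not Claude Code's actual implementation: `ask_model` is a hypothetical callable wrapping an LLM call, and the loop assumes a pytest-based test suite.

```python
import subprocess

MAX_ATTEMPTS = 10

def run_tests() -> tuple[bool, str]:
    """Run the project's test suite; return (passed, output)."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def agent_loop(ask_model) -> str:
    """Iterate: let the model propose an edit, run the tests,
    feed the failure back, repeat until green or out of budget."""
    feedback = "Make the test suite pass."
    for attempt in range(1, MAX_ATTEMPTS + 1):
        ask_model(feedback)          # model edits files in the working tree
        passed, output = run_tests()
        if passed:
            return f"solved on attempt {attempt}"
        feedback = f"Tests still failing:\n{output}"
    return "gave up"
```

The key point is the feedback edge: each failed attempt feeds the error output back into the next one, so the attempts are not independent guesses but a guided walk through the search space.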
As LLM performance plateaus, further intelligence can be derived from the second factor: the number of attempts. In this regard, AI tools have value of their own.
I have been using Claude Code for the last week or so. I dismissed it at first because I thought a chat window where I manually go back and forth was enough. But there is something to be gained from speed and autonomy.
New Era?
I've used LLMs extensively but remained skeptical of their practical value. Claude Code changed that perspective through one concrete test: updating the dependencies of a project with a compilation step and an extensive test suite. The tool iterated back and forth dozens of times over 30-40 minutes. I intervened occasionally, but mostly watched it work.
Consider the implications of scale. What if Claude Code operated autonomously with massive parallel compute? Could it compress that 40-minute task into 10 minutes? 5 minutes? 1 minute?
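A rough sketch of that parallel-attempts idea: launch many independent runs and accept the first success, so wall-clock time shrinks toward the duration of a single lucky run. Everything here (the 20% success rate, the timing, the `one_agent_run` stand-in) is assumed for illustration.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def one_agent_run(seed: int) -> bool:
    """Hypothetical stand-in for one independent agent attempt:
    each run samples differently, so each has some chance of success."""
    rng = random.Random(seed)
    time.sleep(rng.uniform(0.1, 0.5))  # stands in for minutes of real work
    return rng.random() < 0.2          # assumed per-run success rate

def parallel_attempts(n_runs: int = 32) -> int | None:
    """Launch n independent attempts; return the seed of the first winner."""
    with ThreadPoolExecutor(max_workers=n_runs) as pool:
        futures = {pool.submit(one_agent_run, s): s for s in range(n_runs)}
        for future in as_completed(futures):
            if future.result():
                return futures[future]
    return None

print("first successful run:", parallel_attempts())
```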
If 1 minute proves feasible, would anyone go back to the old way of updating dependencies? What about other tasks? What else could be automated today, at current LLM performance?