
The Architecture Cahit Arf Drew in 1959: Agentic AI

When an LLM is asked to write code, it produces reasonable-looking output. But the model cannot know whether that code compiles or passes the tests. Without knowing the specifics of the project it is writing for, it simply performs the task you described, in isolation from any context. This is an AI Agent: a component with clear boundaries that performs a specific task.

Agentic AI, on the other hand, doesn’t just generate tokens; it analyzes its environment, generates and executes code that is compatible with that environment, observes and analyzes errors, and makes corrections iteratively. We are no longer talking about a product, but a mode of behavior.

Let’s clarify the difference:

AI Agent: A software component that autonomously performs a specific task. A bot that writes a particular test scenario or a tool that summarizes an incoming PR. Its task is defined, its scope is clear. It only solves problems that were defined when it was designed.

Agentic AI: A system’s ability to plan, make decisions, use tools, and dynamically determine its own steps along the way. The capacity to adapt to unforeseen situations.

So what is the actual breaking point that makes an agent agentic?

The environment.

When you provide an LLM with an environment such as a terminal, file system, usable tools, and test results, you give it a feedback loop. The agent performs an action, receives feedback from the environment, and determines its next step accordingly. An agent without an environment is like a developer writing code on paper: they have an idea of what the output might look like, but they can’t be sure of its correctness.
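This act–observe loop can be sketched in a few lines. Everything here is illustrative: `llm` and `run_in_environment` are hypothetical stand-ins for a real model call and a sandboxed executor, and the string-based protocol is purely for demonstration.

```python
def agent_loop(task, llm, run_in_environment, max_steps=5):
    """Minimal sketch of an agentic feedback loop: act, observe the
    environment's response, and decide the next step accordingly."""
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        # The model sees the full history and proposes the next action
        # (e.g. code to run), or signals that it is finished.
        action = llm("\n".join(history))
        if action == "DONE":
            break
        # The environment (terminal, compiler, test runner) provides
        # the feedback that a model alone cannot produce.
        observation = run_in_environment(action)
        history.append(f"Action: {action}")
        history.append(f"Observation: {observation}")
    return history
```

Without `run_in_environment`, the loop degenerates into the "code on paper" scenario above: the model can only guess at the observation instead of receiving it.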

Looking at the code automation example from this perspective:

A code completion assistant is an AI Agent. It does a specific job, its boundaries are clear.

A system analyzing a problem, creating a solution plan, writing the code, testing it, and going back to fix it if it finds errors — that is agentic behavior.

And what makes all of this possible is the existence of an environment in which the agent can operate.

What automates code writing is not a single smart agent; it is the combination of agentic capabilities such as planning, tool use, and iterative problem-solving within an environment. What matters going forward will not just be better models, but richer environments and better orchestration.
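The orchestration side can also be sketched briefly. This is a minimal, hypothetical shape, not a real framework: `plan` and `tools` are assumed callables, and the one-retry policy is just an example of feeding an error back into a tool.

```python
def orchestrate(goal, plan, tools):
    """Sketch of an orchestrator: turn a goal into planned steps, run each
    step through the named tool, and retry once on error with the error
    message fed back in."""
    results = []
    for tool_name, arg in plan(goal):   # planning: goal -> (tool, input) steps
        tool = tools[tool_name]         # tool use: dispatch to the environment
        output = tool(arg)
        if output.startswith("ERROR"):  # iteration: feed the error back once
            output = tool(f"{arg} | previous error: {output}")
        results.append((tool_name, output))
    return results
```

The point is that no single component here is "smart": the value comes from the combination of planning, tool dispatch, and error-driven iteration.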

[Image source: Matti Blume, Wikipedia]

One last note: While preparing this article, I recalled that in 1959, Cahit Arf gave a conference talk at Atatürk University titled “Can a Machine Think and How Can It Think?” I wondered whether there might be hints of what we now call Agentic AI, so I went back and checked the talk.

Cahit Arf believed that:

"Even if we increase the number of problems a machine can solve to ten thousand, if these are only problems that were solved when the machine was built, we cannot view it as an artificial brain."

He argued that what was truly needed was "the ability to adapt": being able to solve unforeseen problems as well. In the full text of the talk he even drew the architecture of a thinking machine: a preliminary memory (context window), a control center (orchestrator), a memory (memory), a transformation device (LLM reasoning), and an auxiliary memory for the cases where, as he put it, "you should ask such-and-such a person, you should look at such-and-such a book" (tool use). Our great scientist on the 10 Turkish Lira banknote described the difference between an AI Agent and Agentic AI 67 years ago.

This talk, which I remembered at the last moment, was a nice occasion for me to change the title I had originally set as “AI Agent or Agentic AI?”
