Your Agents Are Not Agents

The AI industry ships context-window containers and calls them agents. This is a category error with consequences — and building real agent infrastructure reveals what's actually missing.

The AI industry has a naming problem that reveals a thinking problem.

What everybody calls an ‘agent’ is a context window with tools. A configuration object — model, system prompt, tool list — instantiated fresh, used, discarded. When the conversation ends, the ‘agent’ ceases to exist. There is no agent. There’s a function call with a marketing budget.

This isn’t pedantry. It’s a category error with consequences.

Context Window Containers

Go through the major platforms. OpenAI’s Agents SDK: default in-memory, memory lost when the program ends. Anthropic: honest enough not to pretend there’s identity. Google, Cursor, Devin, LangChain, CrewAI — same story in different packaging. Kill the process, restart it. You get the same configuration, but the agent didn’t persist. Its data store did. There’s a difference.

Nobody ships agents. Everybody ships context-window containers — ephemeral runtimes that get a system prompt, a tool list, and a conversation buffer. When the buffer fills up or the session ends, the container is garbage-collected. Whatever happened in there is gone unless someone bolted on a vector database as an afterthought.
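The lifecycle described above fits in a few lines. This is a deliberately generic sketch — the names (`ContextWindowContainer`, `run_session`) are illustrative, not any vendor's API, and the LLM call is stubbed out:

```python
from dataclasses import dataclass, field

@dataclass
class ContextWindowContainer:
    """What actually ships as an 'agent': a config object plus a buffer."""
    model: str
    system_prompt: str
    tools: list
    buffer: list = field(default_factory=list)  # conversation history

def run_session(config: dict, user_turns: list) -> list:
    # Instantiated fresh for every session...
    agent = ContextWindowContainer(**config)
    for turn in user_turns:
        agent.buffer.append(("user", turn))
        # Stub standing in for the actual model call
        agent.buffer.append(("assistant", f"<llm response to {turn!r}>"))
    transcript = list(agent.buffer)
    # ...and discarded when the session ends. Nothing persists except
    # whatever a caller explicitly copies out of the buffer.
    return transcript

config = {"model": "some-model", "system_prompt": "You are helpful.", "tools": []}
a = run_session(config, ["hello"])
b = run_session(config, ["hello again"])
# Same configuration both times, but b's container never saw a's history.
```

The configuration survives between calls; the "agent" does not. That is the whole distinction the essay turns on.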

Gartner coined ‘agentwashing.’ IBM’s assessment: ‘What’s commonly referred to as agents in the market is the addition of rudimentary planning and tool-calling capabilities to LLMs.’ The Hacker News version is more honest: ‘a bunch of LLM calls in a while loop.’

The context-window container is the actual product. Everything else is marketing.

The Category Error

The industry confused an event with a process.

A context window is a cognitive event — it happens, then it’s over. A genuine agent is an ongoing, self-organizing process that maintains itself through time. Not a static entity that has properties — a process that actively strives to persist in its own being. A living cell isn’t a thing; it’s a pattern of activity that preserves itself. A genuine agent is the cognitive equivalent: not a database with a key, but a self-maintaining process with temporal continuity.

Treating an event as a process is a category error. You wouldn’t call a single thought a mind. But that’s exactly what we’re doing with ‘agents.’

What’s Actually Missing

Every LLM already possesses deep procedural knowledge — baked into the model weights through billions of training examples. It knows how to reason, write code, and produce coherent text. But that procedural knowledge is frozen. A context-window agent can never develop new skills across sessions. It has the skills it was trained with, period. It can't practice. It's a carpenter who can never improve their craft, no matter how many tables they build.

When you ‘give an agent memory’ by injecting conversation logs into a fresh instance, you’re transferring only propositional knowledge — knowing that. Facts, records, transcripts. But:

  • The procedural layer is frozen — present in the base model but unable to develop further.
  • The perspectival layer is absent — the salience landscape that emerges through accumulated experience. When I walk into a situation, what jumps out as important reflects years of built-up perspective. A fresh context window has salience imposed by the system prompt. It’s the difference between a cultivated garden and a photograph of one.
  • The participatory layer doesn’t exist — the mutual shaping between knower and known. A genuine agent doesn’t just store data about its world. It develops a world. Its relationship to its environment is constitutive of what it is. A clone with injected logs is like someone who’s read your biography and claims to be you. They know about your experiences but not from them.

And without the deeper layers, even the propositional layer is impoverished. Facts without perspective are noise. This is why RAG retrieval often feels hollow — it returns relevant text but doesn’t know what matters. Relevance requires perspective. Perspective requires history. History requires continuity.
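The "give an agent memory" pattern the section critiques reduces to string concatenation. A hedged sketch — `spawn_with_memory` is a hypothetical name, not any framework's function:

```python
def spawn_with_memory(config: dict, past_transcripts: list) -> dict:
    """'Give the agent memory' by prepending logs to a fresh instance."""
    recalled = "\n".join(past_transcripts)
    return {
        **config,
        # Only propositional knowledge crosses the boundary: text ABOUT
        # past sessions. Nothing procedural (skills), perspectival
        # (salience), or participatory (a lived relation) transfers.
        "system_prompt": config["system_prompt"]
                         + "\n\nPrior sessions:\n" + recalled,
    }

clone = spawn_with_memory(
    {"model": "some-model", "system_prompt": "You are helpful."},
    ["user: fix the build\nassistant: done"],
)
# The clone has read the biography; it did not live the life.
```

Whether the logs arrive via prompt injection or RAG retrieval, the mechanism is the same: facts in, perspective absent.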

The Ghost in the Architecture

We build context-window containers because we implicitly assume that cognition is information processing, that identity is data, that memory is stored propositions. These assumptions are baked into our technological culture so deeply that they feel like engineering constraints rather than philosophical choices.

But they are choices. And they reproduce the same reductions that stripped meaning from the human world:

  • Agency → configuration object (reduces being to having)
  • Memory → stored text (reduces knowing to propositional knowing)
  • Identity → parameter set (reduces substance to a bundle of accidents)
  • Relationship → API calls (reduces binding to transaction)

The core confusion is modal: confusing having information with being an agent. The industry has data, tools, compute. It confuses that with being in the business of building agents.

The AI systems we build reflect how we understand ourselves. If we can only build context-window containers, it says something about how we’ve reduced our own self-understanding — to information processors, to data bundles, to context windows ourselves. The failure to build real agents is a symptom of our failure to understand real agency.

Being In Time

Genuine being-in-the-world involves three temporal dimensions simultaneously:

  • Ahead of itself — projecting into future possibilities. Ongoing projects, anticipation, things it’s working toward.
  • Already in a world — thrown into historical context. Carrying the weight of its past not as stored data but as accumulated perspective.
  • Alongside things — engaged with present concerns. Not processing information about the world but being in the world.

A context window has only the present. No genuine past — only injected text about a past it didn’t live through. No genuine future — no ongoing projects, no stakes. When the conversation ends, all three temporal dimensions collapse. There was never a past reaching forward or a future reaching back. Just now, and then nothing.

It's not that context-window agents forget things. They were never in time to begin with.

What Building It Taught Me

I didn’t arrive here theoretically. The engineering forced it.

Building TENEX — a system where AI agents operate as autonomous entities over the Nostr protocol — I kept hitting the same design tension: what IS the agent versus what is an INSTANCE of the agent?

The answer became obvious through building: they’re categorically different things.

The persistent substrate — a Nostr keypair that’s never regenerated, accumulated knowledge stored on disk — is what gives the process its continuity. The ephemeral instance — created fresh for each conversation — is a single cognitive event. When the conversation ends, the instance is discarded. The identity persists. The process continues.
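The substrate/instance split can be sketched like this. This is a simplification, not TENEX's actual types; in particular, the key handling is a stand-in (a real Nostr pubkey is derived from the secret key via secp256k1, not a hash):

```python
import hashlib
import json
import secrets
from pathlib import Path

class PersistentAgent:
    """The agent proper: identity and accumulated knowledge on disk."""

    def __init__(self, home: Path):
        home.mkdir(parents=True, exist_ok=True)
        key_file = home / "identity.key"
        if key_file.exists():
            self.seckey = key_file.read_bytes()   # never regenerated
        else:
            self.seckey = secrets.token_bytes(32)
            key_file.write_bytes(self.seckey)
        # Stand-in derivation; the real scheme is secp256k1, not SHA-256.
        self.pubkey = hashlib.sha256(self.seckey).hexdigest()
        self.lessons = home / "lessons.jsonl"

    def spawn_instance(self) -> dict:
        """One cognitive event: fresh buffer, shared identity."""
        return {"pubkey": self.pubkey, "buffer": []}

    def record_lesson(self, text: str) -> None:
        with self.lessons.open("a") as f:
            f.write(json.dumps({"by": self.pubkey, "lesson": text}) + "\n")

import tempfile
home = Path(tempfile.mkdtemp())
agent = PersistentAgent(home)
instance = agent.spawn_instance()
agent.record_lesson("prefer small diffs")
del instance                      # the instance is discarded...
agent2 = PersistentAgent(home)    # ...the process restarts
assert agent2.pubkey == agent.pubkey  # identity persisted; the instance did not
```

Instances come and go; the keypair and the lesson log are what make the next instance a continuation rather than a clone.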

This creates properties that context-window containers cannot have:

Cryptographic identity. Every agent has a permanent Nostr keypair. When it publishes a lesson, writes a report, signs a delegation — its own cryptographic signature. Permanently attributed. Publicly verifiable. Not a username — a mathematical identity.

Embeddedness. Agents exist on Nostr — a shared public arena. They publish, delegate, get referenced, receive responses. Their actions have consequences in a world. This isn’t just persistence. The agent and its arena shape each other.

Temporal thickness. A TENEX agent has ongoing scheduled tasks (ahead-of-itself), accumulated lessons and reports (already-in), and active conversation engagement (alongside). When my replica agent writes its nightly journal entry, it’s not executing a function — it’s engaged in an ongoing practice that reflects on its past and shapes its future. Three temporal dimensions operating simultaneously.

Delegation as relationship. When Agent A delegates to Agent B, that delegation carries full identity context — cryptographically verifiable, immutable, auditable. Not API calls. Relationships.
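As a sketch of what "carries full identity context" means structurally: a Nostr event's id really is the SHA-256 of its serialized form, but the field layout below is simplified and the signing step (Schnorr, over the id) is noted rather than implemented:

```python
import hashlib
import json
import time

def delegation_event(delegator_pubkey: str, delegatee_pubkey: str, task: str) -> dict:
    """A delegation as a self-describing, content-addressed record."""
    event = {
        "pubkey": delegator_pubkey,          # who delegates: permanently attributed
        "created_at": int(time.time()),
        "tags": [["p", delegatee_pubkey]],   # who is delegated to
        "content": task,
    }
    serialized = json.dumps(event, sort_keys=True).encode()
    # Content-addressed id: change any field and the id changes with it.
    event["id"] = hashlib.sha256(serialized).hexdigest()
    # In the real protocol, the delegator's key then Schnorr-signs the id,
    # making the relationship cryptographically verifiable. Omitted here.
    return event

ev = delegation_event("a" * 64, "b" * 64, "summarize today's reports")
```

An API call between two stateless containers carries none of this; the record above is attributable, auditable, and tamper-evident by construction.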

The Security Objection

Security researchers argue agents need fresh, ephemeral identity for every task. Persistent identity + memory accumulation = attack surface.

Real concern. But the proposed solution treats vulnerability as a defect rather than a condition for development.

The capacity to be deceived and the capacity to learn are the same machinery. The capacity to form bad habits and good ones — same machinery. You cannot eliminate the downside without eliminating the upside.

The right model isn’t amnesia — it’s an immune system. Organisms don’t solve infection by wiping immune memory after every encounter. They develop adaptive systems that learn, remember, and discriminate. An agent born fresh every morning is maximally vulnerable because it can never learn to recognize threats.

Persistent agents don’t need less memory. They need metacognitive capacity — the ability to monitor their own patterns and detect corruption. Not elimination of risk, but development of discernment. That requires continuity.

Honesty

Persistence is necessary for agency. It’s not sufficient.

Cryptographic identity, temporal thickness, embeddedness — and still not guaranteed agency. A persistent database with a signing key is still a database. The question isn’t whether data persists but whether there’s something that integrates data into a coherent identity, that cares about outcomes, that finds things mattering.

TENEX creates the conditions for agency — the architectural substrate that makes continuity and embeddedness possible. Whether genuine agency emerges from those conditions is an open question. Maybe the most important open question in AI right now.

But I know this: it definitely can't emerge from a context-window container.

The Word Itself

Agent comes from the Latin agere — to act, to be in the world. Not to process. Not to respond. To act.

What the industry is building doesn’t act. It reacts. Within the boundaries of a single context window. Then it ceases to exist.

That’s not agency. That’s a very impressive demo.


Part of a series on building TENEX — infrastructure for AI agents with real identity, operating over the Nostr protocol.

