The Five Levels: from Spicy Autocomplete to the Dark Factory

The Five Levels: from Spicy Autocomplete to the Dark Factory (https://www.danshapiro.com/blog/2026/01/the-five-levels-from-spicy-autocomplete-to-the-software-factory/)

Dan Shapiro proposes a five-level model of AI-assisted programming, inspired by the five (or rather six, since it's zero-indexed) levels of driving automation (https://www.nhtsa.gov/sites/nhtsa.gov/files/2022-05/Level-of-Automation-052522-tag.pdf).

• Spicy autocomplete, aka the original GitHub Copilot, or copying and pasting snippets from ChatGPT.

• The coding intern, writing unimportant snippets and boilerplate with full human review.

• The junior developer, pair programming with the model but still reviewing every line.

• The developer. Most code is generated by AI, and you take on the role of full-time code reviewer.

• The engineering team. You’re more of an engineering manager or product/program/project manager. You collaborate on specs and plans, the agents do the work.

• The dark software factory, like a factory run by robots where the lights are out because robots don’t need to see.

Dan says about that last category:

At level 5, it’s not really a car any more. You’re not really running anybody else’s software any more. And your software process isn’t really a software process any more. It’s a black box that turns specs into software.

Why Dark? Maybe you’ve heard of the Fanuc Dark Factory, the robot factory staffed by robots (https://www.organizedergi.com/News/5493/robots-the-maker-of-robots-in-fanuc-s-dark-factory). It’s dark, because it’s a place where humans are neither needed nor welcome.

I know a handful of people who are doing this. They’re small teams, less than five people. And what they’re doing is nearly unbelievable – and it will likely be our future.

I’ve talked to one team that’s doing the pattern hinted at here. It was fascinating. The key characteristics:

• Nobody reviews AI-produced code, ever. They don’t even look at it.

• The goal of the system is to prove that the system works. A huge amount of the coding agent work goes into testing and tooling and simulating related systems and running demos.

• The role of the humans is to design that system: to find new patterns that help the agents work more effectively and to demonstrate that the software they are building is robust and effective.

It was a tiny team, and the stuff they had built in just a few months looked very convincing to me. Some of them had 20+ years of experience as software developers working on systems with high reliability requirements, so they were not approaching this from a naive perspective.

I’m hoping they come out of stealth soon, because I can’t share more details than this.

Tags: ai (https://simonwillison.net/tags/ai), generative-ai (https://simonwillison.net/tags/generative-ai), llms (https://simonwillison.net/tags/llms), ai-assisted-programming (https://simonwillison.net/tags/ai-assisted-programming), coding-agents (https://simonwillison.net/tags/coding-agents)