AI is Anti-Human (and assorted qualifications)

Vibe coding is taking the nostr developer community by storm. While it’s all very exciting and interesting, I think it’s important to pump the brakes a little - not in order to stop the vehicle, but to try to keep us from flying off the road as we approach this curve.
In this note Pablo is subtweeting something I said to him recently (although I’m sure he’s heard it from other quarters as well):
There is a naive, curmudgeonly case for simply “not doing AI”. I think the intuition is a good one, but the subject is obviously more complicated - not doing it, either on an individual or a collective level, is just not an option. I recently read Tools for Conviviality by Ivan Illich, which I think can help us here. For Illich, the best kind of tool is one which serves “politically interrelated individuals rather than managers”.
This is obviously a core value for bitcoiners. And I think the talks given at the Oslo Freedom Forum this year present a compelling case for adopting LLMs, both 1. to use them for good, and 2. to develop them further so that they don’t get captured by corporations and governments. Illich calls both the telephone and print “almost ideally convivial”. I would add the internet, cryptography, and LLMs to this list, because each one allows individuals to work cooperatively within communities to embody their values in their work.
But this is only half the story. Illich also points out how “the manipulative nature of institutions… have put these ideally convivial tools at the service of more [managerial dominance].”
Preventing the subversion and capture of our tools is not just a matter of who uses what, and for which ends. It also requires an awareness of the environment that the use of the tool (whether for virtuous or vicious ends) creates, which in turn forms the abilities, values, and desires of those who inhabit the environment.
The natural tendency of LLMs is to foster ignorance, dependence, and detachment from reality. This is not the fault of the tool itself, but of humans’ tendency to trade liberty for convenience. Nevertheless, the inherent values of a given tool naturally give rise to an environment through use: the tool changes the world that the tool user lives in. This in turn indoctrinates the user into the internal logic of the tool, shaping their thinking, blinding them to the tool’s influence, and neutering their ability to work in ways not endorsed by the structure of the tool-defined environment.
The result of this is that people are formed by their tools, becoming their slaves. We often talk about LLM misalignment, but the same is true of humans. Unreflective use of a tool creates people who are misaligned with their own interests. This is what I mean when I say that AI use is anti-human. I mean it in the same way that all unreflective tool use is anti-human. See Wendell Berry for an evaluation of industrial agriculture along the same lines.
What I’m not claiming is that a minority of high-agency individuals can’t use the technology for virtuous ends. In fact, I think that is an essential part of the solution. Tool use can be good. But tools that bring their users into dependence on complex industry and catechize their users into a particular system should be approached with extra caution. The plow was a convivial tool, and so were early tractors. Self-driving John Deere monstrosities are a straightforward extension of the earlier form of the technology, but are self-evidently an instrument of debt slavery, chemical dependency, industrial centralization, and degradation of the land. This over-extension of a given tool can occur regardless of the intentions of the user. As Illich says:
There is a form of malfunction in which growth does not yet tend toward the destruction of life, yet renders a tool antagonistic to its specific aims. Tools, in other words, have an optimal, a tolerable, and a negative range.
The initial form of a tool is almost always beneficial, because tools are made by humans for human ends. But as the scale of the tool grows, its logic gets more widely and forcibly applied. The solution to the anti-human tendencies of any technology is an understanding of scale. To prevent the overrun of the internal logic of a given tool and its creation of an environment hostile to human flourishing, we need to impose limits on scale.
Tools that require time periods or spaces or energies much beyond the order of corresponding natural scales are dysfunctional.
My problem with LLMs is:
- Not their imitation of human idioms, but their subversion of them and the resulting adoption of robotic idioms by humans
- Not the access they grant to information, but their ability to obscure accurate or relevant information
- Not their elimination of menial work, but its increase (Bullshit Jobs)
- Not their ability to take away jobs, but their ability to take away the meaning found in good work
- Not their ability to confer power to the user, but their ability to confer power to their owner which can be used to exploit the user
- Not their ability to solve problems mechanistically, but the extension of their mechanistic value system to human life
- Not their explicit promise of productivity, but the environment they implicitly create in which productivity depends on their use
- Not the conversations they are able to participate in, but the relationships they displace
All of these dysfunctions come from the over-application of the technology in evaluating and executing the fundamentally human task of living. AI work is the same kind of thing as an AI girlfriend, because work is not only for the creation of value (although that’s an essential part of it), but also for the exercise of human agency in the world. In other words, tools must be tools, not masters. This is a problem of scale - when tool use is extended beyond its appropriate domain, it becomes what Illich calls a “radical monopoly” (the domination of a single paradigm over all of human life).
So the important question when dealing with any emergent technology becomes: how can we set limits such that the use of the technology is naturally confined to its appropriate scale?
Here are some considerations:
- Teach people how to use the technology well (e.g. cite sources when doing research, use context files instead of fighting the prompt, know when to ask questions rather than generate code)
- Create and use open source and self-hosted models and tools (MCP, stacks, tenex). Refuse to pay for closed or third-party hosted models and tools.
- Recognize the dependencies of the tool itself, for example GPU availability, and diversify the industrial sources to reduce fragility and dependence.
- Create models with built-in limits. The big companies have attempted this (resulting in Japanese Vikings), but the best-case effect is a top-down imposition of corporate values onto individuals. Still, the idea isn’t inherently bad - a coding model that refuses to generate code in response to vague prompts, or that asks clarifying questions, is one example (see the sketch after this list). Or a home assistant that recognizes children’s voices and refuses to interact.
- Divert the productivity gains to human enrichment. Without mundane work to do, novice lawyers, coders, and accountants don’t have an opportunity to hone their skills. But their learning could be subsidized by the bots in order to bring them up to a level that continues to be useful.
- Don’t become a slave to the bots. Know when not to use them. Talk to real people. Write real code, poetry, novels, scripts. Do your own research. Learn by experience. Make your own stuff. Take a break from reviewing code to write some. Be independent, impossible to control. Don’t underestimate the value to your soul of good work.
- Resist both monopoly and “radical monopoly”. Both naturally collapse over time, but by cultivating an appreciation of the goodness of hand-crafted goods, non-synthetic entertainment, embodied relationship, and a balance between mobility and place, we can relegate new, threatening technologies to their correct role in society.
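To make the “built-in limits” idea above concrete, here is a minimal sketch in Python of a wrapper that refuses to pass vague prompts through to a coding model and asks for clarification instead. Everything here is hypothetical and illustrative: the `is_vague` heuristic, the marker list, the word-count threshold, and the `generate` callable stand in for whatever model interface you actually use.

```python
from typing import Callable

# Hypothetical heuristic: treat a prompt as "vague" if it is short and
# names no concrete artifact (a file, a function, an error, a spec).
CONCRETE_MARKERS = (".py", ".ts", "def ", "function ", "error", "spec:", "test")

def is_vague(prompt: str, min_words: int = 12) -> bool:
    words = prompt.split()
    has_marker = any(m in prompt.lower() for m in CONCRETE_MARKERS)
    return len(words) < min_words and not has_marker

def limited_codegen(prompt: str, generate: Callable[[str], str]) -> str:
    """Refuse vague prompts instead of guessing. The limit lives in the
    tool itself rather than in the user's discipline."""
    if is_vague(prompt):
        return (
            "This prompt is too vague to generate code responsibly. "
            "Which file or function should change, and what behavior "
            "do you expect? Please restate with those details."
        )
    return generate(prompt)

if __name__ == "__main__":
    fake_model = lambda p: f"# generated code for: {p}"
    print(limited_codegen("make it better", fake_model))  # refused
    print(limited_codegen(
        "In utils.py, add a def slugify(title) that lowercases and "
        "hyphenates the title; raise ValueError on empty input.",
        fake_model,
    ))  # passed through
```

The heuristic itself is crude and beside the point; what matters is where the limit lives - in the tool’s own logic, where the user can inspect and adjust it, rather than in corporate policy or user willpower.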
I think implicit in all of this is the idea of technological determinism: that productivity is power, and if you don’t adapt, you die. I reject this as an artifact of Darwinism and materialism. The world is far more complex and full of grace than we think.
The idea that productivity creates wealth is, as we all know, bunk. GDP continues to go up, but ungrounded metrics don’t reflect anything about the reality of human flourishing. We have to return to a qualitative understanding of life as a whole, and contextualize quantitative tools and metrics within that framework.
Finally, don’t believe the hype. Even if AI delivers everything it promises, conservatism about changing our ways of life will slow the rate of change society is subjected to and allow time for reflection and proper use of the tool. Curmudgeons are as valuable as technologists. There will be no jobspocalypse if there is sufficient political will to value human good over mere productivity. It’s ok to pump the brakes.
hodlbod
June 6, 2025

Attempted a nuanced approach to the downsides of AI use:
@naddr1qv…pjq8wy72