Different Incentives, Different Outcomes
Worker co-ops in an age of agentic AI
We're already seeing it in software development. A good frontier-level LLM agent can do a lot of what junior and intermediate devs used to do: scaffolding, fixing bugs, refactors, test writing, docs, even a decent chunk of product work if you give it enough context.
This is going to spread beyond software. Anywhere work can be broken into repeatable tasks, AI agents are going to take a real slice of it. That doesn't mean every job disappears, but it does mean the economics change.
That's where incentives matter.
Capitalism says profit first. If a company can replace people with software, that's the rational move. Worker co-ops say employment first. We exist to employ our members, not to maximize profit. Same tools, different incentives, different outcomes.
So here's a hypothesis: co-ops could be safer places to work in industries disrupted by AI. If your job is to keep members employed, you don't treat automation the same way. You can use tools to reduce drudgery, improve quality, or open up new work — without turning it into a headcount reduction exercise.
But it's not that simple.
If AI makes a co-op twice as productive, what happens next? Do we just do more work with the same people? Do we grow membership more slowly because we don't need to hire as fast? Do we shorten weeks, take on riskier projects, lower prices, or spend more time on community work? None of that is automatically good or bad — it's just a new set of choices.
There's also the ethical question: should co-ops use agentic AI at all? If we're part of a broader labor movement, is it right to adopt tools that make other workers less needed? But if we refuse to use them, do we just get outcompeted by firms that don't share our values?
And then there's the bigger structural issue. Co-ops still operate inside a capitalist economy. We still have to compete for contracts and customers. That pressure is real. At the same time, co-ops can also build a solidarity economy: co-ops working with other co-ops, sharing work, sharing tools, collaborating instead of fighting each other to the bottom.
Maybe that's the path. Compete when we have to, collaborate when we can. Use AI as a tool to make work more humane, not to eliminate it. But that only works if we're honest about the tradeoffs — and make those tradeoffs explicitly instead of letting them happen by default.
We don't have to answer most of this yet. Billie Coop isn't making money yet; we're still fitting it around day jobs, and it's not our main source of income. But the goal is for it to become one, and as we get closer, these questions stop being abstract. We'll have to figure them out, and when we do, we'll write about how it goes. Think of this as the start of a series: we want to keep exploring how AI is affecting worker co-ops, and in some ways work and labor more broadly.
— Steve