"Workslop" was the logical outcome of productivity maxxing

Local throughput optimization always externalizes costs onto downstream colleagues.

The Web is drowning in slop. Hobby and professional resources alike have been degraded, not solely by AI, but by a partnership between LLMs and policy-makers who do not care what goes up on their websites so long as there is “content” to drive ad or subscriber revenue (which Wiep Hamstra terms the “webmaster’s paradise” in this essay). Coincidentally, Google admits that the open Web is in “rapid decline” as even YouTube and Spotify have been unable to handle the influx of slop.

But while consumers have the right (however hypothetical) to opt out, workers are not so lucky. AI mandates are still alive and still misguided, setting “more AI” rather than “more value” as their goal.

However, we might be seeing the tail end of this trend. AI usage is declining in the enterprise as the same executives who pushed AI tooling mandates onto their people fail to explain any actual benefits of those tools.

It turns out that scaling up outputs doesn’t actually translate into valuable outcomes. In fact, it can even undermine the valuable work being done.

The displacement effect of workslop

Just like in consumer spaces, workers do not like finding themselves on the receiving end of AI. And now, thanks to the Harvard Business Review, we have a word for this situation: workslop. These are outputs that look like something a person worked on… but on closer inspection turn out to be AI-generated material, copy-pasted without thinking.

Producing workslop is incentivized by a confluence of factors. First of all, it is just easier; time and effort are costs too, and people will always reach for the “cheapest” solution to their problem. When the problem is framed as “do outputs,” instantly producing outputs with AI seems like a fantastic solution. The behavior may even be lauded by senior leadership because it uses those expensive, innovative AI tools (which everyone nonetheless seems to hate for some reason). And finally, the low quality of the outputs doesn’t actually affect the person producing them.

As you might expect, the cost of producing these outputs doesn’t disappear. It is simply displaced onto the worker downstream of the AI user (sometimes at considerable expense to the company). Validating AI outputs was already difficult, but this pattern makes it even harder by human-washing the content. The worker downstream now has no idea where errors might be hiding: which parts of the code or report are reliable, and which risk being hallucinations. Even if only a small fraction of the content was actually AI-generated, the worker has no choice but to review all of it; the provenance of every decision in the artifact falls into question. In code, this phenomenon is known as shotgun surgery: a single, specific change scatters tiny, often logic-breaking edits throughout the codebase, and good luck finding them all.

Even if there is no handoff, the lack of a unifying intent as an organizing principle behind those decisions creates a costly, unmaintainable mess:

He and his team were doing more babysitting than he was comfortable with. Now, he was having trouble remembering who did what, what they did, why they did it, and let’s not even talk about how they did it.

Workslop also magnifies the process that Steve Farrugia calls “anti-design,” where the team’s effort is spent entirely on compensating for shoddy upstream decision-making. For every intentionally bad requirement (“build out this feature because it sounds cool”) there might be a dozen non-requirements (“ChatGPT wrote me a PRD that you can follow”); design teams have no way of knowing which parts of the guidance they receive are real and which can be challenged or simply ignored, because nobody actually made that decision.

Deslopping your toolbox

Even before workslop had a name, people were thinking about how to deal with it. The most radical approach may be Joan Westenberg’s: delete all your tools and become reacquainted with the full capabilities of the human brain.

But others think that might be going a bit far.

A Luddite is someone who is against the abuse — not the use — of technology.

Cory Doctorow has a framing that highlights a silver lining of the moment in which we find ourselves (which Scott Jenson expands upon here): automation following the principle of a “centaur” (a human mind guiding automated tools) is, conceptually, compatible with a healthy, value-creating collaboration.

However, a “reverse centaur” (automation making decisions on behalf of human workers) is inimical to a humane workplace.

A workslopper may feel confident that they are in the former camp: after all, they freely pick up the tools that produce these outputs. But their downstream collaborators are denied that choice. By refusing to take responsibility, the workslopper is turning colleagues into reverse centaurs without their consent.

This is not a technology issue; no amount of model improvements will make this okay. It is an ethics issue, and the only way to fix it is to fix your heart.

— Pavel at the Product Picnic