“AI can make mistakes.”
This phrase might as well be the slogan of our era. It follows on the heels of LLMs being hastily jammed into everything from software development to courtrooms to operating rooms. But for some reason, it’s treated as a get-out-of-jail-free card: AI can make mistakes, and yet we should still use it (must use it, at some companies). AI “will get better someday,” but we must use it today, while it still confidently makes catastrophic errors.
If a human told you things that were correct 80% of the time but claimed, flat out, with absolute confidence, that they were correct 100% of the time, you would dislike them and never trust a word they said.
We are extending unearned grace to LLMs
It’s common to talk about AI “replacing” such-and-such jobs. Implied within that word is some equivalence: that the AI is meeting the same bar that the person had set. Nothing could be further from the truth: the AI does a subpar job, far below the bar set by a professional. Even in coding, where proponents self-report the most benefits, LLMs actually create more bugs, more technical debt, more cognitive load.
In the public sphere, we also see AI behaving like a bad citizen. Their creators ignore no-crawl directives in the search for more training data. Small hosts are drowning under what is effectively a VC-funded DDoS attack. Between these scrapers on one side, and anti-browsers on the other, AI is killing both the infrastructure of the Web and the livelihoods of people who make it the vibrant place that it is today.
I want to call out this particular essay because it simultaneously highlights the ransacking of the public square and points to a piece of the puzzle I’ll talk about in a bit: the social structure that AI toolmen create to place their products beyond reproach:
I find myself, along with thousands upon thousands of others, in a ridiculous situation wherein I am constantly told that as a historian soon my services will no longer be necessary because software will just do all the thinking for us, and that my skills are worthless. Ironically, were the companies who have stolen all my work to train their models to pay me for them they would, however, go bankrupt.
If a colleague had the reliability rate of an LLM, they would be fired. If ordinary software did, it would be banned (indeed, we are seeing social media crackdowns for far lesser harms than chatbots inflict by design upon their users). Instead, companies give LLMs responsibilities that they would not trust their top humans with.
So how did AI tap into some endless well of grace? And why are companies putting up with it when it repeatedly threatens their own reputation and bottom line?
The intelligence illusion
The ELIZA effect is familiar to anyone who knows that AI wasn’t invented in 2022. A chatbot is anthropoglossic — it communicates like a human — creating an illusion that there’s a there there. I know a number of designers who are frustrated with “chat” being the dominant interaction pattern with LLM tools, but the reason for that is simple: without the illusion of another intelligence on the other side, these tools just aren’t very impressive; certainly not “VC billions” impressive.
Unless the design signals “this is AI!” at every turn, the illusion of magic disappears. It becomes regular software, and stops being forgiven for its unacceptable failure rate. But because it is a chatbot, we are willing to lend it our humanity endlessly. The same people who lose patience with a delivery team (for having to explain what they want the team to build) will happily go through endless iterations of prompts with a bot, burning credits with every cycle.
We are willing to put up with more from our new robot overlords, and expect much less in return:
LLMs are trained on all our shitty code, and we've taught them that that's what they should be outputting… Instead of wanting to learn and improve as humans, and build better software, we’ve outsourced our mistakes to an unthinking algorithm.
Is it because of the sycophancy? Because of the halo effect? Because we mistake faster for better?
I propose that it doesn’t actually matter.
Empathy for colleagues
The LLM experiment has taught us one thing: people are willing to tolerate error, explain themselves, collaborate, trust. Today, they are choosing to invest this positive energy into a synthetic slop extruder. But tomorrow, they could invest it into their fellow human beings, if they chose to do so.
I’m asking you to make that choice, and to help others around you make it too.
Take the grace you reserve for Claude, and instead extend it to the people you work with. Be patient when explaining what you need (in writing specs, and in critiques — both design skills). Set expectations that emphasize what matters. Take the time to understand their limitations, and leverage their strengths. Polish only what’s important, and don’t sweat the stuff that no one cares about.
Make the effort — because unlike an LLM, this relationship can go both ways. People can teach you new things. People can ask clarifying questions that help you refine your own thinking. People can provide their own experience, their own point of view. People can take initiative, and make progress towards goals you share.
And perhaps most importantly, people can build collective power. If you are going through some bullshit at work, ask a human colleague if they also think it is bullshit. You might be pleasantly surprised.
— Pavel at the Product Picnic
