Welcome back, picnickers! We’re picking up where we left off last week: as making becomes easier than thinking, teams are shoving things into prod faster than they can evaluate them for quality. Everyone is in a rush to get nowhere in particular. Urgency without direction is eating the industry alive.
A paper from earlier this year gives a name to this phenomenon. It’s not mere cognitive offloading (which was already studied as a problem last year). It’s full-blown cognitive surrender: abandoning our own judgment entirely, and letting AI take the wheel. Props to David Dabscheck, whose summary put this paper on my radar a month before Ars Technica did their feature about it.
Subjects were willing to accept faulty AI reasoning a whopping 73.2 percent of the time, while only overruling it 19.7 percent of the time…fluent, confident outputs are treated as epistemically authoritative, lowering the threshold for scrutiny.
I wrote about this phenomenon for NN/G a year ago, but I’m not a scientist, just a guy typing words into the internet.
What’s happening is pretty obvious, however. Generating working (for a given definition of “working”) code or plausible text has never been easier, while validating that the output is correct is getting proportionally harder. Every time someone involved in the process decides to “just AI it,” the chain of provenance breaks. Instead of building a mental model in manageable increments, we are now asked to absorb massive chunks of output being flung our way all at once.
Cognitive surrender is a leadership failure
Surrendering to this onslaught is far easier than asking for thoughtfulness: you get branded as an innovator rather than a party pooper, and leadership is thrilled at the prospect of more velocity. Getting Claude Code or Figma Make to do something is infinitely easier than getting it to do the exact thing that you need it to do. As a result, we are being trained to expect less of our tools, and to fit what we want into what we can get.
But when something goes wrong, who is responsible? It’s certainly not the model provider: after all, legally, these chatbots are for entertainment purposes only. When the little text at the bottom says “AI can make mistakes,” you must understand that someone is still responsible for those mistakes, and that someone is you:
You probably read it as "AI is capable of making mistakes; you should check the results". What it actually says is "AI is permitted to make mistakes; you are liable for the results, whether you check them or not".
Even when the LLM doesn’t make an outright mistake (in the sense of getting a verifiable fact wrong), it will still produce underwhelming outputs. And not only are LLM users unable to detect that the outputs are underwhelming, but their internal bar for quality progressively degrades to the point that even their own work begins to approach the mediocrity of the machine.
I don’t want to blame workers for this. People I know & respect who have surrendered to AI have done so under extreme duress. I’ve been writing about AI usage mandates here and there on the Picnic, and especially about how those mandates arrive without any other concrete guidance.
In the presence of demands to use a tool, without a clear understanding of what using the tool is meant to accomplish (beyond “more velocity”), workers are increasingly forced to abandon their own judgment in favor of the logic built into the product. Workers must either do their jobs in spite of the tools being forced on them (and thus drown under the deluge of added workload), or follow the path of least resistance and become a clerk for the machine:
[…] there is a seething unhappiness among both manual and intellectual workers because the resultant systems tend to absorb the knowledge from them, deny them the right to use their skill and judgement, and render them abject appendages to the machines and systems being developed.
While we designers are accustomed to sitting on our hands and complaining that we have no power, notionally there is some leader in your company responsible for the design practice. But those leaders have largely abdicated these responsibilities. VPs and CXOs who built their entire careers on governing pixelfucking can do nothing for their subordinates now.
“Faster outputs” was the governing principle of their life’s work, but today “going faster” means going without designers entirely. If we want the profession to remain relevant, we need to avoid cognitive surrender, and ditch the leaders whose limited skillset drives them to advocate for it.
Prevent cognitive surrender from leaking into your work
Outputs without outcomes may be the default, but it’s far from the only option available to us. Managers with a real point of view can make all the difference between shipping as much as possible in the hope that good ideas arise by chance, and making a deliberate choice about where the team can have real impact:
I'm rebooting how my org operates around a simple idea: every investment of design effort should connect to how we actually grow: attracting new customers, expanding usage, or retaining core customers. That's it. The default answer for everything else is "not now."
This kind of pivot can’t happen beneath the level of strategy. Tactically seeking out pain points to solve has extremely low returns, no matter how quickly you can churn them out. And if your leadership isn’t up to the task, you need to get some design judo going — for which no amount of high-fidelity prototyping will help you.
Artifacts that don’t exist on the fidelity cascade (such as power maps) are the only things that will help you break the cycle. And as with all such maps, it’s the work of making them, rather than the artifacts themselves, that creates value.
— Pavel at the Product Picnic
