Howdy, picnickers!
The “news” part of this newsletter covers the drama that brewed all month at grammar police company Superhuman (the company formerly known as Grammarly), after Wired found that it had sneakily added a feature that creates AI impersonations of famous writers without their permission. After a week of intense, non-stop backlash (and a class action lawsuit), they rushed to take it down.
Superhuman’s CEO Shishir Mehrotra then did an interview about it. Malka Older¹ has a great close reading of it over on Bluesky, to which I would like to add two things:
We find out from the interview exactly what kind of advice the feature gave out (nothing like what the writer it’s impersonating would have said).
We also learn that engagement with the feature was poor; turns out crime doesn’t pay.
There’s a lot to say about the cavalier attitude towards both other people’s intellectual property and what it means to receive knowledge and apply it. Christine Haskell puts it well:
The real issue is not one AI feature. It is the system underneath it: a culture that has spent decades treating friction as failure, depth as inconvenience, and expertise as something to extract rather than earn.
Reading between the lines, Mehrotra’s interview paints a picture that I think many tech workers will find familiar: features are conceived, coded, and shipped as quickly as possible. He is happy to admit that the feature was a mistake… in retrospect. But in the moment it actually mattered, critical thinking was swept away by the false urgency of pushing things out.
The industry is in the grip of directionless urgency
We often hear about how AI is “empowering” people to do this or that. Next to “democratizing,” it is probably the most common techno-solutionist cliché. But just like the “democratizing” line is full of problems if you stop to think about it, so too is “empowering.”
Lara Hogan has a very good framing in this talk (see also the blog post). Empowerment, you see, is the opposite of direction. It’s good to empower people when you want them to explore and play around in a low-stakes context. But under time pressure or when there is risk, what teams crave is direction.

Remember when we had a diagram segment? Maybe I should bring it back.
Look at that list, and then think about how LLM coding tools are being rolled out. The entire conversation is around urgency: you must be more productive yesterday. Work directly in code, because it’s faster. Commit your vibe PRs.
But no direction is forthcoming. We are being told to run faster, but not where to run to. There’s no clarity beyond a demand for outputs. No wonder teams feel lost.
The result is that businesses are faking their results from AI adoption. Those outputs are easiest to measure, so that’s what gets trumpeted as success:
Code can look right and pass the unit tests and still be wrong. The way you measure that is typically in benchmark tests. So a lot of these companies haven't engaged in a proper feedback loop to see what the impact of AI coding is on the outcomes they care about. Lines of code, number of [pull requests], these are liabilities. These are not measures of engineering excellence.
This mindset has taken over tech to such an extent that even we designers are trying to show our value by contributing to these metrics. Unfortunately, this is counterproductive; the entire purpose of our role is to provide the direction. When we abdicate that responsibility, we make it that much harder for our colleagues to do good work.
Because there’s a rhetorical sleight of hand involved with “empowering” everyone to write a line of code. A line of code does not have quality in and of itself, because it does not really have meaning on its own. You might as well pick an individual line of this blog and try to evaluate its quality. Code is a credence good (I truly recommend this article if you’ve only ever read AI takes from a tech perspective before), which is a fancy economic term for goods whose quality the buyer cannot judge even after the purchase:
The harder it is to observe quality at the moment of delivery, the greater the risk of a late-emerging loss, and the more central the question of proof becomes. AI therefore does not merely accelerate production. It also increases the probability that hard-to-evaluate services will be circulated at scale, only revealing their flaws once they have already been integrated into a decision, a contract, or a workflow.
This naturally makes one wonder: if we can’t know whether what we ship is “good” until it reaches the user, surely we do our best to observe that interaction!
Well, no. Of course not. How could you ask that? That’s not very agentic of you. The highest velocity — and remember that you are solely being measured on velocity — is obtained by yeeting things into the void and never checking if they worked.
Vibe prototypes undermine our ability to provide direction
Let’s take the claim at face value: some things can only be learned by shipping. That is, we must observe user behavior to learn what we need to know (if you don’t know what you are trying to learn when you ship a prototype, do not ship a prototype).
Having LLM-generated code within arm’s reach feels like the best approach for this. It’s so fast! It’s so magical! But it also undermines your ability to decide what it is you want to test. If you want to test direction rather than mere execution, you do not need any code at all: Andrea Ong is launching a business with a poll, a Notion page, and conversations with a bunch of people. The list of features was not the most important thing for her to test, and so she didn’t test it.
At a time when hotshot thought leaders are proclaiming the death of design tools because now everyone can just “do” code, we seem to have forgotten that the point of the tools was to realize our intentions rather than merely manufacture a product:
I don't go straight into code because that is not where ideas start. "Designing in code" is like telling an industrial designer to "design in the fabrication shop". It makes absolutely zero sense. You will waste so much time and resources trying to start any new idea.
This is not to say that code is not a design material. It is; the same way as paper, or a canvas, or boxes and arrows. But the wonderful thing about design materials is that they are not just one thing. When people say “code is a design material” they are often talking about UI and how Figma does box models wrong. Which is fair, but it’s a tiny sliver of the whole story.
If code is a design material, then coding is designing. If coding is designing, then the job is not to actually write code, but to create a theory for how a system might behave:
The primary aim of programming is to have the programmers build a theory of the way the matters at hand may be supported by the execution of a program… programmers have to be accorded the status of responsible, permanent developers and managers of the activity of which the computer is a part.
When we reduce programming to lines of code deployed to prod, we obliterate this process of theory-building. The computer transforms from being a part of the system into being the entire system. Which is a critical mistake, as Nilay Patel emphasizes in that Superhuman interview: for most of us, the margin on bits is gone, and what remains is the margin on atoms. Things happening in the real world make money. Things happening on a computer just send money to a platform landlord.
But doing this is hard. It’s much easier to just push out new features. This was always true, and it’s even more true nowadays. There’s even a name for this phenomenon of giving up and letting our tools dictate what we do next: cognitive surrender.
We’ll cover that next week.
— Pavel at the Product Picnic
1: While Dr. Older has impeccable academic credentials, she is also a fantastic fiction author. Her book Infomocracy is a must-read for anyone who sees themselves as a systems thinker. I’m not even being paid to say this.
