Open your favorite thought leadership page and scroll for a few seconds. Chances are, you’ll find some screed about how the reality of software development is changing and we have to adapt to it.
Well, I have news for you. The reality of software development changed decades ago, and we still haven’t adapted. Today’s teams are using LLMs to push black-box code they don’t understand into production — but between plentiful open-source libraries and StackOverflow copy-pasting, they were already doing that.
Code hasn’t been the real limit on productivity any time in this century, and yet all of our work processes are structured as though it is.
What was (is) the limit? Getting signal from customers.
You don’t need high fidelity to learn that you are barking up the wrong tree. The easier coding becomes — and the more you produce before showing it to a user — the more effort you end up investing in being wrong. It’s no surprise that while HBR has found that AI intensifies work instead of reducing it, execs forecast an anemic 1.4% productivity growth over the next 3 years.
Alas! It turns out that all the other people in the office with you aren’t merely decorative. Work is a social system, and productivity improvements localized to a part of the workflow that was already completely unblocked will not show up in the bottom line.
Unfortunately, companies have been doubling down on obliterating the coherence of that system with endless waves of layoffs (which they blame on AI, even though Sam Altman himself isn’t buying the alleged productivity gains from his own product). This, not LOCs per second, is the #1 blocker for productivity growth.
Rather than helping, AI is actually making it worse.
If there is a productivity crisis in the knowledge economy, it is the fault of management for failing to retain mid-level people in roles where they feel supported enough to make the projects they sponsor succeed.
An atomized team is the antithesis of UX
It’s easy to pretend that continuity isn’t important, as long as you can get your deliverables over the line every two weeks. But is that really what you want to write in your promo doc, or on your resume? No one cares about your story point velocity. Managers want to see impact and ownership. And without engaging with work-as-a-system, you will never achieve either of those.
If you don’t understand the structural incentives, the social context of decision-making, and the individual perspectives, you will be continuously confused by watching your organization make obviously bad choices over and over and over while ignoring your recommendations.
Long before AI rolled onto the scene, the “build to learn” ideology had already done irreparable harm to people’s ability to understand this system, through the simple means of convincing them to pretend that the system does not exist.
But no matter how hard you try to ignore it, the system is there. You can’t just build your way into PMF. You have to do all the uncomfortable, squishy work around the software. Like research.
Unfortunately, research means talking to people, which means that research can only ever happen at a human pace. It can be tempting to skip that research by relying on heuristics (for example assuming that efficiency is always good, and optimizing for that) but that approach is always going to burn you. The state of the art moved past simple time-and-motion optimization in the 1960s, and it’s time to get on board.
Good user research, not output velocity, unblocks productivity
It’s easy to assume that someone else already talked to users, and figured out what they wanted. Even the Agile manifesto carefully excludes the work required to actually compile requirements. And in the decades since it was written, the situation has only gotten worse: tooling has helped us deliver more quickly, but it has done nothing to help us learn what to deliver.
This perverse incentive has led a lot of people to foolishly use AI to counterfeit research data (often sold to low-maturity teams under the moniker of “synthetic” research) just so they can get back to shipping deliverables, which is easier, smoother, and less complicated. It also provides zero actual value.
There are many excuses for not doing real user research. But if you want to anchor your work in real data instead of sparkling assumptions, you’re going to have to get over those excuses. Once you’re ready to do so, Stephanie Walter has made it easy with a comprehensive guide to user interviews that will get you started even if you’re not an expert.
Which brings us back to the thing about code. Why doesn’t the ability to reach high fidelity faster accelerate our learning? Because the “blockiness” of our research artifacts is actually a beneficial property. Good research isn’t looking for a yes or no; it’s creating a dialogue with the participant, and low fidelity leaves the possibility space open as wide as possible.
And when you’re finally ready to write working code? We’ve known for ten years that testing by launching a viable product is foolish. Good research will not only give you answers, but also let you develop a sense of what data is convincing enough, and which assumptions actually warrant testing in prod.
— Pavel at the Product Picnic
