AI saved the feature factory

When outputs are valued above all else, hallucinations and slop aren't bugs — they're features.

There’s a quote (attributed to a variety of authors) that describes the process of going bankrupt:

Gradually, then suddenly.

Hemingway, or Mark Twain, or F. Scott Fitzgerald

As the popularity of the quote attests, this pattern applies to any catastrophe. And between serial layoffs and the fever of the AI bubble, tech is certainly at a catastrophe stage.

But we couldn’t have gotten to the “suddenly” without the “gradually.” That “gradually” happened over the past two decades, as reality-based business models were replaced by a vibes-based economy. That article is a long read, but absolutely worth reading to understand how we got to a point where orgs think they can entirely replace the design function with AI.

This state of affairs is the logical conclusion of a “build, measure, learn” approach to development, which in reality stops at “build”. This created an attitude that it’s OK to release something that’s crap, because users will tell us what they don’t like and then we can iterate. Under this mindset, it’s not necessary to do any thinking ahead of time; the entire burden of doing the design is shifted to its consumer.

Obviously, that state of affairs was unsustainable. I’ve cited Ed Zitron’s Rot Economy piece here before — tech companies realized this, but were afraid (or unable) to reorient themselves towards creating value for users. The incentive structure of the feature factory was too embedded, and the managers responsible for fixing the system were first and foremost the beneficiaries of that system’s many flaws.

Fortunately for them, AI happened. In a system that could still pretend that volume of outputs had inherent value, LLMs were a godsend, because they could produce more outputs faster than ever before. Were those outputs any good? That was the consumer’s problem.

The feature factory turned into a slop factory, and managers couldn’t be happier.

What the coming of the computer did, "just in time," was to make it unnecessary to … change the system in any way. So in that sense, the computer has acted as fundamentally a conservative force.

Joseph Weizenbaum (inventor of the chatbot), in "Weizenbaum examines computers and society"

Who cares that the tools fail between half and 98.3% of the time? As long as you can make catching and fixing the error someone else’s problem, it doesn’t matter what that error rate is. You don’t need to be an expert in the thing that you are making, as long as you can pass the buck to someone else. Companies are even relying on this effect: it’s much easier to brag about the number of LOCs that are fully AI-generated when the unusable AI-generated PRs are quietly rejected by people who know what they are doing.

This is why the “learn to code” canard has never sat right with me. We were always told that, even if we designers don’t write our own code, we needed to know how to do it so that we could collaborate more effectively. But the ability to write code is perhaps the least important skill of a software engineer. Real cross-functional literacy starts by appreciating everything beyond those visible outputs.

You’d think designers would understand that, since we grapple with exactly the same stigma of our surface-level outputs being perceived as the entirety of the work. And yet, here we are.

In conferences [designers] learn that “with great power comes great responsibility” but, when it comes to real-life clients, all they ask is to “make the logo bigger.”

The push for “democratization” of design, research, and so forth ultimately only engages with these surface-level outputs. My former professor Jodi Forlizzi describes the problem in great detail here: user-centered design is now seen as an add-on rather than a profession, with dismal outcomes (and if you want to really dive deep on the topic, another one of my profs weighs in here on how the unresolved friction between the design and computer science sides of HCI got us into this mess).

AI adds a layer of spice to the mix — if you don’t understand how user research actually works, “synthetic users” seem like they are just as good as real ones. If you do understand, the very idea should horrify you.

Even non-UXers who get over the hurdle of “talking to real people” can come away unsatisfied with the result, because no one told them that research is far more than just talking.

It's not the talking to people part that's the problem. It's the lack of process and standards for making sure you're talking to the right people at the right time in the right way for the right reasons, and that what you learn is actually being fed back into decisions.

It’s impossible to have this conversation without someone trying to thread the needle: surely there’s a compromise, surely we can find a way to get 80% of the rigor for 20% of the cost.

And yeah, sure. I’ve gone on the record admitting that UX can get as deep in the weeds as any other function. I wouldn’t quote Erika Hall so extensively if I didn’t agree that there is such a thing as “just enough research.” The problem, as Erika puts it, is when good enough is neither good nor enough.

As long as the understanding of our value (or the work of our partner functions) is constrained to outputs, the work that creates impact will be the first to be dismissed as “unnecessary.”

Chasing quality doesn’t always lead to quality. Good design doesn’t always look like good design. Before working on the pixels, you need to make sure you’ve drawn the arrows.

The goal is not to create visual representations. The goal is to create a shared vision of what we are doing, and often having something tangible to look at, poke at, and talk through is the best way to do that.

You don’t have to go far to find someone saying that AI is the best thing to ever happen for systems thinkers because it will weed out the people who only do surface-level work. But as long as our stakeholders only have a surface-level understanding of our work, that will never be true. If that is the future we want, we need to make it happen — because while we just sit on our hands and hope, the feature factory is planting its roots deeper and deeper.

Some thoughts on how to do that:

— Pavel at the Product Picnic