Howdy, picnickers!

Although, with the season’s first snowfall upon us here in New York, it may be time to admit that the picnic season is over. And as the year draws to a close, I can already hear hundreds of keyboards banging out takes on “UX trends to watch” or “the state of UX in 2026.”

My personal hot take is that UX in 2026 will be almost exactly the same as UX in 2025, which was the same as UX in 2024, and so on and so forth. Oh sure, the tools will change a little. People’s patience with Figma continues to evaporate, and your various Penpots are rising to compete with it. But two things about UX will remain immutable, and these two things constrain the possibility space of the entire discipline:

  • The first is the broad strokes of how design creates value. We talk to stakeholders to understand what they want to accomplish for the company, and talk to users to understand what they are trying to do. Then we find the overlap: which user behaviors the product needs to enable so that both the business and the customer can achieve their respective goals.

  • The second is the reason that this process does not work. Because our stakeholders do not come to us with business goals. They come to us with ideas, but those ideas are inevitably ideas for outputs. Dictating what a thing should look like or how it should make users feel has been elevated above such humdrum everyday concerns as “why we are doing it,” and highly placed executives want a piece of the pie.

In the two years since Chuck Wendig wrote that essay, his assessment of the situation has only become more accurate. The friction we talked about last week has steadily dissolved in a soup of LLM-generated prototypes that shout “my idea is possible, and therefore it should be built.”

As with all Gen AI use cases, this is not a new pattern, merely the scaling-up of an old, dysfunctional one (resulting, naturally, in more dysfunction). The dysfunction also stems from how stakeholders choose to use this pattern. I’m talking, of course, about “build, measure, learn.”

It doesn’t help when big organizations are structured such that the activities of build, measure, and learn are split across different groups! One group will specialize in learning, one in measuring, and one in building.

In the abstract, there is nothing wrong with it. It even approximates the scientific method. But rather than being a tool for creating good products, it is deployed as a smokescreen to prevent that outcome.

Rather than being a feedback loop, “build, measure, learn” is deployed as a sequence. We will first set our goal as building the thing. Once we have finished building it, we will measure its impact. And — somewhere down the line — we pinkie swear that we will learn something from what we did.

This pattern assumes that we already know everything we need to know to build that first slice. Unfortunately, nothing could be further from the truth.

The term “UI” tends to carry a common mistaken assumption: that the system already has a defined behavior, and design must simply express it clearly. But software has the behaviors we give it, and we should design those behaviors carefully.

A functional “build, measure, learn” process would generate learning from the very beginning. As soon as we learn that the thing we’re building won’t meet our goals, we would stop building it. But in a context that privileges short-term execution of ideas above all else, the build must be completed, because getting to “done” is its entire value.

Effectively, we have circled all the way around to waterfall in two-week chunks, except worse. The stakeholder’s idea is not a complete design. It is a sketch; it creates the impression of substance. What we learn through delivery is not “how well does the product do its job” but rather “where are the inconsistencies in our requirements?”

What this produces is, essentially, anxiety.

You often hear “comfortable with ambiguity” as a key trait that leaders ought to have. But there are two kinds of ambiguity. One is the ambiguity of the path; we know where we want to go, but we don’t yet know how to get there. “Build, measure, learn” works fine here, on short time scales: we try something and see if it brings us closer to our end goal or not. But the other — far more harmful one — is the ambiguity of the goal.

What usually sinks projects are mistakes like a lack of clarity about what a project is actually meant to achieve for a business… A lot of what seem like tactical failures, then, are in fact a direct result of strategic mistakes.

And this is the death of any design-oriented culture. Without understanding why we are doing what we’re doing, teams end up inventing their own arbitrary benchmarks of what “good” looks like; each discipline becomes siloed and insular, concerned with small things no one else cares about.

And the lower the maturity — and therefore, the slower the release cycles — the more this tension builds. Without external, objective validation that what teams are doing is the right thing to be doing, they invent more and more arbitrary measures of quality, and spend hours polishing things that just don’t matter, and will never make a difference in the long run.

Enter UX design. Or at least, enter the tools of UX design. As with UX theater, the processes of our craft get hijacked toward releasing this tension. The developers (who are, after all, the highest-paid constituency in the room) need to be assured that what they’re doing is not a pointless waste of time.

The thing that makes design research so fraught is that a lot of organizations pretend they want to learn, but really just want to justify decisions that have already been made. Admitting this is a real timesaver.

And so we arrive at validation. Like “build, measure, learn,” it is not inherently a bad thing. But when deployed for projects that have no clear goals beyond “build the thing,” validation ends up being nothing more than UX cheerleading. The goal of this work is to reassure the team and the project sponsors that what they have been doing all this time was not wasted.

This is research slop.

The idea is not new (Simon’s original article is over eight years old), but it has been scaled up exponentially by LLMs, and its harms have scaled with it. And some of those harms are harms to designers. In the article that best articulates my personal thesis around what design is (and is for), I emphasize that doing design must be safe: safe not just for our users, but also for the designer’s reputation within the organization they serve. Because when we make assumptions and bets, we are frequently wrong. The design process works by learning from the ways in which we end up being wrong, but it needs that extension of grace in order to function.

Try, observe, try again.

But what if it’s not safe to be wrong? What happens when our feedback loops are slow, and our roadmaps leave no room to course-correct? Well, then our experiments are guaranteed to return positive findings — regardless of what was actually observed.

And that is what we see as research becomes “democratized.” Rather than reducing the risks that a project faces, the work becomes a pacifier to convince teams and stakeholders that everything is fine, and to delay the reckoning with reality for as long as possible. The ROI of design is reduced to an accountability sink, so that there is a designated function to take the blame whenever anything goes wrong.

Design leaders have gotten away with following the path of least resistance for this long. But following that path any further — embracing design’s role as a soothing function and accountability sink — will destroy us. Absent ideal circumstances, the ideal process simply does not work.

Next week, we’ll talk about how the industry can pull out of this nose dive.

— Pavel at the Product Picnic
