Tools for not-thinking
User research would have prevented this week's extremely expensive mistakes - but leaders are determined to keep making them.
Hello picnicker!
With this newsletter, I want to do more than just send out a series of links. Instead, I want to create connections across the different pieces of content - both because that approach is better for drawing new conclusions from the material, and because it’s more interesting for me to write.
So today’s newsletter is going to be in a slightly different format as I work out just how I want the sections to support one another, and how to adapt to the ongoing loss of discussion spaces such as Posts.cv and Threads. The links are mostly going to be sprinkled throughout the text.
The Front Page
Valuations and investments are skyrocketing, but it’s been a rough month for generative AI when looking at actual products. As teams seek to juice usage metrics and revenues, AI-enabled functionality is being pushed into the hands of unwilling customers – but companies are now having to backpedal as users revolt.
Apple suspended its headline summarizer after it lied to users that Luigi Mangione shot himself and that Benjamin Netanyahu was arrested. Meta was forced to shut down AI accounts it created on its own platforms. The train-wreck release of Gemini has users spending hours on the phone trying to get it disabled. As a cherry on top, Logitech was pressured into allowing users to disable the “AI Prompt Builder” it snuck into its keyboard software.
The tools companies rushed to market have earned a reputation for generating middle-school-grade outputs and 41% buggier code, and for being drains on productivity. The mere mention of AI immediately reduces customer trust.
It should be clear to anyone by this point that AI is not a silver bullet. A savvy leader with access to a newspaper might pause and consider how AI is actually going to fit into their overall strategy, and what will make an AI product a success rather than an expensive mistake. They might make a significant investment in user research and design strategy to precisely frame the opportunity and define how the proposed idea will fill the identified need.
Unless that newspaper is the Washington Post and that leader is its Chief Strategy Officer. Last week, CSO Suzi Watford announced a mission for the paper, and it's clear that research didn't have a part to play because that mission is for WaPo to become “an A.I.-fueled platform for news” that delivers “vital news, ideas and insights for all Americans where, how and when they want it.”
Watford is far from the first to use phrasing like “where, how, and when they want it” when talking about how value might be delivered to users. I’ve even written about it before when describing the form factor trap that organizations without strong design culture fall into:
The user never quite becomes “empowered to receive relevant information that lets them take the right action at the right time” because we optimized out the part of the process that asks “what does that actually mean?”
But the vision of an “AI-fueled platform for news” is even more ambiguous than the much-maligned dashboard, because it’s not even a form factor. There’s no there there; it doesn’t actually mean anything. What Watford has brought to the picnic is a nothing burger.
The Timeline
Why are companies continually seduced into this kind of “AI transformation” when they see it failing around them?
Last year’s essay from “Doc” Burford perfectly captured the mindset in play: leaders who see their business as “making the stock go up” rather than making a profitable product people want to buy. And if you can’t make profits go up, the only other lever you can move is to cut costs.
Unfortunately, the mindset of “leadership over expertise” that Burford describes is completely antagonistic to UX. Design and research are perceived exclusively as a cost ripe for “optimizing” – not only because designers draw salaries and spend time on work, but because their findings can reveal leaders’ pet ideas to be detrimental.
And so, rather than an opportunity, design has become a problem for these “leaders” to solve. In a recent article, Jared Spool claims “...nor has there been a single documented instance of a UX person being replaced by an AI tool” - but of course, that is not the process by which UXers are replaced. Instead, they are pushed out through a variety of reduction-in-force efforts and then never backfilled.
This is where the one-two punch of AI-driven austerity lands. It’s not enough to build AI into the product and hope that it solves the problem. No, AI also promises to figure out what the problem is. Quite literally: the definition of AGI given by AI’s poster boy Altman is “when it can figure out how to make us lots of money.”
As Darren Hood points out this week, today’s AI tools can do nothing of the sort. In the same way that AI-generated text creates the appearance of content, AI-generated outputs create the appearance of strategy - but there is no substance behind an AI “mind map” or an AI “persona.”
But it is cheaper for managers who neither know nor care about building good products, because they are easily fooled by the greebles that AI tools can produce by the boatload.
Employers take AI's shoddy emulation of real tasks as an excuse to trim their workforce. The goal isn't to "support" teachers and healthcare workers but to plug the gaps with AI instead of with the desperately needed staff and resources.
Rather than a “bicycle for the mind,” AI has become a tool for not-thinking. The implementations of these technologies that are served up for us to use do not actually accelerate our path to insights - they falsely promise that the path can be sidestepped entirely. The predictable outcome of deploying tools without the ability to see that they produce nonsense is deteriorating critical thinking skills, flattened understanding, and (as we saw earlier in this post) a steady stream of products that don’t solve any problems but do create quite a few new ones.
Goodbye minimum viable - “perfectly acceptable” is the name of the game from here on out.
Good Questions
What do you do if you want to buck the trend and make products that are good, instead of ones that merely mimic the appearance of good?
Amazon used to have a set of questions called the Five Working Backwards Questions. I say “used to,” because newer public-facing materials show a slightly different list than the old one.
The first three questions are the same, but questions four and five have changed. Amazon’s latest guidance has been to stop asking what I think is the most fundamental question of any product practice:
How do you know?
If we don't actually know that what we're making will help our customers, then we are just designing for ourselves - even if a chatbot told us that the value was real.
Ironically, Amazon’s updated guidance - “how will we measure success?” - becomes pointless when “how do you know?” is removed. The implication of the change is obvious (going from research-first to build-first) but without that research, there is no way to actually know that the metrics we have set are in any way meaningful. If our original decisions were not based on facts, we can never hope to actually iterate on them.
The size of a bet made based on assumptions or hallucinations is by necessity very different from a bet made based on fresh and relevant facts.
- Pavel at the Product Picnic