The AI Hangover Era (The Everything App Part 3)
Some leaders are starting to realize the extent to which AI has broken their ability to prioritize and ship value. Others are still doubling down.
Welcome back, picnickers! Today we’re continuing the conversation around the AI-powered “Everything App” that everyone seems to be working on. Last week, we covered the cause of this lack of focus – managers disengaging from critical perspectives by building an AI moat around themselves.
Today we’re going to bring it all back to UX/Product: how is this isolation affecting teams’ abilities to prioritize and deliver value? As it turns out, extremely negatively. And while some leaders are reversing course, others are doubling down.
Welcome to the AI hangover. It won’t be pleasant, but the sooner we get through it, the better.
The Timeline
Roughly speaking, managers have two functions: to set goals, and to organize teams that can meet those goals. Strategy and delivery. To no one’s surprise, the AI-powered Nothing Manager has no clue on either front.
Let’s start with strategy, as one ought to. One of the important functions of management in organizations is to transform data into knowledge – to create a shared understanding of what is known to be true. When managers outsource this to AI (or worse, ask their reports to outsource it to AI on an individual basis), the entire semantic environment collapses. The shared informational reality is lost, and what remains is disorientation.
When answers come faster than questions can form, what happens to understanding? Why pause with uncertainty when a chatbot can summarize, speculate, or reframe? Why follow an idea when you can skip to the answer?
AI smooths over the specifics – where the most important information can always be found – with unactionable generalizations. Megan Scheminske points out one outcome of filtering your world through AI summarizers: they won’t tell you whether a topic came up on its own or only because someone prompted for it. They can also straight up mix up yes with no.
But as long as you don’t read the underlying data yourself, you won’t know that (and won’t know whether you’re actually working off bad data, because the AI certainly can’t tell you), so you can generate a report that says whatever you want. And the next person can generate their own report that says something else. None of these insights are real; we are data rich, but we make ourselves insight poor.
It’s no surprise that over half of teams can’t even align on a strategy.
And as we move from sensing to responding, substituting AI for management breaks the process even further. Instead of carefully planned work, LLMs flood backlogs with thousands of unclear items. AI-generated lists of requirements create ambiguity about what leaders actually want versus what they had an LLM generate and then didn’t delete because they never read the output back.
AI tools now deliver instant, judgment-free, obedient responses. That recalibrates what people unconsciously expect from interactions (especially under pressure). So when humans introduce ambiguity, disagreement, or delay, the friction hits even harder.
Teams are overwhelmed. Are managers doing anything to help? No – they are complaining about low velocity and poor execution, without taking a moment to acknowledge that execution failures are caused by organizational problems, which are management’s responsibility to fix.
With their managers no longer responsive to critical perspectives (or no longer responsible for actually making decisions), teams are forced to commit within a system that sets them up to fail.
Committing without examining the premise isn’t backbone at all. It’s just compliance.
Teams trying to work through the increased workload with vibe-coded demoware that “gets you 80% of the way there” end up drowning in the remaining 20%. The survival strategy of “just get it over the wall, leaders don’t know what they want anyway” results in an environment of “good enough, who cares.”
And under a Nothing Manager, that survival strategy works. Because all they are concerned about is performing leadership, and a steady stream of outputs makes them look like “builders” as long as no one looks too closely at any kind of downstream results. And there’s nothing better than AI for producing results that look like work was done.
The material gains from the LLM (which are usually quite marginal) really aren’t why people are doing it: they’re doing it because, in many spaces, using ChatGPT and being very optimistic about AI being the “future” raises their social status.
This of course brings us into the gnarly world of “culture fit.” Performing the desired culture, rather than demonstrating results or doing good work, becomes the main marker of legitimacy (if you’ve ever discussed someone being Technical or Non-technical, you were engaged in exactly this sort of behavior).
The people who pull off this performance more convincingly than anyone else rise to the exalted ranks of the C-suite, where they can only fail upwards because they are surrounded by people whose main skillset is also acting. This “business idiot” is completely insulated from consequences, so why not go all-in on AI? It’s all upside for them, and if they bet wrong and have to lay off a bunch of people, well, that just makes the stock price go up.
But the consequences can only be delayed for so long.
Reading Material
I cannot overstate just how poorly thought-out these investments actually are. Target is a great example, announcing an AI strategy to “improve agility” after destroying their business through decisions that alienated their most loyal customer base. But Target COO Michael Fiddelke is not alone in the bad decisions club: 64% of CEOs told IBM that they are investing in AI without knowing what value it will bring.
Unsurprisingly, it’s not bringing value. That same report says that 3 in 4 AI initiatives don’t justify the expense. Half of leaders regret their investments in AI over people. The two most high-profile backpedals are Klarna (a strategy that lasted all of one year) and Duolingo (walking back the announcement made just 3 weeks ago).
Despite the high costs and deep cuts, the average time savings of AI tools come in at around 3%. The tasks picked up by AI aren’t actually the most time-consuming or tedious ones, but whatever happens to be easiest for an LLM to do.
As a result, most of the adoption is not real – companies are paying for seats, but employees aren’t finding value in them. Those that do give it a try find that the tools actually make things worse; the drive to wedge AI tools into workflows is taking focus away from producing any actual, you know, value.
Increased use of “A.I.” coding assistants negatively impacts delivery throughput and stability. There’s even a plausible causal mechanism, in the shape of larger change sets and more code quality issues – both well-documented by now.
CEOs bought into AI hype on the promise that one day things will get better. And now it is plain to see that the “when” is really more of an “if.” OpenAI didn’t achieve most of its H1 roadmap. Model makers are cheating on benchmark tests because LLM performance has plateaued. Newer models increasingly produce converging ideas, because everything is now trained on the same data and new data is gradually being poisoned by AI outputs. And in the meantime, entry-level roles displaced by AI are breaking industries’ junior-to-senior pipeline. Once the hype dies down, there may be no one left to take AI’s place.
Of course, AI vendors themselves can’t afford to let the hype die down. As a distraction from the models getting worse, OpenAI tried to inject some of Apple’s mystique into their product by getting Jony Ive on board to make what appears to be (metaphorically, if not by form factor) this decade’s Google Glass – an always-on surveillance gadget that will get people punched. Other AI leaders are promising apocalyptic-level impacts of their models, because anything less than total societal transformation is no longer an exciting enough promise to keep wading through AI soup.
Good Questions
There’s an interview question I like to ask hiring managers to get a sense of their team’s performance. While they are excited to talk about the roadmap, I like to bring the conversation back to what they actually accomplished. So I ask, “Tell me about your most recent win – what did you plan, and what were you able to deliver?”
The same lens is very useful for helping people get off the AI hype train. Don’t look at what these people are promising today – look at what they promised yesterday, and at how well they were able to deliver on those promises.
— Pavel at the Product Picnic