Today’s issue starts with a little joke:

This probably doesn’t come as a surprise to anyone working at a company of more than five people. Every team gradually becomes a machine for turning meetings into reports — and other content like emails, docs, and memos — that no one reads. Companies produce so much text that it’s now being mined for AI training data.

And the AI is probably the only one who will ever read it, because the author had an AI draft it, and the recipient had an AI summarize it. At this point we know more about what Claude thinks about something than about our colleagues’ own opinions.

This is a good opportunity to remind the audience that AI does not actually summarize, because summarizing requires the ability to draw a conclusion from a text. Even when there is an author-written abstract available, the AI will give you something else.

…information from the texts they have been trained on “bleeds” into the summary … models deviate from the source material by almost a third.

It’s that very act of reading and summarizing that makes knowledge stick in your brain. Even when you are the source of the text (for example, a transcript of an interview that you conducted), the repetition that comes from going over those notes is a key part of learning. A human-written summary is equal parts “what happened” and “why you should care” — and with that last part missing, your company will never take action on any insight produced.

Skimming the dashboard is not a product strategy

This is why every company is addicted to dashboards. They are designed to be perfectly skimmable. The data is already massaged to draw conclusions for you: hundreds of decisions have gone into reducing the complex multidimensional real-world events into “green number good, red number bad.” This pre-digested thinking is critical because when we skim, we do not have time to think:

“To skim to inform” is the new norm for reading. What goes missing are deep reading processes[:] connecting background knowledge to new information, making analogies, drawing inferences, examining truth value, passing over into the perspectives of others (expanding empathy and knowledge), and integrating everything into critical analysis.

Paradoxically, this marvel of automation (insights on tap!) ends up exactly where the automated emails and the automated memos go: the garbage, without once passing between the ears of a human being. Attention at the organizational level is just as flighty as an individual’s. Companies stand up dashboards and move on, leaving behind a swamp of dashboard rot.

Perhaps the reason these get left behind is exactly the same reason, too: there was never really a “so what” beyond a general feeling that it’s good when numbers go up.

When I work with CX/UX research teams, the problem is often not a lack of data or speed; it’s a lack of interpretive structure. They have dashboards, surveys, NPS. What they don’t have is a model that says: this metric reflects this customer intention, and that intention belongs to an experience dimension the business has defined as strategic. Without that, data doesn’t produce decisions. It produces slide decks.

Perhaps this is why people can get away with fake metrics and AI-generated “citations” not supported by the text: everyone stopped caring about the thing being measured as soon as they started to measure it.

The Conway-Sturgeon Law of Garbage Software

This dynamic creates a sort of organizational illiteracy: no one reads because there is nothing worth reading. Incoming insights are rendered meaningless by a semantic environment of organizational bullshit (a real scientific concept, there’s a paper and everything). A decade ago, we manufactured facts to fit our preconceptions; today we do not even need to bother.

Predictably, the disconnect between insight and action leads to poor decision-making once someone actually has to do something:

We interviewed AI Deciders across diverse organizations. We found no ideation.

This lack of clear, goal-based prioritization percolates downwards into a toxic working environment. Workers are constantly simmering in a pool of fake urgency, rushing from “critical” task to “critical” task. Teams are stuck trying to convince each other that their direction is the most important one and that everyone should drop everything to follow them (everyone nods politely for three hours and then goes back to what they were doing in the first place).

In product design, this manifests as a shouting match: individual teams operating off individual dashboards try to make their individual numbers go up by making their part of the software louder, bigger, more frequent, more urgent. Naturally, the product as a whole goes to shit — the design system might be consistent, but the product is incoherent because the underlying org is incoherent.

It’s not fair to dump that mess at the foot of UX and say “fix it.” But that’s what’s happening anyway. We have to equip ourselves with tools for wrangling organizational complexity and make improvements there before we can hope to see those improvements show up in the software.

Oh, and if you still have to write some reports: there’s good advice here on how to make them as readable as possible.

— Pavel at the Product Picnic
