The people ARE the process

Cutting down on collaboration tricks you into thinking you're going faster, but it destroys the divergence effects that make the design process valuable.

I’m going to start this article today with a bit of a digression into my favorite world — analogous domains. I promise that it’s relevant! You should read it!

(can it be a digression if it’s not in the middle? let me know in the comments)

I’ve written before about lessons from game design that can be applied to product development, but I want to briefly talk about a cultural aspect of gaming.

Because one of the other things about gaming that also applies to software in general is the distinction between the game and the toy: the rules that govern how a thing is used that are inherent to its design, versus the rules that govern its use but are external to the thing itself. This distinction is even easier to observe with non-video games, and we can learn a lot from them about evaluating whether the rules we are following are fit for our purpose.

One of the purposes of games is to have fun, and one of the ways people have fun with games is something called speedrunning, which is when you try to finish the game in as little time as possible. This is (usually) not how the developers intended the game to be played, so successful speedrunning relies on two key concepts:

  • A sequence break allows the speedrunner to skip over some game content. For example, if the player would normally have to go out of their way to find a key for a locked door, a speedrunner might try to use a glitch to squeeze their character past the door, and save valuable time.

  • An any% (pronounced any-percent) finish ignores the game’s built-in system of measuring progress. The speedrunner decides that, as long as the credits roll (or some other arbitrary milestone is reached), they have won, even though they have not completed the tasks the game set out for them.

But let’s say we’re not talking about games. We’re talking about software. The “rules” of our game are not arbitrary, but determined by what it takes to make a product someone wants to buy or use.

And yet a lot of advice for “improving velocity” will resemble some form of speedrunning — unilaterally revising the success criteria and skipping crucial steps so that you can declare “I am done” as quickly as possible.

Yesterday, Jen Briselli published an excellent article on stasis theory as a way to look at collaboration. Since I’ve been noodling on that theme for a few weeks, the piece resonated with me and I strongly recommend reading it. But the key part for our purposes today is the types of stasis, which map neatly to the topics we’ve covered on this theme thus far:

  • Research is analogous to Fact: “Did this happen? Does the thing exist? What is observed?”

  • Synthesis maps to Definition: “How should we classify it? What is it related to?”

  • Problem framing is a perfect fit for Quality: “Is it good or bad? Right or wrong? What’s the significance?”

  • Solution framing is the equivalent of Policy: “What should we do? What interventions are possible, desirable?”

  • And finally project planning is a matter of Jurisdiction: “Who decides? Who acts?”

The basic idea is that people need to agree on what, exactly, they’re disagreeing about in order for an argument to progress toward productive outcomes.

Wow! That seems like a lot of work! If only you could apply some of our speedrunning concepts to this ponderous process and either sequence break (skip some of those steps) or any% (half-ass) them. Then you’d be done super quickly, I bet.

But what does “done” mean? Sure, you’ll have a roadmap. But without the alignment conversations that produce the roadmap, it is a worthless piece of paper. If stakeholders push back even a little bit (for example, by asking you to explain some AI-generated “insights”), the entire thing falls apart.

Some teams might define “done” as “filled out the template.” I’ve written about that before, at length; the template is your enemy. It hides interesting ideas and constrains divergence. “The insight that doesn’t accommodate tidy operationalization and air-tight widgetization” is sifted out; what remains is average, commoditized, and banal.

The flattening effect of a template is useful at the end, as a convergence process that makes the conclusion legible to the organization. But “fill out the template” must never be your goal.

Disagreement isn’t just conflict to be resolved. It’s information.

If your goal is to just any% the divergence process, you will never find the signal hiding in the gaps, outliers, and extreme users. If you let those templates think for you, you’ll never pick up the skill of evaluating whether the rules you follow are fit for your purpose (rather than generic “best practices” which may not apply).

Try reinventing the wheel once in a while. You’ll learn a lot about wheels.

The LLM hype bubble has introduced a ubiquitous strategy to sequence break the product development process. Wherever we would need to go through the messy business of talking to humans, just replace that step with a machine! Deciding on strategy? Machine. User research? Machine. Synthesis? Machine. Sense-making, knowledge generation, dispersing that knowledge among those who need to act on it? Machine, machine, machine. The machine will never disagree with you and will never expect you to shut up.

This is, of course, a fallacy: because the machine is anthropoglossic (it talks like a human), we are wired to believe that it thinks like a human. But it does not think at all. Amusingly, this delusion encourages people to use these tools in the way they ought to have been collaborating with humans in the first place; you don’t need an LLM to practice active listening, and you can take the time to describe what you want even if the recipient is not a chatbot.

I could have helped you in all the ways the system appears to have, probably more. I would have really enjoyed to do that. I would have learned an incredible amount from such a conversation even if my role had been primarily to be a 'prompt' for you. … Why are we interested in displacing those conversations?

Cameron Tonkinwise, in a comment on this post

All of these tools make you feel like you have done something faster, but the thing you have done is produce an output. You still have to get someone else to act on that output. This case study of increasing UX maturity in the German digital service illustrates this point very well: the entire project was about interfacing with teams and processes. Producing higher volumes of stuff more quickly was never the problem, because the bottleneck will always be getting anyone else to care about it.

Similarly, the easiest type of feedback to give is preference-based feedback; it is also the least useful. But it is response-shaped, and when you’re just speedrunning your job, maybe that’s all you wanted from it.

— Pavel at the Product Picnic