The AI age is the "age of no consent"

"Inevitability" means design decisions are no longer informed by user needs; now they are unilaterally imposed. The users are the ones being designed.

Five years ago, long before every LinkedIn influencer legally changed their middle name to “AI”, I made a prediction: PMs would replace user research with a GPT.

Don’t call me a prophet. I singled out PMs only in contrast to designers because I naively believed that we would be better than that: designers were the champions of the user, and at the end of the day the users were human. We knew our work was good because we were engaged in a feedback loop with those humans, showing them what we were trying to do and holding space for them to contribute.

We’re in the age of no consent: a time when everyday people have to follow the rules, but AI companies and AI toolmakers do not.

Two years later, when GPT was embedded into a chat interface and set off the next hype bubble, I was no longer so naive. The conversations on Design Twitter had turned from user-centered design to optimizing the velocity at which UX could deliver “tasteful” high-fidelity prototypes. Empathy was transformed from something that invited the user into our process into something that displaced them with a synthetic, imaginary substitute.

AI didn’t create that attitude (I chronicle the forces that gave rise to it in the linked piece). But UX practitioners working within the Borg Complex have been diluting tech’s understanding of “ethics” and “empathy” into meaninglessness, and actively enabling today’s AI boosters to scale disdain for users to never-before-seen heights.

Consumers have rarely hated a technology as passionately as they hate AI. And yes, hate is a strong word, but that is the word people are using, whether they are the target audience of slop, or the victims of the massive IP theft used to generate that slop.

I hate that writers and artists must suffer the indignity of being pickpocketed by the richest men who’ve ever lived. … I hate all of this in almost every way a thing can be hated.

And one of the reasons people hate it is that it is everywhere. With an ordinary product, they were given the grace to say “not for me” and go do something else. But AI is inevitable, and therefore you must be forced to use it. By making AI the default for doing anything, using deceptive patterns to increase the salience of AI features, and straight-up removing the option to opt out of pricing tiers that include AI, companies are reversing the traditional relationship between themselves and their customers.

What do they need UX for, when users can no longer refuse? What’s the point of user research when success metrics have nothing to do with whether the feature was useful, so long as there was AI in it?

Even if you manage to avoid the onslaught of ✨companions✨, you’re not out of the woods yet. AI slop will have muscled into some aspect of your life. Even if you choose not to engage with AI, its outputs will rudely engage with you regardless: shared by thoughtless commenters trying to save time on writing a few paragraphs, legitimized by being embedded into published papers, used to make decisions on your behalf.

And any one of those outputs could have been ideologically influenced — overtly or subtly.

Of course, all of these models were created without consent in the first place, trained on millions of stolen books and on content scraped in violation of terms of service from all over the internet. And as the supply of new data to steal runs dry, companies are contemplating new heists, like cashing in on the contents of the files you share or the documents you upload. Despite an uproar from long-term contributors, companies like Reddit and Stack Overflow sold their troves of entirely user-generated data to OpenAI.

The worst of the internet is continuously attacking the best of the internet. This is a distributed denial of service attack on the good parts of the World Wide Web.

Jeremy Keith, Denial

Websites that refused to sell out have simply been bulldozed by scrapers that run up obscene costs for their owners or knock them offline entirely.

One of the more accessible sources of new training data is the prompts themselves, which users are feeding into bots without realizing how easy it is to retrieve any part of a model’s training set. Not only is OpenAI training models on prompts (including extremely sensitive material such as people’s “therapy” sessions), but Google is indexing them for public search.

Maybe you’re too smart to feed your personally identifiable information into a chatbot. But you fed it into a website that turned out to be vibe-coded, and it leaked all over the public internet because of flimsy security.

Companies are getting away with all of this, and users don’t get a say. Avenues of appeal that were once overseen by humans now end in an unforgiving, implacable AI endpoint.

AI is not “just a tool,” because nothing is “just a tool.” All tools have embedded logic that guides you away from certain courses of action and towards others that their designers have chosen as preferable. Tools and systems construct us as subjects.

The logic embedded within AI is the logic of surveillance, and so AI tools construct us as subjects of surveillance, with an impoverished sense of our relationship to our software. They normalize handing over privileged information and intellectual property without a second thought and try to make recording every screen and click of your PC a default that you have to actively opt out of.

If your agentic AI doesn't come with any legal responsibility to you as the principal, it's not your agent; it's your sleazy used-car salesman with access to your credit card and all your data.

This meek acquiescence to surveillance is now penetrating every facet of tech. Amazon Ring has openly returned to its mission of snooping on neighbors and sending the data to cops. Luxury cars now include built-in cameras so your boss can watch you during your commute.

This all felt weird in 2020. It still felt weird in 2023. But is it starting to feel normal in 2025? I leave you with a great question from Bob Devney: would it be weird if a person did it?

— Pavel at the Product Picnic