Update, Dec. 8, 2023: Since publishing this article, the European Union has reached agreement on the AI Act, new legislation that includes transparency requirements under which chatbots and software that generate content will have to make clear that the content was generated by AI. I believe this is a meaningful step toward addressing the challenge I discuss in this article.

I believe it should be a rule, both morally and probably legally, that uses of AI identify as AI.

I see the optimistic side of things: AI sets us up for some exciting stuff in the future. We’re going to use AI to sweep away a lot of drudgery, save a lot of time, and probably improve a lot of processes across many fields well beyond tech.

What worries me are products and services that have us interact with AI, or consume content created by AI, while pretending it’s a human or that a human created it. I think this will fracture trust and community across society if left unchecked.

In my mind, the most promising and ethical applications of AI / LLMs / machine learning / extensive complex switch statements are either those that are under the hood and an implementation detail, or those that are user-facing and upfront about not being human.

The conversations we engage in and the content we consume influence how we perceive and think about the world and each other. We have agency over this influence.

We choose who and what we interact with. We choose what content to consume.

We choose to engage with chatbots, Siri, and ChatGPT, knowing that they’re fundamentally machines with a conversational interface. We choose to use tools like GitHub’s Copilot or Airplane’s Autopilot, knowing that they’re fundamentally another iteration on Clippy.

We choose to watch movies, read articles, listen to music, knowing that we’re absorbing the creators’ ideas and perspectives and shaping our own in response, and knowing AI may have been just another tool used in the creative process.

But when we can’t tell that the conversation we’re having is with a machine, or that the content we’re consuming was not created by the human we believe it to be, we’ve crossed an ethical boundary. We lose our agency over our own reality.

Legislating for “reality” isn’t a new idea. France introduced legislation requiring that digitally retouched images, like those used in fashion photography, be labelled with a disclaimer, in an attempt to combat unrealistic portrayals of what is, effectively, reality.

Some things we understand as interpretations of reality. We understand that an illustration or sculpture is stylised by its creator and not necessarily realistic. We understand that a photo can be altered and manipulated.

Sometimes, ethics and moral boundaries are enough: photojournalism and photo competitions set acceptable boundaries around retouching photos. They’re motivated to do so because they need to preserve trust. But fashion photography and advertising are guided by a different set of motivations that lead to a different set of behaviours. And as France found, unlabelled retouched photographs distort our internalised understanding of what is and isn’t real, with serious social consequences. This is where legal frameworks step in.

Why does this matter if your own product, service, or company already identifies AI as AI? Simple: none of our products or services exist in a vacuum. Our creations exist in a larger ecosystem of perception and influence. We’re all affected, because the threat operates at a far larger scale than retouched photos. AI threatens to make everything ambiguously human: the content we read and watch, the conversations we have, the ideas we absorb and share. We’re threatened with a tragedy of the commons in reality itself.

This isn’t a far-off hypothetical scenario. Companies are already creating products and content that pretend you’re talking with a human, or that a human created the content you’re consuming. Sports Illustrated repeatedly published AI-written articles presented as if written by real people who turned out not to exist. Blogspam sites churn out hundreds of entirely AI-generated articles a day. AI-generated content like voice recordings mimicking Obama and other public figures proliferates on platforms like TikTok.

We risk losing our agency over reality. It’s not only about what we as individuals engage with. It’s also about what every other real human engages with: your friends, your partner, your children. What they engage with may or may not be human, and they may never know.

At the very least, we need a hard moral, and likely legal, stance that uses of AI must identify as AI. When engaging with AI through a conversational interface, users must be made aware that they are interacting with an AI, not a human. When AI is used as a tool to create content, that content must be labelled as created by an AI, or a real human or organisation must take credit as its creator. Knowing how and when AI is used will preserve some level of agency over our own reality by letting us choose how to engage, and how to filter and reflect on what we consume.
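To make that concrete, here’s a minimal sketch, in TypeScript, of what carrying disclosure alongside content could look like. Everything here is illustrative: the types, field names, and labels are hypothetical, not an existing standard or any company’s actual API.

```typescript
// Hypothetical sketch: provenance metadata travels with the content itself,
// so AI involvement can always be disclosed to the end user.
// Field names and labels are illustrative, not an existing standard.

type Provenance =
  | { kind: "human"; author: string }                       // written by a named person
  | { kind: "ai"; model: string; reviewedBy?: string }      // AI-generated, optionally human-reviewed
  | { kind: "assisted"; author: string; tools: string[] };  // human-created with AI tools

interface ContentRecord {
  id: string;
  body: string;
  provenance: Provenance;
}

// Render a visible disclosure label from the provenance metadata.
function disclosureLabel(p: Provenance): string {
  switch (p.kind) {
    case "human":
      return `Written by ${p.author}`;
    case "ai":
      return p.reviewedBy
        ? `AI-generated (${p.model}), reviewed by ${p.reviewedBy}`
        : `AI-generated (${p.model})`;
    case "assisted":
      return `By ${p.author}, with AI assistance (${p.tools.join(", ")})`;
  }
}

const article: ContentRecord = {
  id: "42",
  body: "…",
  provenance: { kind: "ai", model: "some-llm", reviewedBy: "Jane Editor" },
};

console.log(disclosureLabel(article.provenance));
// "AI-generated (some-llm), reviewed by Jane Editor"
```

The exact schema doesn’t matter. What matters is that provenance travels with the content, so disclosure becomes a default rather than an afterthought.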

As designers and engineers and product folks and creators, we have a direct impact and influence on the future of trust and community in our society.

If we don’t take a stance now, then: welcome to unreality.