The AI Reckoning
Part 1: More Regulation on a Sandwich Than on AI
The United States has more federal regulation on a sandwich sold in New York City than it does on the development of artificial intelligence. That is the current legal reality of the most consequential technology ever built — and it tells you almost everything you need to know about how seriously this country’s institutions have treated the question of who gets to shape it and how.
The companies developing it — OpenAI, Anthropic, Meta, Google DeepMind, and xAI, among others — are moving faster than any regulatory body can track and faster, by their own admission, than they fully understand. Their leaders have said publicly that they cannot promise the development of artificial general intelligence will go well for humanity. Yet they have not slowed down, because in a race where the first to build wins, restraint looks identical to defeat. Meanwhile, the people who will live inside whatever future that race produces — not engineers or investors or heads of state — have had virtually no say in how it gets built or who it serves. That gap is what this series is about, and closing it is why it matters that you’re reading this.
I’ve been writing about persuasive technology and platform manipulation for almost a year, and if anything, that work has made the landscape feel more disorienting, not less: more information, more noise, more competing claims about what AI actually is and where it’s actually headed. So when The AI Doc: Or How I Became an Apocaloptimist opened in theaters late last month, I saw it as an opportunity to cut through some of that noise. Academy Award-winning director Daniel Roher spent two years interviewing the people building this technology, the researchers sounding the alarm, and the ordinary people already living inside the consequences. The film sent me straight to The Oprah Podcast episode recorded with Roher and the film’s team, which spent an hour going even deeper into the questions the documentary raises and the people most affected by them.
The filmmakers built the entire documentary around a single word: apocaloptimist. They coined it to describe a third path — between paralyzing doomsday pessimism and naive, unhelpful optimism — for anyone willing to look at what’s coming without flinching and refuse to choose between panic and denial. An apocaloptimist sees the peril and the promise as inseparable and decides that’s not a reason to give up. It’s a reason to get moving. I’m borrowing it for this series because it’s the most accurate description I’ve found of what honest engagement with this topic actually requires.
One of the most powerful voices on that Oprah podcast was a sixteen-year-old girl from Texas named Elliston, whose story clarifies, better than any statistic, what it actually costs when technology outruns the law.
She was fourteen when a classmate took an innocent photo from her Instagram and ran it through an AI application that generated sexualized images of her without her clothing. He distributed those images across social media to humiliate her and eight of her friends. When her mother went to the police, they said there was nothing they could do. The images didn’t meet the legal threshold for child pornography. The boy faced no consequences. Elliston described sitting in her room, rotting in fear and embarrassment and shame, while the images circulated freely. No one — not her teachers, not her school administrators, not local law enforcement — had any framework for what had just happened to her, because the law hadn’t caught up to the technology that made it possible. She was a freshman in high school.
This is not an isolated incident. It is what the gap looks like up close.
Tristan Harris, co-founder of the Center for Humane Technology, has spent years documenting how platform incentives produce outcomes that no individual engineer intended, and no company will claim responsibility for. Elliston’s story is exactly the kind of outcome he’s talking about. On Oprah’s podcast, Harris said: “We should not have eight soon-to-be trillionaires deciding the future for eight billion people.” The people making these decisions are not cartoonish villains — they are operating inside an incentive structure that rewards speed, scale, and market dominance while punishing restraint. Understanding that distinction is the difference between outrage that dissipates and pressure that actually builds into something.
Elliston’s story has a second act. She and her mother fought their way from local police to their congressman to a U.S. Senator, and eventually helped write the Take It Down Act — now federal law — making the creation and distribution of non-consensual intimate AI-generated images a felony. At sixteen, she was named to Time’s 100 Most Influential People in AI. No existing institution stepped in to protect her. The law required a fourteen-year-old’s humiliation as its catalyst. Her story is proof that ordinary people, when they understand what they’re up against, can change the terms.
I should tell you why I care about this as much as I do. A few years ago, I had an experience online that I’m not going to detail here, but it was disorienting enough that I needed to understand how it had happened. So I started learning: about how these platforms work, who builds them, and what they’re actually optimized for. And the more I learned, the harder it became to look away. This newsletter is what came out of that experience. And this series is where it’s been heading.
Over the next six weeks, I’m going to go through all of it with you — why AI is genuinely different from any technology that preceded it, why the people building it cannot tell you how it ends, who is already being harmed, which voices are doing the most serious work on this problem, and what any of us can actually do that has a real chance of making a difference. Not as a gesture. As a strategy.
I’ve been trying to make sense of all of this for some time, and I still feel the weight of it. What I can tell you is that you deserve a clear and accurate picture of what’s happening and what you can do about it. That’s what this series is trying to provide.
The AI Doc is in theaters now, and it is worth two hours of your time. The Oprah podcast episode recorded with the film’s team is free on YouTube. Theaidocgetinvolved.com is where the movement lives.
The race has been underway for some time. The question is whether the rest of us are going to show up for it.
What was your first reaction to AI — and has it changed? Reply and tell me. I read everything.