The Rehearsal(s)
Watching Nathan Fielder's The Rehearsal feels a lot like living in our world, as organizations similarly attempt to minimize uncertainty through large-scale tests with AI audiences.
In the newest season of The Rehearsal, Nathan Fielder has a thesis: the Fielder Method, in which elaborate reproductions of real-life situations let people hyper-realistically rehearse and prepare for pivotal moments in their lives, can be applied to improve aviation safety.
We begin in medias res, not knowing yet if what we are witnessing is simulated or real: we are inside the cockpit; two men, a pilot and a first officer, are flying the plane, nearing landing; the first officer, noticing that the GPS has them slightly to the right of their charted course, where there are mountains, voices his concern numerous times; the pilot repeatedly dismisses him; as the plane continues to descend, dangerously approaching terrain, a tense, brooding, elegiac symphony fills the stubborn silence from the pilot; the plane explodes on impact. The frame hangs on the first officer for a beat—his head slumped over his seat, his neck slack, as flames outside the window engulf the plane—before panning left and closing in on none other than Nathan Fielder, standing solemnly, or ominously, amid the blaze. In the next shot, we see the whole set: it’s Nathan’s diorama, and we’re living in it.
The show’s concept is as slippery as Nathan’s reality, an infinitely recursive hall of mirrors reflecting as much as distorting the genres, motivations, truths, and bits it encompasses. Later in the episode, Nathan, apparently on the phone with an airline representative, attempts to explain the project: “So, you know, we’re really trying to make a somewhat sincere effort to explore and develop new ways to improve pilot communication in the cockpit,” he says. “And you said it’s a documentary?” the woman probes on the other end of the line, still, as we are, unclear. “Yeah. I mean, I would use that term loosely, but yeah,” he replies, unconvincing in his equivocation. As Nathan, still taking the call on speakerphone, gets up from his desk and walks to another office in the building, the woman’s voice sounds twice: once from inside the office, and once from the speakerphone.
It’s a pretend phone call, a rehearsal production for the production of The Rehearsal. The whole show is like this, a kind of ineluctable doublespeak where, through careful replication, reality can be doubled—and become, itself, contrived.
At its core, The Rehearsal is an exploration of the degree to which the whole human experience—and all of its uncertainties—can be controlled through simulation, emulation, and practice. In the second episode, drawing from his first television job as a junior producer on Canadian Idol (it’s hard to believe that this, too, isn’t a bit until we see the undeniably Y2K photos of a 23-year-old Nathan Fielder), Nathan wonders if, by having pilots reject earnest contestants of a fabricated singing competition, he can help them overcome their aversion to confrontation in the cockpit. Beneath Nathan’s ostensible ambition to improve pilot communication, however, lies a private goal to become more likeable.
Over the course of the experiment, Nathan notices that one female pilot receives consistently high likeability scores from her pool of contestants, even from ones she rejects. Stiltedly, like a robot trying to become a real boy, Nathan strives to emulate her behavior but continues to receive disappointing scores: a 2, a 4, a 3. Confounded, he asks one of the contestants what differentiates the female pilot’s approach from his and how he can model it better. “I don’t know if that’s… you can really control, like, what your natural energy is, you know?” the contestant replies, after struggling to articulate the ineffability of the woman’s warmth and human heart. In a private reflection, Nathan voices over, “Even though it was hard to hear, I disagreed with this man. I believe that any human quality can be learned, or at least emulated. Sometimes, it just takes time.”
Thus begins Nathan’s progression from manipulative reality television host to mad scientist, as he plumbs the mechanisms of how humans become themselves. Like any good mad scientist, Nathan tests his hypothesis first on animals: his first experiment involves dogs, genetic clones of a family’s dead pet that, much to the family’s disappointment, bear little resemblance in personality and temperament to their old dog. With the nature variable fixed, Nathan tests whether putting the clones through a simulation of the original dog’s upbringing—controlling the nurture variable—will close the delta between the clones and the deceased dog. The results are mixed: one clone learns to climb on the couch from its cat sibling but never responds to its owner’s diabetic attacks as the original dog did.
He then begins phase two of his trial, this time making himself the subject of the experiment. Fascinated by the memoir of pilot Chesley “Sully” Sullenberger, who successfully crash-landed a plane on the Hudson River, Nathan lives as Sully, from infancy to adulthood, taking method acting to its (il)logical extreme to get inside every nook and cranny of the pilot’s head. The scenes verge on performance art, as Nathan at one point is hoisted by hooks in his diaper into a giant crib and at another point nurses from the breast of a giant puppet-mother. The ever-increasing absurdism underscores the perversity of the project, something which has always disconcerted us: humans attempting to play God.
But is the show really so far-fetched? Can human behavior really be recreated, simulated, and predicted? Lately, it seems as though technologists are trying to answer the same sorts of questions.
If you’ve ever wished that you could be in two places at once, companies like Delphi are enabling people to create digital clones of themselves so that they can scale their content and reach (and so that you can talk to a probabilistic version of HBS lecturer Jeff Bussgang or even historical figures like Machiavelli and Caesar, if you really want to get into taking over the world). “Your Mind. Now on Demand” is the headline; “Future-Proof Your Knowledge,” in an oblique promise of immortality, is another value proposition. The whole concept is redolent of the first episode of the latest season of Black Mirror—yes, that one, the one Danielle told you not to watch first—where a woman’s brain tumor causes her to lose consciousness, prompting her husband to sign up for a predatory healthcare subscription that will digitize and stream back her mind through the vessel of her body. I don’t watch Black Mirror anymore because I feel as though I’m living in a perpetual episode.
Other companies are creating digital clones of society to enable businesses to conduct cheaper, faster, higher-fidelity market research in AI sandboxes before they take new products and features to market. Think: AI-simulated focus groups, surveys, A/B tests, and experiments. Of course, one can leverage synthetic audiences to understand—and manage—outcomes well beyond the private sector and beyond questions as benign as which value proposition for oat milk will resonate most with young professionals with disposable income. With Viewpoints.ai, lawyers can forgo expensive jury consultants and simulate jury responses to different arguments and presentations of evidence. And with Aaru, politicians can fine-tune their messaging to different voter profiles using its Dynamo model, while governments can “[c]raft messages that hit harder than any weapon” using Seraph.
(Edit, February 2026: Simile just emerged from stealth with $100M in funding led by Index Ventures, claiming to offer “the first AI simulation of society, populated by agents based on real humans,” with the all-encompassing vision of “simulating entire worlds: trillions of interacting decisions across individuals, organizations, cultures, and states.” Funnily enough, the first sentence of the blog post introducing the company shares Fielder’s subject: “Pilots don’t train with real passengers.”)
It’s worth noting that several of these names—Delphi, as in the Oracle, and Seraph, as in the highest-ranking angels of God—imply the elevation of human insight and control to the plane of religious, prophetic vision. Nathan the mad scientist (and amateur magician) finds himself in good company these days. Aaru “is a multi-agent system that recreates the world using the same principles that govern reality”; is The Rehearsal not the same? According to Aaru, “The human brain is incapable of truly comprehending hypotheticals, but simulation environments are entirely configurable to any time, media environment or location.” Does the Fielder Method not share the same premise?
Data-driven decision-making in areas that feel sacred—such as Netflix’s parsing television shows for their moments of highest engagement to inform new content investments—isn’t new. But lately, as AI has proven able to successfully manipulate people’s perceptions of reality, the pitch of AI-powered, probabilistic decision-making takes on a new timbre. It upends our grand assumptions about the exceptionalism of human beings, leaving bare and small our sense that there’s something special about us that technology will never be able to replicate and therefore replace—that we are more than just base animals with predictable instincts, that we have free will, that we have souls. As the singing competition contestant argued, in classic Angeleno fashion, we have “auras,” or “energies,” both ineffable and inimitable—or so we like to believe.
Just as The Rehearsal grows increasingly unsettling, so too do real-world extrapolations of simulation-driven decision-making.
Let’s take the examples of love and friendship (leaving aside the obvious question of political control). You could imagine a world where people deploy AI agents to pick and court their love interests, or text their friends back. Indeed, this world already exists with companies like Rizz (the AI dating assistant) and Biography (the “world’s first socially aware AI assistant”). But what if you could simulate, before investing any time in a potential love interest, your entire partnership with that person to determine whether to pursue a real relationship?
At the end of the first season of The Rehearsal, Nathan comes to the realization that “Life’s better with surprises,” but the belief is loosely held as he quickly doubles back on it, adding, “I mean, some things you want to be prepared for, but you know what I mean.” Do we? Sure, we want to avoid plane crashes. But do we want to avoid heartbreak?
Aaru claims that its models “aren't just products. They're crystal balls.” It’s a tantalizing promise of certainty, of being able to read ahead to the last page of the story and know how it ends. I tend, perhaps unsurprisingly, to side with the woo-woo Los Angeles singing contestant. Maybe an AI twin can approximate my “aura,” but—to join the AI companies in borrowing from Greek mythology—its efforts will always be Sisyphean, its frustrations always Tantalean, always approaching, never reaching.
I find that there’s a lot of freedom in knowing how AI works, fundamentally. It’s not a crystal ball, or an oracle, or an angel at the side of God; it’s a very advanced, educated guessing machine. It’s all probabilistic. Even if you’re not intentionally trying to subvert AI’s expectations of you to assert your uncontainable individuality, like a teenager rebelling against an arrogant and controlling parent, sometimes you will. Because yeah, maybe nine times out of ten you’ll make a given choice. But one of those times, you won’t.
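If you want to see what I mean by “educated guessing machine,” here’s a toy sketch in Python. The coffee-order scenario and its probabilities are invented for illustration (no real company’s model is this simple); the point is only that even a perfectly calibrated model of you knows your distribution, never the particular draw.

```python
import random

# A toy "model" of one person's coffee order, with made-up probabilities.
preferences = {"oat milk latte": 0.9, "black coffee": 0.1}

def predict():
    # The model's best guess: the single most probable choice.
    return max(preferences, key=preferences.get)

def live(days=10):
    # What actually happens: each day is a fresh sample from the distribution.
    return random.choices(list(preferences), weights=preferences.values(), k=days)

week = live()
surprises = sum(1 for day in week if day != predict())
print(f"Prediction: {predict()!r}; surprises in {len(week)} days: {surprises}")
```

Run it a few times: most stretches, the guess holds nine days out of ten, but nothing in the model can tell you in advance which day will be the exception. That day is yours.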
Much as we want to Marie Kondo the universe—organizing it into neat, controllable little containers—the entropy, or randomness, is always increasing. The universe is indifferent to whether or not life is better with surprises; it doles them out anyway. Remember: Anything Can Happen. Our stories are no exception.
Even if we wanted to, we cannot skip ahead to the end. And it’s this suspense that keeps us living, waiting to see what happens on the next page, tomorrow.



I agree that our ongoing efforts to simulate reality seem Sisyphean. I also like to believe we have some partially inscrutable aura.
Yet, as much as life is the best game, its flaws include no respawns and few checkpoints. There are some situations I’d rather go into prepared. Maybe that can save lives. Maybe that’s what keeps the builders going.
To restate the cliché: “one must imagine Sisyphus happy.”
I think I’ll subscribe to your whimsy.