I have an opinion piece in the New York Times today about my extreme distaste for Sora, the new social video app from OpenAI that was built to create and share fake videos.
“At a time when we are surrounded by fakes and fabrications, Sora seems precisely designed to further erode the idea of objective truth. It is a jackhammer that demolishes the barrier between the real and the unreal. No new product has ever left me feeling so pessimistic.”
This article started because a friend showed me Sora 2, and it immediately provoked an extreme, visceral reaction that I hadn’t experienced before.
I’ve seen many terrible technologies and dumb products over the course of my career, and met many objectionable people too. But watching Sora in action created a very specific sense of sadness and disgust that genuinely surprised me, and I wanted to understand more about it. I know many other folks have struggled to pin down the disquieting, upsetting feeling that AI tools can generate, so I wanted to attempt to capture what it was that left me so hollow.
(If you want a more fun exploration of the same topic, may I recommend The Oatmeal’s view on AI art?)
As is always the case when you’re under editorial constraints like time and space, there’s so much more I wanted to say in the Times essay. But the essential point is there: Whether intentional or not, it is a malicious act to build a system that is designed to inject deepfakes into the body politic.
(This is especially unpleasant when you combine it with a worryingly minimalist approach to safety. The irony of OpenAI calling its post about guardrails “launching Sora responsibly” at the same time it’s letting people generate deepfakes of everyone from Hitler to MLK was not lost on me.)
In the essay, I mention Stafford Beer’s dictum of “the purpose of a system is what it does.”
Mentioning Beer, a somewhat peculiar British management theorist who rose to prominence in the 1950s, is the kind of thing that invites a certain response from some corners. It’s possible to nitpick or disagree with his approach or outlook, and I think his framework is far from perfect. But at its core, the lens is helpful: if a system keeps generating a particular outcome, it is essentially a system made to do that thing.
Cigarettes were once seen as a tasty indulgence; we now see them as cancer sticks. Modern political campaigns were once ways to effect wider societal change; now they are basically voracious, self-propelling engines for raising money. If Sora makes you second-guess what’s real and what’s not, then that is, in the most important sense, what it is.

⌘
It’s worth saying in all this that I would not call myself stridently anti-AI. I tend to agree with the position outlined in Karen Hao’s excellent book Empire of AI, and recently made in shorter form by Anil Dash: there are useful purposes for machine learning and deep neural networks, but this ain’t it.
If you read Karen’s book, or Adam Becker’s More Everything Forever, a searing overview of the thinking that drives many of these companies, you will get a detailed understanding of the costs of the current, hyperscale approach to these technologies. We know about the environmental and energy drains that huge new AI data centers put on society. We know about the endless appetite for more input, with AI models gobbling up vast tracts of information and content, often illegally or against people’s expressed wishes. And we know about the exploitation of labor and the traumatic experiences of the staff who train these neural networks, essentially nudging them toward producing something that is “right” only by constantly watching things that are very, very wrong.
And then, of course, there are the fundamental weaknesses of systems that don’t understand information but simply regurgitate it—machines that produce “hallucinations” and make factually incorrect statements with a glib and unearned confidence.
My issue with Sora is that hallucinations aren’t a bug; they are the point. That’s why I was left so unmoored watching it in action.
You had all the power and the technology, and this is what you do with it?
⌘
One last note: I’m donating my fee for the article to the International Rescue Committee’s Gaza relief efforts. I admire a lot of what the Times does and many of the folks who work for it, but I also think it has taken the wrong stance on a number of important issues—not least the genocide against the Palestinian people. I don’t think a donation is really a satisfactory way to go about things, but it’s what I decided to do.
