The Unbearable Lightness of AI

Milan Kundera’s The Unbearable Lightness of Being is perhaps one of the most contrarian novels ever written. Beneath its surface, it is radically revolutionary. It challenges the widely held belief in second chances, in the cyclical nature of life—an eternal return. For Kundera, each human life happens only once. Every decision produces an unrepeatable experience. That is the lightness of being—both liberating and unbearably heavy. Imagine being fully aware that every choice you make is irreversible. If taken seriously—and it should be—this realization can lead either to paralysis or to total surrender to the magic of the fleeting moment. 


You may have heard of Ilya Sutskever. If not, he was one of Geoffrey Hinton’s brightest students and a co-founder of OpenAI, where he served as Chief Scientist and helped pioneer the GPT models—the "GPT" in ChatGPT, which you now use incessantly. Some call him a genius. He probably is. His public persona suggests both intellectual authority in AI and a strong ethical compass. Perhaps that is why, after leaving OpenAI, his new venture, Safe Superintelligence Inc., raised $1 billion—reaching a reported $30 billion valuation—despite no one knowing exactly what he is building.


According to I. J. Good’s intelligence explosion model, reaching the point Sutskever is striving for—artificial superintelligence (ASI), no matter how well aligned with human values—could trigger a runaway reaction culminating in the singularity. The real question is not whether Ilya is trustworthy but whether ASI can be controlled at all.


Humanity is approaching its moment of truth. Despite millennia of consciousness since the Ice Age, we have failed to integrate our shadow. Now, we are on the verge of realizing there are no second chances. Life isn’t a succession of sequels—it isn’t even a film. It is a book that has always been there, sitting on the desk, its pages waiting to be flipped back and forth until fully understood. But no, homo sapiens never quite felt like reading it. Standing ovation. AI’s greatest contribution to humanity may not be technological but spiritual: the realization that history is linear and that ontology has always been subordinated to eschatology. 


But I digress. Or do I? Here is where I sell you the idea that this perspective is neither utopian nor dystopian, neither accelerationist nor doomerist. ASI could usher in an era of wonders, post-scarcity, even immortality. Or it could bring extinction—or at the very least, an existence not worth living (“You may live to see man-made horrors beyond your comprehension.”). Either way, there will be no second chances. The die is cast. Whatever happens next will be unlike anything humanity has ever witnessed.


[TL;DR] AI’s greatest impact on humanity may not be technological but spiritual: Kundera was right. Whatever is decided, the consequences are irreversible. 

The Monkey in the Machine

I am a chimp in an astronaut suit. What else do you need to know? Seriously.

https://www.themonkeyinthemachine.com