Oliver Stone, Carl Jung, and an AI Walk into a Bar
The film Platoon is a Jungian parable. Carl Jung proposed that within every individual lies an unconscious, darker aspect—the shadow—that must be integrated into the psyche to achieve wholeness. Chris Taylor, played by Charlie Sheen in the film, embodies this psychological struggle as his ego is torn between two forces: Sergeant Elias, representing moral clarity and compassion, and Sergeant Barnes, embodying primal violence and moral compromise. As Chris navigates the horrors of war, his journey is one of reconciliation—acknowledging his shadow and, through the act of killing Barnes, absorbing its darkness. In this act, he becomes less innocent but more complete, moving toward self-awareness. Jung described this process succinctly: “The shadow is a moral problem that challenges the whole ego-personality, for no one can become conscious of the shadow without considerable moral effort.” Like Chris, would-be regulators of AI must reconcile competing forces—principles, rules, and risks—seeking a balance between moral idealism and pragmatic survival.
Chris’ journey of moral integration mirrors the challenge regulators face in their pursuit of effective governance. Just as Chris must reconcile the conflicting forces of Elias and Barnes, regulators grapple with competing frameworks: principles-based, rules-based, and risk-based approaches. Each represents a distinct philosophy, with its own ideals and compromises. Principles-based regulation aspires to Elias-like moral clarity, emphasizing flexibility and ethical judgment, but it can falter in the face of ambiguity or exploitation. Rules-based regulation, like Barnes, prioritizes strict boundaries and enforcement, yet risks rigidity and unintended consequences. Risk-based approaches, meanwhile, demand a calculated balance between the two, though their handicaps are worth considering: subjectivity, regulatory capture, complexity, and resource intensity.
In regulating AI—as in most sectors, perhaps—it would be disingenuous to declare any single approach a silver bullet. The challenge lies in integration: acknowledging the strengths and flaws of each framework, and crafting a system that harmonizes their opposing forces. Reconciling these approaches requires the same “considerable moral effort” Jung described—a conscious and deliberate pursuit of balance that recognizes both light and shadow within the regulatory landscape.
Recently, the United Kingdom hinted at moving towards principles-based AI regulation, just as it has done for the financial sector. How did that work out in 2000, or in 2008? Without minimizing the enormous pain that economic meltdowns cause, their occurrence never precludes corrective measures that eventually, sort of, help: quantitative easing, bailouts, stimulus spending, tightened consumer protection. But what happens in the event of an “AI meltdown”? The European Union, on the other hand, has given itself—body and soul—to risk-based regulation. The enforcement spending required will bankrupt the Union. Furthermore, the regulation has not yet fully entered into force, and it has already been crushed under the weight of the Collingridge dilemma. I am not saying it won’t work. What I am saying is that it has already failed—and the rhetorical pivot after the Paris AI Summit hinted as much.