How Generative AI is destroying society
Two Boston University law professors, Woodrow Hartzog and Jessica Silbey, just posted a preprint of a new paper that blew me away, called How AI Destroys Institutions. I urge you to read it, and reflect on it, ASAP.
Here’s the abstract.
I hope that before the paper is finally published, the authors will insert the word “Generative” before “AI” in the title, since the force of their critique is (as the last sentence of the abstract acknowledges) really about current approaches, not all possible approaches to AI that one could imagine.
But, wow. This is one of the most lucid and powerful articles I have read in years. Here’s the opening paragraph.
If you wanted to create a tool that would enable the destruction of institutions that prop up democratic life, you could not do better than artificial intelligence. Authoritarian leaders and technology oligarchs are deploying AI systems to hollow out public institutions with an astonishing alacrity. Institutions that structure public governance, rule of law, education, healthcare, journalism, and families are all on the chopping block to be “optimized” by AI. AI boosters defend the technology’s role in dismantling our vital support structures by claiming that AI systems are just efficiency “tools” without substantive significance. But predictive and generative AI systems are not simply neutral conduits to help executives, bureaucrats, and elected leaders do what they were going to do anyway, only more cost-effectively. The very design of these systems is antithetical to and degrades the core functions of essential civic institutions, such as administrative agencies and universities.
In the third paragraph, they lay out their central point:
In this Article, we hope to convince you of one simple and urgent point: the current design of artificial intelligence systems facilitates the degradation and destruction of our critical civic institutions. Even if predictive and generative AI systems are not directly used to eradicate these institutions, AI systems by their nature weaken the institutions to the point of enfeeblement. To clarify, we are not arguing that AI is a neutral or general purpose tool that can be used to destroy these institutions. Rather, we are arguing that AI’s current core functionality—that is, if it is used according to its design—will progressively exact a toll upon the institutions that support modern democratic life. The more AI is deployed in our existing economic and social systems, the more the institutions will become ossified and delegitimized. Regardless of whether tech companies intend this destruction, the key attributes of AI systems are anathema to the kind of cooperation, transparency, accountability, and evolution that give vital institutions their purpose and sustainability. In short, AI systems are a death sentence for civic institutions, and we should treat them as such.
§
Another astonishing thing about the paper is its backstory. I was so impressed that I wrote fan mail to the first author, Woodrow Hartzog, and asked how the paper came to be. It turns out that Hartzog and his co-author, Jessica Silbey, originally planned to write a far more positive essay, but reality set in. Honorably, they followed reality where it led:
The origin story of this paper is that this paper was originally intended as a brief and provocative follow-up to a very short little paper we wrote years ago called “The Upside of Deep Fakes,” where we argued that “deep fakes don’t create new problems so much as make existing problems worse…. Journalism, education, individual rights, democratic systems, and voting protocols have long been vulnerable. Deep fakes might just be the straw that breaks them. And therein lies opportunity for [deep, structural] repair.”
However, as we started to build out this paper, we realized that the threat that AI poses to institutions was far more dire than we had anticipated. We also realized we were far too optimistic about the possible appetite for safeguarding institutions. So we tried to write something concise and urgent that laid out the different ways that the design of AI systems worked to enfeeble institutions to the point of collapse.
Read the full paper here.
