A terribly wise man once said—actually, he has said it again and again—that the core crisis of our time is an institutional crisis. The institutions of civil society—public and private, secular and religious—have seen public trust in them plummet in the polls year after year, like a Colorado Rockies fan watching the baseball standings. It’s indicative of the crisis—and the culture that feeds it—that the social-media slogan “burn it all down” first caught on among the cosplay revolutionaries of the Bluesky left and then spread to the cosplay nihilists of the Twitter right.
At the intersection of nihilism and opportunism—which is to say, where we are right now—one can find the inevitable technology enthusiasts. Buoyed by a relentless optimism and unburdened by any sense of community or history, these cheerful Vandals argue that “burning it all down is good, #actually.” They have just the match to start the blaze: AI. There is no shortage of current legal scholarship on AI. (Surely a sentence with a double meaning. There’s lots of current scholarship about AI—and, no one doubts, lots of scholarship written by it, “legal scholarship on AI” in the sense that an addict is “on meth.”) There’s less on the landscape it is altering. So it’s a pleasure to find an article that focuses less on how AI is remaking everything, and more on what AI is remaking—or killing.
How AI Destroys Institutions, by Woodrow Hartzog and Jessica Silbey, focuses our attention on our civic institutions, which “form the invisible but essential backbone of social life.” Hartzog and Silbey argue that AI is “a death sentence” for these institutions. Even an AI non-enthusiast may find a lot to disagree with in this short, sharp paper. But the authors focus their lens commendably. It’s not a doctrinal paper. It does not, in fact, mention the Constitution. Not everything that is essential to our constitutional order does. (Arguably the most timely constitutional law book of the day is this one, and the Constitution won’t be the most important element there, either.) But the ongoing crisis of our civic institutions is fundamentally constitutional in nature, and it impairs our ability to respond to the more conventionally “constitutional” problems we face. AI is deeply embedded in both and should be understood as such.
Following the standard literature, Hartzog and Silbey define institutions as “the commonly circulating norms and values covering a recognizable field of human action, such as medicine or education”—the “rules of the game” for such practices. They distinguish institutions from organizations, while noting organizations’ key role as “the material instantiation of institutions.” Institutions, they argue, are not simply machines for the delivery of outputs, such as an educated pupil or a healthy patient. They “act in terms of extra-organizational social processes according to customs and norms,” including norms of hierarchy that “enable accountability” and norms of independence that avoid corruption. These norms “infuse the organization with value and legitimacy beyond the technical requirements of the task at hand.” The transmission and gradual adaptation of “knowledge and practices across generations of people” cultivates a sense of commitment for those within the institution, and a sense of legitimacy for those who benefit from it. These, they argue, are the features that AI threatens.
Hartzog and Silbey indict AI on three counts. First, they charge that AI “undermine[s] and degrade[s] institutional expertise.” Offloading cognitive functions disrupts the slow “structured transfer of knowledge and know-how” by which expertise is nurtured and maintained in an institution. It substitutes mere technology for true technê.
Second, it “short-circuits decisionmaking.” On this view, institutions generally function best and most responsibly through hierarchical structure, according to institutional rules whose legitimacy is accepted, and with “critical points of reflection and conflict” that enable them to decide better and, sometimes, to change course. Short-cutting those practices corrodes institutional structures “that require buy-in for legitimacy, adaptability, and longevity.” And its removal of opportunities for creativity and dissent deprives institutions of “a source of moral courage and insight, which is necessary for institutions to adapt and thrive.” Having killed technê, it sticks the knife in aretê as well.
The final and probably most indisputable charge is that AI “isolates humans.” The “hyper-personalization” it both caters to and encourages “displaces and degrades human-to-human relationships.” People who are unused to “human interactions with all their friction and diversity” and unwilling to adhere to “institutional roles and rules” will not respect, accept, preserve, or even understand institutions and institutional purpose, leaving only “social chaos or the rule of the powerful.” Reader, look around you.
Hartzog and Silbey note the ways in which these phenomena are already reflected in myriad uses of AI to “streamline” government and its functions, in a crude and “opaque” way that has short-circuited judgment and “encouraged abuse, self-dealing, and oppression.” (Think DOGE.) But their primary illustrations and areas of concern lie elsewhere.
In law, they warn that “embedding AI systems in legal decisions” threatens to destroy the accountability to human judgment, expressed through reasons, that is necessary for the rule of law. In higher education, they worry that whatever gains are realized by using AI to aid research will be outweighed by its destructive effects on our commitment to higher education as a structured, social, time-extended, human activity of transmitting and developing knowledge and the love of inquiry. They worry likewise about the press, where AI slop has already inundated the public sphere with cheap and/or false information. Although journalists have attempted to leverage AI for productivity, the authors argue that AI systems rob the press of the larger institutional practices and public trust that enable it both to make complex judgments and to “speak with institutional authority and avoid sycophancy.”
Finally, they argue that the isolating and alienating effects of offloading human functions to AI systems will erode “social capital and norms of reciprocity.” In the end, “our center—democracy and civil life—will not hold.” Claude or Grok will scour the collective knowledge of humanity to give us tips on bowling—but we will all be bowling alone.
How AI Destroys Institutions is an article in the prophetic genre, and its warnings are appropriately disquieting. Hartzog and Silbey offer a few prescriptions of a sincere but sweepingly general nature: focus on “root causes,” address inequality, act locally, and the like. But these suggestions are almost afterthoughts compared to the warning: “AI systems are like a cancer in our struggling democracies,” a “death sentence for civic institutions,” an acid that “weakens to the point of demolition the institutions we created and sustained to survive and thrive together.” No Constitution can sustain a society that has lost any interest in the very concept of being constituted.
One obvious response to this piece is that AI is no more perilous to, say, journalism than was the shift from typewriters and Linotypes to the computer. In some respects, this is obviously true: institutions not only survive but benefit from technological change. (The typesetters might beg to differ. But then, their resistance to change led to a Canute-like strike that didn’t fend off the technology but did help kill four newspapers, including the great New York Herald Tribune.) The criticism has force. But it undersells this article’s key virtue: its focus, not on technology as such, but on how technology affects the social and hierarchical elements that give purpose, legitimacy, commitment, and longevity to civic institutions.
Another question readers may have about this grim forecast is conveniently raised in a response by Andrew Perlman, dean of Suffolk University Law School. It requires two asterisks. First, law school deans—who are both nodding to reality and bowing to donors and competitive forces—are mostly leaning in on AI (or saying that they are); Suffolk is doing so enthusiastically. Second, Perlman’s opening note discloses that although he “conceived of the substance of nearly all the points” in his response, “Claude was exceptionally helpful in drafting the text.” (Perlman adds that he did “draft[ ] the footnotes and citations largely the ‘old-fashioned way.’” That seems rather a case of the tail assigning the mindless scutwork to the dog.) Make of these facts what you will.
Nevertheless, Perlman—or, I guess, “Perlman”—raises the obvious question skillfully: AI “destroys institutions compared to…what?” Hartzog and Silbey acknowledge that “our institutions have been fragile and ineffective for some time.” But they describe civic institutions in an abstract or idealized form, focusing on how they are supposed to function rather than on their current flaws. So the question is not whether AI will rob universities of what makes them special. It’s how various AI tools will change actually existing universities, both for better and for worse. Similarly, “‘imperfect GenAI assistance’ versus ‘no assistance at all’ is not the same comparison as ‘imperfect GenAI assistance’ versus ‘competent human lawyer.’” For individuals facing access-to-justice barriers, the former comparison is closer to their reality than the latter.
Perlman’s response is not unreservedly optimistic. But he counsels “calibration” rather than “paralysis,” inviting us to examine real institutions and their failings, sort carefully among different AI applications, and ask how to address AI “in a way that maximizes benefits and minimizes institutional costs.” One way to think about the difference between the two articles’ prescriptions for responding to AI lies precisely in those words. Perlman is arguing for ordinary cost-benefit analysis. One might think of Hartzog and Silbey as arguing, à la Posner and Sunstein, that the threat AI poses to our bedrock civic institutions is so catastrophic and irreversible that it demands a precautionary-principle approach.
I think the “compared to what” response to this article is somewhat beside the point. But it does suggest that looking at the longer-term corrosion of our civic institutions from the inside, and the decline of trust in them from the outside, would give us a better sense of the ways in which AI both emerges from and responds to these changes. The flattening of hierarchies and the lack of commitment to institutional roles and rules that Hartzog and Silbey see as a consequence of AI certainly preexist it. To take an example from an exemplar of one of the institutions they discuss, the flattening of hierarchies was already fully on display in 2020, when reporters at the New York Times revolted against editorial-page editor James Bennet, and the paper’s publisher acceded to demands for Bennet to attend all-staff meetings and later forced his resignation.
In a properly functioning journalistic organization, Bennet would have told the restive reporters that the news and editorial pages are separate, and that they have no more voice in such matters than someone in the accounting department does. He would, in short, have told them to do their jobs and mind their own business. Publisher A.G. Sulzberger would have backed him up, instead of capitulating to a staff revolt undertaken against an entirely separate division of the paper. He might then have fired Bennet for not doing his job, but not because of the ultra vires complaints. But the reporters’ statements (and, inevitably, tweets) made clear that many didn’t see things that way. (Notably, the sentiment was especially strong among young reporters who had come up on the Internet rather than the print side of the paper.) They felt entitled to a voice concerning the whole paper; they saw the paper’s traditional divisions as antiquated or irrelevant; and they were indifferent to hierarchy. And Sulzberger, whose job it was to preserve the institution, but who was pinioned by revolt from within and a precarious business model from without, met these forces with a spine of jelly.
One could say similar things about university administrators’ hot-and-cold oscillations over encampments in 2024, and student and faculty complaints that the university is “undemocratic,” as if students’ limited role in governing academic institutions were a bug rather than a feature. Or about White House staffers issuing anonymous protest letters. All of these things signaled that the notion of committing to a purpose-driven, hierarchical institution and its roles and rules has, first slowly and then very quickly indeed, lost currency. And institutions’ leaders, even setting aside those—like the current president—who lack both the barest knowledge of and the slightest interest in institutions and their norms, have responded with confusion, inconsistency, and surrender. They have been training for years to capitulate—first to their own constituencies, then to Trump’s White House, and now to technology. The center certainly cannot hold if those at the center of the center cannot convince either the members of their own institutions or, ultimately, themselves to stand fast.
All of this suggests both a strength of How AI Destroys Institutions and a looming question about it—and about how we live now. The key strength of the article is that it doesn’t focus on AI as either a magical key to knowledge and efficiency or an infernal engine of falsehood and error. Nor does it focus on institutions as producers of mere outputs. Instead, it rightly demands that the reader see institutions as uniquely valuable social processes, driven by norms and practices. It is unashamed to say that they are hierarchical and rule-based in nature, and that destroying these features destroys the institution—and destroying these civic institutions in turn destroys the constitutional order of which they form the supporting architecture. Its contribution is to ask how AI will affect these very features.
What it does not and perhaps cannot answer is whether, in simultaneously overemphasizing the “autonomy” of atomized and isolated individuals and undermining the authority and autonomy of institutions themselves, AI is simply expressing a preexisting general will. On this view, AI isn’t a match helping a dangerous minority to “burn it all down.” It’s an accelerant, poured over a house that’s already on fire, in a world full of arsonists.