Last week, Anthropic, the AI company behind the Claude chatbot, settled a landmark class-action lawsuit for $1.5 billion. The amount is very large in the context of copyright litigation, but it represents only a fraction of Anthropic’s estimated $183 billion valuation.
Authors and publishers, led by figures like Andrea Bartz and Charles Graeber, accused Anthropic of illegally downloading millions of pirated books from shadow libraries like Library Genesis to train Claude, violating copyright law. The settlement will compensate roughly 500,000 authors and publishers at about $3,000 per affected work. While Anthropic did not admit liability, it agreed to destroy the illicit files and pay authors, avoiding a trial. The Authors Guild hailed the outcome as a precedent for licensing content in AI development.
This case raises questions about property rights in the age of Large Language Models (LLMs). Courts have ruled that recombining existing texts into new outputs qualifies as fair use, but the Anthropic lawsuit hinged on the piracy itself, not the training process. What should the law say about compensating authors whose works indirectly fuel AI innovation? The answer may shape not just fairness but the future quality of AI.
The term “AI slop” increasingly describes low-quality, machine-generated text produced with minimal human oversight. If human writing ceases to be a viable career due to inadequate compensation, will LLMs lose access to fresh, high-quality training data? Could this create a feedback loop in which AI models, trained on degraded outputs, stagnate? This dilemma mirrors the classic “access versus incentives” debate in intellectual property law: access to a rich corpus of human-written text today enables entrepreneurs to build powerful, affordable LLMs, but without incentives for human authors to keep producing, the well of quality training data could run dry.
This case also blurs the traditional divide between copyright and patents. Copyrighted material, once seen as static, now drives “follow-on” innovation derived from the original work. That is, copyright protection in this case affects AI-generated content influenced by the copyrighted material in a way that previously applied to new technology built on patented technical inventions. Thus, “access versus incentives” theory applies to copyright as much as it once applied to patents. The Anthropic settlement signals that intellectual property law, lagging behind AI’s rapid evolution, must adapt. Authors may need compensation, but halting AI progress to resolve legal disputes risks stifling innovation.
At $1.5 billion, the settlement’s size sends a clear message: bypassing legal channels can be costly. This could deter smaller AI firms from entering the market, especially as similar lawsuits loom against other companies. The precedent may push developers toward licensing deals or public domain data, raising costs and potentially concentrating the AI industry among deep-pocketed players like Anthropic, backed by billions in funding. Smaller startups, unable to afford licensing or litigation, may struggle. This would become a case of regulatory barriers favoring incumbents. Could Anthropic’s willingness to pay such a hefty sum reflect a strategic move to fortify a moat around well-capitalized AI firms, discouraging upstarts?
In a 2024 post, I speculated that AI companies, flush with cash, might strategically hire writers to replenish the commons of high-quality text. In that post, I wrote:
“AI companies have money. Could we be headed toward a world where OpenAI has some paid writers on staff? Replenishing the commons is relatively cheap if done strategically, relative to the money being raised for AI companies.”
The Anthropic settlement partly validates this idea. In an AI arms race in which Mark Zuckerberg spends millions luring engineers from OpenAI, $1.5 billion seems like a modest price for a shot at AI dominance.
For now, the Anthropic case marks a pivotal moment. It underscores the need for a balanced approach and sets the stage for how AI and intellectual property law will coexist in an era of unprecedented technological change.
Then again, at some point, LLMs might reach a take-off point where they are so intelligent and agentic that they no longer need new input from humans. That is a horizon beyond which I cannot see.
Joy Buchanan is an Associate Professor of economics at Samford University. She blogs at Economist Writing Every Day.