Discussing the history of weapons, George Orwell argued that some, like tanks, naturally lend themselves to despotism because they are complex, expensive and difficult to make, while others, like muskets and rifles, are “inherently democratic.” I’ve always remembered that notion, so when ChatGPT burst into public consciousness in November of 2022, I immediately thought about how much ChatGPT and other Large Language Models (LLMs) looked like a tank.
Since then, the technology has evolved in dramatic and often surprising ways, and today the situation is much less clear — there are now reasons to hope that LLMs will end up less like tanks and more like muskets. In 2022 LLMs were a technology resting in the hands of the few giant tech companies with the expertise, access to data, and deep pockets to create it. There were already various stripes of “AI,” but LLMs seemed more powerful and more centralized than anything that had been seen before. Today there are more players able to build models, a general commodification of models, and a plausible path toward open-source LLMs that can compete at the top tier. Models have also become far more compressed, so that useful work can be done locally rather than through cloud servers controlled by a few big companies.
The technology world has been following the evolution of LLMs with rapt attention — and rightly so. But many people have been asking the wrong questions: Who will win a geopolitical battle for AI dominance, the US or China? Will the technology evolve into human-level “Artificial General Intelligence” (AGI)? When will LLMs start becoming effective “agents,” not just processing information but helping people perform tasks?
Some of these are interesting questions, but none are as important for the freedom and empowerment of ordinary people as the question: Who will AI empower? That is a far more urgent question than the stock valuation of big tech companies, speculative musings about AGI, or the state of a US-China race for dominance. The shape of LLM science has crucial implications for intellectual freedom, scientific research, the democratization of information, control over access to technology, privacy, and ultimately democracy itself.
Beyond Orwell’s “tank vs. musket” framing, the notion that technologies carry an inherent politics has also been explored by thinkers such as Langdon Winner, who examined how a technology’s qualities can reflect and reinforce particular power structures. Nuclear power, he argued, inherently requires hierarchical management, rigorous security regimes, and centralized control because of its scale, complexity, and danger. Solar power, on the other hand, is more compatible with the values of decentralization and democracy because it can be implemented at small scales, requires less specialized expertise, and doesn’t pose catastrophic risks that demand intense security measures.
Longstanding questions about AI — and new ones
Beyond the built-in characteristics of a technology, of course, a lot depends on the particularities of design and deployment. For example, the copy machine, which was used by Soviet dissidents, Daniel Ellsberg, and others to distribute forbidden information, might be a more inherently democratic technology than the broadcast television station, which naturally lends itself to despotic government. But the effects of centralized broadcast television can be neutralized through careful protections such as independent control, free speech rights, and the guarding of diversity and competition. (Before the internet, those were very prominent public issues in the United States to which groups like the ACLU devoted a lot of attention.) Conversely, the potentially pro-freedom tendencies of other technologies can be neutralized — for example when copy machines include fingerprinting technology that can link printouts to particular machines and even operators. Solar power may lend itself to decentralized deployment, but it could certainly be implemented in a centralized, authoritarian manner.
The various forms of AI have long raised civil liberties and fairness issues, especially those around transparency, the composition of training data, bias, automated decision making, inappropriate deployments, and due process. Many of those issues have remained substantially the same whether the algorithm is a neural network trained on millions of data points or a formula in a spreadsheet.
But the advent of LLMs (built on transformers, a technology entirely different from that behind most previous AI products) has intensified and expanded those issues — and raised new ones:
- LLMs intensify transparency concerns because they operate in an even more opaque and unpredictable manner than other AI systems.
- The data on which they are trained can be even harder to evaluate.
- The models’ appearance of greater intelligence is likely tempting more people to use them in more decisionmaking roles, even as their biases and other irrationalities remain at least as strong as in other forms of AI.
- LLMs appear to be supercharging the use of AI in communications and video.
The same policy battles that have been fought over automated algorithms for years will continue to be fought over LLMs. But LLMs also increase the stakes of all of the above because they are beginning to play an educational and linguistic role that no other form of AI has approached — potentially influencing how people write, communicate, and even think across a wide variety of contexts.
The stakes
Aside from longstanding issues around the deployment of AI, it is important to consider Orwell’s question — whether the technology itself is shaping up to be inherently biased toward democracy or authoritarianism. And that will depend both on policy choices that we make and on how the science develops. On one hand, it’s possible that the technology has had its quantum evolutionary leap and is now stalling out — that it is and will remain a “normal technology,” even if a gradually transformative one such as electricity or the Internet. On the other hand, if the technology does improve rapidly, including conceivably to some form of human-like conscious intelligence, then the power profile of this technology will matter all the more.
The worst case scenario is that we end up in a world where one or a handful of entities control the LLMs that (for whatever reason) everyone uses. If that happens, those players will come to wield enormous power, even if the technology improves only gradually or progress stalls out.
- Those who control this tool will gain the dangerous power to filter the acquisition of knowledge as people increasingly use these tools for research about the world, analysis of their own data, the production of writing and media, and perhaps as agents that perform tasks besides fetching information.
- They will be able to do that by choosing which deep-seated biases to try to correct and which to ignore, and perhaps by instantiating certain biases into the products in the first place — biases that may be subtle and very hard to detect or measure. Think of a more subtle version of the “Great Firewall of China,” which the government uses to engage in mass filtering and censorship of what information people in China can access. Large companies are temperamentally conservative and do not generally support serious challenges to a status quo of which they are themselves a significant part. And historically, big companies have accommodated themselves to authoritarian governments.
- An LLM monopoly or oligarchy is also likely to try to keep any secrets about advances and improvements in the technology to themselves, stifling democratic access and scientific inquiry. And they could have the power to exclude some parties from using their LLMs at all, much like the credit card oligopoly today, which blocks payments to sexually oriented businesses and to journalists disfavored by government officials.
- They’re also likely to surveil everyone who uses the models. From the moment that ChatGPT arrived on the scene, there has been speculation that LLMs would displace search and become an enormous source of advertising revenue. The data that can be collected ranges from people’s LLM queries — like search, an enormous source of sensitive data — to the documents and videos they upload for analysis, to the text of personal therapeutic or “friend” chats.
In short, LLMs could provide the newest form of dangerously concentrated power, following in the footsteps of the big tech giants of today.
In better-case scenarios, on the other hand, LLMs could empower individuals in positive ways. Rather than remaining in the hands of a few, a thousand flowers could bloom as a flourishing marketplace of diverse models trained for all kinds of specialties emerges, many of them transparent and open source and small enough to run on local computers under the control of individuals. Just as the printing press broke the medieval Catholic Church’s near-monopoly on the ability to read, interpret, and publish the written word, LLMs might democratize skills and abilities that are currently held by only a relatively small elite, such as the ability to program computers and create apps. They could allow reporters or citizens to search and analyze overwhelming volumes of government or corporate data for reporting and oversight, put the ability to create a feature-length film in the hands of anyone, and democratize many other things that are now the exclusive domain of experts or well-funded businesses.
What to watch
So how are we to evaluate whether LLMs are tilting in democratic or authoritarian directions? The technology is developing rapidly and unpredictably, and since the advent of ChatGPT there have been dramatic developments that bear directly on the balance between the above outcomes. In particular, there are three interrelated areas that have significant implications for freedom:
- The degree to which training and running an LLM emerges as a form of big science — large-scale, high-cost projects like the Manhattan Project, physics supercolliders, or space telescopes — or whether training the models people want to use ends up being broadly accessible.
- The ability to run desirable models on local hardware, which would limit the ability of AI titans to engage in gatekeeping, censorship, and privacy invasion.
- The health of open source models and research, which will help ensure that no company enjoys a monopoly on models people want to use.
I will take a closer look at these areas in follow-up posts.
Published August 14, 2025 at 11:14PM