Thursday, 14 August 2025

ACLU: Many Are Focused on the Wrong Questions When it Comes to AI


Discussing the history of weapons, George Orwell argued that some, like tanks, naturally lend themselves to despotism because they are complex, expensive and difficult to make, while others, like muskets and rifles, are “inherently democratic.” I’ve always remembered that notion, so when ChatGPT burst into public consciousness in November of 2022, I immediately thought about how much ChatGPT and other Large Language Models (LLMs) looked like a tank.

Since then, the technology has evolved in dramatic and often surprising ways, and today the situation is much less clear — there are now reasons to hope that LLMs will end up less like tanks and more like muskets. In 2022 LLMs were a technology resting in the hands of the few giant tech companies with the expertise, access to data, and deep pockets to create it. There were already various stripes of “AI,” but LLMs seemed more powerful and more centralized than anything that had been seen before. Today there are more players able to build models, a general commodification of models, and a plausible path toward open-source LLMs that can compete at the top tier. Models have also become far more compressed, so that useful work can be done locally rather than through cloud servers controlled by a few big companies.

The technology world has been following the evolution of LLMs with rapt attention — and rightly so. But many people have been asking the wrong questions: Who will win a geopolitical battle for AI dominance, the US or China? Will the technology evolve into human-level “Artificial General Intelligence” (AGI)? When will LLMs become effective “agents,” not just processing information but actually helping people perform tasks?

Some of these are interesting questions, but none are as important for the freedom and empowerment of ordinary people as the question: Who will AI empower? That is a far more urgent question than the stock valuation of big tech companies, speculative musings about AGI, or the state of a US-China race for dominance. The shape of LLM science has crucial implications for intellectual freedom, scientific research, the democratization of information, control over access to technology, privacy, and ultimately democracy itself.

Beyond Orwell’s “tank vs. musket” framing, the notion that technologies carry an inherent politics has also been explored by thinkers such as Langdon Winner, who examined how a technology’s qualities can reflect and reinforce specific power structures. Nuclear power, he argued, inherently requires hierarchical management, rigorous security regimes, and centralized control because of its scale, complexity, and danger. Solar power, on the other hand, is more compatible with the values of decentralization and democracy because it can be implemented at small scales, requires less specialized expertise, and doesn’t pose catastrophic risks that demand intense security measures.

Longstanding questions about AI — and new ones
Beyond the built-in characteristics of a technology, of course, a lot depends on the particularities of design and deployment. For example, the copy machine, which was used by Soviet dissidents, Daniel Ellsberg, and others to distribute forbidden information, might be a more inherently democratic technology than the broadcast television station, which naturally lends itself to despotic government. But the effects of centralized broadcast television can be neutralized through careful protections such as independent control, free speech rights, and the guarding of diversity and competition. (Before the internet, those were very prominent public issues in the United States to which groups like the ACLU devoted a lot of attention.) Conversely, the potentially pro-freedom tendencies of other technologies can be neutralized — for example when copy machines include fingerprinting technology that can link printouts to particular machines and even operators. Solar power may lend itself to decentralized deployment, but it could certainly be implemented in a centralized, authoritarian manner.

The various forms of AI have long raised civil liberties and fairness issues, especially those around transparency, the composition of training data, bias, automated decision-making, inappropriate deployments, and due process. Many of those issues have remained substantially the same whether the algorithm is a neural network trained on millions of data points or a formula in a spreadsheet.

But the advent of LLMs (built on transformers, a technology entirely different from that underlying most previous AI products) has intensified and expanded those issues — and raised new ones:

  • LLMs intensify transparency concerns because they operate in an even more opaque and unpredictable manner than other AI systems.
  • The data on which they are trained can be even harder to evaluate.
  • The models’ appearance of greater intelligence is likely tempting more people to use them in more decision-making roles, even as their biases and other irrationalities remain at least as strong as in other forms of AI.
  • LLMs appear to be supercharging the use of AI in communications and video.

The same policy battles that have been fought over automated algorithms for years will continue to be fought over LLMs. But LLMs also increase the stakes of all of the above because they are beginning to play an educational and linguistic role that no other form of AI has approached — potentially influencing how people write, communicate, and even think across a wide variety of contexts.

The stakes
Aside from longstanding issues around the deployment of AI, it is important to consider Orwell’s question — whether the technology itself is shaping up to be inherently biased toward democracy or authoritarianism. And that will depend both on policy choices that we make and on how the science develops. On one hand, it’s possible that the technology has had its quantum evolutionary leap and is now stalling out — that it is and will remain a “normal technology,” even if a gradually transformative one such as electricity or the Internet. On the other hand, if the technology does improve rapidly, including conceivably to some form of human-like conscious intelligence, then the power profile of this technology will matter all the more.

The worst case scenario is that we end up in a world where one or a handful of entities control the LLMs that (for whatever reason) everyone uses. If that happens, those players will come to wield enormous power, even if the technology improves only gradually or progress stalls out.

  • Those who control this tool will gain the dangerous power to filter the acquisition of knowledge as people increasingly use these tools for research about the world, analysis of their own data, the production of writing and media, and perhaps as agents that perform tasks besides fetching information.
  • They will be able to do that by choosing which deep-seated biases to try to correct and which to ignore, and perhaps by instantiating certain biases into the products in the first place — biases that may be subtle and very hard to detect or measure. Think of a more subtle version of the “Great Firewall of China,” which the government uses to engage in mass filtering and censorship of what information people in China can access. Large companies are temperamentally conservative and do not generally support significant challenges to the status quo of which they are a significant part. And historically, big companies have accommodated themselves to authoritarian governments.
  • An LLM monopoly or oligarchy is also likely to try to keep advances and improvements in the technology secret, stifling democratic access and scientific inquiry. And such players could have the power to exclude some parties from using their LLMs at all, much like the credit card oligopoly today, which blocks payments by sexually oriented businesses and journalists disfavored by government officials.
  • They’re also likely to surveil everyone who uses the models. From the moment that ChatGPT arrived on the scene, there has been speculation that LLMs would displace search and become an enormous source of advertising revenue. The data that can be collected ranges from people’s LLM queries — like search, an enormous source of sensitive data — to the documents and videos they upload for analysis, to the text of personal therapeutic or “friend” chats.

In short, LLMs could provide the newest form of dangerously concentrated power, following in the footsteps of the big tech giants of today.

In better-case scenarios, on the other hand, LLMs could empower individuals in positive ways. Rather than remaining in the hands of a few, a thousand flowers could bloom as a flourishing marketplace of diverse models trained for all kinds of specialties emerges, many of them transparent and open source and small enough to run on local computers under the control of individuals. Just as the printing press broke the medieval Catholic Church’s near-monopoly on the ability to read, interpret, and publish the written word, LLMs might democratize skills and abilities that are currently held by only a relatively small elite, such as the ability to program computers and create apps. They could allow reporters or citizens to search and analyze overwhelming volumes of government or corporate data for reporting and oversight, put the ability to create a feature-length film in the hands of anyone, and democratize many other things that are now the exclusive domain of experts or well-funded businesses.

What to watch
So how are we to evaluate whether LLMs are tilting in democratic or authoritarian directions? The technology is developing rapidly and unpredictably, and since the advent of ChatGPT there have been dramatic developments that bear directly on the balance between the above outcomes. In particular, there are three interrelated areas that have significant implications for freedom:

  • The degree to which training and running an LLM emerges as a form of big science — large-scale, high-cost projects like the Manhattan Project, physics supercolliders, or space telescopes — or whether training the models people want to use ends up broadly accessible.
  • The ability to run desirable models on local hardware, which would foreclose gatekeeping, censorship, and privacy invasion by AI titans.
  • The health of open-source models and research, which will help ensure that no company enjoys a monopoly on the models people want to use.

I will take a closer look at these areas in follow-up posts.



Published August 14, 2025 at 06:44PM
via ACLU https://ift.tt/u970LQh

Wednesday, 13 August 2025

ACLU: I’m a Columbia Student Journalist. I Watched Censorship Unfold on My Own Campus.


Editor’s Note: The ACLU is committed to fighting for the right of all reporters, including the student press, to hold those in power accountable. Through “Press in Peril,” our ongoing series, we’re highlighting the challenges facing the press in a democracy under pressure. For student press resources, visit the Student Press Law Center.

Last year, during the Spring semester of my sophomore year at Barnard College, I was leaving class when I received a text from my colleague at the Columbia Daily Spectator, our university newspaper, where I work as an editor. She wanted me to come see what was unfolding on the South Lawn of Columbia’s Manhattan campus. NYPD officers had lined up, preparing to arrest more than 100 students who had set up an encampment on the lawn as then-Columbia President Minouche Shafik testified before Congress.

For the first time since 1968, the Columbia administration had authorized an NYPD presence on campus to arrest students. Swaths of students gathered around the lawns as the police entered the side gates to arrest protestors, carrying them off to buses one by one. These protests would be unlike anything we had seen in past school years. That finals season, I spent hours huddled with friends in dorm rooms and libraries. We listened closely to on-the-ground reporters from WKCR, Columbia’s student-run radio station, and constantly refreshed the Columbia Daily Spectator’s website for updates on the protests and arrests happening outside our windows.

With threats to academic freedom on the rise and a spotlight on universities across the country, student journalists are essential to providing perspective and balanced coverage of campuses. In some cases, student publications also provide essential reporting to communities beyond campus. Since campuses erupted as students protested Israel's actions in Gaza, student reporters have shared essential information about the protests and kept the campus community updated on important news. For example, Columbia’s student-run radio station, WKCR, livestreamed for nearly 24 hours straight during the two weeks The Gaza Solidarity Encampment rested on our West South Lawn.

As Columbia University gained immense attention – nationally and internationally – our newspaper’s reporting and op-eds gained newfound attention from reporters, elected officials, and other stakeholders across the country grasping for an idea of what was going on behind Columbia’s closed gates. The Columbia Daily Spectator is financially distinct from Columbia University, which ensures that student journalists can report independently. In the opinion section, we worked tirelessly throughout the spring of 2024 to edit and publish op-eds from various stakeholders, including diverse groups on campus, alumni, and even the university administration. The Editorial Board deliberated for hours while writing a series of staff editorials capturing students’ feelings and frustrations on campus.

A year later, it was springtime on campus again. The stakes for the student press looked different. President Donald Trump took office in January 2025, and his administration began to launch investigations into Columbia University, soon cutting approximately $400 million in federal funding. With another national spotlight on the university, Columbia began to grapple with federal investigations and negotiations, continued student protests, and international media scrutiny. Columbia’s highest leadership refused to sit down with Spectator journalists throughout Spring 2025. With each protest that happened on campus, the university cracked down harder on media attention, turning away student journalists from protest areas. This made it harder for our student press to do our jobs.

For student journalists, our fears of censorship and retaliation compounded when ICE agents took Tufts University Ph.D. student Rümeysa Öztürk in broad daylight and detained her in Louisiana because she had co-written a 2024 op-ed in The Tufts Daily with three other students. The op-ed criticized the school for dismissing the student senate’s role in student governance and called on the Tufts administration to implement the resolutions it had passed, including disclosing investments and divesting from Israel. Her byline led to her arrest. We saw this as an attack on our rights as student press and on the ability of those who authored pieces on our opinion page to share their voices and viewpoints.

Since October 2023, the fall of my sophomore year, Spectator’s opinion page had housed at least a dozen articles that addressed the university’s suppression of student speech on the war in Gaza. We had to have conversations about staff editorials and op-eds written during Spring 2024 when both Barnard and Columbia had passed similar student council resolutions calling for divestment. Were writers for Spectator’s opinion section next to be arrested for writing an op-ed? We feared this new administration would not stop there. Perhaps Trump would come after international students on our editorial board for their participation in writing staff editorials on the president’s policies.

Our fears were realized when the school began to launch investigations into student protestors, using their participation in writing Spectator op-eds to justify the school’s charges. When I heard this was happening, it unlocked a new sense of paranoia for me and other staffers. I wondered if the Editorial Board’s criticism of Trump’s policies would place us under investigation next.

As a U.S. citizen, I had the ability to share my views on current events with less fear than other students I had worked alongside for so long. I had honest conversations in pitch meetings and on production nights with other editors and writers about how to frame an argument or whether to pursue a specific topic in their writing, as well as many conversations with international students who were hesitant to express their views on paper.

Part of this fear stemmed not only from the administration’s crackdown but also from the campus environment. At Columbia and Barnard, during the height of the student-led protests, it was not uncommon to pass trucks or scroll through websites bearing the faces of fellow students, their personal information displayed because of the opinions they shared in op-eds or on social media. This reality led to requests for anonymity or off-the-record commentary from students afraid their words could lead to harassment, doxxing, or retaliation.

After a protest in Barnard College’s Milstein Library, two WKCR journalists received a “fact-finding” request from Barnard’s Community Accountability, Response, and Emergency Services office. The office requested a meeting at which the students would provide information refuting claims that they had been involved in the protests. Barnard asked these student journalists to prove they were with the press and provide other information in a closed-door meeting or face punishment under the student code of conduct. The meeting was later cancelled by the school, but not before stoking fear among student reporters that their reporting could implicate them in campus disciplinary investigations.

Despite some barriers to campus press freedom during the tensest moments of the Spring 2024 protests, Barnard and Columbia did initially separate student reporters from protestors, allowing reporters to enter protests and cover the events without penalty. This is in sharp contrast to some schools, such as Stanford and Dartmouth, where student journalists have been arrested for similar reporting on student protests, with some facing felony charges and academic discipline.

This past Spring, however, my school broke from its previous behavior. For the first time, three student journalists were suspended – and quickly unsuspended – after covering the Butler Library protest. For less than 24 hours, these students were prohibited from taking final exams and told they must vacate their student housing. Rather than studying for finals, they spent the day in anxious attempts to prove to the school that they were student journalists, despite having done all the right things: identifying themselves on the scene, wearing press badges, and reporting in a professional manner.

During this same protest, student press outside the library were restrained and shoved by school public safety officers and denied access to the main area where protests were unfolding – despite having press badges. If not for the three journalists who faced suspension, crucial reporting from the field would not have existed, including a timeline of events, video footage, and photos from inside the library. Even though these suspensions were temporary, they sent a chilling message throughout Columbia and Barnard’s student press community — reporting on protests could jeopardize a student’s academic status.

After months of requests, Acting Columbia President Claire Shipman finally agreed to an interview with Spectator student journalists in the wake of Columbia’s $200 million settlement with the Trump administration. It was the first time Shipman had sat down with the student press – a significant turning point after six months of administrative silence toward student journalists.

Student journalists at Barnard and Columbia are not alone. Universities across the country continue to crack down on student press. I know that a free press is critical to our society. Columbia’s student journalists have worked tirelessly to follow protocol instituted by the university, while also learning how to function as the University’s Fourth Estate, keeping power accountable. As I start the new school year and my senior year, I hope student reporters, editors, and press broadly can continue to hold those in power on campus accountable for their actions and inform communities in the process. I hope to spend my last year on the opinion section working on new projects, continuing to cover unfolding situations on campus, and upholding the opinion section’s motto to “reflect and direct campus discourse.”



Published August 14, 2025 at 01:05AM
via ACLU https://ift.tt/oWjTczd

ACLU: I’m a Columbia Student Journalist. I Watched Censorship Unfold on My Own Campus.

I’m a Columbia Student Journalist. I Watched Censorship Unfold on My Own Campus.

Editor’s Note: The ACLU is committed to fighting for the right of all reporters, including the student press, to hold those in power accountable. Through “Press in Peril,” our ongoing series, we’re highlighting the challenges facing the press in a democracy under pressure. For student press resources, visit the Student Press Law Center.

Last year, during the Spring semester of my sophomore year at Barnard College, I was leaving class when I received a text from my colleague at the Columbia Daily Spectator, our university newspaper where I work as an editor. She wanted me to come see what was unfolding on the South lawn of Columbia’s Manhattan campus. NYPD officers had lined up, preparing to arrest more than 100 students who had set up an encampment on the lawn as then-Columbia President Minouche Shafik testified before Congress.

For the first time since 1968, the Columbia administration had authorized an NYPD presence on campus to arrest students. Swaths of students gathered around the lawns as the police entered the side gates to arrest protestors, carrying them off to buses one by one. This was nothing like the protests we had seen the previous school year. That finals season, I spent hours huddled with friends in dorm rooms and libraries. We listened closely to on-the-ground reporters from WKCR, Columbia’s student-run radio station, and constantly refreshed the Columbia Daily Spectator’s website for updates on the protests and arrests happening outside our windows.

With threats to academic freedom on the rise and a spotlight on universities across the country, student journalists are essential to providing perspective and balanced coverage of campuses. In some cases, student publications also provide essential reporting to communities beyond campus. Since campuses erupted as students protested Israel's actions in Gaza, student reporters have shared essential information about the protests and kept the campus community updated on important news. For example, Columbia’s student-run radio station, WKCR, livestreamed for nearly 24 hours straight during the two weeks The Gaza Solidarity Encampment rested on our West South Lawn.

As Columbia University gained immense attention – nationally and internationally – our newspaper’s reporting and op-eds drew newfound interest from reporters, elected officials, and other stakeholders across the country grasping for an idea of what was going on behind Columbia’s closed gates. The Columbia Daily Spectator is financially independent of Columbia University, which ensures that student journalists can report without interference. In the opinion section, we worked tirelessly throughout the spring of 2024 to edit and publish op-eds from various stakeholders, including diverse groups on campus, alumni, and even the university administration. The Editorial Board deliberated for hours while writing a series of staff editorials capturing students’ feelings and frustrations on campus.

A year later, it was springtime on campus again. The stakes for the student press looked different. President Donald Trump took office in January 2025, and his administration began to launch investigations into Columbia University, soon cutting approximately $400 million in federal funding. With another national spotlight on the university, Columbia began to grapple with federal investigations and negotiations, continued student protests, and international media scrutiny. Columbia’s highest leadership refused to sit down with Spectator journalists throughout Spring 2025. With each protest that happened on campus, the university cracked down harder on media attention, turning away student journalists from protest areas. This made it harder for our student press to do our jobs.

For student journalists, our fears of censorship and retaliation compounded when ICE agents took Tufts University Ph.D. student Rümeysa Öztürk in broad daylight and detained her in Louisiana because she had co-written a 2024 op-ed in The Tufts Daily with three other students. The op-ed criticized the school for dismissing the student senate’s role in student governance and called on the Tufts administration to implement the resolutions it had passed, including disclosing investments and divesting from Israel. Her byline led to her arrest. We saw this as an attack on our rights as student press and on the ability of those who authored pieces on our opinion page to share their voices and viewpoints.

Since October 2023, the fall of my sophomore year, Spectator’s opinion page had housed at least a dozen articles addressing the university’s suppression of student speech on the war in Gaza. We had to have conversations about staff editorials and op-eds written during Spring 2024, when both Barnard and Columbia student councils had passed similar resolutions calling for divestment. Would writers for Spectator’s opinion section be arrested next for writing an op-ed? We feared this new administration would not stop there. Perhaps Trump would come after international students on our Editorial Board for their participation in writing staff editorials on the president’s policies.

Our fears were realized when the school began to launch investigations into student protestors, using their participation in writing Spectator op-eds to justify the school’s charges. When I heard this was happening, it unlocked a new sense of paranoia for me and other staffers. I wondered if the Editorial Board’s criticism of Trump’s policies would place us under investigation next.

As a U.S. citizen, I had the ability to share my views on current events with less fear than other students I had worked alongside for so long. I had honest conversations in pitch meetings and on production nights with other editors and writers about how to frame an argument or whether to pursue a specific topic in their writing, as well as many conversations with international students who were hesitant to express their views on paper.

Part of this fear stemmed not only from the administration’s crackdown but also from the campus environment. At Columbia and Barnard, during the height of the student-led protests, it was not uncommon to pass trucks or scroll through websites bearing the faces of fellow students, their personal information displayed because of the opinions they had shared in op-eds or on social media. This reality led to requests for anonymity or off-the-record commentary from students afraid their words could lead to harassment, doxxing, or retaliation.

After a protest in Barnard College’s Milstein Library, two WKCR journalists received a “fact-finding” request from Barnard’s Community Accountability, Response, and Emergency Services office. The office asked the students to attend a closed-door meeting, prove they were members of the press, and provide information refuting their involvement in the protests – or face punishment under the student code of conduct. The school later cancelled the meeting, but not before stoking fear among student reporters that their reporting could implicate them in campus disciplinary investigations.

Despite some barriers to campus press freedom during the tensest moments of the Spring 2024 protests, Barnard and Columbia did initially separate student reporters from protestors, allowing reporters to enter protests and cover the events without penalty. This is in sharp contrast to some schools, such as Stanford and Dartmouth, where student journalists have been arrested for similar reporting on student protests, with some facing felony charges and academic discipline.

This past Spring, however, my school broke from its previous practice. For the first time, three student journalists were suspended – and quickly unsuspended – after covering the Butler Library protest. For less than 24 hours, these students were prohibited from taking final exams and told they must vacate their student housing. Rather than studying for finals, they spent the day in anxious attempts to prove to the school that they were student journalists, despite having done all the right things: identifying themselves on the scene, wearing press badges, and reporting in a professional manner.

During this same protest, members of the student press outside the library were restrained and shoved by school public safety officers and denied access to the main area where protests were unfolding – despite having press badges. If not for the three journalists who faced suspension, crucial reporting from the field would not exist, including a timeline of events, video footage, and photos from inside the library. Even though the suspensions were temporary, they sent a chilling message throughout Columbia and Barnard’s student press community: reporting on protests could jeopardize a student’s academic standing.

After months of requests, Acting Columbia President Claire Shipman finally agreed to an interview with Spectator student journalists in the wake of Columbia’s $200 million settlement with the Trump administration. It was the first time Shipman had sat down with the student press – a significant break in the administration’s six months of silence toward student journalists.

Student journalists at Barnard and Columbia are not alone; universities across the country continue to crack down on the student press. I know that a free press is critical to our society. Columbia’s student journalists have worked tirelessly to follow the protocols instituted by the university while learning how to function as the university’s Fourth Estate, holding power accountable. As I start the new school year – my senior year – I hope student reporters, editors, and the press broadly can continue to hold those in power on campus accountable for their actions and inform communities in the process. I hope to spend my last year on the opinion section working on new projects, continuing to cover unfolding situations on campus, and upholding the section’s motto to “reflect and direct campus discourse.”



Published August 13, 2025 at 08:35PM
via ACLU https://ift.tt/pIOtG0J

Thursday, 7 August 2025

ACLU: Surveillance Company Flock Now Using AI to Report Us to Police if it Thinks Our Movement Patterns Are “Suspicious”

Surveillance Company Flock Now Using AI to Report Us to Police if it Thinks Our Movement Patterns Are “Suspicious”

The police surveillance company Flock has built an enormous nationwide license plate tracking system, which streams records of Americans’ comings and goings into a private national database that it makes available to police officers around the country. The system allows police to search the nationwide movement records of any vehicle that comes to their attention. That’s bad enough on its own, but the company is also now apparently analyzing our driving patterns to determine if we’re “suspicious.” That means if your police start using Flock, they could target you just because some algorithm has decided your movement patterns suggest criminality.

There has been a lot of reporting lately about Flock, but I haven’t seen anyone focus on this feature. It’s a significant expansion in the use of the company’s surveillance infrastructure — from allowing police to find out more about specific vehicles of interest to using the system to generate suspicion in the first place. The company’s cameras are no longer just recording our comings and goings — now, using AI in ways we have long warned against, the system is actively evaluating each of us to decide whether we should be reported to law enforcement as potential participants in organized crime.

In a February 13 press release touting an “Expansive AI and Data Analysis Toolset for Law Enforcement,” the company announced several new capabilities, including something called “Multi-State Insights”:

Many large-scale criminal activities—such as human and narcotics trafficking and Organized Retail Crime (ORC)—involve movement across state lines. With our new Multi-State Insights feature, law enforcement is alerted when suspect vehicles have been detected in multiple states, helping investigators uncover networks and trends linked to major crime organizations.

Flock appears to offer this capability through a larger “Investigations Manager,” which urges police departments to “Maximize your LPR data to detect patterns of suspicious activity across cities and states.” The company also offers a “Linked Vehicles” or “Convoy Search” allowing police to “uncover vehicles frequently seen together,” putting it squarely in the business of tracking people’s associations, and a “Multiple locations search,” which promises to “Uncover vehicles seen in multiple locations.” All these are variants on the same theme: using the camera network not just to investigate based on suspicion, but to generate suspicion itself.

In a democracy, the government shouldn’t be watching its citizens all the time just in case we do something wrong. It’s one thing if a police officer out on a street sees something suspicious in public and reacts. But this is an entirely different matter.

First, the police should not be collecting and storing data on people’s movements and travel across space and time in the first place, or contracting to use a private company’s technology to accomplish the same thing. Second, they shouldn’t be taking that data and running it through AI algorithms to potentially swing the government’s eye of suspicion toward random, innocent civilians whose travel patterns just happen to fit what that algorithm thinks is worth bringing to the attention of the police.

And of course, because Flock is a private company not subject to checks and balances such as open records laws and oversight by elected officials, we know nothing about the nature of the algorithm or algorithms it uses — what logic they may be based upon, the data upon which they were trained, or the frequency and nature of their error rates. Does anyone actually know whether there are movement patterns characteristic of criminal behavior that won’t sweep in vastly larger numbers of innocent people?

We also don’t know what kind of biases the company’s algorithms might exhibit; it’s very easy to imagine an algorithm trained on past criminal histories in which low-income neighborhoods and communities of color are highly over-represented because of the well-established, top-to-bottom biases in our criminal justice system. That could mean that just living in such a neighborhood could make you inherently suspicious in the eyes of this system in a way that someone living in a wealthier place would never be. Among other problems, that’s just plain unfair.

The bottom line is that Flock, having built its giant surveillance infrastructure, is now expanding its uses — validating all our warnings about how such systems inevitably undergo mission creep, and providing all the more reason why communities should refuse to allow the police departments that serve them to participate in this mass surveillance system.



Published August 7, 2025 at 11:55PM
via ACLU https://ift.tt/m2iApkG


Wednesday, 6 August 2025

ACLU: Trump's Birthright Citizenship Executive Order: What Happens Next

Trump's Birthright Citizenship Executive Order: What Happens Next

On his first day in office, President Donald Trump issued an executive order seeking to end the constitutionally guaranteed right to birthright citizenship. The American Civil Liberties Union and our partners swiftly sued to block that cruel and lawless action, as did other groups of plaintiffs around the country.

The Trump administration took several of these cases to the Supreme Court, asking it to limit lower courts’ ability to block illegal policies like this one, and for permission to enforce its order against thousands of babies nationwide. The Court ruled for the Trump administration in part, leaving many of those families confused and afraid of what might come next.

Several months later the case remains complex, with multiple legal challenges and appeals. Below, we lay out how the court ruled, where birthright citizenship stands today and what happens next.


The Fight for Birthright Citizenship, Explained

Hours after Trump signed his executive order, the ACLU and our partners filed a lawsuit, NHICS v. Donald J. Trump, challenging Trump’s executive order in federal court. Within days, other legal challenges followed and several judges issued injunctions that blocked the order, temporarily halting enforcement and preventing harm while the legal challenges proceeded.

In response, the Trump administration filed emergency applications asking the Supreme Court to narrow the injunctions by limiting their protections from the entire country to just a handful of individual plaintiffs. Usually, this kind of request is decided based on written arguments. But, in an unusual move, the court agreed to hear oral arguments in a special session held on May 15.

On June 27, the Supreme Court issued a ruling that potentially cleared the way for Trump’s order to take effect nationwide. In Trump v. CASA, Inc., the court limited the availability of what’s known as “universal injunctions.” These legal tools prohibit or require certain actions not just for the parties involved in a particular case, but for all persons or entities. The Supreme Court limited the availability of universal injunctions in general, but ultimately left it to lower courts to decide whether broad relief was justified in these particular cases. That meant that, as of July 27, tens of thousands of U.S.-born babies could be left vulnerable to arrest, deportation, discrimination, and denial of critical early-life nutrition and health care.

Immediately after the Supreme Court’s ruling, the ACLU and our partners filed a new class action lawsuit, Barbara v. Donald J. Trump. A class action lawsuit is a different legal tool that can also broadly block harmful or unconstitutional policies. We identified a class of people – in this case all children born on U.S. soil to parents who are undocumented or have temporary status – and asked the court to let us proceed with the case on behalf of the entire class (called “class certification”) and to block the executive order as to everyone in the class. On July 10, a federal court provisionally granted nationwide class certification, recognizing the protected class and again blocking the executive order from taking effect while questions over the legality of Trump’s order continue to move through the courts.


Where the Legal Battle Stands Now

Right now, every child whose citizenship was threatened by the executive order is protected. While the legal fight is far from over, these families can now take solace in knowing that they are protected while the cases make their way through the courts.

The fight over “universal injunctions” that culminated in the Supreme Court’s CASA decision will have significant implications for how this legal tool is used in future cases. But the Trump administration’s procedural win was ultimately an empty victory when it comes to birthright citizenship. The Barbara injunction protects everyone, and it is not vulnerable to the kind of arguments that the government offered in CASA. In fact, the Supreme Court pointed to class actions, like Barbara, as an appropriate way to obtain nationwide protection.

Importantly, the Trump administration has not yet appealed the ruling in Barbara or sought to have the Supreme Court block it. They were granted a seven-day window to seek an emergency stay, but that deadline passed without any action from the government.


Birthright Citizenship Remains Protected

Importantly, the CASA decision did not in any way question whether the 14th Amendment protects birthright citizenship. It does, as the Supreme Court made crystal clear more than 125 years ago. Trump’s executive order is blatantly illegal, violating both the 14th Amendment and a statute passed by Congress. President Trump has no power to change those facts.

One court of appeals has already ruled against the Trump administration on these merits questions, concluding that the executive order cannot stand. In the NHICS lawsuit — the one we filed just hours after Trump signed the order, directly challenging its legality — a court of appeals heard oral arguments on August 1.


Where the Fight Goes From Here

The legal fight could take many paths from here. Right now, however, our win in Barbara ensures that families have no reason to fear they must move, give birth in another state, or take other drastic steps to secure their children’s citizenship. Expectant parents can feel confident that their babies will still be recognized as U.S. citizens at birth — regardless of their immigration status or where they live.

In the courts, the appeals process could take some time or move quickly to the Supreme Court. If, as expected, the government brings one or more of these cases to the Supreme Court, the court will have to decide whether to hear the appeals — and if it accepts a case, more written legal briefs and oral arguments will follow. It may take years to work through the process and put a final end to this lawless executive order. Whatever path the litigation takes, the ACLU is ready to fight every step of the way. Our goal is to ensure that nobody is ever subjected to this unconstitutional order.

Birthright citizenship isn’t just a legal doctrine — it’s central to who we are as a nation. It reflects that all children born in this country belong here, and are equal members of our national community, no matter who their parents may be. The Constitution is on our side, and the ACLU will keep fighting to ensure this remains a fundamental right for future generations.


ACLU’s co-counsel in the NHICS and Barbara cases are the NAACP Legal Defense and Educational Fund, Asian Law Caucus, Democracy Defenders Fund, and the ACLU affiliates of New Hampshire, Maine, and Massachusetts. Our organizational clients in NHICS are New Hampshire Indonesian Community Support, the League of United Latin American Citizens, and Make the Road New York.



Published August 7, 2025 at 01:36AM
via ACLU https://ift.tt/oV30HUz