The invasive, dystopian recording devices are being used in hundreds of schools, hospitals, and other public places around the world.
Last year, I attended the expo at a large law enforcement conference in Philadelphia, where vendors pitched a wide variety of gadgets and technologies to the police officials in attendance. One of the technologies on offer was a so-called “aggression detector” made by a company called Louroe Electronics. The system involved placing microphones in public areas and using computers to listen in and sound alerts when human voices were overheard expressing “aggression.” The technology seemed silly, and I doubted that any police departments would buy it. I wrote a skeptical post about the company and moved on.
This week, however, we learned that some schools and other public facilities are actually buying and installing these supposed aggression detectors. ProPublica and Wired published a story reporting that the technology is in use in hundreds of schools, hospitals, and other public places around the world, including over 100 in the United States.
This is a surveillance technology that we should all oppose.
First, it’s an invasion of privacy. Already our public places have become blanketed by uncounted surveillance cameras, but allowing audio recording of our public lives adds a whole new layer of surveillance. In many cases, audio can be more invasive than video; if you’re sitting in a hospital lobby having a private conversation with a loved one, the video of that scene won’t reveal much, but an audio recording could be an enormous invasion of privacy. Indeed, laws in every state ban third parties from recording private conversations where no participant knows about the recording or has given permission. Yet the Louroe system allows administrators to record and store snippets of conversation indefinitely, raising a number of legal questions.
In addition, while the system does not include voiceprint technology to try to identify who is speaking in a recording, it could be combined with face recognition, or with the giant voiceprint databases that big banks and other institutions are creating. That would allow the Louroe system to know not only what is being said, but also who is saying it.
Second, this technology is bogus. It's not clear that human beings can reliably judge when a voice signals aggression, and having a computer do the interpretation adds a whole new layer of uncertainty. In addition, emotion is expressed in ways that vary by culture, individual, and situation. A forthcoming review of over a thousand scientific papers by a panel of scientists commissioned by the Association for Psychological Science has found that there is no scientific support for the common assumption “that a person's emotional state can be readily inferred from his or her facial movements.” There is no reason to think that what is true of our faces would not also be true of our voices.
Indeed, the Wired/ProPublica investigation found that Louroe's machine learning-based aggression detector was less than reliable. In tests, it triggered “aggression” alarms at the sound of lockers slamming and of kids laughing, calling out to their friends, talking excitedly during a game, and cheering at the arrival of pizza. Yet the detector failed to trigger on an agitated man in a hospital who was screaming and pounding on a desk.
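Those misfires are less surprising if you consider how little an acoustic "aggression" score may actually capture. Here is a minimal, purely hypothetical sketch of the kind of crude heuristic such a score can reduce to: loud plus harsh equals aggressive. Louroe's actual model is proprietary and not public; every function name, weight, and threshold below is invented for illustration.

```python
# Hypothetical illustration only. This is NOT Louroe's algorithm, whose
# internals are not public; every name, weight, and threshold here is invented.
import numpy as np

def aggression_score(frame: np.ndarray, sample_rate: int = 16_000) -> float:
    """Score a mono audio frame (float samples in [-1, 1]) between 0 and 1."""
    rms = np.sqrt(np.mean(frame ** 2))                    # loudness
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9)  # "harshness"
    # Arbitrary weighting: anything loud and bright scores as "aggressive."
    return float(min(1.0, 5.0 * rms) * min(1.0, centroid / 4000.0))

def alert(frame: np.ndarray, threshold: float = 0.5) -> bool:
    return aggression_score(frame) >= threshold

# A slammed locker (a loud, broadband burst) trips the alarm...
locker_slam = np.random.default_rng(0).uniform(-0.9, 0.9, 16_000)
# ...while quiet, controlled speech ("cold anger") sails right past it.
quiet_speech = 0.02 * np.sin(2 * np.pi * 180 * np.arange(16_000) / 16_000)
print(alert(locker_slam), alert(quiet_speech))  # True False
```

However a real system weighs its features, the underlying failure mode is the same: acoustic energy and harshness are poor proxies for human intent.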
And, as an auditory expert quoted in the ProPublica piece points out, even a theoretically accurate audio aggression detector wouldn't catch “cold anger” — the “quiet, detached fury” that violent people exhibit at least as often as loud, angry words.
When deployed by schools or other government entities, this technology also has First Amendment implications. By monitoring the speech of everyone — and targeting speech that may be loud and passionate — this system threatens to chill First Amendment-protected speech, such as discussions that students might have in the hallways.
This technology is part of two disturbing trends. One is the use of AI analytics to automate mass monitoring, as discussed in depth in this recent ACLU report. No school district or hospital is going to hire someone to sit and listen to multiple audio feeds waiting for sounds that they think are “aggressive.” But by using AI to automate the process — even if inaccurately — institutions can create an inexpensive system for blanket surveillance that can create just as many chilling effects as if humans were listening. As we detail in the report, we're likely to see more and more attempts to install such AI monitoring systems, and we need strong checks and balances to keep us from becoming an automated surveillance state.
The other trend that this technology reflects is increasing surveillance in schools. From video cameras to face recognition to social media monitoring, “aggression detectors” are merely one in a growing suite of surveillance technologies being directed at students. Such technologies are likely to further the disproportionate treatment of youth of color as criminally disruptive simply for acting like kids. They also threaten to create a generation of young people who are taught to expect to be monitored by the authorities at every moment. That’s not a promising recipe for maintaining a free society. We can push back by demanding that our school districts — and state legislatures — ban such technologies from our schools.
Published July 1, 2019 at 06:30PM
via ACLU https://ift.tt/2Nrfkrr