Last month we learned that Amazon is planning to deploy AI cameras that will constantly scrutinize drivers inside the cabins of its delivery vehicles, and inform their bosses when the camera thinks they’ve done something questionable.
The device Amazon is installing (called “Driveri,” pronounced “driver eye”) has cameras pointing in four directions, one of which is toward the driver. In a video posted online, the company says the “camera records 100 percent of the time when you’re out on your route,” and watches for 16 behaviors that will “trigger Driveri to upload recorded footage.” These include not only accidents but also such things as following another car too closely, making a U-turn, failing to wear the seatbelt, obstructing the camera, “hard” braking or accelerating, and appearing to be distracted or drowsy — or what the AI interprets as those activities, anyway. Sometimes the robot camera will shout commands at you, such as “maintain safe distance!” or “please slow down!” One driver told CNBC that if the camera catches you yawning, it will tell you to pull over for at least 15 minutes — and if you don’t comply, you may get a call from your boss.
The cameras in this system do not stream live to management; this is an AI monitoring system. The device itself decides when to send video clips to the bosses and when to issue verbal alerts to drivers. But as we have long argued, nobody should make the mistake of thinking that monitoring by machines can’t inflict many forms of privacy harm, not least because those machines are programmed to “snitch” to actual humans when they see something they think is bad. The company that makes Driveri, Netradyne, also advertises that its product keeps scores on drivers that are updated — and provided to management — in real time. (Such a function is not mentioned in Amazon’s video.)
Given how bad AI is at understanding the subtleties of human behavior and dealing with anomalies, this system could lead to real fairness and accuracy issues. Automated test proctoring software, which also uses video to monitor people for subtle behaviors (in this case, cheating), has been rife with bias and accuracy problems. Machine vision is brittle and can fail spectacularly — even at the fundamentals, like recognizing a stop sign. Netradyne boasts that “every stop sign & traffic signal is identified and analyzed for compliance measurement.” But what happens when the AI thinks it sees a stop sign where there is none, and flags the driver for “running” it?
Ideally a human being would review the video and exonerate the driver, but given how automated Amazon’s management is, we don’t know how often that will happen. Workers in Amazon’s warehouses, for example, are constantly supervised by robots that judge whether they’re moving packages quickly enough. If they don’t like what they see, those robots issue warnings and even fire workers automatically — without any human input.
Amazon touts the system as a beneficial safety measure. It could indeed reduce accidents — though that should be proven — but as a society we’re going to need to figure out how much to allow ourselves to be overseen by automated AI cameras that engage in intrusive monitoring, judging, nagging, and reporting of our behaviors. Potential fairness issues aside, that kind of monitoring would probably make anyone miserable. There are almost certainly ways to use AI to protect workers’ safety that feel empowering and protective instead of infantilizing and oppressive.
Meanwhile, this kind of robot monitoring is becoming an increasingly prominent sore spot for workers. Some UPS drivers, for example, have opposed that company’s use of such cameras. (UPS drivers, unlike Amazon’s, are unionized and actually employed by the company whose uniforms they wear.)
Amazon workers’ complaints about robot management are part of growing labor tensions and criticism of the company for unethical labor practices. The company has been sued by the New York attorney general for failing to protect workers against COVID-19 and retaliating against those who complained, and last month agreed to pay $61.7 million to settle Federal Trade Commission charges that it withheld tips from its drivers. Amazon drivers in particular reportedly face brutal working conditions, and critics charge that the company places performance demands on them that pressure them to drive dangerously fast, while evading responsibility for the resulting accidents by insisting that the drivers are contractors. The Amazon drivers I have spoken to confirmed that they are urged to drive safely but also pushed to complete an unrealistic number of deliveries within a shift.
Driveri thus looks like a company’s attempt to use technology to solve a problem that its own managerial practices and profit drive may be creating. These technologies are like factory farms that pump our food with antibiotics — an attempt to use technology to unnaturally suppress the side effects of unhealthy and inhumane practices. We’ve already seen this in the trucking industry: instead of getting protections from unhealthy productivity demands, drivers get micro-surveillance. And workers end up squeezed on both ends.
That squeeze may only increase as the AI is refined. For example, if sunglasses defeat Driveri’s drowsiness and inattentiveness detectors, drivers may be told they aren’t allowed to wear them. That could be just the beginning of many ways they are forced to conform their behavior, movements, and dress to the needs of the AI that is watching them. We’ve already seen that happen in other areas; we’re no longer allowed to smile in our passport photos, for example, because it reduces the effectiveness of face recognition technology. Ultimately, the technology threatens to enable a modern-day version of Taylorism, an industrial movement of the late 19th and early 20th centuries, also known as “scientific management,” that involved monitoring and controlling the minutiae of industrial workers’ bodily movements to maximize their productivity.
The issues raised by AI video monitoring extend far beyond Amazon and its particular practices. To begin with, Amazon is not the only company experimenting with this kind of robot surveillance; a number of trucking companies, for example, are imposing it on their drivers. More broadly, as AI cameras get smarter, many institutions will have their own incentives to use them to visually monitor people. We could soon see not just employers but also everything from museums to restaurants to government agencies deploying this technology — anyone who wants to enforce a rule, protect an asset, or gain a new efficiency.
Technological monitoring of workers has long taken place through dedicated data-collection devices, down to and including the time clock, but these new tools require no expensive or specialized hardware, and no effort to get workers to use it properly. All that’s needed is a camera. And improving AI is likely to open up ever-wider possibilities for automated visual monitoring, as we discussed in our 2019 report, The Dawn of Robot Surveillance.
Employees like drivers and factory workers whose jobs are most at risk of being supplanted by AI (but for now are just being integrated with it) will be the first to be placed under oppressive AI surveillance microscopes, and we should support their rights to maximize their self-determination through unionization and other measures. But AI monitoring will soon move beyond those groups, starting with less powerful people across our society — who, like Amazon’s nonmanagerial workforce, are disproportionately people of color and are likely to continue to bear the brunt of that surveillance. And ultimately, in one form or another, such monitoring is likely to affect everyone — and in the process, further tilt power toward those who already have it.