Work-at-home AI surveillance is a move in the wrong direction

While we have all been focused on facial recognition as the poster child for AI ethics, another concerning form of AI has quietly emerged and rapidly advanced during COVID-19: AI-enabled employee surveillance at home. Though we are justifiably worried about being watched while out in public, we are now increasingly being observed in our homes.

Surveillance of employees is hardly new. It began in earnest with the “scientific management” of workers led by Frederick Taylor at the start of the 20th century, whose “time and motion” studies sought to determine the optimal way to perform a job. Through this lens, management focused on maximizing control over how people performed their work, and the approach persists today. A 2019 report from the U.C. Berkeley Labor Center states that algorithmic management introduces new forms of workplace control, where the technological regulation of workers’ performance is granular, scalable, and relentless. There is no slacking off while you are being watched.

Such surveillance had existed primarily in factory and warehouse settings, such as at Amazon. More recently, the Chinese Academy of Sciences reported that AI is being used on construction sites. These AI-based systems can benefit workers by using computer vision to check whether they are wearing appropriate safety gear, such as goggles and gloves, before granting them access to a danger area. However, there is also a more nefarious use case: the report said the AI system, combined with facial recognition and hooked up to CCTV cameras, was able to tell whether an employee was doing their job or “loitering,” smoking, or using a smartphone.
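
To make this concrete, here is a minimal sketch of what such an access check might look like, assuming a computer-vision model that labels the safety gear visible in a camera frame. The class names, the detected_items stub, and the required-gear list are illustrative assumptions, not details from the report.

```python
# Hypothetical sketch of the safety-gear check described above. A real system
# would run an object detector on each camera frame; detected_items() is a stub
# here so the gating logic can be shown on its own.
REQUIRED_GEAR = {"hard_hat", "goggles", "gloves"}

def detected_items(frame) -> set[str]:
    """Placeholder for a vision model that labels the PPE visible in a frame."""
    return {"hard_hat", "goggles"}  # stubbed result for illustration only

def may_enter_danger_area(frame) -> bool:
    """Grant access only when every required item is detected on the worker."""
    return REQUIRED_GEAR.issubset(detected_items(frame))

print(may_enter_danger_area(frame=None))  # False: gloves missing in the stubbed result
```

The same detect-and-decide pattern is what makes the jump from safety checks to “loitering” detection a short one.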

Last year, Gartner surveyed 239 large corporations and found that more than 50% were using some type of nontraditional technique to monitor their workforce. These included analyzing the text of emails and social-media messages, scrutinizing who is meeting with whom, and gathering biometric data. A subsequent Accenture survey of C-suite executives reported that 62% of their organizations were leveraging new tools to collect data on their employees. One monitoring-software vendor has noted that every aspect of business is becoming more data-driven, including the people side. Perhaps it’s true, as former Intel CEO Andy Grove famously stated, that “only the paranoid survive.”

Work-at-home AI surveillance

With the onset of COVID-19 and many people working remotely, some employers have turned to “productivity management” software to keep track of what employees are doing while they work from home. These systems have purportedly seen a sharp increase in adoption since the pandemic began.

A rising tide of employer worry appears to be lifting all the ships. InterGuard, a leader in employee monitoring software, claims its customer base has grown three to four times since COVID-19 began spreading in the U.S. Similarly, Hubstaff and Time Doctor claim interest has tripled. Teramind said 40% of its current customers have added more user licenses to their plans. Another firm, aptly named Sneek, said sign-ups surged tenfold at the onset of the pandemic.

The software from these firms operates by tracking activity, whether it is time spent on the phone, the number of emails read and sent, or even the amount of time in front of the computer as determined by screenshot captures, webcam access, and keystroke counts. Some algorithmically produce a productivity score for each employee that is shared with management.
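
As an illustration of how such scoring might work, here is a minimal sketch that folds tracked signals into a single number. The signal names, weights, and normalization targets are assumptions made for the example, not any vendor’s actual formula.

```python
# Illustrative sketch of reducing tracked activity signals to a single
# "productivity score." All weights and normalization targets are assumptions,
# not any vendor's published method.
from dataclasses import dataclass

@dataclass
class ActivitySample:
    minutes_active: float   # time at the keyboard, inferred from input events
    emails_handled: int     # emails read or sent during the day
    keystrokes: int         # raw keystroke count
    idle_screenshots: int   # screenshots classified as "non-work" content

def productivity_score(s: ActivitySample) -> float:
    """Combine signals into a 0-100 score using arbitrary example weights."""
    score = (
        0.5 * min(s.minutes_active / 480, 1.0) * 100    # share of an 8-hour day
        + 0.3 * min(s.emails_handled / 50, 1.0) * 100   # email throughput
        + 0.2 * min(s.keystrokes / 20000, 1.0) * 100    # typing volume
    )
    return max(score - 2.0 * s.idle_screenshots, 0.0)   # penalty per "idle" capture

print(productivity_score(ActivitySample(400, 35, 12000, 3)))  # roughly 68.7
```

Even this toy version shows how much the weighting choices, rather than the work itself, end up defining what counts as “productive.”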

Enaible claims its Trigger-Task-Time algorithm for monitoring remote employees is a breakthrough at the “intersection of leadership science and artificial intelligence.” In an op-ed, the vendor said its software empowers leaders to lead more effectively by providing them with necessary information. In this respect, it appears we have advanced from Taylorism mostly in the sophistication of the technology. A university research fellow offered a blunter assessment, saying these “are technologies of discipline and domination … they are ways of exerting power over employees.”

What’s at risk

The ever-present push for productivity is understandable on one level: managers have a right to make reasonable requests of workers about their output and to minimize “cyber-loafing.” But such intense observation opens yet another front in the AI-ethics conversation, raising concerns about how much information monitoring software collects, how that information might be used, and the potential for inherent bias in the algorithms that shape the results.

Monitoring of employees is legal in the U.S. down to the keystroke, based on the Electronic Communications Privacy Act of 1986. But we’re now living in an age where monitoring employees means monitoring them at home — which is supposed to be a private environment.

In We, the 1921 dystopian Russian novel that may have influenced George Orwell’s later Nineteen Eighty-Four, all of the citizens live in apartments made entirely of glass to enable perfect surveillance by the authorities. Today we already have AI-powered digital assistants such as Google Home and Amazon Alexa that can monitor what is said at home, though allegedly only after they hear the “wake word.” Nevertheless, there are numerous examples of these devices listening to and recording other conversations, and even capturing images, prompting privacy concerns. With home monitoring of employees, we have effectively turned our work computers into another device with eyes and ears — without requiring a wake word — adding to home surveillance. These tools can track not only our work interactions but what we say and do on or near our devices. Our at-home lifestyles and non-work conversations could be observed and translated into data that risk managers such as insurers or credit issuers might find illuminating, should employers share this content.

Perhaps work-from-home surveillance is now a fait accompli, an intrinsic part of the modern Information Age that puts employees’ right to privacy at risk in their homes as well as in the office. Already there are employee-surveillance product reviews in mainstream media, normalizing the practice. Nevertheless, in a world where the boundaries between work and home have already blurred, using AI technologies to monitor employees’ every move under the guise of productivity enhancement could be a step too far, and another topic for potential regulation. Constant AI-powered surveillance risks turning the human workforce into a robotic one.

Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.


Author: Gary Grossman, Edelman.
Source: VentureBeat
