What’s more, all test participants had to agree that their data could be used for machine learning and object detection training. Specifically, the global test agreement’s section on “use of research information” required an acknowledgment that “text, video, images, or audio … may be used by iRobot to analyze statistics and usage data, diagnose technology problems, enhance product performance, product and feature innovation, market research, trade presentations, and internal training, including machine learning and object detection.”
What isn’t spelled out here is that iRobot carries out the machine-learning training through human data labelers who teach the algorithms, click by click, to recognize the individual elements captured in the raw data. In other words, the agreements shared with us never explicitly mention that personal images will be seen and analyzed by other humans.
Baussmann, iRobot’s spokesperson, said that the language we highlighted “covers a variety of testing scenarios” and is not specific to images sent for data annotation. “For example, sometimes testers are asked to take photos or videos of a robot’s behavior, such as when it gets stuck on a certain object or won’t completely dock itself, and send those photos or videos to iRobot,” he wrote, adding that “for tests in which images will be captured for annotation purposes, there are specific terms that are outlined in the agreement pertaining to that test.”
He also wrote that “we cannot be sure the people you have spoken with were part of the development work that related to your article,” though he notably did not dispute the veracity of the global test agreement, which ultimately allows all test users’ data to be collected and used for machine learning.
What users really understand
When we asked privacy lawyers and scholars to review the consent agreements, and shared the test users’ concerns with them, they saw the documents and the privacy violations that ensued as emblematic of a broken consent framework that affects us all—whether we are beta testers or regular consumers.
Experts say companies are well aware that we rarely read privacy policies closely, if we read them at all. But what iRobot’s global test agreement attests to, says Ben Winters, a lawyer with the Electronic Privacy Information Center who focuses on AI and human rights, is that “even if you do read it, you still don’t get clarity.”
Rather, “a lot of this language seems to be designed to exempt the company from applicable privacy laws, but none of it reflects the reality of how the product operates,” says Cahn, pointing to the robot vacuums’ mobility and the impossibility of controlling where potentially sensitive people or objects—in particular children—are at all times in their own home.
Ultimately, that “place[s] much of the responsibility … on the end user,” notes Jessica Vitak, an information scientist at the University of Maryland’s College of Information Studies who studies best practices in research and consent policies. Yet it doesn’t give them a true accounting of “how things might go wrong,” she says—“which would be very valuable information when deciding whether to participate.”