This statement was originally published on eff.org on 12 August 2021.
Facebook recently banned the accounts of several New York University (NYU) researchers who run Ad Observer, an accountability project that tracks paid disinformation, from its platform. This has major implications: not just for transparency, but for user autonomy and the fight for interoperable software.
Ad Observer is a free/open source browser extension used to collect Facebook ads for independent scrutiny. Facebook has long opposed the project, but its latest decision to attack Laura Edelson and her team is a powerful new blow to transparency. Worse, Facebook has spun this bullying as defending user privacy. This “privacywashing” is a dangerous practice that muddies the waters about where real privacy threats come from. And to make matters worse, the company has been gilding such excuses with legally indefensible claims about the enforceability of its terms of service.
Taken as a whole, Facebook’s sordid war on Ad Observer and accountability is a perfect illustration of how the company warps the narrative around user rights. Facebook is framing the conflict as one between transparency and privacy, implying that a user’s choice to share information about their own experience on the platform is an unacceptable security risk. This is disingenuous and wrong.
This story is a parable about the need for data autonomy, protection, and transparency – and how Competitive Compatibility (AKA “comcom” or “adversarial interoperability”) should play a role in securing them.
What is Ad Observer?
Facebook’s ad-targeting tools are the heart of its business, yet for users on the platform they are shrouded in secrecy. Facebook collects information on users from a vast and growing array of sources, then categorizes each user with hundreds or thousands of tags based on their perceived interests and lifestyle. The company then sells advertisers the ability to use these categories to reach users through micro-targeted ads. User categories can be weirdly specific, cover sensitive interests, and be used in discriminatory ways; yet according to a 2019 Pew survey, 74% of users weren’t even aware these categories existed.
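To make concrete what these hidden categories amount to, here is a purely illustrative sketch in TypeScript – the type names, fields, and tags are invented for exposition and do not reflect Facebook’s actual systems – of how inferred interest tags translate into who sees an ad:

```ts
// Purely illustrative: these types, tags, and fields are invented for
// exposition; they do not reflect Facebook's real ad systems or category names.
interface InferredUserProfile {
  // Facebook assigns each user hundreds or thousands of interest tags.
  interestTags: string[]; // e.g. ["new parents", "payday loans", "diabetes awareness"]
}

interface AdTargetingSpec {
  includeInterests: string[]; // show the ad only to users with these tags
  excludeInterests: string[]; // and never to users with these
}

// An advertiser effectively buys the set of users whose inferred tags match.
function matchesSpec(profile: InferredUserProfile, spec: AdTargetingSpec): boolean {
  const tags = new Set(profile.interestTags);
  return (
    spec.includeInterests.some((t) => tags.has(t)) &&
    !spec.excludeInterests.some((t) => tags.has(t))
  );
}
```

The asymmetry is the point: the advertiser chooses the tags, Facebook applies them, and the user sees only the resulting ad.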
To unveil how political ads use this system, ProPublica launched its Political Ad Collector project in 2017. Anyone could participate by installing a browser extension called “Ad Observer,” which copies (or “scrapes”) the ads they see along with the information provided under each ad’s “Why am I seeing this ad?” link. The tool then submits this information to the researchers behind the project – run, as of last year, by NYU Engineering’s Cybersecurity for Democracy.
The extension never collected any personally identifying information – only data about how advertisers target users. In aggregate, however, the information shared by thousands of Ad Observer users revealed how advertisers use the platform’s surveillance-based ad-targeting tools.
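As a rough sketch of how such an extension can work – the selectors, field names, and endpoint below are hypothetical placeholders, not Ad Observer’s actual code – a content script reads only what the browser has already rendered for the user, and forwards it without any user identifiers:

```ts
// Minimal sketch of a scraping content script. The selectors and endpoint
// are hypothetical placeholders; this is not Ad Observer's actual code.
interface ObservedAd {
  adText: string;        // the ad's visible content
  targetingInfo: string; // text from "Why am I seeing this ad?"
  observedAt: string;    // timestamp; note: no user identifiers are included
}

function collectVisibleAds(): ObservedAd[] {
  const ads: ObservedAd[] = [];
  // "[data-ad]" is a placeholder selector; real pages require more care.
  for (const el of document.querySelectorAll<HTMLElement>("[data-ad]")) {
    ads.push({
      adText: el.innerText,
      targetingInfo:
        el.querySelector<HTMLElement>(".why-am-i-seeing-this")?.innerText ?? "",
      observedAt: new Date().toISOString(),
    });
  }
  return ads;
}

// Submit only ad content and targeting text, nothing identifying the user.
async function submitToResearchers(ads: ObservedAd[]): Promise<void> {
  await fetch("https://example.org/ad-observations", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(ads),
  });
}
```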
This improved transparency is important to better understand how misinformation spreads online, and Facebook’s own practices for addressing it. While Facebook claims it “do[es]n’t allow misinformation in [its] ads”, it has been hesitant to block false political ads, and it continues to provide tools that enable fringe interests to shape public debate and scam users. For example, two groups were found to be funding the majority of antivaccine ads on the platform in 2019. More recently, the U.S. Surgeon General spoke out on the platform’s role in misinformation during the COVID-19 pandemic – and just this week Facebook stopped a Russian advertising agency from using the platform to spread misinformation about COVID-19 vaccines. Everyone from oil and gas companies to political campaigns has used Facebook to push their own twisted narratives and erode public discourse.
Revealing the secrets behind this surveillance-based ecosystem to public scrutiny is the first step in reclaiming our public discourse. Content moderation at scale is notoriously difficult, and it’s unsurprising that Facebook has failed again and again. But given the right tools, researchers, journalists, and members of the public can monitor ads themselves to shed light on misinformation campaigns. Just in the past year Ad Observer has yielded important insights, including how political campaigns and major corporations buy the right to propagate misinformation on the platform.
Facebook does maintain its own “Ad Library” and research portal. The former has been unreliable, is difficult to use, and offers no information about targeting based on user categories; the latter comes swathed in secrecy and requires researchers to allow Facebook to suppress their findings. Facebook’s attacks on the NYU research team speak volumes about the company’s real “privacy” priority: defending the secrecy of its paying customers – the shadowy operators pouring millions into paid disinformation campaigns.
This isn’t the first time Facebook has attempted to crush the Ad Observer project. In January 2019, Facebook made critical changes to the way its website works, temporarily preventing Ad Observer and other tools from gathering data about how ads are targeted. Then, on the eve of the hotly contested 2020 U.S. national elections, Facebook sent a dire legal threat to the NYU researchers, demanding the project cease operation and delete all collected data. Facebook took the position that any data collection through “automated means” (like web scraping) is against the site’s terms of service. But hidden behind the jargon is the simple truth that “scraping” is no different than a user copying and pasting. Automation here is just a matter of convenience, with no unique or additional information being revealed. Any data collected by a browser plugin is already, rightfully, available to the user of the browser. The only potential issue with plugins “scraping” data is if it happens without a user’s consent, which has never been the case with Ad Observer.
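The equivalence is easy to see in code. In the sketch below (using a hypothetical selector), the “manual” path and the “automated” path read the exact same string from the same already-rendered page:

```ts
// Both paths below read the exact same string from the same rendered page.
const ad = document.querySelector<HTMLElement>("[data-ad]"); // hypothetical selector

// "Manual" path: the text the user sees and could select and copy by hand.
const copiedByHand = ad?.innerText;

// "Automated" path: the text a scraper or extension reads programmatically.
const scraped = ad?.innerText;

// Automation is a convenience, not a new capability: no extra data exists.
console.assert(copiedByHand === scraped);
```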
Another issue EFF emphasized at the time is that Facebook has a history of dubious legal claims that such violations of service terms are violations of the Computer Fraud and Abuse Act (CFAA). That is, if you copy and paste content from any of the company’s services in an automated way (without its blessing), Facebook thinks you are committing a federal crime. If this outrageous interpretation of the law were to hold, it would have a debilitating impact on the efforts of journalists, researchers, archivists, and everyday users. Fortunately, a recent U.S. Supreme Court decision dealt a blow to this interpretation of the CFAA.
Last time around, Facebook’s attack on Ad Observer generated enough public backlash that it seemed the company would do the sensible thing and back down from its fight with the researchers. Last week, however, it became clear that this was not the case.
Facebook’s Bogus Justifications
Facebook’s Product Management Director, Mike Clark, published a blog post defending the company’s decision to ban the NYU researchers from the platform. Clark’s message mirrored the rationale offered back in October by then-Advertising Integrity Chair Rob Leathern (who has since left for Google). These company spokespeople have made misleading claims about the privacy risk that Ad Observer posed, and then used these smears to accuse the NYU team of violating Facebook users’ privacy. The only thing that was being “violated” was Facebook’s secrecy, which allowed it to make claims about fighting paid disinformation without subjecting them to public scrutiny.