Facebook’s murky history of letting third party apps help themselves to user data appears to be coming back to haunt it via unsealed legal discovery. You may recall that history blew up into a major global privacy scandal back in 2018 (aka Cambridge Analytica), gouging the company’s stock price, leading to its founder being hauled in front of Congress and, finally, in mid 2019, to a $5BN settlement with the FTC over what was sometimes euphemistically reported as ‘privacy lapses’.
Internal documents unsealed late last month in a related privacy litigation have prompted the chairman and vice chairman of the US Senate Select Committee on Intelligence, Mark Warner and Marco Rubio, to write a letter to Meta’s Mark Zuckerberg asking fresh questions about what he and his company knew about how much user data the platform was leaking back then, and what security implications may be attached to said leaks.
Thing is, per the unsealed documents, the company now known as Meta appears to have suspected that developers from high-risk jurisdictions where authoritarian regimes are known to “gather data for intelligence targeting and cyber-espionage” (including North Korea, Russia, China and Iran) were among thousands also accessing Facebook users’ personal data via the same sort of friends’ data permissions route through which the contracted developer, GSR, extracted the Cambridge Analytica data-set.
“It appears from these documents that Facebook has known, since at least September 2018, that hundreds of thousands of developers in countries Facebook characterized as “high-risk,” including the People’s Republic of China (PRC), had access to significant amounts of sensitive user data,” they write.
“As the chairman and vice chairman of the Senate Select Committee on Intelligence, we have grave concerns about the extent to which this access could have enabled foreign intelligence service activity, ranging from foreign malign influence to targeting and counter-intelligence activity,” the pair add, pressing Meta to respond to a series of questions about how it acted after its internal audit flagged that user data may have been accessed by thousands of developers in high risk locations.
It’s fair to say that Meta doesn’t like to dwell on a data access/policy enforcement failure scandal that led to its founder sitting on a booster cushion in Congress and being plied with questions by irate US lawmakers. Quite possibly because it paid $5BN to the FTC to make the whole scandal go away — a settlement that conveniently granted blanket immunity to its executives for any known or unknown privacy violations.
But the problem with Meta wanting the whole episode to be filed away under ‘forever resolved’ is that it has never actually answered all the questions lawmakers asked at the time. Nor has it in the years since, as additional details have emerged.
It hasn’t even published the results of the third party app audit Zuckerberg pledged in 2018 would be carried out. (Although we did find out — indirectly, in 2021 — that a settlement it reached with the UK’s privacy watchdog included a gag clause that prevented the commissioner from talking publicly about the investigation.)
Yet this still unpublished third party app audit formed the keystone of Facebook’s crisis PR response at the time: a promised comprehensive accounting that successfully shielded Zuckerberg and his company from deeper scrutiny, exactly when the pressure on it was greatest to explain how information on millions of users was lifted out of its platform, by a developer with bona fide access to its tools, without the knowledge or consent of the actual Facebook users.
The price of this shielding has probably been pretty high, both reputationally for Meta (which, after all, felt the need to undertake an expensive corporate rebranding and try to reframe its business in the new arena of VR) and in future compliance costs (which obviously won’t only affect Meta), as a number of laws drafted in the years since the scandal seek to put new operational limits on platforms. Limits that are often justified by a framing that foregrounds a perception of Big Tech’s lack of accountability. (See, for example, the UK’s Online Safety Bill, which even includes, in a recent addition, criminal sanctions for CEOs who breach requirements. Or the EU’s Digital Services Act and Digital Markets Act.)
Still, Meta has remained extremely successful at avoiding the kind of in-depth scrutiny of its internal processes, policies and decision-making which paved the way for Cambridge Analytica to take place on Zuckerberg’s watch — and, potentially, for scores of similar data heists, at least per details emerging via legal discovery.
This is why the spectre of Facebook’s failed accountability reappearing is compelling viewing. (See also: A privacy litigation that Meta finally moved to settle last year, with a timing that apparently spared Zuckerberg and former COO Sheryl Sandberg from having to appear in person after they’d been deposed to give testimony — for a settlement price-tag that was not disclosed.)
Whether anything substantial comes of the latest visitation of the ghost of unresolved Facebook privacy scandals remains to be seen. But Meta now has a new long-list of awkward questions from lawmakers. And if it tries to duck substantive answers its execs could face a fresh summons to a public committee grilling. (It’s never the crime, it’s the cover-up etc etc.)
Here’s what the Committee is asking Meta to answer re: the findings of the internal investigation:
1) The unsealed document notes that Facebook conducted separate reviews on developers based in the PRC [People’s Republic of China] and Russia “given the risk associated with those countries.”
- What additional reviews were conducted on these developers?
- When was this additional review completed and what were the primary conclusions?
- What percentage of the developers located in the PRC and Russia was Facebook able to definitively identify?
- What communications, if any, has Facebook had with these developers since its initial identification?
- What criteria does Facebook use to evaluate the “risk associated with” operation in the PRC and Russia?
2) For the developers identified as being located within the PRC and Russia, please provide a full list of the types of information to which these developers had access, as well as the timeframes associated with such access.
3) Does Facebook have comprehensive logs on the frequency with which developers from high-risk jurisdictions accessed its APIs and the forms of data accessed?
4) Please provide an estimate of the number of discrete Facebook users in the United States whose data was shared with a developer located in each country identified as a “high-risk jurisdiction” (broken out by country).
5) The internal document indicates that Facebook would establish a framework to identify the “developers and apps determined to be most potentially risky[.]”
- How did Facebook establish this rubric?
- How many developers and apps based in the PRC and Russia met this threshold? How many developers and apps in other high-risk jurisdictions met this threshold?
- What were the specific characteristics of these developers that gave rise to this determination?
- Did Facebook identify any developers as too risky to safely operate with? If so, which?
6) The internal document references your public commitment to “conduct a full audit of any app with suspicious activity.”
- How does Facebook characterize “suspicious activity” and how many apps triggered this full audit process?
7) Does Facebook have any indication that any developers’ access enabled coordinated inauthentic activity, targeting activity, or any other malign behavior by foreign governments?
8) Does Facebook have any indication that developers’ access enabled malicious advertising or other fraudulent activity by foreign actors, as revealed in public reporting?
Asked for a response to the lawmakers’ concerns, Meta spokesman Andy Stone did not respond to specific questions, including whether the company will finally publish the app audit and whether it will commit to informing users whose information was compromised as a result of features of its developer platform (so presumably that’s a ‘no’ and a ‘no’), opting instead to send this brief statement:
These documents are an artifact from a different product at a different time. Many years ago, we made substantive changes to our platform, shutting down developers’ access to key types of data on Facebook while reviewing and approving all apps that request access to sensitive information.