Researchers have developed a method to deliver a Facebook ad campaign to just one person out of 1.5 billion, based only on the user’s interests, and not on personally identifiable information (PII), such as the email addresses, phone numbers, or geographical location typically associated with ‘targeting’ scandals of recent years.
Users have limited control over these interests, which are algorithmically determined based on browsing habits, ‘likes’, and other forms of interaction that Facebook is able to identify, and which are included in the criteria for being served a Facebook ad.
Since interests are associated with Facebook users based on the content they post and interact with, users can be individually targeted without ever explicitly stating their interests in anything they post, and despite nearly all current measures they might take to protect themselves from hyper-specific ad targeting.
The research also suggests that ‘nanotargeting’ users in this way is not only cheap, but occasionally free, since Facebook often will not charge an advertiser for an underserved campaign (i.e. a campaign that only reached one person).
In 2018 an AdNews study established that Facebook algorithmically assigned an average of 357 interests per user, of which 134 were rated as ‘accurate’.
High Interest Rates
The authors of the new paper tested their assumptions on themselves, creating Facebook ad campaigns designed to ‘nanotarget’ each author out of a potential audience of 1.5 billion Facebook users, based on a random array of target interests; the ads were successfully and exclusively delivered to the targets when higher numbers of the randomly-chosen interests were considered (see results table towards the end of the article).
The researchers estimate that an individual can be identified and targeted, based only on their interests, with 90% accuracy, though the number of interests needed varies depending on how common those interests are:
‘Our results indicate that the 4 rarest [Facebook] interests of a user make them unique in the mentioned user base with a 90% probability. If we instead consider a random selection of interests, then, 22 interests would be required to make a user unique with a 90% probability.’
The authors suggest that this approach to the sniper targeting of supposedly generalized or semi-anonymous Facebook user audiences is only ‘the tip of the iceberg’ in terms of using non-PII data to undo recent efforts and initiatives to protect user privacy in the wake of Cambridge Analytica.
The paper, titled Unique on Facebook: Formulation and Evidence of (Nano)targeting Individual Users with non-PII Data, is a collaboration between three researchers at the Universidad Carlos III de Madrid, together with a data scientist from GTD System & Software Engineering and a professor at Austria’s Graz University of Technology.
The research was undertaken on a dataset collected in January of 2017. The following year, Facebook increased the minimum Potential Reach crowd size for an ad campaign from 20 to 1000, but the researchers note that this does not prevent advertisers from targeting groups of fewer than 1000 users; it only prevents them from knowing the actual size of the target audience obtained.
The researchers also note that prior work has demonstrated that the 1000-user limit can effectively be lowered to as little as 100, and that 100 users is the smallest target group available for those wishing to reproduce the work.
However, since the dataset was compiled, Facebook has added ‘Whole world’ as a potential catchment area for a campaign, which means that the researchers proved their hypothesis under additional restrictions that no longer exist (they instead had to submit a filtered location target covering the 50 countries where Facebook has the largest user presence, resulting in a potential audience of 1.5 billion users).
The data was obtained from a body of 2,390 genuine Facebook users that had installed the authors’ FDVT browser extension (see image below and video at end of article) prior to January 2017, all volunteers. The extension provides users with a real-time estimation of the revenue that their browsing generates for Facebook, based on PII and demographic data that the volunteers agree to share with the researchers.
The researchers obtained 1.5 million data points from 99,000 unique Facebook interests associated with the participants, who had a median of 426 registered interests.
The researchers then derived a formula to establish the minimum number of interests necessary to nanotarget an individual, finding that only 4 ‘marginal’ (i.e. rare) interests are required, and that the probability of a successful attack increases as the interests become more specialized and less representative of broad interest trends.
For ‘random interests’ – interests drawn arbitrarily from the pool of all available interest categories – the formula estimated that ’12, 18, 22, and 27 random interests make a user unique on FB with a probability of 50%, 80%, 90%, and 95%, respectively’.
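Probabilities of this kind can be estimated empirically from a user-interest dataset. The Python sketch below is a toy illustration, not the authors’ code: the miniature dataset and the function name are invented, and it approximates uniqueness by Monte Carlo sampling, repeatedly drawing k random interests from a target user and checking how often no other user shares all of them.

```python
import random

# Toy user -> interest-set mapping. In the study, 2,390 users held a
# median of 426 interests drawn from roughly 99,000 unique interests.
users = {
    "alice": {1, 2, 3, 4, 5},
    "bob":   {1, 2, 3, 6, 7},
    "carol": {1, 2, 8, 9, 10},
}

def uniqueness_probability(target, k, trials=2000, seed=0):
    """Estimate the probability that k interests sampled at random
    from `target`'s interest set match no other user in the dataset."""
    rng = random.Random(seed)
    pool = list(users[target])
    if k > len(pool):
        raise ValueError("user has fewer than k interests")
    unique = 0
    for _ in range(trials):
        subset = set(rng.sample(pool, k))
        # The sample makes the user unique if no *other* user
        # holds every interest in it.
        if not any(subset <= interests
                   for name, interests in users.items() if name != target):
            unique += 1
    return unique / trials

# With this toy data, only the subset {1, 2, 3} of alice's interests is
# also held by bob, so most 3-interest samples single her out.
print(uniqueness_probability("alice", 3))
```

Run against a realistic dataset, the curve of this estimate as k grows is what yields figures like the paper’s ‘22 random interests for 90% uniqueness’.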
The authors created targeted ad campaigns aimed at themselves using random sets of interests assigned by the Facebook ads interface. Though more precise results could have been obtained by setting ‘marginal’ interests, the authors preferred to prove the broad applicability of the theory, rather than ‘cheating’ by keying on hyper-specific interests.
Using several kinds of evidence, including snapshots of the ‘Why am I seeing this ad?’ notice included with Facebook ads, the authors defined a campaign as successful when the target was exclusively served the ad based on their interests alone; ‘failure’ covered cases where the ad was shown not only to the author in question, but to other users as well.
Nine of the 21 campaigns run, with varying numbers of interests as target criteria, successfully ‘monotargeted’ the intended recipient of the ad, with success rising according to the number of interests specified (bearing in mind that ‘random’ interests were used to obtain these results, not crafted, user-specific interests).
The authors acknowledge that the high cost of manipulative Facebook ad campaigns could make this kind of attack non-feasible. However, it transpires that the cost was minimal:
‘Unfortunately, results extracted from the [Facebook] Ad Campaign Manager [prove] that nanotargeting a user is rather cheap. Indeed, the overall cost of the 9 successful nanotargeting campaigns was only 0.12€. Surprisingly, [Facebook] did not charge us anything in three of the successful nanotargeting campaigns that delivered only 1 ad impression to the targeted user.
‘Therefore, rather than a discouraging factor, the extremely low cost of nanotargeting may encourage attackers to leverage this practice.’
Skirting Facebook ‘Protections’
The paper notes that Facebook’s ad services impose ‘minimum list sizes’ on the audiences an advertiser can target, technically making it impossible to upload a specific individual as an ad campaign target. However, the authors observe that these restrictions are trivial to circumvent.
For instance, the report observes, a CEO recounted in 2017 how he was able to poach a potential staffer from another company by orchestrating a Facebook campaign designed to reach only that target individual, a male. This involved satisfying Facebook’s minimum list size (then 30) by uploading a list of twenty-nine females and one male (the target), and then selecting ‘Male’ as a delivery criterion.
The paper contends that Facebook’s restrictions, though subsequently updated, are imperfectly enforced and inconsistent. While the results of an earlier paper forced the social media giant to disallow configuring audiences of less than 20 in its Ads Campaign Manager, the authors dispute the effectiveness of the policy change, stating that ‘Our research shows that this limit is not currently being applied’.
Besides the general cultural backlash from the Cambridge Analytica scandal, which incited reluctant change from advertising giants such as Google, nanotargeting of advertisements undermines the common-sense understanding that ad culture is ‘general’ culture, shared, if not by everyone, at least by a broad demographic or geographical group.
The paper’s authors point out a number of cases where nanotargeting was used deceptively, including an incident in 2017 when UK Labour politician Jeremy Corbyn, then Leader of the Opposition, directed that Labour should run a Facebook ad campaign to encourage voter registration.
Labour party chiefs disapproved of the idea but, rather than entering into open conflict, simply implemented a £5,000 ad campaign designed to target only Corbyn and his associates, along with a select number of sympathetic journalists. No-one else saw those ads.
The authors state:
‘[Nanotargeting] can be effectively used to manipulate a user to persuade them to buy a product or to convince them to change their mind regarding a particular issue. Also, nanotargeting could be used to create a fake perception in which the user is exposed to a reality that differs from what the rest of the users see (as happened in the case of Corbyn). Finally, nanotargeting could be exploited to implement some other harmful practices such as blackmailing.’
‘Finally, it is worth noting that our work has only revealed the tip of the iceberg regarding how non-PII data can be used for nanotargeting purposes. Our work exclusively relies on users’ interests, but an advertiser can use other available socio-demographic parameters to configure audiences in the [Facebook] Ads Manager such as the home location (country, city, zip code, etc.), workplace, college, number of children, mobile device used (iOS, Android), etc, to rapidly narrow down the audience size to nanotarget a user.’