

How Facebook’s AI Spreads Misinformation and Threatens Democracy


Dan Tunkelang, who oversaw AI research at LinkedIn, stated: “The moment that recommendations have the power to influence decisions, they become a target for spammers, scammers, and other people with less than noble motives.”

This is the quandary social media companies such as Facebook face. Facebook uses implicit feedback to track clicks, views, and other measurable user behaviors. These signals feed what is known as a “recommendation engine”, an AI system that ultimately decides who sees what content and when.

Facebook has optimized its recommender engine to maximize user engagement, which is measured by the amount of time users spend glued to the Facebook platform. Maximizing that time takes priority over every other variable, including the quality or accuracy of what is being recommended.
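To make the incentive concrete, here is a minimal sketch of what engagement-optimized ranking from implicit feedback could look like. The signal names, weights, and scoring function are illustrative assumptions, not Facebook's actual implementation; the point is simply that nothing in the objective rewards accuracy.

```python
from dataclasses import dataclass

@dataclass
class ImplicitFeedback:
    # Hypothetical implicit signals a platform might predict for each candidate post
    predicted_click: float       # probability the user clicks
    predicted_dwell_secs: float  # expected seconds spent on the post
    predicted_reshare: float     # probability the user shares it

def engagement_score(fb: ImplicitFeedback) -> float:
    # Illustrative weights: every term rewards attention;
    # no term measures the quality or accuracy of the content.
    return 1.0 * fb.predicted_click + 0.1 * fb.predicted_dwell_secs + 2.0 * fb.predicted_reshare

def rank_feed(candidates: dict[str, ImplicitFeedback]) -> list[str]:
    # Order posts purely by predicted engagement, highest first.
    return sorted(candidates, key=lambda post: engagement_score(candidates[post]), reverse=True)
```

Under an objective like this, a sensationalist post with a high predicted click and reshare probability outranks a sober, accurate one every time, because accuracy never enters the score.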

The system is designed to reward sensationalist headlines that engage users by exploiting cognitive biases, even if those headlines happen to be written by Russian trolls with the intention of dividing society or swaying political elections.

How Facebook Uses AI

There is little awareness of how Facebook uses AI to decide what its users see and interact with. To understand why this matters, one must first understand confirmation bias. Psychology Today describes it as:

Confirmation bias is the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one's prior beliefs or values.

Facebook understands that users are more likely to click on news that feeds their confirmation bias. This sets a dangerous precedent both for the spread of conspiracy theories and for the creation of echo chambers where users are spoon-fed exclusively what they want to see, regardless of accuracy or the societal impact of what is being seen.

A study by MIT demonstrated that fake news on Twitter spreads six times faster than real news.

This means that both Twitter and Facebook can be weaponized. While Twitter enables anyone to intentionally follow feeds with narrow or biased viewpoints, Facebook takes this a step further. A user on Facebook currently has no way to control or measure what is being seen; this is controlled entirely by Facebook's recommender engine, how it measures user engagement, and how it optimizes for that engagement.

Facebook attempts to shape and predict its users' desires. Facebook estimates the degree to which a user will like or dislike a news item the user has not yet seen. To avoid a loss in user engagement, Facebook then bypasses news items that may reduce engagement and instead feeds the user news items that reinforce their confirmation bias, ensuring more clicks, comments, likes, and shares.

Facebook also uses automated collaborative filtering of historical user actions and opinions to match participants (friends) with similar opinions. Facebook uses a utility function that mathematically predicts and ranks your preferences for items that you want to see.
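A minimal sketch of user-based collaborative filtering, the general technique described above, follows. The history vectors, similarity measure, and weighting scheme are illustrative assumptions; this is not Facebook's actual utility function.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # How alike two users' interaction histories are, from 0 (nothing in common) to 1 (identical).
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def predict_utility(target_history: np.ndarray,
                    friend_histories: list[np.ndarray],
                    item_index: int) -> float:
    # Each history vector records a user's past reactions to items (1 = engaged, 0 = ignored).
    # The predicted utility of an unseen item is the similarity-weighted average of how
    # like-minded friends reacted to it, so opinions you already share count the most.
    sims = [cosine_similarity(target_history, f) for f in friend_histories]
    total = sum(sims) + 1e-9
    return sum(s * f[item_index] for s, f in zip(sims, friend_histories)) / total
```

Items predicted to score poorly with users like you are simply never shown, which is how challenging viewpoints quietly disappear from the feed.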

This causes users to fall down a rabbit hole: they are trapped in fake news, fed content that reinforces their bias. The content that is presented is inherently designed with the goal of influencing what you click on. After all, if you believe the conspiracy theory that Bill Gates is attempting to microchip the human population using vaccines, why should Facebook present you with contradictory evidence that may cause you to disengage from the platform? If you support a certain political candidate, why should Facebook offer news that may contradict your positive views of that same candidate?

As if this weren't enough, Facebook also engages in what is known as “social proof”. Social proof is the concept that people will follow the actions of the masses. The idea is that since so many other people behave in a certain way, it must be the correct behavior.

Facebook provides this social proof in the form of likes, comments, and shares. Since only certain friends may see the newsfeed item (unless they specifically search a user's newsfeed), the social proof simply serves to reinforce the confirmation bias.

Facebook also uses filter bubbles to limit exposure to conflicting, contradictory, or challenging viewpoints.

Facebook Ads

Unsuspecting Facebook users may be clicking on ads without being aware that they are being presented with ads. The reason is simple: only the first person who is shown an ad sees the ad disclaimer. If that user shares the ad, everyone on their friends list simply sees the “share” as a newsfeed item, because Facebook intentionally drops the ad disclaimer. Users immediately drop their guard; they are unable to differentiate between an ad and content that would have organically appeared in their newsfeed.

Facebook Shares

Unfortunately, things get worse. If a user has 1,000 friends who simultaneously share content, the recommender engine will prioritize content from the minority who share the same views, even when it often consists of unproven conspiracy theories. The user is then under the illusion that these newsfeed items are being seen by everyone. By engaging with this newsfeed, these users are amplifying each other's social proof.

Should a user attempt to enlighten another user about a misleading or fake item, the very act of commenting or engaging with the newsfeed item simply increases the original user's engagement time. This feedback loop causes Facebook to keep serving that user additional fake news.
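A toy sketch of this feedback loop follows. The interaction weights and update rule are illustrative assumptions, but they capture the problem: an engagement-optimized system cannot tell a rebuttal from an endorsement.

```python
def update_item_score(score: float, interaction: str) -> float:
    # Hypothetical update rule: every interaction raises the item's score,
    # whether it is supportive or critical.
    weights = {"like": 1.0, "share": 2.0, "comment": 1.5}
    return score + weights.get(interaction, 0.5)

score = 10.0
score = update_item_score(score, "comment")  # a friend posts a careful debunking
score = update_item_score(score, "comment")  # the original poster argues back
print(score)  # 13.0 -- the fake item now ranks higher and reaches more people
```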

This creates an echo chamber, a filter bubble in which a user is trained to believe only what they see. Truth becomes an illusion.

Seriousness of Issue

Over 10 million people engaged with a newsfeed item claiming that Pope Francis came out in favor of Trump's election in 2016. There was no evidence for this; it was simply a fake news story that came out of Russia, yet it was the most-shared news story on Facebook in the three months leading up to the 2016 election.

The news item was generated by a Russian troll farm that calls itself the “Internet Research Agency”. This very same organization was responsible for promoting and sharing race-baiting articles on Twitter and Facebook, demonizing Black Lives Matter, and weaponizing fake news items that spread false claims about American politicians.

The Select Committee on Intelligence released an 85-page report detailing Russian Active Measures Campaigns and Interference, the bulk of which involved the spread of divisive fake news and propaganda with the sole intent of influencing the 2016 US election.

Fast forward to the 2020 election and the problem has only intensified. In September 2020, acting on an FBI tip, Facebook and Twitter terminated social media accounts belonging to a news organization going by the name of PeaceData, which is linked to Russia's state propaganda efforts.

Unfortunately, shutting down accounts is a temporary and ineffective solution. Russian accounts often take the form of friend requests, frequently disguised as women with attractive profiles targeting men, or else hijacked user accounts with a history of regular posts. These hijacked accounts slowly shift to more political posts, until they are dominated by propaganda or conspiracy theories.

An unsuspecting user may be unaware that a friend's account has been compromised. If that user is vulnerable to conspiracy theories, they may engage with the fake newsfeed item, and the Russian troll, which is often a bot, then provides additional social proof in the form of likes or comments.

Vulnerable users are often those who least understand how technology and AI work. The over-65 demographic, which is the population most likely to vote, is also the most likely to spread fake news, as reported by The New York Times.

According to the study, Facebook users aged 65 and older posted seven times as many articles from fake news websites as adults aged 29 and younger. A lack of digital media literacy leaves this age group unprepared for a newsfeed grounded not in facts or accuracy, but exclusively in user engagement.

Bad actors are taking advantage of Facebook's recommender engine, which exploits our cognitive biases against us. These same organizations have optimized the abuse of Facebook's AI to spread conspiracy theories and propaganda. Conspiracy theories that may seem innocent at first are often used as funnels into white supremacy, far-right nationalism, or QAnon, a bizarre conspiracy theory involving Trump trying to save the world from liberal pedophiles, a conspiracy that has zero basis in reality.

Summary

Facebook is clearly aware that there is a problem, and it has publicly announced a strategy that focuses on removing content that violates Facebook's Community Standards. The problem is that deleting accounts is a temporary stopgap measure, ineffective when accounts are generated in bulk by bots or through the mass hacking of user accounts. Nor does it solve the problem that most of the sharing is done by regular users who are unaware that they are spreading misinformation.

Adding warning labels simply serves to reinforce conspiracy theories that social media giants are biased against conservatives, who are the most susceptible to fake news.

The solution needs to be a new recommender engine that measures not only user engagement, but is also optimized for user happiness by delivering truth and promoting enhanced self-awareness.

In the meantime, Facebook should follow the path that Twitter took to ban political ads.

Lastly, an important question needs to be asked. If people no longer have a choice in the news that they see, when does it stop being a recommendation and when does it become mind control?

Recommended Reading:

Russian Active Measures Campaigns and Interference – Report by the Select Committee on Intelligence, United States Senate.

The Shocking Paper Predicting the End of Democracy – By Rick Shenkman, founder of George Washington University’s History News Network.

Older People Share Fake News on Facebook More – The New York Times

A founding partner of unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.