How much of what we do on Facebook influences the content we see, and how much of it is Facebook’s own rules? Are we underestimating the power of the largest social media network on the planet?
In the past year, the issue of Facebook’s control over what its users see and post has come into sharp relief. While it has always been clear that Facebook is in charge, a recent string of bans on certain users and posts has people asking how much Facebook should be allowed to control what is shared on its platform. Concerns have sprung up about how and what Facebook decides each user will see, and what responsibility, if any, Facebook has in this area. Following allegations that fake news on Facebook may have swayed voters during the recent U.S. election, and statistics showing that 88% of millennials now get at least some of their news from Facebook, understanding what users are actually seeing is becoming ever more important.
Algorithms and confirmation bias
When someone logs into Facebook and scrolls through their news feed, what are they really looking at? Over the course of Facebook’s history the answer to that question has changed many times. The social media goliath employs an ever-changing set of algorithms that use data about each user’s past activity to predict which of the many items posted by their friends, groups and pages they will find most interesting and useful. These algorithms have a profound influence on what shows up in a user’s feed, and the code behind them is a Facebook trade secret.
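Facebook has never published that code, but the general idea of ranking by predicted engagement can be illustrated with a short sketch. Everything below is hypothetical – the field names, the weights, the “close friends” boost – and is meant only to show the shape of such a ranking, not to describe Facebook’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topics: set[str]   # e.g. {"politics", "football"}
    likes: int         # how many people have already liked it

def predicted_interest(post: Post, liked_topics: dict[str, int],
                       close_friends: set[str]) -> float:
    """Hypothetical score for how engaging one post is predicted to be for one user."""
    # Affinity for topics the user has engaged with before
    topic_affinity = sum(liked_topics.get(t, 0) for t in post.topics)
    # Posts from people the user interacts with often get a boost
    friend_boost = 2.0 if post.author in close_friends else 1.0
    # Popular posts get an extra push regardless of who posted them
    popularity = post.likes ** 0.5
    return friend_boost * (topic_affinity + popularity)

def rank_feed(candidates: list[Post], liked_topics: dict[str, int],
              close_friends: set[str]) -> list[Post]:
    """Order candidate posts by predicted engagement, highest first."""
    return sorted(candidates,
                  key=lambda p: predicted_interest(p, liked_topics, close_friends),
                  reverse=True)
```

In a real system the score would come from machine-learned models over thousands of signals rather than three hand-picked ones, but the consequence is the same: what a user has engaged with in the past largely determines what they are shown next.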
In 2015 many non-profit organisations found that the reach of their posts was narrowing and that fewer people were seeing their content. Even people who had liked an NGO’s Facebook page weren’t seeing all of its posts. Part of this was a product of rising levels of competing content on Facebook, but another factor was a change in the news feed algorithm that blocked what Facebook determined to be “overly promotional” posts from people’s feeds.
That’s fine if it means people see less of the advertising they find annoying, but it also means that, to stay competitive, non-profits have to resort to paying Facebook to boost their posts.
It is frustrating for users not to know what’s controlling their access to specific posts, and there has been pushback. However, their own actions – their choices in terms of friends and what they “like” – are part of how the algorithms select what will show up in their news feeds.
The combination of personal choice and algorithms creates a situation where those of us on Facebook increasingly see only content consistent with what we have previously liked. The algorithms are designed to show people the content they are predicted to find most engaging, and if we, as users, keep “liking” what we already like, there is a danger of confirmation bias. Users end up in a bubble where they are less and less likely to be exposed to differing points of view.
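That narrowing effect is easier to see in a toy simulation (all numbers below are invented, not drawn from Facebook): a ranker that starts out neutral but reinforces whatever a user engages with tends, round by round, to show less and less of the viewpoint that user engages with slightly less often.

```python
import random

random.seed(0)

# Two viewpoints; the user starts out only slightly more likely to "like" A.
affinity = {"A": 1.0, "B": 1.0}     # what the hypothetical ranker has learned so far
like_prob = {"A": 0.6, "B": 0.4}    # the user's actual behaviour

for round_no in range(1, 6):
    # Show 20 items, each picked in proportion to the learned affinity
    shown = random.choices(["A", "B"],
                           weights=[affinity["A"], affinity["B"]], k=20)
    # Every "like" nudges the ranker toward more of the same
    for item in shown:
        if random.random() < like_prob[item]:
            affinity[item] += 1
    share_b = shown.count("B") / len(shown)
    print(f"round {round_no}: viewpoint B is {share_b:.0%} of the feed")
```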
Mark Zuckerberg, the social media giant’s CEO, maintains that social media is more diverse than any other media source and denies that Facebook creates filter bubbles. However, many voices, including those of prominent media outlets and Facebook’s own researchers, are pushing back against Zuckerberg’s assertion. The bubbles have variously been described as “a warren of walled gardens” and “online echo chambers”, leading to users’ tunnel vision and a lack of interest in seeking out opposing viewpoints. A review of Facebook’s own research stated, “You are seeing fewer news items that you’d disagree with which are shared by your friends because the algorithm is not showing them to you.”
Content policies
Facebook’s famous Community Standards outline the type of content users can and can’t post, and describe how offensive content can be reported to Facebook. For example, certain images of nudity and images that glorify violence are not allowed.
Earlier this year Facebook temporarily banned The Terror of War, the famous so-called “napalm girl” photo, and more recently took down an “offensive” breast cancer awareness video.
Facebook says its policies “can sometimes be more blunt than we would like and restrict content shared for legitimate purposes”. In the last year there have been many instances of content being restricted and then reinstated after Facebook decided it had been posted for “legitimate purposes”.
If there is a functioning corrective mechanism – if Facebook is policing offensive content and correcting mistakes when they happen – is there really a problem? There can be, for example when the content is time-sensitive. When the girlfriend of Philando Castile – an African-American man shot by police – live-streamed a graphic video of the aftermath to show the world the truth as it happened, the video was briefly removed due to a “technical glitch”. Facebook was also accused of censoring the video.
But there is another problem. The onus on users to understand Facebook’s policies in the first place is huge. How many users actually read them? And how many can find them in their own language? While the Facebook interface is available in many languages, the Community Standards and privacy settings are not. For example, if you switch your language setting to Kinyarwanda (a language spoken in Rwanda), the Community Standards remain in English. In addition, privacy controls in particular have been infamously difficult to navigate, even in one’s mother tongue.
At a higher level, there are concerns about hidden motivations behind certain removals, as well as what responsibilities Facebook has – to its users, and to the global community.
The reasons given by Facebook spokespeople for removing – and later reinstating – profiles and pages have become quite familiar, and usually amount to no explanation at all: “The pages were removed in error”, “one of our automated policies was applied incorrectly”, or it was a “technical glitch”. Variations of those non-reasons were offered for a variety of page take-downs: pro-Bernie Sanders groups removed in April, just before he made a formal announcement on his candidacy for the Democratic nomination; the removal of the Philando Castile shooting video; the removal of two libertarian Facebook pages over unspecified posts that violated standards; and the disabling of several Palestinian journalists’ accounts. In the last case, multiple editors from at least three different media outlets found their accounts had been disabled when they attempted to log on. This followed the announcement of an agreement between Facebook and the Israeli government to tackle “incitement”, leading some of the Palestinian journalists to think they had been targeted as part of that arrangement.
All of these examples have a common element of political speech. BuzzFeed reported that Facebook did not respond to questions about whether it would review its tendency to mistakenly muzzle politically significant speech.
Misinformation
Then there is the issue of incorrect and intentionally misleading information being mistakenly promoted to Facebook users. Back in August, after facing criticism that human intervention had played a significant role in its Trending Topics – which were not chosen solely by algorithm, as previously thought – Facebook did away with the human element. Just days later, misinformation started to make its way into users’ feeds. Users who hovered over the name of Fox News journalist Megyn Kelly found a factually incorrect headline suggesting she had been fired by Fox, among other outrageous and offensive trending headlines.
But the Trending Topics scandal is not the only way that fake news has been making its way into users’ feeds. Facebook’s exact algorithm may be a mystery, but we do know that, in addition to showing us stories similar to those we’ve already liked, it also shows us stories that lots of other people have liked, whether or not the content is true. The algorithm doesn’t take into account whether any given story is accurate. As long as a story is being read and clicked on, it will move to the top of the pile, and Facebook’s fact-checkers cannot scan everything posted by the platform’s more than one billion users worldwide.
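A toy example makes the point concrete (the stories and figures below are invented): if a ranking score is built only from clicks and shares, a fabricated story that is widely shared will outrank an accurate one, simply because accuracy is not an input to the score.

```python
# Two hypothetical stories competing for the top of a feed.
stories = [
    {"headline": "Careful investigative report", "accurate": True,
     "clicks": 4_000, "shares": 800},
    {"headline": "Outrageous fabricated claim", "accurate": False,
     "clicks": 90_000, "shares": 10_000},
]

def engagement_score(story: dict) -> int:
    """Popularity-only ranking: clicks and shares count; accuracy is never read."""
    return story["clicks"] + 5 * story["shares"]   # hypothetical weighting

for story in sorted(stories, key=engagement_score, reverse=True):
    print(f"{engagement_score(story):>8}  {story['headline']} "
          f"(accurate: {story['accurate']})")
```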
This Facebook trending story is 100% made up. Nothing in it is true. This post of it alone has 10k shares in the last six hours. pic.twitter.com/UpgNtMo3xZ
— Ben Collins (@oneunderscore__) November 14, 2016
But does Facebook have a responsibility to make sure everything that is posted is true? While more than half of Americans get some news from Facebook, and senior executives have been known to direct content policy and make editorial judgment calls, Facebook denies that it is a media company. Mark Zuckerberg maintains that Facebook is a technology company that gives the media the platform and tools to share their stories, and that because Facebook does not create its own content, it is not a media company.
But it does host, distribute and profit from content the way other media outlets do.
Just this week Facebook announced it would block ads displaying fake news, a move aimed at making it less lucrative to advertise fake stories. This was, in part, a response to allegations that fake news on Facebook during the U.S. presidential campaign ended up swaying voters and influencing what they knew about candidates.
However, when it comes to checking and removing the fake news posts that individual users share, Facebook has no incentive to act. In this post-truth world, its business model relies on clicks, shares and engagement, not on the credibility of content.
Buyer beware.
The cover of Norway’s Aftenposten with an open letter to Mark Zuckerberg, accusing him of threatening freedom of speech and abusing power. NTB Scanpix/Cornelius Poppe/via REUTERS