FACEBOOK KNOWS IT PROMOTES MISINFORMATION AND DOES SO TO MAXIMIZE PROFITS
Facebook promotes misinformation. It knows this is harmful, it knows how to fix it, but it does it anyway for the sake of profits. This is true across the full range of content, from racist and misogynistic disinformation to Russian propaganda. It is true globally and across languages, with the worst abuses probably occurring outside the U.S. and in languages other than English.
Facebook undermines democracy and promotes divisiveness and hate (as do other social media platforms such as Instagram, TikTok, Twitter, and YouTube) based on conscious decisions by senior management. (See this previous post on the harm being done by Facebook and other social media platforms.)
The reason that Facebook (and other social media platforms) refuse to effectively control (i.e., “moderate”) content is that profits come first. In 2021, Facebook made $39.4 billion in profits primarily from advertising exquisitely targeted to its almost three billion users.
Perhaps the ultimate confirmation of this is that Facebook and Instagram (both owned by Meta) have been blocked in Russia since the invasion of Ukraine, yet Facebook and Instagram are still publishing and promoting Russian propaganda around the world. Although they claim to be moderating disinformation from Russia, 80% of disinformation about U.S. biological weapons has been posted without being flagged or blocked. [1]
Currently, Facebook’s only incentives to moderate the content it allows and promotes are to avoid government regulation and to not be so offensive that advertisers pull their ads. In an effort to address concerns about content moderation (which admittedly sometimes requires difficult judgment calls that will be unpopular with some people), Facebook created an “Oversight Board” in 2019 to review its moderation decisions. Facebook claims the Board is independent and recruited an impressive set of individuals to serve on it. [2]
Roughly a year ago, the Board issued its first major report, a 12,000-word review of Facebook’s decision to indefinitely suspend Donald Trump from the platform. The Board affirmed the decision to suspend Trump, but stated that it was inappropriate to make the suspension indefinite.
The Board said Facebook should either make the suspension permanent or set a specific length of time for it. The Board noted that Facebook management was seeking to dodge responsibility and that it should impose and justify a specific penalty.
The Board also posed questions to Facebook management whose answers it felt were essential to enabling it to do its oversight job. However, Facebook management refused to answer these questions and failed to provide information on:
· The extent to which Facebook’s design decisions, including algorithms, policies, procedures, and technical features, amplified Trump’s posts.
· Whether an internal analysis had been done of whether such design decisions might have contributed to the insurrection at the Capitol on January 6, 2021.
· Content violations by followers of Trump’s accounts.
The Board noted that without this information it was difficult for it to assess whether less severe measures, taken sooner, might have been effective in solving the problem of Trump’s violations of Facebook’s standards.
As the Board suggests, the central issue is not simply Trump’s posts, but Facebook’s amplification of those posts and others like them. In other words, the real issue is the nature of Facebook’s content promotion algorithm and whether it promotes posts from Trump and from people expressing views like or in support of Trump’s posts. However, the Board’s jurisdiction, as defined by Facebook management, excludes oversight of Facebook’s algorithm and business practices. Furthermore, the Board has no power to compel Facebook management to abide by its decisions and recommendations, or even to simply answer its questions. The Board will be effective only to the extent that Facebook management cooperates, and cooperating would mean reducing profits, which management will not do voluntarily.
Facebook founder and now chief executive of its parent company Meta, Mark Zuckerberg, once stated: “At the heart of these accusations is this idea that we prioritize profit over safety and well-being. That's just not true.” The data clearly show that it is true, and hardly anyone believed Zuckerberg when he said it wasn’t.
My next post will provide documentation of Facebook’s promotion of disinformation and divisiveness, its conscious decision to do this, and its ability (and occasional willingness) to change this. The post will also include steps that can and should be taken to force Facebook and other social media platforms to change their behavior.
[1] Benavidez, N., & Coyer, K., 4/17/22, “Facebook ought to be protecting democracy worldwide every day,” The Boston Globe
[2] Legum, J., 5/6/21, “Facebook’s problem isn’t Trump – it’s the algorithm,” Popular Information (https://popular.info/p/facebooks-problem-isnt-trump-its)