Facebook's algorithms are partly hand-coded and partly based on machine learning, which explains why the company had to carry out research to learn that they were effectively promoting divisive content to users…
The primary job of the algorithms is to maximize user engagement, so they highlight content that achieves this and reduce the visibility of content that doesn’t.
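To make that mechanism concrete, here is a minimal, purely illustrative sketch of engagement-driven ranking. The `Post` fields, the scoring weights, and the example posts are all invented for this sketch, not drawn from Facebook's actual system; the point is simply that a ranker sorting by predicted clicks, comments, and shares will surface whatever scores highest on those signals, whatever its tone.

```python
from dataclasses import dataclass

@dataclass
class Post:
    """A candidate feed item with hypothetical model-predicted signals."""
    post_id: str
    p_click: float    # predicted probability the user clicks
    p_comment: float  # predicted probability the user comments
    p_share: float    # predicted probability the user shares

def predicted_engagement(post: Post) -> float:
    # Toy scoring function: a weighted sum of predicted interactions.
    # The weights are invented for this example; comments and shares are
    # weighted above clicks because they keep users on the platform longer,
    # which is exactly why response-provoking content ranks highly.
    return 1.0 * post.p_click + 3.0 * post.p_comment + 2.0 * post.p_share

def rank_feed(candidates: list[Post]) -> list[Post]:
    # Highlight the posts predicted to drive the most engagement;
    # everything else loses visibility by sinking down the feed.
    return sorted(candidates, key=predicted_engagement, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("measured-news-story", p_click=0.20, p_comment=0.01, p_share=0.02),
        Post("divisive-rant", p_click=0.25, p_comment=0.15, p_share=0.10),
    ])
    for post in feed:
        # The divisive post wins despite similar click appeal, because it
        # provokes far more comments and shares.
        print(post.post_id, round(predicted_engagement(post), 2))
```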
An internal investigation found that one of the unintended consequences of this was that users were actively steered toward sensational and polarized content, because that was what drove them to respond. The Wall Street Journal reports that senior executives at the social network were asked to take action to limit the visibility of divisive content, but chose not to do so.
The report says there were two reasons for this, in addition to the obvious one of not wanting to reduce eyeball time for ads.
‘Our algorithms exploit the human brain’s attraction to divisiveness,’ read a slide from a 2018 presentation. ‘If left unchecked,’ it warned, Facebook would feed users ‘more and more divisive content in an effort to gain user attention & increase time on the platform’ […]
The high number of extremist groups was concerning, [an earlier] presentation says. Worse was Facebook’s realization that its algorithms were responsible for their growth, [finding] that ‘64% of all extremist group joins are due to our recommendation tools’ and that most of the activity came from the platform’s ‘Groups You Should Join’ and ‘Discover’ algorithms: ‘Our recommendation systems grow the problem’ […]
Facebook had kicked off an internal effort to understand how its platform shaped user behavior and how the company might address potential harms. Chief Executive Mark Zuckerberg had in public and private expressed concern about ‘sensationalism and polarization.’
But in the end, Facebook’s interest was fleeting. Mr. Zuckerberg and other senior executives largely shelved the basic research, according to previously unreported internal documents and people familiar with the effort, and weakened or blocked efforts to apply its conclusions to Facebook products.
First, the company took the view that it should not interfere with free speech, even if doing so were in users’ interests, on the grounds that intervention would be “paternalistic.” CEO Mark Zuckerberg is said to be a particularly vigorous proponent of this argument.
Second, the company feared that any action to limit divisiveness might be perceived as politically motivated.
The whole piece is worth reading.