Facebook is now starting to remove manipulated media that could mislead viewers, including deepfakes. The company has routinely come under pressure of late for taking a different stance from other social media platforms when it comes to stemming the flow of disinformation, and manipulated videos now appear to be one of the few exceptions to that hands-off approach.

Facebook has consistently defended people's right to post what they want with limited intervention. That extends to information that might be misleading, even when it appears in a paid-for ad, and even when that misleading ad comes from a politician. While that stance largely remains in place, manipulated videos pose a different challenge, as they can be created and uploaded by others, resulting in footage that is both artificial and misleading.

Today, Facebook confirmed it is now going to take a stance against this type of media by removing it when it surfaces on the social network. However, the company has added some caveats, stating it will only remove sophisticated edits that could genuinely lead people to believe someone said something they didn't. Facebook specified the policy won't result in the removal of any content deemed parody or satire. Videos that don't meet the policy's threshold for removal may still be subject to a manual check. While that check won't result in them being taken down, videos found to be false or to contain false information will have their distribution reduced and be labelled with a "false" warning. The company explains that removing a video identified as false won't stop it from showing up elsewhere on the internet, and argues that keeping it live with a warning is more informative.

Facebook Change Might Not Stop Misleading Election Information

With 2020 here, it's a big year for U.S. politics thanks to the upcoming presidential election. Over the last few months, the impending vote has placed greater pressure on social media sites to take responsibility for the information they allow to be shared, and some have taken measures to try to limit the spread of misleading information. Spotify, for example, recently announced it won't allow any paid-for political advertisements on its platform, citing the level of oversight it would need to responsibly review them all.

In contrast, Facebook has not made any similarly sweeping decisions, opting to continue allowing posts regardless of their accuracy. Even so, this marks a fairly major change for a company that has until now taken a more liberal approach to the content it is happy to promote on the platform. In keeping with that wider viewpoint, though, Facebook is still leaving room for misleading content to be shared. The key takeaway from the announcement is that it will only remove content that's wholly artificial. For example, it won't remove a clip if the person speaking actually said those words, even if the footage has been edited to make it sound like they said something different. In Facebook's own (and unedited) words, the policy does not extend to "video that has been edited solely to omit or change the order of words."

More: Facebook And Zuckerberg Accused Of Rigging Political Ads, Manipulating Voters

Source: Facebook