Facebook has introduced restrictions on Facebook Live in response to the New Zealand terrorist attack, in which the gunman live-streamed footage of the shootings. The move makes good on a promise the company made back in March …
The company announced the new approach yesterday, and while Facebook makes it sound tough, describing it as a ‘one strike’ policy, the details are vague and the remedy seems weak.
The vagueness arises from the fact that Facebook doesn’t list the specific violations that will trigger the Facebook Live restrictions, nor does it say how long the time-outs will last, giving 30 days merely as an example.
Today we are tightening the rules that apply specifically to Live. We will now apply a ‘one strike’ policy to Live in connection with a broader range of offenses. From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time – for example 30 days – starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time.
That example, too, seems ridiculously weak: someone shares a statement from a terrorist group and, 31 days later, can use Facebook Live again?
The company does acknowledge the difficult balancing act between freedom of expression and the prevention of abuse.
However, while governments have to act with enormous care in restricting freedoms in the name of safety, access to Facebook Live is hardly a basic human right. It seems to me that erring on the side of caution here would be more appropriate.
On a more positive note, Facebook is investing in new research to help it automatically detect and block modified versions of banned videos – a problem shared by YouTube.
The social network is partnering with the University of Maryland, Cornell University, and the University of California, Berkeley, on the AI research.
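To give a rough sense of how detecting re-uploads of known content can work in principle (this is not a description of Facebook's actual system), one common building block is perceptual hashing: reduce an image or video frame to a tiny fingerprint that survives re-encoding and minor edits, then compare fingerprints of new uploads against those of known banned material. The Python sketch below is a minimal illustration using Pillow, with hypothetical file names.

```python
# Illustrative sketch only: a simple "average hash" perceptual fingerprint,
# the kind of technique used to spot near-duplicates of a known frame even
# after re-encoding or minor edits. Requires Pillow (pip install Pillow).
from PIL import Image


def average_hash(path: str, hash_size: int = 8) -> int:
    """Return a 64-bit perceptual hash of the image at `path`."""
    # Shrink to an 8x8 grayscale thumbnail so only coarse structure remains.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the average.
    bits = 0
    for i, p in enumerate(pixels):
        if p > avg:
            bits |= 1 << i
    return bits


def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits between two hashes (0 = identical)."""
    return bin(h1 ^ h2).count("1")


if __name__ == "__main__":
    # Hypothetical file names, purely for illustration.
    known = average_hash("banned_frame.png")
    upload = average_hash("uploaded_frame.png")
    # A small distance suggests the upload is a near-duplicate of banned content.
    print("near-duplicate" if hamming_distance(known, upload) <= 10 else "different")
```

Deliberately modified videos at Facebook's scale are a far harder problem than this toy comparison suggests, which is precisely why the company is funding dedicated research.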
Photo: Shutterstock