Don’t regulate social media out of existence

The Christchurch massacre has given further impetus to a push, underway for some time, to dramatically increase social media companies’ obligations for the content they host. In Australia, the federal government has proposed a taskforce to examine how quickly and effectively social media companies deal with violent images.

To be clear, this push goes far beyond just preventing terrorists from having a platform to spread their ideas. ‘Regulate the internet’ is pushed as the solution to the radicalisation of disaffected youth, the dissemination of hate speech online, sexist and derogatory abuse towards women on social media, even the spread of ‘fake news’.

However, making Facebook, Google and Twitter morally (or even legally) culpable on the basis that they failed to prevent the bad behaviour of a minority of users is wrong.

For a start, it lets the actual perpetrators of these heinous acts — and those who cheer on their efforts — off the hook.

It’s not Facebook’s fault that the Christchurch killer chose to livestream his appalling massacre. It’s not Twitter’s fault that people tweet death threats to any woman who has the temerity to appear on a public affairs television show.

Social media allows people to interact with one another. So, much like all places where people have congregated throughout history, some of those interactions are positive, some are negative, and some are criminal.

The harmful acts are solely the responsibility of the perpetrators. There can be no moral equivalence between creating a space that can be misused to commit violence, or not policing this space quickly enough, and committing violence.

We can expect social media platforms, like phone carriers, to work with police to bring criminals to justice. At a minimum, they owe it to their users to enforce their terms of service and ban people who are misusing their platform.

However, if we want the digital public square to be a place of freedom and global connectivity — and there is no question that we should want this — we cannot expect platforms to pre-vet all content before publication.

We cannot expect them to prevent users from sharing objectionable content at all. To do so would undermine their primary purpose, probably fatally.

This is an edited extract of an opinion piece published in the Australian Financial Review.