
  • Written by David Glance, Director of UWA Centre for Software Practice, University of Western Australia

In the wake of violence in the US town of Charlottesville, the tech industry has started removing access to some of their services from groups associated with the far-right and those espousing racial intolerance.

Apple has disabled Apple Pay on sites selling clothing, stickers and other merchandise bearing Nazi logos and other white supremacist slogans. GoDaddy and Google withdrew support for the “Daily Stormer”, a far-right website. Other companies, including Uber, Facebook, Twitter, MailChimp and WordPress, have taken varying degrees of action.

The battle between protection and censorship

The moves by the tech companies, whilst generally welcomed after the events of Charlottesville, including the tragic death of Heather Heyer, have nonetheless revived the ongoing debate about the tension between regulating hate speech and preserving, for Americans at least, the sanctity of freedom of speech.

Groups like the Electronic Frontier Foundation (EFF), while supporting the actions against neo-Nazi groups, have at the same time expressed concern for free speech and for upholding the First Amendment of the US Constitution, which enshrines that right. The EFF is concerned that the platforms will not exercise this power judiciously and that other groups and voices will be wrongly silenced in the same way.

Facebook has come under recent criticism for censoring LGBTQ people’s posts because they contained words that Facebook deems offensive. At the same time, the LGBTQ community is one of the groups frequently targeted with hate speech on the platform.

If users seem to want to “have their cake and eat it too”, the tech companies are similarly conflicted.

Facebook’s community standards state that it will remove posts it deems to be hate speech.

At the same time, however, Facebook has fought strongly against a German law that would see it, and other social media platforms, fined up to €50 million if they fail to remove hate speech and other illegal content from their sites within days of being notified.

In its fight against the law, Facebook claimed it could not technologically filter and deal with the sheer volume of images and content posted on its platform. It further claimed that dealing with hate speech on its platform was not its responsibility but that of the “public and state”.

It would be easy to think that the tech companies simply wanted to be seen to be doing something about hate speech whilst at the same time limiting their responsibility to deal with the problem systematically.

A difficult problem

On the surface, it may seem a significant challenge to allow free speech whilst stopping hate speech that targets people based on their race, religion, ethnicity, national origin, sexual orientation, sex, gender or gender identity, or serious disabilities or diseases.

In Germany, Facebook argued that it would need to hire thousands of lawyers to review posts brought to its attention. At the same time, however, Facebook markets its platform to advertisers explicitly on the basis that it can provide detailed personal information derived from what its 2 billion monthly users post and read. Facebook often talks about its advances in machine learning and in text and image recognition, which are certainly capable of at least highlighting problematic posts for human review or identifying copies of images it has already deemed problematic.
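To illustrate that last point, here is a minimal sketch, assuming a hypothetical in-memory store and invented function names, of how a platform could automatically recognise copies of images that human moderators have already removed: keep a fingerprint of each removed image and compare new uploads against that set. This is not Facebook’s actual pipeline; the exact-match hashing shown here would miss resized or re-encoded copies, which real systems handle with perceptual hashing and machine-learned classifiers.

```python
import hashlib

# A minimal sketch of copy detection for already-removed images, assuming a
# simple in-memory store. A cryptographic hash only catches byte-identical
# re-uploads; production systems use perceptual hashing and machine-learned
# classifiers so that resized or edited copies still match.
removed_image_hashes = set()

def fingerprint(image_bytes):
    """Return a stable fingerprint for an image file's raw bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def record_removed_image(image_bytes):
    """When a human moderator removes an image, remember its fingerprint."""
    removed_image_hashes.add(fingerprint(image_bytes))

def should_flag_upload(image_bytes):
    """Flag a new upload for review if it matches an already-removed image."""
    return fingerprint(image_bytes) in removed_image_hashes

# Usage: once an image has been removed, an identical re-upload is flagged.
banned = b"raw bytes of an image a moderator has removed"
record_removed_image(banned)
print(should_flag_upload(banned))                          # True
print(should_flag_upload(b"bytes of an unrelated image"))  # False
```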

Distinguishing freedom of speech from hate speech

The right to freedom of speech is not unique to the United States. It is also enshrined in Article 19 of the Universal Declaration of Human Rights. At the same time, the laws of many countries, like Germany, and other international conventions explicitly limit these freedoms when it comes to hate speech. The illegality of hate speech is made explicit in Article 13(5) of the American Convention on Human Rights and in the UN’s International Convention on the Elimination of All Forms of Racial Discrimination.

National and international courts have already dealt with numerous cases that have led to determinations of the differences between freedom of speech and hate speech.

It would not be impossible for tech companies to form clear guidelines within their own platforms about what was and wasn’t permissible. For the mainly US-based companies, this would mean becoming increasingly aware of the differences between US law and culture and those of other countries.

Will their actions continue?

It is always unfortunate that it takes the loss of human life to spur the tech companies into behaviour that should have been their default. It remains to be seen how long this activity will persist before they revert to claiming that, ultimately, it is not their problem.

Authors: David Glance, Director of UWA Centre for Software Practice, University of Western Australia

Read more http://theconversation.com/tech-companies-can-distinguish-between-free-speech-and-hate-speech-if-they-want-to-82695
