New livestreaming legislation fails to take into account how the internet actually works

Written by Andre Oboler, Senior Lecturer, Master of Cyber-Security Program (Law), La Trobe University

In response to the livestreamed terror attack in New Zealand last month, the Australian Parliament has just passed new laws.

These laws amend the Commonwealth Criminal Code, adding two substantive new criminal offences.

Both are aimed not at terrorists but at technology companies. And how that’s done is where some of the new measures fall down.

Read more: Livestreaming terror is abhorrent – but is more rushed legislation the answer?

The legislation was rushed through with neither consultation nor sufficient discussion.

The laws focus on abhorrent violent material. This captures footage like that of the terrorist incident in New Zealand, but also online content created by a person carrying out a murder, attempted murder, torture, rape or violent kidnapping.

The laws do not cover material captured by third parties who witness a crime, only content from an attacker, their accomplice, or someone who attempts to join the violence.

The aim is to prevent perpetrators of extreme violence from using the internet to glorify or publicise what they have done. This will reduce terrorists’ ability to spread panic and fear. It will reduce criminals’ ability to intimidate. This is about taking away the tools harmful actors use to damage society.

What the legislation aims to do

Section 474.33 of the Criminal Code makes it a criminal offence for any internet service provider, content service or hosting service to fail to notify the Australian Federal Police, within a reasonable time, once they become aware their service is being used to access abhorrent violent material depicting conduct that has occurred, or is occurring, in Australia. Failing to comply can result in a fine of 800 penalty units (currently $128,952).

Section 474.34 makes it a criminal offence for a content service or hosting service, whether inside or outside Australia, to fail to expeditiously take down material made available through their service and accessible in Australia.

The criminal element of fault is not that the service provider deliberately makes the material available, but rather that they are reckless with regard to identifying such content or providing access to it. Reckless, however, has been given a rather special meaning.
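To make that structure concrete, the two offences can be sketched as a simple decision model. This is an illustration only, not legal advice: the service types and flags below are simplifying assumptions, and the recklessness fault element is abstracted away.

    # Illustrative sketch only: a simplified model of the two new offences
    # as described above. The predicates are assumptions, not legal advice.
    from dataclasses import dataclass

    @dataclass
    class Service:
        kind: str                     # "isp", "content" or "hosting"
        aware_of_material: bool       # knows the material is accessible via its service
        notified_afp: bool            # has notified the Australian Federal Police
        removed_expeditiously: bool   # took the material down quickly

    def offences(service: Service) -> list[str]:
        """Return which of the two new offences a service may have committed."""
        committed = []
        # s 474.33: any of the three service types that becomes aware and fails
        # to notify the AFP within a reasonable time (fine: 800 penalty units).
        if service.aware_of_material and not service.notified_afp:
            committed.append("s 474.33: failure to notify the AFP")
        # s 474.34: content and hosting services, inside or outside Australia,
        # that fail to expeditiously remove material accessible in Australia.
        if service.kind in ("content", "hosting") and not service.removed_expeditiously:
            committed.append("s 474.34: failure to remove expeditiously")
        return committed

    print(offences(Service("hosting", aware_of_material=True,
                           notified_afp=False, removed_expeditiously=False)))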

What we’ve got right

There is a clear need for new laws.

Focusing on regulating technology services is the right approach. Back in 2010 when I first raised this idea it was considered radical; today even Mark Zuckerberg supports government regulation.

Read more: Zuckerberg's 'new rules' for the internet must move from words to actions

We’ve moved away from the idea of technology companies of all types being part of a safe harbour that keeps the internet unregulated. That’s to be welcomed.

Penalties for companies that behave recklessly – failing to build suitable mechanisms to find and remove abhorrent violent material – are also to be welcomed. Such systems should indeed be expanded to cover credible threats of violence and major interference in a country’s sovereignty, such as efforts to manipulate elections or cause mass panics through fake news.

Recklessness as it is ordinarily understood – that is, failing to take the steps a reasonable person in the same position would take – allows the standard to slowly rise as technology and systems for responding to such incidents improve.

Also to be welcomed is the new ability for the eSafety Commissioner to issue a notice to a company identifying an item of abhorrent violent material and to demand its removal. When the government is aware of such content, there must be a way to require rapid action. The law does this.

Where we’ve fallen down

One potential problem with the legislation is the requirement for internet service providers (ISPs) to notify the Australian Federal Police if they are aware their service can be used to access any particular abhorrent violent material.

As ISPs provide access for consumers to everything on the internet, this seeks to turn ISPs into a national surveillance network. It has the potential to move us from an already problematic metadata retention scheme to an expectation that ISPs apply deep packet inspection to monitor everything that is said.
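The gap between the two is easy to see in code. Metadata retention records facts about a communication; deep packet inspection reads the communication itself. The sketch below is purely illustrative, and the field names are assumptions rather than the actual data set the scheme prescribes.

    # Illustrative contrast between metadata retention and deep packet
    # inspection (DPI). Field names are assumptions for illustration only.

    def retain_metadata(packet: dict) -> dict:
        # Metadata retention keeps records *about* a communication:
        # who contacted whom, when, and how much data - not what was said.
        return {
            "src_ip": packet["src_ip"],
            "dst_ip": packet["dst_ip"],
            "timestamp": packet["timestamp"],
            "bytes": len(packet["payload"]),
        }

    def deep_packet_inspection(packet: dict, signatures: list[bytes]) -> bool:
        # DPI opens the payload itself and matches it against known content -
        # here, byte signatures of material being searched for. This inspects
        # *what* is being communicated, a far more intrusive step.
        return any(sig in packet["payload"] for sig in signatures)

In practice, the widespread use of encryption (HTTPS) means ISPs generally cannot read payloads at all, which makes any expectation of content-level monitoring by ISPs doubly unrealistic.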

Read more: Australians accept government surveillance, for now

Content services (including social media platforms such as Facebook, YouTube and Twitter, and regular websites) and hosting services (provided by companies such as Telstra, Microsoft and Amazon through to companies like Servers Australia and Synergy Wholesale) have a more serious problem.

Under the new laws, if content is online at the time a notice is issued by the eSafety Commissioner, the legal presumption will be that the company was behaving recklessly at that time. The notice is not a demand to respond, but rather a finding that the response is already too slow. The relevant section (s 474.35(5)) states (emphasis added) that if a notice has been correctly issued:

…then, in that prosecution, it must be presumed that the person was reckless as to whether the content service could be used to access the specified material at the time the notice was issued

While the presumption can be rebutted, this is still quite different from what the Attorney-General’s press release (dated 4 April 2019) claimed:

… the e-Safety Commissioner will have the power to issue notices that bring this type of material to the attention of social media companies. As soon as they receive a notice, they will be deemed to be aware of the material, meaning the clock starts ticking for the platform to remove the material or face extremely serious criminal penalties.

As the law is written, the notice is more of a notification that the clock has already run out of time. It’s like arguing that the occurrence of a terrorist act means “it must be presumed” the government was reckless with regards to prevention. That’s not a fair standard. The idea of the notice starting the clock would in fact be much fairer.
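The two readings can be contrasted directly. In the hypothetical sketch below (all timestamps invented for illustration), a platform removes material 40 minutes after a notice: compliant under the press release’s reading, yet already presumed reckless under the Act as written.

    # Two readings of an eSafety Commissioner notice, using hypothetical
    # timestamps and a hypothetical "expeditious" removal window.
    from datetime import datetime, timedelta

    notice_issued = datetime(2019, 4, 4, 9, 0)           # hypothetical
    removed_at = notice_issued + timedelta(minutes=40)   # hypothetical
    window = timedelta(hours=1)                          # hypothetical window

    # Reading 1 (the press release): the notice starts the clock, so removal
    # within the window after the notice is compliant.
    compliant_if_clock_starts_at_notice = removed_at <= notice_issued + window

    # Reading 2 (s 474.35(5) as written): if the material was accessible when
    # the notice was issued, recklessness "must be presumed" at that moment.
    # Later removal can help rebut the presumption but does not reset a clock.
    material_up_when_notice_issued = True
    presumed_reckless = material_up_when_notice_issued

    print(compliant_if_clock_starts_at_notice, presumed_reckless)  # True True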

Under this law, a content service provider can be found to have been reckless and to have failed to expeditiously remove content even if no notice was ever issued. In some cases that may be a good thing, but what was passed as law and what the government says it intended don’t appear to match.

Read more: Why we need to fix encryption laws the tech sector says threaten Australian jobs

Hosting services have the worst of it. They provide the space on servers that allows content to appear on the internet. It’s a little like the arrangement between a landlord and a tenant. With hosting plans starting from around $50 a year, there’s no margin to cover monitoring and complaints management.

The new laws suggest hosting services will be acting recklessly if they don’t monitor their clients so they can take action before the eSafety Commissioner issues a notice. They just aren’t in a position to do that.

A lot still needs to be done

As it stands, only the expeditious removal of content or suspension of a client’s account can avoid the new offence. The legislation does not define what expeditious removal means. There is nothing to suggest the clock would start only after the service provider becomes aware of the content, and the notice from the eSafety Commissioner doesn’t start a clock but says a response is already overdue.

This law is designed to apply pressure on companies so they improve their response times and take preemptive action.

What’s also missing is a target with safe harbour protections: a clear standard, and a rule that companies which meet that standard are immune from prosecution under this law. That would give companies both a goal and an incentive to reach it.

Read more: Technology and regulation must work in concert to combat hate speech online

Also missing is a way to measure response times. If we can’t measure it, we can’t push for it to be continually improved.

Rapid removal should be required after a notice from the eSafety Commissioner, perhaps removal within an hour. Fast removal, for example within 24 hours, should be required when reports come from the public.

The exact timelines that are possible should be the subject of consultation with both industry and civil society. They need to be achievable, not merely aspirational.
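As a sketch of what measurement could look like: log when each item was reported and when it was removed, then check the distribution against the agreed targets. The one-hour and 24-hour figures below are the suggestions made above; the data structures and sample incidents are invented for illustration.

    # Sketch of response-time measurement against the targets suggested above:
    # one hour after an eSafety Commissioner notice, 24 hours for public
    # reports. Everything else here is an illustrative assumption.
    from datetime import datetime, timedelta
    from statistics import median

    TARGETS = {
        "commissioner_notice": timedelta(hours=1),   # suggested above
        "public_report": timedelta(hours=24),        # suggested above
    }

    # (source, reported_at, removed_at) - invented sample incidents
    incidents = [
        ("commissioner_notice", datetime(2019, 4, 1, 9, 0), datetime(2019, 4, 1, 9, 45)),
        ("public_report", datetime(2019, 4, 2, 8, 0), datetime(2019, 4, 3, 10, 0)),
    ]

    for source, target in TARGETS.items():
        times = [removed - reported
                 for src, reported, removed in incidents if src == source]
        if times:
            within = sum(t <= target for t in times)
            print(f"{source}: median response {median(times)}, "
                  f"{within}/{len(times)} within target of {target}")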

Working together, government, industry and civil society can create systems to monitor and continually improve efforts to tackle online hate and extremism.

That includes the most serious content such as abhorrent violence and incitement to violent extremism.

Trust, consultation and goodwill are needed to keep people safe.

Author: Andre Oboler, Senior Lecturer, Master of Cyber-Security Program (Law), La Trobe University

Read more: http://theconversation.com/new-livestreaming-legislation-fails-to-take-into-account-how-the-internet-actually-works-114911
