Hope and fear surround emerging technologies, but all of us must contribute to stronger governance

Written by Nicholas Davis, Adjunct Professor, Swinburne Social Innovation Institute, Swinburne University of Technology

This article is part of our occasional series Zoom Out. Here we offer authors a slightly longer essay format to widen their focus, and explore key ideas in science and technology in the broader context of society and humanity.

It’s been a big year for companies pushing the boundaries of technology – and not in a good way. The Cambridge Analytica scandal led to a public outcry about privacy, the Commonwealth Bank’s loss of customer data raised concerns about cybersecurity, and a fatal self-driving car crash put the safety of automated systems in the spotlight.

These controversies are just the latest warning signs that we urgently need better governance of the technologies redefining the world. There is a widening gap in knowledge between those creating and using emerging technologies and those we charge with regulating them. Governance cannot be left just to the public sector – it is a job for all citizens.

Read more: Keeping up or holding back? The regulation challenge for government

Until now, we’ve been sleepwalking through the early stages of the Fourth Industrial Revolution. We dream of a future where artificial intelligence, synthetic biology, distributed ledgers and neurotechnologies magically make life better for all.

As we begin to wake up, it’s becoming clear the world has already changed around us in profound ways. We’re realising that creating and commercialising powerful new technologies is the easy part – the hard bit is making sure these new capabilities give us what we need and want, rather than what we imagine and fear.

Building the technology we want

What we want is to realise the benefits of revolutionary new digital technologies to the economy, our quality of life and a more sustainable world.

Analysis by consultancy AlphaBeta suggests that automation could add A$2.2 trillion to cumulative Australian GDP between 2017 and 2030. In healthcare, diagnostic approaches and treatments targeted to individuals could be as dramatic a change in our ability to prevent and treat illness as was the introduction of sanitation and antibiotics.

More generally, advances in machine learning are demonstrating that algorithms can simultaneously benefit companies, shareholders, citizens and the environment. We may be amazed at the prowess of computers beating the world’s best Go players, but perhaps more impressive is that Google DeepMind’s AI managed to reduce Google’s data centre energy use by 15%. That’s a recurring benefit amounting to hundreds of millions of dollars. DeepMind subsequently launched discussions with the UK’s National Grid to try to save 10% of the UK’s energy bill.
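To see why a one-off efficiency gain becomes a recurring benefit, consider a back-of-the-envelope sketch in Python. The energy bill below is a made-up assumption for illustration, not Google’s actual spend; only the 15% figure comes from the paragraph above:

```python
# Back-of-the-envelope sketch: a one-off efficiency gain keeps paying out
# every year. The bill figure is a hypothetical assumption, not Google's
# actual spend; only the 15% reduction comes from the article above.

annual_energy_bill_usd = 1_500_000_000  # hypothetical yearly data-centre bill
savings_rate = 0.15                     # the 15% reduction cited above

annual_saving = annual_energy_bill_usd * savings_rate
for year in range(1, 6):
    print(f"Year {year}: cumulative saving = ${annual_saving * year:,.0f}")
```

Unlike a one-off windfall, the saving recurs for as long as the more efficient system runs, which is why a single algorithmic improvement can be worth hundreds of millions of dollars.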

What we fear is that history will rhyme, and not in a good way.

The social and environmental damage resulting from previous industrial revolutions taught us that new technologies don’t inevitably lead to better outcomes for everyone. For a start, the benefits are often unevenly distributed – witness the one billion people around the world who still lack access to electricity. And when we do discover that harm is occurring, there’s often a significant lag before the law catches up.

What it means to be awake

Most fundamentally, being awake means recognising that the same exciting systems that promise openness and deliver convenience come with significant costs that are affecting citizens right now. And many of those costs are being borne by those least able to afford them – communities with less access to wealth or power, and those already marginalised.

These costs go well beyond risks to our privacy.

Read more: Big data algorithms can discriminate, and it's not clear what to do about it

When an algorithm fails to predict the next word you want to type, that’s generally not a big deal. But when an algorithm – intelligent or otherwise – uses a flawed model to decide whether you are eligible for government benefits, whether you should get bail or whether you should be allowed to board a flight, we’re talking about potential violations of human rights and procedural fairness.
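As a deliberately simplified illustration of how a flawed model becomes a rights problem, here is a toy scoring rule in Python. Every feature, weight, postcode and threshold in it is invented for the example; no real bail or benefits system is being described:

```python
# A hypothetical, deliberately simplified risk score of the kind a flawed
# decision system might use. The flaw: postcode acts as a proxy for
# disadvantage rather than for risk, so two people with identical conduct
# receive different outcomes. All values here are invented for illustration.

def toy_risk_score(prior_offences: int, postcode: str) -> float:
    # Postcode penalty smuggles in bias against particular suburbs.
    postcode_penalty = 2.0 if postcode in {"3000", "3047"} else 0.0
    return prior_offences * 1.5 + postcode_penalty

def deny_bail(score: float, threshold: float = 2.0) -> bool:
    return score >= threshold

# Identical histories, different suburbs, different outcomes:
print(deny_bail(toy_risk_score(1, "3999")))  # False - bail granted
print(deny_bail(toy_risk_score(1, "3047")))  # True  - bail denied
```

The person denied bail has done nothing different from the person granted it; the error lives in the model, not the applicant, which is exactly why procedural fairness demands that such models be open to scrutiny and challenge.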

And that’s without getting into the challenge of harassment within virtual reality, the human security risks posed by satellite imagery that refreshes every day, and the ways in which technologies that literally read our minds can be used to manipulate us.

The government alone can’t fix this

It’s tempting to say that this isn’t yet a big problem. Or that if it is a problem, it must be up to the government to find a solution.

Unfortunately, our traditional, government-led ways of governing technologies are far from fit for purpose. Many emerging technologies – novel applications of machine learning, cryptocurrencies, promising biotechnologies – are being developed, and often commercialised, at a pace that far exceeds legislative and regulatory cycles. As a result, public governance is continually out of date.

Meanwhile, the novelty and complexity of emerging technologies are widening the knowledge and skills gap between the public and private sectors.

Even communication is getting harder. As former US Secretary of State Madeleine K. Albright put it:

Citizens are speaking to their governments using 21st century technologies, governments are listening on 20th century technology and providing 19th century solutions.

Our governance solutions are out of step with today’s powerful technologies. This is not the fault of government – it’s a design flaw affecting every country around the world. But given the flaw exists, we should not be surprised that things are not going as well as we’d like.

How do we get out of this pickle?

Here are three suggestions.

1. Take an active role in shaping future directions

We need to shift our mindset from being passive observers to active participants.

When we talk about how powerful and transformational new technologies are, we risk forgetting that human beings are designing, commercialising, marketing, buying and using them.

Adopting a “wait and see” approach would be a mistake. Instead, we must recognise that Australian institutions and organisations have the power to shape this revolution in a direction we want.

Read more: Technology and regulation must work in concert to combat hate speech online

This approach means focusing on leading – rather than adapting to – a changing technological environment in partnership with the business community. One example is the Swinburne Factory of the Future, which gives Victorian businesses exposure to the latest technologies and processes in a non-competitive, supportive environment. It also offers ways of assessing the likely impact of technology on individual companies, as well as entire sectors.

2. Build a bridge between public and private sectors

We need to embrace any and all opportunities for collaboration across the public and private sectors on the issue of new governance models.

Technology leaders are starting to demand this. At the World Economic Forum’s Annual Meeting in January 2018, Uber’s Dara Khosrowshahi said:

My ask of regulators would be to be harder in their ask of accountability.

At the same meeting, Marc Benioff, CEO of Salesforce, called for more active public sector guidance, saying:

That is the point of regulators and government – to come in and point true north.

To have real impact, cross-sector collaboration should be structured to lead to new Australian partnerships and institutions that can help spread benefits, manage costs and ensure the technology revolution is centred on people.

In 2017, the World Economic Forum launched its Center for the Fourth Industrial Revolution in San Francisco. It works directly with multinationals, startups, civil society and a range of governments to pilot new governance models around AI, drones, autonomous vehicles, precision medicine, distributed ledgers and much more.

The Australian government and business community can and should benefit from this work.

Cross-sector collaboration means much more than simply getting stakeholders in a room. Recent work by the PETRAS Internet of Things Research Hub – a consortium of nine leading UK universities – found that most international discussions on cybersecurity have made no progress relevant to IoT in recent years. A primary reason for this is that the technical experts and the policymakers find it difficult to interact – they essentially speak different languages.

The same challenge has been facing the international community working on the governance of lethal autonomous weapons systems. Anja Kaspersen, the UN’s Deputy Secretary-General of the Conference on Disarmament, noted recently that, when it comes to discussing how the use of lethal robots might be controlled, her most valuable role is to be a translator across disciplines, countries and sectors.

Read more: For tech giants, a cautionary tale from 19th century railroads on the limits of competition

By taking this approach at the April 2018 meeting of the Group of Governmental Experts, Kaspersen and Ambassador Amandeep Singh Gill made substantial progress in aligning expert views and driving convergence on issues such as the primacy of international humanitarian law.

The desired outcome is not just new rules, but inclusive governance structures that are appropriately adapted to the fast-changing nature of new technologies. While reaching out across geographic and sector boundaries takes considerable time and energy, it is worth the effort, as it often leads to unexpected benefits for society.

For example, the Prime Minister’s Industry 4.0 Taskforce was inspired by Germany’s example to encourage collaboration between government and the labour movement on issues facing industry and workers. As a result, the cross-sector Industry 4.0 Testlabs and Future of Work and Education workstream is co-chaired by Swinburne’s Aleksandar Subic and Andrew Dettmar, National President of the Australian Manufacturing Workers Union.

3. Tackle the moral component of emerging technologies

Third, we need to appreciate that these issues cannot be solved by simply designing better algorithms, creating better incentives or by investing in education and training, as important as all those aspects are.

Technologies are not neutral. They are shaped by our assumptions about the world, by our biases and human frailties. And the more powerful a technology is, the greater our responsibility to make sure it is consciously designed and deployed in ways that uphold our values.

The Centrelink robo-debt controversy demonstrated what happens when algorithms prioritise the value of efficiency over the value of protecting people – and how this can backfire.
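At the heart of that controversy was a simple averaging assumption. The sketch below is a stripped-down illustration with made-up figures, not the actual Centrelink calculation: when annual income is smeared evenly across 26 fortnights, anyone who earned irregularly appears to have under-reported while on benefits.

```python
# A minimal sketch of the income-averaging flaw behind robo-debt, using
# simplified, made-up figures (the real calculation involved many more
# rules). Annual income is averaged evenly across 26 fortnights, so a
# person with irregular earnings looks like they under-reported income
# while receiving benefits.

annual_ato_income = 26_000                      # total income for the year
averaged_fortnightly = annual_ato_income / 26   # = 1,000 per fortnight

# Reality: all of it was earned in 6 months of work (13 fortnights);
# the person then reported zero income while unemployed and on benefits.
actual_reports = [2_000] * 13 + [0] * 13

false_flags = sum(
    1 for reported in actual_reports if reported < averaged_fortnightly
)
print(f"Fortnights wrongly flagged as under-reported: {false_flags}")  # 13
```

Averaging made the system cheap to run at scale, but it shifted the cost of the model’s error onto the people least equipped to contest it.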

Unfortunately, the ethical and moral aspects of technology are often (and incorrectly) viewed as falling into one of two categories: either soft, imprecise and inessential issues of interest only to lefty activists – a distraction in the boardroom – or technical, regulatory, compliance-related challenges, discussed in the boardroom only once a crisis has occurred.

Read more: After the robo-debt debacle, here's how Centrelink can win back Australians' trust

A far more useful framing of ethics in technology is as a set of practical, accessible and essential tools that can help organisations create sustainable value. A forthcoming white paper from the World Economic Forum on Values, Ethics and Innovation argues that leaders can and should make ethics a priority when inventing, investing in, developing, deploying and marketing new ideas and systems.

A critical task here is building ethical considerations into the very early stages of creating new technologies. Commercial AI teams are beginning to do this.

One example is the recent formation of Microsoft’s AI and Ethics in Engineering and Research (AETHER) Committee, announced in March this year. It brings together senior executives to develop internal policies around responsible innovation in AI, with the AI research team reporting through members of the committee.

The next step is leading together

Governing emerging technologies is as much a moral and political task as a technocratic challenge. All Australians need to be involved in discussing what we want from technology, and helping to design the institutions that can help us avoid costs we’re not willing to bear as a society.

Read more: Engineers, philosophers and sociologists release ethical design guidelines for future technology

In practice, this means more frequent and more diverse conversations about the impact of today’s and tomorrow’s technology. It means more innovative forms of public debate. And it means that the most influential institutions in this space – particularly Australian governments, technology firms and national champions – need to listen and experiment with the goal of social, as well as economic and technological, progress in mind.

We’re starting to wake up. Now the real work begins.

Author: Nicholas Davis, Adjunct Professor, Swinburne Social Innovation Institute, Swinburne University of Technology

Read more http://theconversation.com/hope-and-fear-surround-emerging-technologies-but-all-of-us-must-contribute-to-stronger-governance-96122
