Preliminary report on Uber's driverless car fatality shows the need for tougher regulatory controls

Written by Robert Merkel, Lecturer in Software Engineering, Monash University

The US National Transportation Safety Board has released a damning preliminary report on the fatal March 2018 crash between a pedestrian and a driverless vehicle operated by Uber.

The report does not attempt to determine “probable cause”. Nevertheless, it lists a number of questionable design decisions that appear to have greatly increased the risks of a crash during the trial period.

Read more: Who’s to blame when driverless cars have an accident?

Elaine Herzberg was hit and killed by the driverless vehicle – a Volvo XC90 fitted with Uber’s experimental driverless vehicle system – while attempting to cross a sparsely trafficked four-lane urban road in Tempe, Arizona at around 10pm on Sunday March 18, 2018. She was walking directly across the road, pushing a bicycle in front of her.

Video of the accident was released soon after the crash by the local police. (Note: disturbing footage)

The video showed Herzberg walking steadily across the road, without any significant deviation. Despite the vehicle’s headlights operating as normal, there is no indication from the video that she ever heard or saw the approaching car. The vehicle does not appear to brake or change direction at all. According to the preliminary report, the vehicle was travelling at 43 mph (69 km/h), just below the speed limit of 45 mph (72 km/h). A second camera angle shows the backup driver of the Uber vehicle looking down, away from the road, until very shortly before the impact.

Software teething troubles

Driverless cars, including Uber’s, rely on a range of sensing devices, including cameras and radar. They also use a system called lidar, which is similar to radar but uses light from lasers instead of radio waves. The Uber car’s lidar was supplied by Velodyne Systems, and is also used in a number of other driverless car projects.
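
As a rough illustration of the principle – not Velodyne’s implementation, and with numbers chosen purely for illustration – a lidar unit measures distance from the round-trip time of a laser pulse:

```python
# Illustrative sketch of lidar time-of-flight ranging. This is not
# Velodyne's code; the pulse timing below is chosen only for illustration.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to a reflecting object, given a laser pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after about 800 nanoseconds indicates an object roughly
# 120m away – about the range at which, as discussed below, the car's
# sensors first detected Herzberg.
print(f"{distance_from_round_trip(800e-9):.1f} m")  # ~119.9 m
```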

Velodyne Systems stated after the crash that they believed their sensor should have detected Herzberg’s presence in time to avoid the crash.

The NTSB preliminary report states that the car’s sensors detected Herzberg approximately 6 seconds before the impact, at which point it would still have been nearly 120m away from her. However, the car’s autonomous driving software seems to have struggled to interpret what the sensors were reporting. According to the report:

As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path.

The report does not discuss the details of how Uber’s system attempted and failed to accurately classify Herzberg and her bicycle, or to predict her behaviour. It is unsurprising that an experimental system would occasionally fail. That’s why authorities have insisted on human backup drivers who can take control in an emergency. In Uber’s test vehicle, unfortunately, several features made an emergency takeover less straightforward than it should have been.
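
To see why the shifting classification matters, here is a deliberately simplified, hypothetical sketch – this is not Uber’s software, and the classes, rules and timings are invented. If the predicted path of an obstacle depends on its assigned class, and that prediction is replaced each time the class changes, the system can detect an object continuously without ever settling on a trajectory that crosses the car’s own path:

```python
# Hypothetical illustration of the failure mode described in the NTSB report.
# This is NOT Uber's software; the classes, rules and timings are invented.

def expected_path(object_class: str) -> str:
    """Toy motion model: the predicted future path depends on the class."""
    if object_class == "vehicle":
        return "staying in its own lane"    # vehicles assumed to keep to a lane
    if object_class == "bicycle":
        return "travelling with traffic"    # bicycles assumed to ride with traffic
    return "no prediction"                  # unknown objects get no motion model

# Each reclassification replaces the previous expectation, so a pedestrian
# walking steadily across the road is never predicted to cross the car's path.
for seconds_to_impact, cls in [(5.5, "unknown"), (4.0, "vehicle"), (2.5, "bicycle")]:
    print(f"{seconds_to_impact}s out: classified as {cls!r}, expected {expected_path(cls)}")
```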

Questionable design decisions

The vehicle’s software had concluded 1.3 seconds (about 25m) before the crash that “emergency braking” – slamming on the brakes – was required to avoid an accident. Even at that point, if the software had applied the brakes with maximum force, an accident could probably have been avoided. Manufacturer information about the vehicle’s stopping capabilities and high-school physics suggest that an emergency stop at the vehicle’s initial speed on dry roads would take around 20m.
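
That 20m figure is easy to sanity-check with the standard braking-distance formula d = v²/(2μg). The friction coefficient below is an assumed typical dry-asphalt value, not a number from the report:

```python
# Back-of-the-envelope braking-distance check using d = v^2 / (2 * mu * g).
# The friction coefficient is an assumed dry-asphalt value, not an NTSB figure.

G = 9.81      # gravitational acceleration, m/s^2
MU_DRY = 0.9  # assumed tyre-road friction coefficient on dry asphalt

def braking_distance(speed_m_per_s: float, mu: float = MU_DRY) -> float:
    """Distance needed to stop from a given speed under maximum braking."""
    return speed_m_per_s ** 2 / (2 * mu * G)

speed = 43 * 0.44704  # 43 mph in metres per second (~19.2 m/s)
print(f"{braking_distance(speed):.1f} m")  # ~20.9 m
```

On these assumptions the car could have stopped in about 21m – comfortably inside the roughly 25m remaining when its software concluded that emergency braking was needed.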

However, according to the report, Uber’s software was configured not to perform panic stops:

According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action.

Furthermore, the driver is apparently not even informed when the self-driving software thinks that an emergency stop is required:

The system is not designed to alert the operator.

That said, a warning delivered only at the moment emergency braking is required is almost certainly too late to allow a human to avoid a crash. It may, however, have reduced the severity of this one.
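
To put a rough number on that, the sketch below assumes a one-second reaction time and the same dry-road deceleration as above – both assumptions, not NTSB figures:

```python
# Hypothetical what-if: impact speed had the operator been warned 1.3 seconds
# before the crash. Reaction time and deceleration are assumed, not NTSB figures.

REACTION_TIME = 1.0  # assumed typical driver reaction time, seconds
DECEL = 0.9 * 9.81   # assumed maximum deceleration on dry asphalt, m/s^2

def impact_speed(initial_speed: float, warning_time: float) -> float:
    """Speed at impact if braking begins REACTION_TIME after the warning."""
    braking_time = max(0.0, warning_time - REACTION_TIME)
    return max(0.0, initial_speed - DECEL * braking_time)

v0 = 43 * 0.44704  # 43 mph in metres per second (~19.2 m/s)
print(f"{impact_speed(v0, 1.3) * 3.6:.0f} km/h")  # ~60 km/h at impact, down from 69 km/h
```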

The video of the driver shows her looking down, away from the road, before the crash. It appears she was monitoring the self-driving system, as required by Uber:

According to Uber, the developmental self-driving system relies on an attentive operator to intervene if the system fails to perform appropriately during testing. In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review.

The inward-facing video shows the vehicle operator glancing down toward the center of the vehicle several times before the crash. In a postcrash interview with NTSB investigators, the vehicle operator stated that she had been monitoring the self-driving system interface.

What were they thinking?

Of the issues with Uber’s test self-driving vehicle, only the initial classification difficulties relate to the cutting edge of artificial intelligence. The other issues – the decision not to enable emergency braking, the lack of warnings to the backup driver, and especially the requirement that the backup driver monitor a screen on the centre console – stem from relatively conventional engineering decisions.

While all three are at least questionable, the one I find most inexplicable is the requirement that the safety driver monitor diagnostic output from the system on a screen in the car. The risks of screens distracting drivers have been widely publicised in relation to mobile phones – and yet Uber’s test vehicle actively required backup drivers to take their eyes off the road to meet their other job responsibilities.

Read more: Why using a mobile phone while driving is so dangerous ... even when you're hands-free

If continuing to develop the self-driving software really required somebody in the car to continuously monitor its diagnostic output, that job could have been done by another passenger. The backup driver would then have been free to concentrate on a deceptively difficult task – passively monitoring an automated system, then overriding it in an emergency to prevent an accident.

Uber had a heads-up that this would be difficult, given that its partner in the driverless car project, Volvo, had previously stated that having a human driver as a backup is an unsafe solution for wide deployment of autonomous vehicles.

While the NTSB’s investigation has some way to go, the facts as stated in the preliminary report raise important questions about the priorities of Uber’s engineering team.

Questions for regulators

This tragic accident should not be used to condemn all autonomous vehicle technology. But as a society we can’t assume that companies racing their competitors to a lucrative new market will catch every contingency.

Read more: A code of ethics in IT: just lip service or something with bite?

In theory, the software engineers actually responsible for writing the software that powers driverless cars have a code of ethics that imposes a duty to:

Approve software only if they have a well-founded belief that it is safe, meets specifications, passes appropriate tests, and does not diminish quality of life, diminish privacy or harm the environment.

In practice, acting on that ethical duty contrary to the directions or interests of an engineer’s employer is exceedingly rare – as I’ve previously argued, IT industry codes of ethics are largely ignored on this point.

Companies may well be able to make adequately safe, fully autonomous vehicles. But we can’t simply take claims that they have done so on trust. As with every other safety-critical system engineers build, governments will have to regulate driverless cars carefully.

Read more http://theconversation.com/preliminary-report-on-ubers-driverless-car-fatality-shows-the-need-for-tougher-regulatory-controls-97253
