There has been much discussion of the ethics of artificial intelligence (AI), especially regarding robot weapons development. There is also a related, but more general, discussion about AI as an existential threat to humanity.
If Skynet of The Terminator movies is going to exterminate us, then it seems pretty tame - if not pointless - to start discussing regulation and liability.
But, as legal philosopher John Danaher has pointed out, if these areas are promptly and thoughtfully addressed, that could help to reduce existential risk over the longer term.
In relation to AI, regulation and liability are two sides of the same safety/public welfare coin.
Regulation is about ensuring that AI systems are as safe as possible; liability is about establishing who we can blame - or, more accurately, get legal redress from - when something goes wrong.
Taking liability first, let's consider tort (civil wrong) liability.
Imagine the following near-future scenario. A driverless tractor is instructed to drill seed in Farmer A's field but actually does so in Farmer B's field. Let's assume that Farmer A gave proper instructions. Let's also assume that there was nothing extra that Farmer A should have done, such as placing radio beacons at field boundaries.
Now suppose Farmer B wants to sue for negligence (for ease and speed, we'll ignore nuisance and trespass). Is Farmer A liable? Probably not. Is the tractor manufacturer liable?
Possibly, but there would be complex arguments around the duty and standard of care, such as what the relevant industry standards are.
There would also be issues over whether the unwanted planting represented damage to property or pure economic loss. So far, we have implicitly assumed the tractor manufacturer developed the system software.
But what if a third party developed the AI system? What if there was code from more than one developer?
Uber’s real-world testing gone wrong
We all know how far Uber has come to date. In 2016, Uber tested its self-driving cars in San Francisco without seeking permission or approval from the state, which was neither ethically nor legally sound. Moreover, Uber's internal documents indicated that its self-driving cars ran around six red lights in the city during testing.
This is one of the clearest examples of AI going wrong: Uber's cars used top-of-the-line vehicle sensors and networked mapping software, along with a safety driver meant to take over if things got out of control. Even so, Uber claimed the red-light violations were the result of driver error. However it is described, this was an AI experiment that went badly.