Autonomous systems: why they might not work as planned
I've just seen an item in the BBC News tech magazine programme (it's called Click) which worried me. They were talking about autonomous systems and why the current model, in which these systems carry out routine operations but hand over control to humans in an emergency, will not work in practice.
The problem is simple: humans can't cope with emergencies unless they have plenty of experience of working the machinery in non-emergency situations. Sullenberger was able to land his plane on the Hudson because he was a very experienced pilot. But automation will take away the opportunity for people to acquire this kind of experience.
Imagine it! You're cruising along in your car, a cup of coffee in one hand, a bagel in the other, reading the newspaper, while the car drives itself as it has done for months. Suddenly a voice from the console says "Emergency! Automated system shutting down. Over to manual control." You look up in horror and there's a bus coming straight at you.
People already get crashed into when they're driving all the time and paying attention.
I think it's interesting that the predicted failure case in the given scenario is the computer handing over control to the driver during a catastrophic system failure. A few months back there was some robo-car news about some poor guy whose car auto-drove through the side of a truck, not because the computer gave up control, but because the computer didn't sense anything in the first place.
How often does a computer suffer a catastrophic failure? They're pretty reliable. How often does software crash* because something unexpected happened? All the time.
*Edit: Not always a crash. Sometimes it's using up all the CPU or RAM, or other odd behavior, with the program ending up in some "impossible" state.
Last edited by Myk267; 12-10-2016 at 11:24 AM.
Reason: I should have gone to sleep before posting. :)
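That "impossible state" point can be sketched with a toy state machine, not anything from a real autonomous-driving stack. All the names here (`Mode`, `next_mode`, the event strings) are hypothetical, invented for illustration: the designers handle the transitions they expected, and an unanticipated event doesn't crash anything; it just leaves the program stuck in a state nobody planned for.

```python
# Toy sketch of how software reaches an "impossible" state without
# ever crashing: every expected case is handled, and anything else
# silently falls through. All names here are invented for illustration.

from enum import Enum


class Mode(Enum):
    AUTO = "auto"          # computer is driving
    MANUAL = "manual"      # human is driving
    HANDOVER = "handover"  # computer asked the human to take over


def next_mode(mode: Mode, event: str) -> Mode:
    """Transition table for a hypothetical driving controller."""
    if mode is Mode.AUTO and event == "fault":
        return Mode.HANDOVER           # the case the designers planned for
    if mode is Mode.HANDOVER and event == "driver_ready":
        return Mode.MANUAL
    # Anything else falls through unchanged -- including events the
    # designers never imagined, like a second fault *during* handover.
    return mode


# The expected path works as designed:
m = next_mode(Mode.AUTO, "fault")   # computer detects a fault
m = next_mode(m, "driver_ready")    # human takes over -> MANUAL

# The unexpected path doesn't crash; it just does nothing:
stuck = next_mode(Mode.HANDOVER, "fault")
print(stuck)  # still HANDOVER: no crash, no recovery, nobody driving
```

No exception, no core dump, just a controller quietly waiting forever in `HANDOVER`. That is the kind of "odd behavior" that reliability statistics about hardware don't capture.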
Distribution: Debian Sid AMD64, Raspbian Wheezy, various VMs
Posts: 7,680
Rep:
To me the issue with automated systems is just an issue of liability. With a car I'm driving (for example), if it crashes and kills a few people, I am likely liable; but if an automated car does the same, who is liable?
Until that fundamental question is resolved, it is likely safer, for most people, to take control oneself since at least the cause and effect will be under one's own control.
When the first driverless car kills somebody that will be a very important legal event.
I think it's interesting that the predicted failure case in the given scenario is the computer handing over control to the driver during a catastrophic system failure.
How often does a computer suffer a catastrophic failure? They're pretty reliable. How often does software crash because something unexpected happened? All the time.
He wasn't talking about the system failing. He was talking about the system doing what it was supposed to do. People always promise us that autonomous systems will let us take back control in potentially dangerous situations. The point is that we won't be able to cope with those situations if we're too accustomed to machines doing everything for us.
He gave the example of that French plane that crashed into the Atlantic in 2009 with the loss of all aboard. When it was finally located and the black boxes retrieved, it was found that the pilot had crashed the plane because he got confused while hand-flying at high altitude. He wasn't used to having to do that, because the autopilot usually did it.
The Airbus crash is an extreme example because it was caused by both human error and mechanical problems. It was much more complicated than the pilot simply flying at high altitude without the autopilot. Pilots do practice simulated emergency conditions to help them quickly recognize the real thing and develop an automatic response.
In the US, after we get a driver's license we don't have to take recurrent training or do much of anything else, except maybe retake the written test (for what it is worth), to make sure we drive safely. I agree that it would take many seconds for the human mind to figure out what to do, and by that time it is too late to prevent an accident. On the other hand, if all vehicles and traffic signals communicated with each other, then an accident is less likely to happen. Most accidents are caused by human error rather than mechanical failure.
Well, we have already experienced human death(!) caused by "self-driving computers."
As I recall the event, a white truck turned out in front of the car ... whose human driver was not paying attention. The computer's scanner did not recognize the presence of the truck, so the car ran underneath it ... killing the human (non-)driver.
I have no objection to "autonomous systems," as long as they do not share a roadway with me.
Because, after all ... "that non-driven(!) vehicle just smashed into me!"
How lucky we are that this collision, although fatal to the driver of the car, was not also fatal to the driver of the other vehicle ... which happened to be a great big (white) truck. What if it had instead been a white minivan filled with a family and their four kids, all of them "wiped out by a driverless vehicle that slammed into them," when any human driver would have just hit the horn(!), stepped on the brakes, and changed lanes?
I think the fact that the car killed the "driver" and nobody else probably allowed Tesla to deny all responsibility. It's when a third party is killed and the driver is facing huge costs or jail time that things will heat up; or, if they don't, it will be obvious that the government is being bribed by somebody.
To my mind the solution could be simple: make the human designated as "driver" responsible. It's up to them whether they self-pilot, go full-auto, or choose some other option.
Now, I think I can anticipate the objection: "What happens if the producer of the vehicle is negligent or criminal?" Well, we already know they sometimes are, so..?
Dunno ... I daresay that "our first human example" had just a couple of seconds to realize ... "oh, sh*t!!" ... that the high technology of which he apparently was so fond had just failed him, fatally.
How, then, could we seriously have held him responsible (had he survived, of course ...) for having merely embraced a technological innovation(?) ... one that he had, apparently, not only embraced but even championed?
After all: we proffered this high technology to him in the first place. We encouraged him to use it. We implicitly told him that it was 'safe,' and he believed it (embracing it publicly!) ... up to, of course, that fatal "oh, sh*t!!" moment.
"Yeah, he was a high-tech fool." But ... who, exactly, induced him(!!!) to become so foolish? His family should right now be engaging lawyers, and those lawyers should be filing multi-billion-dollar lawsuits ... (IMHO).
- - -
"Y'know, yeah," on the one hand, "all this new-fangled technology is 'just wunnerful.'" Sure, it definitely is 'exciting' that "we now live in interesting times." But are we showing that we are actually prepared, as a human society, to live (and to survive(!)) in such times?
Frankly, I wonder . . . . .
Last edited by sundialsvcs; 12-12-2016 at 02:03 PM.
Nobody induced him, in the same way that nobody induces one to get drunk and drive at 200 MPH.
If somebody wants to commit suicide then so be it but if they're happy to contemplate manslaughter that's a different story.
To my mind, liability has to lie somewhere, and there isn't a chance in hell that the corporations in charge of our countries will let it lie anywhere near them. So, if you're happy to inflict on the world, for your own gratification, a new technology created by an entity set up purely to make money for its shareholders, then I deem you responsible if that technology kills somebody.
Don't get me wrong, I'm sure in a few hundred years*, when every vehicle is automated and enough people have lost their lives and families we'll be in a better situation but for now it just looks grim.
*Make up your own timeline; I just mean "Definitely not in my lifetime and, I think, longer than it ought to take."
As with electric cars, driverless cars are controversial. There is a fear of losing control and independence. For many, the car is about much more than just four-wheeled transport.
I sit on the fence on this issue, but the status quo is far from perfect. Deaths on the roads in any given country in the world are already unacceptably high. It's an accepted problem with no solution because the gains of having and driving your own car are "worth it" and it's always others who die (and of course the global multi-billion $ automobile and oil industries profit, so it's all good).
Sharing your roads with driverless cars may seem objectionable to some, but you also share them with drunks, overworked and tired people, partially sighted octogenarians, plus the multitudes of idiots who are 'better' drivers than everyone else and can of course drive while on the phone (and faster).
On any given motorway, you are trusting all of these personality types to do the right thing - you just don't think about the risks when you're hurtling along at 70 MPH, because the bad things happen to others.
OK, as a slightly contentious point -- what are humans doing walking on roads?
I would need statistics, but I can make a pretty decent guess that a significant number of people hit by cars were hit because they were being idiots (and I lump myself in with the idiots here).
Making roads more like railways, though, is almost impossible to do usefully, at least as far as I can see.