
LinuxQuestions.org (/questions/)
-   General (https://www.linuxquestions.org/questions/general-10/)
-   -   opinion -- keep a.i. driven vehicles away from people? (https://www.linuxquestions.org/questions/general-10/opinion-keep-a-i-driven-vehicles-away-from-people-4175697064/)

rico001 06-29-2021 04:57 PM

opinion -- keep a.i. driven vehicles away from people?
 
Should we keep a.i. driven vehicles away from people?

frankbell 06-29-2021 07:48 PM

Given their track record, yes.

And away from the highways.

sundialsvcs 06-29-2021 07:55 PM

Absolutely. I don't want to be killed by a computer.

enorbet 06-29-2021 11:26 PM

LOL you guys. Have you considered sickness, hunger, cell phones, drugs/alcohol, and all the other things that distract human drivers but have zero effect on AI? The proof is in the pudding. There is a lot of data on autopilot accidents (including aircraft, for decades), but because of the fear level around cars, some sites have popped up as global repositories covering ALL autopilot-capable cars, and they even separate out accidents that occurred while autopilot was engaged vs. when it was not.

Here's a sample from tesladeaths.com

Quote:

Originally Posted by Non-Affiliated Survey - 2021
Tesla Deaths Total as of : 192 | Tesla Autopilot Deaths Count: 6

This doesn't mean I'm in favor of AI-piloted vehicles. The real danger is that a time will very likely arrive when few humans even know how to drive. Hmmm, although that might also result in more walking and bicycling.... ;)

Trihexagonal 06-29-2021 11:40 PM

Absolutely not.

There are already autonomous semi-trucks being tested on public highways with a rider in the passenger seat.

In case a Deus Ex scenario starts while he's on board.

fatmac 06-30-2021 03:37 AM

Quote:

The real danger is that a time will very likely arrive when few humans even know how to drive.
From my experience, there are quite a few on the roads now!


Quote:

Hmmm although that might also result in more walking and bicycling....
That would be better for all of us (we're getting very lazy these days), a massive boost to our health!

hazel 06-30-2021 05:18 AM

The problem is that people will get used to dreaming in their AI-driven cars, not watching the road, and so on. And then suddenly the AI will say, "Emergency situation detected. You must take over NOW!" What are the odds that the driver will panic?

michaelk 06-30-2021 08:15 AM

My father owns a Tesla and I drive it a lot, but it is not an autonomous self-driving vehicle. Although some accidents are probably due to the Autopilot, my guess is that most are in some way driver error or a failure to understand its limitations. You can't count the idiots that bypass the hands-on-steering-wheel detection and sit in the back seat... True self-driving vehicles like Waymo's or Uber's are something completely different.

There are many cars with adaptive cruise control, lane centering, and stop-and-go capability, and some of them do not have any sort of driver awake/alert detection feature. Some have a camera to detect whether the driver is awake, but I suppose you could bypass that somehow as well.

jmgibson1981 06-30-2021 12:03 PM

There aren't any real hard numbers yet, but I'd guess the "problems" caused by AI vehicles are, percentage-wise, a fraction of the problems caused by human drivers. People don't seem to bat an eye at other numbers:

https://crashstats.nhtsa.dot.gov/Api...es%20in%202018.

Alcohol was involved in 29% of vehicle deaths in 2018: 10,511 deaths, one every 50 minutes in the US.
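As a rough sanity check (my own back-of-the-envelope calculation, not from the linked report), the "one every 50 minutes" figure follows directly from spreading 10,511 deaths evenly over a year:

Code:

# Sanity check: 10,511 alcohol-involved road deaths in one year,
# assumed to be spread evenly across 2018.
deaths_per_year = 10_511
minutes_per_year = 365 * 24 * 60   # 525,600 minutes

print(round(minutes_per_year / deaths_per_year))   # -> 50 minutes per death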

I'd guess AI would be far lower than that. People are just afraid because it's new. Years ago trains ran at the "breakneck speed" of 25 mph and people freaked. Now bullet trains easily push 250+ or so. Same with cars. It's always the same story: the sky is falling. After a number of years and the kinks worked out, they are exponentially safer than they've ever been. The same will happen with AI vehicles.

EdGr 06-30-2021 04:40 PM

The real problem is that human drivers are unpredictable, often doing dangerous or illegal maneuvers. AI drivers are not going to be successful until all drivers follow the rules. This means getting human drivers off the roads.
Ed

sundialsvcs 06-30-2021 05:11 PM

If you're going to be running "computer-controlled, automatic" vehicles, then you're going to have to build a parallel railroad to run them on.

Trihexagonal 06-30-2021 06:34 PM

I know a guy who works for TuSimple, and this is cutting-edge AI technology:


https://www.youtube.com/watch?v=f2aocKWrPG8

frankbell 06-30-2021 07:46 PM

Quote:

The real problem is that human drivers are unpredictable, often doing dangerous or illegal maneuvers. AI drivers are not going to be successful until all drivers follow the rules.
And that's not going to happen.

sundialsvcs 06-30-2021 08:12 PM

"I'm the Human!" Suck It Up! :D

I refuse to have it inscribed upon my tombstone: "He was killed by Abort? Retry? Ignore?"

frankbell 06-30-2021 10:03 PM

If you look at the accidents involving driverless vehicles (including vehicles in which the purported driver was not paying attention), you will see that many of them include situations which the AI did not anticipate. Because it did not anticipate them, it did not react to them.

I must admit that I am skeptical that AI will reach a point at which it will be able to anticipate all the screwy things that drivers do.

I will draw an analogy. I worked for the railroad for many years, and railroads have lots of safety rules (the first of which is that "safety is of the first importance in the discharge of duty"). One of the things I learned is that every safety rule was a reflection of an injury or death. In other words, the rules were a consequence, not an anticipation.

In building AI for self-driving vehicles, each algorithm is a consequence, not an anticipation. If the programmers don't expect a bicyclist to be on the wrong side of the road, they won't build in a rule for it. If they don't expect a pedestrian to jaywalk, they won't build in a rule for it. If they don't expect a delivery truck to double-park because no parking space is available, they won't build in a rule for it. If they don't expect that a driver will be going the wrong way on a limited access highway, they won't build a rule for it. If they don't think a skateboarder might be on the street, they won't build a rule for it.

Heck, we can't even take the prejudice out of facial recognition.

It's not the computers I distrust. It's the programmers, because they are human. And flawed. Like you and me.

I seriously doubt that programmers will be able to anticipate everything that can go wrong on the public highways. I seriously believe that "self-driving" vehicles on public thoroughfares are a pipe-dream of the computer-enamored, and a hazard to real live human beings driving their automotive conveyances.

JSB 07-01-2021 02:41 AM

Is AI-driven the same as self-driven?
I say NO! Too dangerous. My child wants to play outside, and maybe the Google car cannot see them?

frankbell 07-05-2021 08:07 PM

This report from the Hearst-owned news site, SFGate, appears germane to this thread.

https://www.sfgate.com/business/arti...h-16294555.php

hazel 07-06-2021 05:40 AM

Looks as if they can't lose! If anything goes wrong, it's the driver's fault.

frankbell 07-16-2021 07:52 PM

Well said. Blame the victim.

Trihexagonal 07-17-2021 06:34 AM

Quote:

Originally Posted by frankbell (Post 6264200)
This report from the Hearst-owned news site, SFGate, appears germane to this thread.

https://www.sfgate.com/business/arti...h-16294555.php

Elon Musk has a Love/Hate relationship with AI:

https://www.cnbc.com/2020/05/13/elon...community.html

enorbet 07-17-2021 09:38 AM

Quote:

Originally Posted by Trihexagonal (Post 6267216)
Elon Musk has a Love/Hate relationship with AI:

I think that may be both an oversimplification and a bit misdirected. To characterize his views as simply emotional misses the mark, in my view, based on interviews as well as print. He recognizes that the proverbial Pandora's Box has been opened and there is no putting it all back under lock and key. AI WILL be developed and increasingly used because it works, and more often than not works better than whatever came before. So, given its inevitability, the only rational choice is to carefully control its development. I strongly suspect any efforts to simply stop it are doomed to failure.

As for the application of AI to cars, we are faced with a tradeoff. Humans can respond to unforeseen events better than machines, for exactly the reasons noted: machines ostensibly require human programmers... or did. Now we have machine learning, and given the widespread adoption of "telemetry", I have little doubt that car manufacturers, among others, will increasingly allow/cause AI instances to "compare notes" to enhance the database of what can possibly be expected, ever, including events with infinitesimally small odds. Because machines can access that vast accumulation of possibilities, far more and far faster than any human can conceivably aspire to, this will happen.

Additionally, dead is dead. We can't possibly hope to achieve a zero death rate with the deployment of any new technology. People die from falling off ladders, are electrocuted by tools and appliances, die in fires, etc. All we can do is decrease the odds that anyone will die, so the numbers diminish over time.

Some here are saying they don't wish to die because of an AI failure, but is that any worse than dying from a blown tire or a gas tank explosion, let alone because the human driver was on his phone, eating and drinking, spilled something, got distracted, or was under the influence? If the overall numbers go down, it will happen.

Consider that the Victorian adoption of electrical power in the home was commonly a safety nightmare, and many thousands died as a direct result. Look it up. You will be astonished. It took time to develop standards and reduce, though not eliminate, deaths directly or indirectly due to home power. Do any of you really want to go back to zero electrical power in our homes?

I get it. In the Victorian era Science and Technology seemed more commonly accessible. People imagined, real or not, that they had some level of control over the tools and events in their lives. It is a direct analog of why people generally prefer to drive rather than take a plane despite the hard fact that air travel is vastly safer than road travel. We don't like feeling at the mercy of what we don't understand and it gets harder all the time to understand when we are manipulating molecules and atoms let alone zeroes and ones in electronic pulses that take place many millions of times per second. IMHO, the answer is most definitely not in becoming blindly skeptical and negative about everything.

It is perhaps of little solace to realize we really aren't all that far down from the trees. We still build our homes from sticks and stones. Progress is indeed inevitable, but it is also inevitably slow. I'm quite confident that true AI-controlled traffic will become a reality and that by the time it happens it will be utterly obvious that there are fewer highway deaths.

So, yes, it is probably a good thing that many are overly cautious and driven by emotional reactions. However, expecting to bring progress to a full stop is folly as well as inadvisable. Things in general, especially in the realm of safety, are almost unimaginably better than they ever were before at any time in history. It's just too easy to take it all for granted and imagine the simpler past was somehow better overall.

