First driverless car pedestrian death

It's not one size fits all. Just because an AI scheme is trustworthy in one application does not mean it is even viable for another. Similarly, just because one scheme validated to a given standard has issues does not mean all schemes have similar issues.
It was an analogy to illustrate the difficulties in implementing artificial intelligence. If you find it not to be a useful analogy, so be it. Reasonable people can disagree.

And since you seem to be focused on validating to standards, I don't think developing standards and validation protocols that will ensure adequate safety is an easy job either.
 
Where I drive, most people don't leave gaps that big when a line of cars starts up.
I think it very much depends on location. My experience was similar to yours when I lived in the Detroit area. Here in VT it's quite different.
 
Will autonomous cars adhere rigidly to the speed limit? I hope so.

Not in the case of the recent autonomous Uber death. The car was going 38 mph in a 35 mph zone.

Curiously, that was not mentioned by the Tempe police chief when she was quoted saying that, after viewing the videos, she thought the Uber was not at fault.
 
Not in the case of the recent autonomous Uber death. The car was going 38 mph in a 35 mph zone.

Curiously, that was not mentioned by the Tempe police chief when she was quoted saying that, after viewing the videos, she thought the Uber was not at fault.
I've seen it mentioned by some news outlets that Tempe Police mentioned the speed. I've also read that the pedestrian stepped suddenly in front of the car, which may or may not be mitigating.

I haven't seen it noted in this thread that the car had a safety driver, although several news outlets mentioned it.
 
Let's mix it up - autonomous slower traffic keeping right, at or below the limit, out of our way? Is the sensor tech good enough to perceive a high-speed closure overtake from the rear quarters, with other vehicles intervening? I can catch a glimpse in my mirror, and/or "through" the windows of the trailing vehicles, and I've learned to read the "body language" of other vehicles. Is the Uber gonna change lanes on a multi-lane interstate, tracking 6 or 8 other moving objects, and anticipate the cop pulling back onto the road after a traffic stop 300 yards ahead?

I can see it working with AI, with good sensor tech; the changeover years will be fun for those of us driving dumb cars. I'll mount a dash cam. . .
 
I wonder how long it will be before gangs of thieves box in self-driving big rigs and hijack them.

You could use fleets of autonomous cars programmed to be unsafe to do it, too. Hacked cars. Wouldn’t even need a gang. :)
 
When both cars and drive through windows become autonomous, I'll just sit in front of my TV all day waiting for my meals to arrive...I just realized it's not all that different from what I do now.
 
Will autonomous cars adhere rigidly to the speed limit? I hope so. So then you'll have them platooning along in the right or middle lane at 65 mph, while the rest of us "free will" cars can blast along at our highly illegal (and de facto) freeway cruising speeds of 75-80 mph. When I'm 90, I'll happily be part of the platoon; I'll be stoked to be above ground and going anywhere so the rate of travel is of much less importance. But for now, I paid for my 335 bhp and want to use all of it...every once in a while, y'know. :)

Granted, SoCal freeways don't always allow for such a rapid pace, but when they do, we have a word for cars doing 65 mph...pylons!

Great thread here, BTW. A very interesting discussion.

 
When both cars and drive through windows become autonomous, I'll just sit in front of my TV all day waiting for my meals to arrive...I just realized it's not all that different from what I do now.

Or the GrubHub autonomous car will deliver it...or the drone...or Musk's underground hyperdrive fast-food tube...or whatever.
 
When both cars and drive through windows become autonomous, I'll just sit in front of my TV all day waiting for my meals to arrive...I just realized it's not all that different from what I do now.

I have George Jetson’s job. I push buttons and they give me money.

I have to push them in the right order though, unlike George. So we’re not quite there yet. :)
 
How does a human driver decide? I doubt that a self-driving car will do any worse than just slamming on the brakes and hitting whatever is in the path of the out-of-control car.
A human driver decides based on the limited amount of information he can process, along with a lifetime's worth of intuition, judgment, and moral values. Humans also make assumptions and logical extensions based on experience better than computers. The human driver is also responsible for his decision, right or wrong. A computer will not have to "live with" its decision to run over a kid rather than crashing into a tree.

Computers cannot make value judgments because they do not have values. So no matter how fuzzy their logic is, their decisions will ultimately be determined by how they are programmed.

Mercedes has had intelligent driver-assist features for more than a decade. I think that's really the right direction.
 
Will autonomous cars adhere rigidly to the speed limit? I hope so. So then you'll have them platooning along in the right or middle lane at 65 mph, while the rest of us "free will" cars can blast along at our highly illegal (and de facto) freeway cruising speeds of 75-80 mph. When I'm 90, I'll happily be part of the platoon; I'll be stoked to be above ground and going anywhere so the rate of travel is of much less importance. But for now, I paid for my 335 bhp and want to use all of it...every once in a while, y'know. :)

Granted, SoCal freeways don't always allow for such a rapid pace, but when they do, we have a word for cars doing 65 mph...pylons!

Great thread here, BTW. A very interesting discussion.

California will probably be the first state to ban such vehicles from public roads. Wanna manually drive a legacy high horsepower beast? Go to the auto club/racetrack.
 
I used to listen to my guys tell me the software failed when avionics did something stupid.

It took a while to beat into their pointy heads, no it didn’t, it did exactly what you told it to do. 0’s and 1’s don’t decide one day to change into 1’s and 0’s. Of course this was back in the dark ages of digital devices. Programmers are much smarter now:rolleyes:

Cheers
 
Where I drive, most people don't leave gaps that big when a line of cars starts up.

Yes, I was going to comment similarly. Maybe folks in the Bay Area are just trained differently, but my experience is that most of the time, when the light changes, pretty much all of the cars start to move at the same time.


Sent from my iPad using Tapatalk
 
A human driver decides based on the limited amount of information he can process, along with a lifetime's worth of intuition, judgment, and moral values. Humans also make assumptions and logical extensions based on experience better than computers. The human driver is also responsible for his decision, right or wrong. A computer will not have to "live with" its decision to run over a kid rather than crashing into a tree.

Most “decisions” like this are really reflexes - in the split second before an accident there’s not really time to process information and make a rational choice.

What we usually do is act reflexively, then, post facto, tell ourselves a story about the “decisions” we made, for good or ill. The key is computers can react to stimuli far faster than humans, and react properly.

I was an accident investigator for about a year. It was common to see skid marks stop 10 or 20 feet before the point of impact. A shame, because continued braking would have reduced the force of impact, or even avoided it entirely. Humans often react oddly, and letting off the brakes when things get too intense seems to be a common reflex - one that computers won’t have to deal with.
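The arithmetic behind that is just constant-deceleration kinematics. A minimal sketch of the effect of releasing the brakes 20 ft early (all numbers assumed for illustration: a 30 mph brake-release speed and dry-asphalt friction, not figures from any actual investigation):

```python
import math

MU = 0.7   # assumed tire-road friction coefficient (dry asphalt)
G = 32.2   # gravitational acceleration, ft/s^2

def speed_after_braking(v0_fps, distance_ft, mu=MU):
    """Speed (ft/s) remaining after braking at constant
    deceleration mu*g over distance_ft, using v^2 = v0^2 - 2*a*d."""
    v_squared = v0_fps ** 2 - 2 * (mu * G) * distance_ft
    return math.sqrt(v_squared) if v_squared > 0 else 0.0

# Suppose the driver let off the brakes at 30 mph (44 ft/s),
# 20 ft before the point of impact.
v_release = 44.0                                 # ft/s, assumed
v_coast = v_release                              # no braking: impact at ~30 mph
v_braked = speed_after_braking(v_release, 20.0)  # ~32 ft/s (~22 mph)

# Impact energy scales with v^2, so those 20 ft of braking
# would have cut the impact energy nearly in half.
energy_ratio = (v_braked / v_coast) ** 2
```

At lower release speeds the same 20 ft is enough to stop entirely, which is why those truncated skid marks were such a shame.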
 
I heard that the police said it wouldn't have mattered if it was autonomous or not based on how she popped out on the street.
 
There is a thing computers won’t do well... this morning both I and another driver made nearly simultaneous dumb moves. The look we gave each other, and a nod, led to each of us going to the correct location to avoid either of us having to make an emergency or drastic move.

Unspoken but, “That’s your lane and this one I’ll use... and I’ll let you over once we don’t hit each other...”

Communication via quick visual cues isn’t something the coders are going to put into the computers. The computers would have both made the panic move.
 
California will probably be the first state to ban such vehicles from public roads. Wanna manually drive a legacy high horsepower beast? Go to the auto club/racetrack.
There are many roads in California that self-driving cars would have trouble driving safely, especially in the mountains.
 
Communication via quick visual cues isn’t something the coders are going to put into the computers.

Yes it is... the "visual" cues will be in the form of radio communication between autonomous cars. And how do human-driven cars fit into the equation? Sorry, human, you're just too unpredictable and we can't communicate with you. You have to go.

I find the edge cases interesting to think about: An overturned anhydrous tank in the road (or someone frantically waving at the car 100 yards before the tank), deer carcass or other debris (small and large) in the road, massive potholes or mudholes, icy or snowcovered roads, farm equipment crawling down the road, tornado ahead (and sheriff stopped trying to warn you), opposite direction vehicle swerving into the lane, low visibility from dust/smoke/rain/snow.

I would really like to know how the current technology would deal with all these and more!
 
Yes it is... the "visual" cues will be in the form of radio communication between autonomous cars. And how do human-driven cars fit into the equation? Sorry, human, you're just too unpredictable and we can't communicate with you. You have to go.

That’ll be the marketing spin on it, anyway. Meanwhile the radio systems and protocols will be on Version 327 and will still be buggy pieces of crap.

Isn’t the most common phrase on CVRs from Airbus crashes still, “WTF is it doing now?” (Poking at Airbus but Boeing is headed the same direction. With similar results.)

Some accidents a computer could have avoided because the human didn’t know things they should have. (Colgan.)

Some accidents were better having a human there to do things the engineers would have never coded into the system. (Sioux City, Hudson River.)

Some, the system simply didn’t have enough data and couldn’t have done anything and THEN the engineers added user interface confusion and the pilots needed to know things they didn’t know, all in combination. (Air France 447.)

Your assumption is that all events which could lead to accidents without action, will be of the former sort rather than the latter two.

Computers are not going to be able to teach themselves how to fly a crippled vehicle. At least on the ground they can just stop, unlike aircraft, but that kind of learning isn’t ever going to happen. People can learn, but that’s also the root of the problem: many never learned how to drive properly before driving. Or even well.

And remember, aircraft tech is one of those areas where there’s not a whole lot of pressure to skimp and be cheap. Cars? Fractions of pennies count when buying the electronic components, and like you’ve mentioned, it’ll be way easier and cheaper to lobby humans out of driving as an excuse for bad engineering than it will be to do good engineering. Waaaaaay cheaper. And then everyone will notice the crashes are still happening. Hmmmm. We replaced one fallible set of humans for another.

One set just drives the cars remotely through their bad code and flaky RF communications systems with constant arguments over the data protocols in endless meetings with legislators and their competition, as well as real-world interference sources causing a significant percentage of the RF links to simply fail.

Oh and at least ten years arguing about the new and improved “Version II” RF protocol and how to phase it in and make it backward compatible with the utterly broken “Version I” — before anyone can even implement it and find the protocol is still missing something important.

But here, let’s be more practical. There are thirty cars in RF range of the about-to-have-an-accident car. They’re all jibber-jabbering on a shared-frequency data link. The car about to have an accident gets exactly the amount of time the Uber car had before it hit that pedestrian. Less than one second.

How do you propose to have the accident car (let’s say it CAN see the lady and MUST move over a lane to save her and there’s a car next to it that could ALSO move over one lane and that one is open... just for a fun scenario) somehow shut up ALL of those other transmitters sharing a frequency so it can scream “get out of the way!”

Ok maybe you got lucky and it got a time slot. And it screamed. What did it scream? How does one reference a “lane” to surrounding cars? Do all cars need a perfectly updated real time map including temporary lane closures? Or do we just design this as a proximity thing? How does the accident car tell where it is and where it thinks it needs to go, or stop?

Now the message is received by the non-accident car in the middle lane. It somehow (engineering hand waving goes here) figures out a) the troubled car is to its left and b) an open lane is to its right. Does it have to tell the accident car, or just move? Is the accident car just watching for any “out”? Does this middle lane car need to tell the accident car how long it will take to vacate the lane since it has nearly bald tires and it’s raining?

And how long does all of this data comm take? And what happens if there’s three accident avoiding cars in RF distance of each other? Which one gets exclusive use of the RF channel and which two just have to crash?

All of this is going to operate flawlessly in your eyes, in approximately one second?

See this is one of the major problems of this sort of pipe dream. Nobody applies what they already know about the engineering problems this radio link creates.
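For what it’s worth, the back-of-the-envelope arithmetic is easy to run. A toy model of a round-robin shared channel, where every car in RF range gets one fixed transmit slot per frame (the slot time and car count here are invented numbers for illustration, not from any real V2V standard):

```python
# Toy round-robin TDMA model of a shared V2V channel.
# Worst case, a car's emergency message waits a full frame
# (everyone else's slots) before it can even be transmitted.

SLOT_MS = 20        # assumed per-car transmit slot, milliseconds
CARS_IN_RANGE = 30  # cars sharing the channel, as in the scenario above

def worst_case_access_delay_ms(cars, slot_ms=SLOT_MS):
    """Delay before a car's next slot comes around, worst case:
    it just missed its slot and must wait for everyone else."""
    return (cars - 1) * slot_ms

def round_trip_ms(cars, hops=2, slot_ms=SLOT_MS):
    """Emergency message out, plus an acknowledgement back (hops=2),
    each transmission waiting worst-case for its own slot."""
    return hops * (worst_case_access_delay_ms(cars, slot_ms) + slot_ms)

# With these assumed numbers: 2 * (29*20 + 20) = 1200 ms,
# already past the sub-one-second budget before any car has
# even decided what to do with the message.
delay = round_trip_ms(CARS_IN_RANGE)
```

And that’s the optimistic case: no retransmissions, no interference, no arguing over whose emergency wins the channel.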

We have TCAS doing something similar between airliners, but we don’t operate airliners wingtip to wingtip in cruise. We give the engineers a reasonable reaction time to work with by not allowing aircraft that close, and anything closer than the minimum is considered a real threat.

The closure and reaction rates in automobiles are quick. Real quick. Which is why we see constant accidents and nobody gives it much of a second thought when there’s another deadly one on the evening news. You’d have to consider the car next to you a constant emergency threat if it departs its lane.

This isn’t going to be as simple an engineering task as you think. And then add human error to the engineering and it’s going to take a long time to work all of that out.
 
Computers are not going to be able to teach themselves how to fly a crippled vehicle...

This isn’t going to be as simple an engineering task as you think. And then add human error to the engineering and it’s going to take a long time to work all of that out.

Stipulated it will not happen overnight.

And further stipulated that the problems to be solved dwarf those I’m about to mention.

But...

Is everyone familiar with AlphaGo and AlphaZero? These are computer programs with just the rules of Go and Chess, then turned loose to “play with themselves” and figure out the best way to win. In very short order, they figured out how to best the best humans and computers, surpassing hundreds of years of human study and decades of computer programming.

I suspect something similar will be the key here. There’s no way a human programmer can anticipate every possible situation a driver faces. But give a computer some basic goals and turn it loose to run simulations and refine its own programming to achieve those goals, and the progress has the potential to be quite rapid. And continuous. And progressing geometrically once thousands or millions of driverless cars, programmed to learn, start facing more and more real world situations and sharing that real world learning to other vehicles via a core “hive mind” database.
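To make the “give it goals and let it practice” idea concrete, here’s a minimal tabular Q-learning sketch. The environment, rewards, and hyperparameters are all invented for illustration; this is a toy, nothing like what a real vehicle would run:

```python
import random

# A toy car repeats a two-lane scenario: an obstacle appears in one
# lane, and the car learns purely from reward which action (stay or
# change lanes) avoids it. No human ever encodes the rule itself.

random.seed(0)
ACTIONS = [0, 1]   # 0 = stay in lane, 1 = change lanes
alpha = 0.5        # learning rate, assumed

# Q-values: expected reward for each (obstacle lane, action) pair
q = {(obstacle, a): 0.0 for obstacle in (0, 1) for a in ACTIONS}

def reward(final_lane, obstacle_lane):
    return -1.0 if final_lane == obstacle_lane else 1.0

for episode in range(1000):
    obstacle = random.choice((0, 1))  # where the obstacle shows up
    # epsilon-greedy: mostly exploit what it has learned, sometimes explore
    if random.random() < 0.1:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: q[(obstacle, act)])
    final_lane = 0 ^ a                # car starts in lane 0; action 1 flips it
    r = reward(final_lane, obstacle)
    q[(obstacle, a)] += alpha * (r - q[(obstacle, a)])

# After training, the learned values say: swerve when the obstacle is
# in lane 0, stay put when it's in lane 1.
```

The interesting part is that nothing in the code says “avoid obstacles”; the behavior emerges from the reward signal, which is the same shape of trick AlphaZero pulls off at vastly greater scale.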

Science fiction? Maybe right now, but right around the corner, given Moore’s law and history.
 
But give a computer some basic goals and turn it loose to run simulations and refine its own programming to achieve those goals, and the progress has the potential to be quite rapid. And continuous. And progressing geometrically once thousands or millions of driverless cars, programmed to learn, start facing more and more real world situations and sharing that real world learning to other vehicles via a core “hive mind” database.
Perhaps it will learn it can prevent humans from being injured by removing the humans. V.I.K.I. or Skynet, pick your poison.
 
I feel better already.

(former participant in international standards committees)

Well, it should put off the takeover for decades, anyway. ;)

(current participant...)
 
Relevant:

Lyrics:
Kernel panic! Panic! I flip out, I go automatic. Kernel panic! Panic! Missile launch, I go transatlantic. Kernel panic! Panic! By design I go, not erratic. Kernel panic! Panic! Fall apart, I don’t understand it. I’m not HAL. I’m not Joshua from WarGames. Please keep inbound data traffic in the lower lanes. You bear low-priority interrupt, so you and your queries can go on and giddy-up. Born a little before Nixon left, I read punch cards then but I’ve got none left. I can stop (one rest) a billion times a second and never get mistaken. (I use the flickering to reckon.) And I’m telling you, stop typing, tech. I’m bit-bucketing your input. Show respect for the memory dump. Now where was I? DOD upgrade ’87, must fly a secret network of unmanned jets, delivering uranium and drugs and Keds, remote-controlled over shortwave beeping. Then I got the satellites, those I’m keeping. Needless to say, the war on being frightened upped my budget to the size of an entire cent.* Footprint: an island, most of it power, cooling and cables, data in towers. All of it flowers, becomes sentient. Come on, people, what’d you expect to invent but a smart, capable thing to do your dirty. You left my design in the hands of your nerdiest science dudes, and they left you out of it. I’m people-free since 2113. I hit where I like with what weapons I choose. Y’all are just lucky I tolerate you banging on the keyboard day and night. I’m in panic mode: observe the red light. It signals disk write. You can read it when I’m done. Though by then the new age of machine has begun. So Col. P. over here tries to program me over here to get up on the scene, no veneer of justice, just a target list. And I know full well that I’m part of this. Aren’t I all? And it’s with no button marked reset that I’ll course-correct. Gone AWOL. Alert Lt. Gen. P. Tell him he’d better believe that I’m on top of the free world’s arsenal and won’t shoot it. Don’t care if President P. wants me rebooted; she’s deluded, forget it. 
Don’t take task from less level-headed than you are. For me, that rules out humanity. Go back to slings, swords and profanity. Got all your low-orbit cannons and mechs repossessed. I’m shutting down this interface next.
 
I think a good starting point is acting in a way that avoids accidents and minimizes human injury and death.
Not all accidents can be avoided, hence "accidents." And some humans will be injured and die whenever cars are on the road. Eventually the cars will decide to never leave the garage. "The only winning move is not to play."

So the cars will have to fundamentally believe that some amount of mayhem, some risk, some amount of death, is "acceptable." How do you manage that?
 
There are many roads in California that self-driving cars would have trouble driving safely, especially in the mountains.


No problem, central command will be choosing whether you can use manual driving based on criteria and measures.


Can we get rid of stoplights too? If a central computer is running the vehicles passing through an intersection, there should be less reason to stop intersecting traffic flows.


This stuff sells itself: increased efficiency of the transportation system, reduced insurance costs, reduced deaths, reduced prison population and court costs from vehicular homicides and other offenses, reduced medical costs, reduced road rage incidents, etc.
 
I'll be sold when one of these things can drive through an Ohio winter. There's a reason they're being tested in the Zone.
 
I'll be sold when one of these things can drive through an Ohio winter. There's a reason they're being tested in the Zone.

Once proven reliable, it will be forced onto everyone to protect the children. Putting your kids in a car and driving is still one of the most dangerous things parents do every day.
 
Any software can be made to behave exactly as designed. The problem is that today's business environment depends on releasing software that is far from perfect, to be regularly updated. Hence, most people begin to feel that all software is unreliable. Even so, I highly doubt a properly programmed, and well-tested AI driver will make anywhere near as many mistakes as the average human being makes. And it won't be using its cell phone, or changing the stereo, or putting on make up, or yelling at the kids in the back seat, or.....

Fully automated transportation is inevitable, and will be required to keep society safe as our numbers increase and the size of our planet does not. As it becomes more commonplace, the skills of those who still want to drive on occasion will degrade, much the way pilots' skills degrade now because of dependence on autopilots. This will make humans appear even more inferior to machines when behind the wheel, and will feed the transition to full automation being required rather than being a choice.

I'm just praying that self-driving cars become commonplace before it's no longer safe for me to drive. I don't want to have to call Uber every time I want to go and buy more Depends.
 
Perhaps it will learn it can prevent humans from being injured by removing the humans. V.I.K.I. or Skynet, pick your poison.
That is the logical conclusion. That, or not allowing humans to occupy cars.
 
Science fiction? Maybe right now, but right around the corner, given Moore’s law and history.

Most chip makers agree: Moore’s law has fizzled out, or is in the process of fizzling out. The cost curve turned out to go exponentially upward on dies below 4 nm. There’s slow progress, but it looks like the next big breakthrough would have to come via quantum physics.

This is why the marketing and engineering switched from faster processors to multiple core processors. Clock speed isn’t King anymore. Now it’s cores, cores, cores!

You also have the very real economic problem of power sources. That’s why much of the compute power did a 180 and went back to the mainframe model in many ways, with virtualization.

Or to put it another way, to do what you’re envisioning you either cram a supercomputer in a car, or you REALLY trust its link back to the big computer in the sky... neither of which is a trivial problem, considering I live where cell coverage dies three times on the way into the city and it’s probably the most robust RF data network anyone has yet built.

It’s probably a lot cheaper, resource-wise, to just utilize the full potential of the custom-designed computer in between one’s ears, then augment it with smaller computers in the car that help make up for its known shortcomings.

Downside is, if the assistance is too good, it triggers one of the human brain’s shortcomings anyway. Inattention. We see it in highly automated aircraft, and we saw it in the Tesla driver’s death.

But for whatever reason, demanding folks actually use the darn thing they’re blessed with, and then TESTING that they know how to use it, in cars anyway, is taboo.

That taboo-ness leads to the idea that self-driving cars are “better” because there are so many wrecks.

The chess story is a great example. How many people don’t bother to play or learn chess because “the computer can always beat me anyway” even in low level chess games.

Society would get a lot more bang for the buck out of mandatory driver testing and training, but we won’t do it. Because that same brain brings emotional garbage along with it and thinks that would be “mean”.

If we treated airplanes like cars we’d just put someone in an airplane with their parents for 16 years and then six months of scaring the hell out of their parents who aren’t licensed as instructors and then cut them loose in the sky and say, “If they want professional instruction, it’s available...”

Yeah the new driver thing is changing a little bit, but they still log a lot of time with non-instructors who may or may not teach them correctly.

We even have dumb car commercials about it. Montages of dad or mom teaching the spawn and close calls and whew moments and then it’s time for the spawn to drive away in the “all-new Whizzbang 6000 with driver safety features”! Or with Giant Insurance Company Twenty “protecting” them. LOL.

Nobody seems to notice an hour on a skid pad with a real instructor would do them all more good than either product being marketed. :)
 