First driverless car pedestrian death

Lordy, we're already capable of inter-frickin'-planetary space flight. There's no reason to doubt that 25 years from now, we can ride along in autonomous vehicles with a very high degree of safety.

There will be bumps along the road, and remember, that "perfectly safe" thing just doesn't exist. As kyleb said, give it time.
 
Interplanetary space flight is a much, much easier, mainly deterministic problem than creating safe, autonomous cars operating in the random, chaotic, public streets. As evidence, I offer the fact that interplanetary space flight was being done, mainly successfully, decades ago using less computing power than a lot of people are carrying around on their wrists.

Creating reliable, safe, autonomous vehicles is a non-deterministic problem orders of magnitude more complex.


 
Cirrus had a fatal crash within the first 5 hours of flight. By the calculation method used above, Cirrus should have by now killed ~2.4 million people. They somehow haven't, and the Cirrus has since become one of the safest GA airplanes to fly.

One cannot extrapolate a trend from a sample set of 1.
Even less valid is comparing Cirrus's first five hours of flight to Waymo's and Uber's first six million miles of autonomous vehicle testing. I suspect that the latter is more statistically significant than the former. And while it's true that one accident is not much to extrapolate from, Waymo and Uber will have to complete an additional 78 million miles with no additional fatalities before they equal the record for human drivers.
 
Interplanetary space flight is a much, much easier, mainly deterministic problem than creating safe, autonomous cars operating in the random, chaotic, public streets. As evidence, I offer the fact that interplanetary space flight was being done, mainly successfully, decades ago using less computing power than a lot of people are carrying around on their wrists.

Creating reliable, safe, autonomous vehicles is a non-deterministic problem orders of magnitude more complex.

But simply going into space was a feat that seemed unimaginable several decades before we actually did it.

Technology advances in an exponential way....things that may seem impossible today become tomorrow's reality. No one is saying that autonomous cars will be commonplace and uber-safe overnight.
 
Wait for insurance premiums to settle out: If autonomous cars are "safer", premiums on human guided cars will go up
 
Hi.
It is one thing to apply advanced knowledge and technology to go to space, done by people who know what they are doing; it's another to be plain stupid, which is what these people are, regardless of technology.
They do not deserve to be allowed on public streets until they can prove that they know what they are doing. Before that, a bunch of them should be going to some Mexican jails; the country clubs we have here are no real punishment for them.
 
Technology advances in an exponential way....things that may seem impossible today become tomorrow's reality.

Historically, there have been examples of all of the following:

1. Technology that seemed impossible but succeeded;
2. Technology that seemed impossible and didn't succeed;
3. Technology that seemed possible and succeeded;
4. Technology that seemed possible but didn't succeed.

At this point, the data are insufficient to tell which of those categories self-driving cars will fall into.

No one is saying that autonomous cars will be commonplace and uber-safe overnight.
True, but there ARE people who are saying that it WILL happen sooner or later. At present, we have no way of knowing that.
 
Is there a driverless race yet? The "Headless 200" or something like that? THAT would be interesting.
 
True, but there ARE people who are saying that it WILL happen sooner or later. At present, we have no way of knowing that.

Probably later. I don't think the problems with autonomous cars are insurmountable.

So let the thread die, meet back here in 20 years and discuss? :)
 
From Bloomberg:

Human Driver Could Have Avoided Fatal Uber Crash, Experts Say

Excerpts:

"Zachary Moore, a senior forensic engineer at Wexco International Corp. who has reconstructed vehicle accidents and other incidents for more than a decade, analyzed the video footage and concluded that a typical driver on a dry asphalt road would have perceived, reacted, and activated their brakes in time to stop about eight feet short of Herzberg."

"For human driving in the U.S., there’s roughly one death every 86 million miles, while autonomous vehicles have driven no more than 15 to 20 million miles in the country so far, according to Morgan Stanley analysts."
 
Computer driven cars are incapable of handling anything that the programmer didn't first anticipate. They're not even close to the level of being unable to get it wrong yet.
 
Computer driven cars are incapable of handling anything that the programmer didn't first anticipate.


That's not quite true and doesn't consider how systems are trained. They can handle situations they haven't seen before, but sometimes the choices they make can be quite surprising.

The next breakthrough that might make these technologies feasible may well come from quantum computing. Using quantum superposition to evaluate all possible solutions simultaneously will be a dramatic leap. I did a little dabbling in that a few years ago and got some exposure to D-Wave One, but IMHO we're still in the ENIAC stages of development with a loooong way to go. Call me when we can chill a laptop down to ~1 degree Kelvin...
 
"Zachary Moore, a senior forensic engineer at Wexco International Corp. who has reconstructed vehicle accidents and other incidents for more than a decade, analyzed the video footage and concluded that a typical driver on a dry asphalt road would have perceived, reacted, and activated their brakes in time to stop about eight feet short of Herzberg."
A human driver could also quite possibly have anticipated the location as being one with a high likelihood of a pedestrian appearing from between parked cars. When you're driving down a street at night with cars parked along one or both sides, you typically have enough experience to have an instinct for when things like that are more likely to happen. We develop a "feel" or instinct for it based on experience given the neighborhood, time of day or night, type of housing, surrounding businesses, etc. Neighborhood with young families? Downtown with bars closing? Major event going on and drunken revelry in progress? Maybe that speed limit is a little high, better slow down a bit and keep a foot ready on the brake.
 
That's not quite true and doesn't consider how systems are trained. They can handle situations they haven't seen before, but sometimes the choices they make can be quite surprising.

I've studied in this field - no, they can't. We can program the computers to go down paths, left or right. We can bump variables according to input and then program responses according to data state. Expert or AI systems don't get "trained"; they accumulate a data state, from which they are programmed to react. That data state can be very sophisticated, but it is still limited by what the original programmers enabled the system to accumulate and to react on.

Systems can be trained to recognize stimuli and the results. But this car failed to recognize the stimulus of a person walking in front of it from the shadows because it had probably never seen it before. So it would have to have this happen to be able to see the pattern, be able to recognize the result of the person hit/killed by the car, be programmed to recognize the negativity of the result, and THEN be able to adapt to look for this stimulus happening again in the future so it could be avoided. That is what you're talking about when you say handling situations they haven't seen before. No human or computer can predict a negative outcome from a visual pattern they've never seen before.

Of course, we could always just put IR cameras in the cars and then avoiding hitting humans in the dark becomes a lot easier. But programmers didn't think of that.
 
A human driver could also quite possibly have anticipated the location as being one with a high likelihood of a pedestrian appearing from between parked cars. When you're driving down a street at night with cars parked along one or both sides, you typically have enough experience to have an instinct for when things like that are more likely to happen. We develop a "feel" or instinct for it based on experience given the neighborhood, time of day or night, type of housing, surrounding businesses, etc. Neighborhood with young families? Downtown with bars closing? Major event going on and drunken revelry in progress? Maybe that speed limit is a little high, better slow down a bit and keep a foot ready on the brake.
The clue for me yesterday was one of those inflatable "houses" that kids jump around in, across the street from my house, and it was bouncing. I drove much slower than normal until I was well clear.
 
I've studied in this field - no, they can't. We can program the computers to go down paths, left or right. We can bump variables according to input and then program responses according to data state. Expert or AI systems don't get "trained"; they accumulate a data state, from which they are programmed to react. That data state can be very sophisticated, but it is still limited by what the original programmers enabled the system to accumulate and to react on.

Systems can be trained to recognize stimuli and the results. But this car failed to recognize the stimulus of a person walking in front of it from the shadows because it had probably never seen it before. So it would have to have this happen to be able to see the pattern, be able to recognize the result of the person hit/killed by the car, be programmed to recognize the negativity of the result, and THEN be able to adapt to look for this stimulus happening again in the future so it could be avoided. That is what you're talking about when you say handling situations they haven't seen before. No human or computer can predict a negative outcome from a visual pattern they've never seen before.

Of course, we could always just put IR cameras in the cars and then avoiding hitting humans in the dark becomes a lot easier. But programmers didn't think of that.

But IR cameras also add a whole 'nuther set of stimuli that have nothing to do with safety, such as the columns of heat rising from underground steam pipes or grate-type manhole covers. In a city like New York with lots of such interference, the cars would be overwhelmed and wouldn't move at all. In this case, the limitations of human vision -- specifically, not being able to see in the IR range -- give us the advantage over machines.

Rich
 
I’m just glad that when “they” think they’ve perfected these things and let them loose, I’ll be long gone and my ashes stashed away.

My parking sensors going bat s*** every time it snows is a real PITA, BTW.

Cheers
 
The clue for me yesterday was one of those inflatable "houses" that kids jump around in, across the street from my house, and it was bouncing. I drove much slower than normal until I was well clear.

Imagine an AI programmed to run through billions of real-life scenarios a second and learn from them. In the first few bounce house scenarios it squishes a kid or two. It learns, “If bounce house, then slow down.” Over the next hundred billion scenarios it learns that this greatly lessens squished kids, and it incorporates that algorithm.

If we learned that lesson, no reason why a computer couldn’t.
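A toy version of that loop might look like the sketch below. Everything in it (the scenarios, actions, and outcome scores) is made up purely to illustrate "try it, score it, keep what works":

```python
import random
from collections import defaultdict

# Toy "learn from simulated scenarios" sketch -- purely illustrative.
# The scenarios, actions, and reward numbers are all invented.
value = defaultdict(list)  # (scenario, action) -> list of outcome scores

def simulate(scenario, action):
    # Made-up outcome model: slowing down near a bounce house avoids
    # most bad outcomes; normal speed occasionally ends very badly.
    if scenario == "bounce_house" and action == "normal_speed":
        return -100 if random.random() < 0.05 else 0   # rare but terrible
    return -1 if action == "slow_down" else 0           # small cost for slowing down

for _ in range(100_000):
    scenario = random.choice(["bounce_house", "clear_street"])
    action = random.choice(["slow_down", "normal_speed"])
    value[(scenario, action)].append(simulate(scenario, action))

def best_action(scenario):
    # Pick whichever action has the best average outcome so far.
    return max(["slow_down", "normal_speed"],
               key=lambda a: sum(value[(scenario, a)]) / len(value[(scenario, a)]))

print(best_action("bounce_house"))   # learns "slow_down"
print(best_action("clear_street"))   # learns "normal_speed"
```

Real systems are vastly more complicated, but the learn-from-outcomes idea is the same.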
 
Computer driven cars are incapable of handling anything that the programmer didn't first anticipate. They're not even close to the level of being unable to get it wrong yet.

Not exactly. While there probably is some classic if-then, switch, and looping type of algorithmic programming in these vehicles, the heavy load of computation is most likely handled by neural-network-type architectures. Take a look at http://pages.cs.wisc.edu/~bolo/shipyard/neural/local.html for a pretty good basic-level description of these. At a fundamental level, what is being done is to create a computational structure which attempts to replicate a brain-like structure of interconnected neurons, although with orders of magnitude fewer artificial neurons than a real human brain. There are several different training algorithms for these, but they all involve presenting massive amounts of data to the network along with expected outcomes for each data set. The network “learns” how to respond to the data in the set during the training phase, and the results are verified by presenting other data sets, which it did not see in training, to the network and seeing if it behaves in the proper fashion. For example, a network can be trained to identify faces in a picture by giving it many (usually thousands of) pictures containing faces with the location of the faces identified.
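For anyone curious, here is a minimal sketch of that train-then-verify workflow using scikit-learn and a synthetic dataset in place of real sensor data; the network size and train/test split are arbitrary choices for illustration, not anything a car company actually uses:

```python
# Minimal sketch of the train-then-verify workflow described above,
# using a tiny synthetic dataset in place of real sensor data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# The network only ever "learns" from the training split...
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
net.fit(X_train, y_train)

# ...and is judged on data it never saw during training.
print("training accuracy:    ", net.score(X_train, y_train))
print("verification accuracy:", net.score(X_test, y_test))
```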

The limitations are several.

1. Typically, the engineers don’t really know how the network is making its decisions. (See the limitations section in the linked article)
2. The quality of the results depends on the input data. For instance, let’s say that the faces in the training set in my example all had blue eyes because the engineer didn’t think to check for that. It may be that instead of recognizing faces, the network really is only finding blue eyes, and a test with faces of brown eyed people would fail. The key to a robust network is to have a robust set of training data.
3. The network may be learning something entirely different from what we think it is. It will always get the correct results on the training set, and may even get the correct results on the verification set, but may get completely wrong results in the real world if the training and verification datasets aren’t large and varied enough.
4. The larger and more complex the network, the larger the data sets required to train and verify are.

For driverless cars, one huge issue is creating the massive set of training data. The network itself also has to be large enough to handle the huge variety of situations it will encounter in the real world. Likely there are multiple networks in a car.

There’s probably one which identifies road boundaries and lanes. There’s likely one (or more) which are used to classify objects the sensors are seeing. And there is probably one taking these pieces of data and making decisions on what to do. Likely there is also some if-then classical algorithmic code involved as well.
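To make the "several networks plus some classical if-then glue" idea concrete, a grossly simplified sketch might look like the following. The component names, stand-in outputs, and thresholds are all invented for illustration:

```python
# Grossly simplified sketch of a perception -> classification -> decision
# pipeline. Each "network" is a stand-in; names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str               # e.g. "pedestrian", "cyclist", "unknown"
    distance_m: float
    closing_speed_mps: float

def lane_network(camera_frame):
    """Stand-in for a net that finds road boundaries and lanes."""
    return {"in_lane": True}

def object_network(sensor_frame):
    """Stand-in for a net that classifies objects the sensors are seeing."""
    return [Detection("pedestrian", distance_m=5.0, closing_speed_mps=1.4)]

def decide(lanes, detections):
    """Classical if-then logic layered on top of the learned components."""
    for d in detections:
        time_to_impact = d.distance_m / max(d.closing_speed_mps, 0.1)
        if d.kind in ("pedestrian", "cyclist", "unknown") and time_to_impact < 4.0:
            return "brake"
    return "continue"

print(decide(lane_network(None), object_network(None)))  # -> "brake"
```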

It’s an extremely complex and challenging problem, which requires massive amounts of known good data for training.

Complete speculation here, but for example, what if the Uber car were trained to recognize pedestrians with data sets only taken when the pedestrian wasn’t with a bike? Or perhaps trained to recognize bikers also? Maybe the neural network classified the victim as a biker and not a pedestrian and had been trained to evaluate that a biker would be out of the way in time? It’s possible that it misclassified the situation.


 
So why didn't the car recognize that someone was walking out of the shadows?
 
So why didn't the car recognize that someone was walking out of the shadows?

Complete random speculation here to illustrate why your question can’t be answered and how incredibly complex the problem is:

* Maybe it had never been trained with such a case.

* Maybe it had only been trained to recognize pedestrians not pushing bikes.

* Maybe it had only been trained to recognize pedestrians pushing bikes with data sets in which the bikes had a rear reflector at 3-8 o’clock position with respect to the valve stem.

* Maybe it had been trained to recognize pedestrians with a training set that also had pictures of street signs in them, and therefore if a street sign wasn’t also in the field of view it did not recognize the pedestrian.

* Maybe any of thousands of other possibilities...

The point is that a lot depends on the quality of the training data and we don’t really know what the network is taking into account as it makes its decision.

* Maybe there was a sensor glitch or failure, i.e. some type of momentary or permanent hardware failure.

* Maybe a cosmic ray particle caused a failure in a key node of the system at a key point in time. (Not joking here; military and space electronics are rad-hardened and tested just to prevent this type of failure.)

* Maybe it was a murder and someone cleverly blinded the lidar with an infrared CO2 laser just as the pedestrian was crossing the street. (okay, not really likely in this case, but one must consider the possibility that these self driving cars can be maliciously tampered with from a distance)

Bottom line, there’s no way of answering your question without lots of reverse engineering work. Hopefully the car in question kept a full record of all the sensor data it was seeing so the engineers can maybe find out what happened. Even with the data, it will likely take hundreds or thousands of man hours to completely recreate where the car made the wrong decision.
 
2. The quality of the results depends on the input data. For instance, let’s say that the faces in the training set in my example all had blue eyes because the engineer didn’t think to check for that. It may be that instead of recognizing faces, the network really is only finding blue eyes, and a test with faces of brown eyed people would fail. The key to a robust network is to have a robust set of training data.


+1
Better yet, +1e9

It's next to impossible to have a complete enough training set in a development environment. Part of the reason for putting these vehicles onto real streets with real scenarios is to continue training. BUT, that training has to be done with a human back-up safety net, which was just shown to be less than ideal.

Humans also learn from an entire variety of other life experiences that we then bring to driving, and computers don't benefit from this. The statement about bounce houses is an example. Similarly, if we see balloons tied to a mailbox, and maybe a colorful sign, we humans think there might be a child's birthday party under way and we exercise caution. If we smell hot brakes we realize someone around us might be having a brake problem. If we're driving past Fast Eddie's place we know to look out for puppies. We recognize funerals, weddings, little league games, know when it's Bike Week in FL (last week), and a gazillion other experiences that all influence our driving.
 
One thing about the ability to learn: The ability to transfer that knowledge to others, to teach.

This car *maybe* saw a human pushing a bike for the first time. Maybe it learns from that and doesn't do it again. Unless it has the ability to publish that knowledge so other vehicles can be taught, then the transfer of knowledge has not occurred.
 
Unless it has the ability to publish that knowledge so other vehicles can be taught, then the transfer of knowledge has not occurred.

It may seem like science fiction, but one thing being proposed is that all enabled vehicles talk to each other, setting up the possibility of “sharing” learning. Think “hive mind”.

Has anyone watched “Humans”? Not much of a spoiler alert, but the non-sentient automatons constantly ask the sentient ones, “Why won’t you share?” They expect to upload/download each other’s experiences. Fascinating show and worth the watch, BTW.
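Purely as an illustration of what "sharing" might mean in practice, a vehicle could publish something like the record below to a fleet-wide store; every field name and value here is hypothetical, not a description of any real system:

```python
import json, time

# Hypothetical example of the kind of record one vehicle might publish so
# other vehicles can "learn" from it. Field names and values are invented.
hazard_report = {
    "reported_at": time.time(),
    "location": {"lat": 33.44, "lon": -111.94},   # example coordinates only
    "observation": "pedestrian_pushing_bicycle_midblock",
    "conditions": {"lighting": "night", "weather": "clear"},
    "suggested_behavior": "reduce_speed_and_widen_detection_margin",
}

print(json.dumps(hazard_report, indent=2))
```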
 
But IR cameras also add a whole 'nuther set of stimuli that have nothing to do with safety, such as the columns of heat rising from underground steam pipes or grate-type manhole covers. In a city like New York with lots of such interference, the cars would be overwhelmed and wouldn't move at all. In this case, the limitations of human vision -- specifically, not being able to see in the IR range -- give us the advantage over machines.

Rich
Stationary heat sources not in direct path - no big deal. Heat sources moving toward the path of the vehicle -- big deal.

I was talking about this with my niece (also sort of a geek) yesterday as we drove back from a wedding. I think it would take a combination of IR and solid-object detection -- RADAR, LIDAR, etc. Let's say you have a well camouflaged pedestrian standing beside a manhole venting warm air. You need to be able to distinguish the stationary solid object and react to its presence, while ignoring the stationary warm air. You also need to be able to scan off the side of the road for moving warm objects that are converging with the path of the vehicle.
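A crude sketch of that kind of fusion rule, with made-up object attributes and decisions, might be:

```python
# Crude sketch of fusing IR and lidar/radar cues, as described above.
# The attributes and the decision thresholds are made up for illustration.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    warm: bool         # shows up on the IR camera
    solid: bool        # returns a lidar/radar echo
    converging: bool   # its path intersects the vehicle's path

def threat_level(obj: TrackedObject) -> str:
    if obj.solid and obj.converging:
        return "brake"      # solid object moving into our path
    if obj.solid and obj.warm:
        return "caution"    # warm and solid but stationary: likely a person
    if obj.warm and not obj.solid:
        return "ignore"     # steam plume or warm air from a manhole, not an obstacle
    return "monitor"

print(threat_level(TrackedObject(warm=True, solid=False, converging=False)))  # ignore
print(threat_level(TrackedObject(warm=True, solid=True, converging=True)))    # brake
```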

It's a problem that can be solved, but obviously hasn't been solved yet. In the case of this most recent accident, as well as the Tesla crash while under "autopilot" control, it may well be true that the automation was not specifically at fault. If someone or something suddenly darts out in front of my car and I fail to stop in time, it may not be legally "my fault" - but it may also be true that I could have stopped in time, had I been more observant or less distracted or had better reaction time or better judgment. Sometimes there's a substantial margin between what puts you "at fault", and what we're really willing to accept.

The advantage of the available technology is that the car could actually see better than the human driver. We can't see IR, we don't have radar or LIDAR or echolocation. If we did, we could avoid those "they came out of the shadows in black clothing" accidents. The car can have all of those sensors, but needs to have the software developed to the point where they do enough good to make the car at least as good as the best human drivers.
 
It may seem like science fiction, but one thing being proposed is that all enabled vehicles talk to each other, setting up the possibility of “sharing” learning. Think “hive mind”.

Big help to learning, but also a big vulnerability to malicious intrusion. Imagine your entire network learning some very very bad habits....
 
It may seem like science fiction, but one thing being proposed is that all enabled vehicles talk to each other, setting up the possibility of “sharing” learning. Think “hive mind”.

Has anyone watched “Humans”? Not much of a spoiler alert, but the non-sentient automatons constantly ask the sentient ones, “Why won’t you share?” They expect to upload/download each other’s experiences. Fascinating show and worth the watch, BTW.

Think of several other cases enabled by this capability.

RF (or IR or whatever) interference, either malicious or random, preventing vehicles from communicating correctly about things like current position, speed, and intentions. Imagine lots of these things traveling at highway speeds, all merrily coordinating with each other to make sure there are no collisions, when the data links are taken down by RF jamming by someone intent on doing harm.

Suppose there is a transfer of knowledge between vehicles. Imagine someone hacking into that protocol and transferring false knowledge to the network.

These two scenarios are just off the top of my head. Beyond the already complex task of just getting these things to work safely in a benign environment, think of the opportunities made available to bad actors.

Blinding the sensors, interfering with the data links, heck, throwing a spike strip in front of the car, all could be done to interfere with the system.

Not only do the engineers have to solve the many problems with just getting the cars to safely operate in full autonomous mode, they also have to think about securing them from intentional malicious behavior.
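One obvious (if only partial) mitigation for the false-knowledge scenario is to reject any shared update that isn't properly authenticated. Here's a minimal sketch using Python's standard library; a real system would use public-key signatures and proper key management rather than a shared secret:

```python
import hmac, hashlib, json

# Minimal sketch of authenticating shared "learning" messages between
# vehicles. The shared secret is purely for illustration.
FLEET_KEY = b"not-a-real-key"

def sign(message: dict) -> str:
    payload = json.dumps(message, sort_keys=True).encode()
    return hmac.new(FLEET_KEY, payload, hashlib.sha256).hexdigest()

def verify(message: dict, signature: str) -> bool:
    return hmac.compare_digest(sign(message), signature)

update = {"observation": "pedestrian_pushing_bicycle", "suggested_behavior": "slow_down"}
sig = sign(update)

print(verify(update, sig))                                 # True: accepted
tampered = dict(update, suggested_behavior="speed_up")
print(verify(tampered, sig))                               # False: rejected
```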

[Edit] looks like HalfFast beat me to it regarding these points. He posted while I was composing...



 
Stationary heat sources not in direct path - no big deal. Heat sources moving toward the path of the vehicle -- big deal.

I was talking about this with my niece (also sort of a geek) yesterday as we drove back from a wedding. I think it would take a combination of IR and solid-object detection -- RADAR, LIDAR, etc. Let's say you have a well camouflaged pedestrian standing beside a manhole venting warm air. You need to be able to distinguish the stationary solid object and react to its presence, while ignoring the stationary warm air.


Infrared imaging can be much more robust than you may realize (I'm one of the original inventors and patent holders of this sensor, for example: https://en.wikipedia.org/wiki/Sniper_Advanced_Targeting_Pod ), but there might be some mild objections to raising the price of a $50,000 automobile to $1,050,000... :D
 
It's next to impossible to have a complete enough training set in a development environment. Part of the reason for putting these vehicles onto real streets with real scenarios is to continue training. BUT, that training has to be done with a human back-up safety net, which was just shown to be less than ideal.

All it shows us is that PARTICULAR human back-up safety net was less than ideal.

When my car is under autopilot anywhere over 15mph, my hand is holding the steering wheel tighter than I would if I were to outright drive the car myself. No way the car is driving anywhere I don't want it to go.

Having said that, if I saw only what the recording was showing I would not have been able to respond in time. But obviously I think I would have seen more than the recording is showing.
 
To be fair, the human "computer" didn't look, or didn't notice, the headlights of an oncoming car when entering the street, at night, in the middle of the block. Of course the computer in the car should be able to react to this, but I shake my head at pedestrians who cross the street, at night, wearing dark-colored clothing, trusting that cars will stop.
 
To be fair, the human "computer" didn't look, or didn't notice, the headlights of an oncoming car when entering the street, at night, in the middle of the block. Of course the computer in the car should be able to react to this, but I shake my head at pedestrians who cross the street, at night, wearing dark-colored clothing, trusting that cars will stop.

True. It seems that certain pedestrians and bikers tend to think that since they can see us (with our headlights on), we can see them. Not necessarily true at all.

However, in this case, the car’s sensors _should_ have been able to perceive the pedestrian. That is one advantage of the automation over humans; the sensors are better. Not sure if they did or not, but they should have had that capability. Even if the sensors perceived her, the systems in the car clearly failed in either the interpretation or decision phase and took no action.


 
The point is that a lot depends on the quality of the training data and we don’t really know what the network is taking into account as it makes its decision.

That is kinda my point too. The source of the training data, and what the network is taking into account, is the programming team. Whatever they didn't anticipate happening isn't trained for, isn't sensed for, isn't accounted for.
 
Infrared imaging can be much more robust than you may realize (I'm one of the original inventors and patent holders of this sensor, for example: https://en.wikipedia.org/wiki/Sniper_Advanced_Targeting_Pod ), but there might be some mild objections to raising the price of a $50,000 automobile to $1,050,000... :D
We discussed that yesterday as well. New or rarely used technology, and military hardware in particular, is always eye-wateringly expensive for various reasons. Of course your car would very likely need only a small subset of the features (and far less performance).

An expensive sensor suite is unlikely to remain expensive when a few million or tens of millions of units per year are needed.
 