First driverless car pedestrian death

From what the dashcam footage shows, she was barely paying attention to the road, texting or playing a game on her phone.
No amount of training can FORCE a dumb and lazy person to do something they don't want to do.

For comparison, imagine if she had been driving her regular car and, while texting, hit the bicyclist. Now would it be a good idea to sue the state whose DMV provided the driver-ed classes, because the driver seems undertrained?

Same with the student pilot who ran out of fuel during his solo practice and the ensuing (pun intended) lawsuit against the school for improper training.

I do understand what you are saying, but I will repeat my words: we, as humans, need to learn to accept our own responsibility in life, not hide behind some company to shift the blame for our mistakes.

(for the record, I don't have a dog in this fight, I don't have any feeling for/against Uber or their automated vehicles, I do not own any of their stock or use them in real life - this is a moral debate to me)

I'm almost certain the law would apply the doctrine of vicarious liability here. As an employer, you are responsible for the actions of your employees when they are undertaking duties that are part of their jobs.
 
That would mean the software and hardware engineers in this case if the car was “driving”. If the passenger in the car seat was “driving” then you’re correct.

And both would be legal constructs, not moral ones. The moral construct would make the engineers responsible.

And there’d be no concept of “a company” in a moral construct.

But law is rarely moral or just. The engineers won’t be charged with manslaughter or murder.

I bet the real reason this happened is that the car is tuned to look for car-sized-and-shaped things and person-sized-and-shaped things, and a person plus a bike confused the software.

Adding a FLIR sensor and integrating it with the LIDAR would have given enough data to know that the thing crossing in front of the car was warmer than the surroundings, and likely human or at least alive, even if it was shaped "wrong."
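A minimal sketch of the kind of fusion rule being suggested. Everything here is an assumption for illustration: the class names, the size thresholds, and the idea that LIDAR yields a bounding box while FLIR yields a temperature delta are invented, not taken from any real perception stack.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Hypothetical fused detection: LIDAR gives shape/size, FLIR gives heat."""
    width_m: float        # object width from the LIDAR point cluster
    height_m: float       # object height from the LIDAR point cluster
    temp_delta_c: float   # FLIR temperature above ambient background

def classify(d: Detection) -> str:
    """Toy fusion rule: even when the shape matches no known template,
    a warm object is treated as likely alive and worth braking for."""
    # Crude "vehicle" template: wide and noticeably wider than tall.
    if d.width_m >= 1.6 and d.width_m / d.height_m >= 1.5:
        return "vehicle-like"
    # Crude "pedestrian" template: narrow and upright.
    if d.width_m <= 0.8 and d.height_m >= 1.4:
        return "person-like"
    # Shape is confusing (e.g. person pushing a bike), but it's warm:
    # err on the side of treating it as alive.
    if d.temp_delta_c > 5.0:
        return "likely-alive"
    return "unknown"

# A person walking a bicycle: too wide for "person", too squarish for
# "vehicle", but clearly warmer than the pavement at night.
print(classify(Detection(width_m=1.8, height_m=1.7, temp_delta_c=12.0)))
# prints "likely-alive"
```

The point of the sketch is the last rule: shape-template matching alone drops the odd case into "unknown," while the thermal channel rescues it.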

FLIR would add a whole 'nuther set of problems. Heat from manholes, underground steam pipes, and even narrow beams of sunlight filtering through the leaves of a tree could confuse the hell out of it.

I have a client who uses FLIR to find honey bee nests inside walls. It takes very little heat to trigger them.

Rich
 
The more I think about it, the more I think that this autonomous car thing is much farther off than a lot of people think/hope.

The one thing I can’t reconcile is if computers/AI will ever be able to replicate the human mind when it comes to prediction. Most of us have been in a situation when all our previous life experiences culminate into a “spider sense” of sorts where we can predict the actions of others just by observing them. We’ve all been on the road, passing a cyclist or even another car on the road and thought “this guy doesn’t even know I’m here” and slow down based on the fact that the inattentive cyclist “looks” or behaves a certain way. Or we see kids playing in a yard and we slow down because we know that a ball may get kicked into the road. I think no matter how good the AI becomes, it’s not going to be able to “read” a fellow driver’s actions before they even happen, or see a scenario unfolding and base its driving on things that might happen (kid running into the street).

To this specific accident: I lived in Tempe long ago. I don’t know what road this is on, but most of these four-lane roads in Tempe are pretty well lit. Even if someone is crossing between the light posts, I can’t believe that an attentive driver wouldn’t have seen this pedestrian as she was crossing the other three lanes. I would also think that an attentive driver would have braked hard, swerved and laid on the horn, perhaps warning the pedestrian to stop, or jump back out of the way, perhaps getting injured, but maybe not fatally.
 
Murder requires premeditation. None of the people involved intended to kill anyone.
Premeditation exists when someone knows that the act, action, that someone takes, can result in death, or killing someone, and does not try everything they can do, to prevent it. Given the circumstances this could have been prevented by more / better testing and improved technology.
It is clear that they did not do everything they could to prevent it.

I do think that intention, intentional action, is necessary.

What is more of a problem, and it's just a matter of time: this will likely kill many more people, unless it is stopped now and sent back to research-and-test-only status for more and better testing and improved algorithms, sensors, and so on.
This is only the tip of the iceberg; wait until one of those huge vehicles gets into a situation where it is likely to kill many more people in one occurrence.
 
Still, even as bad as many drivers seem to be, I have to question the premise that autonomous vehicles are or can ever be made any safer than an average human driver. Where's the evidence?
Rich

I think your typical engaged driver will be safer than an autonomous car for many years to come. The average driver doesn't have accidents when s/he is paying attention. But a distracted driver (cell phone, screaming kid, applying mascara) or an impaired driver is likely to be substantially worse than an autonomous car.
 
I think your typical engaged driver will be safer than an autonomous car for many years to come. The average driver doesn't have accidents when s/he is paying attention. But a distracted driver (cell phone, screaming kid, applying mascara) or an impaired driver is likely to be substantially worse than an autonomous car.

While waiting for a light about a week ago, I saw a lady who had her cell phone mounted vertically on the windshield about a foot from her face in a spot where it surely must have obstructed her vision. She appeared to be having a Face Time call. I don't understand that. I don't even like the music to be too loud when I'm driving.

Maybe it's because I spent some time doing volunteer EMS. Seeing the aftermath of bad driving tends to drive home the seriousness.

Rich
 
Premeditation exists when someone knows that the act, action, that someone takes, can result in death, or killing someone, and does not try everything they can do, to prevent it. [snip]

Actually, no. Premeditated means it was planned and intended to kill somebody. What you described is negligence. So it could be negligent homicide. (And, for the record, homicide happens when a human is killed. It doesn't matter whether it's 1st degree (a.k.a. premeditated) murder, 2nd degree (a.k.a. in the heat of the moment), negligent or accidental.)

John
 
FLIR would add a whole 'nuther set of problems. Heat from manholes, underground steam pipes, and even narrow beams of sunlight filtering through the leaves of a tree could confuse the hell out of it.

I have a client who uses FLIR to find honey bee nests inside walls. It takes very little heat to trigger them.

Rich

Thus why I said it has to be integrated with other sensors. By itself, no.
 
It can only be homicide if the death is intentional. The rest of your question is up to the lawmakers to answer.

That’s why I used the word. Politicians will certainly make sure to use it if someday cars can drive themselves and someone goes against the crowd and decides to drive the vehicle themselves. The threat to conform will be a real threat of incarceration and being charged with homicide. Because by then, “Everyone KNOWS you’re not as safe as a computer...”

It’ll happen. I’ll be long dead before then, but it’ll happen.
 
I think your typical engaged driver will be safer than an autonomous car for many years to come. The average driver doesn't have accidents when s/he is paying attention. But a distracted driver (cell phone, screaming kid, applying mascara) or an impaired driver is likely to be substantially worse than an autonomous car.

I think you're right. But the engaged-to-distracted ratio seems to be about 1-to-30. :eek:
 
The more I think about it, the more I think that this autonomous car thing is much farther off than a lot of people think/hope.

The one thing I can’t reconcile is if computers/AI will ever be able to replicate the human mind when it comes to prediction. Most of us have been in a situation when all our previous life experiences culminate into a “spider sense” of sorts where we can predict the actions of others just by observing them. We’ve all been on the road, passing a cyclist or even another car on the road and thought “this guy doesn’t even know I’m here” and slow down based on the fact that the inattentive cyclist “looks” or behaves a certain way. Or we see kids playing in a yard and we slow down because we know that a ball may get kicked into the road. I think no matter how good the AI becomes, it’s not going to be able to “read” a fellow driver’s actions before they even happen, or see a scenario unfolding and base its driving on things that might happen (kid running into the street).
Another problem I can see is that, as a pedestrian crossing the street, I often look at the driver to make sure they see me and at least start to slow down or stop. With a computer-driven car, there is no one to look at.
 
Another problem I can see is that, as a pedestrian crossing the street, I often look at the driver to make sure they see me and at least start to slow down or stop. With a computer-driven car, there is no one to look at.

There's been discussion that some electric vehicles are too quiet; pedestrians don't hear them coming and step off a curb or step in front of them in a parking lot. They now have to make some sort of audible signal below 20 mph.

As far as making eye-contact with a driverless car:

It would be interesting if a driverless car were equipped with some sort of low powered visible laser or some other device that would be able to be aimed at a pedestrian as a warning that the car senses a possible conflict.
 
Premeditation exists when someone knows that the act, action, that someone takes, can result in death, or killing someone, and does not try everything they can do, to prevent it. Given the circumstances this could have been prevented by ... not f***ing with a cell phone while driving.
FTFY
 
...I disagree with blaming the company for this senseless death. Whether some Uber SW was keeping the car in the lane or keeping the speed (like cruise control), or whether the driver had their foot on the gas pedal, does not matter. The driver is the PIC equivalent on the road. She was responsible for the safe outcome of the trip. Otherwise why have the driver in the driver seat? Or why have the driver seat at all? Driverless cars should just have a back seat (or sofa), no?

If you set cruise control in your car, do you start texting because it frees up your time some? You still watch your speed and scan for obstacles and such so that you can stop in case you encounter something on the road.

If Uber cannot get out of this one, they need to hire a real lawyer.
It's likely that spending so little time looking at the road was normal practice for the guy in the driver's seat. If so, then if Uber ever reviewed the footage of the guy, they should have informed him that taking his eyes off the road to that extent was not acceptable. And if they weren't at least spot checking those videos, they should have been.
 
It's likely that spending so little time looking at the road was normal practice for the guy in the driver's seat. If so, then if Uber ever reviewed the footage of the guy, they should have informed him that taking his eyes off the road to that extent was not acceptable. And if they weren't at least spot checking those videos, they should have been.

Rhetorical question: Do you think DART, Wal-mart, UPS, FedEx, the USPS, etc. spot check videos of their drivers' attentiveness? If not, why would Uber's drivers be under a more critical eye? All of the drivers I listed are ultimately responsible for their vehicles/actions.
 
It's likely that spending so little time looking at the road was normal practice for the guy in the driver's seat. If so, then if Uber ever reviewed the footage of the guy, they should have informed him that taking his eyes off the road to that extent was not acceptable. And if they weren't at least spot checking those videos, they should have been.

They shouldn’t have even had a camera on him. Plausible deniability lost, and it added no value to their research. The footage obviously is a business liability.
 
Rhetorical question: Do you think DART, Wal-mart, UPS, FedEx, the USPS, etc. spot check videos of their drivers' attentiveness? If not, why would Uber's drivers be under a more critical eye? All of the drivers I listed are ultimately responsible for their vehicles/actions.

Most of those don’t have footage of their drivers. They limit their corporate nosiness to the location of the vehicle, only. They can use that data for all sorts of business purposes, but cameras catch both good and bad driver behavior and mostly aren’t used.

Same reason nearly everyone in aviation, from the companies to the pilots to the unions, has never allowed cameras in cockpits. Nothing comes of it for any party involved that outweighs the very bad that would also come of it. Not even regulators push for that one, even with the benefits to accident investigation.
 
There's been discussion that some electric vehicles are too quiet; pedestrians don't hear them coming and step off a curb or step in front of them in a parking lot. They now have to make some sort of audible signal below 20 mph.
Buncha young punks yelled at me in a parking lot because I was behind them in a Prius while they were walking in a big group in the center of the lane. "Get a real car!!"
 
Buncha young punks yelled at me in a parking lot because I was behind them in a Prius while they were walking in a big group in the center of the lane. "Get a real car!!"

They weren’t wrong you know. :) :) :)

Of course you could have just run them over to show it was plenty of “real car”.

Probably best you didn’t though.

Let the computer do it next time.

The Police Chief with his decades of automotive engineering and computer science knowledge and background, will go on TV in a couple of hours before true investigation even starts, and exonerate the car. :) :) :)
 
...I would also think that an attentive driver would have braked hard, swerved and laid on the horn, perhaps warning the pedestrian to stop, or jump back out of the way, perhaps getting injured, but maybe not fatally.
That raises a question: do self-driving cars, as currently implemented, ever honk the horn?
 
Rhetorical question: Do you think DART, Wal-mart, UPS, FedEx, the USPS, etc. spot check videos of their drivers' attentiveness? If not, why would Uber's drivers be under a more critical eye?
I don't have an opinion on what WOULD happen; I'm talking about what SHOULD happen. As Larry pointed out, it's a lot harder to stay attentive when you're not the one doing the driving.
 
The more I think about it, the more I think that this autonomous car thing is much farther off than a lot of people think/hope.

The one thing I can’t reconcile is if computers/AI will ever be able to replicate the human mind when it comes to prediction. Most of us have been in a situation when all our previous life experiences culminate into a “spider sense” of sorts where we can predict the actions of others just by observing them. We’ve all been on the road, passing a cyclist or even another car on the road and thought “this guy doesn’t even know I’m here” and slow down based on the fact that the inattentive cyclist “looks” or behaves a certain way. Or we see kids playing in a yard and we slow down because we know that a ball may get kicked into the road. I think no matter how good the AI becomes, it’s not going to be able to “read” a fellow driver’s actions before they even happen, or see a scenario unfolding and base its driving on things that might happen (kid running into the street).

To this specific accident, I’ve lived in Tempe long ago. I don’t know what road this is on, but most of these four lane roads in Tempe are pretty well lit, even if someone is crossing between the light posts, I can’t believe that an attentive driver wouldn’t have seen this pedestrian as she was crossing the other three lanes. I would also think that an attentive driver would have braked hard, swerved and laid on the horn, perhaps warning the pedestrian to stop, or jump back out of the way, perhaps getting injured, but maybe not fatally.

I work on systems (complex, with realtime input), first as a technician and later as a programmer.
I also believe we are a LONG way from the goal. We may never get there.
There are ALL kinds of issues. Sensors have to work flawlessly, or catch any instance of failure and "shut down" safely so that only manual human operation remains until fixed, and they have to do it in all kinds of weather: snow, rain, dirt, grime, temperature extremes, etc. Then there's maintenance (much looser, of course, than aircraft maintenance). And there's a HUGE issue with the firmware and software (updates, patches for bug fixes, etc.): in a complex system it is VERY hard (impossible, really) to thoroughly test that all the conditions and hazards on the road are dealt with correctly.

Also, the built-in reactions: there is a possibility of other, manual drivers finding or exploiting those to cut people off, sneak into a queue, etc.
And the laws: who gets a ticket if your self-driving car makes a mistake? Can a person drink and sit in the left-hand seat?

I've seen the state of software engineering decay. I see it in releases from the biggest companies on down: bugs, vulnerability to hacking, **** poor design that obviously comes from not enough planning or understanding of how the software should work.

Many get hung up on whether the woman was "at fault". It makes no difference. This is real life, and situations like these happen. The question is whether the average driver would have avoided killing her. Same thing with other drivers: they don't always act rationally, which seems to be a point for automation but in fact is not necessarily so. I think so far humans are better at identifying and avoiding "crazy drivers" (I'm always wary and trust no other car to do the right thing) and at processing complex information. Computers are faster. But it all comes down to the sophistication of the programming. And there we end up programming such complex interactions of software that humans cannot grasp it all and can make a minor "adjustment" with unforeseen effects.
And all testing of such things is bound to be spotty and not thorough; it cannot be otherwise.
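To put a rough number on why "thorough" testing is out of reach, even a crude combinatorial count explodes. The dimensions and counts below are invented for illustration, not taken from any real test plan:

```python
from math import prod

# Hypothetical scenario dimensions for a road-test matrix; every count
# here is an assumption made up for illustration.
dimensions = {
    "weather":        6,   # clear, rain, snow, fog, glare, night
    "road_type":      5,   # freeway, arterial, residential, ...
    "other_actors":   8,   # pedestrians, cyclists, animals, debris, ...
    "actor_behavior": 10,  # crossing, swerving, stopped, erratic, ...
    "sensor_state":   4,   # all good, degraded LIDAR, dirty camera, ...
    "speed_band":     5,
}

combinations = prod(dimensions.values())
print(f"{combinations:,} combinations")  # prints "48,000 combinations"
```

And this still treats each dimension as a handful of discrete buckets; continuous timing, geometry, and combinations of simultaneous actors multiply the space far beyond that.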
 
Was it a woman driving? I’ve seen two names published, one female one male, and it looked like a male to me. Maybe a recent trans?
 
I work on systems (complex, with realtime input), first as a technician and later as a programmer.
I also believe we are a LONG way from the goal. We may never get there.

Good to remember Clarke’s Laws:

  1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
  2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
  3. Any sufficiently advanced technology is indistinguishable from magic.


Humans programming chess computers took us only so far. When we allowed them to learn on their own, their progress was stunning - and humbling.

Yes, lots more variables in programming an autonomous vehicle. But I think the concept is sound, and technological progress will be quicker than we imagine. Again, the legal and moral issues may end up being bigger obstacles than the programming.
 
It also seems odd how short a distance the headlights were effective. Maybe it’s an artifact of the video, but headlights should throw a lot farther than they appear to in the video.
 
The deep-learning tech is actually quite astonishing. While the software is not really new, the scale of parallel computing and the enormous amount of memory available have changed the art of the possible.

Nate commented above (somewhere; long thread) that the AI wouldn’t figure out how to control a crippled plane. But there are videos of an AI learning on the fly how to do just that with a quadcopter. Clearly the laws of physics still apply: if the plane is no longer controllable, AI won’t help. But what many fail to grasp is the speed and scale of the trial and error these systems can execute: thousands, if not tens of thousands, of trials and evaluations in a second.

And they learn without preconceived notions. It’s conceivable (given enough trials) that AI drivers would recognize some random (to us) combination of head, feet and elbow position that tells them : this object is about to walk in front of me. We’d be looking at head & face & maybe velocity for the same result.

They’re tricky to train, no doubt. There’s a well-publicized case from the old Future Combat Systems project of training a vision algorithm to recognize tracked vs. wheeled vehicles. After they trained it and were getting good results with the training and test data, they tried it on real-world data. What they found was that all the wheeled-vehicle images they’d used for training had been collected on cloudy days. They’d successfully trained the algorithm to recognize cloudy days. This opacity in what the algorithm is actually using to decide is something to be aware of.

Also, a pure learning system is really bad until you’ve fed it a prodigious amount of data. Then it gets really good (assuming the data you fed it is good). Most common applications don’t want to wait that long so they use “guided” AI where they start with some rules and then train from there. This, of course, opens up the can of human error in selecting and programming the rules.

I’m dabbling for work right now and I do think this is a game changer. But like all tools, they’re never a substitute for understanding the problem.

John
 
They shouldn’t have even had a camera on him. Plausible deniability lost, and it added no value to their research. The footage obviously is a business liability.

Most Uber drivers (and probably most cab drivers in general) do use interior cameras. The reasons for the cameras are to protect the drivers from the passengers in terms of bad reviews, scams to avoid paying the fare, exaggerated injuries in the event of accidents, false rape allegations, drunkenness, and even physical assault. Thinkware even makes a combined front-and-rear dash cam that's specifically designed for the Uber market and features infrared illumination to capture the cabin interior at night. Many other manufacturers also make cameras marketed as "taxi cams."

Of course, one would think that drivers who know that they're also being recorded would have the common sense to conduct themselves accordingly (for example, by paying attention and watching the road). But apparently one would be wrong.

Rich
 
Does an Uber safety pilot need a commercial DL?

Only if the state and/or municipality requires one.

In most of New York State, for example, any class of CDL with a Passenger endorsement will work, or there's a separate Class "E" license that covers just taxi driving. The "E" requires only that the holder be 18, pass the eye test, and pay an extra fee.

New York City and some other municipalities require an additional license which may range from quite easy to quite difficult to get. In New York City it requires obtaining a Class "E" license, taking a course, passing a test, getting a physical, and undergoing a background check. In a few other municipalities, it requires only obtaining a Class "E" license and paying a fee. Most of the rest are somewhere in between.

Rich
 
Only if the state and/or municipality requires one.

In most of New York State, for example, any class of CDL with a Passenger endorsement will work, or there's a separate Class "E" license that covers just taxi driving. The "E" requires only that the holder be 18, pass the eye test, and pay an extra fee.

New York City and some other municipalities require an additional license which may range from quite easy to quite difficult to get. In New York City it requires obtaining a Class "E" license, taking a course, passing a test, getting a physical, and undergoing a background check. In a few other municipalities, it requires only obtaining a Class "E" license and paying a fee. Most of the rest are somewhere in between.

Rich
I wasn't sure if there were different requirements for the Uber self-driving car.
 
Another problem I can see is that, as a pedestrian crossing the street, I often look at the driver to make sure they see me and at least start to slow down or stop. With a computer-driven car, there is no one to look at.
No one to look at, and no one to look at you. How would a self-driving car know when a pedestrian is about to step off a curb without looking? A human can tell just by looking. Will a computer know that a little kid may be following that ball into the street? What will the computer do when it and another car arrive at an intersection at almost the same time and the other driver waves it through?
 
...Of course, one would think that drivers who know that they're also being recorded would have the common sense to conduct themselves accordingly (for example, by paying attention and watching the road). But apparently one would be wrong.
Yeah, that part baffles me.
 
It is beyond the scope of human capability--even with a camera pointed at you.
Beyond the scope of human capability to spend more time looking out the windshield than whatever it was he was doing instead?
 
Frankly, based on personal experience, I don’t believe that. Have you ever taught your teenage kid to drive?
Yes, I have.

Try it yourself when you're a passenger. Your attention will drift over time no matter how hard you try to pay attention. People are not wired for passive monitoring.
 