First driverless car pedestrian death

I always thought it meant the DA can get the Grand Jury to do what the DA wants, to wit, return an indictment against a ham sandwich:D.

Cheers

And sometimes they get an indictment they realize later they didn’t want politically.

Rocky Flats. Pretty much every surviving member of that Grand Jury is still ****ed. And gagged from talking about it, though some have pushed the boundaries of that gag order pretty hard.

But what’s 70 pounds of lost plutonium amongst Cold War fighting “friends”? Just water the grass with it. No problem. Hey they’re building houses there now. Or close enough anyway. :)

The similar lawsuits against all the other facilities were made to disappear before the Grand Juries were ever seated. Go figure.
 
More family members of a woman killed by an Uber self-driving vehicle have hired legal counsel,

All hoping to become independently wealthy....cha CHING..!!!
 
More family members of a woman killed by an Uber self-driving vehicle have hired legal counsel,

All hoping to become independently wealthy....cha CHING..!!!

Perhaps but it sounds like it’s really just a split family thing. Divorce and two households and separate lawyers. One lawyer waited until the other was done.
 
Perhaps but it sounds like it’s really just a split family thing. Divorce and two households and separate lawyers. One lawyer waited until the other was done.

Probably more truth in this statement, but I can't help but think money is the motivator.

Some litigious people seem to see not a tragedy but a payday. Myself, if one of my immediate family members were killed, I would probably not be satisfied until I saw a bill signed into law banning driverless vehicles from public roads.

I can only imagine how the safety driver in this accident car is feeling now.
 
I can only imagine how the safety driver in this accident car is feeling now.

I heard that the driver texted a counselor about the accident and was texted back not to worry, that accidents happen, and to move on with their life.
 
Myself, if one of my immediate family members were killed, I would probably not be satisfied until I saw a bill signed into law banning driverless vehicles from public roads.

The emotions are certainly understandable, but isn't this the kind of attitude that makes general aviation increasingly expensive, and under attack?
 
The emotions are certainly understandable, but isn't this the kind of attitude that makes general aviation increasingly expensive, and under attack?

It's pretty much an attitude that would stifle all new development. The car, the plane, the boat, the train, the bicycle, electricity, natural gas in the home, <insert most any modern technology> have all killed people. Of course, so did rocks and sticks before that, so I guess we're all effed...
 
It's pretty much an attitude that would stifle all new development. The car, the plane, the boat, the train, the bicycle, electricity, natural gas in the home, <insert most any modern technology> have all killed people. Of course, so did rocks and sticks before that, so I guess we're all effed...

If the new development is truly great enough to be revolutionary, the early lawsuits will be easy to recoup.

I’m not completely convinced driverless cars are really all they’re cracked up to be. They’ll be fine in boring, benign conditions that real drivers can also handle half asleep, while the drivers lose all their ability to practice anything and won’t be able to operate the vehicles in conditions the computers can’t handle.

If we get that far along, it’ll then turn into “This vehicle is not equipped with the VegOMatic 6000 version 12 for heavy rain driving. Do you have a special heavy rain permit? Y/N” or the car will just check the license plate via the network and ask. “Sorry, you aren’t allowed to drive in heavy rain. Shutting down. Would you like me to summon a professional driver? Price will be $200 an hour. Y/N. Okay, estimated wait is one day, three hours. Would you like to continue?”

And you’re stuck not evacuating from the path of the hurricane...
 
There have been a lot of comments about autonomous vehicles "learning" how to drive.

In this accident, initial information suggests that the vehicle failed to detect the obstacle (person and bicycle). That doesn't seem like a "learning" problem to me but an issue with the sensor system. A computer can't learn to avoid something that it can not "see".

The comments on "learning", however, are what I have been thinking about. What does "learning" mean in this context?

For us humans, the FAA defines four levels of learning that each CFI applicant must, um, learn: Rote, Understanding, Application, and Correlation. How does a computer do against these standards? I have some experience with programming, which gives me a little insight into computers, but not enough to fully answer that question.

Rote - This could be a computer collecting data. It has the data. It can recall the data. But, it can't do anything with the data.

Understanding - The FAA defines this as "To comprehend or grasp the nature or meaning of something." I'm not sure how a computer program would exhibit this level of learning. Perhaps taking the data and categorizing it, sorting it, into meaningful groups or sets? That's a very rudimentary level of "understanding" and certainly doesn't meet the common use of the word. Anyone have anything better for how a computer would achieve this level?

Application - This is, "The act of putting something to use that has been learned and understood." Would this be using the data to take actions based on decisions (conditionals)?

These first three seem, to me, to be within the capabilities that I understand computers to have. None of those three skills seem to solve the problem we currently have with autonomous vehicles. They will not be enough.

Correlation - "Associating what has been learned, understood, and applied with previous or subsequent learning". I think this is what the posters are talking about when they say "learning".

How does a machine demonstrate correlation? What do we currently have that approaches this level?
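
Just to make the first three levels concrete, here's a loose sketch in Python. Everything in it (the names, the 30-meter threshold, the idea of logging range and closing speed) is invented for illustration and isn't from any actual vehicle software:

Code:
# Rote: collect and recall raw data, nothing more
log = []

def record(range_m, closing_speed_mps):
    log.append({"range_m": range_m, "closing_speed_mps": closing_speed_mps})

# "Understanding" (in the thin, computer sense): sort the data into meaningful categories
def classify(sample):
    if sample["closing_speed_mps"] > 0:
        return "closing"
    return "opening_or_static"

# Application: put the categorized data to use in a conditional action
def respond(sample):
    if classify(sample) == "closing" and sample["range_m"] < 30:
        return "brake"
    return "maintain"

record(25, 8)            # something 25 m away, closing at 8 m/s
print(respond(log[-1]))  # -> "brake"

Correlation, tying today's situation back to everything learned before and generalizing from it, is exactly what plain conditionals like these never reach, and that's where the neural network discussion below comes in.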
 
Think of captchas. Like the deal when you log on to some site and it asks you to click on every street sign in an image, or every storefront, or something like that. Captchas like this work because they are very easy for humans and very difficult for computers. Driving down a busy street is like one big captcha.
 
Think of captchas. Like the deal when you log on to some site and it asks you to click on every street sign in an image, or every storefront, or something like that. Captchas like this work because they are very easy for humans and very difficult for computers. Driving down a busy street is like one big captcha.

And realize all those captchas were being used to train Google's image recognition neural networks.

"Learning" in this context involves both the sensing and the response. Both are composed of multiple neural networks which have to be "trained" by feeding them data sets and telling them the answer over and over thousands and tens of thousands (and hundreds of thousands) of times until they seem to do the right thing on their own when fed the test data sets. They are not debuggable in the traditional code sense where you can trace the logic and see where they made a wrong turn. Then they are turned loose in the real world to see how they do on general data. The problem with these is if the training data sets are skewed-even in an entirely unrelated way-they can learn the wrong things. I may have cited this earlier in the thread, but during the Army FCS effort, computer vision neural nets were trained for target recognition-specifically to recognize tracked vs wheeled vehicles. What nobody noticed until real world testing began was all the training and test data images of wheeled vehicles were taken on cloudy days. What the neural net had actually been trained to do was recognize cloudy conditions. Which it did really well.

So the sensing system(s) are using computer vision neural nets to detect and recognize things and then passing that information to behavioral neural nets that have been "trained" to take action. What could go wrong?
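
If anyone wants to see that failure mode in miniature, here's a toy sketch in plain NumPy (nothing to do with the actual FCS or Uber software; the features and numbers are invented). "Brightness" stands in for the cloudy/sunny confound: in the skewed training set it tracks the label almost perfectly, so the classifier latches onto it and then does much worse on field data where the weather is unrelated.

Code:
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, confounded):
    tracked = rng.integers(0, 2, n)                  # 1 = tracked vehicle, 0 = wheeled
    if confounded:
        # every wheeled-vehicle "photo" happens to have been taken on a cloudy day
        brightness = 1 - tracked + rng.normal(0, 0.1, n)
    else:
        brightness = rng.random(n)                   # in the field, weather is unrelated
    tracks_signal = tracked + rng.normal(0, 0.5, n)  # the noisy feature we *wanted* it to learn
    return np.column_stack([tracks_signal, brightness]), tracked

# train a tiny logistic regression by gradient descent on the skewed data
X, y = make_data(5000, confounded=True)
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

print("learned weights [tracks, brightness]:", np.round(w, 2))

# good-looking accuracy on the equally-skewed test set, worse on realistic data
for name, conf in [("skewed test set", True), ("field data", False)]:
    Xt, yt = make_data(2000, confounded=conf)
    acc = np.mean(((Xt @ w + b) > 0) == yt)
    print(f"accuracy on {name}: {acc:.2f}")

The exact numbers don't matter; the point is that the skewed training set rewards the wrong feature and the test set, skewed the same way, never reveals it.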

John
 
In this accident, initial information suggests that the vehicle failed to detect the obstacle (person and bicycle). That doesn't seem like a "learning" problem to me but an issue with the sensor system. A computer can't learn to avoid something that it can not "see".

That particular vehicle has Lidar so it should have “seen” the obstacle. It didn’t identify it as an object that was about to cross the road though.
 
There's another Tesla in the news today at that same accident location:
https://jalopnik.com/video-appears-to-show-tesla-autopilot-veering-toward-di-1825016336

Apparently there are lane markers that under certain sun angles fool the guidance system.

Tesla’s quote in response is interesting:


—————-
Autopilot does not, as ABC 7’s reporting suggests, make a Tesla an autonomous car or allow a driver to abdicate responsibility. To review it as such reflects a misrepresentation of our system and is exactly the kind of misinformation that threatens to harm consumer safety. We have been very clear that Autopilot is a driver assistance system that requires the driver to pay attention to the road at all times,
—————-

I think they should start using a different term, then, for their “driver assistance system”. In my airplane, I can comfortably take my hands off the controls for long stretches of time and let the autopilot fly the plane. Tesla is calling their system an autopilot, but it really can’t be trusted (no surprise there) to do the job of a real autopilot.

If they really want to have the driver paying attention at all times, they could have attention detection systems or hands-on-wheel sensors to enforce that. This is just their mealy-mouthed way of trying to evade responsibility for putting a system in a car which is sold as being able to self-drive and is not up to the job.

The sad thing is that the dead driver complained to Tesla multiple times about this exact behavior in this exact spot.

People need to really rethink this whole autonomous vehicle concept.


 
I think they should start using a different term, then, for their “driver assistance system”. In my airplane, I can comfortably take my hands off the controls for long stretches of time and let the autopilot fly the plane. Tesla is calling their system an autopilot, but it really can’t be trusted (no surprise there) to do the job of a real autopilot.

Everywhere in the vehicle it's just called AutoSteer. AutoSteer (Beta) actually. AutoSteer (Beta) + TACC to be exact.

I've never seen the vehicle refer to the system as AutoPilot. That's a marketing term.

If they really want to have the driver paying attention at all times, they could have attention detection systems or hands-on-wheel sensors to enforce that. This is just their mealy-mouthed way of trying to evade responsibility for putting a system in a car which is sold as being able to self-drive and is not up to the job.

It DOES have hands-on-wheel sensors. In fact, I have to grip the wheel tighter on AutoSteer than I grip it if I'm just steering myself. If you don't do that it disengages. If it disengages 3 times, you have to park the vehicle before you can engage it again.
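
For what it's worth, the enforcement behaves roughly like this. This is a hypothetical reconstruction of what I see from the driver's seat, not Tesla code, and the torque and timing thresholds are made up:

Code:
class AutoSteerGuard:
    """Toy model: hands-off disengagements accumulate until a park is required."""

    def __init__(self, max_strikes=3):
        self.strikes = 0
        self.max_strikes = max_strikes
        self.locked_out = False

    def update(self, wheel_torque_nm, hands_off_timer_s):
        if self.locked_out:
            return "unavailable until parked"
        if wheel_torque_nm < 0.5 and hands_off_timer_s > 30:  # thresholds invented
            self.strikes += 1
            if self.strikes >= self.max_strikes:
                self.locked_out = True
            return "disengaged"
        return "engaged"

    def park(self):
        self.strikes = 0
        self.locked_out = False

guard = AutoSteerGuard()
print(guard.update(wheel_torque_nm=1.0, hands_off_timer_s=0))   # engaged
print(guard.update(wheel_torque_nm=0.0, hands_off_timer_s=45))  # disengaged (strike 1)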
 
Everywhere in the vehicle it's just called AutoSteer. AutoSteer (Beta) actually. AutoSteer (Beta) + TACC to be exact.

I've never seen the vehicle refer to the system as AutoPilot. That's a marketing term.



It DOES have hands-on-wheel sensors. In fact, I have to grip the wheel tighter on AutoSteer than I grip it if I'm just steering myself. If you don't do that it disengages. If it disengages 3 times, you have to park the vehicle before you can engage it again.

Well, in their quote, they called it Autopilot. And they call it Autopilot on their web site. Their website also says the car is capable of self driving.

Not disputing what you say about having to have hands on the wheel, but then I wonder why the deaths in that case? Clearly something is wrong with the system and/or with the training of the drivers.


 
I think they should start using a different term, then, for their “driver assistance system”. In my airplane, I can comfortably take my hands off the controls for long stretches of time and let the autopilot fly the plane. Tesla is calling their system an autopilot, but it really can’t be trusted (no surprise there) to do the job of a real autopilot.

But there are many autopilot systems in GA planes that will happily fly the aircraft into the side of a mountain without human intervention.
 
But there are many autopilot systems in GA planes that will happily fly the aircraft into the side of a mountain without human intervention.

So true. Which is why I keep a close eye on what it is doing all the time. However, calling the Tesla system an autopilot, to me, implies that it has capabilities that it in no way has.


 
Well, in their quote, they called it Autopilot. And they call it Autopilot on their web site. Their website also says the car is capable of self driving.

Self Driving is not AutoPilot. It's a different feature. Currently you can buy 2 things:

EAP -> Enhanced AutoPilot ($5000). This is a 4-camera-based Level 2 autonomy system that requires you to keep your hands on the wheel.

FSD -> Full Self Driving ($3000). This is an 8-camera-based Level 5 autonomy system, still in development, that is supposed to drive without a driver. But at the moment it's just an interest-free loan to Tesla. The feature doesn't exist yet, and may never exist. Even if they could get the tech right, it may never get regulatory approval.

Not disputing what you say about having to have hands on the wheel, but then I wonder why the deaths in that case? Clearly something is wrong with the system and/or with the training of the drivers.

The logs show that the driver removed his hands from the wheel for 6 seconds. (He received warnings). He was likely looking down at his phone and ignored the flashing screens.
 
The logs show that the driver removed his hands from the wheel for 6 seconds. (He received warnings). He was likely looking down at his phone and ignored the flashing screens.

The Tesla yesterday was in the same lane. The vehicle followed the lane markings which began aiming it directly at the same barrier. This driver took over. There were no warnings about lane departure.

The accident driver may have been getting warnings about driving hands-free, but might not have gotten any warning of an impending collision.
 
The Tesla yesterday was in the same lane. The vehicle followed the lane markings which began aiming it directly at the same barrier. This driver took over. There were no warnings about lane departure.

The accident driver may have been getting warnings about driving hands-free, but might not have gotten any warning of an impending collision.

There are no lane departure warnings in Tesla's AP2 software. I don't have them on my Ford's TACC either. And neither car's radar-based TACC nor Emergency Braking will stop for a non-moving target; there are too many false positives with radar systems.
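
To make the filtering concrete, the logic is roughly along these lines (entirely hypothetical pseudologic, not anyone's production code): a return that is closing at about your own speed is indistinguishable from a sign, overpass, or manhole cover, so it gets thrown out rather than braked for.

Code:
def is_braking_candidate(ego_speed_mps, range_rate_mps, stationary_margin_mps=2.0):
    """range_rate_mps < 0 means the radar return is closing on us."""
    object_ground_speed = ego_speed_mps + range_rate_mps
    if abs(object_ground_speed) < stationary_margin_mps:
        return False              # looks stationary relative to the road: ignored
    return range_rate_mps < 0     # only act on moving objects that are closing

# A car stopped dead ahead while we do 30 m/s closes at -30 m/s -> filtered out.
print(is_braking_candidate(ego_speed_mps=30.0, range_rate_mps=-30.0))  # False
# A car ahead doing 20 m/s while we do 30 m/s closes at -10 m/s -> acted on.
print(is_braking_candidate(ego_speed_mps=30.0, range_rate_mps=-10.0))  # True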

You can crash any car which you drive beyond the capabilities of the system. I wouldn't take my hands off the wheel when on AutoSteer any more than I would tuck my feet up under my body after I've put a car into traditional cruise control. But some people do that. Sometimes unfortunately with disastrous consequences.

I see reports of AutoSteer giving people a false sense of security, and that's why they let go of the wheel for so long. I honestly have no idea how. AutoSteer at best drives like a teenager taking their 3rd or 4th lesson. You must be a REALLY bad driver in general to feel secure enough with it to trust it completely. That doesn't mean it doesn't have a place, the same way cruise control has a place even though it can ram you at 80 mph into a 30 mph curve if you don't watch things. It still makes driving more pleasurable, though. Just don't think it's any more than it is.
 
Huh. I know Tesla drivers who claim their cars basically drive them to work. Not true?
 
Huh. I know Tesla drivers who claim their cars basically drive them to work. Not true?

As true as it would be to claim: "My GFC 500 basically flew me from LA to Vegas this morning".
 
As true as it would be to claim: "My GFC 500 basically flew me from LA to Vegas this morning".
That's like alien technology compared to what I fly, but it looks like you can punch in a GPS route and that autopilot will follow it, even flying an approach. If that's what the Tesla does, that's fairly autonomous.
 
A drivered car managed to sideswipe me yesterday. The driver did a poor job of it and only managed some nearly invisible scratches on some decorative plastic bits. I didn’t have time to dodge the lane change swerve by the other car so I didn’t materially contribute to the poor quality of the ‘crash’.

Question: would a driverless car have done a better job of hitting my vehicle and actually removed paint and/or bent metal?
 
A drivered car managed to sideswipe me yesterday. The driver did a poor job of it and only managed some nearly invisible scratches on some decorative plastic bits. I didn’t have time to dodge the lane change swerve by the other car so I didn’t materially contribute to the poor quality of the ‘crash’.

Question: would a driverless car have done a better job of hitting my vehicle and actually removed paint and/or bent metal?
If you want something done right, trust it to a 2500lb, 60 mph Roomba.
 
That's like alien technology compared to what I fly, but it looks like you can punch in a GPS route and that autopilot will follow it, even flying an approach. If that's what the Tesla does, that's fairly autonomous.

I wasn't even thinking NAV integration, just HDG and ALT hold. Tesla's Autopilot doesn't have any NAV integration. All it knows about the world is the 500 ft it can see in front of it. Even for simple things, like a route programmed on the GPS that tells you to take the right fork where the road divides, Autopilot will just randomly pick a fork when it gets there. Zero NAV intelligence.

That's the first thing that will differentiate Full Self Driving from Autopilot in the future: the ability for the AutoSteer computer to use the map and route data from the Navigation computer.

I didn't pre-purchase the FSD feature; I'm waiting on the sidelines to see it first. So far, nobody outside Tesla has seen it. (It will kinda suck if it comes out next month and it's amazing and I can summon the car from 1000 miles away. Then it will cost me $1000 more than if I had pre-purchased the option. But somehow I think I'm safe for a few years...)
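
A toy illustration of the difference, invented for this post and not based on Tesla's software: today's lane-keeping layer only has lane confidence to go on, while a nav-integrated version could break the tie at a fork using the planned route.

Code:
def pick_fork_vision_only(lane_confidence):
    """All AutoSteer 'knows' today: which painted lane looks strongest right now."""
    return max(lane_confidence, key=lane_confidence.get)

def pick_fork_with_nav(lane_confidence, route_says):
    """A nav-integrated system could let the planned route settle a near-tie."""
    if route_says in lane_confidence:
        return route_says
    return pick_fork_vision_only(lane_confidence)

lanes = {"left_fork": 0.51, "right_fork": 0.49}
print(pick_fork_vision_only(lanes))             # left_fork: effectively arbitrary
print(pick_fork_with_nav(lanes, "right_fork"))  # right_fork: follows the route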
 
When this thread started, I was pretty optimistic about driverless car tech. Now that more facts have emerged, I think we're a long way off. When the Uber car drives right into a moving human-size object, and the Tesla will happily slam you into a concrete barrier at 60+ mph (faded lane markings? Deal with it... humans do), that tells me that we haven't even solved the easy problems. And that's not even getting to the difficult edge cases.
 
When this thread started, I was pretty optimistic about driverless car tech. Now that more facts have emerged, I think we're a long way off. When the Uber car drives right into a moving human-size object, and the Tesla will happily slam you into a concrete barrier at 60+ mph (faded lane markings? Deal with it... humans do), that tells me that we haven't even solved the easy problems. And that's not even getting to the difficult edge cases.

Musk apparently had an epic meltdown at his latest earnings release conference call. I haven’t found the audio yet but the articles are entertaining.
 
When this thread started, I was pretty optimistic about driverless car tech. Now that more facts have emerged, I think we're a long way off. When the Uber car drives right into a moving human-size object, and the Tesla will happily slam you into a concrete barrier at 60+ mph (faded lane markings? Deal with it... humans do), that tells me that we haven't even solved the easy problems. And that's not even getting to the difficult edge cases.

All the way at the beginning of this thread someone asked "is it seen as different when a human driver takes a life versus an automated driver?" (paraphrased). I think yes. There is a difference and a different fear. The human driver has existed since the beginning of the automobile age. We are used to the human driver and their foibles. New tech (like cell phones) adds new distractions, but there has always been a range of good, mediocre, and bad drivers.

Also drunk drivers. Drunk drivers have zero sense, will hit a lamppost, drift into the other lane, etc. BUT...with all this range most of us think we are pretty good drivers, and the car is in our hands. Our fate is in our abilities.

Anyone who has worked in software programming, and even many users, has come across bugs and their unpredictability. I do the coding and maintenance on a fairly old but very complex system. Untested conditions, though, mean that with so many possible "states", so many processes that access data, and human interaction, every week we get some combination of conditions that causes minor problems.

I learned very early on, when I had found, fixed, and tested a bug (my testing before the customer would also test), to stop saying to the customer "This is a small, isolated change and can't affect the main pipeline." Of course, after the "fix" caused another problem that seemed totally illogical (the other problem showing up in a totally different place in the flow, seemingly unrelated) and I FOUND the bug I had now caused, it all made sense: "Oh, of course. That bit got set and then never unset," or whatever.

Point being, the complex code I work with, compared to what an autonomous auto runs, would be like a calculator next to a supercomputer.

They simply cannot test sufficiently. There is no way. So they will keep testing and fixing bugs, and the best they can do is small tests of the new software. Add to that the complexity of the hardware, and the RANGE of it: even brand new sensors are not going to be exactly the same in all respects, and it only gets worse after years on the road. Sleet, hail, mud, other conditions, aging, heat, cold... all these things play havoc with electronics and sensors.

As far as what we pay attention to... and accept: it is a bit like terrorism vs. other causes of death. We focus on the terror, while bathtubs kill more people, or guns, or this or that, or workplace accidents. The one (even though it is a far smaller share of deaths) we focus on totally, while the others are like "eh... <excrement> happens." So we are pretty irrational, but yeah... it's harder to accept a dumb sensor or bug causing a fatality than a driver causing one.
 
All the way at the beginning of this thread someone asked "is it seen as different when a human driver takes a life versus an automated driver?" (paraphrased). I think yes....
Maybe so, but earlier in the thread it was pointed out that the fatality rate per mile for self-driving cars has a long way to go before it gets better than that of human drivers.
 
Maybe so, but earlier in the thread it was pointed out that the fatality rate per mile for self-driving cars has a long way to go before it gets better than that of human drivers.

I don't have a lot of faith in self-driving cars. I see too many problems with testing and maintenance. The one thing I haven't heard about, which I think would be a good indicator: what are auto insurance companies planning to do with this? When accidents occur, if the software is at fault, who pays?

Anyway, even though I don't believe we will be able to make it safer, there are a number of drivers on the road (I'm not one of them :) ) that I would love to see move over to even untested self-driving cars. Numbnuts that really shouldn't be driving.

The stats are kind of misleading. I personally think that when you get to around 50% self-driving, a LOT of bugs are going to start to show up.
Also, human drivers will learn to game the system... expect more rowdy driving once they can tell a car is self-driving.

But the stats you mention are a little unreliable. It's only when they accumulate a decent number of miles that the comparison will mean anything.

I read somewhere about the advent of the automobile. The very FIRST recorded automobile accident happened pretty early on. If you used those stats, they'd probably look worse than self-driving cars do now.
 
I don't have a lot of faith in self-driving cars....
Me neither!

...the stats you mention are a little unreliable. It's only when they accumulate a decent number of miles that the comparison will mean anything.

Based on estimates posted earlier in the thread for Waymo and Uber, they will have to increase their number of autonomously driven miles to about 14 times what it is now, without a single fatality, in order to equal the record of human drivers. And the unreliability of the data due to the relatively small sample size will make it take even longer to validate autonomous vehicle technology.
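
As a back-of-the-envelope check of that multiple (the figures below are round illustrations I'm plugging in, not the exact estimates from earlier in the thread):

Code:
# Roughly one US human-driver fatality per ~86 million vehicle miles (about
# 1.16 deaths per 100M miles), and call the combined Waymo + Uber autonomous
# mileage to date roughly 6 million miles. Both numbers are assumptions here.
human_miles_per_fatality = 86_000_000
autonomous_miles_so_far = 6_000_000

# With one fatality already on the books, the fleets would need to reach the
# human miles-per-fatality figure without another death just to pull even.
multiple_needed = human_miles_per_fatality / autonomous_miles_so_far
print(f"~{multiple_needed:.0f}x the current autonomous mileage")  # ~14x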

I read somewhere about the advent of the automobile. The very FIRST recorded automobile accident happened pretty early on. If you used those stats, they'd probably look worse than self-driving cars do now.
How long before the first fatality?
 
Anyone else see the news that squeaked out...? The car did see her but some brilliant engineer’s code tagged her as a false target.

They haven’t said what the car thought she was or which “stop the car from reacting” logic thread it went down.
 