First driverless car pedestrian death

Yes, I have.

Try it yourself when you're a passenger. Your attention will drift over time no matter how hard you try to pay attention. People are not wired for passive monitoring.

When my son was learning to drive, my attention did not waver for a moment. The terror was sufficient to keep me engaged.
 
Here is one guy's take on driverless cars. I agree with him. Go to the 1-minute mark to skip the intro.



 
Frankly, based on personal experience, I don’t believe that. Have you ever taught your teenage kid to drive?
Not to mention, anyone who has taught anyone to fly... ;)

Seems it should be possible to sustain passive monitoring as long as the "passenger" keeps in mind that the "driver" is likely going to try to kill him, or someone else, at some point.
 
Yes, I have.

Try it yourself when you're a passenger. Your attention will drift over time no matter how hard you try to pay attention. People are not wired for passive monitoring.
This.

Probably most people who have flown with an autopilot are guilty of this at least once in a while, if not a lot.
 
I think I was born at the right time (1963). I'm a car nut to the core and have autocrossed and raced for more than 25 years. Got to drive a Formula Atlantic around Buttonwillow for a 20-lap test, did a GT3 Cup school at Barber Motorsports Park, even drove a rhd Miata in a short race at the Tsukuba track in Japan. (The more interesting story is how I bluffed my way into getting a temp FIA competition license for the event! :))

But...30 years from now when I'm old and feeble, I will wholeheartedly welcome...even embrace...an autonomous car that can take me to the grocery store, bingo parlor, doctor's office, and off to visit friends and relatives. Heck, even to the airport to jawbone with all the other guys who've lost their medicals, and just watch planes take off and land. The autonomous technology will be pretty advanced by that time. Mobility and independence, baby!

Until then, I want to be in full control of a proper rear-drive, manual transmission performance coupe/sports car with a genuine suck-squeeze-bang-blow, dinosaur-juice engine that makes glorious sounds.

30 years from now you won't need a medical certificate to ride in a self-flying airplane...
 
Should AI bots lie? Hard truths about artificial intelligence

Excerpt:

New technologies always come with unforeseen problems. In the 1800s iron railway bridges were considered modern wonders, until they started collapsing and plunging trainloads of people to gruesome deaths. Metal fatigue -- who knew? To this day Canadian engineers get iron rings made from a collapsed bridge, to remind them that lives depend on their work.

Compared to civil engineering, of course, software construction has all the discipline of a pack of rabid ferrets. So, yeah, let's celebrate AI's coming good times, before the bad times roll.

In a discussion of that article on another forum, one of the members had this to say:

"Humans are playing around with AI with all the consideration and forethought of a chimpanzee that has been given a hand grenade. Except that, unlike the chimp, we know that there is a non-zero possibility that things will work out very badly, as in a near-extinction level event for humanity, but we continue to muck around with it."
The context of that remark was a discussion of how we can teach AI ethics. The question that this raises in my mind is "Whose ethics?"
 
To this day Canadian engineers get iron rings made from a collapsed bridge, to remind them that lives depend on their work.

Wearing a reminder of someone else’s screwup sounds very strange to me. Their screwup, not mine.
 
Do Doctors get a ring with a miniature malpractice suit or model of someone else’s spleen on it to remind them their predecessors killed somebody? LOL.
 
Do Doctors get a ring with a miniature malpractice suit or model of someone else’s spleen on it to remind them their predecessors killed somebody? LOL.

No. They get very large insurance policies with very large premiums because they all make mistakes. Every single one. Because they’re human, they’re not perfect, and no one can know all things at all times. The rings are a reminder that mistakes can cause loss of life, and you don’t see a collapsed bridge every day. (They usually get torn down and replaced, and the old girders get turned into jewelry, apparently.) Doctors need no such totem to remember the outcome of their potential failures, for they only need to interact with a few people to see how messed up most humans already are. ;)
 
Wearing a reminder of someone else’s screwup sounds very strange to me. Their screwup, not mine.
We pilots often read aircraft accident reports to remind ourselves of other people's screw-ups, presumably in an attempt to avoid having the same things happen to us.
 
We pilots often read aircraft accident reports to remind ourselves of other people's screw-ups, presumably in an attempt to avoid having the same things happen to us.

Yeah but... I don’t make a ring out of the wreckage and wear it. :)
 
Maybe the Uber self driving car engineers should make some nice rings out of the bumper of the car, after they wash the blood off of it, anyway... :)

Too soon? :) :) :)
 
Was this Uber-mobile actually taking pax (maybe not at the time of the accident)? Or is it still in the testing stage?
 
Since 2015:
Number of deaths caused by Uber drivers: 48
Number of physical assaults by Uber drivers: 91
Number of sexual assaults by Uber drivers: 362
Number of kidnappings by Uber drivers: 16

Since 2016:
Attempt at eliminating Uber drivers kills 1 person... must go back to those Uber drivers!
 
Hi everyone.
There are all kinds of excuses one can come up with for all types of procedures.
There is NO excuse for killing people when the same, or better, environments and conditions exist to do the proper testing before releasing it to the public.
This is nothing more than intent to kill, with approval from some who know very little about the technology and/or the testing that can and should be done.
If anyone thinks, by looking at the video, that this was an accident, and justified, I suggest that those people believe there is no killing that cannot be justified or excused.
 
Hi everyone.
There are all kinds of excuses one can come up with for all types of procedures.
There is NO excuse for killing people when the same, or better, environments and conditions exist to do the proper testing before releasing it to the public.
This is nothing more than intent to kill, with approval from some who know very little about the technology and/or the testing that can and should be done.
If anyone thinks, by looking at the video, that this was an accident, and justified, I suggest that those people believe there is no killing that cannot be justified or excused.

After you run controlled trials, you have to start using it in real-world situations. While under testing, it is anticipated that autonomous cars will have accidents if left entirely to their own devices. That's why there was a human monitoring the self-driving car. Unfortunately, neither the self-driving car nor its human safety net worked. Stuff happens.

Yes, it is sad, but self driving cars, when mature, will save tens of thousands of lives a year. Between here and that panacea, there will be bumps in the road. That's how development and product maturity works. It took 100 years to get aviation to the level of safety commercial aviation achieves today. Thousands of bodies have been splattered across the globe on the path to today's extremely safe commercial flying. Self driving cars will have a maturation process too.
 
Since 2015:
Number of deaths caused by Uber drivers: 48
Number of physical assaults by Uber drivers: 91
Number of sexual assaults by Uber drivers: 362
Number of kidnappings by Uber drivers: 16

Since 2016:
Attempt at eliminating Uber drivers kills 1 person... must go back to those Uber drivers!
In order for those numbers to be a valid comparison of risk, they need to be expressed as a ratio of incidents to miles driven for each category.
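
Something like this (the mileage figures are made up for illustration, since we don't have real exposure numbers):

```python
# Raw incident counts only become comparable once divided by exposure
# (vehicle-miles driven). All numbers below are hypothetical placeholders.

def incidents_per_million_miles(incidents: int, miles: float) -> float:
    """Incidents per one million vehicle-miles."""
    return incidents / miles * 1e6

# Hypothetical exposure figures, for illustration only.
human_rate = incidents_per_million_miles(incidents=48, miles=10e9)
av_rate = incidents_per_million_miles(incidents=1, miles=3e6)

print(f"human-driven: {human_rate:.4f} incidents per 1M miles")
print(f"self-driving: {av_rate:.4f} incidents per 1M miles")
```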
 
After you run controlled trials, you have to start using it in real-world situations. While under testing, it is anticipated that autonomous cars will have accidents if left entirely to their own devices. That's why there was a human monitoring the self-driving car. Unfortunately, neither the self-driving car nor its human safety net worked. Stuff happens.

Yes, it is sad, but self driving cars, when mature, will save tens of thousands of lives a year. Between here and that panacea, there will be bumps in the road. That's how development and product maturity works. It took 100 years to get aviation to the level of safety commercial aviation achieves today. Thousands of bodies have been splattered across the globe on the path to today's extremely safe commercial flying. Self driving cars will have a maturation process too.
Historically, some new technologies have been successful and achieved wide adoption, and some haven't. It's not possible to predict which this is.
 
Yes, it is sad, but self driving cars, when mature, will save tens of thousands of lives a year.

This is an opinion not based on fact. What evidence do you have for making this statement? Wishful thinking? I’m not saying you’re wrong, but I’m also not saying you’re right. As an engineer, I would say that there is a possibility that autonomous vehicles might be somewhat safer than non-autonomous vehicles, but I would definitely not say that it is a foregone conclusion.

In any case, it seems to me that the current data shows that self-driving cars are certainly not safer than cars with a driver behind the wheel when the data is normalized by the number of vehicle miles driven, and I am in the camp that these things need to get off the public roads and go back into controlled testing facilities. I also believe that there need to be standards and metrics put into place which must be met before self-driving cars are allowed back onto public roads.




 
Safety aspects aside, self driving cars strike me as a totalitarian state's dream: A record of your journey is created whenever you travel, and they can turn off your transport whenever they want. A lot of modern conveniences we rely on today seem to be created to reduce one's independence and enhance one's reliance on the state. Who can navigate by the stars today? Who can navigate with a map and compass? If the electricity goes out, how cold will your house get? I know I seem to be a prepper, but we heat with coal and live near a spring out of choice. And I have kept my sextant!
 
This is an opinion not based on fact. What evidence do you have for making this statement? Wishful thinking? I’m not saying you’re wrong, but I’m also not saying you’re right. As an engineer, I would say that there is a possibility that autonomous vehicles might be somewhat safer than non-autonomous vehicles, but I would definitely not say that it is a foregone conclusion.

In any case, it seems to me that the current data shows that self-driving cars are certainly not safer than cars with a driver behind the wheel when the data is normalized by the number of vehicle miles driven, and I am in the camp that these things need to get off the public roads and go back into controlled testing facilities. I also believe that there need to be standards and metrics put into place which must be met before self-driving cars are allowed back onto public roads.

Current data = one accident. I have a hard time drawing any conclusions from one accident. The fact that there was a driver behind the wheel in this accident illustrates exactly why there are good reasons to automate the process so people can't screw it up by (for example) playing on their phones while operating a vehicle.

I don't know how much behind-the-scenes testing was done before the technology was allowed on the roads, but nobody is sending V1.0 out on public streets. I tend to believe that Uber had done enough testing to have a reasonable level of confidence in their technology before sending it out on the road. The cars were in a real-world environment with human backup, but the person responsible for maintaining control in this test screwed the pooch. That resulted in one of the types of accident automated cars will eventually prevent - distracted driving.

Whether automated vehicles will be safer than human guided vehicles is the big question. My opinion is that engaged drivers will be better drivers on the whole than automated cars until the technology is very mature, but the automated vehicles will be more adept than the impaired (whether by age, lack of experience, chemicals, or distraction). I also believe that if you can remove the bad (impaired) drivers from the mix, accident rates will drop disproportionately. As the technology improves, automated vehicles will be a better (safer, more efficient) choice for a bigger and bigger slice of the population and that one day, manual driving will be mostly for enjoyment, not for daily transportation.
 
Safety aspects aside, self driving cars strike me as a totalitarian state's dream: A record of your journey is created whenever you travel...

Between cell phone data and traffic recording cameras all over the place, Big Brother can already figure out where you've been most of the time. It gives me the willies just thinking about it.
 
I think accident avoidance can be broken down into roughly 3 stages.

1) Perception. IOW “seeing” that an accident may be imminent. Sensors can perceive far beyond the limits of human vision, and look in all directions at once. I think computers will win handily here.

2) Decision. IOW determining what action to take. This is where programming is the key, but the speed at which calculations can be run heavily favors computers. Accidents are often over before a human has “decided” what to do, and even then the decision is often wrong.

3) Reaction time. No contest. Human reaction time can be measured in seconds, computers in milliseconds.

Put them all together and self-driving vehicles will be much safer - IMHO, of course.
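
As a rough worked example of point 3 alone (the reaction times here are assumed for illustration, in the spirit of the seconds-vs-milliseconds claim):

```python
# Distance covered before any braking begins, for assumed reaction times.
# 1.5 s for a human and 0.1 s for a computer are illustrative figures.

MPH_TO_FPS = 5280 / 3600  # convert mph to feet per second

def reaction_distance_ft(speed_mph: float, reaction_s: float) -> float:
    """Feet traveled during the reaction delay, before braking starts."""
    return speed_mph * MPH_TO_FPS * reaction_s

speed = 40  # mph, a typical urban arterial speed
print(f"human:    {reaction_distance_ft(speed, 1.5):.0f} ft")  # ~88 ft
print(f"computer: {reaction_distance_ft(speed, 0.1):.0f} ft")  # ~6 ft
```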
 
Current data = one accident. I have a hard time drawing any conclusions from one accident. The fact that there was a driver behind the wheel in this accident illustrates exactly why there are good reasons to automate the process so people can't screw it up by (for example) playing on their phones while operating a vehicle.

I don't know how much behind-the-scenes testing was done before the technology was allowed on the roads, but nobody is sending V1.0 out on public streets. I tend to believe that Uber had done enough testing to have a reasonable level of confidence in their technology before sending it out on the road. The cars were in a real-world environment with human backup, but the person responsible for maintaining control in this test screwed the pooch. That resulted in one of the types of accident automated cars will eventually prevent - distracted driving.

Whether automated vehicles will be safer than human guided vehicles is the big question. My opinion is that engaged drivers will be better drivers on the whole than automated cars until the technology is very mature, but the automated vehicles will be more adept than the impaired (whether by age, lack of experience, chemicals, or distraction). I also believe that if you can remove the bad (impaired) drivers from the mix, accident rates will drop disproportionately. As the technology improves, automated vehicles will be a better (safer, more efficient) choice for a bigger and bigger slice of the population and that one day, manual driving will be mostly for enjoyment, not for daily transportation.



Here’s the data (from https://www.rita.dot.gov/bts/sites...ansportation_statistics/html/table_01_35.html)

Self driving cars have been on the roads, what, maybe 2-3 years? I’ll give it the benefit of the doubt and use data for 2 years only from the above. I’ll include the total highway miles for 2014 and 2015. I doubt this includes in town driving or short trips, but it will serve to illustrate the issue.

I’ll use scientific notation NeM which means N x 10 raised to the M power.

In 2014 and 2015, total highway miles driven = 6e12 (6 million million). In those 2 years, there were approximately 70,000 (7e4) vehicle fatalities. ( https://en.m.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in_U.S._by_year)

So, deaths per mile for non-autonomous vehicles = 7e4/6e12 = 1.2e-8.

This reference https://medium.com/waymo/waymo-reaches-5-million-self-driven-miles-61fba590fafe has Waymo’s numbers for miles driven. I’ve seen estimates for Uber of 1 million. So Waymo (Google) + Uber = 6e6 miles.

Deaths per mile for autonomous vehicles = 1 / 6e6 = 1.7e-7

Odds of death in self-driving / non-self-driving = 1.7e-7/1.2e-8 = 14.

So, the data to date shows that there is a chance of death 14 times higher with the self driving vehicles. That is significant.
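
Or, in executable form, for anyone who wants to check my arithmetic (all inputs taken from the sources above):

```python
# Deaths per vehicle-mile, human-driven vs. self-driving, using the
# figures cited above. The self-driving sample is tiny (one fatality),
# so this ratio says little about the mature technology.

highway_miles = 6e12   # total US highway miles, 2014-2015 (BTS table)
fatalities = 7e4       # US vehicle fatalities over the same two years

av_miles = 6e6         # Waymo ~5M + Uber ~1M self-driven miles (estimates)
av_fatalities = 1      # the Tempe accident

human_rate = fatalities / highway_miles   # ~1.2e-8 deaths per mile
av_rate = av_fatalities / av_miles        # ~1.7e-7 deaths per mile

print(f"human-driven: {human_rate:.1e} deaths/mile")
print(f"self-driving: {av_rate:.1e} deaths/mile")
print(f"ratio: {av_rate / human_rate:.0f}x")  # ~14x
```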

My strong suspicion is that since the number of total miles driven by self-driving cars is so low, the death rate will likely go up and not down as these cars are exposed to more real world situations. I strongly doubt these things have had to deal with icy roads, heavy fog, storms, etc. yet. What happens when the sensors are degraded and damaged due to dirt, rocks thrown up by the vehicle in front, hail, heavy rain, mud, etc?

And the driver behind the wheel didn’t even touch the controls, so that certainly doesn’t make the case for autonomy.

My concern isn’t whether Uber thought their cars should go on the road or not. It’s clear they thought so. They were wrong. I’m usually not a fan of a lot of heavy regulation, but I don’t want my family or me to be the guinea pigs for these folks’ software quality-control testing. Get these things off the public roads and back into a controlled testing environment until they can be _proven_ to be at least as safe as normal vehicles. And the heavy burden of proof is on the vendors.



 
I think accident avoidance can be broken down into roughly 3 stages.

1) Perception. IOW “seeing” that an accident may be imminent. Sensors can perceive far beyond the limits of human vision, and look in all directions at once. I think computers will win handily here.

2) Decision. IOW determining what action to take. This is where programming is the key, but the speed at which calculations can be run heavily favors computers. Accidents are often over before a human has “decided” what to do, and even then the decision is often wrong.

3) Reaction time. No contest. Human reaction time can be measured in seconds, computers in milliseconds.

Put them all together and self-driving vehicles will be much safer - IMHO, of course.

Humans have something on the order of 100 billion neurons in the brain. The largest single-chip computers have something on the order of 7.2 billion transistors, and it takes at least 2 transistors to make a logic element (2 for an inverter, 4 for a 2-input CMOS NAND or NOR gate, more for more complex functions). Digital computers use binary logic; neurons are more analog and are therefore capable of more processing per neuron than a logic gate. It takes many, many transistors to model a neuron. Computers are a LONG way from being able to equal the decision-making power of the human brain.

I’ve designed computer chips, and I have worked with software engineers building and training AI neural networks. Huge data sets are required, and much work remains to be done. I don’t have nearly your certainty that self-driving cars will be safer. As an electrical engineer, I would suggest that your faith in computers and in the engineering profession to solve a real-world problem with random inputs (pedestrians looking the wrong way, children running into the street, degraded sensors, icy roads, random events) may be a bit misplaced.
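
A back-of-envelope version of the comparison (the transistors-per-neuron figure is my loose assumption; published estimates vary by orders of magnitude):

```python
# How much of a brain could one large chip model? The neuron and
# transistor counts come from the paragraph above; the cost of modeling
# a single neuron is an assumed round number.

neurons_in_brain = 100e9        # ~100 billion neurons
transistors_per_chip = 7.2e9    # largest single chip, per the post
transistors_per_neuron = 1000   # assumed; real neuron models vary widely

neurons_modelable = transistors_per_chip / transistors_per_neuron
print(f"neurons one chip might model: ~{neurons_modelable:.1e}")            # ~7.2e6
print(f"fraction of a brain: ~{neurons_modelable / neurons_in_brain:.0e}")  # ~7e-05
```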


 
So, the data to date shows that there is a chance of death 14 times higher with the self driving vehicles. That is significant.

To put this number in perspective, look at it this way. Currently, the US averages on the order of 30,000 vehicle deaths per year. If we switched to self-driving vehicles overnight, the above number suggests that there might be 420,000 deaths per year.

Granted, this is for illustration purposes, and I have no way of knowing what the actual number would be, but to blindly assert that self driving vehicles _will_ be safer is a position spoken from ignorance of the massive engineering difficulties involved.

Degraded sensors. Degraded road markings, hazardous road conditions, unpredictable pedestrians, unpredictable actions of other vehicles, communications bandwidth limitations, limitations of software and hardware capabilities, etc.

I’ll put my money on the human brain for quite some time. Humans deal with ambiguity much, much better than electronics.

This reminds me of a science textbook I had in school, way back when. The author asserted confidently that by 2000, everyone would be commuting in flying cars, and that we would have colonies on Mars. How’d that work out?



 
Degraded sensors. Degraded road markings, hazardous road conditions, unpredictable pedestrians, unpredictable actions of other vehicles, communications bandwidth limitations, limitations of software and hardware capabilities, etc.

I’ll put my money on the human brain for quite some time...

Seems like all of those, save the bandwidth one, pertain to human drivers as well.
 
If we switched to self-driving vehicles overnight, the above number suggests that there might be 420,000 deaths per year.

Who has suggested this? Nobody. That's why Uber's car had a human minder. To work through development issues with a meat servo as a backstop. The car worked fine for 99.xxxx percent of the time. The meat servo failed in one of its relatively few (maybe only) opportunities.
 
Seems like all of those, save the bandwidth one, pertain to human drivers as well.

Yes, and my assertion is that human drivers will, for a very long time, be better able to deal with these issues than will an autonomous vehicle. Clearly you disagree.

It’s just interesting to me to see the blind faith people put in technology. Having worked as an engineer in the technology field for years, let’s just say I’m not nearly as confident as you.


 
Who has suggested this? Nobody. That's why Uber's car had a human minder. To work through development issues with a meat servo as a backstop. The car worked fine for 99.xxxx percent of the time. The meat servo failed in one of its relatively few (maybe only) opportunities.

I’m pretty certain that I made it clear that my example was an illustration to show how far away we are from success in the self-driving space. The words “Granted, this is for illustration purposes” should have been a clue. Geeze... reading comprehension these days... sigh.

Look, these cars are already in an extremely benign environment. The weather and roads in Tempe are pretty much perfect for this sort of test, and the car got it wrong. Yes, the safety driver messed up. The safety driver messed up precisely because he/she allowed the car to remain autonomous. The assertion that self-driving cars will be safer assumes that there shouldn’t have to _be_ a safety driver, right? Otherwise what would be the point?

Uber, Waymo, and others should take these cars off the road and put them back into controlled testing environments where all the participants (test pedestrians, test bikers, etc.) are aware of what is going on and have agreed to bear the risks. Build a bunch of Potemkin villages for all I care. My argument is that these things are a long way from being proven to be as safe as existing cars, and it is wrong to subject the public to this sort of risk just so Uber and Waymo engineers can have their fun.


 
Deaths per mile for autonomous vehicles = 1 / 6e6 = 1.7e-7

Odds of death in self-driving / non-self-driving = 1.7e-7/1.2e-8 = 14.

So, the data to date shows that there is a chance of death 14 times higher with the self driving vehicles. That is significant.

Cirrus had a fatal crash within the first 5 hours of flight. By the calculation method used above, Cirrus should by now have killed ~2.4 million people. They somehow haven't, and the Cirrus has since become one of the safest GA airplanes to fly.

One cannot extrapolate a trend from a sample set of 1.
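
To spell out the arithmetic (the fleet-hours figure is inferred from my own numbers above, so treat it as illustrative):

```python
# Naive extrapolation from a sample of one, using the Cirrus example.
# 2.4M predicted deaths at one death per 5 hours implies ~12M fleet hours.

hours_per_fatality = 5   # one fatality in the first 5 fleet hours
fleet_hours = 12e6       # assumed total Cirrus fleet hours

predicted_deaths = fleet_hours / hours_per_fatality
print(f"naive prediction: ~{predicted_deaths:.1e} deaths")  # ~2.4 million
# Reality is nowhere near that, which is exactly the point: a single
# early data point tells you almost nothing about the mature rate.
```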
 
I think accident avoidance can be broken down into roughly 3 stages.

1) Perception. IOW “seeing” that an accident may be imminent. Sensors can perceive far beyond the limits of human vision, and look in all directions at once. I think computers will win handily here.
I think this is where computers will have more problems than humans. They will be able to sense things in front of the car, but there are many objects to the side of cars that may or may not present a risk. Examples might be: a person about to jaywalk, someone opening the door of a parallel-parked car, and a lane-splitting motorcycle approaching from behind. The problem is properly assessing the risk and acting appropriately. You'll either have a car that is too aggressive, or one that stops or slows down for everything. Also, what happens when a car in front decides to stop in a travel lane, for whatever reason? At some point is the driverless car going to switch to the open lane, and how is it going to judge an appropriate opening between cars? Sometimes if you are not a bit aggressive, other drivers will never give you an opening.

I'm not against driverless cars. I just think that there are highly variable conditions that the programmers need to handle. It's not like playing chess.
 
Cirrus had a fatal crash within the first 5 hours of flight. By the calculation method used above, Cirrus should by now have killed ~2.4 million people. They somehow haven't, and the Cirrus has since become one of the safest GA airplanes to fly.

One cannot extrapolate a trend from a sample set of 1.

Agreed. However, one cannot draw the opposite conclusion from a small sample size either. People are stating as fact that self-driving cars _will_ be safer. As a lawyer might say: “objection, assumes facts not in evidence.”

My points in this discussion are these:

1. We have no way of knowing whether self-driving cars will be safer. There is the possibility, but we don’t KNOW. To blindly assert something as true when not proven is foolish.

2. It is clear that self-driving cars are currently not safer than non-self-driving cars, based on the existing data.

3. Having these things on the road and interacting with the public exposes people to a significant risk they did not sign up for.

4. We need a much better structure and, yes, regulations put into place which will require that manufacturers prove their vehicles are at least as safe as non-autonomous vehicles before they are allowed back into the public space.



 
I think this is where computers will have more problems than humans. They will be able to sense things in front of the car, but there are many objects to the side of cars that may or may not present a risk. Examples might be: a person about to jaywalk, someone opening the door of a parallel-parked car, and a lane-splitting motorcycle approaching from behind. The problem is properly assessing the risk and acting appropriately. You'll either have a car that is too aggressive, or one that stops or slows down for everything. Also, what happens when a car in front decides to stop in a travel lane, for whatever reason? At some point is the driverless car going to switch to the open lane, and how is it going to judge an appropriate opening between cars? Sometimes if you are not a bit aggressive, other drivers will never give you an opening.

I'm not against driverless cars. I just think that there are highly variable conditions that the programmers need to handle. It's not like playing chess.

Actually, the _sensors_ and perception are quite likely better than humans’. 360-degree multispectral coverage is certainly doable and is beyond a human’s capability. Electronics have had better perception than humans for quite some time. Radar, lidar, infrared, etc., all provide perception at greater distances and in more varied conditions than humans.

What you are talking about is the interpretation of that data, and I wholeheartedly agree with you about that. There is a difference between perception (there is an object radiating heat at a certain angle and distance) and interpretation (that object is a person, and based on the way the person is acting, my experience leads me to believe there is a chance they may step off the sidewalk in front of me, so I should slow down).

So I would rank it this way:
1. Perception: advantage automation
2. Interpretation: strong advantage humans
3. Decision: strong advantage humans

There is a fourth step. Once a decision has been made, it must be put into action. Automation, I believe, likely has the advantage in the actual execution of the decision.

However, until the self-driving car manufacturers show that they have at least equaled humans in the interpretation and decision categories, I don’t want them on the roads with the rest of us.

And, color me skeptical that they will achieve this parity for a very long time.
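
One way to picture the perception/interpretation split, with purely hypothetical types; the point is where the hard judgment has to live:

```python
# Perception yields raw measurements; interpretation assigns meaning and
# predicted intent. Hypothetical types, not any vendor's actual stack.

from dataclasses import dataclass

@dataclass
class Detection:             # perception: what the sensors report
    bearing_deg: float       # angular position (the angle/distance above)
    range_m: float
    heat_signature: bool

@dataclass
class InterpretedObject:     # interpretation: what the planner needs
    kind: str                # "pedestrian", "cyclist", "debris", ...
    may_enter_roadway: bool  # the judgment call humans still do best

def interpret(d: Detection) -> InterpretedObject:
    # Real systems put trained classifiers here; this stub only marks
    # where experience-based judgment has to happen.
    kind = "pedestrian" if d.heat_signature else "unknown"
    return InterpretedObject(kind, may_enter_roadway=(kind == "pedestrian"))
```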




 
Agreed. However, one cannot draw the opposite conclusion from a small sample size either. People are stating as fact that self-driving cars _will_ be safer. As a lawyer might say: “objection, assumes facts not in evidence.”

My points in this discussion are these:

1. We have no way of knowing whether self-driving cars will be safer. There is the possibility, but we don’t KNOW. To blindly assert something as true when not proven is foolish.

2. It is clear that self-driving cars are currently not safer than non-self-driving cars, based on the existing data.

3. Having these things on the road and interacting with the public exposes people to a significant risk they did not sign up for.

4. We need a much better structure and, yes, regulations put into place which will require that manufacturers prove their vehicles are at least as safe as non-autonomous vehicles before they are allowed back into the public space.

I don't think this is a vehicle problem. I think it's a company attitude problem. Tesla has around 40% fewer accidents on Autopilot. Uber is nowhere near that. (Their miles between accidents are < 10k on average.)

The difference is Tesla insists that drivers keep their hands on the wheel. Uber doesn't have this requirement for their safety drivers.

There is no reason in the world for Uber not to require their safety drivers to keep their hands on the wheel. They measure success by the number of disengagements per 100k miles. You can still determine that metric even if you have a completely hands-on driver during the testing phase.

Uber just has a cavalier attitude in everything they do. They have driver-facing cameras. They must have known by now that their drivers were not paying attention. They didn't care.
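
For what it's worth, the disengagement metric is trivial to compute, and nothing about it requires a hands-off driver (the numbers here are hypothetical):

```python
# Disengagements per 100,000 miles: a standard AV testing metric.
# A hands-on safety driver taking over still counts as a disengagement,
# so the metric survives a strict hands-on-wheel policy.

def disengagements_per_100k(disengagements: int, miles: float) -> float:
    return disengagements / miles * 100_000

# Hypothetical test-program numbers, for illustration only.
print(disengagements_per_100k(disengagements=25, miles=350_000))  # ~7.14
```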
 
I don't think this is a vehicle problem. I think it's a company attitude problem. Tesla has around 40% fewer accidents on Autopilot. Uber is nowhere near that. (Their miles between accidents are < 10k on average.)

The difference is Tesla insists that drivers keep their hands on the wheel. Uber doesn't have this requirement for their safety drivers.

There is no reason in the world for Uber not to require their safety drivers to keep their hands on the wheel. They measure success by the number of disengagements per 100k miles. You can still determine that metric even if you have a completely hands-on driver during the testing phase.

Uber just has a cavalier attitude in everything they do. They have driver-facing cameras. They must have known by now that their drivers were not paying attention. They didn't care.

I agree that there is also an attitude problem here. But Tesla’s requirement that drivers keep their hands on the wheel helps make my case. Some folks are asserting that self-driving cars will be safer than non-self-driving cars. Requiring a human driver to be ready to take over means that the car is not truly autonomous. I see huge benefits to adding the many safety-related driver-assist technologies to human-driven vehicles. But those still have the awesome computing power of the human brain in the loop.

What I take exception to is the assertion that fully autonomous vehicles will be safer. Not proven yet. It may be, but it is also possible that it won’t be.

And, if they want to test these things in public, I have no problem with that if they have a fully engaged safety driver ready, willing, and able to take over at any point. Sensors on the steering wheel and gas/brakes to verify the safety driver is on the controls, attention-detection sensors to verify the driver is alert and paying attention, spot-check review of in-car driver cameras to ensure that the drivers are doing their job. Put something like this in place and, yes, it probably is okay to test on the road.

Having a safety driver who isn’t even paying attention is not acceptable. Yes, the driver is culpable, but so is the company for not having systems in place to make darn sure the driver was doing their job.
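
A minimal sketch of what that monitoring might look like; every sensor interface here is hypothetical, and a real system would be far more involved:

```python
# Gate autonomy on hands-on-wheel and eyes-on-road signals, escalating
# if the safety driver disengages for too long. Illustrative pseudologic.

import time

HANDS_OFF_LIMIT_S = 3.0  # assumed tolerance before escalating

def monitor(wheel_sensor, gaze_sensor, vehicle):
    inattentive_since = None
    while vehicle.autonomy_engaged():
        attentive = wheel_sensor.hands_on() and gaze_sensor.eyes_on_road()
        if attentive:
            inattentive_since = None
        else:
            inattentive_since = inattentive_since or time.monotonic()
            if time.monotonic() - inattentive_since > HANDS_OFF_LIMIT_S:
                vehicle.alert_driver()       # chime / haptic warning first
                vehicle.request_safe_stop()  # then disengage and pull over
        time.sleep(0.1)
```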


 
This is an opinion not based on fact. What evidence do you have for making this statement? Wishful thinking? I’m not saying you’re wrong, but I’m also not saying you’re right.

It is stated as a prediction, not a fact. My exact statement included the phrase "when mature".
 