Took a Waymo (driverless car) this week

I think you're overlooking the cost of installing all that tech in remote areas.

Or are we all going to be confined to cities and flatland?
When did I say it would be instant?
 
you don't even need stop signs anymore, the cars can negotiate among themselves how to get past each other or wait for a blockage, etc. A lot of other things beyond what I've pointed out, but our level of tech is way beyond the capabilities needed to make that kind of system work. But it's got to be a closed system.
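For illustration only, here's a toy sketch (not any real V2V protocol; the names and fields are invented) of how cars might order themselves through a junction by broadcasting arrival estimates:

```python
from dataclasses import dataclass

@dataclass
class Car:
    car_id: str       # hypothetical unique vehicle ID
    arrival_s: float  # broadcast ETA at the intersection, in seconds

def negotiate_order(cars: list[Car]) -> list[str]:
    """Order in which cars may enter: earliest ETA first, ID breaks ties."""
    # Every car sorts the same broadcast data the same way, so they all
    # agree on the ordering without a stop sign or a central controller.
    return [c.car_id for c in sorted(cars, key=lambda c: (c.arrival_s, c.car_id))]

print(negotiate_order([Car("A", 3.2), Car("B", 2.9), Car("C", 2.9)]))
# -> ['B', 'C', 'A']
```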
...until the system is hacked and there are simultaneous mass casualties nationwide...

The biggest problem I see is decision making... the autonomous car is going to hit either the oncoming car approaching at a high closing rate, or the kid crossing the street, with no other option... which does the computer pick? For that matter, which does the human pick in that fraction of a second?

I would have thought they'd start first in express lanes of interstate highways, where you don't have crossing traffic or pedestrians... manually drive to the on ramp and engage the autopilot, and resume manual control when exiting.
 
The biggest problem I see is decision making... the autonomous car is going to hit either the oncoming car approaching at a high closing rate, or the kid crossing the street, with no other option... which does the computer pick?
That's not how a computer works. It has no morals. The autonomous car will attempt to avoid hitting everything.

If you're ever in Nashville, look me up. We'll go for an autonomous drive.
 
That's not how a computer works. It has no morals. The autonomous car will attempt to avoid hitting everything.

If you're ever in Nashville, look me up. We'll go for an autonomous drive.
The autonomous car has no morals, but makes choices based on weighted risk / harm outcome rankings that are decided upon when creating the algorithms.

Those risk rankings are generated by real people: engineers, programmers, and lawyers.

One example of predetermined risk weighting is below. The algorithm decides how much separation margin to allocate between the vehicle and its occupants and the heavy truck, versus how much additional risk of harm to accept for the cyclist in the image below.

Additional protection allocated to the vehicle to mitigate uncertainty in the truck's path increases the injury risk to the cyclist, and vice-versa.

The “computer” will have cases where it is not possible to avoid a collision, due to rapid movement by one of the parties or by an additional one, and by default it will be making pre-weighted choices.

I find this to be fascinating, as people in another room far, far away are pre-determining the final destiny of others by simply adjusting a risk weight factor.

[Attached image: the vehicle positioned between a heavy truck and a cyclist]

The modern version of the old “Trolley Problem”.
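
To make the knob concrete, here's a toy sketch with invented numbers (this is not any vendor's code) of a pre-weighted trade-off like the truck/cyclist case:

```python
# Invented numbers only -- this is not any real algorithm, just an
# illustration of a pre-weighted trade-off between two parties.
TRUCK_WEIGHT = 1.0    # weight on occupant risk from drifting near the truck
CYCLIST_WEIGHT = 2.5  # weight on harm to the cyclist on the other side

def total_risk(offset_m: float) -> float:
    # Risk rises sharply near either neighbor: truck at offset 0,
    # cyclist at offset 3.0 m. The weights decide who gets the margin.
    truck_risk = 1.0 / (offset_m + 0.1)
    cyclist_risk = 1.0 / (3.1 - offset_m)
    return TRUCK_WEIGHT * truck_risk + CYCLIST_WEIGHT * cyclist_risk

# Evaluate candidate lateral positions and take the cheapest one.
candidates = [i * 0.25 for i in range(13)]            # 0.0 .. 3.0 m
best = min(candidates, key=total_risk)
print(f"chosen offset: {best:.2f} m from the truck")  # 1.25 m with these weights
```

Edit CYCLIST_WEIGHT and the chosen lane position shifts. That's the knob being turned in that faraway room.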
 
The autonomous car has no morals, but makes choices based on weighted risk / harm outcome rankings that are decided upon when creating the algorithms.
Sounds like pre-programmed decision trees rather than actual intelligence.
 
Is there enough data yet to show that the safety record of autonomous vehicles equals or exceeds that of manual drivers in all of the locations where vehicles are used?


Is there any evidence that they're worse? Human drivers are really bad.
 
Tolerance is lower for a machine killing a human.
Apparently. Human drivers kill people tens of thousands of times a year, and everyone considers that normal. But until Tesla solves unsolvable philosophical riddles, nobody thinks we're ready for self-driving cars.
 
What about when the programming is machine learning, not hand coded, like Tesla's FSD v12?
It's still there, but in a different way.

"Millions of videos", per Elon, have been used for network training.

Many of those needed to be sorted into "good" and "bad" behavior to set boundaries, along with many "never do this" examples regarding traffic laws, etc. I'm sure their lawyers and safety analysts have a say, too.

You can't just dump all of the Tesla fleet video into a machine and have it autonomously decide how to drive based on what John Doe happens to do in a car when he thinks no one is watching.
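
Conceptually, the sorting step might look something like this toy sketch (hypothetical labels and structure, not Tesla's actual pipeline):

```python
# Toy sketch of the curation step: imitate "good" clips, discard "bad"
# ones, and keep traffic-law violations as explicit "never do this"
# negative examples.
clips = [
    {"id": 1, "label": "good"},
    {"id": 2, "label": "bad"},            # sloppy driving: don't imitate
    {"id": 3, "label": "ran_red_light"},  # law violation: hard negative
]

train_set = [c for c in clips if c["label"] == "good"]
never_do = [c for c in clips if c["label"] == "ran_red_light"]

print(f"{len(train_set)} clip(s) to imitate, {len(never_do)} hard negative(s)")
```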
 
Many of those needed to be sorted into "good" and "bad" behavior to set boundaries, along with many "never do this" examples regarding traffic laws, etc.
Sure, but none of those videos showed how to handle the hypothetical binary ethical decisions like whether to hit the kid or the car.
 
Is there any evidence that they're worse? Human drivers are really bad.
I don't know. That's why I'm asking if anyone has seen comparative data.

I'm also wondering whether enough driverless vehicle data has been collected to allow a quantitative comparison. For example, the latest data I've seen for driving in general in the U.S. shows 1.37 deaths per 100 million miles traveled. That means driverless vehicles would need to log on the order of 100 million miles before it would even be possible to conclude that their fatality record was equal to or better than that of human drivers.
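
Here's the back-of-the-envelope version of that threshold, assuming fatal crashes behave like independent random events, which is a big simplification:

```python
import math

# Back-of-the-envelope only, assuming fatal crashes are independent
# (Poisson) events at the quoted human rate of 1.37 deaths per 100M miles.
human_rate = 1.37 / 100e6                       # deaths per mile

# Miles at which one fatality is *expected* at the human rate:
print(f"{1 / human_rate:,.0f}")                 # ~73,000,000 miles

# Fatality-free miles a fleet must log before zero deaths would be
# unlikely (< 5%) if it were merely as safe as humans:
miles_needed = math.log(1 / 0.05) / human_rate  # P(0 deaths) = exp(-rate * m)
print(f"{miles_needed:,.0f}")                   # ~219,000,000 miles
```

Roughly 73 million miles to expect a single fatality at the human rate, and about three times that, all fatality-free, before the comparison starts to mean anything.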

Another issue is whether driverless vehicles have been tested in locations that might be especially challenging for them, like narrow mountain roads, or in snowy conditions. It was proposed above to kick human drivers off the roads, but under such a proposal, unless we're going to prohibit people from traveling to such places by car, the testing of driverless vehicles needs to include such areas. I don't know whether that is being done.
 
Do they know to stop, if there is a fender bender, to share insurance info?
How would that sharing take place?

There are lots of accidents where the driver drove poorly and went to jail.
Who goes to jail when one is involved in, e.g., vehicular manslaughter?
 
I was in San Francisco this week and one broke down in front of the house…lit itself up with flashers and its own pedestrian disco ball on the roof…blocked traffic for an hour and a half until a technician showed up.
 

I was in San Francisco this week and one broke down in front of the house…lit itself up with flashers and its own pedestrian disco ball on the roof…blocked traffic for an hour and a half until a technician showed up.
I wonder how long it would have blocked traffic if that had happened on a snowy road on the way to a ski area.
 
Do they know to stop, if there is a fender bender, to share insurance info?
How would that sharing take place?
On a screen? Audibly? Via calling the 800 number on the outside of the vehicle?
There are lots of accidents where the driver drove poorly and went to jail.
Who goes to jail when one is involved in, e.g., vehicular manslaughter?
They can't drive drunk and they can't consciously ignore an unreasonable risk of causing someone's death, so it's unclear how one could commit manslaughter. Perhaps you could argue some negligence or recklessness by the company that made them, though.
 
I like discussions like this because they're often good thought experiments. But identifying difficult corner cases doesn't mean an idea is unviable even if 100% of the problems can't be solved.
 
I like discussions like this because they're often good thought experiments. But identifying difficult corner cases doesn't mean an idea is unviable even if 100% of the problems can't be solved.
I just get grumpy when people talk about banning human drivers from the roads without thinking it through.
 
I just get grumpy when people talk about banning human drivers from the roads without thinking it through.
It's entirely possible that there will be some roads in the future that are off limits to human drivers, like HOV lanes are restricted now. Autonomous vehicles that are talking to each other can drive much more efficiently than human-operated vehicles in certain circumstances. But it won't be everywhere. And not all restaurants will be Taco Bell.
 
Do they know to stop, if there is a fender bender, to share insurance info?
How would that sharing take place?

There are lots of accidents where the driver drove poorly and went to jail.
Who goes to jail when one is involved in, e.g., vehicular manslaughter?
I think this is the best question in the thread. It addresses an issue that's been around for a long time: that the software industry is almost immune from meaningful regulation or legislation. Well, I'll correct that: we DO have regulations protecting software companies, but not many the other way around.

We pick on Boeing for not being able to build an airplane that doesn't have parts flying off of it, yet every major software vendor I'm aware of is in a continual cycle of releasing products with serious flaws. The problem is so pervasive that every industry dealing with confidential data has regulations requiring software to be kept "current". Meaning that entire industries are built around profiting from the continual release of bad products.

So if self-driving cars can fix the software industry, that would be great. If someone builds a car that is generally safer than the average driver, that's also great. But if that car drives into a group of firemen, then that needs to be a liability for that company. And if their processes to develop that software weren't adequate for the risk they were dealing with? Well, then somebody could be going to jail. That last part doesn't seem very likely in this country, but may be in others. Maybe this will drive us (accidental pun there) to a place where the software is open source and universal? In part I say that because I'm not sure how well the Bosch and Toyota versions of "we drive u" will play along with each other, and if we get Microstuff involved we're all going to die.
 
I'm looking forward to the "if you want to avoid the upcoming collision, just watch this ad!" and the "if you would like to unlock the car, fill out this quick, five-question survey about whether or not you like ketchup with your fries!" pop-ups.
 
I like discussions like this because they're often good thought experiments. But identifying difficult corner cases doesn't mean an idea is unviable even if 100% of the problems can't be solved.
But it is OK to think about and talk about potential problems (even unlikely ones) and their solutions, yes?
 
If a car with a driver is involved in an accident with a driverless car, who will the driver argue with.??
 
Waymo knows where stop signs are and, from what I have observed, comes to a full stop at each one, unlike human drivers. They also know the speed limit. I can see how some unusual situations would confuse them, but stop signs, stop lights, and speed limits are predictable. I've seen them around here for 4-5 years: first in the testing phase with safety drivers, then alone, and now carrying paying passengers. I haven't had the occasion to take one, but I have the app and might someday. I am actually more cautious of them as a pedestrian and bicyclist than I think I would be as a passenger.

But things like emergency vehicles, pedestrians, objects in the road, blocked lanes, etc., aren't.


My Ford Escape knows about speed limits. It does not know about those.

Thanks, that's a lot more than I knew when I got up this morning..!!

Me, too.

I just get grumpy when people talk about banning human drivers from the roads without thinking it through.

Welcome to the club.

We pick on Boeing for not being able to build an airplane that doesn't have parts flying off of it, yet every major software vendor I'm aware of is in a continual cycle of releasing products with serious flaws. The problem is so pervasive that every industry dealing with confidential data has regulations requiring software to be kept "current". Meaning that entire industries are built around profiting from the continual release of bad products.

This is not a new problem. I remember about 40 years ago when a shop in Silicon Valley that specialized in rebuilding MGs had a bumper sticker for sale (that I regret not buying) that said, "I'll have you know that the parts falling off this car are of the highest British quality!"
 
But it is OK to think about and talk about potential problems (even unlikely ones) and their solutions, yes?
As I said in the text you quoted, yes. It does get comical, though, when the peanut gallery assumes the designers and engineers who have been working with these things on actual roads for years haven't thought of obvious things like ice or emergency vehicles.
 
Welcome to the club.
I think many of us who live west of the Continental Divide know that roads that driverless cars would have trouble with are less of a "corner case" than some would have us believe.
 
As I said in your quote. It does get comical though when the peanut gallery assumes the designers and engineers who have been working with these things on actual roads for years haven't thought of obvious things like ice or emergency vehicles.
I don't doubt that they have thought of those things. However, I would like to know how far along they are in testing driverless cars on the roads and conditions that are not uncommon from the Rockies on west.
 
I think many of us who live west of the Continental Divide know that roads that driverless cars would have trouble with are less of a "corner case" than some would have us believe.
Watch some videos from the https://www.youtube.com/@WholeMars YouTube channel. He shows driving around California in his Model S with FSD v12.

Many of his drives are without any driver intervention, and the Model S is not geofenced, nor does it use any advanced sensors--just cameras.
 
I think many of us who live west of the Continental Divide know that roads that driverless cars would have trouble with are less of a "corner case" than some would have us believe.
I think driving around the congested parts of San Francisco would be more difficult than driving on a mountain road (I have done both). Many more unpredictable events in the City: double-parked vehicles and clueless or aggressive pedestrians, bicyclists, motorcyclists (lane splitting is legal here), and drivers, to name a few. Waymo seems to have done OK. Not perfect, but OK. Human drivers are far from perfect.
 
So, if a driverless car runs a stop sign, who gets the ticket.??
In Turkey (while I was stationed there), if you hire a cab and it gets into an accident, the passenger pays for the damages and compensates the victim for any loss or injury. If you kill someone, the passenger goes to jail.
The reason: the cab wouldn't be there if you hadn't hired it, so it's your fault.
I witnessed this while I was there.
 
Our systems suck in so many ways, but there are a lot of places that are MUCH worse. Thanks for the reminder...
 
In Turkey (while I was stationed there), if you hire a cab and it gets into an accident, the passenger pays for the damages and compensates the victim for any loss or injury. If you kill someone, the passenger goes to jail.
The reason: the cab wouldn't be there if you hadn't hired it, so it's your fault.
I witnessed this while I was there.
I was there in the '80s and was surprised at the four-door Chevy cabs from the '50s and '60s that were there:

Old Turkish cabs
 