Trusting AI With Life or Death Decisions

There are already deterministic methods for adapting to aerodynamic uncertainties and controllability changes that do not require "AI".
That only addresses the UA232 problem; it doesn't solve the larger problem of unanticipated failure modes, of which UA232 is just one example.

UA232 showed how people can adapt and solve problems that were never anticipated. I'm not concerned about how AI would handle a full control system failure, as that problem is no longer unanticipated. How does AI come up with something completely new as the crew of UA232 did? Where are the examples of it doing that?
 
Jeees, I thought AI stood for attitude indicator.
 
That only addresses the UA232 problem; it doesn't solve the larger problem of unanticipated failure modes, of which UA232 is just one example.

UA232 showed how people can adapt and solve problems that were never anticipated. I'm not concerned about how AI would handle a full control system failure, as that problem is no longer unanticipated. How does AI come up with something completely new as the crew of UA232 did? Where are the examples of it doing that?

Does it really matter? Let's say you put a bunch of C4 on the plane, and if the AI encounters a UA232-like scenario you just blow up the plane in mid-air.

In exchange for that you get to eliminate all of the other crashes that are due to pilot-error. We'd still be better off.
 
Definitely true, and if we could all agree on the best (most ethical) course of action in all situations then an AI would be much more likely than a human driver to deliver those results. Not only that, but the probability of ending up in those situations should drastically decrease or even approach 0.

But to get to that point some human somewhere must make a decision about who lives and who dies in every possible situation. As you point out, they are common thought exercises, but at some point with this technology someone will need to code those into real machines and realize that they are killing real people. A thought exercise and what essentially amounts to the murder of real living people are two very different things.

And to get to that point there will be lawsuits with large settlements.

In the end, I can't wait for driverless cars. I just think there are a lot of difficult ethical quandaries and liability issues that need to be solved first.

No. That's exactly what the whole deep learning technology avoids. What a human will have to do is decide what objectively quantifiable goals to optimize and what cost is associated with what inputs. And you'll have to have models (or let the AI learn in the real world, probably not the best choice). Then the AI runs tens to hundreds of thousands of iterations to build an internal model of how the system(s) work to achieve the desired goals. No human will "have to code ethics". But humans will choose the goals and define what good means. Humans will build the simulation models that teach the AI how the system(s) behave.
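
A minimal sketch of that division of labor (everything below is a made-up toy: the reward, the simulator, and the one-parameter policy are stand-ins, not any real training setup). The human writes the first two functions; the loop at the bottom is the part the machine grinds through:

```python
import random

def reward(state, action):
    # Human-defined "goodness": stay near the target, penalize control effort.
    return -abs(state - 0.0) - 0.1 * abs(action)

def simulator(state, action):
    # Human-built model of how the system responds (a trivial stand-in).
    return state + 0.5 * action + random.gauss(0, 0.05)

def evaluate(gain, episodes=200, steps=20):
    # Average reward earned by a one-parameter policy: action = -gain * state.
    total = 0.0
    for _ in range(episodes):
        state = random.uniform(-1, 1)
        for _ in range(steps):
            action = -gain * state
            total += reward(state, action)
            state = simulator(state, action)
    return total / episodes

# Crude search standing in for the tens of thousands of training iterations.
best_gain = max((g / 10 for g in range(41)), key=evaluate)
print("learned gain:", best_gain)
```

Swap the toy simulator for a vehicle or airframe model and the shape of the work stays the same: humans define "good" and build the model, the machine searches.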

John
 
In exchange for that you get to eliminate all of the other crashes that are due to pilot-error.
Your high expectations with regard to the capabilities of the technology are setting you up for disappointment.

It is very easy to solve complex problems by just saying that we'll design the technology so that the problem won't happen. Actually designing and building the technology that can meet those expectations is quite another thing.

Please show us the technology that can do as you suggest.
 
First, the fears of AI robots becoming our overlords are the stuff of Sci Fi horror movies, nothing more.

Second, the advantage to an AI making life and death decisions is that there is no emotion involved. It's all numbers. The classic "save the one child vs a trainload of productive adults" is easy for an AI, and that's precisely how it should be.

As AIs begin to take over roles such as driver, pilot, and train engineer, there will be accidents, and people will die unnecessarily. But that happens now, simply because people are stupid and/or irresponsible. For every accident involving an AI operator, lessons will be learned and changes will be made to improve reliability. No different from the model we have now with human operators.

To be honest, I think we are rapidly approaching the point where I'll trust an AI with my life more than most so-called "professionals" that I've met in my life. Productivity for humans will increase, as mundane tasks are handed over to AIs. Education levels will rise, as jobs for the uneducated will be taken over by AIs. Society will change drastically, and in a very short time once the AI ball really starts rolling.
 
Please show us the technology that can do as you suggest.

I have a feeling that no matter what I post, you won't accept it, since your job depends on you not accepting it. (You're an ATP right?).
 
[snip] Productivity for humans will increase, as mundane tasks are handed over to AIs. Education levels will rise, as jobs for the uneducated will be taken over by AIs. Society will change drastically, and in a very short time once the AI ball really starts rolling.

I wonder how this will actually play out. I've worked in a variety of fields, including construction, machine assembly shops, engineering departments, and various levels of business. There are people I've known and worked with for whom "mundane" tasks are all they could realistically handle. They are/were nice folks, conscientious, and did their jobs as well as they could (and quite well in many cases). But intellectually, that was their peak. What do those people do when all this nirvana overtakes us?

Please understand, I do not think them my inferiors as people; I'm friends with some, and my life would be poorer if I wasn't. But they are not going to do any sort of creative or deep-thinking job. They are not capable of it. Forty years ago they had a genuine shot at a decent living and raising a family with a pleasant, though not extravagant, lifestyle. But when machines do all the "mundane" jobs, which increasingly include things like engineering (see https://deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-40/), what will those humans do?

John
 
I have a feeling that no matter what I post, you won't accept it, since your job depends on you not accepting it. (You're an ATP right?).

So does mine, but I don't harbor any illusions about the future. That said, I'll likely be long retired (perhaps dead) before it ever happens, with the lifecycle of an airliner being what it is.

Thing is, *nobody's* job is safe from a purely technical standpoint. It'll be interesting to see how society deals with the transition.
 
No. That's exactly what the whole deep learning technology avoids. What a human will have to do is decide what objectively quantifiable goals to optimize and what cost is associated with what inputs. And you'll have to have models (or let the AI learn in the real world, probably not the best choice). Then the AI runs tens to hundreds of thousands of iterations to build an internal model of how the system(s) work to achieve the desired goals. No human will "have to code ethics". But humans will choose the goals and define what good means. Humans will build the simulation models that teach the AI how the system(s) behave.

John

And I'll add that the human part of this operation is what concerns me. Do the models really replicate the system(s) accurately? If not, the AI will learn the wrong things. Are the goals and inputs all the right ones? Do we have a full set? Today a certain amount of our systems work because somewhere in the cycle a human will say, "Wait a minute. That doesn't seem right." The AI will never say that. To be fair, many of our systems today have problems because a human is distracted, having a bad day, or just screws up. But assuming AIs will not screw up is naive, because humans are still defining the operating parameters and the models.

John
 
I wonder how this will actually play out. I've worked in a variety of fields, including construction, machine assembly shops, engineering departments, and various levels of business. There are people I've known and worked with for whom "mundane" tasks are all they could realistically handle. They are/were nice folks, conscientious, and did their jobs as well as they could (and quite well in many cases). But intellectually, that was their peak. What do those people do when all this nirvana overtakes us?

Please understand, I do not think them my inferiors as people; I'm friends with some, and my life would be poorer if I wasn't. But they are not going to do any sort of creative or deep-thinking job. They are not capable of it. Forty years ago they had a genuine shot at a decent living and raising a family with a pleasant, though not extravagant, lifestyle. But when machines do all the "mundane" jobs, which increasingly include things like engineering (see https://deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-40/), what will those humans do?

John

Universal Basic Income. Don't particularly like it, but I don't see another way.

Maybe a Star-Trek like economy where nobody actually has any money, and you just get whatever you need from the machines.
 
Universal Basic Income. Don't particularly like it, but I don't see another way.

Maybe a Star-Trek like economy where nobody actually has any money, and you just get whatever you need from the machines.
As long as I don't have to inflate it that'd be okay...maybe.
 
I have a feeling that no matter what I post, you won't accept it, since your job depends on you not accepting it. (You're an ATP right?).
It won't affect me. I retire in 2030. Even if this technology were imminent (which it isn't), it would only be starting to fly revenue flights by then.

I was having this same discussion in the early 1990s on the old CompuServe Travel and AvSig forums. That was 25 years ago, and the technological changes over that period have not produced an airliner that is less dependent on the pilots, or even on the second pilot. The trends in the industry have been just the opposite: procedures and techniques have been introduced that depend more heavily on the second pilot to improve safety, and improve safety they have. When I retire in a little over 13 years, the airplane I fly later today will still be in operation, flying with two pilots. Today's brand-new B787s and A350s will still be flying, not yet halfway through their useful lives, with crews of three or four pilots, and there's a very good chance that new 787s and A350s will still be rolling off the assembly line.

We keep going around in circles because you say that A.I. can figure things out with which it has no prior experience, and you attempt to support that by showing how well A.I. learns from its past mistakes. Show me where A.I. can come up with something new to solve a serious problem that it has never experienced and its designers have never anticipated.

Maybe a Star-Trek like economy where nobody actually has any money, and you just get whatever you need from the machines.
Well, except for the Ferengi and any other time that a moneyless society was inconvenient to the story...
 
I have a feeling that no matter what I post, you won't accept it, since your job depends on you not accepting it. (You're an ATP right?).
I have a feeling that you're dodging the question.
 
I have a feeling that you're dodging the question.

No, I already answered: the reason AI works with unanticipated scenarios is that that's all it does. If you have something predictable and repeatable, you don't need AI.

We see driving as a monolithic, predictable thing because we're so used to it that we smooth over the differences. But to a computer it's all new, all the time. When it drives on a road that it's never seen before, with randomly spaced road markings that it has never seen before, surrounded by 10 vehicles that it has never seen before, with 10 drivers that are texting each other and behaving unpredictably, it has to make sense out of all of it. Everything is unanticipated, from the AI's perspective.

When it turns the steering wheel to change lanes, it doesn't turn it 2 degrees to the right for 5 seconds like a CNC machine would; it turns it until the car is in the new lane and then turns the wheel back again. If the steering behaves differently because you have an under-inflated tire or the wheel hits a brick on the road, it compensates.
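
A toy sketch of that difference (the Car class and every number below are made up, not any real vehicle controller): the open-loop version issues a canned command and hopes; the closed-loop version keeps steering until the measured lane offset is gone, so a constant pull from a soft tire is absorbed without any special-case code:

```python
class Car:
    # Made-up stand-in for real vehicle I/O and dynamics.
    def __init__(self, disturbance=0.0):
        self.offset = 0.0               # lateral position, metres
        self.angle = 0.0                # steering command, degrees
        self.disturbance = disturbance  # constant pull, e.g. a soft tire

    def step(self, dt=0.05):
        # Crude kinematics: lateral speed proportional to steering angle.
        self.offset += (0.35 * self.angle + self.disturbance) * dt

def lane_change_open_loop(car):
    # "CNC-style": steer 2 degrees for 5 seconds, blind to the result.
    car.angle = 2.0
    for _ in range(100):                # 5 s of 50 ms steps
        car.step()
    car.angle = 0.0

def lane_change_closed_loop(car, target=3.5, max_steps=2000):
    # Feedback: keep steering until the car is actually in the new lane.
    # A tire pull just shows up as leftover error and gets steered out.
    for _ in range(max_steps):
        error = target - car.offset
        if abs(error) < 0.05:
            break
        car.angle = max(-5.0, min(5.0, 10.0 * error))
        car.step()
    car.angle = 0.0

a, b = Car(disturbance=-0.1), Car(disturbance=-0.1)
lane_change_open_loop(a)
lane_change_closed_loop(b)
print(f"open loop ended at {a.offset:.2f} m, closed loop at {b.offset:.2f} m")
```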

Autopilot vehicles can handle those scenarios today. We laugh it off as just "driving", but "driving" is really a series of unanticipated scenarios placed one after the other, after the other.

Of course it's not perfect; it's nowhere near perfect. But we've only just begun using AI in scenarios like that, and even then we only use it for a small fraction of control functions. The computing power has just not been there before to do deep learning in real time. It is barely there now, and although it all just looks like "computing", it's a VERY different thing from the traditional programming that we've all seen over the last 30+ years.

If humans had to program an airplane to fly by itself, your jobs would be safe for the next few millennia. We'd have a year 9999 rollover problem before you'd have a self-flying airplane. But AI isn't about human programming; it's about self-learning, and that's a game-changer.
 
How does AI come up with something completely new as the crew of UA232 did? Where are the examples of it doing that?
Look up adaptive control allocation. Lots of peer-reviewed stuff publicly available.
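
For a flavor of what that looks like (a hedged sketch, not taken from any particular paper): the allocator distributes a commanded set of moments across whatever effectors are estimated to still have authority, and when the effectiveness estimate is updated after a failure, the same few lines redistribute the demand:

```python
import numpy as np

# Rows: roll, pitch, yaw moment per unit deflection; columns: effectors
# (toy numbers for an aileron pair, elevator, rudder, differential thrust).
B_nominal = np.array([
    [1.0, 0.0, 0.0, 0.2],   # roll
    [0.0, 1.0, 0.0, 0.0],   # pitch
    [0.1, 0.0, 1.0, 0.6],   # yaw
])

def allocate(B, desired_moments):
    # Minimum-norm deflections that best produce the desired moments: u = B+ v.
    return np.linalg.pinv(B) @ desired_moments

v = np.array([0.3, -0.1, 0.05])          # commanded roll/pitch/yaw

print("nominal allocation:     ", allocate(B_nominal, v))

# After a detected failure (say the conventional surfaces are unusable and
# only differential thrust is left), the effectiveness estimate is updated
# and the same allocator spreads the demand over whatever authority remains.
B_failed = B_nominal.copy()
B_failed[:, :3] = 0.0
print("post-failure allocation:", allocate(B_failed, v))
```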

Nauga,
artsy
 
Look up adaptive control allocation. Lots of peer-reviewed stuff publicly available.
How much peer-reviewed stuff was available on it prior to UA232?

My point isn't about getting A.I. to handle the UA232 scenario. The challenge in UA232 is not the loss of all flight controls; the challenge is handling a situation that had not previously been considered as a possibility. We have to build an A.I. that can solve the next problem that nobody has yet imagined.
 
How much peer-reviewed stuff was available on it prior to UA232?

My point isn't about getting A.I. to handle the UA232 scenario. The challenge in UA232 is not the loss of all flight controls; the challenge is handling a situation that had not previously been considered as a possibility. We have to build an A.I. that can solve the next problem that nobody has yet imagined.

And the answer with the current generation of AI is yes and no.

For the yes answer: First, I'm making an assumption that the AI was properly trained, which does not require anticipating all the situations it will face, but simply that the AI was trained using accurate models or the real system.

Within that context, the AI will rapidly adapt the control inputs to get the desired outcome regardless of the situation it's presented with. For example, in the article I linked above, an AI was tasked with controlling the cooling for one of Google's data centers. The objective "goodness" measurements are: 1) the systems stay within 68 ± 2 degrees (these are reasonable numbers, but I have no idea what the actual numbers used were), and 2) use as little money for cooling as possible (if the engineers were smart, they included data on variable electricity costs). They turned over control to the AI.

Now say a bank of servers goes down. The AI isn't programmed to say "IF bank_of_servers == down THEN turn_off(AC2)". It senses the temperature drop and compensates. Instead of a bank of servers going down, a window (if they even have windows; I wouldn't in my data center) blows out. Temp climbs, AI compensates. This all sounds like a thermostat, but it's way more capable. It's optimizing the energy cost at the same time. Presumably it will make sure that the temperature is at a minimum just before electrical costs jump, for example.
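
As a concrete (and entirely made-up) illustration of the human side of such a setup, the operator's job reduces to writing a scoring function like the one below; the band, the prices, and the function name are my own stand-ins, not anything Google published:

```python
def goodness(temp_f, cooling_kw, price_per_kwh, interval_h=0.25):
    # Higher is better: stay inside 68 +/- 2 F, and spend as little as possible.
    band_penalty = 0.0 if abs(temp_f - 68.0) <= 2.0 else 100.0 * (abs(temp_f - 68.0) - 2.0)
    energy_cost = cooling_kw * interval_h * price_per_kwh
    return -(band_penalty + energy_cost)

# A failed server bank or a blown-out window never appears in this code;
# it only shows up as a temperature reading, and whatever cooling action
# best restores the score is, by construction, the "right" response.
print(goodness(temp_f=68.5, cooling_kw=300.0, price_per_kwh=0.12))  # in band
print(goodness(temp_f=73.0, cooling_kw=50.0, price_per_kwh=0.12))   # out of band
```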

And before you say, "Well, we anticipated those events. What about something we didn't anticipate?", any example I can give will, by definition, include an event I anticipated.

(And as a thought exercise, it's hard to write simple examples that can't be trivially solved by traditional code. But these AI systems are handling hundreds to thousands of inputs and controls and getting the optimized outcome.)

For the no answer:
Pick something where we left out a desired good, like my example above of the AI pilot that controls the plane after explosive decompression and control damage but remains at altitude too long and kills the passengers, because the AI trainer didn't include passenger health as a desired "good".

John
 