Will robots/AI replace human pilots?

"There is not the slightest indication that nuclear energy will ever be obtainable. It would mean that the atom would have to be shattered at will." - Albert Einstein, 1932

"This 'telephone' has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us." - Western Union internal memo, 1876

"Rail travel at high speed is not possible because passengers, unable to breathe, would die of asphyxia." - Dr. Dionysius Lardner, 1830

"I think there is a world market for maybe five computers." - Thomas Watson, chairman of IBM, 1943

"The horse is here to stay but the automobile is only a novelty—a fad." - The president of the Michigan Savings Bank, advising Henry Ford's lawyer not to invest in the Ford Motor Co., 1903

"No, it will make war impossible." - Hiram Maxim, inventor of the machine gun

"There will never be a bigger plane built." - A Boeing engineer, after the first flight of the 247

"A rocket will never be able to leave the Earth's atmosphere." - New York Times, 1936

You do realize that there are many, many examples in the opposite direction as well, right?

Popular Science predicted personal flying cars in every house by 1980. GE showcased the (fake) automated house at the 1964 World's Fair. We were supposed to be vacationing on the Moon by 2000. And so on.

Those of us who work in technology know there are limitations.

But silly belief in hype drives the economy here, so keep going at it.
 
The fact that you don't doubt it does not mean that it is not in doubt.

Clearly my statement was meant in the manner of an opinion, but in my opinion the Sun will rise in Florida tomorrow. That is also an opinion.

Cheers
 
But the pitch is the engineers work once for a whole bunch of flights while the pilots work every one...:)

Seen any software engineers work themselves out of a job, yet? LOL. Software is all subscription based now. They'll get away with it in unmanned aviation, for the most part, too. Aircraft landed, auto App Update... hahaha.
 

I retired as a Silicon Valley exec. There is no question AI will replace humans in the cockpit, on the roads and at sea. It is already happening.

U.S. and Europe Race to be First to Self-Driving Trucks https://www.trucks.com/2017/02/13/self-driving-trucks-us-europe/


Interesting. Have you seen advancement in AI being able to anticipate (predict) road, air, and sea situations before reacting to them? On a busy freeway with cars/trucks inches apart going high speed, I would think this is necessary.
 
Seen any software engineers work themselves out of a job, yet? LOL. Software is all subscription based now. They'll get away with it in unmanned aviation, for the most part, too. Aircraft landed, auto App Update... hahaha.

No. We're smarter than the HW engineers. :) They're mostly writing software now too. As are the chemical engineers, physics majors, etc., etc.

And to be fair, I said that's what the pitch would be. Not what the reality would be.

You remember the joke about the world's oldest profession (not that one)? The gardener said his was, because of the Garden of Eden. The architect said hers was, because architecture brought order out of chaos. Then the SW engineer said, "Where do you think the chaos came from?"
 
Interesting. Have you seen advancement in AI being able to anticipate (predict) road, air, and sea situations before reacting to them?

Let me give you an example of technology that has advanced very rapidly recently, which most people are not aware of and take for granted.

Ten years ago Intel and AMD were in a race to get speaker-independent voice recognition technology solid and into silicon. Prior to that, voice tech was evolving slowly: first text-to-speech only, then speaker-dependent voice recognition at a very crude level, and then, with increases in speed and refinement of the algorithms, a modicum of speaker independence.

Today smartphones, for example, allow anyone, regardless of gender, thick accent, or speech impediment, to speak without training the recognizer, and the result will be nearly 100% accurate. No small achievement.

AI was also very crude back in the 80s, only enabling machines to play games better than humans. That was light years ago in tech. Today these systems are likewise beginning to come off supercomputing platforms and are being downsized into all kinds of machine platforms including the one in your hand (digital assistants are AI).

A couple of years ago the Turing test was claimed to have been passed (judges could not reliably tell they were interacting with a machine). That was a major milestone for AI.

The big trick for autonomous cars and trucks is 3D vision tech. Once a machine can "see" like a human, the rest is already in place. Acting on the data is a no-brainer, pun intended.

For aircraft it's about the sheer amount of data and the growing automated origin of the data in the air and on the ground. The first gen will be AI augmented decision systems (for both pilots and ATC), then AI autopilots, then a fully intelligent and automated cockpit. Here is a decent article that's current on the subject: https://www.wired.com/2017/03/ai-wields-power-make-flying-safer-maybe-even-pleasant/
 

I retired as a Silicon Valley exec. There is no question AI will replace humans in the cockpit, on the roads and at sea. It is already happening.

U.S. and Europe Race to be First to Self-Driving Trucks https://www.trucks.com/2017/02/13/self-driving-trucks-us-europe/

You bought into the hype....

I just had a Google car cut me off on Shoreline Blvd. yesterday and slam on the brakes. This is after 5+ years of development. It's not close. After all that money, Google cars have an accident rate the same as the general public and still drive like confused octogenarians on painkillers.

You show your ignorance by framing the problem as dataflow. That's not the problem at all. It's all the corner cases that can't possibly be tested -- or in some cases even specified (e.g., the well-known "trolley problem"). And your misconception is VERY common among management, and has led to some truly spectacular failures. The stuff you can specify on an envelope is easy. The major failures will come from the details that "don't matter."

Silicon Valley has worked for many years by promising the moon and delivering something quite a lot less. Some limited autonomy to assist a driver is quite likely; in some cases this has even happened. Complete autonomy is incredibly stupid.
 
Today smartphones, for example, allow anyone, regardless of gender, thick accent, or speech impediment, to speak without training the recognizer, and the result will be nearly 100% accurate. No small achievement.

Name one device that covers all of your scenarios with the cellular network turned off.
 
The big trick for autonomous cars and trucks is 3d vision tech. Once a machine can "see" like a human the rest is already in place. Acting on the data is a no-brainer, pun intended.
That's also their fallacy. Takes me about 5 minutes to blind an autonomous vehicle. I'm sure there are others way more skilled than I that can do it in less time.
 
[snip]
That's not the problem at all. It's all the corner cases that can't possibly be tested -- or in some cases even specified (e.g., the well known "trolley problem"). [snip]

:yeahthat:

And any engineer who tells you otherwise is insufficiently paranoid (i.e., insufficiently experienced).
 
Name one device that covers all of your scenarios with the cellular network turned off.

So Nate, are you saying for ground operations (cars/trucks, etc.) we'd need SAT links instead of cell links to be reliable? I know we'd need them for air/sea. Currently I would tend to agree -- not that I am an IT or computer tech guy, just a user of tech. This topic interests me because, as the paranoid person I am, I see governments trying to curtail or eliminate personal travel in our own, manned vehicles. For our own good, of course. :rolleyes: I believe Sport Pilot was an attempt to appease us with flying around the patch instead of using our planes for travel and being in the ATC system on IFR flight plans.

I do think we can benefit from this tech if used wisely, and within our rights. It is great to have resources here that maybe can give us a glimpse of the near future.
 
:yeahthat:

And any engineer who tells you otherwise is insufficiently paranoid (i.e., insufficiently experienced).
Ummm....so, let's think this through. Testing is even safer....the testing is initially performed manned (EP)...then the unsafe conditions are tested unmanned.

UAS have a similar test profile...where initial testing is performed using an EP (external pilot - or remote controlled)...then once the flight parameters are validated more of the automated functions are proved out.

btw....the software can be tested without the actual hardware....in a lab on a bench...in an enviro chamber.
 
Ummm....so, let's think this through. Testing is even safer....the testing is initially performed manned (EP)...then the unsafe conditions are tested unmanned.

UAS have a similar test profile...where initial testing is performed using an EP (external pilot - or remote controlled)...then once the flight parameters are validated more of the automated functions are proved out.

btw....the software can be tested without the actual hardware....in a lab on a bench...in an enviro chamber.

Yes. For things that are known and understood, the software can be tested in a variety of ways. But as MAKG1 said above, it's the corner cases that can't even be specified that are the killers. These systems are complex (and I'll grant you, flying in the air, which is much less cluttered, is simpler than what autonomous cars have to deal with -- but just stopping until you're sure is also not an option in the air). They interact with each other (a case which is theoretically boundable, at least), with the hardware (possibly boundable, but the actual flight systems, engines, etc. are pretty complex), and with the environment (which I'd argue is never entirely understood).

Our technological history is full of those moments when, as Ernie Gann put it, "fate pulled back the curtain and peed on science".

One of my favorites is in the book "We Seven" by the Mercury astronauts. It was the first unmanned launch of the Mercury/Redstone combination. The rocket fired, just started to lift, and then the engines stopped and the rocket stayed on the pad. It was amusing in the telling because the capsule continued its functions on schedule: the escape rocket fired and flew away, and the parachute popped out on time. What had actually happened was that there were two connectors from the launch pad to the rocket. The system designers had assumed they would disconnect at the same time as the rocket lifted, and they had, every time someone had pulled the cables loose in manual testing. But one had slightly longer pins than the other, and the actual rocket lifting was slow enough that the systems detected one connector missing and the other connected, which was an error. So it shut down the rocket. But nothing told the capsule that it wasn't flying, because it was a failure mode nobody suspected.
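That umbilical race can be modeled in a few lines. This is a toy sketch only -- the real Mercury-Redstone sequencer was electromechanical relay logic, not software, and the function names here are made up for illustration:

```python
# Toy reconstruction of the Mercury-Redstone umbilical race.
# True = connector still attached, False = connector separated.

def booster_check(connector_a, connector_b):
    """The booster treated 'one umbilical off, one still on' as a fault."""
    if connector_a != connector_b:
        return "ENGINE SHUTDOWN"   # rocket settles back on the pad
    return "CONTINUE"

def capsule_sequencer(booster_state):
    """Nothing fed the booster's state to the capsule, so its timeline
    ran on schedule regardless of whether the rocket was actually flying.
    (Note: booster_state is deliberately ignored -- that's the bug.)"""
    return ["escape tower jettison", "parachute deploy"]

# Manual tests always pulled both cables together:
print(booster_check(False, False))           # CONTINUE
# In flight, the connector with longer pins separated a beat later:
print(booster_check(False, True))            # ENGINE SHUTDOWN
print(capsule_sequencer("ENGINE SHUTDOWN"))  # capsule flies its mission anyway
```

The point of the sketch: the failure isn't in either function alone, it's in the missing wire between them.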

I'm not a believer in fate, myself, but I do know (and I mean know) that we don't know everything we think we know. There will be corner cases nobody designed for and bad things will happen. And then we'll learn something (hopefully) and the systems will get better.

John
 
It just occurred to me that the time might come when we HAVE TO rely on AI to pilot aircraft, regardless of whether it's safe or economical, because the supply of human pilots will dry up after the powers-that-be finish killing off GA!
 
Ummm....so, let's think this through. Testing is even safer....the testing is initially performed manned (EP)...then the unsafe conditions are tested unmanned.

UAS have a similar test profile...where initial testing is performed using an EP (external pilot - or remote controlled)...then once the flight parameters are validated more of the automated functions are proved out.

btw....the software can be tested without the actual hardware....in a lab on a bench...in an enviro chamber.

Wrong. Completely, positively, absolutely wrong.

Testability is a so-called "NP-complete" problem. You cannot do exhaustive testing. Period, full-stop. You can do a VERY sparse sample and hope the rest falls in. Sometimes it doesn't, and the results are not at all good.

Even so-called "full coverage testing" (which assumes implicitly that all parts of the software are independent of one another -- and they aren't) is not actually possible.

What testing you decide to do can often be done on simulated platforms -- if the simulators are perfect (hint: that's a really bad assumption). But it's the testing you don't do that's going to kill people. The real world has a tendency to be a lot less clean than a simulator. And I write simulators for a living. I'd never bet my life on one.

This is a fundamental limitation of software systems. You will not get around it by technology growth -- in fact, you'll make it worse. It's not possible to fully analyze every line of a million-line software system like you could when they were a few thousand lines. Nor is it possible to reuse software modules to avoid testing unless they have no inputs or outputs (and then, why would you need them?).
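To put a number on "you cannot do exhaustive testing": even a trivial pure function of just two 32-bit inputs has more input pairs than any test rig can enumerate. A quick sketch of the arithmetic (the throughput figure is an assumption, picked generously):

```python
# A pure function of two 32-bit inputs has 2**64 distinct input pairs.
pairs = 2 ** 64
tests_per_second = 1e9            # assumed: a very fast automated test rig
seconds_per_year = 3600 * 24 * 365

years = pairs / tests_per_second / seconds_per_year
print(f"{years:,.0f} years")      # roughly 585 years to cover every pair
```

And that's for two inputs with no internal state; real avionics code has far more of both, which is why testing is always a sparse sample.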

Some examples to ponder: The Hubble Space Telescope was launched with a defective primary mirror because its optical simulator was flawed. And the Ariane 5 maiden launch failed because the fuel control system was tested for the Ariane 4, and no one foresaw that the higher fuel flow would cause a floating point overflow, or that the result of an overflow would shut it down. These are both complex interactions that would have been hard to predict a priori.
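The Ariane-style failure mode is easy to reproduce in miniature: a value that fit in 16 bits on the old vehicle doesn't on the new one. A sketch (the numbers are illustrative, not the actual telemetry, and the real code was Ada, which raised an exception on the conversion rather than wrapping):

```python
import ctypes

def to_int16_unchecked(x):
    # ctypes.c_int16 wraps silently past +/-32767, the way an unchecked
    # narrowing conversion behaves. Ariane's Ada code instead raised an
    # Operand Error at this point, which took the whole unit offline.
    return ctypes.c_int16(int(x)).value

print(to_int16_unchecked(12_000.0))   # old flight profile: fits, prints 12000
print(to_int16_unchecked(40_000.0))   # faster trajectory: prints -25536, garbage
```

The conversion was "tested" only against the old vehicle's flight envelope, so the overflow never showed up until the new rocket flew.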
 
Yes but this is a sport, a hobby, a recreational activity. We WANT to fly the airplane. Not just sit there and watch the robot fly it.

I'd like to see a Robot get my plane out of my hangar...
 
Wrong. Completely, positively, absolutely wrong.

Testability is a so-called "NP-complete" problem. You cannot do exhaustive testing. Period, full-stop. You can do a VERY sparse sample and hope the rest falls in. Sometimes it doesn't, and the results are not at all good.

Even so-called "full coverage testing" (which assumes implicitly that all parts of the software are independent of one another -- and they aren't) is not actually possible.

What testing you decide to do can often be done on simulated platforms -- if the simulators are perfect (hint: that's a really bad assumption). But it's the testing you don't do that's going to kill people. The real world has a tendency to be a lot less clean than a simulator. And I write simulators for a living. I'd never bet my life on one.

This is a fundamental limitation of software systems. You will not get around it by technology growth -- in fact, you'll make it worse. It's not possible to fully analyze every line of a million-line software system like you could when they were a few thousand lines. Nor is it possible to reuse software modules to avoid testing unless they have no inputs or outputs (and then, why would you need them?).

Some examples to ponder: The Hubble Space Telescope was launched with a defective primary mirror because its optical simulator was flawed. And the Ariane 5 maiden launch failed because the fuel control system was tested for the Ariane 4, and no one foresaw that the higher fuel flow would cause a floating point overflow, or that the result of an overflow would shut it down. These are both complex interactions that would have been hard to predict a priori.

We could just do the Russian version of testing... build flawed designs and keep blowing them up till we get it right. =D
 
well I guess you guys are right.....it will never work. :D

I never said it wouldn't work. I've said repeatedly that it won't be perfect. You can't test what you don't know and there will be things we don't know.
 
So Nate, are you saying for ground operations (cars/trucks, etc.) we'd need SAT links instead of cell links to be reliable? I know we'd need them for air/sea. Currently I would tend to agree -- not that I am an IT or computer tech guy, just a user of tech. This topic interests me because, as the paranoid person I am, I see governments trying to curtail or eliminate personal travel in our own, manned vehicles. For our own good, of course. :rolleyes: I believe Sport Pilot was an attempt to appease us with flying around the patch instead of using our planes for travel and being in the ATC system on IFR flight plans.

I do think we can benefit from this tech if used wisely, and within our rights. It is great to have resources here that maybe can give us a glimpse of the near future.

No, I was making fun of the silly notion that @citizen5000 floated that voice recognition is done in small devices for all types of voices without external assistance. Voice recognition is not done on-device. Network required.

As far as automation goes in aircraft, we already have autoland airliners. We have no shortage of tech. We have a shortage of money to implement it on down into the small aircraft fleet. And a huge pile of regulations that make it very expensive.

As far as governments go, I don't think they care if you go by large mailing tube or small mailing tube. They just want it all TRACKED. See: ADS-B mandate. They don't care if you travel; they just think they ought to know where everything that's traveling is. Like always, it's sold as being for your "safety".

I'm sure Martha King felt nice and safe, face down on the ramp, with an AR-15 pointed at her head, all because the "safety" computer in New Mexico said her and John's airplane was a possible drug smuggler. Yay "safety" run by software.

When you say "within our rights" you realize there's no right under the law to fly anything, correct? Only privileges.
 
Yes but this is a sport, a hobby, a recreational activity. We WANT to fly the airplane. Not just sit there and watch the robot fly it.

I'd like to see a Robot get my plane out of my hangar...

I agree, but I think the application under discussion is professional flying.

If the company can replace me with AI and it's proven to be safer and cheaper, they will. Now, will I be out of a job when that time comes? Doubtful. I'd be there as a systems manager and to take over when "George" gets hacked or some sort of EMI episode.
 
I'm sure Martha King felt nice and safe, face down on the ramp, with an AR-15 pointed at her head, all because the "safety" computer in New Mexico said her and John's airplane was a possible drug smuggler. Yay "safety" run by software.

She was saved due to her hair and 80's clothing which eventually repelled the SWAT guys!

When you say "within our rights" you realize there's no right under the law to fly anything, correct? Only privileges.

Well, that is a debate for another time and place, as I believe we have a right to free travel (freedom of movement), no matter the means of travel.
 
I never said it wouldn't work. I've said repeatedly that it won't be perfect. You can't test what you don't know and there will be things we don't know.
sounds like you have complete faith in DO178B too....:eek:
 
sounds like you have complete faith in DO178B too....:eek:

I don't have complete faith in anything humans have built. We're not infallible. We're just not.

Did you read the failure reports in the B-777 automation stuff posted in this thread? That was developed in accordance with DO178B. It's a good practice. We could build much worse systems than we do. But we and the systems are not now, nor ever will be perfect.
 
Well, that is a debate for another time and place, as I believe we have a right to free travel (freedom of movement), no matter the means of travel.

Yeah. Without going into it too heavily, the real question these days is do you have any right to privacy in that travel. Can't really even go by vehicle, the plate readers are nearly ubiquitous in most large cities now.

Government won't impede travel, as long as they make sure they know where everyone is. Instead of "papers, please?" it's just electronic: "ADS-B, please."
 
And the Ariane 5 maiden launch failed because the fuel control system was tested for the Ariane 4, and no one foresaw that the higher fuel flow would cause a floating point overflow, or that the result of an overflow would shut it down. These are both complex interactions that would have been hard to predict a priori.

By the way, the Ariane 5 thing was VERY easy to predict. It's more an example of cheap, lazy, system/code reuse than an example of a complex interaction.
 
By the way @MAKG1 - I didn't have time to dig this up earlier, but unless you're talking about a different launch failure of Ariane 5 other than #501... it wasn't a fuel flow number that barfed the IRUs, it was a horizontal rate number that did it.

The real error was bringing Ariane 4's code forward to 5 without thinking about it OR testing it closed-loop, and also the completely broken idea that any IRU error should knock that IRU off the bus and the other IRU would take over... problem was, the sensor/code error that killed IRU 2 (the active one at launch) had killed IRU 1, 75ms before IRU 2 said "eff it, I'm out..."

http://sunnyday.mit.edu/accidents/Ariane5accidentreport.html

The entire concept of shutting down BOTH guidance platforms BY DESIGN during flight, automatically, is just flat wrong-thinking. There's no way around that. And THAT was VERY predictable.

(I'm a failure mode/systems guy... I absolutely LOVE analyzing systems for flaws like this. I'm always amazed when organizations that are magnitudes larger, somehow forget simple rules like... "It has to run, or the vehicle crashes." in accident analyses of these things. It's like their sheer size and tiny specializations focus the engineers down on one little piece of something, and they just don't pay ANY attention to the big picture anymore. And it's a repetitive problem in large systems. You'd be amazed how many times I've walked into a large telecom site and just immediately saw their biggest risk as an outsider... "Hey, what happens if that power panel that only has a single feed over there, burns up?" and they just couldn't even see it at all.)
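The "both IRUs off the bus by design" flaw is simple enough to model in a few lines. A toy sketch -- unit names, the threshold, and the method name are illustrative, not the real Ariane code:

```python
class IRU:
    """Toy inertial reference unit: any internal error takes it off the bus."""
    def __init__(self, name):
        self.name = name
        self.alive = True

    def horizontal_bias(self, value):
        if not (-32767 <= value <= 32767):   # Ariane-style narrowing check
            self.alive = False               # unit declares itself failed
            raise RuntimeError(f"{self.name} off the bus")
        return value

backup, primary = IRU("IRU-1"), IRU("IRU-2")

# Both "redundant" units run identical code on identical sensor data, so a
# deterministic software fault is a common-mode failure: it killed the
# backup first, then the primary 75 ms later.
for unit in (backup, primary):
    try:
        unit.horizontal_bias(40_000.0)
    except RuntimeError as err:
        print(err)

print(backup.alive or primary.alive)   # False: no guidance data left at all
```

Redundancy only protects against independent failures; identical software fed identical inputs fails identically, which is exactly the big-picture trap described above.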
 
By the way @MAKG1 - I didn't have time to dig this up earlier, but unless you're talking about a different launch failure of Ariane 5 other than #501... it wasn't a fuel flow number that barfed the IRUs, it was a horizontal rate number that did it.

The real error was bringing Ariane 4's code forward to 5 without thinking about it OR testing it closed-loop, and also the completely broken idea that any IRU error should knock that IRU off the bus and the other IRU would take over... problem was, the sensor/code error that killed IRU 2 (the active one at launch) had killed IRU 1, 75ms before IRU 2 said "eff it, I'm out..."

http://sunnyday.mit.edu/accidents/Ariane5accidentreport.html

The entire concept of shutting down BOTH guidance platforms BY DESIGN during flight, automatically, is just flat wrong-thinking. There's no way around that. And THAT was VERY predictable.

(I'm a failure mode/systems guy... I absolutely LOVE analyzing systems for flaws like this. I'm always amazed when organizations that are magnitudes larger, somehow forget simple rules like... "It has to run, or the vehicle crashes." in accident analyses of these things. It's like their sheer size and tiny specializations focus the engineers down on one little piece of something, and they just don't pay ANY attention to the big picture anymore. And it's a repetitive problem in large systems. You'd be amazed how many times I've walked into a large telecom site and just immediately saw their biggest risk as an outsider... "Hey, what happens if that power panel that only has a single feed over there, burns up?" and they just couldn't even see it at all.)

Nate, if it was completely predictable, why didn't anyone predict it?

Almost all interaction failures are "predictable" in hindsight. The problem is, there are astronomically many of them, so MANY will get left behind, every time. The game is to predict which ones, ahead of time. But since every line of code and every mechanical feature has the potential to affect every other one, that's not possible.

Engineers attempt to control the interactions with ICDs. But it is always possible to have unseen interactions outside the ICD that no one realizes are there. In your field, the classical example is the buffer overrun or stack corruption.
 
I don't have complete faith in anything humans have built. We're not infallible. We're just not.

Did you read the failure reports in the B-777 automation stuff posted in this thread? That was developed in accordance with DO178B. It's a good practice. We could build much worse systems than we do. But we and the systems are not now, nor ever will be perfect.
true that....;)
 
Nate, if it was completely predictable, why didn't anyone predict it?

Because they literally didn't even look.

From the accident report:

"It would have been technically feasible to include almost the entire inertial reference system in the overall system simulations which were performed. For a number of reasons, it was decided to use the simulated output of the inertial reference system, not the system itself or its detailed simulation. Had the system been included, the [design error] could have been detected."

Mainly it looks like the decision not to even TRY to test with Ariane 5 data was a cost decision... A "just use the system from Ariane 4 and it'll be fine..." type of decision. No rational thought behind it.

The Ariane 501 mistakes were massive...

- Code/system reuse without ANY simulation whatsoever with expected flight data numbers that were FULLY expected on the brand new platform.
- Designed to fail even prior to that mistake... both IRUs could shut down leaving the platform literally without the data they provide.
- Running code in flight that was no longer necessary AT ALL during that portion of the flight. Code that was left running on Ariane 4 for CONVENIENCE of not needing 45 minutes to re-align the platform after an on-pad abort, but completely unnecessary aboard Ariane 5. The unnecessary code was what crashed the IRUs. NONE of the data they were outputting was being used after liftoff.
- Code that outputs TEST data when restarted, running on a PRODUCTION/FLIGHT platform, and nobody writing into the RECEIVER to IGNORE that data in-flight.

(And more. So many errors in that accident, it's a horrible example of "complex" systems... it's a great example of "ASS-U-ME-ing" old stuff will work on a new platform.)

You also get THIS kind of crap in aerospace, especially space... and various folks have pointed out that accident reports often gloss over or don't contain ANY real data about what "tests" were actually done... and VERY few companies will allow third parties to look over their ASS-U-ME-tions unless the project management DEMANDS it. Not open cultures. Secretive. Because... rockets!

"The MCO report contains little information about the software engineering practices but hints at specification deficiencies in statements about “JPL’s process of cowboy programming” and “the use of 20-year-old trajectory code that can neither be run, seen, or verified by anyone or anything external to JPL.”"

A pretty decent analysis of a large number of space losses caused by software, from an MIT prof who specializes in such stuff... if you're bored. It's actually a pretty fun read if you like stories of how software engineers and software engineering really work in the real world... and remember these are hundred-million-dollar-plus projects... it's not a resource issue, most of the time. Smaller companies stand NO chance of ever getting software right if these folks can't.

http://sunnyday.mit.edu/papers/jsr.pdf

Conversely, the Shuttle had a big "win" in software in this regard... one of the most professional groups of software writers yet documented on the planet... not cheap, not sexy, not flashy, but damned good code. And they lowered their error rate by 90% after they started -- an unheard-of number in software development.

https://www.fastcompany.com/28121/they-write-right-stuff

"That’s the culture: the on-board shuttle group produces grown-up software, and the way they do it is by being grown-ups. It may not be sexy, it may not be a coding ego-trip — but it is the future of software. When you’re ready to take the next step — when you have to write perfect software instead of software that’s just good enough — then it’s time to grow up."
 
Because they literally didn't even look.

From the accident report:

"It would have been technically feasible to include almost the entire inertial reference system in the overall system simulations which were performed. For a number of reasons, it was decided to use the simulated output of the inertial reference system, not the system itself or its detailed simulation. Had the system been included, the [design error] could have been detected."

Mainly it looks like the decision not to even TRY to test with Ariane 5 data was a cost decision... A "just use the system from Ariane 4 and it'll be fine..." type of decision. No rational thought behind it.

The Ariane 501 mistakes were massive...

- Code/system reuse without ANY simulation whatsoever with expected flight data numbers that were FULLY expected on the brand new platform.
- Designed to fail even prior to that mistake... both IRUs could shut down leaving the platform literally without the data they provide.
- Running code in flight that was no longer necessary AT ALL during that portion of the flight. Code that was left running on Ariane 4 for CONVENIENCE of not needing 45 minutes to re-align the platform after an on-pad abort, but completely unnecessary aboard Ariane 5. The unnecessary code was what crashed the IRUs. NONE of the data they were outputting was being used after liftoff.
- Code that outputs TEST data when restarted, running on a PRODUCTION/FLIGHT platform, and nobody writing into the RECEIVER to IGNORE that data in-flight.

(And more. So many errors in that accident, it's a horrible example of "complex" systems... it's a great example of "ASS-U-ME-ing" old stuff will work on a new platform.)

You also get THIS kind of crap in aerospace, especially space... and various folks have pointed out that accident reports often gloss over or don't contain ANY real data about what "tests" were actually done... and VERY few companies will allow third parties to look over their ASS-U-ME-tions unless the project management DEMANDS it. Not open cultures. Secretive. Because... rockets!

"The MCO report contains little information about the software engineering practices but hints at specification deficiencies in statements about “JPL’s process of cowboy programming” and “the use of 20-year-old trajectory code that can neither be run, seen, or verified by anyone or anything external to JPL.”"

A pretty decent analysis of a large number of space losses caused by software, from an MIT prof who specializes in such stuff... if you're bored. It's actually a pretty fun read if you like stories of how software engineers and software engineering really work in the real world... and remember these are hundred-million-dollar-plus projects... it's not a resource issue, most of the time. Smaller companies stand NO chance of ever getting software right if these folks can't.

http://sunnyday.mit.edu/papers/jsr.pdf

Conversely, the Shuttle had a big "win" in software in this regard... one of the most professional groups of software writers yet documented on the planet... not cheap, not sexy, not flashy, but damned good code. And they lowered their error rate by 90% after they started -- an unheard-of number in software development.

https://www.fastcompany.com/28121/they-write-right-stuff

"That’s the culture: the on-board shuttle group produces grown-up software, and the way they do it is by being grown-ups. It may not be sexy, it may not be a coding ego-trip — but it is the future of software. When you’re ready to take the next step — when you have to write perfect software instead of software that’s just good enough — then it’s time to grow up."
This history does a great job of illustrating the importance of the point I was trying to make earlier, that taking human pilots out of the loop does not take ALL humans out of the loop.
 
Because they literally didn't even look.

From the accident report:

"It would have been technically feasible to include almost the entire inertial reference system in the overall system simulations which were performed. For a number of reasons, it was decided to use the simulated output of the inertial reference system, not the system itself or its detailed simulation. Had the system been included, the [design error] could have been detected."

Mainly it looks like the decision not to even TRY to test with Ariane 5 data was a cost decision... A "just use the system from Ariane 4 and it'll be fine..." type of decision. No rational thought behind it.
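And the skipped test was cheap. Here's a toy Python sketch of the idea the report describes: replay the NEW vehicle's expected trajectory through the old conversion logic on the ground, before flight. All trajectory numbers are invented for illustration.

```python
# Toy version of the ground simulation that was never run: feed the new
# vehicle's expected flight profile through the reused conversion code.
# Trajectory values below are made up; only the shape of the test matters.

def to_int16(value: float) -> int:
    v = int(value)
    if not -32768 <= v <= 32767:
        raise OverflowError("operand error")
    return v

def ground_replay(profile):
    """Return the trajectory points that crash the conversion."""
    failures = []
    for point in profile:
        try:
            to_int16(point)
        except OverflowError:
            failures.append(point)
    return failures

ariane4_like = [500.0, 9000.0, 27000.0]    # fits: no failures
ariane5_like = [800.0, 21000.0, 64000.0]   # faster profile overflows

assert ground_replay(ariane4_like) == []
assert ground_replay(ariane5_like) == [64000.0]
```

One afternoon of replaying expected data through the real unit, or even a faithful simulation of it, surfaces the bug that destroyed the vehicle.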

The Ariane 501 mistakes were massive...

- Code/system reuse without ANY simulation whatsoever with expected flight data numbers that were FULLY expected on the brand new platform.
- Designed to fail even prior to that mistake... both IRUs could shut down leaving the platform literally without the data they provide.

Yes.

EVERY oversight is because someone didn't look.

Which ones are going to be overlooked is an exercise in lucky guesses. There will always be A LOT of them. It's unavoidable in any system more complex than dirt-simple. Exhaustive testing is combinatorially intractable. Almost all potential failure modes will never be enumerated, let alone tested.
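To put a rough number on why enumeration is hopeless, here's a back-of-envelope sketch (all figures purely illustrative):

```python
# Why exhaustive testing doesn't scale: the state space grows
# exponentially. Even a small system with 32 independent boolean
# configuration flags has 2**32 distinct configurations.
flag_count = 32
configs = 2 ** flag_count

# At a generous 1,000 test runs per second, covering the flag
# combinations alone, before any actual input data:
seconds = configs / 1000
days = seconds / 86400
print(f"{configs} configurations ≈ {days:.0f} days of testing")
```

And real systems have far more than 32 binary degrees of freedom, which is why test suites sample the failure space rather than cover it.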

That there would be bugs in Ariane was predictable ahead of time. Their identity and severity were not.

It's like asking you, as an IT person, to announce all your unplanned downtimes in advance. You KNOW they will be there. You just don't know when and what the problem will be.

Using the shuttle as an example of risk management is more than a little sketchy. It had multiple severe hardware "bugs," two of which destroyed orbiters and killed people. The second was not foreseen, though if you go through the same hindsight driven process you did, it should have been. No one identified a hypersonic bit of fluffy foam as a hazard until it proved itself.
 
It's like asking you, as an IT person, to announce all your unplanned downtimes in advance. You KNOW they will be there. You just don't know when and what the problem will be.

Umm. You might be surprised. The number of unplanned outages caused by human error or equipment failure is less than five in ten years on stuff I designed and administer. I don't "do" outages.

Part of that comes from old-school telecom, part comes from working at a data center company that literally made outages taboo, culturally. Their company motto was, "Uptime, all the time." Once you've played grown-up, it's hard to go back.

Using the shuttle as an example of risk management is more than a little sketchy. It had multiple severe hardware "bugs," two of which destroyed orbiters and killed people. The second was not foreseen, though if you go through the same hindsight driven process you did, it should have been. No one identified a hypersonic bit of fluffy foam as a hazard until it proved itself.

First, we were talking space systems software as a broad category. Shuttle whipped almost everything's ass at software. It's well known for that. It was also very expensive. If you want REAL software *engineering* that's what it costs. They gave the world an example.

Second, the hardware problems were well known on Shuttle at both accidents. The first they literally flew outside of spec. Murder. Nothing less. The second, they had seen the problem and downplayed it culturally.

Again, budget. Fly or die. The deal they made to keep Shuttle alive had to hit certain "gates" or all funding would be cut immediately and sent to other programs. Any management structure put under that sort of budget pressure always chooses expedience over safety. But the problems were known. One was extremely well defined, the other was seen to be happening and nobody said "stop, we need to test this". There was a dent the size of a basketball in one of the boosters only two flights before Columbia. The culture said, "It's okay. Nothing bad will happen..."

Managers who attended the Flight Readiness Reviews all agree that one of the ONLY people willing to buck the trend of rubber-stamping everything was John Young. As in Bob Crippen and John Young. The first two guys to fly Columbia. Literally the only guy who would force questions. The very definition of a cultural problem, and a very bad one, if you can name the only guy who ever said, "Wait a minute..." in meetings so large the experts overflowed into the hall and the room was standing room only.

So yeah, the Shuttle *software* comment still stands. Hardware, they built a culture of "it's not that bad" and paid the price, twice.

The Ariane 501 mistake path is so far down in the weeds quality-wise, on a space system, it's really quite embarrassing. But it wasn't a human-spaceflight-rated system either, so it ultimately just cost money. Big boom, debris everywhere. No human bodies in the debris field. Lloyd's of London coughs up some cash.
 
Anyway... as all of the above relates to autonomous individual air transport being commonplace...

You really going to trust the coders at Uber to write code for a little flying human slap-chop carrying you across town as a gust front blows in?

Like I've said before, I'll hop aboard anything fully automated that flies the day after EVERY member of the development team has been aboard it (or one of them, however many there are), in bad weather.

If even one member of that engineering team won't fly on it, I won't either.
 
Do you apply that same standard to piloted airplanes?

Nauga,
who knows a guy...

Probably should. But there's not too many people making airplanes warning their family to stay off of all of them these days. :)
 