Horizon Jumpseater goes crazy

With regard to the observations about AI math being incorrect: yes, it is possible to pay admission for the cheap seats and get a cheap AI system. The marketplace can be brutal. As for personal innuendos and slights, they are sophomoric at best, contribute nothing to the quality of the discussion, and simply point to the general observation that, without a further infusion of creative ideas, this thread is worn out. AI guardrail systems within the context of airline passenger transport are inevitable. It is only a matter of time.

Let me suggest that rather than relying on websites like Defensescoop for your information on AI, you spend some time reading publications of actual engineering professional societies. The IEEE has published many articles on AI, and for starters you might want to take a look at the October 2021 issue of IEEE Spectrum. There are several AI articles in that issue, and I recommend you start with "The Turbulent Past and Uncertain Future of AI" and "7 Revealing Ways AIs Fail."

A couple of major challenges with AI are "catastrophic forgetting" and "brittleness," both of which can easily crash an airplane. Catastrophic forgetting is the tendency of an AI to entirely and abruptly forget information it previously knew after learning new information. These systems have a terrible memory. Brittleness is the inability of an AI system to respond appropriately to slight changes in patterns it has previously learned. AIs struggle with "mental rotation" of images, something that is trivial for human intelligence (I show you a picture of an object and tell you it's a beer glass; then I show you a picture of it lying on its side, and you can still tell me it's a beer glass. AIs often can't.).
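To make "catastrophic forgetting" concrete, here is a deliberately tiny sketch of my own (not from the Spectrum articles, and using an off-the-shelf scikit-learn linear classifier rather than a deep net): train it incrementally on digits 0-4, then keep updating the same model only on digits 5-9, and watch the accuracy on the first group collapse.

[code]
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

old = y_tr < 5                      # "old knowledge": digits 0-4
new = y_tr >= 5                     # "new material":  digits 5-9
old_test = y_te < 5

clf = SGDClassifier(random_state=0)

# Phase 1: learn the old material, incrementally
clf.partial_fit(X_tr[old], y_tr[old], classes=np.arange(10))
for _ in range(20):
    clf.partial_fit(X_tr[old], y_tr[old])
acc_before = clf.score(X_te[old_test], y_te[old_test])

# Phase 2: keep training the SAME model, but only on the new material
for _ in range(20):
    clf.partial_fit(X_tr[new], y_tr[new])
acc_after = clf.score(X_te[old_test], y_te[old_test])

print(f"accuracy on digits 0-4 before the new lessons: {acc_before:.2f}")
print(f"accuracy on digits 0-4 after the new lessons:  {acc_after:.2f}")
[/code]

The exact numbers depend on my arbitrary training choices, but the before/after gap is the point: nothing in the update process protects what was already learned.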

In discussing brittleness, the article cites a few examples:
"Fastening stickers on a stop sign can make an AI misread it. Changing a single pixel on an image can make an AI think a horse is a frog. ... Medical images can get modified in a way imperceptible to the human eye so that AI systems misdiagnose cancer 100 percent of the time."

Regarding math errors, this isn't just a problem in ChatGPT. It seems to be (at least so far) an inherent weakness in AI systems. UC Berkeley's Dan Hendrycks states, "AIs are surprisingly not good at mathematics at all. You might have the latest and greatest models that take hundreds of GPUs to train, and they're still just not as reliable as a pocket calculator." UC Berkeley researchers trained an AI on hundreds of thousands of math problems with step-by-step solutions, then tested it using 12,500 high school math problems. The result? The AI achieved only 5% accuracy.

Still think AI is viable? Still want it making life-critical decisions in a cockpit?

Look, people have been working on AI for decades. Despite the current hype, this isn't a new field. While there's been progress, and AI can perhaps assist with a few things and be used as one more tool in the box, it simply hasn't arrived when it comes to protecting human life. If you read what experts are actually writing in real engineering publications, instead of relying on pop-press hype, you'll see that it has a long, long way to go.
 
How about a 3rd person in the cockpit. Worked for a long time…

Yeah, it did.....

But that was then. These days, airlines are struggling a bit to put two people in the cockpit, let alone three. And for the purpose of thwarting a malevolent actor, how's that work? Do two votes overrule the captain? Do we arm them all and have a 3-way shootout? Knife fight?

This could make a great new episode of Air Disasters.... :devil:
 
The effort from the get go has been focused on loyal wingmen proof of concept (force multiplier stuff,....

Yep. Nothing new. Can't go into details, but about a decade ago I led a LockMart team writing a proposal to the Army for using semi-autonomous drones in that role with Army helicopters. Part of the concept was letting the helo stand off and direct drones into closer proximity for weapon target designation or for intelligence gathering, considering the drones to be attritable assets.
 
Yep. Nothing new. Can't go into details, but about a decade ago I led a LockMart team writing a proposal to the Army for using semi-autonomous drones in that role with Army helicopters. ...
That's when you really, REALLY would not want those drones to figure out what's really going on. The opinions of the drones and the helo crew over who's an expendable asset might differ.
 
Yep. Nothing new. Can't go into details, but about a decade ago I led a LockMart team writing a proposal to the Army for using semi-autonomous drones in that role with Army helicopters. ...
It’s almost a reality today with the Apache / Gray Eagle hunter-killer teams. I suppose the autonomous feature would alleviate some of the workload for the Apache front-seaters, but right now they’re getting good results with the current version. 30-mile link range, and they just started controlling two UAVs from the Apache cockpit.
 
It’s almost a reality today with the Apache / Gray Eagle hunter-killer teams. ...

Correct. I had a little involvement with the development of the Arrowhead targeting system on the Apache, and I led the HW development of the seeker for the JAGM missile (upgrade of HELLFIRE). Even won Lockheed’s Nova award for that one. :biggrin:

In a hostile environment, if some other asset is doing laser target designation, the Apache can shelter behind terrain (for example), pop up and shoot a HELLFIRE from safety, then duck again. Without the other designator, to shoot without a constant clear line of sight the Apache would have to do a JAGM LOAL (“lock on after launch”) radar shot. That works, but might not be allowed under certain ROEs.
 
Let me suggest that rather than relying on websites like Defensescoop for your information on AI, you spend some time reading publications of actual engineering professional societies. ...
I love it when people who have depth-of-knowledge about a subject share what they know. :yes:
 
Suicide by pilot is an aviation event in which a pilot deliberately crashes or attempts to crash an aircraft as a suicide act, with or without the intention of causing harm to passengers on board or people on the ground. If others are killed, it may be considered a type of murder–suicide. A Bloomberg News study conducted in June 2022, focusing on crashes involving Western-built commercial airliners, revealed that pilot murder-suicides ranked as the second most prevalent cause of airline crash deaths between 2011 and 2020. Additionally, the study found that deaths resulting from pilot murder-suicides increased over the period from 1991 to 2020, while fatalities due to accidental causes significantly decreased. Notably, if China Eastern Airlines Flight 5735 is confirmed to be an intentional act, it would indicate that deaths caused by intentional acts have surpassed all other causes since the beginning of 2021. Some listings with fatalities and a summary:

JAL Flight 350 - Fatalities 24 - Pilot engaged number 2 and 3 engines' thrust-reversers in flight. The first officer and flight engineer were able to partially regain control.

Royal Air Maroc Flight 630 - Fatalities 44 - Crashed intentionally by pilot.

SilkAir Flight 185 - Fatalities 104 - The United States' NTSB ruled the incident a suicide, but the Indonesian NTSC listed the cause as undetermined. A private investigation blamed a flaw in the plane's rudder.

EgyptAir Flight 990 - Fatalities 217 - After the captain left the cockpit, relief first officer Gameel Al-Batouti turned off the autopilot and engines while repeatedly saying "I rely on Allah" in Arabic, causing the plane to go into a dive and crash into the Atlantic Ocean. The reason for his inputs was not determined. The U.S. National Transportation Safety Board concluded that the crash was a suicide while the Egyptian Civil Aviation Authority blamed a fault in the elevator control system.

LAM Mozambique Airlines Flight 470 - Fatalities 33 - The pilot intentionally crashed the aircraft. The co-pilot was locked out of the cockpit, according to the voice recorder.

Malaysia Airlines Flight 370 - Fatalities 239 - The flight data recorder and CVR have never been recovered. Several possible explanations for the disappearance of the aircraft have been offered. A leading theory amongst experts is that either the pilot or the co-pilot committed an act of murder–suicide. A Canadian air crash investigator also believes the crash was a murder-suicide. Former Australian Prime Minister, Tony Abbott, has also stated that Malaysian officials always believed the crash to have been caused by a suicidal pilot. An investigation by the Malaysian government asserted that the plane was manually flown off course. The lead investigator was quoted as saying that the turns made by MH370 were "not because of anomalies in the mechanical system. The turn back was made not under autopilot but under manual control... We can confirm the turn back was not because of anomalies in the mechanical system”.

Germanwings Flight 9525 - Fatalities 150 - Co-pilot Andreas Lubitz, previously treated for depression and suicidal tendencies, locked the captain out of the cockpit before crashing the plane into a mountain near Prads-Haute-Bléone, Alpes-de-Haute-Provence, France.

China Eastern Airlines Flight 5735 - Fatalities 132 - On May 17 the Wall Street Journal reported that investigators believe the airliner was intentionally crashed. There was no response to repeated calls from air traffic controllers, Chinese investigators found no major safety problems, and China Eastern resumed flying the Boeing 737-800 in April after grounding its fleet for less than a month. Cockpit intrusion was also considered, but China Eastern said it was unlikely, as no emergency signal had been received.
 
Even farther back… will see if I can dig up the reference again.

It was way back, 70+ years ago:

"The original Fitts' List. Reprinted with permission from Human Engineering for an Effective Air Navigation and Traffic Control System, National Academy of Sciences, Washington, D.C., 1951. Reproduced courtesy of the National Academy Press." Please note the distinction of inductive vs. deductive reasoning in the Fitts' List.

Dr. (Lt.Col) Paul Fitts was the father of Human Factors. The Wired story is an interesting read. Among other things, standardization in the cockpit came out of America's experience with aircraft in WWII. Companies that built cars and refrigerators suddenly started building airplanes. Flaps and gear controls were frequently reversed from one design to the next, with predictable results. Other issues with human vision characteristics led to displays optimized for night flight, for example.
https://www.wired.com/story/how-dumb-design-wwii-plane-led-macintosh/

This pilot associate stuff has been in work for decades under different names by different agencies:
https://www.defense.gov/News/News-S...arch-laboratory-works-on-synthetic-teammates/
 
Suicide by pilot is an aviation event in which a pilot deliberately crashes or attempts to crash an aircraft as a suicide act, with or without the intention of causing harm to passengers on board or people on the ground. ...
Which of these happened in the US or with a US flight crew?
 
"Let me suggest that rather than relying on websites like Defensescoop for your information on AI, you spend some time reading publications of actual engineering professional societies."

Been there, done that. Ph.D. in engineering. 30+ years in a Big Ten engineering department. Published over 70 research papers in refereed journals, including IEEE, with a focus on simulation of complex multi-scale systems across multi-dimensional space-time domains, with soft boundaries and imprecise information. Supervised 18 doctoral students and 34 master's students. Multiple millions in external funding. Built two supporting research laboratories. Etc., etc. After 30+ years of playing the game, it becomes clear that the main mission for any research endeavor is to deliver the answer that the research sponsor wants, and is paying for. But even with this, there is good research done every day that can make a difference in people's lives.

During these many years, it has also become quite clear that bringing refereed journal publications into a PoA blog discussion is a critical error. Witness the third main element of Land Grant institutions, Cooperative Extension -- whose primary mission is to translate research into usable information and make it available in a more conversational and understandable form. Publications such as Defensescoop, or Breaking Defense, or the "front window view" of DARPA also serve this purpose. Once we start talking Structure from Motion, Supervised k-means Clustering, Fuzzy Inference Systems, Convolutional Neural Networks, Self Organizing Maps, Deep Learning, etc., and the math that drives these concepts, the eyes will glaze over and the big picture is lost (i.e., the potential for AI as a guardrail against intentional pilot deviations that can result in a catastrophic event). With respect to math, one of my favorite quotes from a math book on the topic of Topology is "A mathematician is like a blind man in a dark room, looking for a black hat that isn't there".

With respect to accuracy/precision, and the ability to do "math", there is active debate on the extent to which precise math must be completed in order to solve a problem and make a decision. In fact, theoretical mathematicians have been known to disagree on the answer to 1 + 1. We all know it is 2, but then we are satisfied with a given level of precision. From a theoretical math perspective, one can never get precise enough, because it is always possible to add one more zero at the end of the decimals, out to infinity, or perhaps two infinities, never finding the end of the numbers to be added, and thus never the final answer to the math problem. Crazy stuff - kind of like the Horizon jump seater. This realization can quickly lead to concepts such as fuzzy control systems for aircraft. Yes, there are refereed journal publications on the concept - Title: PID and Fuzzy Logic Pitch Attitude Hold Systems for a Fighter Jet. "This paper describes the design of fuzzy logic pitch attitude hold systems for an F-4 fighter jet under a variety of performance conditions that include approach, subsonic cruise and supersonic cruise. It expands the work from a previous paper on the same subject by adding more diverse plant cases, PID controller comparison, and an improved fuzzy logic design."
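Since the fuzzy-control reference will mean little without a picture of the machinery, here is a bare-bones sketch of the general idea (my own toy, not the cited paper's controller; the membership functions, rule table, and sign conventions are all invented for illustration): fuzzify pitch error and pitch rate into negative/zero/positive sets, fire a small rule table, and defuzzify into an elevator command.

[code]
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_elevator(pitch_error_deg, pitch_rate_dps):
    # Fuzzify the inputs into Negative / Zero / Positive sets
    e = {"N": tri(pitch_error_deg, -10, -5, 0),
         "Z": tri(pitch_error_deg, -5, 0, 5),
         "P": tri(pitch_error_deg, 0, 5, 10)}
    r = {"N": tri(pitch_rate_dps, -6, -3, 0),
         "Z": tri(pitch_rate_dps, -3, 0, 3),
         "P": tri(pitch_rate_dps, 0, 3, 6)}
    # Rule base: (error set, rate set) -> elevator command (deg, positive = nose-up)
    rules = {("N", "N"): +8, ("N", "Z"): +5, ("N", "P"): +2,
             ("Z", "N"): +3, ("Z", "Z"):  0, ("Z", "P"): -3,
             ("P", "N"): -2, ("P", "Z"): -5, ("P", "P"): -8}
    # Each rule fires with strength min(error membership, rate membership);
    # defuzzify with a weighted average of the rule outputs (centroid-style).
    num = den = 0.0
    for (ei, ri), out in rules.items():
        w = min(e[ei], r[ri])
        num += w * out
        den += w
    return num / den if den > 0 else 0.0

# Example: nose 4 degrees below target and still dropping -> nose-up command
print(fuzzy_elevator(pitch_error_deg=-4.0, pitch_rate_dps=-2.0))
[/code]

The appeal for the "imprecise information" crowd is that nothing in there needs an exact model of the airplane; the rules read like a pilot's own rules of thumb.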

Now back to the original topic. Are there any other alternatives beyond the 3 listed in post #318? Should the concept espoused by Sully,
https://time.com/3770203/captain-chesley-sully-germanwings/ be included in the array of options? Once an array of options is defined, it becomes possible to debate the pros and cons. Even if the Sully idea is included as option (4), I'll still put my money on process integrity constrained AI. But there may be more options?? Or is this thread worn out?
 
"Let me suggest that rather than relying on websites like Defensescoop for your information on AI, you spend some time reading publications of actual engineering professional societies."

Been there done that. Ph.D. in engineering. 30+ years in Big 10 engineering department. Published over 70 research papers in refereed journal publications, including IEEE, with a focus on simulation of complex multi-scale systems across multi-dimensional space-time domains, with soft boundaries and imprecise information. Supervised 18 doctoral students and 34 masters students. Multiple millions in external funding. Built two supporting research laboratories. Etc. Etc. After 30+ years of playing the game, it becomes clear that the main mission for any research endeavor is to deliver the answer that the research sponsor wants, and is paying for. But, even with this, there is good research that is done every day, that can make differences in life.

During these many years, it is also quite clear that bringing refereed journal publications into a PoA blog discussion is a critical error. Witness the third main element of Land Grant institutions, Cooperative Extension -- with a primary mission to translate research into usable information that makes it available in a more conversational and understandable form. Publications such as Defensescoops, or Breaking Defense, or the "front window view" of DARPA also serve this purpose. Once we start talking Structure from Motion, Supervised k-means Clustering, Fuzzy Inference Systems, Convolution Neural Networks, Self Organizing Maps, Deep Learning, etc. etc., and the math that drives these concepts, the eyes will glaze over, and the big picture is lost (ie, the potential for AI as a guardrail against intentional pilot deviations that can result in a catastrophic event). With respect to math, one of my favorite quotes from a math book on the topic of Topology is "A mathematician is like a blind man in a dark room, looking for a black hat, that isn't there".

With respect to accuracy/precision, and the ability to do "math", there is active debate on the extent of the need to complete precise math in order to solve a problem, and make a decision. In fact, theoretical mathematicians have been know to disagree on the answer to 1 + 1. We all know it is 2, but then we are satisfied with a given level of precision. From a theoretical math perspective, one can never get precise enough, because it is always possible to add one more zero at then end of the decimals, out to infinity, or perhaps two infinity, etc. Never finding the end of the numbers to be added, and thus the answer to the math problem. Crazy stuff - kind of like the Horizon jump seater. This realization can quickly lead to concepts such as fuzzy control systems for aircraft. Yes, there are refereed journal publications on the concept - Title: PID and Fuzzy Logic Pitch Attitude Hold Systems for a Fighter Jet. "This paper describes the design of fuzzy logic pitch attitude hold systems for an F-4 fighter jet under a variety of performance conditions that include approach, subsonic cruise and supersonic cruise. It expands the work from a previous paper on the same subject by adding more diverse plant cases, PID controller comparison, and an improved fuzzy logic design."

Now back to the original topic. Are there any other alternatives beyond the 3 listed in post #318? Should the concept espoused by Sully,
https://time.com/3770203/captain-chesley-sully-germanwings/ be included in the array of options? Once an array of options are defined, then it becomes possible to debate the pros and cons. Even if the Sully idea is included as option (4), I'll still put my money on process integrity constrained AI. But there may be more options?? Or is this thread worn out?
At least we've gone from only one possible solution to four, but I'm still not clear what specific problem we're trying to solve. Is it down to just suicidal/homicidal pilots now?
 
Which of these happened in the US or with a US flight crew?
That's not a lightweight observation to me. It's not a lot of data, so perhaps just a coincidence, but it might be worth trying to sort out why we're better at it. I can't believe it's that the US medical systems are any better at detecting homicidal people than anyone else is. Maybe something attracting healthier people? Or discouraging the homicidal ones? Since we're all just guessing, I'd guess making sure pilots are well paid, well trained, socially well respected (I know, right?), and somewhat protected in their position helps?

Or in other words, if it's not a coincidence or sampling error, are the problem people less likely to become pilots in the US for some reason, or are the problem people equally distributed and less likely to 'snap' for some reason in the US? I don't think it's an easy question, because the percentage of people that do this is so small.
 
We could install auto GCAS but reverse engineer it for anti-allahu-akbar inputs.

Alternatively, it can lock out the pilot inputs if HAL thinks it's sus, and require a two-key actuation to override, physically separated in the console from single human actuation. With the second key held by the politikal officer of course, in this case the cart donkey *cough* I mean stew *cough cough hairball* I mean flight attendant. Voilà, problem solved. :biggrin:
 
Suicide by pilot is an aviation event in which a pilot deliberately crashes or attempts to crash an aircraft as a suicide act, with or without the intention of causing harm to passengers on board or people on the ground. ...
I'll counter your fixation on suicide-caused accidents, half of which occurred over two decades ago, with some 2022 statistics from IATA. I'll post a brief snippet; you can access the link if you desire.

Fatality Risk

The industry 2022 fatality risk of 0.11 means that on average, a person would need to take a flight every day for 25,214 years to experience a 100% fatal accident. This is an improvement over the five-year fatality rate (average of 22,116 years).

The fatal accident rate improved to 0.16 per million sectors for 2022, from 0.27 per million sectors in 2021, and also was ahead of the five year fatal accident rate of 0.20.



Note that of the 158 fatalities in 2022, 132 were from the China Eastern crash alone. Four of the five fatal crashes involved turboprop aircraft, with China Eastern being the sole fatal jet hull loss.
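For anyone who wants to check the arithmetic behind that "25,214 years" figure, it's a few lines (my own back-of-the-envelope; the small difference from IATA's published number is just rounding of the 0.11 rate):

[code]
# Sanity check on how a fatality risk of 0.11 per million sectors maps to
# "one flight a day for ~25,000 years" (my arithmetic, not IATA's exact method).
fatality_risk_per_million = 0.11
risk_per_flight = fatality_risk_per_million / 1_000_000

flights_to_expect_one = 1.0 / risk_per_flight        # ~9.1 million flights
years_flying_daily = flights_to_expect_one / 365.25

print(f"{years_flying_daily:,.0f} years of one flight per day")
# ~24,900 years with the rounded 0.11; IATA's published 25,214 implies an
# unrounded rate of roughly 0.109 per million sectors.
[/code]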

The years-long and undoubtedly ruinously expensive rebuilding of cockpit automation you advocate, to solve a supposedly intractable and, apparently in your mind, urgent defect in commercial aviation, seems to be colored by your area of expertise rather than a grasp of the realities of the situation.
 
That's not a lightweight observation to me. It's not a lot of data, so perhaps just a coincidence, but it might be worth trying to sort out why we're better at it. I can't believe it's that the US medical systems are any better at detecting homicidal people than anyone else is. Maybe something attracting healthier people? Or discouraging the homicidal ones? Since we're all just guessing, I'd guess making sure pilots are well paid, well trained, socially well respected (I know, right?), and somewhat protected in their position helps?

Or in other words, if it's not a coincidence or sampling error, are the problem people less likely to become pilots in the US for some reason, or are the problem people equally distributed and less likely to 'snap' for some reason in the US? I don't think it's an easy question, because the percentage of people that do this is so small.
I think it's because the U.S. has a more robust general-aviation sector, which results in less need to rely on ab-initio training in order to hire enough pilots. My theory is that the time spent in the GA sector means that the nutcases are more likely to have filtered themselves out of the gene pool by the time they get hired by the airlines.
 
And that is changing… business is trying desperately to find a way to hire off the street. They may not be able to do it, but they’re trying…
 
OK. Picking up from post #336. Lindberg - it's time to get on board. Perhaps the current article from Avweb, pertaining to an expert from the University of North Dakota School of Aerospace Sciences (a leading institution, I think we can all agree), helps to clarify the "problem" and the discussion in the prior 340 posts.

https://www.avweb.com/aviation-news/pilot-mental-health-treatment-changes-urged/

Perhaps this article and the position it espouses present a 5th option to be added to the list. An interpretation of the article could be the use of advanced monitoring systems to monitor pilots (is this Big Brother?). In any case, it should be added, since it is another option being proposed.

Post #338 gets a nomination for the most creative option. If it had only included the term Bangkok, it would have been a clear winner. Always enjoy reading 2020's keen insights.

Risk discussion (3393RP) is a whole added layer of discussion, and a bit of background is in order. Risk Analysis tends to be integrated into four main elements: 1) Risk identification. 2) Risk assessment. 3) Risk evaluation/management. 4) Risk communication/perception. The risk has been identified (hopefully all are on board with this by now). Risk assessment is generally the process of combining the probability of some adverse event with the consequences of said event. Thus, this process is a two-part effort combined into one expression, usually using the convolution integral across a spectrum of probabilities and consequences. The probability of an adverse event is generally based on existing data, or research involving dose-response relationships. Results are generally expressed as some level of increased mortality or disease as a result of exposure to the risk. The results of element (2) Risk assessment have been conveyed by 3393RP in post #339.
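To put a toy number on that combination of probability and consequence, here is a deliberately schematic sketch (my own, with invented figures, not an aviation risk model): discretize the consequence scale, attach a probability to each level, and the "risk" falls out as the probability-weighted sum, the discrete cousin of the integral mentioned above.

[code]
import numpy as np

# Hypothetical event severities (e.g., fatalities per event) and the probability
# of an event at each severity -- every number here is invented for illustration.
consequence = np.array([0.0, 10.0, 50.0, 150.0, 300.0])
probability = np.array([0.90, 0.06, 0.025, 0.01, 0.005])
probability = probability / probability.sum()        # normalize to a distribution

expected_consequence = np.sum(probability * consequence)   # the single "risk" number
exceedance = np.cumsum(probability[::-1])[::-1]            # P(consequence >= level)

print(f"expected consequence: {expected_consequence:.1f}")
for c, p in zip(consequence, exceedance):
    print(f"P(consequence >= {c:>5.0f}) = {p:.3f}")
[/code]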

However, the results of step (2) Risk assessment are only part of the story - and actually a very SMALL part, because it is the remaining two key elements that drive decision making and the socio-political process. 3) Risk evaluation/management is the process of identifying alternatives, evaluating the risk associated with them and the costs associated with them, and seeking to make a decision on the best strategy to move forward. This is generally done within the corporate boardroom and political venues - way beyond those who work in the cockpit, or on the development of AI systems (and certainly PoA). And the types of decisions that are made within this realm are driven in large part by the 4th key element - Risk communication/perception. And it is precisely the 4th element that will drive the solution to the problem being discussed, as I alluded to in my original post on the topic. Risk perception is a very tricky subject. One can have a very low risk endeavor (flying on an airliner at 30,000 ft msl) and yet the perception of a catastrophic event in the "mind's eye" of John Q. Public (i.e., the consequences part of risk assessment - in this case falling 30,000 feet to one's certain demise) becomes too powerful to overcome the actual low risk value. Thus, the perception of risk becomes the "driver", and JQ Public demands the "crazy" pilot problem be solved, and thus the airline board rooms and politicians become involved and force the issue/solution. Doesn't matter what the actual risk is, the public drives the cart. Happens every day in America.

So, it would appear we are up to 5 candidate options (notwithstanding the awesome post #338).

1) process integrity constrained AI to provide guardrails against irrational deviations - deploying AI in cockpit environment along with two crew (pilot and co-pilot)
2) generous LTD insurance policies (ref post #307) - post seems to imply extending LTD beyond 2 years, to cover substance abuse, so that pilots would be more likely to self report
3) 3rd person in cockpit (ref post #317) - has worked for a long time
4) The Captain Sully approach (reference noted in prior post from Time article - more info available with google search)
5) The Hoffman approach (University of North Dakota - current Avweb article)

I will take some exception to post #339, that a process integrity AI system will be a massive "ruinously expensive rebuilding of cockpit automation". Actually, computers are ubiquitous in modern airliners, and the integration of new AI algorithms into the systems will be fairly simple and low cost.
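Just to make the guardrail idea concrete (and nothing more), here is a toy plausibility monitor of my own - a handful of hand-written rules, far short of the process integrity constrained AI being proposed and nowhere near certifiable avionics software - that flags a cockpit input that makes no sense for the current flight state and asks for confirmation before acting on it. The state fields, phase names, and action labels are all assumptions for the sketch.

[code]
from dataclasses import dataclass

@dataclass
class FlightState:
    altitude_ft: float
    phase: str              # e.g. "taxi", "climb", "cruise", "descent", "landing_roll"
    engine_fire: bool

def plausibility_check(state: FlightState, action: str) -> str:
    """Return 'allow', or 'confirm' when an input looks irrational for the
    current state and should require a second-crew or system confirmation."""
    shutdown_like = action in {"fire_handle_pull", "fuel_cutoff", "both_engines_idle"}
    if shutdown_like and not state.engine_fire and state.phase in {"climb", "cruise"}:
        return "confirm"    # killing healthy engines in flight: hold and confirm
    if action == "deploy_thrust_reverser" and state.phase not in {"landing_roll", "taxi"}:
        return "confirm"    # the JAL 350 scenario: reversers in flight
    return "allow"

# The Horizon jump-seat scenario, roughly: fire handles pulled in cruise, no fire.
print(plausibility_check(FlightState(31000, "cruise", False), "fire_handle_pull"))
# -> confirm
[/code]

A real system would obviously need far richer state, validated sensors, and a certification basis, but the shape of the check - "does this input make any sense right now?" - is the whole point of the guardrail argument.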

When I review these 5 options, in the context of airline board rooms where strategic decisions are made, budgets are considered, and politics are prevalent, I still lean toward option #1 as the single best option. Why? Option 2 just won't happen - insurance companies are not going to pay a person with mental health issues (in many cases linked with substance abuse) $9,000 a month (or you pick the high price number) for the rest of his/her life to do nothing but sit around (and perhaps abuse substances). Option 3 is less likely because, if anything, airline boardrooms want to see the number of pilots/crew go to zero, not three (which I completely disagree with, by the way - my opinion is that there needs to be two highly qualified humans and an AI system to serve as a guardrail, creating a process integrity constrained AI system). Option 4 is interesting, and is being advocated by a real leader in aviation, but once JQ Public hears that pilots with a known history of problems, such as depression/bipolar?/what else?, are being allowed to fly passenger aircraft, we jump right back to Risk Analysis - key element #4 (Risk Perception) - and there is a great probability that JQ Public would push back against this option, even though the actual risk may be determined to be quite low. With respect to Option #5, I'm not sure that the pilots of America will stand for invasive monitoring of their lives, such as hooking up to sensors to monitor their sleep patterns for indications of mental health problems. Interesting, but likely too invasive of privacy to be accepted by airline pilots, or perhaps any freedom-loving American. Now, is the thread worn down, or are there other options and ideas? If no more ideas, I rest my case. AI guardrails in airline cockpits are inevitable.
 
Rumors of thread death are greatly exaggerated (posts #335, 342).

When anyone of any rank/status/credentials says (highly advanced) software is the solution, all I can think of is MCAS MCAS MCAS MCAS (CAN'T SHUT THE DAMNED THING OFF! AAAAAAGH....)

When arguably the largest (Big 1) aerospace contractor on the planet can't do something that simple without killing hundreds? I rest my case.

Meantime, y'all enjoy the attached article from my WSJ subscription.
 

Attachments

  • Airline safety article today in WSJ.pdf
    3.5 MB
Yep, there’s times when I don’t feel like going into work either and I’d like to take six months off as a “mental health break.” But instead I get up at 4 AM and go to work every day. Don’t take pills, don’t take mushrooms, don’t need to go to a therapist to find coping mechanisms.
I submit that if you suffer from even intermittent clinical depression, you shouldn’t be flying commercially.
Find something else to do.

Ah yes, the "suck it up and get back to work" approach. Works great!
 
I was anticipating the MCAS debacle. It's not a good example for what is being discussed in this thread. The defective MCAS was the result of apparent misdirection by Boeing relative to FAA certification (see the headline and lead-in from an article on the debacle just below). One can imagine that lessons were learned about attempting to misdirect FAA certification as a regulated company (including substantial financial impacts). If this type of misdirection does occur, it doesn't matter whether it's AI in the cockpit, landing gear integrity, turbine blade defects, etc. - the consequences can be catastrophic. Given the nature of the transgression, and the fact that it could occur in any critical aviation system, the placement of blame on MCAS is a red herring.

Inspector General report details how Boeing played down MCAS in original 737 MAX certification – and FAA missed it: A report set to be released Wednesday by the Inspector General of the U.S. Department of Transportation concludes that Boeing deliberately played down the details of the flight control system that later helped bring down two 737 MAX jets so that the Federal Aviation Administration (FAA), responsible for certifying the new system, entirely missed its significance and danger.

 
OK. Picking up from post #336. Lindberg - it's time to get on board. ...
I'm not going to read that. Have you come up with a succinct definition of the problem you're trying to solve yet?
 
I haven’t had time to read the book about MCAS, but what Boeing did was plainly bungle the software. Sure, there were a lot of other things going on, but they screwed the pooch at the basic engineering level, plain and simple. Not a fish but an unfortunate canine.

One has to look at Aviation in the context of integrity, for lack of a better word. Same thing goes in medicine. Producers and implementers really do need to get things right, at every level. Cutting corners causes serious harm. Apologies in advance for preaching to the choir.

So, how can we the community or they the passengers *trust* something that is not only produced by demonstrably fallible humans but is devilishly difficult to qualify in the airworthiness sense?
 
Rumors of thread death are greatly exaggerated (posts #335, 342). When anyone of any rank/status/credentials says (highly advanced) software is the solution, all I can think of is MCAS MCAS MCAS MCAS (CAN'T SHUT THE DAMNED THING OFF! AAAAAAGH....) ...
:) Just because one of the largest aircraft manufacturers in the world couldn't retrofit a functional human assist in software, one that was simpler than an analog PID controller, without killing 346 people, destroying two aircraft, committing some felonies (I think?), and scaring tens of thousands, doesn't mean that we're not ready to let google chat fly an aircraft, does it? (That's rhetorical.)

We've had autopilots for almost a hundred years. We have automated language processing that makes a chat response, a TV reporter, a Cornell grad, and a random web poster indistinguishable, because they all speak with authority, and can construct a deck of cards argument based on beliefs pretending to be facts. But that doesn't mean we're ready to automate judgement of life safety systems yet.

(Apologies to Cornell grads...it's probably sampling error, but the ones I've met spoke with a complete lack of understanding of how silly they sounded. I'm certain they're excellent with horticulture.)
 
I was anticipating the MCAS debacle. It's not a good example for what is being discussed in this thread. The defective MCAS was the result of apparent misdirection by Boeing relative to FAA certification. ...


Boeing needs an AI ombudsman to detect deviation in the certification process and intervene.
 
Following up on Lindberg post #347 - asking "what is the problem?". Note: a quick review of the first page (+) of posts on this thread conveys my answer to the question posed by Lindberg. Posts are identified by user name and post #. Highlighted elements of these posts clearly illustrate "the problem". This is only a few of the many pages of posts. Continued inquiry regarding "what problem am I trying to solve" is clearly a deflection and denial; it is a very clear sign that this thread is worn out; and I rest my case for AI guardrails in the cockpit, which are inevitable -- so that there is less risk of another future thread like this on PoA.

TCABM. #13. October 22, Alaska Airlines Flight 2059 operated by Horizon Air from Everett, WA (PAE) to San Francisco, CA (SFO) reported a credible security threat related to an authorized occupant in the flight deck jump seat.

Zeldmman. #17. Probably not allow anyone in the cockpit except flight crew, who may or may not be off their meds as well

flyingiron. #19. The E175 has T handles in the overhead for each engine. Pulling the handle down cuts off fuel, hydraulics, and bleed air, effectively killing the engine. You can then rotate the handle left or right to discharge either of the two firebottles into the engine.

Ryanshort. #20. Imagine if the flight crew had been somewhat weaker… glad they were able to get him out.

Schmookeeg. #22. Sort of glad he pulled this crap from the jump seat and not at his usual crew station when on the clock. I have to imagine he'd have seen better success ruining his route instead of some other crew's. Jeez.

PaulS. #23. Attempted mass murder. Moron. Probably some type of mental breakdown.

wilkersk. #27. Dang! An Alaska pilot grabbing the fire-pull T-handles? That is beyond crazy-as-F! I'm on the edge of my seat waiting for the rest of the story!

midwestpa24. #30. We assume it was an intentional act? Was he under the influence of drugs or alcohol? Was he under stress or other factors? Was he attempting suicide? Basically, what was his motivation?

PaulS. #45. Sad story. Fortunately people going through a crisis usually don't think devious plans through. Unfortunately they do stuff like this. I really wish pilots who want to do things like this, or who decide to act out, would not do it in an airplane.

Vincent Becker. #48. Thankfully no one was hurt, but this is yet another example of the consequences of the FAA's awful policy on mental health. The FAA would rather incentivize a pilot with mental health issues to not seek help because the alternative is expensive and job jeopardizing.

Followed by pages of posts that continue along the lines of the posts noted above.
 
Highlighted elements of these posts clearly illustrate "the problem". This is only a few of the many pages of posts. Continued inquiry regarding "what problem am I trying to solve" is clearly a deflection and denial;
I'm not a statistics or risk analysis expert, but proportional response calculations almost certainly indicate that the incidence of malevolent acts in air transport cockpits, compared to the number of sectors flown annually over the last ten years, is many decimal points to the right of zero.

That isn't deflection or denial. It's just common sense. A safety system with the record of international air travel does not warrant the wholesale rearrangement of the status quo. The unintended consequences of such action would likely expose the flying public to greater risk than the supposed problem you assert, particularly since AI is nowhere near mature enough to be placed in the position you present. It won't be for a number of years that can't be accurately predicted today.
 
The unintended consequences of such action would likely expose the flying public to greater risk than the supposed problem you assert, particularly since AI is nowhere near mature enough to be placed in the position you present. It won't be for a number of years that can't be accurately predicted today.
:yeahthat:

Exactly! You have whacked the nail squarely upon its head.
 
Response to #356. "It's just common sense." As I noted in my post #342 - when it comes to the 4th major element of risk ANALYSIS, that being Risk Communication/Perception, there is NO such thing as "common sense". I agree completely that flying risk is "many decimal points to the right of zero". In fact, I used to use air travel as one of many case studies when teaching graduate-level classes about risk ANALYSIS. Risk analysis is much more than risk assessment (see my post #342).

The bottom line is that it doesn't matter what the risk assessment results are (i.e., how many decimal places to the right of zero); JQ Public will respond to their own "PERCEPTION" of the risk. Aviation professionals can jump up and down and scream at the sky that the risk of flying is much less than the risk of driving (for example, across the country). But people will make a decision based on THEIR perception of the risk, not what some expert is telling them. And there is a significant portion of JQ Public that has the PERCEPTION that the risk is too great. It is precisely this 4th main element of risk ANALYSIS that is so perplexing, and much research has gone into trying to figure out why people will demur at actual data and a rigorous risk assessment process involving the first 3 elements (again described in post #342). This risk perception research leads into the fields of psychology, sociology, social settings, a person's personal history and life experiences, whether the risk is forced upon someone or they can make the decision to accept it, and a multitude of additional factors that blend together and lead to the person making THEIR OWN decision(s). For example, "Yes, I know it is safer to fly according to your data and analysis, but I'm not going to fly anywhere, I'm driving." Risk perception is generally regarded as the least understood element of risk analysis, and yet it is "the bottom line" in terms of JQ Public, and that bottom line will lead to fiscal and political pressures to "Do something about what is happening with pilots that are not acting in a rational manner." In one of my earliest posts, I suggested that all it will take is one, maybe two, more of these types of events, and all of the associated unpleasant media coverage (especially if a US pilot is successful in bringing down an airliner), and the fiscal and political pressure will demand that a solution is implemented to reduce the risk of another unstable pilot attempting to bring down an aircraft. OK, my guess is 1 or 2 more "crazy" pilot incidents, maybe it will be 2 or 3, but it won't take many until substantial action is taken to manage the risk and recover the confidence of JQ Public in boarding an aircraft again. Again, I am convinced that said solution will involve an AI enabled system that provides guardrails against irrational behavior in the cockpit.

A pattern is beginning to emerge in this thread. I keep having to go back to prior posts to respond to repetitive posts that keep saying the same thing, over and over. While I appreciate and respect the positions that are being taken, and have refrained from diminishing any postings, my observation is that there is no new energy being introduced into the conversation, mainly in the form of new ideas. Creativity! Only negatives: "it can't be done." As a result, I contend that the thread is worn out.
 
Following up on Lindberg post #347 - asking "what is the problem?". Note: a quick review of the first page (+) of posts on this thread conveys my answer to the question posed by Lindberg. Posts are identified by user name and post #. Highlighted elements of these posts clearly illustrate "the problem". ...
I'm of the continued belief that anyone who says "clearly" a bunch of times is probably being anything but clear.
 
The bottom line is that it doesn't matter what the risk assessment results are (i.e., how many decimal places to the right of zero); JQ Public will respond to their own "PERCEPTION" of the risk. ...
Those members of JQ Public can make and have made their own risk analysis based on their absolutely valid observations. If they don't want to fly, that's their prerogative. The Germanwings suicide incident barely made a blip on international air travel.

As for "no such thing as common sense" in risk perception, you certainly have a blinkered view of normal life. Not everything must be quantified by rigorous examination.

And speaking of denial, you failed to address my quite correct assessment of the current state of AI and its unquantifiable path of maturity. It cannot provide what you champion.
 