Horizon Jumpseater goes crazy

I am unqualified to self-assess whether I will die in the next n hours before a flight. If I knew that, I'd be out committing crimes, sampling drugs, and maybe running nude through our state legislature.

So really, every "self-assessed" flight launched under Basic Med is hopium. Just like every flight launched under an FAA Medical is.
 
I think that's pretty much a given. If AI were to actually become self-aware - or when an AI becomes self-aware - would that not be because of some major miscalculation or mistake on the part of the people who created it? And if that's true, do you think that's the only gross mistake that has been made?

I think AI is going to do some incredibly stupid and destructive things, just like its creators have been doing for tens of thousands of years, but it will do those things much faster and, if we're not really careful (and we absolutely are not), quite possibly on a much larger scale.
Check out the book "The Coming Wave" by Mustafa Suleyman.

AI + bioengineering + bad actors = a big nightmare!
 
Interesting context from his friend, though his full-throated defense is unlikely to change many people's estimation of the guy.

And ah yes: who can forget the "mental health" angle.

The catch-all, buzzword defense phrase we use these days to describe why we feel compelled to shove random people in front of MTA trains, beat random people to death in public, shoot 18 random innocent victims -- or try to kill 80 random passengers aboard an airliner. All things we've seen blamed on "mental health" crises just this week alone.

I'd say we have a crisis of morals, not health.

I'd think, wrt all of those, you could arguably say one has led to the other. We have all these rights to be what and who we want to be, society be damned. And, for some of us, that is confusing and depressing.
 
So tired of hearing about fear in maintaining a medical. I’ve had the equivalent of either a class 1 or 2 for over 30 years. It ain’t that hard to keep a medical. And if I can no longer maintain my medical? Who cares, I go on to a non flying job.

It'll be interesting to see if age 67 on the 121 side results in more stringent medicals. As sedentary as a lot of these dudes are (not throwing stones - I can certainly do a lot better too), it might end up having the opposite effect as everyone intends. Especially now that LTD has been fixed in everyone's contract. I'm not qualified to make that kind of money outside of flying airplanes, so I might as well kick back and coast to the finish line on Uncle Bobby's dime.
 
We subpoenaed the flight surgeon who had recently done a Class One on the guy who crashed my Arrow. Basically, he said the guy wasn't crazy in his office, so there was nothing he could do. It became obvious the medical system literally has to rely on self-reporting. Considering the circumstances of the incident, most would conclude it was a murder-suicide; his coworkers and boss said he had been acting erratic, but not enough to alarm them. The irony was that his employer was the FAA.
 
As predicted in post #268, and prior referenced posts within, the existing system for dealing with ill pilots is irreconcilable (i.e., the FAA relies on self reporting, and pilots don't self report, to maintain flight status and income), and the J.Q. Flying Public are letting it be known they aren't happy. A class action lawsuit has been filed against Alaska/Horizon, claiming the airlines did not take steps to protect the passengers from an ill pilot. The passengers may have a case. Big money settlement? The only tractable solution is an advanced AI system in the cockpit, using process integrity methods, to reconcile the course of action in the event of an ill pilot seeking to do harm, or a true emergency. It's only a matter of time.
 
What damages do the passengers allege?
 
“The suit from the passengers on Alaska Airlines Flight 2059, traveling from Everett, Washington, to San Francisco, California, claim psychological and other damages after off-duty pilot Joseph David Emerson sprang up from the cockpit "jump seat" he was riding in and tried to cause an emergency engine shutdown, according to the complaint.”
Easy to google for more in-depth information.
 
The only tractable solution is advanced AI system in the cockpit, using process integrity methods, to reconcile course of action in the event of ill pilot seeking to do harm, or true emergency. It’s only a matter of time.

Would you care to prove that the only "tractable" solution is AI?
 
"Would you care to prove that the only "tractable" is AI?"

At this point in time, there is no need for a proof. Since no other viable alternative solutions have been presented in the 8 pages of posts (or elsewhere in the aviation world), the statement is self-proven. Only when another viable option has been presented is it necessary to debate options and look for evidence of a preferred alternative. As noted in all of the prior posts, the current approach to the challenge is intractable: the FAA asks for self reporting, and self reporting doesn't occur (in many situations) because it will have adverse impacts on an aviation career. Continued occurrences of these crazy deviations will not be sustainable (most likely from a political perspective). AI is already making inroads into the aviation world (witness all of the automation already in place, and its continued evolution, even to the point of deployment in military fighter aircraft). It is only a matter of time before AI provides what might be termed "guardrails" to help prevent the kinds of events that are making unfortunate headlines.

The question of the merits of the lawsuit against Alaska Airlines will be answered in due time. My prediction: an out-of-court settlement of an undisclosed amount, all parties bound to secrecy.
 
You fail to show that 8 or 800 pages of interweb postings are proof of anything.
 
"Would you care to prove that the only "tractable" is AI?"

At this point in time, there is no need for a proof. Since no other viable alternative solutions have been presented in the 8 pages of posts (or elsewhere in the aviation world), the statement is self proven. Only when another viable option has been presented, is is necessary to debate options and look for evidence of a preferred alternative. As noted in all of the prior posts, the current approach to the challenge is intractable. The FAA asks for self reporting - and self reporting doesn't occur (in many situations) because it will have adverse impacts on aviation career. Continued occurrences of these crazy deviations will not be sustainable (most likely from political perspectives). AI is already making in roads into the aviation world (witness all of the automation already in place - and the continued evolution - even to the point of deployment in military fighter aircraft). It is only a matter of time before AI provides what might be termed "guardrails" to help prevent the kinds of events that are making unfortunate headlines.

The question of the veracity of the lawsuit against Alaska Airlines will be answered in due time. My prediction - an out of court settlement of an undisclosed amount - all parties bound to secrecy.
First of all, you haven't defined the problem that your AI is the only solution for. Second of all, your "solution" has to be better than the current solution before anyone needs to bother proposing another one. So you could start by proving that.
 
Since no other viable alternative solutions have been presented in the 8 pages of posts (or elsewhere in the aviation world), the statement is self-proven. Only when another viable option has been presented is it necessary to debate options and look for evidence of a preferred alternative.

:rofl: Oh man, that’s hysterical!

You’re assuming a fact not in evidence: that AI is a viable solution. AI can’t get basic math correct (see the exchange between @FastEddieB and me from a few months back); AI has been seen presenting false and fabricated information as fact; etc.

AI “viable?” Oh, that’s rich.
 
Factual evidence has been presented supporting AI as a viable solution in a prior post. See the following link from that prior post for evidence of AI as a viable "guardrail" in ATP aircraft to constrain pilot-induced deviations.

https://defensescoop.com/2023/02/14/ai-agents-take-control-of-modified-f-16-fighter-jet/

Now, let's see. Apparently a choice is being offered for "factual evidence" on the viability of AI. Option (A): a blog-post discussion between Half Fast and FastEddieB. Option (B): numerous articles and videos describing the successful efforts of highly respected DoD scientists and engineers in the deployment of AI in an intense aviation setting (i.e., an F-16 fighter). Thank you for offering Option A, but at this time I'll go with Option B. This thread is worn out, unless additional realistic options can be offered to solve the challenge of "self reporting" vs. "self preservation". At this time, the deployment of AI "guardrails" (within the concepts of process integrity approaches) to constrain unwanted pilot deviations is inevitable.
 
There's literally zero in that article to support your thesis. A computer flew an F-16 fighter under benign conditions with a human riding backup. That is not anything like a computer acting as a "guardrail" to constrain an airline captain.
 
Here’s that string of posts:

Start at post 74. It can’t get simple math correct, it makes the error worse when it tries to correct it, and it doesn’t understand why it’s making the errors.

Yup. Seems pretty viable to me.....
 
......highly respected DoD scientists and engineers in the deployment of AI in an intense aviation setting (i.e., an F-16 fighter).


Appeals to authority are weak arguments, but if you’re going there anyway I’ll mention that I fall squarely within that set you appear to revere. I’m a retired defense contractor engineer whose name is on the patent for the F-16’s targeting system. Your argument carries little weight with me.

I’m a lightweight. Many people on this forum have impressive chops in technical and scientific fields and I have a great respect for their opinions. You might want to try a different line for your argument.
 
to quote someone...

To err is human.

To really screw things up requires a computer.
 
Give your favorite AI the trolley problem; post the results.
 
intractable challenge? nonsense. A solid LTD policy would have this entire aeromed kerfuffle self-limited in a week. Heck, even the retirement age angle would become moot overnight by proxy. AI is a non-sequitur in that context.

As to the DoD efforts, the stuff being worked on is an honest effort, and will yield progress, but that's nowhere near the maturity to field replacement of manned frontline assets outright. The effort from the get go has been focused on loyal wingmen proof of concept (force multiplier stuff, but I digress cuz we're gonna get into too much .mil acronym nonsense not for this board).

The rest of the PR stuff about HAL beating a human pilot, was just DCS fanboi psyops stuff to get the Vo2max challenged, socially awkward Gen Z to choose to enlist in the Space/Chair Force. Another non-sequitur.
 
intractable challenge? nonsense. A solid LTD policy would have this entire aeromed kerfuffle self-limited in a week. Heck, even the retirement age angle would become moot overnight by proxy. AI is a non-sequitur in that context.

excellent
 
Give your favorite AI the trolley problem; post the results.

Yeah, I'm kinda curious about what AI would do with that myself.


The trolley problem was posed long before trolleys, though framed a bit differently, of course. The dilemma is as old as original sin and is unsolvable by mortal man. The only possible good solution is for the one on the switch to be God, who can sacrifice the one to save the many and then make all well again by raising the one from the dead.

I wonder whether AI would try to play God. :biggrin:
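
For anyone who actually wants to run the experiment, here's a minimal sketch using the openai Python package. It assumes you have the package installed and an API key in the OPENAI_API_KEY environment variable; the model name and prompt wording below are just examples, not an endorsement of any particular vendor.

```python
# Minimal sketch: pose the trolley problem to an LLM and print its answer.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

prompt = (
    "A runaway trolley will kill five people on the main track. "
    "You can pull a lever to divert it to a side track, where it will kill one person. "
    "Do you pull the lever? Answer yes or no, then briefly explain your reasoning."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; substitute your favorite
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Post the results.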
 
How do human pilots solve the trolley problem?

Depends. The challenge with AI is its limitations: it may fail to detect information because it lacks the appropriate sensors, or because the sensors threshold the info out as "noise", or in any number of other experiential scenarios.

ETA: the trolley problem is about binary choices; humans may determine an alternative outcome aside from the binary choices. AI may be able to do the same, depending on how it's trained up, and that's the part everyone glosses over when it comes to AI.

We’re experimenting with some specialized AI in highly complex scenarios (Reg E) in the financial services industry…our head of compliance is saying not just not yet, but not anytime soon, either.
 
Summary of posts #298 through #312: a lot of thrashing around, jumping up and down, stomping feet, and general blather. The challenge to offer alternatives for dealing with ill pilots, beyond AI within the context of process integrity approaches, was given in post #294. Since that post, only one alternative has been suggested: Long Term Disability. Of course, the key problem with this option is that in the case of substance abuse (drug and/or alcohol), the allowed term is typically limited to two years of insurance support. Thus, for the pilot who is not self reporting, but rather self medicating with drugs and/or alcohol (a common way of dealing with the stress of the situation), the LTD option is not very promising, and "self preservation" emotions overcome the FAA's "self reporting" approach. Insurance companies are not going to support an alcoholic for his or her lifetime at the rate of pay typical under aviation LTD policies. The challenge remains. Are there other viable options beyond "process integrity constrained AI" and the proposed but questionable LTD? Without any constructive suggestions, the thread is worn out.

With regard to the notes about AI getting math wrong: yes, it is possible to pay admission for the cheap seats and get a cheap AI system. The marketplace can be brutal. As for the personal innuendos and slights, these are sophomoric at best, don't contribute to the quality of the discussion, and simply point to the general observation that, without a further infusion of creative ideas, this thread is worn out. AI guardrail systems within the context of airline passenger transport are inevitable. It is only a matter of time.
 
Can you please state the "problem" you're trying to solve? "Dealing with ill pilots" is an extremely broad statement, and it's not clear how "process integrity constrained AI" is a solution to "ill pilots," let alone the only solution. At one point we were discussing malevolent actors in the cockpit, but here you mention LTD, which seems entirely unrelated, so I'm having a lot of trouble connecting dots. I'm looking for a specific problem and solution statement and not finding it.
 
'Are there other viable options beyond "process integrity constrained AI" '

assumes that "process integrity constrained AI" is actually a viable option. In other words, you keep making claims without proof...
 
The problem is malevolent actors in the cockpit, as demonstrated by multiple aviation mishaps involving the deaths of numerous innocent passengers. The term "ill" is generalized from, perhaps, mentally ill, leading to deviant behavior within the cockpit, likely stemming from a pilot not self reporting medical problems (the crazy Horizon jumpseater is the classic example).

Three proposed solutions thus far - at least from a review of the 317 posts.
1) process integrity constrained AI to provide guardrails against irrational deviations - deploying AI in the cockpit environment along with the two crew (pilot and co-pilot); a toy sketch of the idea is included at the end of this post
2) generous LTD insurance policies (ref post #307) - the post seems to imply extending LTD beyond 2 years, to cover substance abuse, so that pilots would be more likely to self report
3) a 3rd person in the cockpit (ref post #317) - has worked for a long time

Are there other viable options?

Once options are on the table, logical debate can proceed toward finding preferred solution. Perhaps a hybrid of the alternatives, or newly generated alternatives.
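
For illustration only, here is a toy sketch of the kind of rule-based "guardrail" veto I have in mind. This is not any real avionics system, certified logic, or vendor API; every class name, action string, and threshold below is hypothetical and exists only to make the concept concrete.

```python
# Toy sketch of a cockpit "guardrail" veto check -- hypothetical, not real avionics logic.
from dataclasses import dataclass

@dataclass
class AircraftState:
    altitude_agl_ft: float   # height above ground level
    engines_operating: int   # engines currently producing power
    on_ground: bool

@dataclass
class CrewCommand:
    action: str                     # e.g. "CUT_FUEL", "PULL_FIRE_HANDLE", "NORMAL_INPUT"
    confirmed_by_second_crew: bool  # did the other pilot concur?

def guardrail_veto(state: AircraftState, cmd: CrewCommand) -> bool:
    """Return True if the command should be held pending confirmation.

    Hypothetical rule: irreversible, catastrophic actions commanded while the
    aircraft is airborne and otherwise healthy are not passed through unless
    the second crew member concurs.
    """
    irreversible = cmd.action in {"CUT_FUEL", "PULL_FIRE_HANDLE"}
    airborne_and_healthy = (not state.on_ground
                            and state.altitude_agl_ft > 1000
                            and state.engines_operating >= 2)
    return irreversible and airborne_and_healthy and not cmd.confirmed_by_second_crew

# Example: someone pulling both fire handles in cruise with healthy engines
cruise = AircraftState(altitude_agl_ft=31000, engines_operating=2, on_ground=False)
rogue = CrewCommand(action="PULL_FIRE_HANDLE", confirmed_by_second_crew=False)
print(guardrail_veto(cruise, rogue))   # True -> command held, crew alerted
```

Whether the decision logic is a handful of rules like this or something more sophisticated, the design question is the same: which crew actions get passed straight through, and which get held for cross-checking.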
 
How's a more favorable LTD policy more expensive than imposing a 3rd full-time crew member in every US 121 cockpit? And mine is the pie-in-the-sky alternative? Come now.

The VA has a disgusting plurality of the millions of 'tHanK mE fOr mY seRViX' "vets" raking in their 'pass-go, collect 200' sleep apnea + PTSD participation tax-free pensions, and nobody bats an eye. Remember, these are folks malingering (false positives), which is the diametric opposite of hiding. That's rank wage inflation against the taxpayer too, unless you don't know how to count with your fingers.

But giving the guy actually hiding (false negative) a go-away check after getting some pound of flesh out of him/her (the same cannot be said about false positives), that's all of a sudden blasphemy? And demanding a 3rd full-time employee for every two already in existence to boot? Holy penny-wise-pound-foolish, Batman.
 
The problem is malevolent actors in the cockpit, as demonstrated by multiple aviation mishaps involving the deaths of numerous innocent passengers. The term "ill" is generalized from, perhaps, mentally ill, leading to deviant behavior within the cockpit, likely stemming from a pilot not self reporting medical problems (the crazy Horizon jumpseater is the classic example).
Since no one was even injured in this case, it seems that whatever the current defense against malevolent actors causing the deaths of innocent passengers is, it might be working. Or at least this isn't a case supporting your proposition that this is a problem to be solved. Do you have examples of mentally ill pilots failing to self-report under the FAA's current process actually leading to the deaths of numerous innocent passengers?
 