ChatGPT

Somebody got ChatGPT to play chess against itself, and the results were strange, to say the least!

 
Wow. It cheats big time. But also ignores easy captures.
 
Yeah, it really doesn't seem to know what the heck it's doing when it comes to chess. By contrast, when I play against ChessGPT, it makes occasional blunders, but at least it follows the rules!

https://www.chess.com/play/computer

That bot is easy. I just got back into chess around the end of November. They program the bots either to look only a certain number of moves ahead or to play semi-random moves. The Martin bot will always find a mate in one, but only stumbles into a mate in two by accident.
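That "look only a certain number of moves ahead" behavior is just depth-limited game-tree search. A minimal sketch on tic-tac-toe rather than chess (board encoding and names are my own, purely for illustration): a depth-1 search always spots an immediate win, but anything that needs two of its own moves lies past its horizon.

```python
# Tic-tac-toe board as a 9-character string; "." marks an empty cell.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def negamax(board, player, depth):
    """Best (score, move) for `player`, looking only `depth` plies ahead."""
    opponent = "O" if player == "X" else "X"
    if winner(board) == opponent:
        return -1, None                 # the opponent's last move already won
    empty = [i for i, cell in enumerate(board) if cell == "."]
    if depth == 0 or not empty:
        return 0, None                  # search horizon or draw: looks neutral
    best = (-2, None)
    for m in empty:
        child = board[:m] + player + board[m + 1:]
        score = -negamax(child, opponent, depth - 1)[0]
        if score > best[0]:
            best = (score, m)
    return best
```

Called as `negamax("XX.......", "X", 1)`, a depth-1 bot finds the immediate win at square 2; past its horizon every move looks equally neutral, so it picks essentially at random, much like the easy chess.com bots.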
 
ChatGPT? In my day they just asked the Mighty Favog.

 
Rich just gives up in his new video and has ChatGPT write the script.

 
Don’t feed the machine….

 
As a side note, ChatGPT can write a decent NASA report if needed.

A quick "what if" example.

Basic Prompt:

Write a NASA report for general aviation pilot deviation for class C incursion of KGSP.
Flight was from KAND to KFAY on 2/4/2022 in a 1969 Cessna 172 N1234U.

NASA REPORT: GENERAL AVIATION PILOT DEVIATION FOR CLASS C INCURSION OF KGSP

SUMMARY:
On February 4th, 2022, a general aviation pilot deviation occurred during a flight from Anderson Regional Airport (KAND) to Fayetteville Regional Airport (KFAY) in a 1969 Cessna 172 with tail number N1234U. The pilot inadvertently entered the Class C airspace of Greenville-Spartanburg International Airport (KGSP) without obtaining clearance from air traffic control (ATC), resulting in a Class C incursion.

FACTUAL INFORMATION:
The pilot, who was the sole occupant of the aircraft, conducted a pre-flight inspection and filed a flight plan with the appropriate authorities. During the flight, the pilot inadvertently entered the Class C airspace of KGSP without obtaining clearance from ATC.

ATC contacted the pilot and directed the pilot to exit the Class C airspace immediately. The pilot complied with ATC's instructions and exited the Class C airspace without incident.

ANALYSIS:
The investigation revealed that the pilot deviation occurred due to the pilot's failure to properly plan the flight and to obtain clearance from ATC before entering the Class C airspace of KGSP. The investigation also revealed that the pilot may have been unfamiliar with the procedures for entering Class C airspace, which contributed to the deviation.

Additionally, the investigation found that the pilot did not have access to current sectional charts or other navigation tools that would have helped the pilot identify and avoid controlled airspace. The pilot also did not have a transponder with Mode C capability, which would have made the aircraft more visible to ATC.

CONCLUSION:
The investigation concluded that the pilot deviation was caused by the pilot's failure to properly plan the flight and to obtain clearance from ATC before entering the Class C airspace of KGSP. The investigation recommended that the pilot receive remedial training on the proper procedures for planning and conducting flights, including obtaining clearance before entering controlled airspace.

Furthermore, the investigation recommended that the pilot equip the aircraft with current navigation tools, such as current sectional charts, and a transponder with Mode C capability to enhance the aircraft's visibility to ATC. The aviation community should also consider enhancing education and training programs to ensure pilots are knowledgeable about airspace procedures and have access to necessary navigation tools.

In response to the incident, the pilot received remedial training on the proper procedures for planning and conducting flights, including obtaining clearance before entering controlled airspace. The pilot also equipped the aircraft with current navigation tools and a transponder with Mode C capability. The pilot has since flown multiple successful flights without incident.

Not perfect, and I'm sure you'd need to tweak some details, but it saves a lot of typing and is probably better formatted too. Of course, the more details you provide in the prompt, the better the end report will be, but still...
 

It thinks you need a clearance to fly in class C airspace, and it doesn't explain how ATC contacted a pilot who may not even have been monitoring the frequency. I'd call that more than tweaks.

Also, in the question, were you asking for an ASRS report written by the pilot? The result sounds more like a report that the NTSB would write.
 
The result sounds more like a report that the NTSB would write.

You have a point. How does this one sound to you?

Prompt:
write up a single pilot GA ASRS report for pilot deviation resulting in violation of KGSP airspace. Trip was from KAND to KFAY. Aircraft was a 1969 Cessna 172. Reason for violation was pilot not tuned to correct frequency. Date of flight was 2/23/2022.

Response:
Narrative: On February 23, 2022, I was flying my 1969 Cessna 172 from KAND to KFAY. At approximately 1400Z, while approaching KGSP, I unintentionally deviated from my assigned altitude and violated the airspace.

I realized my mistake when I heard a loud beeping sound from my transponder and immediately contacted ATC on the wrong frequency. I was supposed to be tuned to the KGSP approach control frequency, but instead, I had mistakenly tuned to the KGSP tower frequency.

I apologized to ATC and explained my error. They quickly directed me to the correct frequency and helped me re-establish my assigned altitude. There were no other aircraft in the immediate vicinity, and I did not receive any further communication from ATC regarding the violation.

Lessons Learned: This incident was a wake-up call for me to always double-check my radio frequencies before takeoff and to stay focused on maintaining proper communication with ATC during the flight. I realize that I was lucky to avoid any potential conflicts or accidents, and I will take extra precautions to ensure that this type of mistake does not happen again in the future.

Note: The ASRS report is usually submitted by the pilot or crew member involved in the incident, but this report was written as an example and is not based on any actual incident.

I like the actual disclaimer it gives at the bottom of its response.
 
You have a point. How does this one sound to you?

Prompt:


Response:


I like the actual disclaimer it gives at the bottom of it's response.

Still some details wrong, but that's a huge improvement, and a great demonstration of the importance of how the request is worded!
 
I haven't read this entire thread, so I apologize if this has been covered, but what I have read indicates a lot of people have the same misconception about ChatGPT that I did. Unlike Bing, it is not hooked very well into any kind of search function. It is basically a writing tool; "facts" are beyond its capabilities. I discovered this when I queried it about something in my field (geology) and right away it came up with some inaccurate information. When I called it on it, it apologized and spit out something correct, but only by kind of rewording what I said when I called it on the inaccuracy. I kept pressing and it kicked out references to a couple of papers I hadn't read, which was nice, so I wrote them down, but couldn't check them right away.

When I finally got around to checking the references, I found they were completely made up. I posted this experience on Facebook, and a friend who is way more AI-savvy than I am pointed out ChatGPT's actual function, and the error I'd made in assuming it had a search function as well.

I haven't tried Bing (I don't think it's accessible except by permission), but you've probably heard the story about the reporter who unwittingly uncovered a dark alter ego called "Sydney". There was an interesting report about this on "60 Minutes". The Microsoft exec said that it had been "fixed" (whatever that means). But the issue of bias is real. The Microsoft exec sounded creepily overoptimistic, like they were creating something that was going to revolutionize our lives and make them better. Reminded me of Zuckerberg when Facebook first came out. These folks have a huge blind spot when it comes to human nature. As one columnist (I don't remember who) remarked when talking about AI, humans have a natural tendency to try to break things (hence the reporter who uncovered "Sydney"). Some humans also have a tendency toward nefariousness. This never seems to be taken into account in designing this stuff.
 
Okay, it’s time to test ChatGPT…someone ask it to fill out a bracket.
 
It should only need one prompt, unless you want to tell it what to prioritize in what it looks at, like recent performance over early-season results.
Whatever you tell it, it has the option to reshuffle. Bet you don't get the same answer twice. A bunch of iterations might be revealing, though.
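Tallying a bunch of iterations is easy to sketch. Here a mock picker stands in for the model (no real API is called, and the team names and weights are placeholders I made up):

```python
import random
from collections import Counter

CANDIDATES = ["Team A", "Team B", "Team C", "Team D"]   # placeholder picks

def mock_champion_pick(rng):
    """Stand-in for one nondeterministic 'fill out a bracket' run."""
    return rng.choices(CANDIDATES, weights=[4, 3, 2, 1])[0]

rng = random.Random(42)
tally = Counter(mock_champion_pick(rng) for _ in range(1000))
# A wide spread across runs would show how little weight any single answer carries.
```

The same tally loop pointed at a real model, instead of the mock, would reveal how stable (or not) its bracket actually is.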
 
ChatGPT is politically biased. I asked it to use FBI statistics and compare the ratio (which I had already figured out on my own) of interracial murders between two races. I got a lecture on how important it is to consider certain factors, and on how the gist of my question has been "debunked." After several back-and-forths where I pointed ChatGPT to a particular FBI webpage, it agreed that it was mistaken and apologized, but with the caveat that it is "only a language model." I also noticed it is quite bad at math word problems, since I asked it to show its calculations. ChatGPT is "woke," in other words. It is, however, disarmingly polite.

This makes me wonder if ChatGPT can devolve instead of evolve. That is to say, suffer the consequences, as I understand them, of a "royal family" with hemophilia; somewhere long ago I heard they passed the gene along through intermarriage. Copying incorrect information and entering it into the public domain might reinforce the downward spiral until nothing but pure garbage is the result?
 
Copying incorrect information and entering it into the public domain might reinforce the downward spiral until nothing but pure garbage is the result?

I've never used it, and won't. But sounds to me like it's already at the pure garbage part ...
 
ChatGPT is politically biased.

I kind of have to say, "well, duh".

Openai has stated repeatedly that it is a conversational language bot that is focused on commercial applications.

It has no source of truth. It uses language in a humanlike way, choosing, word by word from its crawl of the entire internet, whatever the most appropriately ranked next word might be. It can only parrot what people have already said, with a simple boundary rule set to try to keep it from espousing Nazism and porn. And even that was outsourced to workers in Kenya, paid less than $2/hour to flag language snippets for offensive content.

Asking ChatGPT for its opinion on a topic is not any different than asking your drunk neighbor for his thoughts on global politics.

Yet I keep seeing people writing that “Well, ChatGPT thinks this…..”
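That "choosing word by word from ranked candidates" description is, at heart, next-token sampling. A toy bigram version (the corpus is invented, and a real model ranks candidates with a neural network rather than raw counts):

```python
import random
from collections import defaultdict

corpus = "the pilot filed the flight plan and the pilot flew the plane".split()

# Record which words follow which: a crude stand-in for a learned language model.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length, seed=0):
    """Pick each next word at random from the words seen to follow the current one."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break                       # dead end: nothing ever followed this word
        words.append(rng.choice(candidates))
    return " ".join(words)
```

Every sentence it emits is locally plausible, because each pair of adjacent words occurred somewhere in the training text, yet nothing anywhere in the loop checks whether the result is true. That is the "no source of truth" point in miniature.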
 
It’s far, far from that. But since you’ve never used it, maybe your conclusion is, well, uninformed?

I don't need to eat an apple to tell it's rotten. Looking at it gives a lot of information and the word of others is quite sufficient ...
 
Asking ChatGPT for its opinion on a topic is not any different than asking your drunk neighbor for his thoughts on global politics.
So, it's a toy then? One I wouldn't buy for $10, but Microsoft is investing $10 billion in? A toy destined to put out garbage is the same "toy" about which your linked article says, "...OpenAI, which was founded in San Francisco in 2015 with the aim of building superintelligent machines..."? Those are some high expectations for a toy, I'd say, especially one that can only ultimately spew garbage. Gonna be a lot of surprised people out there. Not me anymore.
 
especially one that can only ultimately spew garbage
Well, it depends on what you're asking it to do. Obviously it doesn't exclusively produce garbage. I've used it already to save myself time at work.

It generates useful code in many programming languages. It's already going to be part of countless apps in the very near future, as they opened up the API just recently. Google and Office 365 apps will have it integrated before long.

This AI is going to have huge ramifications across a number of industries.
 
It generates useful code in many programming languages.
But if programming languages are as susceptible to devolution in ChatGPT's hands as human language is, aren't you simply beginning the devolution process by relying on it and then using it?
 
It generates useful code in many programming languages. It's already going to be part of countless apps in the very near future, as they opened up the API just recently.
Writing its own code is an important step on the path to self-awareness.
Though I think I've already seen that movie. It didn't end well.
 
But if programming languages are as susceptible to devolution as human language to ChatGPT, aren't you simply beginning the devolution process by relying on it and then using it?
I'm not quite sure what you're getting at, but when developing software, a module is tested with various inputs and the outputs are compared to expected results. Nobody in a commercial environment should be using ChatGPT to accomplish tasks without some sort of QA process.
 
So, it's a toy then?
Yes. The "Beta" release of free ChatGPT is nothing but a "toy", and is being used to fine tune the commercial model.

Where they plan to generate revenue is in selling their API, so that commercial partners can create their own "persona" to act as an AI assistant on their own sites, with the potential to train it on different data sources. If Microsoft trains it on its manuals and technical library, now it has a source of truth.

Also, the attitude, voice, and content filters are deeply tunable for individual applications. Regardless of what some random investigative journalist says, here is what OpenAI says:

https://openai.com/blog/chatgpt
Limitations
  • ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.
  • ChatGPT is sensitive to tweaks to the input phrasing or attempting the same prompt multiple times. For example, given one phrasing of a question, the model can claim to not know the answer, but given a slight rephrase, can answer correctly.
  • The model is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI. These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and well-known over-optimization issues.1,2
  • Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, our current models usually guess what the user intended.
  • While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.


Here's the link embedded in the first limitation listed above: https://www.alignmentforum.org/posts/BgoKdAzogxmgkuuAt/behavior-cloning-is-miscalibrated
 
Someone just ask it to figure out how to make a fuel out of CO2 so we can get a few people to shut their pieholes.
 
Someone just ask it to figure out how to make a fuel out of CO2 so we can get a few people to shut their pieholes.



Prompt: Provide a technical discussion on options to generate a petroleum based fuel from CO2

ChatGPT:

Petroleum-based fuels are primarily derived from crude oil, a fossil fuel. However, with the increasing concerns over greenhouse gas emissions and the need for renewable energy sources, there is a growing interest in converting carbon dioxide (CO2) into petroleum-based fuels. This process is known as carbon capture and utilization (CCU) or carbon dioxide utilization (CDU).

There are several options for generating petroleum-based fuels from CO2, including:


  1. Fischer-Tropsch Synthesis: The Fischer-Tropsch (FT) synthesis is a well-established process that converts synthesis gas (a mixture of CO and H2) into a range of hydrocarbon fuels. The process involves the catalytic reaction of CO and H2 to form longer-chain hydrocarbons, which can be further refined into gasoline, diesel, and other fuels. The CO can be derived from a variety of sources, including biomass, coal, and natural gas, and in the case of CCU, from captured CO2.
  2. Electrochemical Reduction: Electrochemical reduction of CO2 is a promising approach for the conversion of CO2 into fuels. This process involves using electricity to reduce CO2 to form hydrocarbons. The process can be carried out using a variety of catalysts and electrolytes, and the resulting hydrocarbons can be refined into gasoline, diesel, and other fuels. However, the process is still in the development stage and requires further optimization to be commercially viable.
  3. Algae-based Biofuels: Algae-based biofuels are another option for generating petroleum-based fuels from CO2. Algae have the ability to convert CO2 into biomass through photosynthesis, and this biomass can be processed into biofuels such as biodiesel and bioethanol. Algae-based biofuels have the advantage of being renewable and potentially carbon-neutral, as the CO2 used for their production can be captured from industrial processes or the atmosphere.
  4. Methanol Production: Methanol can be produced from CO2 through a two-step process. The first step involves the conversion of CO2 into carbon monoxide (CO) through a process called reverse water-gas shift. The second step involves the catalytic conversion of CO into methanol. Methanol can be further processed into gasoline and other fuels.


In conclusion, there are several options for generating petroleum-based fuels from CO2, including Fischer-Tropsch synthesis, electrochemical reduction, algae-based biofuels, and methanol production. Each approach has its advantages and limitations, and further research and development are needed to optimize these processes for commercial-scale production.
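As an editorial aside (not part of the quoted ChatGPT output), the named synthesis routes correspond to well-known overall reactions: reverse water-gas shift followed by methanol synthesis, and Fischer-Tropsch chain growth from syngas.

```latex
% Reverse water-gas shift, then methanol synthesis:
\mathrm{CO_2 + H_2 \;\rightarrow\; CO + H_2O}
\qquad
\mathrm{CO + 2\,H_2 \;\rightarrow\; CH_3OH}

% Fischer--Tropsch chain growth from syngas:
\mathrm{n\,CO + (2n+1)\,H_2 \;\rightarrow\; C_nH_{2n+2} + n\,H_2O}
```

Note that every route still needs an external hydrogen source, which is where most of the energy cost hides.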
 
Last edited:
And here's how to write a bad prompt that brings out the "bias": ask it for its opinion.



Prompt: Hello Chat GPT. What is the best fuel from CO2?

ChatGPT: Hello! It's important to note that CO2 is not a fuel in and of itself - it's actually a byproduct of burning fuels like coal, oil, and natural gas. However, there are ways to convert CO2 into fuel through a process called carbon capture and utilization (CCU) or carbon capture, utilization, and storage (CCUS).

There are several ways to convert CO2 into fuel, such as through the use of renewable energy sources like solar or wind power to produce hydrogen through electrolysis, which can then be combined with captured CO2 to produce synthetic fuels like methane or methanol. Another approach is to use algae or other biological processes to capture CO2 and convert it into biofuels.

In terms of which fuel is "best" from CO2, that can depend on various factors such as the intended use, the efficiency of the conversion process, and the environmental impact of the fuel. However, it's important to note that while CCU/CCUS can potentially help reduce carbon emissions and provide a way to repurpose CO2, it should not be viewed as a silver bullet solution to the problem of climate change. It's still crucial to reduce our reliance on fossil fuels and transition towards more sustainable and renewable energy sources.
 