ChatGPT

I’m guessing you aren’t teaching organic chemistry to pre-meds.

Contrary to myth, org chem was never intended to be a weed-out class, but it was always interesting to see how many students started sophomore year as pre-med and ended it as pre-law.

No, these are MS and PhD level students.
 
It was my understanding that modern academia doesn't really care about literacy and genuine knowledge as long as the check doesn't bounce. The average level of literacy I encounter daily from those with undergraduate degrees would seem to support my claim, unfortunately.

Also, those kids need to get off my lawn.
Everything you’ve said and more has been said for generations.

https://www.bbc.com/worklife/articl...ple-have-always-complained-about-young-adults
 
There are lots of things that can only be done by a human today, but could likely be done just as well or better by an artificial human. A "customer success" agent is only as capable as the knowledge they have of your product. Suddenly, AI like this could consume every email ever written in your company, every Slack message, and every support case. It would very quickly become by far more effective at quickly getting customers to resolution.

Content writing is another. Companies pay an army of people to write crap on the internet to draw views. This AI, today, can already do this better than the humans could.

Some jobs it will replace. Most jobs it will likely supplement, making you more effective. I use ChatGPT (well, the paid OpenAI version of it) already, all throughout my daily work - primarily to help me handle my communications with humans. Rarely do I want it to help me program; I'm good at that. But I can waste a lot of time "brain blocked" writing the perfect e-mail, and it busts through those brain blocks for me like they were nothing.
I wonder if it's been used for speech-writing yet.
 
I'm also wondering if it is able to cite its sources?
 
We ourselves have only been transmitting "intelligent" signals for 130 years or so. Imagine a similar civilization a billion light years away. What are the odds that their "intelligent transmission" phase would have been timed, a billion years ago, such that we'd now be able to receive it?

There may be many many forms of sentient life in the universe, but the odds against any two finding each other are astronomical (pun intended).

A billion light years away puts it about 400 times further away than the Andromeda galaxy. Radio signals from an intelligent civilization that far away would be undetectable - if we were even looking in the right frequency range, and if they even use RF signals. I would posit that we would have face-to-face contact with said civilization before we detected their signals, since if they didn't destroy themselves, they would be at least a billion years more advanced than we are. And with a billion years of progress, I am sure they would have found a way to bend the laws of physics for interstellar travel. Of course, at a billion light years away, they would have many, many closer worlds and galaxies to explore before picking some random planet in a random galaxy that is a billion years behind them in tech.
 
I fed ChatGPT into the old Eliza/Doctor app to see what would happen.

I remember Eliza from the 70s. I was at U of Fla and they had it running as an "app" under TSO on an IBM 370. Pretty damned impressive at the time. As long as you stayed on the rails, you could have some interesting conversations. It would probably appear to the public as AI today. However, that term is very loosely (and incorrectly) used. As clever as Eliza was, it wasn't AI.
 
I remember Eliza from the 70s. I was at U of Fla and they had it running as an "app" under TSO on an IBM 370. Pretty damned impressive at the time. As long as you stayed on the rails, you could have some interesting conversations. It would probably appear to the public as AI today. However, that term is very loosely (and incorrectly) used. As clever as Eliza was, it wasn't AI.
Reminds me of Dr. Sbaitso from Creative Labs as well. "Chatbot Therapist" lol.
 
A billion light years away puts it about 400 times further away than the Andromeda galaxy. Radio signals from an intelligent civilization that far away would be undetectable - if we were even looking in the right frequency range, and if they even use RF signals. I would posit that we would have face-to-face contact with said civilization before we detected their signals, since if they didn't destroy themselves, they would be at least a billion years more advanced than we are. And with a billion years of progress, I am sure they would have found a way to bend the laws of physics for interstellar travel. Of course, at a billion light years away, they would have many, many closer worlds and galaxies to explore before picking some random planet in a random galaxy that is a billion years behind them in tech.


I was exaggerating to make the point.

The odds of two civilizations transmitting and receiving on the correct frequency and mode at the correct time for one to hear the other - given the random variables of how many light years apart they are and how each progresses technically - make it very, very unlikely that they would ever detect one another.

Consider that during the brief time we've communicated using RF waves, we have already used broadband spark gap, AM, AM-USB or AM-LSB, FM, PSK, and a variety of digital modes, plus we've used spread-spectrum frequency hopping. Who knows what's next?

If there is another transmitting civilization out there somewhere, chances are their transmissions sailed past us a few hundred years ago and now they're onto something else we haven't thought of yet.

The odds of intelligent life existing elsewhere are about 50:50 depending upon whose science you trust (see https://news.columbia.edu/life-intelligence-universe-earth-bayesian-statistics ). The odds of having the correct overlap in time to make one detectable by another are awfully slim.
 
chances are their transmissions sailed past us a few hundred years ago and now they're onto something else we haven't thought of yet.

More likely, a few million or billion years ago or in the future...

That your example was "a few hundred years" reveals our inherent limited vision as a species. I'm a geologist, so I think in millions regularly (just not in relationship to my bank account).

If they are out there and know about us, they are smart enough to keep their distance.
 
Not to mention the fact that for isotropic emissions you have the inverse square law to deal with. Even if those signals passed by us in the insanely short time span (relative to the time span in which intelligent life feasibly could have evolved) we’ve had technology to theoretically receive them, were they strong enough for us to detect?
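The inverse-square point can be put in rough numbers. This is a back-of-the-envelope sketch; the megawatt isotropic transmitter and the 300 m collecting dish are illustrative assumptions, not figures from any real survey:

```python
import math

LY_M = 9.4607e15          # metres per light year
P_TX = 1e6                # assumed transmitter power, watts (isotropic)
d = 1e9 * LY_M            # 1 billion light years, in metres

# Inverse-square law: power spreads over a sphere of radius d
flux = P_TX / (4 * math.pi * d**2)   # W/m^2 arriving here

# A hypothetical 300 m dish collects roughly this aperture area
area = math.pi * (300 / 2) ** 2      # m^2
received = flux * area               # watts actually collected

print(f"flux: {flux:.2e} W/m^2, received: {received:.2e} W")
```

The collected power works out to dozens of orders of magnitude below the thermal noise floor of any receiver, which is the whole point: at that distance, an unaimed broadcast is simply gone.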
 
Not to mention the fact that for isotropic emissions you have the inverse square law to deal with. Even if those signals passed by us in the insanely short time span (relative to the time span in which intelligent life feasibly could have evolved) we’ve had technology to theoretically receive them, were they strong enough for us to detect?

If they knew where they wanted the transmission to go, they could avoid some of the inverse-square problem by encoding the info in a laser or maser (microwave-frequency) beam. Both still spread, though not as much.

Heck, if their targeting information is a little out of date, right now they could be beaming detailed instructions for building a warp drive into the rubble at the bottom of a valley at Arecibo.
 
If they knew where they wanted the transmission to go, they could avoid some of the inverse-square problem by encoding the info in a laser or maser (microwave-frequency) beam. Both still spread, though not as much.

Heck, if their targeting information is a little out of date, right now they could be beaming detailed instructions for building a warp drive into the rubble at the bottom of a valley at Arecibo.

I’m not sure how they would ever know where to aim a non-isotropic radiator, given that any signal we could have produced either didn't exist at the time they sent their signal or won't reach them for hundreds/thousands/millions of years after we produced it, but yes, if they somehow did, it's much less of a factor.

RF is hardly my forte. How wide is the tightest beam transmission and what is the attenuation on the scale of thousands to millions of light years? Is there any real effective difference or is this a 10^-17 vs 10^-20 sort of deal?
 
I’m not sure how they would ever know where to aim a non-isotropic radiator, given that any signal we could have produced either didn't exist at the time they sent their signal or won't reach them for hundreds/thousands/millions of years after we produced it, but yes, if they somehow did, it's much less of a factor.

RF is hardly my forte. How wide is the tightest beam transmission and what is the attenuation on the scale of thousands to millions of light years? Is there any real effective difference or is this a 10^-17 vs 10^-20 sort of deal?

All valid points. I posted mostly because I'm still mourning the loss of the Arecibo radio telescope.
 
All valid points. I posted mostly because I'm still mourning the loss of the Arecibo radio telescope.
Yeah that was a real shame. Iconic device that produced some amazing work. Really makes you wonder what we could achieve if we spent a little less time killing each other and a little more time collaborating. Call me an idealist.
 
Here's an unusual application of ChatGPT:

Joshua Browder, the CEO of the New York-based startup DoNotPay, created a way for people contesting traffic tickets to use arguments in court generated by artificial intelligence.

Here's how it was supposed to work: The person challenging a speeding ticket would wear smart glasses that both record court proceedings and dictate responses into the defendant's ear from a small speaker. The system relied on a few leading AI text generators, including ChatGPT and DaVinci.

The first-ever AI-powered legal defense was set to take place in California on Feb. 22, but not anymore.

As word got out, an uneasy buzz began to swirl among various state bar officials, according to Browder. He says angry letters began to pour in.

https://www.npr.org/2023/01/25/1151...-to-argue-in-court-then-came-the-jail-threats

This lawyer discusses not only the potential legal issues, but also the practical issues:

 
I asked ChatGPT to "tell me a story about a dragon that couldn't breathe fire."

The story it gave me was very reminiscent of the story of Rudolph the Red-Nosed Reindeer! LOL
 
There's actually a big debate going on about how ChatGPT is so entirely biased toward the left. The left thinks it's peachy of course. The pro-liberty side doesn't like it.

But it just goes to show how this tool isn't thinking at all, it's just regurgitating what's programmed into it.
 
There's actually a big debate going on about how ChatGPT is so entirely biased toward the left. The left thinks it's peachy of course. The pro-liberty side doesn't like it.

But it just goes to show how this tool isn't thinking at all, it's just regurgitating what's programmed into it.
Meh. Not sure what the “big debate” is.

OpenAI is a private company. They can train their bot on any selection of text they decide to, and the market will decide if it is valuable. Same for Twitter “bias”, post or pre Elon. None of this rises to congressional investigation.

Plenty of opportunities for Parler or Truth Social to start up their own AI chatbot if they think there is a return in that space.


OpenAI LP is governed by the board of the OpenAI nonprofit, comprised of OpenAI LP employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Reid Hoffman, Will Hurd, Tasha McCauley, Helen Toner, and Shivon Zilis.

Our investors include Microsoft, Reid Hoffman’s charitable foundation, and Khosla Ventures.
 
This thread has turned gross. Well done for taking a huge advancement in technology and finding a way to make it political. Good job.

Come on.. do better.
 
I’ve just played with ChatGPT a little bit. One frustration is that the answers often border on “Maybe yes, maybe no”. Here are some recent queries:

“Is melatonin useful for insomnia?”

“In an aircraft engine, does lower rpm mean less wear?”

“Are squats good for SI joint pain?”

In each case, the answer bordered on “It depends”. Part of the issue, I suppose, is that many things in life are not black and white, and an AI needs to present both sides. Unlike Google, you have no idea of the provenance of the reply - for instance, whether a given claim comes from WebMD or from some alternative medicine site.

Another such was “Does using the word “provenance” in an online post come across as pretentious?”

The reply:

“It can be perceived as pretentious by some, depending on the context and audience. If the word is used in a casual setting or in a way that is not relevant to the topic, it could be seen as an attempt to sound sophisticated and could come across as pretentious to some readers. However, if the word is used in a professional or educational context where its definition and use are appropriate, it may not be perceived as pretentious.”

In other words, maybe yes, maybe no.

Anyway, ChatGPT itself has a take on the dilemma:

“Why do your answers often seem wishy washy?”

As an AI language model, I aim to provide the most accurate and neutral information possible. When answering questions, I often include nuances and multiple perspectives to account for different interpretations or situations. This can sometimes come across as "wishy-washy," but it's meant to convey a balanced and comprehensive answer. However, I understand that sometimes a straightforward answer may be preferred, and I will try my best to provide that in the future.

Well, at least it will try to do its best!
 
I’ve just played with ChatGPT a little bit. One frustration is that the answers often border on “Maybe yes, maybe no”. Here are some recent queries:

“Is melatonin useful for insomnia?”

“In an aircraft engine, does lower rpm mean less wear?”

“Are squats good for SI joint pain?”

In each case, the answer bordered on “It depends”. Part of the issue, I suppose, is that many things in life are not black and white, and an AI needs to present both sides. Unlike Google, you have no idea of the provenance of the reply - for instance, whether a given claim comes from WebMD or from some alternative medicine site.

Another such was “Does using the word “provenance” in an online post come across as pretentious?”

The reply:

“It can be perceived as pretentious by some, depending on the context and audience. If the word is used in a casual setting or in a way that is not relevant to the topic, it could be seen as an attempt to sound sophisticated and could come across as pretentious to some readers. However, if the word is used in a professional or educational context where its definition and use are appropriate, it may not be perceived as pretentious.”

In other words, maybe yes, maybe no.

Anyway, ChatGPT itself has a take on the dilemma:

“Why do your answers often seem wishy washy?”

As an AI language model, I aim to provide the most accurate and neutral information possible. When answering questions, I often include nuances and multiple perspectives to account for different interpretations or situations. This can sometimes come across as "wishy-washy," but it's meant to convey a balanced and comprehensive answer. However, I understand that sometimes a straightforward answer may be preferred, and I will try my best to provide that in the future.

Well, at least it will try to do its best!
If you tell it to give you more precise answers, and give it more insight into how you want it to behave, it is quite good at it.

As an example, try something like "Simulate a general aviation mechanic, and let's have a conversation" - you'll notice that it behaves totally differently in how it responds. I often ask it to simulate a Ford technician, then ask it VERY precise questions about my truck, like whether a different part will bolt up or what tools I will need to take something apart. It provides shockingly accurate answers.

Basically it is a very generic personality, but it can simulate other personalities, so you're better off to steer it into who you wish it were before you start asking it questions.
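For those using the API rather than the web chat, the same "steer it first" trick amounts to setting a system message before the user question. A minimal sketch, assuming the official OpenAI Python client; the persona text, question, and model name here are all illustrative:

```python
# Build a chat transcript that establishes a persona before asking
# a precise question. Only the message construction runs offline;
# the actual API call is shown commented out.

def build_persona_messages(persona: str, question: str) -> list[dict]:
    """Return a messages list with a persona-setting system turn."""
    return [
        {"role": "system",
         "content": f"Simulate a {persona}. Give precise, specific answers."},
        {"role": "user", "content": question},
    ]

messages = build_persona_messages(
    "Ford truck technician",
    "Will an alternator from a 2015 F-150 5.0L bolt up to a 2018 model?",
)

# Uncomment to send (requires the openai package and OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o", messages=messages)
# print(reply.choices[0].message.content)
```

The design point is that the system turn shapes every subsequent answer, which is why the generic "it depends" personality largely disappears once a role is set.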
 
As a writer, I am already dismayed at the rampant illiteracy exhibited by much of today's youth. AI will simply exacerbate that illiteracy and hasten our descent into mindless, self-absorbed entertainment and eventual ennui.

Think of the flesh blobs riding the spaceship Axiom from the old movie WALL-E.

Think of social media, where one's point of existence is nothing more than getting a click as some sort of validation from a total stranger. We are becoming a civilization of circle-jerkers.

I love humanity, particularly the honest search for knowledge and freedom, the yearning to understand the universe and eventually get off this beautiful rock and roam the stars.

Yet, I believe we are doomed. We are living on borrowed time: internecine strife, loss of motivation, denigration of work and effort, geological events, and all sorts of astronomical dangers. A mile-wide asteroid would put us back into the stone age. A small solar hiccup would fry everything. A few nuclear air bursts would destroy all electrical grids.

I suspect that in the long run sentience is not a survival characteristic. It inevitably descends into self-destruction--and I'm not referring to climate nonsense. Hence another reason why, despite the age of the universe, we see no evidence of sentient life.

Or maybe I didn't sleep well and just need a cup of coffee.

Do you actually know any young people?

Put another way:
“The children now love luxury; they have bad manners, contempt for authority; they show disrespect for elders and love chatter in place of exercise. Children are now tyrants, not the servants of their households. They no longer rise when elders enter the room. They contradict their parents, chatter before company, gobble up dainties at the table, cross their legs, and tyrannize their teachers.”

― Socrates
 
Do you actually know any young people?

Put another way:
“The children now love luxury; they have bad manners, contempt for authority; they show disrespect for elders and love chatter in place of exercise. Children are now tyrants, not the servants of their households. They no longer rise when elders enter the room. They contradict their parents, chatter before company, gobble up dainties at the table, cross their legs, and tyrannize their teachers.”

― Socrates

I know young people and am a young person (25 is still "young", right?), and if you're under the impression that young people are as literate as previous generations, you're sorely mistaken.
 
I know young people and am a young person (25 is still "young", right?), and if you're under the impression that young people are as literate as previous generations, you're sorely mistaken.
I know far more than you. My students write far better than most of you.
 
Still making nonsensical claims
 
I know young people and am a young person (25 is still "young", right?), and if you're under the impression that young people are as literate as previous generations, you're sorely mistaken.
Go take a look at the studies on literacy in adults and children over the last three or four decades. You'll find it's much easier to find data that counters your argument than supports it. Reading and writing are actually much more common in the average person's life than they used to be. We are all carrying devices around in our pockets that make us do it constantly.

For an illiterate, life would have been easier a few decades ago without so much written technology. You could talk to a person. You could call them. Now it is expected that you interact over a device that requires you read/write for even the most basic of services.
 
I know far more than you. My students write far better than most of you.

Well, I'm not about to argue with someone who says they know better than me! :) Besides, my info is outdated. I haven't been in college for about four years now, and almost everything else seems to be written in "Text-English".
 
Another thing: students taking a higher-level science course are a great sample for the average young person today.
 
Yes, they are.
No, they aren’t. Throw out the 40% that don’t attend any college, then take out the 30% that don’t take science classes.
 