Everything you’ve said and more has been said for generations. https://www.bbc.com/worklife/articl...ple-have-always-complained-about-young-adults
A billion light years away puts it about 400 times farther than the Andromeda galaxy. Radio signals from an intelligent civilization that far away would be undetectable - if we were even looking in the right frequency range, and if they even use RF signals at all. I would posit that we would have face-to-face contact with said civilization before we detected their signals, since if they didn't destroy themselves, they would be at least a billion years more advanced than we are. And with a billion years of progress, I am sure they would have found a way to bend the laws of physics for interstellar travel. Of course, at a billion light years away, they would have many, many closer worlds and galaxies to explore before picking some random planet in a random galaxy that is a billion years behind them in tech.
I remember Eliza from the 70s. I was at U of Fla and they had it running as an "app" under TSO on an IBM 370. Pretty damned impressive at the time. As long as you stayed on the rails, you could have some interesting conversations. It would probably appear to the public as AI today. However, that term is very loosely (and incorrectly) used. As clever as Eliza was, it wasn't AI.
I was exaggerating to make the point. Given the random variables of how far apart two civilizations are and how each progresses technically, the odds of one transmitting and the other receiving on the correct frequency and mode at the correct time are very, very slim - it is unlikely they would ever detect one another. Consider that during the brief time we've communicated using RF waves, we have already used broadband spark gap, AM, AM-USB or AM-LSB, FM, PSK, and a variety of digital modes, plus spread-spectrum frequency hopping. Who knows what's next? If there is another transmitting civilization out there somewhere, chances are their transmissions sailed past us a few hundred years ago and now they're onto something else we haven't thought of yet. The odds of intelligent life existing elsewhere are about 50:50 depending upon whose science you trust (see https://news.columbia.edu/life-intelligence-universe-earth-bayesian-statistics ). The odds of having the correct overlap in time for one to detect the other are awfully slim.
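That "overlap in time" argument is easy to put in toy numbers. A minimal Monte Carlo sketch - every figure here is invented for illustration (200-year transmission/listening windows, a 10-billion-year history, 1,000 light years apart), not from any study - shows why the temporal coincidence is the killer:

```python
import random

def overlap_probability(window_a_yr=200.0, window_b_yr=200.0,
                        distance_ly=1000.0, span_yr=10e9,
                        trials=200_000, seed=1):
    """Toy model: two civilizations switch on at random moments within a
    `span_yr` history.  A's signal reaches B `distance_ly` years after
    emission; B hears it only if B is listening when it arrives."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = rng.uniform(0, span_yr)           # A starts transmitting
        b = rng.uniform(0, span_yr)           # B starts listening
        arrive_lo = a + distance_ly           # first signal arrives at B
        arrive_hi = a + window_a_yr + distance_ly
        if arrive_lo < b + window_b_yr and b < arrive_hi:
            hits += 1
    return hits / trials

# With 200-year windows in a 10-billion-year history, the expected
# chance is roughly (200 + 200) / 1e10 = 4e-8 -- effectively never.
print(overlap_probability())
```

The analytic answer is just (sum of the two windows) / (span of history), so the simulation is overkill - but it makes the point that the windows, not the distance, dominate.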
More likely, a few million or billion years ago or in the future... That your example was "a few hundred years" reveals our inherently limited vision as a species. I'm a geologist, so I think in millions regularly (just not in relation to my bank account). If they are out there and know about us, they are smart enough to keep their distance.
Not to mention the fact that for isotropic emissions you have the inverse square law to deal with. Even if those signals passed by us in the insanely short time span (relative to the time span in which intelligent life feasibly could have evolved) we’ve had technology to theoretically receive them, were they strong enough for us to detect?
If they knew where they wanted the transmission to go, they could avoid some of the inverse-square problem by encoding the info in a laser or maser (microwave-frequency) beam. Both still spread, though not as much. Heck, if their targeting information is a little out of date, right now they could be beaming detailed instructions for building a warp drive into the rubble at the bottom of a valley in Arecibo.
I'm not sure how they would ever know where to aim a non-isotropic radiator, given that any signal we could have produced either didn't exist yet when they sent theirs or won't reach them for hundreds/thousands/millions of years after we produced it - but yes, if they somehow did, it's much less of a factor. RF is hardly my forte. How wide is the tightest beam transmission, and what is the attenuation on the scale of thousands to millions of light years? Is there any real effective difference, or is this a 10^-17 vs 10^-20 sort of deal?
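The scale of both effects is just the inverse-square law plus antenna gain. A rough back-of-the-envelope sketch, assuming purely illustrative figures (a 1 MW transmitter, and a dish gain around 73 dB, which is roughly what is quoted for Arecibo's planetary radar - both numbers are my assumptions, not anything from the thread):

```python
import math

LY_M = 9.4607e15  # metres per light-year

def flux_w_per_m2(power_w: float, distance_ly: float, gain: float = 1.0) -> float:
    """Received power density from the inverse-square law.
    gain=1 is an isotropic radiator; a dish multiplies flux by its gain."""
    d = distance_ly * LY_M
    return power_w * gain / (4 * math.pi * d ** 2)

# Illustrative only: 1 MW transmitter, ~73 dB dish gain (about 2e7).
for d_ly in (100, 1e4, 1e6):
    iso = flux_w_per_m2(1e6, d_ly)
    beamed = flux_w_per_m2(1e6, d_ly, gain=2e7)
    print(f"{d_ly:>9g} ly  isotropic {iso:.1e} W/m^2   beamed {beamed:.1e} W/m^2")
```

So the beam buys you a constant factor of ~10^7, while distance costs you the square - which is to say, yes, it ends up being exactly a 10^-17 vs 10^-20 sort of deal, only worse.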
All valid points. I posted mostly because I'm still mourning the loss of the Arecibo radio telescope.
Yeah that was a real shame. Iconic device that produced some amazing work. Really makes you wonder what we could achieve if we spent a little less time killing each other and a little more time collaborating. Call me an idealist.
Here's an unusual application of ChatGPT: Joshua Browder, the CEO of the New York-based startup DoNotPay, created a way for people contesting traffic tickets to use arguments in court generated by artificial intelligence. Here's how it was supposed to work: The person challenging a speeding ticket would wear smart glasses that both record court proceedings and dictate responses into the defendant's ear from a small speaker. The system relied on a few leading AI text generators, including ChatGPT and DaVinci. The first-ever AI-powered legal defense was set to take place in California on Feb. 22, but not anymore. As word got out, an uneasy buzz began to swirl among various state bar officials, according to Browder. He says angry letters began to pour in. https://www.npr.org/2023/01/25/1151...-to-argue-in-court-then-came-the-jail-threats This lawyer discusses not only the potential legal issues, but also the practical issues:
I asked ChatGPT to "tell me a story about a dragon that couldn't breathe fire." The story it gave me was very reminiscent of the story of Rudolph the Red-Nosed Reindeer! LOL
There's actually a big debate going on about how ChatGPT is so entirely biased toward the left. The left thinks it's peachy of course. The pro-liberty side doesn't like it. But it just goes to show how this tool isn't thinking at all, it's just regurgitating what's programmed into it.
Meh. Not sure what the “big debate” is. OpenAI is a private company. They can train their bot on any selection of text they decide to, and the market will decide if it is valuable. Same for Twitter “bias”, post or pre Elon. None of this rises to congressional investigation. Plenty of opportunities for Parler or Truth Social to start up their own AI chatbot if they think there is a return in that space. OpenAI LP is governed by the board of the OpenAI nonprofit, comprised of OpenAI LP employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Reid Hoffman, Will Hurd, Tasha McCauley, Helen Toner, and Shivon Zilis. Our investors include Microsoft, Reid Hoffman’s charitable foundation, and Khosla Ventures.
You're new here, so I'll just drop this reminder. https://www.pilotsofamerica.com/com...onduct-roc-update-effective-9-21-2018.113845/
This thread has turned gross. Well done for taking a huge advancement in technology and finding a way to make it political. Good job. Come on... do better.
I've just played with ChatGPT a little bit. One frustration is the answers often border on "maybe yes, maybe no." Here are some recent queries:

"Is melatonin useful for insomnia?"
"In an aircraft engine, does lower rpm mean less wear?"
"Are squats good for SI joint pain?"

In each case, the answer bordered on "it depends." Part of the issue, I suppose, is that many things in life are not black and white, and an AI needs to present both sides. Unlike Google, you have no idea of the provenance of the reply; for instance, which part is from WebMD and which is from some alternative medicine site.

Another such query was "Does using the word 'provenance' in an online post come across as pretentious?" The reply: "It can be perceived as pretentious by some, depending on the context and audience. If the word is used in a casual setting or in a way that is not relevant to the topic, it could be seen as an attempt to sound sophisticated and could come across as pretentious to some readers. However, if the word is used in a professional or educational context where its definition and use are appropriate, it may not be perceived as pretentious." In other words, maybe yes, maybe no.

Anyway, ChatGPT itself has a take on the dilemma:

"Why do your answers often seem wishy-washy?"

"As an AI language model, I aim to provide the most accurate and neutral information possible. When answering questions, I often include nuances and multiple perspectives to account for different interpretations or situations. This can sometimes come across as 'wishy-washy,' but it's meant to convey a balanced and comprehensive answer. However, I understand that sometimes a straightforward answer may be preferred, and I will try my best to provide that in the future."

Well, at least it will try to do its best!
If you tell it to give you more precise answers, and give it more insight into how you want it to behave, it is quite good at it. As an example, try something like "Simulate a general aviation mechanic, and let's have a conversation" and you'll notice that it behaves totally differently in how it gives responses. I oftentimes ask it to simulate a Ford technician, then ask it VERY precise questions about my truck, like whether a different part will bolt up, what tools I will need to take something apart, etc. It provides shockingly accurate answers. Basically it has a very generic personality, but it can simulate other personalities, so you're better off steering it into who you wish it were before you start asking it questions.
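For anyone scripting this instead of using the web UI, the same trick is just a "system" message sent ahead of the question. A minimal sketch of the widely used role/content payload shape - the `persona_chat` helper and its wording are made up here for illustration, and the actual API call varies by client library and version, so only the payload is shown:

```python
def persona_chat(persona: str, question: str) -> list[dict]:
    """Build a chat payload that pins the model to a persona before asking."""
    return [
        # The system message steers who the model "is" for the whole chat.
        {"role": "system",
         "content": f"Simulate a {persona}. Answer precisely and specifically."},
        # The user message then asks the narrow question.
        {"role": "user", "content": question},
    ]

messages = persona_chat("Ford technician",
                        "Will a 2015 F-150 alternator bolt up to my 2013?")
print(messages[0]["content"])
```

The point is just ordering: the persona instruction goes in before the first question, exactly as the post describes doing by hand.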
Do you actually know any young people? Put another way: “The children now love luxury; they have bad manners, contempt for authority; they show disrespect for elders and love chatter in place of exercise. Children are now tyrants, not the servants of their households. They no longer rise when elders enter the room. They contradict their parents, chatter before company, gobble up dainties at the table, cross their legs, and tyrannize their teachers.” ― Socrates
I know young people and am a young person (25 is still "young", right?), and if you're under the impression that young people are as literate as previous generations, you're sorely mistaken.
The old geezer knows more about being 25 than the 25 year old because he teaches 25 year olds. This is what happens to you when you're in academia for too long.
Go take a look at the studies on literacy in adults and children over the last three or four decades. You'll find it's much easier to find data that counters your argument than supports it. Reading and writing are actually much more common in the average person's life than they used to be. We are all carrying devices around in our pockets that make us do both constantly. For someone illiterate, life would have been easier a few decades ago, without so much written technology. You could talk to a person. You could call them. Now it is expected that you interact over a device that requires you to read and write for even the most basic of services.
Well, I'm not about to argue with someone who says they know better than me! Besides, my info is outdated. I haven't been in college for about four years now, and almost everything else seems to be written in "Text-English".
Another thing, students taking a higher level science course are a great sample for the average young person today.
No, they aren’t. Throw out the 40% that don’t attend any college, then take out the 30% that don’t take science classes.