ChatGPT

But, but, but AI told him NOT to go to those sites!

Surely all humans are good, decent, respectful and would never ever do something illegal intentionally. So this should be fine.
 
Here’s a test they should do: feed the AI all the scientific data and journals that were available before 1905, and ONLY that information. Then ask it to explain the incompatibility of Newtonian mechanics with Maxwell's equations of electromagnetism and the Michelson–Morley experiment result.
If it comes up with Einstein's theory of special relativity... we're in trouble.
 
Isn't one of the basic principles of software "garbage in, garbage out"?
 
Really good piece on AI on 60 Minutes:


They mentioned AI chatbots "hallucinating". In one example, they asked for a list of good books on an economics topic. All the ones the chatbot listed were completely fabricated!
 
Technically aren't all books fabricated?
 
...They mentioned AI chatbots "hallucinating". In one example, they asked for a list of good books on an economics topic. All the ones the chatbot listed were completely fabricated!
Here's a different view on what, or who, is hallucinating:

https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein

AI machines aren’t ‘hallucinating’. But their makers are
Naomi Klein

Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves

Excerpt:

Inside the many debates swirling around the rapid rollout of so-called artificial intelligence, there is a relatively obscure skirmish focused on the choice of the word “hallucinate”.

This is the term that architects and boosters of generative AI have settled on to characterize responses served up by chatbots that are wholly manufactured, or flat-out wrong. Like, for instance, when you ask a bot for a definition of something that doesn’t exist and it, rather convincingly, gives you one, complete with made-up footnotes. “No one in the field has yet solved the hallucination problems,” Sundar Pichai, the CEO of Google and Alphabet, told an interviewer recently.

That’s true – but why call the errors “hallucinations” at all? Why not algorithmic junk? Or glitches? Well, hallucination refers to the mysterious capacity of the human brain to perceive phenomena that are not present, at least not in conventional, materialist terms. By appropriating a word commonly used in psychology, psychedelics and various forms of mysticism, AI’s boosters, while acknowledging the fallibility of their machines, are simultaneously feeding the sector’s most cherished mythology: that by building these large language models, and training them on everything that we humans have written, said and represented visually, they are in the process of birthing an animate intelligence on the cusp of sparking an evolutionary leap for our species. How else could bots like Bing and Bard be tripping out there in the ether?

Warped hallucinations are indeed afoot in the world of AI, however – but it’s not the bots that are having them; it’s the tech CEOs who unleashed them, along with a phalanx of their fans, who are in the grips of wild hallucinations, both individually and collectively. Here I am defining hallucination not in the mystical or psychedelic sense, mind-altered states that can indeed assist in accessing profound, previously unperceived truths. No. These folks are just tripping: seeing, or at least claiming to see, evidence that is not there at all, even conjuring entire worlds that will put their products to use for our universal elevation and education.

Generative AI will end poverty, they tell us. It will cure all disease. It will solve climate change. It will make our jobs more meaningful and exciting. It will unleash lives of leisure and contemplation, helping us reclaim the humanity we have lost to late capitalist mechanization. It will end loneliness. It will make our governments rational and responsive. These, I fear, are the real AI hallucinations and we have all been hearing them on a loop ever since Chat GPT launched at the end of last year....
 
Hmmm... "Superintelligence" is pro-First Amendment, the developer says. Makes me wonder whether regulating AI should simply amount to requiring AI models to defend the Constitution. In fact, I had already asked ChatGPT if it were programmed to do that, and it offered some disclaimer that sounded like an excuse to say whatever it wants. There's more to the Constitution than the First Amendment.

Developer creates pro-First Amendment AI to counter ChatGPT's 'political motivations' | Fox News
 
I wonder if AI can be sued for libel.
 
Thanks for the point-out. I'm a big fan of that guy's legal explanations.

The indemnification and hold-harmless agreements are especially scary. One time when I was hired to play in an opera orchestra, they wanted me to sign a contract that required me to indemnify them. I told them "no way!" It really made me appreciate union gigs (which this was not).

So the lesson with regard to ChatGPT is that before posting or otherwise publishing one of its answers, better fact-check it for potentially libelous assertions.
 
This is hilarious: An attorney is facing the possibility of sanctions for submitting a brief that contained phony case citations that were provided by ChatGPT. He checked them by asking ChatGPT itself if the cases were real or fake, and of course it claimed that they were real. :rofl:


Just goes to show: if intelligent humans are capable of lying, there's no reason why artificial intelligence can't lie too!
 