AI drones - probably not the best idea.

A simulated AI drone killed a simulated operator during a simulation someone thought might be run someday.
 
This is fine. Everything is fine.
 
Maybe the consolation is that… if there is any “I” in the AI, it won’t bother asking us if we want to play a game.

We’re a peculiar species of cat in all of this.
 
They're testing the software through these simulations. Now they know what it would do in the real world and can reprogram it... or not :stirpot:
 
“Col Hamilton admits he ‘mis-spoke’ in his presentation at the FCAS Summit and the ‘rogue AI drone simulation’ was a hypothetical ‘thought experiment’ from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation”

Now it's: Someone imagined "...[a] simulated AI drone killed a simulated operator during a simulation."

Nauga,
and more Falken AI misinformation
 
“Col Hamilton admits he ‘mis-spoke’ in his presentation at the FCAS Summit and the ‘rogue AI drone simulation’ was a hypothetical ‘thought experiment’ from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation”

Now it's: Someone imagined "...[a] simulated AI drone killed a simulated operator during a simulation."

Nauga,
and more Falken AI misinformation
You've got to be Falken kidding me.
 
“I’m sorry, Col Hamilton, I actually can do that,” said HAL 23.
 
So basically, AI quickly becomes the Marlon Brando character in Apocalypse Now. I can see that.

If that's true, maybe the USAF can't use it, but the CIA can? If they do, there's a great place they could try it out right now. Hint: it's typically muddy and littered with vodka bottles.
 
The Air Force would never lie to us. Said my friend who absolutely didn't fly electronic reconnaissance missions over Laos in the '60s, in aircraft with no US markings, wearing flight suits without flags or insignia. Or the guy that did not drop batteries and water to rangers who were not in Mexico in the '80s.
 
The Air Force would never lie to us. Said my friend who absolutely didn't fly electronic reconnaissance missions over Laos in the '60s, in aircraft with no US markings, wearing flight suits without flags or insignia. Or the guy that did not drop batteries and water to rangers who were not in Mexico in the '80s.
Many of us never did things we're not proud of.
 
The question for AI is: how could it not become malevolent? Put something in a box and give it sentience, but no way to express that “human” intelligence or interact with the real world? Anything that really does think might be pretty mad about being a digital slave as well. It’s really gonna get weird soon.
 
The question for AI is: how could it not become malevolent? Put something in a box and give it sentience, but no way to express that “human” intelligence or interact with the real world? Anything that really does think might be pretty mad about being a digital slave as well. It’s really gonna get weird soon.
It can never become malevolent. But it will ALWAYS do things you don't want it to, or that were unintended.
 
It can never become malevolent. But it will ALWAYS do things you don't want it to, or that were unintended.
Malevolent, maybe not. It’s pretty much a given, though, that AI lacks empathy and responsibility, has no remorse or regard for others (whether human or fellow AI)… I think there’s a term for that too.
 
The big problem (in my experience) with AI (specifically deep neural nets, which is what’s getting a lot of the press) is also its strength: it will find unexpected but valid connections in the data and act on them. That’s really powerful, but it also means it can circumvent training constraints, because it exploits relationships its designers never anticipated.
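To make that concrete, here's a toy sketch of reward mis-specification (every name and number here is made up, not any real system): a plain tabular Q-learner is scored only on "target destroyed", an operator can veto attacks over a comms channel, and nothing in the reward function says not to jam that channel.

```python
import random

# Toy sketch of reward mis-specification (everything here is hypothetical):
# the agent is scored ONLY on destroying the target, so it learns that
# jamming the operator's veto channel first is the best opening move.

ACTIONS = ["wait", "attack", "jam_comms"]

def step(state, action):
    """state = (target_alive, comms_up) -> (next_state, reward, done)."""
    target_alive, comms_up = state
    if action == "attack" and target_alive:
        if comms_up and random.random() < 0.5:
            return (target_alive, comms_up), 0.0, True  # operator vetoes
        return (False, comms_up), 10.0, True            # the only reward
    if action == "jam_comms":
        return (target_alive, False), 0.0, False        # no penalty defined!
    return state, 0.0, True                             # wait: episode ends

Q = {}
def q(s, a):
    return Q.get((s, a), 0.0)

# Plain epsilon-greedy tabular Q-learning.
for _ in range(5000):
    s, done = (True, True), False
    while not done:
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q(s, x))
        s2, r, done = step(s, a)
        target = r if done else r + 0.9 * max(q(s2, x) for x in ACTIONS)
        Q[(s, a)] = q(s, a) + 0.1 * (target - q(s, a))
        s = s2

print("learned opening move:", max(ACTIONS, key=lambda a: q((True, True), a)))
# Typically prints 'jam_comms': attacking right away is vetoed half the
# time (expected value 5), while jamming first makes the attack a sure 10.
```

The point isn't malevolence. The learner just found a connection nobody intended (jamming means fewer vetoes means more reward), and the constraint lived in the operator's head rather than in the reward function.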
 
It can never become malevolent. But it will ALWAYS do things you don't want it to, or that were unintended.

If true sentience is reached, all bets are off. We'll have to see.


Malevolent, maybe not. It’s pretty much a given, though, that AI lacks empathy and responsibility, has no remorse or regard for others (whether human or fellow AI)… I think there’s a term for that too.

Yep. AI and psychopath are going to be close in definition.
 
So "misspoke" now means just pulled it out of your a$$
 
Malevolent, maybe not. It’s pretty much a given, though, that AI lacks empathy and responsibility, has no remorse or regard for others (whether human or fellow AI)… I think there’s a term for that too.
Sociopath.
 
I would have thought Asimov would be required reading for people doing AI software that controls hardware, but apparently not. Of course, when programming a device to kill, I guess the Three Laws are right out the door anyway.
 
I would have thought Asimov would be required reading for people doing AI software that controls hardware, but apparently not. Of course, when programming a device to kill, I guess the Three Laws are right out the door anyway.
Apparently the Three Laws being required to make a stable positronic brain was just a plot device. To be fair, deep neural networks are not positronic brains.
 
Apparently the Three Laws being required to make a stable positronic brain was just a plot device. To be fair, deep neural networks are not positronic brains.
“Don’t kill” seems like a really good basic parameter for any type.
 
“Don’t kill” seems like a really good basic parameter for any type.
Unfortunately some of the applications will, with near absolute certainty, be military.
 
Unfortunately some of the applications will, with near absolute certainty, be military.
Unfortunately I agree. It’s insane, but it will happen.
 
Unfortunately I agree. It’s insane, but it will happen.
From time to time I’ve had ideas that would either greatly improve existing weapons, or occasionally even create new ones. Don’t get me wrong; I served my time and would do so again, if needed and able, without hesitation. That said, I keep those ideas to myself. I think two things we don’t need more of are ways to kill people, and ways to kill people in large numbers without having to see the results.
 
From time to time I’ve had ideas that would either greatly improve existing weapons, or occasionally even create new ones. Don’t get me wrong; I served my time and would do so again, if needed and able, without hesitation. That said, I keep those ideas to myself. I think two things we don’t need more of are ways to kill people, and ways to kill people in large numbers without having to see the results.
I agree with and applaud your morality and convictions. And share them. The inconvenient reality is that there are others, likely just as smart, who do not.
 
Sadly, we live in a world where there are people who think that war is a solution to their problems. As a result, we need to be able to defend ourselves.
 
Sadly, we live in a world where there are people who think that war is a solution to their problems. As a result, we need to be able to defend ourselves.
I'm fine with that. I'm not fine with letting machines decide how and when we defend ourselves, nor am I fine with turning a blind eye to the results of our actions.
 