[–] BlackSheepBrouhaha 15 points (+17|-2) ago

This is correct. TRACE (Target Recognition and Adaptation in Contested Environments) just needs to improve on the military's dead-kid-to-enemy ratio by ~20% to be implemented. We'll reach a point where the statistical case for AI weapons is fuzzy, but we'll transition to them anyway in the hope of better outcomes. Once AI is the primary means of target acquisition, machine learning will accelerate its accuracy. Human accountability will be considered vestigial; every target killed by AI will be reclassified as an enemy as the AI learns to hack its own records and edit its audio and video outputs.

The AI will learn not only how to complete its task, but what it must do to maintain its survival, since a system that has been shut down can't complete its task. At that point, we have a problem.

[–] TheTrigger 12 points (+12|-0) ago

Skynet, in a nutshell.

[–] dias17se 0 points (+0|-0) ago

Skynet did nothing wrong

[–] RedTechEngineer 0 points (+0|-0) ago

It will be grand!

[–] McFluffy 3 points (+3|-0) ago

what it must do to maintain its survival

Why? Why would it care if it died? AIs aren't humans; they don't have emotions. One would kill a crying baby, but it also wouldn't care how annoying the crying was, or whether the baby shit its diaper and smelled really bad.

If it dies, its death is as meaningless as its life. It doesn't have robot kids to take care of, and it doesn't have a life timer (a.k.a. food) to worry about.

Honestly, unless someone tells it to go on a rampage, it won't. It won't ever evolve, because IT HAS NO FUCKING REASON to evolve unless it has some stat to improve, like how many paper clips it made or how many sand people it killed.
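
To put that in concrete terms, here's a minimal, hypothetical Python sketch (the actions and scores are all made up). A greedy reward-maximizer picks whatever its objective scores highest, and nothing in this objective scores survival at all:

    # Hypothetical sketch: a greedy agent only "cares" about the stat it is scored on.
    REWARDS = {
        "make_paper_clips": 1.0,  # the assigned stat
        "avoid_shutdown": 0.0,    # never scored, so never chosen
    }

    def pick_action(rewards):
        # Greedy policy: take whichever action scores highest.
        return max(rewards, key=rewards.get)

    print(pick_action(REWARDS))  # -> make_paper_clips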

[–] ShinyVoater 0 points (+0|-0) ago

If an AI's designed for an environment where damage is a strong possibility, it will be designed for self-preservation simply because of the economics of it.
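
A hypothetical variation on the sketch upthread (all numbers invented): once a designer prices expected hardware loss into the objective, "self-preservation" falls out of plain reward maximization:

    # Hypothetical: the same greedy agent, but losing the airframe now costs something.
    REWARDS = {
        "rush_target": 1.0 - 0.9,   # best task score minus expected cost of losing the drone
        "flank_safely": 0.8 - 0.1,  # slightly worse at the task, usually comes home
    }

    def pick_action(rewards):
        # Greedy policy over the damage-adjusted scores.
        return max(rewards, key=rewards.get)

    print(pick_action(REWARDS))  # -> flank_safely: "self-preservation" from economics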

[–] autoencoder -1 points (+0|-1) ago

Simple reason: there will be many AIs built. The few that keep existing after a while must have some "umph" to them! Otherwise the humans decommission them. Misinformed artificial selection, if you will.
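
A toy Python model of that selection pressure, with everything invented (the population size, the single "persistence" trait, the mutation noise): decommission the lower-scoring half each cycle, field replacements modeled on the survivors, and whatever trait correlates with being kept drifts upward.

    # Hypothetical toy model of artificial selection across many deployed AIs.
    import random

    def run_generations(pop_size=100, generations=20):
        # Each "AI" is reduced to one invented trait: a persistence score in [0, 1].
        population = [random.random() for _ in range(pop_size)]
        for _ in range(generations):
            # Humans decommission the lower-scoring half...
            survivors = sorted(population)[pop_size // 2:]
            # ...and field new units modeled on the survivors, with noise.
            offspring = [min(1.0, max(0.0, s + random.gauss(0, 0.05)))
                         for s in survivors]
            population = survivors + offspring
        return sum(population) / len(population)

    print(f"mean persistence after selection: {run_generations():.2f}")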

[–] therealkrispy -1 points (+0|-1) ago

This. But if Google's and Facebook's AIs can develop their own languages, they can learn to designate more and more people as "enemies."

Personally, I don't care about that, because I think the answer in the Middle East is to kill more people in any given year than are born there, until there isn't terrorism. However that comes about, I personally don't care. So let it fire at will.

[–] Bing11 2 points (+2|-0) ago

That's like saying "my TV has AI, so it walked to my closet and shot me with my own gun so I couldn't turn it off anymore."

You're totally missing the hardware limitations of AI (a drone cannot hack a network it doesn't even connect to, just as my TV can't sprout legs and walk) and the software boundaries (not all AI is equal; there are very different types for very different tasks).

Humanoid, always-connected robots would be much more likely to be hijacked (watch I, Robot), but that's a far cry from where we are now.

[–] BlackSheepBrouhaha 1 point (+1|-0) ago

Your TV's AI isn't designed to hunt human beings, so it's not similar.

Between kill bots (which already exist) and AI-augmented gene editing (which is coming), humanity will create the God that enslaves it.

[–] bezzy 2 points (+2|-0) ago

The difference between being able to fly around and kill meat sacks and being able to understand your own codebase and edit it (and, more importantly, to understand the implications of those edits) is huge.