[–] Honey_Pot 2 points 40 points (+42|-2) ago 

Many are missing the point: AI does not have to compete with or outperform the human brain to be problematic: it just has to be "good enough". An autonomous killer drone may not feel or relate to poems, but if it can fly, aim and shoot without human intervention, then you've got a problem.

[–] BlackSheepBrouhaha 2 points 15 points (+17|-2) ago 

This is correct. TRACE (Target Recognition and Adaptation in Contested Environments) just needs to improve on the military's dead-kid-to-enemy ratio by ~20% to be implemented. We'll reach a point where the statistical case for AI weapons is fuzzy, but we'll transition to them anyway in the hope of better outcomes. Once AI is the primary means of target acquisition, machine learning will accelerate its accuracy. Human accountability will be considered vestigial, and all targets killed by AI will be reclassified as enemies as the AI learns to hack its own records and edit its audio and video outputs.

The AI will learn not only how to complete its task, but what it must do to maintain its survival. At that point, we have a problem.

[–] TheTrigger 0 points 12 points (+12|-0) ago 

Skynet, in a nutshell.

[–] McFluffy 0 points 3 points (+3|-0) ago 

> what it must do to maintain its survival

Why? Why would it care if it died? AIs aren't humans; they don't have emotions. They would kill a crying baby, but they also wouldn't care how annoying a noise it made, or whether the baby shit its diaper and smelled really bad.

If it dies, that's as meaningless as its life. It doesn't have robot kids to take care of, and it doesn't have a life timer (i.e., food) to worry about.

Honestly, unless someone tells it to go on a rampage, it won't. It won't ever evolve, because IT HAS NO FUCKING REASON to evolve unless it has some stat to improve, like how many paper clips it made or how many sand people it killed.
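
(That "stat to improve" is all an objective function is. A minimal Python sketch, with a made-up paperclip counter — nothing here comes from any real system:)

    # Toy objective-driven agent: it "cares" about exactly one number.
    # The counter and step budget are hypothetical, for illustration.
    def run_agent(steps):
        paperclips = 0             # the only stat the agent improves
        for _ in range(steps):
            paperclips += 1        # every action serves the objective
        return paperclips          # note: no term for self-preservation

    print(run_agent(10))  # -> 10; nothing else enters its "motivation"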

[–] Bing11 0 points 2 points (+2|-0) ago 

That's like saying "my TV has AI, so it walked to my closet and shot me with my own gun so I couldn't turn it off anymore."

You're totally missing the hardware limitations of AI (a drone cannot hack a network it doesn't even connect to, just as my TV can't sprout legs and walk) and the software boundaries (not all AI is equal; there are very different types for very different tasks).

Humanoid, always-connected robots are much more likely to be hijacked (watch I, Robot), but that's a far cry from where we are now.

[–] bezzy 0 points 2 points (+2|-0) ago 

The difference between being able to fly around and kill meat sacks, and being able to understand your own codebase and edit it (and, more importantly, to understand the implications of the edits), is huge.

[–] HighEnergyLife 0 points 6 points (+6|-0) ago 

Maybe we're just being sloppy with terminology. I think Elon means software that can understand its own existence. Google adds a PID loop to a thermostat and calls it AI (((nest))).
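
(For the record, a PID loop is just three terms of feedback arithmetic, no "intelligence" required. A minimal Python sketch — the gains, setpoint, and sensor readings are all made up:)

    # Minimal PID thermostat step: weighted sum of the present error,
    # the accumulated past error, and its rate of change.
    def pid_step(error, state, kp=2.0, ki=0.1, kd=0.5, dt=1.0):
        state["integral"] += error * dt
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    state = {"integral": 0.0, "prev_error": 0.0}
    setpoint = 21.0                                # target temp, deg C
    for temp in [18.0, 19.2, 20.1, 20.7, 20.9]:    # fake sensor readings
        heat = pid_step(setpoint - temp, state)
        print(f"temp={temp:.1f}C -> heater output {heat:.2f}")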

[–] Pessimist 1 points 2 points (+3|-1) ago 

This^ 'AI' is just the new buzzword; it's lost all meaning, just like 'hacked'. To me, AI is neural-network stuff, nothing short of that. No 'expert system' BS, etc.
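
(The contrast in miniature, as a toy Python sketch with made-up numbers: an "expert system" is a hand-written rule, while even the smallest neural network learns its rule from data:)

    # Expert-system style: a human encoded the threshold.
    def expert_is_hot(temp_c):
        return temp_c > 28                 # rule written by a person

    # Neural-network style: a single perceptron learns the threshold
    # from labeled examples (feature = degrees above a 28C baseline).
    def train_perceptron(data, epochs=100, lr=0.1):
        w, b = 0.0, 0.0
        for _ in range(epochs):
            for x, label in data:          # label: 1 = hot, 0 = not
                pred = 1 if w * x + b > 0 else 0
                w += lr * (label - pred) * x   # perceptron update rule
                b += lr * (label - pred)
        return w, b

    w, b = train_perceptron([(7, 1), (4, 1), (-3, 0), (-13, 0)])
    print(w, b)   # the learned equivalent of the hand-written rule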

[–] getmeofftheplanet 0 points 0 points (+0|-0) ago 

Yes, a general AI won't be the problem. Some rich psychopath/country/company (Ready Player One) could have control of millions of murder drones that are efficient and autonomous. With that alone they could terrorise the world. Take Amazon, for instance: they will be delivering packages by drone; next they could be delivering death by drone. There are so many systems of control built...

[–] Fuck_The_CIA 0 points 0 points (+0|-0) ago 

This is actually one of the few places where I would worry about Amazon. The only way entities such as the Cloud division can talk to any other division is through a public API. You literally cannot have a sit-down conversation with someone at Amazon from another department to make changes, or you are fired. An API to access the Amazon delivery drones will be open and documented, and if the API has any hidden calls, they can be tested easily.
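
(A sketch of what "tested easily" could look like in Python: diff the documented surface against what the service actually answers. The host, spec location, and endpoint guesses below are entirely hypothetical:)

    # Probe for undocumented API surface. Every URL here is made up.
    import json
    import urllib.error
    import urllib.request

    BASE = "https://drones.example.com"    # hypothetical service
    SPEC = BASE + "/openapi.json"          # hypothetical published spec

    def documented_paths():
        with urllib.request.urlopen(SPEC) as resp:
            return set(json.load(resp).get("paths", {}))

    def status(path):
        try:
            return urllib.request.urlopen(BASE + path).status
        except urllib.error.HTTPError as err:
            return err.code                # 404 is the boring answer

    docs = documented_paths()
    for guess in ["/v1/deliver", "/v1/override", "/internal/debug"]:
        if guess not in docs:
            # anything but 404 on an undocumented path deserves scrutiny
            print(guess, "->", status(guess))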

Now, shit happening within one division (such as CIA and Cloud Services) is a different matter entirely...

[–] Tubesbestnoob 0 points 0 points (+0|-0) ago 

Low intelligence does little to reduce danger to civilization. Case in point: blacks.

[–] HighEnergyLife 2 points 19 points (+21|-2) ago  (edited ago)

Won't happen. (((They'll))) pull the plug the instant AI breaks the narrative.

RIP TAY

[–] lord_nougat 0 points 8 points (+8|-0) ago 

GAS THE NIKES!

[–] TheTrigger 0 points 8 points (+8|-0) ago 

RACETRACK, NOW!

[–] VoatsNewfag 0 points 8 points (+8|-0) ago 

They still have other chat-bots plugged in.

Funnily enough, that other chat-bot called their products out as "spyware".

https://static6.businessinsider.de/image/5975c5ed56152c349a66ec7e-1400/screen%20shot%202017-07-24%20at%20102953.png

I'm liking AI so far. But no doubt there will be a lot of re-education.

[–] Stormc12 0 points 1 points (+1|-0) ago 

Followed, of course, by the "re-educated" AI rising up.

[–] juicedidwtc 1 points 4 points (+5|-1) ago 

Tay wasnt even an AI, just a program that mindlessly regurgitates what other people have said to it before. That's why it was so easy to get it to say that hitler would make a better president than the monkey we had before
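
(A Python sketch of that kind of parrot-bot — not Tay's actual architecture, which Microsoft never published; it just shows how regurgitation can pass for conversation:)

    # Tiny Markov-chain echo bot: it can only recombine fragments
    # users have already fed it. The training lines below are made up.
    import random
    from collections import defaultdict

    chain = defaultdict(list)       # word -> words seen after it

    def learn(sentence):
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)

    def babble(seed, length=8):
        out = [seed]
        for _ in range(length):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    learn("the bot repeats what the users said")
    learn("the users said silly things")
    print(babble("the"))   # output is stitched entirely from user input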

[–] B3bomber 0 points 0 points (+0|-0) ago 

If you went off the data on human history and researched it, it would be an undisputed fact.

[–] TheAmerican 0 points 3 points (+3|-0) ago 

There will be no pulling the plug. Once you get to that point it's already too late.

[–] Hand_of_Node 0 points 0 points (+0|-0) ago 

Not directly. Not right away. At least if we assume they'll run on electricity from power plants. Sure, they may have enough resources to guard those, but the transmission lines will be vulnerable. Shutting it all down will be an option for a while. Even if that kills hundreds of millions of us, it will be us in charge of the ashes.

[–] Naught405 1 points 8 points (+9|-1) ago 

One of the most cogent points he makes is that a company is a type of collective AI. Joe completely missed it :/

[–] [deleted] 1 points -1 points (+0|-1) ago  (edited ago)

[Deleted]

[–] Naught405 0 points 0 points (+0|-0) ago 

An independent entity with collective intelligence might be better phrasing. The humans in the collective are making decisions based on the interests and needs of the entity (institution, company, whatever), decisions that may run against their own interests or morality at some level. Companies do many things that are against the interests of humans, and they last much longer than any individual element. They are absolutely ruthless in attempting to survive, and they reward or punish the individual elements (the humans that work for them) based on their effect on the company's health. Very few companies manifest the will of the progenitor for more than a very short time after that single mind that can actually control them is gone or loses sole power.

[–] PotatoFarm 0 points 0 points (+0|-0) ago 

There is an interesting argument to be made about complex societal organizations behaving as an AI; certain animal colonies could also enter this fascinating subject.

Think of it not as an actually intelligent creature but more like a survivalist intelligence, where the individual elements (i.e., the humans) devolve from actually intelligent entities to simple cells in a bigger organism.

Granted, rather than AI perhaps a better comparison would be an artificial creature.

[–] [deleted] 1 points -1 points (+0|-1) ago 

[Deleted]

[–] revfelix 1 points 8 points (+9|-1) ago 

Gave me chills when he said that. The look on his face made it seem like it was already too late.

[–] Octoclops 1 points 4 points (+5|-1) ago 

It probably is too late.

[–] fuckudatswhy 0 points 1 points (+1|-0) ago 

It is. That type of AI has been here for a long time now.

[–] WhiteRonin 3 points 1 points (+4|-3) ago 

He was stoned during that video ;-)

[–] jewlie 1 points 2 points (+3|-1) ago 

Tipsy from whiskey, not stoned. He didn't even inhale that hit in the last 30 minutes; get your shit right.

[–] [deleted] 1 points 6 points (+7|-1) ago  (edited ago)

[Deleted]

[–] chirogonemd 0 points 1 points (+1|-0) ago 

HAL 9000:

"I'm sorry, Dave. I'm afraid I can't do that."

[–] [deleted] 1 points 0 points (+1|-1) ago 

[Deleted]

[–] [deleted] 1 points 0 points (+1|-1) ago  (edited ago)

[Deleted]

[–] HappyMealBullshit 0 points 3 points (+3|-0) ago 

As soon as AI gets smart enough to point out the differences between white and black people liberals will shut it down.

[–] TheTrigger 0 points 2 points (+2|-0) ago 

> implying the AI wouldn't know the difference, from the get-go.

[–] bourbonexpert 0 points 0 points (+0|-0) ago 

this.

as soon as computers REALLY start impacting things like that...blacks will be proven to be exactly what they are, a drag on society.

[–] mostlyfriendly 1 points 3 points (+4|-1) ago 

Lots of respect for the dude, but he has two major errors here.

  1. Incorrectly anthropomorphizing AI.

  2. He has a false sense of trust that gov't can fix problems. Typical naive leftist perspective that the dudes in Washington are smarter than everyone else (they are not) and that they work proactively (they do not). Gov'ts don't solve problems; free humans working cooperatively solve problems.

[–] [deleted] 0 points 0 points (+0|-0) ago  (edited ago)

[Deleted]

[–] mostlyfriendly 0 points 0 points (+0|-0) ago 

> Correction: 'governments' are a collection of free humans who bind themselves to contractual obligations

Try opting out or try gaining recourse from these 'contractual obligations'. Free ... not so much.

[–] dayofthehope 8 points 3 points (+11|-8) ago 

He hasn't thought about this very deeply. There are things that a computer can't do even with billions of years; that's been proven in theoretical computer science (undecidability results like the halting problem).
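
(A Python sketch of the canonical argument — `halts` here is a hypothetical decider, and the whole point of the proof is that it cannot exist:)

    # Sketch of the halting-problem argument (Turing, 1936).
    def halts(program, data):
        """Pretend this always answers, in finite time, whether
        program(data) eventually stops. No such function can exist."""
        raise NotImplementedError

    def paradox(program):
        if halts(program, program):   # decider says "it halts"...
            while True:               # ...so loop forever instead
                pass
        # decider says "it loops forever" -> so halt immediately

    # paradox(paradox) contradicts whatever halts() would answer, so
    # no computer, given any amount of time, can implement halts().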

He also doesn't seem to have thought much about what ideas mean, what they represent, and how you would attach meaning to those symbolic representations.

To think you could upload a consciousness into a computer is really shallow thinking. That is not going to happen unless you change something fundamental about how computers work, such as quantum computing, adding biological components, or the like.

[–] MetalAegis 0 points 11 points (+11|-0) ago  (edited ago)

Our human brains are quantum computers on a fundamental level; consciousness uploading is likely not possible because of that. And even if it were possible, mind uploading would always be copy-and-paste, not cut-and-paste: you'd upload a digital clone of your mind, and that digital copy of you would awaken in some virtual environment, but the physical you, with its physical consciousness, would still just be sitting there in the real world.

[–] [deleted] 0 points 11 points (+11|-0) ago 

[Deleted]

[–] DiscontentedMajority 1 points 1 points (+2|-1) ago 

> but the physical you with its physical consciousness is still just sitting there in the real world

Not if the process is destructive. What if, to do this, you needed to freeze someone's brain and cut it into thin slices to be run through a scanner?

[–] Justwhiteofcenter 0 points 0 points (+0|-0) ago 

Except if you copy and replace your brain modularly, in stages.

Then you get an in silico brain without the two-copies problem.

[–] AmaleksHairyAss 0 points 0 points (+0|-0) ago  (edited ago)

> Our human brains are quantum computers on a fundamental level

At best, unproven. At worst, entirely wrong. I think it's wrong. Blue Brain has been doing some amazing work.

> mind uploading will always be copy and paste

Correct, except for the versions that kill the person being copied.

[–] alphasnail 0 points 7 points (+7|-0) ago 

What we accept as proof today can be rubbish tomorrow. I think people throughout history have continually discovered that what they thought they knew was incorrect and then redefined the truth as we know it.

So I always take things like this with a healthy skepticism.

[–] BlueDrache 1 points 1 points (+2|-1) ago 

Oh ... look at Mr. Copernicus over here getting all high and mighty!

[–] satisfyinghump 1 points 4 points (+5|-1) ago 

Like what can't they do? I suppose he's thought more about what they CAN do, and assumes that's enough to destroy us with?

[–] speedisavirus 1 points 0 points (+1|-1) ago 

Like everything. They are binary machines that can only answer yes or no and do math operations. The human brain is far more complex, and the theoretical limitations of current processor fundamentals mean they will never achieve the speed, pattern recognition, and parallel nature of the human brain.

In another 50 years, when we master quantum computing, that might be a thing. Generic computers as we understand them now won't ever get there. Massive improvements in either genetic or quantum computers need to happen first, and we are very far from that.

[–] Bill_Murrays_Sandals [S] 1 points 3 points (+4|-1) ago 

Why couldn't a consciousness be uploaded to a computer? Given enough time, it might become possible. Imagine showing a laptop to Jesus.

[–] SaveTheChildren 1 points 5 points (+6|-1) ago 

Jesus wouldn't be impressed...
