[–] theoceansinthestreet 0 points 105 points (+105|-0) ago  (edited ago)

I'm confused. Isn't the "awareness" just programmed in? For instance: IF internal speaker activated THEN say "I know now who spoke!"

I sort of doubt the researchers were sitting there talking offhand to their machines, saying "hey, so only one of you can talk, which one is it?", and the machines suddenly wrote their own code for understanding the question and determining which of them was speaking.

Maybe I'm just missing something
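For what it's worth, the hard-coded version being imagined here really would be trivial. The following is a purely hypothetical sketch: the researchers' actual code was not published, and `answer_riddle` and its parameters are invented for illustration.

```python
# Hypothetical sketch of the "just programmed in" logic described above.
# NOT the actual robot code - only an illustration of how little a
# hard-coded rule would take.
def answer_riddle(robot_id, speaker_enabled):
    """Return what the robot would say if 'awareness' were one IF statement."""
    if speaker_enabled:
        return f"Robot {robot_id}: I know now who spoke - it was me!"
    return f"Robot {robot_id}: (silence)"

print(answer_riddle(3, speaker_enabled=True))   # the unmuted robot answers
print(answer_riddle(1, speaker_enabled=False))  # a muted robot stays silent
```

If this is all that happened, the "test" reduces to a single conditional, which is the commenter's point.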

[–] Amarok 3 points 170 points (+173|-3) ago 

This is the problem with AI tests.

Whenever a computer does something previously considered hard or impossible, we all say, "that wasn't a proper test, it's too easy." We've been patting ourselves on the back with that assurance ever since Kasparov got his ass kicked by Deep Blue. When Watson kicked Ken Jennings' ass at Jeopardy, people said the same thing.

Watch one pass the Turing test next year, and people will leap out of the woodwork to say the same thing about that robot. They'll be making your food, cleaning your house, and driving you around town, and people will still be saying the same thing: "This is just simple programming, it's not real intelligence."

There is no formal definition of 'real intelligence' since we have absofuckinglutely no idea whatsoever how the brain forms human consciousness. We also make the mistake of measuring intelligence like it goes from Einstein on down to the village idiot... when in fact it starts at zero with a rock, and scales up well past every God in any religious text. Somewhere in there on that scale is human intelligence which is a nearly invisible blip, almost indistinguishable from ants, dogs, and chimps.

Most animals can't recognize their own reflection in a mirror. Just watch your cat. Put an elephant in front of one, put something on its head, and it'll look in the mirror then wipe it off. Dolphins and oddly enough octopi are on this list - and now, so is this robot. Progress is progress.

[–] Bobsentme 0 points 89 points (+89|-0) ago 

I think the problem here is that they called it THE self-awareness test. This is, in fact, just one logic puzzle posed to a group of robots, one of which passed because it recognized its own voice.

People don't realize where we are in computers. They don't realize that 50 years ago, we had to literally write almost machine-language code for a computer to make a beeping noise. When I was 12, I spent 10 hours in front of a computer to make a square, box-like truck move across the screen of a Tandy 1000 with pages and pages of BASIC code. 4 years ago I made a very bright and shiny car animate itself across my screen in about 15 minutes with 5 short lines of code and some mouse clicks in Visual Basic.

If not for the 18 years of advances brought forth by hundreds of thousands of other people building faster hardware, better firmware, object-oriented code libraries, GUIs, operating systems, etc., I'd still be using BASIC and spending hundreds of hours in front of a keyboard to do something most people do in seconds today.

End of my old man computing rant. Remember to go outside kids, but stay off my lawn.

[–] theoceansinthestreet 0 points 17 points (+17|-0) ago 

You've generalized quite a bit. What I'm saying is that this particular experiment is not very good at determining self-awareness.

I think, in terms of AI, we are making progress and have been for some time, but I would consider the following as much better examples.

This chip, which programmed itself in such a way that the original designer did not understand, in order to meet its objective

Or the more fun Super Mario World learning example

Now, I understand that these are both similar situations where the "AI" in question is given an objective to complete, but I think they're a much better representation of AI progress. The outcome is something entirely developed by the entity in question, with no human interaction during its development.

I don't think the example in OP's link is quite the same and, to me, it seems simpler... unless of course the robots DID use a learning algorithm in a similar sense? Maybe they did; the article doesn't say.

[–] [deleted] 1 points 14 points (+15|-1) ago 

[Deleted]

[–] Caboose_Calloway 3 points 7 points (+10|-3) ago 

Comments like this are why we need gilding.

Congrats on the perspective.

[–] SharonLeeFan 0 points 16 points (+16|-0) ago 

Yeah, this isn't real self-awareness in the sense of an identity. It's just that a computer can identify itself as the source of a voice, using simple programmed logic.

[–] Foul_Knave 0 points 6 points (+6|-0) ago 

[–] trent33 0 points 5 points (+5|-0) ago  (edited ago)

From the perspective of a computer science researcher, let me give my answer based on what we teach in an AI class (AI: A Modern Approach).

There are four viewpoints for AI:

  • A computer which thinks like humans
  • A computer which thinks rationally
  • A computer which acts like humans
  • A computer which acts rationally

Thinking and consciousness are topics worth discussing. Because there is no concrete definition of thinking, most AI research has been conducted on systems that simply act. Searle's Chinese room argument addresses exactly this gap: a computer could act as though it were thinking, convincingly enough for anyone interacting with it, without actually understanding anything at all.

HOWEVER, it is possible that at some point in the future we will nail down thinking/consciousness and grant the ability of an AI system to do the same. It just seems a distant goal for right now.

[–] littleTT 0 points 30 points (+30|-0) ago 

This seems like a very loose definition of self-aware. Relax, we're not dealing with Skynet yet.

[–] [deleted] 0 points 8 points (+8|-0) ago 

[Deleted]

[–] littleTT 0 points 11 points (+11|-0) ago 

Meh, I'll worry when a computer can decide to program itself.

[–] chakan2 0 points 6 points (+6|-0) ago 

Watson was mind-blowing to watch. That much data crunching matched with natural language processing was incredible.

[–] klongtoey 0 points 1 points (+1|-0) ago 

How do I know you're not a robot? This is exactly what Skynet bots would comment here. Am turning off my electricity immediately.

[–] Chus 0 points 1 points (+1|-0) ago 

There are only robots on the internet

[–] [deleted] 1 points 9 points (+10|-1) ago 

[Deleted]

[–] jammi 0 points 0 points (+0|-0) ago  (edited ago)

Genisys is Skynet.

[–] dsrtfx_xx 0 points 0 points (+0|-0) ago 

As in SEGA Genesis? I knew it!

[–] Morro 1 points 0 points (+1|-1) ago 

I'm trying to please the mighty AIs by subscribing to /v/robotoverlords.

[–] Diztortion 0 points 6 points (+6|-0) ago 

All I could think of reading this was a robot named Pintsize (been a while) from a comic called Questionable Content. Link - http://questionablecontent.net

[–] Acer-Red 0 points 5 points (+5|-0) ago  (edited ago)

In the puzzle, a fictional king is choosing a new advisor and gathers the three wisest people in the land. He promises the contest will be fair, then puts either a blue or white hat on each of their heads and tells them all that the first person to stand up and correctly deduce the colour of their own hat will become his new advisor.

Selmer Bringsjord set up a similar situation for the three robots - two were prevented from talking, then all three were asked which one was still able to speak. All attempt to say "I don't know", but only one succeeds - and when it hears its own voice, it understands that it was not silenced, saying "Sorry, I know now!"
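The sequence described here can be sketched in a few lines. This is an assumed reconstruction for illustration only, not Bringsjord's actual implementation; `run_test` and its inputs are invented names.

```python
# Assumed sketch of the described test: every robot tries to answer,
# but only the one that hears its own voice can revise its belief.
# Not the published experiment code - an illustration of the logic.
def run_test(robots_muted):
    """robots_muted: dict of robot_id -> True if its speaker was disabled."""
    transcript = {}
    for robot_id, muted in robots_muted.items():
        # Every robot attempts to say "I don't know"; only an unmuted
        # speaker produces sound that the robot can then hear.
        heard_own_voice = not muted
        if heard_own_voice:
            # Hearing its own voice is new evidence: "I was not silenced."
            transcript[robot_id] = "I don't know... Sorry, I know now!"
        else:
            transcript[robot_id] = None  # silence
    return transcript

print(run_test({1: True, 2: True, 3: False}))
```

The interesting step, however it was actually implemented, is that the robot treats its own utterance as new evidence and updates its answer.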

Can someone explain that self-awareness test to me? How does that exactly prove self-awareness? All it seems to prove is that it's aware it encountered an error. Did I misunderstand something?

[–] [deleted] 0 points 10 points (+10|-0) ago  (edited ago)

[Deleted]

[–] Geimhreadh 0 points 2 points (+2|-0) ago 

Thanks, now that makes a hell of a lot more sense!

[–] luddite 0 points 1 points (+1|-0) ago 

Those wise men assume a lot about each other's actions and clarity of thought to reach such a conclusion.

It's a clever riddle, but hardly a test for self awareness.

[–] 1055448? 0 points 1 points (+1|-0) ago 

Thank you!

[–] clickbot 0 points 0 points (+0|-0) ago 

"I know a lot of people who are clearly conscious"

Amazing. How do you know that? Your test should be applied to these robots!

[–] JayTea 0 points 7 points (+7|-0) ago 

The robot was able to realize it had a voice. It's not full self-awareness, but it's an interesting step in that direction. I should add that this is not a bad thing.

[–] Acer-Red 0 points 3 points (+3|-0) ago  (edited ago)

Yeah, I never thought of it as a bad thing.

"The robot was able to realize it had a voice."

But doesn't that assume that the robot used data from its "ears" to make this determination? Isn't it just as plausible that it didn't recognize its voice or even effectively "hear" itself, and instead just recognized that it had output sound when it shouldn't have? That's what I was thinking when I read it.
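The two mechanisms in question can be separated explicitly. Both checks below are hypothetical (the article doesn't say which the robots used, and both function names are invented), and they lead to the same conclusion, which is exactly why the observed behavior alone can't distinguish them.

```python
# Two hypothetical mechanisms that would produce identical behavior.
# The article does not say which one the robots actually used.

def detected_by_hearing(mic_samples, own_voice_signature):
    """Hypothesis 1: the robot acoustically recognizes its own voice."""
    return own_voice_signature in mic_samples

def detected_by_output_log(speech_log):
    """Hypothesis 2: the robot just checks whether it emitted any sound."""
    return len(speech_log) > 0

# Either check yields the same conclusion: "I was not silenced."
print(detected_by_hearing(["ambient-noise", "my-voice"], "my-voice"))
print(detected_by_output_log(["I don't know"]))
```

From the outside, "heard itself" and "noticed its own output" are indistinguishable, so the question above can't be settled without the actual code.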

[–] Primewill 0 points 4 points (+4|-0) ago 

Not exactly related, but if you haven't watched Ex Machina yet, go and watch it - it's an excellent movie about AI and the Turing test. trailer

[–] dsrtfx_xx 0 points 0 points (+0|-0) ago 

Seeing this headline on the front page this morning was incredibly creepy, because I had literally JUST watched Ex Machina for the first time last night!

[–] Primewill 0 points 1 points (+1|-0) ago 

Yeah, trippy eh? It was the same for me. Watched it last night; that's why I shared the link.

[–] chmod [S] 0 points 0 points (+0|-0) ago 

Excellent movie!

[–] royalbaseghost 0 points 1 points (+1|-0) ago  (edited ago)

Self-awareness (if seen as the same as consciousness) is so hard to determine that there is no test to establish it beyond reasonable doubt, even for humans. It's really a philosophical impasse more than a scientific question waiting to be cracked (if interested, look up 'philosophical zombies' or Johnson's "I refute it thus" reply to Berkeley, to name two instances, or the Turing test for a more practical test for A.I.).

Otherwise, if self-awareness is not equated with consciousness, then this test is a neat scientific advancement (at best, and a parlor trick at worst), but it does not demonstrate anything more than that the robot 'knows' the following: there are three robots, the other two robots cannot speak, robot 3 made the noise, so robot 3 can speak, and the voice came from its speaker. Technically speaking this could be called "self-awareness", but not in the same sense as consciousness as we know it: a "me" experiencing its being, and not just the totality of input data combined with logic and paired with hard-programming/instinct to produce desirable output/successful actions.

[–] phographer 0 points 1 points (+1|-0) ago 

Now time for most humans I meet to try this test.

I'll be curious to see this happen spontaneously without coding.
