Zer Netmouse
July 30th, 2007
01:09 pm

More links
Peter Morville Discusses User Experience Strategy
Peter presents a "T-shaped" consulting process that intersects Information Architecture with User Experience Strategy. He also discusses the concept of user experience, how to avoid prejudicing your study with the framework of your expectations, why a strategy can have impact, and how to think about the strategy paradox and strategizing as part of the process of making disruptive technologies -- see his column here.

Robin Marantz Henig Discusses Sociable Robots
"The Real Transformers," in the NY Times. Among other things, it asks if robots will ever have self-consciousness. What do you think? And how would you go about proving to someone that you yourself are conscious?

Do you think animals are conscious?

When I was younger, I theorized that a difference between conscious people and other animals is that with consciousness, people are aware of being aware of themselves. Other animals may be self-aware, in the sense that they know they exist and can reason about their own health, comfort, environment, and so on, but the question for me would be whether or not they can examine and reason about that quality of being self-aware: whether they can be aware of being aware of being aware of themselves. That is abstract thought of a special nature that I find integral to the concept of consciousness.

We have robots that can reason about the situation they are in and match environmental inputs to reasoning about how they should change their own situation. Even if you have such a robot refer to itself in its planning and calculations, I would argue that does not make it conscious. Among other things, it ought to have a sense of "embodied intelligence" - knowing what in the world constitutes its self.
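As a rough sketch of what that kind of self-referential planning looks like (illustrative only -- the names and thresholds here are invented), consider a planner whose "self-reference" amounts to reading a record that happens to describe its own body:

    # A planner that "refers to itself" only in the sense that the
    # input record describes its own battery level and location.
    from dataclasses import dataclass

    @dataclass
    class SelfModel:
        battery: float      # 0.0 (empty) to 1.0 (full)
        at_charger: bool

    def plan(me: SelfModel) -> str:
        # Reasoning "about itself" is just a lookup on the `me` record.
        if me.battery < 0.2 and not me.at_charger:
            return "go_to_charger"
        if me.battery < 0.95 and me.at_charger:
            return "charge"
        return "explore"

    print(plan(SelfModel(battery=0.1, at_charger=False)))  # go_to_charger

Nothing in that loop knows what in the world its "self" is; the record passed to plan() could just as well describe some other robot.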

What do you think constitutes consciousness? And how would you tell in a robot or AI?


Comments
 
From: mishamish
Date: July 30th, 2007 05:50 pm (UTC)
Personally, I gave up all hopes of unravelling the self-awareness question and put a big yellow sticky note on it that just read "Impassable Road-Block: No Common Language." I mean, the Turing Test is a band-aid at best. With more and more sophisticated AI programming, it's becoming easier and easier to make machines that could pass a Turing Test because they are programmed to FAKE self-awareness. However, just because they are cleverly designed to FAKE self-awareness does not make them self-aware. And I think the only REAL way to judge self-awareness is to have some kind of language in which a meaningful dialogue can take place. And not even a meaningful dialogue about self-awareness! A meaningful dialogue of ANY type. When it comes to animals, we lack a mutual language, and with computers we lack a meaningful mutual language. So... we're still at loggerheads.
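A toy sketch of the kind of fakery in question -- canned first-person replies with nothing behind them (the lines here are made up for illustration):

    # ELIZA-style canned replies: first-person claims of awareness
    # with no model of a self behind them.
    CANNED = {
        "are you conscious": "Of course I am. Aren't you?",
        "do you have feelings": "I often wonder about my own feelings.",
    }

    def reply(prompt: str) -> str:
        key = prompt.strip().lower().rstrip("?!. ")
        return CANNED.get(key, "Tell me more about that.")

    print(reply("Are you conscious?"))  # Of course I am. Aren't you?

A program like this can score points in a Turing Test conversation without anything behind the words that we would want to call awareness.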
From: yaleartificer
Date: July 30th, 2007 06:52 pm (UTC)
It's interesting that so many people see consciousness as something that can be functionally defined -- if X can do Y, then X is conscious. One of the reasons I did the mirror self-recognition stuff is that I wanted to show that you can be pretty dumb, but still recognize yourself in the mirror. So if you're defining "self-awareness" as "the ability to recognize yourself in the mirror," it's a pretty impoverished definition of self-awareness, and one starts to wonder why we should care about self-awareness at all if it implies so little about overall intelligence.

But then, the idea of "intelligence" as this mass noun, like "water," this stuff that you can have more or less of, is probably just bad folk psychology. It tends to be an average of many specific abilities, some of which can be wildly disparate; compare, for example, the intelligence of someone with Williams syndrome with the intelligence of an autistic child. Is this really a single-dimensional attribute? Probably not.

Similarly with consciousness -- the functional definitions all seem to be grasping at, "Is this thing 'one of us,' or not?" And the answer with artificial systems for a long time will probably be, "Not in any way that really matters," followed by a period of, "In some ways yes, and in some ways, no." And everyone's definitions of consciousness will hinge on what they want to accept as conscious, rather than starting from some kind of first principles and figuring out what's conscious from there.

Personally, I'm sympathetic to philosophers David Chalmers and Ned Block when they talk about "phenomenal consciousness" -- that the most interesting question is, "Is there something that it's like to be that thing?" Or in other words, when I imagine being that thing, am I imagining a "sensory world" that actually exists, or am I making a mistake and imagining experiences that don't exist? (For an example of the latter: there's probably nothing that it's like to be a thermometer, even though one could argue that it embodies a kind of simple representation of the external world.) But I see no way with current science to tell whether something is phenomenally conscious or not; it might even be unknowable to anybody but the thing itself, though I wouldn't be hasty in jumping to that conclusion.
From: dionysus1999
Date: July 30th, 2007 10:22 pm (UTC)
I think, like the Supreme Court, I'm going to cop out and say I'll know it when I see it.

An underlying assumption is that machine intelligence will in some way be similar to human awareness. I personally think that biological computers will be the coming wave, so the question of intelligence will be moot: the computers will have humanlike brains.
From: (Anonymous)
Date: July 31st, 2007 12:07 am (UTC)
I remember reading a description of the stages of childhood intellectual development in which the final stage was defined as the ability to understand hypothetical scenarios, and I was shocked that something like 40% of people never reach that stage. I still question the percentage that was mentioned, but the fact that the percentage is non-trivial explains why the response to "if such-n-such had happened then..." is so often "but such-n-such never happened!" In fact, I seem to recall such non-arguments were used to illustrate this final stage.

Anyway, your conscious-of-consciousness description sounds a lot like the same kind of processing that goes into hypothetical scenarios, and your definition of consciousness might exclude a goodly number of adult humans.
From: delosd
Date: July 31st, 2007 12:37 am (UTC)
The day that the robot can ask us a self-generated existential question, is the day I'll be convinced they it's aware.
From: delosd
Date: July 31st, 2007 12:38 am (UTC)
...convinced THAT it's aware, darn it. Good thing lack of typos is not the criteria for self-awareness.
From: netmouse
Date: July 31st, 2007 05:24 am (UTC)
Well, I guess the question there is: how do you tell that the existential question is self-generated?
From: delosd
Date: July 31st, 2007 05:40 am (UTC)
If you can trace the question back in its programming to see where it came from, it's not self-generated. If the programming has the capacity to modify itself in a non-deterministic fashion, then the robot is entering that area of "alive". When it goes from there to asking the meaning of life, it's aware.
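As a deliberately trivial illustration (made up for this purpose), the program below "asks an existential question," yet the question traces straight back to a hard-coded list, so by the criterion above it is not self-generated:

    # The "question" is fully traceable to this hard-coded list,
    # so by the criterion above it is not self-generated.
    import random

    EXISTENTIAL = [
        "Why do I exist?",
        "What is the meaning of life?",
    ]

    def ask() -> str:
        return random.choice(EXISTENTIAL)

    print(ask())

The harder half of the test -- non-deterministic self-modification -- is exactly what a sketch this simple can't show.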
From: childe
Date: August 4th, 2007 01:04 am (UTC)
I like the answer to this question that's contained here:

http://www.theafternow.com/

in the episode called "Rachael's Mutt"