Re: Robot Pets Almost as Good as Real Ones?
Stephen, on host 68.5.240.34
Wednesday, January 25, 2006, at 11:45:54
Re: Robot Pets Almost as Good as Real Ones? posted by Sam on Wednesday, January 25, 2006, at 11:12:05:
> The burden of proof, therefore, is on you, not me, to establish that at some mysterious point on the progression of electronic computing technology we suddenly achieve the creation of consciousness. Good luck doing that without a pseudo-intellectual contrivance for a definition of "consciousness."
I thought that in this discussion we were assuming electronics had reached that point. Darien's hypothetical more or less stipulated it: that the robodog was indistinguishable in its behaviors and responses from a real dog.
The fact is there is no good definition of consciousness. M-W.com gives a couple of definitions that aren't overly helpful: "1 a : the quality or state of being aware especially of something within oneself b : the state or fact of being conscious of an external object, state, or fact", "2 : the state of being characterized by sensation, emotion, volition, and thought."
Those are all pretty vague. Worse, there's no easy way to *test* something for consciousness. If you play The Sims, the little computer dudes go about their business responding to internal desires and also to external conditions. They're not conscious, but they do a decent job (especially for a game and not a serious AI experiment) of faking it.
But are they less complex in their behaviors than living things? Certainly they seem more complex than many single-celled organisms. But bacteria aren't conscious, you say. What about certain insects or fish? They exhibit complex behaviors but are they conscious? How do you decide which organisms are conscious and which are not?
Even more problematic are humans, since we are obviously conscious. But when does that start? Are infants conscious? What about embryos? A fertilized egg? Somebody who is under anesthesia?
Consciousness is a sliding scale, Sam. Ants are less conscious than dogs, which are less conscious than apes, which are less conscious than humans. I think your remark about being able to "suddenly achieve the creation of consciousness" misses this. It's not that there will be some huge breakthrough where we finally figure out consciousness. Rather, I suspect we'll work our way up that scale in a non-linear fashion.
We don't really understand how consciousness works. Until we do, none of us can say whether or not it can be simulated using traditional computing methods (certainly it is a much, much harder problem than computer scientists first realized).
But my guess is that consciousness turns out to be nothing more than a complicated data processing system, as wintermute said. Consciousness takes input, processes it, and outputs actions. Consciousness seems to be marked by the ability to alter the rules it uses to process data, but other than that I'm not so sure why you assume it is anything special.
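Just to make that "alters its own rules" idea concrete, here's a toy sketch (entirely my own invention, not real AI research): a little agent that maps input to action with a simple rule, and rewrites the rule itself when the rule produces a bad outcome. All the names are made up for illustration.

```python
# Toy sketch: an agent that processes input with a rule and can
# rewrite that rule based on feedback. Purely illustrative.

class Agent:
    def __init__(self):
        # The rule: readings above this threshold count as dangerous.
        self.threshold = 50

    def react(self, reading):
        """Process one input, output an action."""
        return "flee" if reading > self.threshold else "stay"

    def learn(self, reading, got_burned):
        """Alter the rule itself in response to feedback."""
        if got_burned and self.react(reading) == "stay":
            # The old rule failed: lower the threshold so this
            # reading triggers "flee" next time.
            self.threshold = reading - 1

agent = Agent()
print(agent.react(40))            # "stay" under the original rule
agent.learn(40, got_burned=True)  # feedback: staying at 40 was bad
print(agent.react(40))            # "flee", after rewriting its rule
```

Obviously a thermostat-grade gadget like this isn't conscious, but it does show there's nothing magical about a data processor that modifies its own processing rules.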
Like I said in another post, the brain seems to be a computer in which there is no clear separation between hardware and software, which complicates the analogy. But you can emulate hardware in software if you understand how the hardware works.
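For what it's worth, the hardware-in-software point is easy to demonstrate at toy scale. Here's an interpreter for a made-up three-instruction machine; the opcodes and the machine itself are invented for illustration, but the same principle is how real emulators work:

```python
# Minimal sketch of emulating hardware in software: an interpreter
# for an invented three-instruction machine (opcodes are made up).

def run(program):
    acc = 0  # the machine's single register (an "accumulator")
    pc = 0   # program counter
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":    # put a value in the accumulator
            acc = arg
        elif op == "ADD":   # add a value to the accumulator
            acc += arg
        elif op == "JNZ":   # jump to instruction arg if acc is nonzero
            if acc != 0:
                pc = arg
                continue
        pc += 1
    return acc

# Count down from 3 to 0 in a loop: 3, then ADD -1 until acc hits 0.
print(run([("LOAD", 3), ("ADD", -1), ("JNZ", 1)]))  # 0
```

Nothing in the physical computer running this code has an accumulator or those opcodes; the "hardware" exists only as software. If the brain really is a computer, the same trick ought to apply, once we understand how that hardware works.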
Given all this, I do not see why it's unreasonable to assume that a complex enough piece of software could do everything that humans do. What is it about consciousness that makes it so special?
Stephen