Re: Robot Pets Almost as Good as Real Ones?
Dave, on host 65.116.226.199
Thursday, January 26, 2006, at 12:28:56
Re: Robot Pets Almost as Good as Real Ones? posted by Lirelyn on Thursday, January 26, 2006, at 09:36:46:
> But doesn't it make a difference to you that you *know* the thing isn't alive?
Part of the point I've been arguing is that, given a robodog of sufficient complexity such that I cannot tell the difference between it and a real dog, it'd be damned hard to convince me it *wasn't* alive in at least a way analogous to a meatdog. You can say all you want that "The organic dog is alive and the robodog is just a cold, unthinking simulation of a dog!!" but I just don't buy it. Life is as life does. I'm very much a behaviorist when it comes to animals--I believe all that we know about them, and probably all that we *can* know about them, comes from observing their behavior. Sure, we can draw comparisons between their behaviors and our own, and perhaps infer what might be going on in their doggy brains by comparing their situations to similar ones of ours. But I can't fathom not being able to do the same with a robodog.
I thought about this overnight, and I've realized a very important but unspoken part of my whole argument in this thread is that I just don't believe we'd ever be able to "simulate" a dog such that the robodog would be indistinguishable from a real dog. And by "simulate" I mean use the type of hard-coded logic I referenced in another post, where the programmers of the robodog create reactions for every contingency, with sufficient randomness to make the robodog not completely deterministic. That's just impossible in my mind, although I could be wrong. Rather, what we might be able to do is what I also stated in a previous post: Start with an "infant" robodog, program in certain instincts, desires, and needs common to dogs, and have a way for the robodog to learn from experiences and "grow". This would be exactly analogous to how organic dogs develop, and given a robodog brain of sufficient complexity, I simply can't see consciousness *not* being the result, if indeed you actually get to the point of having a robodog indistinguishable from a real dog. I have been tacitly assuming all along (while also trying, at times, to "prove" the point) that consciousness would almost be a prerequisite for creating a robodog that could exactly duplicate a real dog.
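To make that "instincts plus learning" idea a little more concrete, here's a toy Python sketch. It's obviously nothing like how a real robodog would be built--every name and number in it is invented for the example--but it shows the shape of the idea: fixed innate drives, plus values that grow out of the dog's own reward history rather than being hard-coded by a programmer.

```python
import random

class RoboDog:
    """A toy 'infant' robodog: fixed innate drives plus values learned
    from experience. (All names and numbers here are invented for the
    sake of the example.)"""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        # Innate instincts, fixed at "birth": baseline appeal of each action.
        self.instincts = {"fetch": 0.5, "bark": 0.5, "nap": 0.5}
        # Learned values start at zero and grow out of experience.
        self.learned = {action: 0.0 for action in self.instincts}

    def appeal(self, action):
        # Innate drive + learned value + a tiny random nudge, so the
        # dog isn't completely deterministic when actions are tied.
        return (self.instincts[action] + self.learned[action]
                + self.rng.uniform(0, 0.01))

    def choose_action(self):
        return max(self.instincts, key=self.appeal)

    def experience(self, action, reward, rate=0.2):
        # Nudge the learned value toward the reward received -- a crude
        # stand-in for however a real robodog brain would actually learn.
        self.learned[action] += rate * (reward - self.learned[action])

# A "puppyhood" of experiences: the owner praises fetching, ignores barking.
dog = RoboDog()
for _ in range(20):
    dog.experience("fetch", reward=1.0)
    dog.experience("bark", reward=0.0)
```

After that upbringing, `dog.choose_action()` reliably comes back with "fetch"--and the point is that nobody wrote "prefer fetch" into the code. The preference grew out of the dog's history, which is the kind of development I'm talking about, just scaled down to almost nothing.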
Again, I could be wrong. People used to think that a chess-playing computer would have to be conscious, or at least intelligent in some non-trivial way. Instead, we now have chess-playing computers that can beat the best human players in the world, but all they do is brute-force the game. They are better than any human chess player, but they approach the game entirely differently. In fact, thinking about it, it's pretty amazing to me that humans play as well as we do, since we obviously do *not* just use a brute-force algorithm that looks seven or eight moves ahead at every possible move. Instead, we play in some completely different manner that doesn't require massive computing power to examine every possible move in advance, and yet somehow we play as well as or better than almost every computer in the world (and can fry an egg, too!)
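For anyone curious what "brute-forcing the game" actually means, here's a minimal sketch in Python. Real chess engines are vastly more sophisticated than this, so I'm using a trivial take-away game instead (the game and all the names are just made up for the illustration): two players alternately remove 1 or 2 stones from a pile, and whoever takes the last stone wins. The program simply tries every move, follows every line to the end of the game, and picks the move with the best guaranteed outcome--no insight, no intuition, just exhaustive search.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def outcome(pile):
    """Return True if the player to move can force a win with 'pile'
    stones left. Pure brute force: try every legal move and recurse
    all the way to the end of the game."""
    if pile == 0:
        return False  # previous player took the last stone and already won
    # A move wins if it leaves the opponent in a losing position.
    return any(not outcome(pile - take) for take in (1, 2) if take <= pile)

def best_move(pile):
    """Pick any move that leaves the opponent losing; if none exists
    (the position is lost anyway), just take 1 stone."""
    for take in (1, 2):
        if take <= pile and not outcome(pile - take):
            return take
    return 1
```

Run this and it plays the toy game perfectly (it "discovers" that multiples of 3 are losing positions, so from 5 stones it takes 2, from 4 it takes 1). But notice there's no understanding anywhere in there--just the mechanical checking of every possibility, which is exactly the approach I doubt will ever scale up to a whole dog.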
So I could be wrong. Someday, maybe the "brute force" approach will produce a robodog that fools me into thinking it's a real dog. I'd *still* have a hard time, on a simple emotional level, convincing myself the brute-force robodog wasn't alive. And I still don't think I'd mind having one as a pet. However, I'd be more open to discussing whether, on a philosophical level, it was "alive" and "conscious" or not.
-- Dave