Re: Robot Pets Almost as Good as Real Ones?
Dave, on host 65.116.226.199
Wednesday, January 25, 2006, at 17:30:22
Re: Robot Pets Almost as Good as Real Ones? posted by Sam on Wednesday, January 25, 2006, at 15:35:56:
> By contrast, we also understand that computers and robots come into existence via entirely different means. They're constructed differently, and they work differently. If we intentionally manufacture computers with external similarities to mammals, WHERE is the logic in assuming that *internal* similarities have also come into being as a byproduct?
I guess this would be one of the fundamental areas where we differ in our thinking, then. You seem to assume that biological brains and electronic computers work completely differently at a fundamental level. But to me, a brain is fundamentally just a collection of neurons, each one either firing or not firing, which sounds an awful lot like the ones and zeroes of a computer. I think the difference is one of complexity, not of how things fundamentally work.

Of course, my own personal suspicion is that my brain relies on quantum principles that my PC does not, but that's just an opinion based on nothing more than my own random musings on the subject. And even if I'm right, that doesn't preclude the possibility (or more likely in my mind, probability or even inevitability) that we may someday invent a quantum computer that *can* take advantage of the same quantum weirdness I suspect my brain uses. In which case we're right back at "levels of complexity" rather than "fundamental differences."
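Just to make that analogy concrete, here's a rough sketch of a neuron treated as a bare threshold unit (this is my own toy illustration, not real neuroscience; the function name, weights, and threshold are all made up, and actual neurons are far messier):

    use strict;
    use warnings;

    # A neuron as a threshold unit: sum the weighted inputs and
    # either fire (1) or stay quiet (0). Firing or not firing is
    # as binary an event as a bit in a computer.
    sub neuron_fires {
        my ($inputs, $weights, $threshold) = @_;
        my $sum = 0;
        $sum += $inputs->[$_] * $weights->[$_] for 0 .. $#$inputs;
        return $sum >= $threshold ? 1 : 0;
    }

    print neuron_fires([1, 0, 1], [0.5, 0.9, 0.7], 1.0), "\n";  # prints 1 (fires)
    print neuron_fires([0, 1, 0], [0.5, 0.9, 0.7], 1.0), "\n";  # prints 0 (silent)

The interesting stuff in a real brain is obviously in how billions of these hook together, but the basic event at the bottom really is that binary.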
Your point about relating a behavior observed in dogs to our own internal experiences is valid, but I don't see how that precludes you from extending that to robodogs that exhibit the same behavior. For example, if you kick a meatdog, it'll tend to yip, run under the table, and cower there, whimpering in pain and confusion. We associate that with the pain and confusion we ourselves would feel if similarly kicked, so we feel bad for having done it (or hopefully, don't do it in the first place). But a robodog like we've been positing would exhibit exactly the same behaviors. Can you honestly tell me you wouldn't feel guilty kicking a robodog in the ribs if it looked and behaved *exactly* like a real dog would? It's the behavior (cowering, whimpering) we associate with our own internal thoughts on pain and suffering, not some abstract idea of dog consciousness. I guarantee you're going to feel guilty on some fundamental emotional level *regardless* of your thoughts on whether or not the robodog actually "felt" the pain the same way you and I would have.
The idea that two biological entities have more in common with each other than a biological and a mechanical entity do seems valid. It seems logical that we can infer how meatdogs feel about things based on how we would feel about the same things. But that still, to me, doesn't preclude the same sort of consideration being given to robodogs displaying the same sort of "feelings". If I met a space alien with a completely different biological makeup from mine (and biologically, that's infinitely more likely than meeting one that looks like the typical Star Trek "human with head ridges" aliens), I wouldn't assume that I could kick it indiscriminately just because it has a different biological makeup than I do, if it displayed a pain reaction.
There's the argument that the robodog is just "programmed" to "simulate" a pain reaction. But first of all, I can't show that a meatdog isn't similarly programmed, and second of all, my *own* pain reactions aren't exactly well thought out either. If you kick me, I'll yelp and cower just like a dog would, and not because some higher mental function kicked in and reasoning took place. It just hurts, and that's how I respond to pain, because that's *my* programming.
Also, I don't think the tactic of simply programming a robodog to yelp and cower when kicked, as in:
if ($kicked) { yelp(); cower(); }
is at all how a "real" robodog would be programmed. First of all, such programming would require too much forethought on the part of the programmers; they'd have to plan for every contingency. More likely, simple "instincts" would be programmed, such as survival (the most important and fundamental instinct every biological organism has), and from there some way for the robodog to learn as it "grows" would be implemented, which would be directly analogous to how "real" organisms apparently develop. This would of course require hardware, software, and programming skills far beyond anything we have now, and possibly beyond anything we will ever have. But I don't think it's impossible. And given that a robodog wouldn't be strictly "programmed" in the sense that every reaction would be predictable from a given input, I don't see why consciousness couldn't or wouldn't arise in such a robodog. I feel (and current scientific thought seems to back me on this) that consciousness is just a result of the interaction of the complex systems within the brain. So similarly complex systems created within a mechanical robobrain could very well give rise to the same sort of consciousness.
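As a hand-wavy illustration of the difference (everything here is invented for the example: the %avoidance table, the 0.5 learning rate, the situations; a real system would be unimaginably more complex), here's the same kicked-dog scenario where the only hard-coded "instinct" is that pain is bad:

    use strict;
    use warnings;

    # The one built-in drive: pain is bad, avoid whatever caused it.
    # %avoidance starts empty; every entry in it is learned.
    my %avoidance;

    sub experience {
        my ($situation, $pain) = @_;
        my $old = $avoidance{$situation} // 0;
        # Nudge the learned avoidance toward how much this hurt.
        $avoidance{$situation} = $old + 0.5 * ($pain - $old);
    }

    sub react {
        my ($situation) = @_;
        return (($avoidance{$situation} // 0) > 0.5) ? "cower" : "approach";
    }

    experience("raised boot", 1.0) for 1 .. 3;  # a few painful lessons
    print react("raised boot"), "\n";           # cower (learned, not scripted)
    print react("food bowl"), "\n";             # approach (no bad memories)

Nobody wrote an "if kicked then cower" rule there; the cowering falls out of one built-in drive plus whatever the dog has lived through. Scale that idea up by a few billion and you get behavior nobody could have predicted line by line.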
Anyway, we're getting more and more into "this is how I feel" territory and farther away from anything that could even be considered objective fact. I don't have too much more to say, because we obviously start from differing fundamental assumptions and therefore can't really debate this properly. And that's cool. It's certainly been a fun debate, though. :-)
-- Dave