"Can versus Should" revisited
Posted By: Issachar, on host 207.30.27.2
Date: Thursday, June 22, 2000, at 10:22:31

The title is for the old folks who remember the first "major" discussion thread on this Forum, a couple of years ago. :)

Supposing that you've read the first post on nanotechnology, I'm willing to bet that in addition to a "land without sickness or want", plenty of darker visions of the future have sprung to your mind, as they did to mine. This is the post that I needed to write in order to get off my chest the feeling I had after reading about nanotechnology: more sick with dread than with excitement.

A fundamental characteristic of nanotech machines, or "nanites", is self-replication: the ability to produce multiple copies of themselves. Without self-replication, nanites would not be useful, since millions of nano-bots would be required to carry out any substantial task, and manufacturing millions of nanites one at a time is not a feasible solution.
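To see why self-replication is the crux, it helps to put rough numbers on it. Here is a minimal sketch (my own illustration, not from the original post) comparing one-at-a-time manufacture with a population that copies itself every cycle; the figure of one million nanites and the assumption of one copy per nanite per cycle are arbitrary, chosen only to show the shape of the growth:

```python
# Goal: field one million nanites.
TARGET = 1_000_000

# Serial manufacture: one new nanite per cycle, so a million cycles.
serial_cycles = TARGET

# Self-replication: every existing nanite copies itself each cycle,
# so the population doubles: 1, 2, 4, 8, ...
population = 1
replication_cycles = 0
while population < TARGET:
    population *= 2
    replication_cycles += 1

print(serial_cycles)       # 1000000 cycles
print(replication_cycles)  # 20 cycles, since 2**20 = 1048576 > 1000000
```

Whatever the real cycle time turns out to be, the gap between a million steps and twenty is why the technology is only useful -- and only dangerous -- once replication is in the picture.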

If the benefit of a self-replicating entity is its ability to quickly spread out and work on a large area at once, that is also its inherent threat. The 20th century's dreaded instruments of destruction, nuclear weapons, were terrifyingly powerful, but also extremely difficult to acquire: we could at least be assured that only a small number of parties had access to the rare materials needed to build them.

The 21st century has greater reason for fear, because nanites of all kinds will be cheap and easy to produce; the only serious obstacle to producing nanotech weapons of mass destruction will be the knowledge and expertise required to design them. And such knowledge is not likely to be hard to come by.

What could a nanotech weapon do? It could selectively kill persons of a particular genetic (read: ethnic) heritage. It could reduce a nation's arable land to non-arable dust. It could do any number of horrific things, it could do them quickly, and it could do them on a global scale. Even the crudest use of nanotechnology renders us completely defenseless: what will happen if someone strings a microscopic wire, many times stronger than steel, across your doorway at neck level?

Herein lies the first "can versus should" problem: as a global society, we are obsessed with discovering all that we *can* do with our incredible scientific knowledge; we have made startlingly little progress, on the other hand, in acquiring the wisdom to know what we *should* do -- contrary to the expectations of our Enlightenment-era forefathers. The danger is not simply that nanotechnology will *enable* us to cut our own throats, but that we are likely to *want* to do it. Yet I am at a loss to come up with a means of halting nanotech research pending the development of enforceable ethical safeguards against its misuse. The pace of our advancement in knowledge seems inexorable.

To resume the discussion: the only possible defense against nanotechnology is more nanotechnology, designed to counteract hostile nanites and destroy them, adapting to new varieties as they appear. But how successful could this strategy be? If our turnaround response time to new computer viruses is any indication, "not nearly fast or effective enough" would be my guess. Put 100 armed people in a room, and instruct one of them to kill all the others. Even though all are armed and can theoretically defend themselves, if the victims are unable to respond for the first five minutes or so, they will all be killed before they can offer any resistance.

Let us say, though, that nanotechnology provides a sufficient, instantly-responsive defense against aggression of all kinds. Perhaps people have millions of nanites operating within their bodies to destroy pathogens. Their clothing is impervious to bullets and even micro-wire. Their visual and auditory acuity is greatly enhanced. Nanotechnology has made them, for all intents and purposes, supermen.

To what end, then, do machines far "smarter", more adaptive, and more capable than humans serve our human needs? At the point where our survival is machine-assisted, what justification do we offer for what appears to be such an inefficient use of machine resources: keeping alive a planet of invalids, who might as well otherwise be stricken to their beds for all that their unassisted efforts can produce? When we are no longer the most valuable beings by reason of either our mental superiority or our creative ability, what do we have to say beyond the cry, "I want to live!"? And why should our superior creations see fit to indulge that wish?

My expectation is that it will not be long before we see a renewed interest in the dusty old philosophical questions of the significance of human life and existence, since they will suddenly have an immediate and practical application. Is there a good reason why the human race should be the endpoint of history -- why we should not become extinct in favor of our more mechanical descendants?

Frank Tipler, in his book The_Physics_of_Immortality, thinks not. His view is that it is absolutely necessary for Homo sapiens to die out so that the more advanced forms of life, to which we are the midwife, may continue with greater success in what he considers the ultimate goal: surviving the collapse of the universe and the end of all existence. Tipler's hope is that our demise will not be final, that each person will exist in the future as an emulation -- a form of the "brain-in-a-vat" that does not suspect that its perceived existence is not grounded in reality, yet is happy and satisfied in its ignorance.

Tipler's approach is pragmatic; that of others will be philosophical or religious. Has God, or Nature, ordained a special role for humans that justifies our continued existence, as "inefficient" as we may be within the new system? Here is the second "can versus should" problem: do we have not only the wherewithal to survive, but the justification for doing so? Even if we can survive, should we?

Those of you who know me will of course suspect that I have my own ideas about this issue, and you are right. But I mostly want to hear how other people approach the problem. Post away, all of you smart Rinkydinks! (My third post on this topic will have to wait until after lunch; I'm starved. :) )

Iss "survival of the unfit" achar

