by Philip Tung Yep
Very nearly the earliest art of which we have evidence is concerned with animals. It appears to represent species we know our early ancestors hunted and one hypothesis is that the images were part of magical rituals designed to placate the spirits of prey. We may never know whether this is true but the drawings, crude as they are, are vital and carefully rendered, often deep in caves where the only light must have been fire, carefully established and tended.
We know the animals were hunted and eaten. We know they were lovingly represented, with great difficulty. There is a human ambivalence towards animals which runs like a thread from that remote time to the present day. In more recent times, animals have been worshipped as gods, tried and punished like human criminals in human courts, and bred into useful or bizarre forms. Their status relative to humans has swung wildly through the centuries.
In the past century or two, there have been attempts to establish animals' rights more firmly (or indeed to determine whether animals can have rights at all) and to define our responsibilities towards them. These attempts have met with some success, in that most Western countries have animal welfare legislation, but they can hardly be said to have established any consensus about our ethical responsibility to animals.
It might seem that this state of affairs could continue indefinitely; however, radical advances in some sciences and technologies now present possibilities which could force a rapid reappraisal. As is often the case, these possibilities are foreshadowed, however imperfectly, in science fiction, and it is worth examining the fiction for clues as to how events will develop.
An immense amount has already been written about human ethical treatment of animals. There is little point adding to this other than to point out some of the sources of conflict. Opinion on the subject ranges from the ultimately naive "one mouse life is equal to one human life" to the equally cardboard cut-out "Mankind was granted Dominion over the beasts of the field" view.
The main problem appears to be that animals are living responsive creatures and thus are capable of triggering off a whole range of reactions in us, from love to the need to dominate. We are poorly placed to be objective about them. Of course, the same applies to our reactions to other humans but there are widely agreed standards of treatment and people are capable of stating their case in disputes. Animals fit in different niches in our mental worlds. We see them as anything from friends to meat, from responsive and communicative souls to "brute beasts". By defining an animal appropriately, it is possible to justify almost any sort of treatment of it and equally to justify almost any sort of action on its behalf.
A related problem is that our bases for assigning value to sentient creatures vary. At times, intelligence, the capacity to feel pain, religious value, scarcity and evolutionary closeness to Man have been proposed as measures for determining how we treat an animal species. Most of these fail dismally if only because they are so utterly unquantifiable and arbitrary. It seems that our treatment of animals says more about ourselves than about them.
Science Fiction rarely addresses this issue. Most SF stories incorporating unaltered animals tend either towards genre Fantasy or towards the Reader's Digest Amazing Animals! type. Where human-animal relations are addressed, the animals are usually at or over the threshold of sapience, as in H. Beam Piper's "Fuzzy" series. There is a strong tradition of stories about alien animals, usually possessing unusual attributes or powers, but these are more concerned with exploring strangeness or surprise than with how we would treat such creatures.
Uplift is the process, defined by David Brin in his book "Sundiver", of elevating animal species to full sapience. If we could do this, and did, our ethical position seems clear. A creature of equal capacity to ourselves, able to think, feel and suffer as we do, surely deserves equal legal and civil rights. However, the more closely the question of Uplift is examined, the fuzzier it becomes.
Consider how Uplift might be achieved. In Brin's books, the methods used, while not necessarily gentle, were not drastic. Selective mutation, breeding programs, education and prostheses were all used to mould natural animals into thinking beings. Undesirable traits were culled and admirable characteristics encouraged. Generally, these programs took place over vast lengths of time. As we are now, humans have neither the patience nor the constancy for this sort of project. It seems likely that any individual attempt to Uplift a species would be done as quickly as we knew how.
It is possible to imagine a variety of methods being used - massive genetic engineering, neurosurgery, brain-machine connections or, more likely, a combination of these. The effort might succeed; however, there are likely to be many failures and partial successes before we could produce a creature we could call an equal. Some of them are likely to be very damaged, unfortunate things, but undeniably, if Uplift is possible, some of them will be sapient. So the first question to ask is whether we should embark on a course that will create so much misery, no matter how noble the aim. And should we kill our failures, no matter how merciful that might be?
Let's imagine the Uplift project has succeeded and Pan troglodytes sapiens enters the world. Let's also imagine that our miserable history of prejudice and xenophobia doesn't repeat itself, and Uplifted chimps are accorded full human rights and welcomed into society. What sort of creature will we meet? How much of the original animal will remain, and how much will be behaviours, values, concerns and drives grafted on by the engineers? In short, will we meet another equal but alien mind, or a reflection of our own desires? If the latter, will there be any point to the exercise?
In Cordwainer Smith's stories of the exploitation and oppression of animal people, his cat-people and dog-people are an underclass with aspirations and resentments analogous to those of human underclasses. The stories are still popular and entertaining to read, but there is very little that is alien about his protagonists. It is "The Game of Rat and Dragon", in which the humans and animals are clearly distinct but linked telepathically, that remains one of his most powerful tales.
It might be argued that deciding to engage in Uplift is like deciding to have a child, that parents mould the clay of a newborn mind according to their desires yet are nearly always surprised when a full-blown independent being emerges. Parents, however, have neither access to nor cause to use the tools that will be used to Uplift animals; if they did, the results might be very different. One of the most difficult questions faced when considering Uplift is likely to be whether we understand our own motives.
Finally, there is the question of the human state, with its attendant woes. There can't be many people who haven't occasionally envied dogs or cats living what appear to be trouble-free lives. Inevitably, the question will arise of whether we have the right to force the pain of self-awareness onto another species, to engineer another Fall from innocence. There are two good answers to that question. The first is that animal lives are probably potentially as stressful and fraught as our own. Certainly, animals low in the hierarchy of baboon troops show all the symptoms of stress that a harried office worker might. The second is simpler. Would you change places with the animal you envy?
The Posthuman World
While our ethical treatment of animals, uplifted or otherwise, may vary widely, we can at least understand the basis for it and what impels it. When dealing with entities which are intellectually superior, we may not have that comfort.
There is a common belief, akin to the '50s idea that future cars would be larger and more powerful than the models of the day, that future humans will be like us but smarter, more peaceful and wiser. In fact, just about the only safe prediction we can make is that our successors will be unlike us, probably in ways we can't comprehend.
Technological development and scientific knowledge have advanced at such a rate over the last few decades that science fiction has not only failed to predict important advances but has failed to extrapolate from known developments. Vernor Vinge, one of the more technologically savvy of "hard" SF authors, has ruefully noted this. In an early story he took account of Moore's Law (the rule of thumb that computer processing power doubles roughly every eighteen months) but utterly failed to describe the effects of cheaply available computers equivalent to '60s supercomputers. Since then, Vinge has been careful to avoid repeating the mistake but finds that when current trends are extrapolated, a point is reached where the figures either have to be regarded as ridiculous or, if accepted, imply fundamental changes in the nature of human society.
For instance, the amount of energy an average citizen in the US can utilise is rising at almost exponential rates. At some point early in the next century, our citizen may be able to command energy resources equivalent to those of a large corporation and, a few years beyond that, to those of a small developed country. Similarly, depending upon which measure of the processing capacity of a human brain you accept, computers possessing an equivalent capacity should be available some time in the next fifty years. This level of resource can no more fit comfortably into our sort of society than large disposable incomes and copious free time could fit into a feudal system.
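The kind of extrapolation that produces these "ridiculous" figures is easy to reproduce. A minimal sketch, assuming the eighteen-month doubling period quoted above; the doubling period and the fifty-year horizon are illustrative rules of thumb, not measurements:

```python
# Illustrative Moore's Law extrapolation: capacity doubling every
# eighteen months, per the rule of thumb cited in the text.
# All figures here are assumptions for illustration.

def growth_factor(years: float, doubling_months: float = 18.0) -> float:
    """Multiplicative growth in capacity after `years` of doubling."""
    return 2.0 ** (years * 12.0 / doubling_months)

# Fifty years of eighteen-month doublings yields roughly a
# ten-billion-fold increase: precisely the sort of figure that
# either has to be dismissed or taken to imply fundamental change.
print(f"{growth_factor(50):.2e}")
```

Whether one dismisses such a number or takes it seriously, the arithmetic itself is trivial; it is the implications that are hard to accept.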
Rapid, accelerating technological change implies more than just social change, however. Consider the human-equivalent computers which may arise in the next century. Human-equivalent hardware doesn't imply the software expertise and neurological knowledge necessary to create a human-like intelligence. There is no guarantee that we will be able to build a human mind into a machine; indeed, if Roger Penrose and his intellectual predecessors are correct, it may not even be possible in principle.
Nonetheless, there are already numerous examples of computer systems exceeding human capabilities. Chess programs can already defeat the vast majority of human players. Semiconductor design systems can lay out microprocessor circuitry too complex for human designers to handle. Medical and geological expert systems already achieve lower error rates than human experts in diagnosing disease and locating ore bodies.
Imagine a large distributed system of the early 2100s consisting of a sophisticated planning module with access to specialised design programs, large databases of psychological and political theory, powerful economic modelling software, automated factories, conversational expert systems and a clearly defined set of goals. The system might be no more conscious than a rock, no more self-aware than a plant but to those dealing with it, would this really matter? It might never write a stirring sonnet (although it could probably write a cohesive one) but when planning a political campaign, organising the development of the next generation of biochip or prosecuting a war, it could well be superior to any human living.
Is this system self-aware? No. Is it intelligent? Maybe, maybe not - it depends upon your prejudices. Is it more capable than a human? Definitely - frighteningly so.
The foregoing is just one model for the development of future superhuman entities. There are many others, ranging from the naive, say A. E. van Vogt's smarter, more peaceful Slan, to the unknowable, say Vinge's Powers, the result of runaway exponential boosting of intelligence. The Powers are incomprehensible, their concerns and conflicts both huge and subtle beyond human boundaries. They rarely deal with lesser beings, and when they do the results are so awesome and terrifying that the study of these events is known as Applied Theology. The Powers are a curious model of what we may one day become, something which should be central to human concerns. The model is almost blank, featureless. All that can be discerned is the effect, and that is inexplicable.
The ethical treatment of humans by advanced entities will depend upon the nature of those entities but for the reasons mentioned above, this will probably not be obvious to us. The non-aware system described will have no ethical concerns but may base its behaviour upon projections of the results of different courses of action and their match with its goals.
An entity composed of multiple human (and likely computer) elements, an intellectual collective, such as Star Trek's Borg or the Comprise in Michael Swanwick's "Vacuum Flowers", might well regard individuals as potentially valuable, depending upon what they brought to the whole, but replaceable. The nearest examples we currently have of this sort of being, large hi-tech corporations, already live by the rule that "No-one is indispensable".
More exotic beings, such as man-machine hybrids or human personalities translated into computer hardware, present a different range of possibilities. In principle, they come from the same ethical heritage as "natural" humans and can be expected to understand the principles we regard as natural to humans. However, much of our ethical behaviour is determined by non-rational impulses. We seem to be hardwired to respond to a large head-body ratio and thus accord more sympathy to babies than to adults. Similarly, with some notable exceptions, we grant more sympathy and value to humans than to animals. Much of this behaviour seems to be mediated by hormonal and biological influences and may not survive the transition to non-human bodies, or to bodies where the biological component is under conscious control. Entirely different bases may be necessary for ethics when the instinctual drives are gone or controlled. They may be more rational and thus perhaps more universal, but to our way of thinking they may seem harsh and unforgiving.
The whole area of the ethical treatment of humans by superhuman beings is explored widely in Science Fiction but unfortunately much is fairly naive or human-chauvinist. There are honourable exceptions. Arthur C. Clarke's "Childhood's End" portrays the collision between Humanity and their suddenly incomprehensible children with a stark eye. It is not flattering but it rings true. Greg Bear's Noocytes in "Blood Music" treat humanity well but carry them along in their headlong rush towards transcendence. Yet finally, it is Vernor Vinge's vision which rings truest. In "A Fire Upon the Deep", a war between Powers spills into the lesser domain of Human-sized intellects. One Power, the Blight, has twisted the history of the entire galaxy in its machinations, "Evil on a Transcendent scale" as one protagonist puts it. Another, Old One, works through human intermediaries to defeat it, yet lets them retain some choice and dignity. This is the range of ethics we find in humans and is the very least we should expect of those that are greater than us.
The future would seem to be a dismal place for the human race. The very best we could expect is ethical treatment, as we define it, from our successors. There is only one path which clearly offers us any option other than to be subject to the whims of others and that is to become the others.
There is a slim window of opportunity somewhere in the next fifty years when we will either create entities more capable than ourselves or become them. If we close off the paths to superhumanity, others will take them, or create beings to fill those opening niches. If we halt human genetic engineering, if we ban research into brain-machine interfacing, if we outlaw smart drugs and place restraints on intellectual collectives, then something without those restraints will expand into those niches. And they are not the small closed niches, the crevices in rocks or dark undisturbed caves. The only equivalent in evolutionary terms is the colonisation of dry land hundreds of millions of years ago. The landscapes beyond humanity are empty now. Whatever colonises them will face immense challenges but will also have the space to expand and become whatever it wants.