Leon Felkins                                                          5600 words 
email: leonf@cora.net                                        First Serial Rights 
                                                                © December, 1995 
                                                                Revised: 2/14/96                    
                                                           
                                                                


A RATIONAL JUSTIFICATION FOR ETHICAL BEHAVIOR

By: Leon Felkins leonf@cora.net

"There was only one catch and that was Catch-22, which specified that a concern for one's safety in the face of dangers that were real and immediate was the process of a rational mind. Orr was crazy and could be grounded. All he had to do was ask, and as soon as he did, he would no longer be crazy and would have to fly more missions. Orr would be crazy to fly more missions and sane if he didn't, but if he was sane he had to fly them. If he flew them he was crazy and didn't have to; but if he didn't want to he was sane and had to."
- Joseph Heller [Catch-22]

It has always seemed to me that the universe is a bit more diabolical than one would expect. There are just too many strange and frustrating incidents that cannot be attributed to pure chance. Can there be some validity to the thousands of "Murphy's Laws" that we have heard about or been subjected to? Maybe. And Murphy's Laws may not be the worst of it.

There is strong scientific evidence that some aspects of society, the "social dilemmas", defy solution -- or at least a technical solution. The economists talk about an "invisible hand" that guides the individual towards collective rewards. This essay will discuss the back side of that "invisible hand", following the line suggested by Russell Hardin [Note 1].

Still, most of us are intellectually opposed to believing in these "paradoxes", "dilemmas", "Murphy's laws" and other diabolical forces in the universe. Even if we admit that there appear to be phenomena with no rational explanation, after more sober reflection, most of us convince ourselves that this surely cannot be. "We just haven't found the solution yet", we would like to say.

Unfortunately, even with our most concentrated efforts at solving these societal problems, some of these mysterious phenomena will not go away -- in fact, some seem to get even worse. This essay is about a group of such phenomena -- rarely discussed in the lay press -- called the "social dilemmas", which have the characteristic that, in spite of a great deal of study by many great thinkers, no technical solution seems possible.

Further, these "social dilemmas" are not just academic amusements that are of little interest or concern to the average citizen. No, these phenomena are quite real. In fact, these societal paradoxes appear to be the real reason why we are making little or no progress in solving the awful social problems that we are confronted with today.

The Social Dilemmas

". . . Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit -- in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons. Freedom in a commons brings ruin to all." -- Garrett Hardin, "Tragedy of the Commons", 13 DECEMBER 1968 SCIENCE, VOL. 162

To introduce this subject, let me relate the details of an incident that happened to me in my youth. In the small town where I was raised, the Sears store was the primary source of just about everything needed to survive except groceries. From clothing to tools, whatever we needed, Sears was the place to get it.

But I had a real serious problem with Sears. I had applied for a credit card and was rejected -- which really was not all that illogical on their part considering my financial situation. Not only was I rejected but I was insulted by the store clerk that I was dealing with. I vowed never to buy so much as a wood screw from Sears again.

I remember relating this story to my mother and how I was getting even with Sears by depriving them of the vast amount of profit they were blowing away by losing my business. My mother said, "Don't cut off your nose to spite your face."

"What do you mean by that?"

"I mean", my mother explained, "that it seems to me that you're going to lose a lot more than Sears will by your so-called 'getting even'. Think about it."

I did -- and I wasn't happy with my conclusions. It didn't take a rocket scientist to see that Sears would never know if I ever existed or not. The loss of one customer to them just wasn't measurable. On the other hand, I had lots to lose. They just happened to have the tools, the equipment, the supplies and the clothing that I needed and at a better price and quality than most anyone else.

Over the years, I have observed this conflict between the individual and the group he or she belongs to exhibited in many forms. The basic structure is that the individual's contribution, which typically carries a significant cost, has an insignificant or unmeasurable impact on the group to which the individual belongs. This may not be of great social consequence if we are only concerned with such groups as "Sears' customers", but it does become very serious when the group of interest is society itself.

In general, the individual is helpless to change the overall group's preferences. For example, if I feel that having a diving board in the Community Center swimming pool is a worthwhile thing, but most people feel differently, I'm not going to get a diving board. However, we are all quite aware of the other side of the coin, where the individual takes advantage of belonging to a large group. People make excessive insurance claims, insist on refunds for merchandise that they have damaged, and cheat the government in any way they can, all based on the logic that "the company is so large, or the group is so large, or they have so much money" that the action of one individual will never be noticed.

The arithmetic for contributing to a group's welfare just doesn't add up! Let us examine some efforts of philosophers to make some sense of it.

Background on the Social Dilemmas

"Hereby it is manifest that during the time men live without a common power to keep them all in awe, they are in that condition which is called war; and such a war as is of every man against every man." Thomas Hobbes, Leviathan, 1651

Academics have known about the Social Dilemmas problem for a long time. It just didn't get formulated very well until recently. In fact, today, while the subject is well known and discussed in academia, the general public, for the most part, is still completely unaware of the ramifications. One form of the problem did get significant exposure in the lay press a few years ago. That form is called the Prisoners' Dilemma. Several books and articles that discussed this phenomenon appeared at that time [Note 2]. While there are other social dilemmas that, in general, are far more complex than the Prisoners' Dilemma, it helps to understand these other problems if we have a clear understanding of the much simpler problem, the Prisoners' Dilemma.

Prisoners' Dilemma

Let me extract a bit from my previously published Free Inquiry article [Note 2b]:

The Prisoners' Dilemma model, as presented by Robert Axelrod, Douglas Hofstadter, and others, goes as follows:

Two prisoners, let's call them Joe and Sam, are being held for trial. They are being held in separate cells with no means of communication. The prosecutor offers each of them a deal and discloses to each that the same deal has been offered to the other. The deal is this:
a) If you will confess that the two of you committed the crime and the other guy denies it, we will let you go free and send him up for five years.
b) If you both deny the crime, we have enough circumstantial evidence to put both of you away for two years.
c) If both of you confess to the crime, then you'll both get four-year sentences.
Put yourself in Joe's position. If Sam stays mum and you sing, you get zero years. If he stays mum and you stay mum, you will each get two years. On the other hand, if both of you confess, you both get four years. Finally, if he confesses and you don't, you will get five years. Whichever decision Sam makes, it is to your advantage to admit your wrongdoing. Of course, Sam is also a rational person and he will, therefore, come to the same conclusion. So you both end up confessing -- which nets a total of eight man-years in the pokey. The paradox is that if you had both denied the crime, a total of only four man-years would be spent behind bars.
Wait a minute! Can it really be that rationality leads to an inferior result? Let's look at this one more time. We will use a payoff matrix, a common tool of the game theoreticians. Also, to be consistent with the literature on this subject, we will use the standard terms "cooperate" and "defect", where in this case "cooperate" means "not confess" and "defect" means "confess".
The payoff matrix is usually presented in the following form:
              ACTION                     PAYOFF                
  
          Joe        Sam             Joe       Sam     

          Cooperate  Cooperate      -2 (R)    -2 (R)  
          Cooperate  Defect         -5 (S)     0 (T)   
          Defect     Cooperate       0 (T)    -5 (S)  
          Defect     Defect         -4 (P)    -4 (P)  
                   

(The codes represent standard terminology for each action:

R Reward for mutual cooperation
S Sucker's payoff
T Temptation to defect
P Punishment for mutual defection )

The general form of the Prisoners' Dilemma model requires that the preference ranking of the four payoffs be, from best to worst, T, R, P, S (that is, T > R > P > S) and that R be greater than the average of T and S (R > (T + S)/2). Any situation that meets these conditions is a "Prisoners' Dilemma".
In summary, the Prisoners' Dilemma model postulates a condition in which the rational action of each individual is to not cooperate (that is, to defect), yet, if both parties act rationally, each party's reward is less than it would have been if both acted irrationally and cooperated.
The model can be applied to many real world situations, from genetics to business transactions to international politics.
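
For readers who like to check such claims, here is a minimal sketch in Python (my own illustration, not part of the Free Inquiry extract) that encodes the payoff matrix above and verifies both the defining conditions and the dominance of defection:

    # Payoff to "me", in years served (negative numbers, as in the table above).
    payoff = {
        ("defect",    "cooperate"):  0,   # T, temptation to defect
        ("cooperate", "cooperate"): -2,   # R, reward for mutual cooperation
        ("defect",    "defect"):    -4,   # P, punishment for mutual defection
        ("cooperate", "defect"):    -5,   # S, sucker's payoff
    }
    T, R, P, S = 0, -2, -4, -5

    # The defining conditions of a Prisoners' Dilemma:
    assert T > R > P > S              # preference ranking, best to worst
    assert R > (T + S) / 2            # mutual cooperation beats taking turns exploiting

    # Defection dominates: whatever the other prisoner does, I do better by defecting...
    for his_move in ("cooperate", "defect"):
        assert payoff[("defect", his_move)] > payoff[("cooperate", his_move)]

    # ...yet mutual defection (-4 each) is worse for both than mutual cooperation (-2 each).
    assert payoff[("defect", "defect")] < payoff[("cooperate", "cooperate")]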

A Deeper Analysis

Let us examine this Prisoners' Dilemma thing a bit closer. While we might want to dismiss it as some sort of academic nonsense with little application to real life, there is a basic characteristic here that is common to much more serious phenomena, the group social dilemmas. Let us go over the logic one more time.

The exasperating conclusion that the rational prisoner must face here is that there is really no choice but to defect. Whatever the other person might do, in each case your best option (less time in jail) is to defect. But the other guy will come to the same conclusion, which results in a situation that is inferior to the one you would get if both cooperated.

It is often stated when this dilemma is described that the prisoners are not allowed to communicate with each other during these deliberations. Actually, that is not necessary and the problem gets even more interesting if they do communicate. Let us assume that they do.

Joe says, "Sam, you know, we are both rational, logical people. Both of us have looked at this thing very carefully and it's rather obvious what we are going to do. We're both going to defect which means that we will have done a rather stupid thing that we will have four years of time to reflect on."

"I know. It is really troubling, but how can we come to a better solution? I would like to cooperate if only I could be sure you will." Sam laments.

Joe responds, "Look Sam, let's be sensible. Let us agree to cooperate, since it is obvious that we will either both defect or both cooperate since we are both intelligent and are both using the same logic."

Sam agrees. They shake hands and they are escorted back to their own cells.

That night, both men lay in their cells pondering the agreement they made that day. Their thoughts -- which are the same -- go something like this:

"How do I know that rascal is still not going to dump on me? Sure, he's talked me into keeping my mouth shut but he can get off scot-free by squealing on me and then I'm in here for five years! Hell, that's worse than the four I would get when we were both going to defect. Well, to protect myself, I have no choice but to defect. Actually that is not a bad idea -- if he really is sucker enough to go through with our agreement, then I will get off scot-free!"

So they are back where they started. What went wrong? Why couldn't they improve their situation by cooperating? It turns out to be a matter of trust. They cannot achieve the better solution because they cannot trust each other.

That conclusion suggests the theme of this essay. At a minimum, this little exercise suggests that mutual trust could provide a far better solution to Prisoners' Dilemma-like situations than can be achieved by pure logic alone. Moreover, it may be that this confirms what many wise and kind-hearted advisors have been trying to tell us for years -- "Sometimes the heart suggests a better solution than the mind".

We will dig into this much deeper later on but first let us look at more realistic examples of social dilemmas.

The Individual vs. the Group

Just for the moment, let us suppose that many of the serious ailments of our societies are rooted in the conflict between individual interest and group interest -- the so-called "social dilemmas". I will try to convince you of this subsequently, but for the moment all I ask is that you suppose it might be so.

Now, if you made such claims amongst your non-academic friends and associates, they would say you're nuts. In fact, it has been my experience that they will even get angry with you for bringing it up. What we are saying is that in day-to-day society, just as in the Prisoners' Dilemma, the greater payoff to the individual is to defect, while acknowledging that if everyone does this, society will collapse! Most folks dismiss you as a crackpot and get considerably worked up if you insist on talking to them about it.

I think many of these people would be quite surprised to find out that this is a well known and studied phenomenon in the academic world with respected disciplines devoted to it. The subject seems to crop up in several major and minor fields: Political Science, Sociology, Social Choice Theory, Game Theory and others.

But how the subject is handled is a little strange. The academics treat the material with the offhandedness and aloofness of the mathematician. It is almost as if they were studying the habits of aliens or some strange insect colony -- not as if they were talking about real humans and problems that are destroying millions of lives. To be fair, maybe no one will listen to this little academic "secret" -- especially the politicians in charge.

I bring this up partly to defend some of what I have to say. That is, while some of you may not believe what I am telling you, and some of you may think that this is just one more crackpot theory from yet one more loony, I want to assure you that what I have to say can be easily verified at the local university. See the notes for references that will substantiate my claims.

We cannot get away with relegating the "individual vs. the group" problem to the bin of "academic exercises" as we might the Prisoners' Dilemma. The group social dilemma is very real and apparently much more difficult to solve. A few examples would help at this point.

"The Tragedy of the Commons"

In 1968, Garrett Hardin published an essay called "The Tragedy of the Commons" [Note 3] in which the conflict between group interest and self-interest was clearly described. This classic essay shows the hopelessness of population control by using the example of a common pasture shared by the local community, to which access was free and without restrictions.

Each individual realizes that his best interests are served by putting as many of his cattle as possible on the pasture, even though the pasture has reached its carrying capacity and even though it is obvious that if everyone does this, the Commons will totally collapse [Note 4]. This, of course, is a classic example of a social dilemma.

Let us expand the example a bit. What if a person, instead of just selfishly taking from the common pasture, actually contributed to it? Let us say this cooperative person has made some calculations and concluded that with proper management, restraint in use, and a little contributed fertilizer, the pasture would reward everyone with a return at least four times the amount invested. That is, if everyone contributed fertilizer, everyone would get back four times the amount invested. Let us say there are 20 users. What happens if no one else contributes anything? Then the return to the one community-spirited individual is four times his investment divided by 20! A losing proposition!
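
To make the arithmetic explicit, here is a minimal sketch (my own illustration, using the numbers from the paragraph above):

    users = 20          # people sharing the commons
    multiplier = 4      # each unit invested in the pasture returns four units
    investment = 1.0    # what our one community-spirited person puts in

    # The improved pasture is shared equally, so he gets back only a 1/20th share.
    his_return = multiplier * investment / users        # 4 / 20 = 0.2
    print(his_return - investment)                       # -0.8: he loses most of his stake

    # If all 20 contribute, each invests 1 and each gets back 4: everyone gains.
    everyones_return = multiplier * investment * users / users    # = 4.0
    print(everyones_return - investment)                          # +3.0 per person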

Of course it would all work if we only had mutual trust and cooperation.

"The Voter's Paradox"

Is it rational to vote? Not from an individual's perspective. It is obvious that one vote is not going to make a difference unless there is a tie (extremely improbable!) and it does cost something to vote. But what if nobody voted? We would likely pay a dear price. So, we have the social dilemma again: it is not rational for me to vote but if everyone declines to vote, then we all lose!
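
A back-of-the-envelope calculation makes the point. The numbers below are purely hypothetical, but any plausible figures lead to the same sign:

    p_decisive = 1e-7        # chance that my single vote decides a large election
    benefit_to_me = 10000.0  # dollar value to me of my side winning (a generous figure)
    cost_of_voting = 5.0     # time, travel, standing in line

    expected_gain = p_decisive * benefit_to_me - cost_of_voting
    print(expected_gain)     # about -5.0: narrow self-interest says stay home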

So what is the solution? The solution is to follow your moral duty to vote rather than strict rationality.

Government Spending

We have to be honest about it: we need that pork our congressperson is obtaining for our district! Actually, being the good congressperson that she is, we want her to do two things at once: cut back national spending while protecting or even increasing the spending for local industry!

Yes, you guessed it, the Social Dilemma on a grand scale. Just like the Tragedy of the Commons. We individually take more and more from the pasture until it collapses and we all starve!

Insurance and Lawsuits

Sure, we know it's bad for society, and we realize that we are part of society, but, rationally, it is obvious that an individual who has the opportunity should take all she can. It is hard to pass up the possibility of a $500,000 jury award for slipping on the neighbor's sidewalk just to be a good neighbor.

But since insurance companies make money, I have to conclude that society as a whole will be paying that $500,000 (and then some), which means we will all have to contribute a little. But personally, I come out way ahead!

Greed is the problem here. Maybe if we weren't so selfish we would all be better off. That is, society, of which I am a member, would benefit if we all declined to pursue large insurance settlements.

Are there Solutions to the Social Dilemmas?

"The necessity for external government to man is in an inverse ratio to the vigor of his self-government. Where the last is most complete, the first is least wanted. Hence, the more virtue the more liberty." -- S. T. Coleridge, 1833

From the time of Hobbes, and possibly before, philosophers have struggled to find a solution to the Social Dilemmas. They have had little success. Let us examine some of the schemes that have been tried.

Religion

Religion, under certain conditions, solves the problem. All you have to do is convince people that they will suffer in hell for immoral acts and that an all-seeing god is watching their every action, and they will cooperate. Note that in this situation it has now become rational to cooperate! Choosing to suffer in hell for an eternity would be an extremely irrational choice.

And, if cooperation is achieved, everyone is rewarded. That is why we call these situations "Social Dilemmas": cooperation yields greater rewards than individual selfishness, even when that selfishness is absolutely rational. So, while the method may be offensive to some, the end result is good.

But what are those "certain conditions"? Ignorance is number one. Passivity is another. Both of these are in conflict with modern societal trends. The more people are educated, the more likely they are to be rational and -- therefore -- the more likely they are to pursue selfish, but rational, interests.

Government

From Hobbes to Hardin, philosophers generally agree that it takes the force of government to control the rational but destructive desires of humans. [There are alternative views: see Note 4]

Ideally, government, as Hobbes envisioned, can force the populace into cooperation. Specifically, government can enforce contracts between individuals to avoid the Prisoners' Dilemma situation.

The problem is, government itself is composed of men and women. Therefore it, too, is likely to be plagued by the problems of the social dilemmas.

In fact, the situation is even worse in a government environment. The nature of operating with little restraint, spending other people's money, and having an all-powerful force to subdue any rebellion encourages the kind of abuse the theory of social dilemmas predicts.

Further, government makes the very social dilemma it was instituted to solve even worse! To quote Michael Taylor in his book, Anarchy and Cooperation [Note 5]:

"The arguments for the necessity of the state which I am criticizing in this book are founded on the supposed inability of individuals to cooperate voluntarily to provide themselves with public goods, and especially, in the theories of Hobbes and Hume, with security of person and property. The intervention of the state is necessary, according to these arguments, in order to secure for the people a Pareto-optimal provision of public goods, or at least to ensure that some provision is made of the most important public goods. In this section I argue that the more the state intervenes in such situations, the more 'necessary' (on this view) it becomes, because positive altruism and voluntary cooperative behavior atrophy in the presence of the state and grow in its absence. Thus, again, the state exacerbates the conditions which are supposed to make it necessary. We might say that the state is like an addictive drug: the more of it we have, the more we 'need' it and the more we come to 'depend' on it."

In summary, government in all forms known today suffers from the same social dilemmas it attempts to solve, in addition to making worse the very problem it is supposedly there to remedy. We desperately need to devise a form of government in which the social dilemmas cannot exist, or at least cannot flourish. Not a small challenge!

Morality

It has oft been suggested that if people were only moral, then the social dilemmas would be solved. The key ingredient necessary for solving the social dilemmas without using the force of government or the threats of religion is trustworthiness. In the aforementioned book, Michael Taylor makes a logical case that a social dilemma can have a solution under certain conditions, the key ingredient being the assumption of trust that others will cooperate -- "conditional cooperation". I quote:

"It plainly cannot be concluded from these results that the 'dilemma' in the Prisoners' Dilemma game is 'resolved' upon the introduction of time: that people will cooperate voluntarily in Prisoners' Dilemma supergames. Nevertheless, it has been shown that, no matter how many players there are, it is rational for some or all of the players to cooperate throughout the supergame under certain conditions. The question arises, whether these conditions are likely to be met in practice. In this connection, it is clear that cooperation amongst a relatively large number of players is 'less likely' to occur than cooperation amongst a small number. This is for two reasons. In the first place, we have seen that if mutual cooperation amongst some of the players throughout the supergame is to occur at all, it will occur only when some players adopt conditionally cooperative strategies which in every case are such that cooperation is conditional upon the cooperation of all the other cooperators (conditional and unconditional)"

David Gauthier, in his book Morals by Agreement [Note 6] and other works, develops a theory of morality based on mutual benefit. His theory is based on the concept that, under certain conditions, it is mutually beneficial for persons to adopt a disposition to comply with basic morals. He calls such persons "constrained maximizers". His theory further requires that such persons agree to a set of moral principles -- that is, be contractarians. Here I quote a paragraph that summarizes this concept:

"Here we introduce the third conception central to our theory, constrained maximization. We distinguish the person who is disposed straightforwardly to maximize her satisfaction, or fulfil her interest, in the particular choices she makes, from the person who is disposed to comply with mutually advantageous moral constraints, provided he expects similar compliance from others. The latter is a constrained maximizer. And constrained maximizers, interacting one with another, enjoy opportunities for co-operation which others lack. Of course, constrained maximizers sometimes lose by being disposed to compliance, for they may act co-operatively in the mistaken expectation of reciprocity from others who instead benefit at their expense. Nevertheless, we shall show that under plausible conditions, the net advantage that constrained maximizers reap from co-operation exceeds the exploitative benefits that others may expect. From this we conclude that it is rational to be disposed to constrain maximizing behaviour by internalizing moral principles to govern one's choices. The contractarian is able to show that it is irrational to admit appeals to interest against compliance with those duties founded on mutual advantage."

From this approach, Gauthier suggests that a certain minimal set of morals can be generated. He says, however, "Of course, we must not suppose that the moral principles we generate will be identical with those that would be derived on the universalistic conception". It would be interesting to see how this rationally derived set of morals would compare with those society is burdened with today. I would be willing to bet that many of the "morals" we make such a fuss about -- especially those involving the love life of humans -- would never surface using this approach!

Derek Parfit, in his fascinating book Reasons and Persons [Note 7], states the case for a moral solution to the group social dilemma (he calls it the "Contributor's Dilemma") as follows:

"The commonest true Dilemmas are Contributor's Dilemmas. These involve public goods: outcomes that benefit even those who do not help to produce them. It can be true of each person that, if he helps, he will add to the sum of benefits, or expected -benefits. But only a very small portion of the benefit he adds will come back to him. Since his share of what he adds will be very small, it may not repay his contribution. It may thus be better for each if he does not contribute. This can be so whatever others do. But it will be worse for each if fewer others contribute. And if none contribute this will be worse for each than if all do.
. . .
Such Contributor's Dilemmas often need moral solutions. We often need some people who are directly disposed to do their share. If these can change the situation, so as to achieve a political solution, this solution may be self-supporting. But without such people it may never be achieved.
The moral solutions are, then, often best; and they are often the only attainable solutions. We therefore need the moral motives. How could these be introduced? Fortunately, that is not our problem. They exist. This is how we solve many Prisoners' Dilemmas. Our need is to make these motives stronger, and more widely spread.
. . .
One solution is, we saw, a conditional agreement. For this to be possible, it must first be true that we can all communicate. If we are purely self interested, or never self-denying, the ability to communicate would seldom make a difference. In most large groups, it would be pointless to promise that we shall make the altruistic choice, since it would be worse for each if he kept his promise. But suppose that we are trustworthy. Each can now promise to do A, on condition that everyone else makes the same promise. If we know that we are all trustworthy, each will have a motive to join this conditional agreement. Each will know that, unless he joins, the agreement will not take effect. Once we have all made this promise we shall all do A."

What these and other philosophers are proposing is a very intriguing solution to the social dilemma: that an ethical structure may be necessary for a social structure to function at all! Democracies, large institutions, relationships between nations, and civilization itself -- we can conclude -- cannot survive without basic morals such as cooperation and trust.

The religious and governmental solutions discussed above have been shown to fail in implementation. But these attempts have a more serious deficiency compared to the ethical solution: they lack universal applicability. In particular, where is the government and/or religion that will ensure that nations cooperate with each other? If we can argue that government is justified because it solves the social dilemmas, would not such an argument also justify a world government? A scary thought!

For those of us who would like to see a more scientific basis for an ethical structure for humanity than religious pronouncements or the weak "normative standards that we discover together", the conclusion reached above is profound. The solution to the social dilemma based on an ethical structure defines a minimum set of ethics. That is, there must at least be the common moral decencies defined in the Humanist's "Statement of Principles and Values" [Note 8]: altruism, integrity, honesty, truthfulness, and responsibility. At the risk of even more overlap and duplication, I would add trustworthiness and cooperation.

Summary

"Someone who is trustworthy gains little if no one else is trustworthy and gains most if everyone else is trustworthy." -- Derek Parfit in Reasons and Persons, 1984

There are paradoxical phenomena inherent in the concept of membership in a group. In fact, this paradoxical behavior is exhibited in groups as small as two, as we have seen in the example of the Prisoners' Dilemma. In particular, there are serious dilemmas, which remain unsolved, in the relationship between an individual and the group the individual belongs to. If the individual acts rationally, the benefit to the group -- the so-called "common good" -- will suffer. If too many people act rationally, the common good will collapse, a situation called the Social Dilemma.

How can the actions of an individual be rational if the result is disaster? The reasoning is based on the premise that any individual's action will make no serious impact on the group. Whether I vote or not, the election results will be the same. The Red Cross drive for more blood donors will succeed (or not) whether I contribute or not. When the audience is applauding the performance of the great violinist, she cannot know if I am sitting on my hands. These are incontestable facts. It is also a fact that if many people act in these ways, the result will be noticeable. But that is a different fact.

The dilemma is real and is responsible for the difficulty in maintaining a civilized and just society. Most of the deterioration of society we are witnessing today results from the Social Dilemmas. Two major attempts at solving the dilemmas -- religion and government -- have both failed.

This essay proposes a solution that has actually been proposed many times before: a moral structure is necessary for the existence of a stable society. In particular, the members of the society must at least be trustworthy and cooperative.

The fact that these moral values are necessary for society to survive provides a rational basis for justifying a minimum set of morals. This minimum set must include those moral values that are needed to solve the Social Dilemmas.

This minimum set happens to be just those moral decencies prescribed by the Secular Humanist's "Statement of Principles and Values": altruism, integrity, honesty, truthfulness, and responsibility.

To know a solution and to implement that solution, of course, are totally different issues. Implementation is a very serious challenge. But it would be worth a try. The alternative is that the present disaster we are witnessing will continue to worsen.

While a discussion on implementation would require far more words than there is room for in this essay, I will make a few comments. The implementation that I propose is to teach children the principles suggested in this paper. Instead of teaching morals based on religion or guilt or the threat of government harassment, the basis should be an honest expression of the need for cooperation. In particular, the synergistic benefits of cooperation should be emphasized.

The present methods of emphasizing the retribution of God and/or Government on persons who cheat society are not really effective with most modern children. What is effective, however, is social pressure. There are still communities and societies in the world in which the power of societal pressure to make people "do the right thing" can be observed. Youngsters in Japan are not nearly as likely to steal as they are in America, simply because in that society a thief is despicable.

The most powerful force on youngsters is peer pressure. To harness that peer pressure and direct it toward the needs of society rather than its destruction would be a major step toward solving the social dilemmas.

Notes:

Note 1: Hardin, Russell. Collective Action. Baltimore: The Johns Hopkins University Press, 1982

Note 2: Suggested readings on the Prisoners' Dilemma:

Note 3: Hardin, Garrett. "The Tragedy of the Commons." Science, vol. 162, 13 December 1968

Note 4: In Immanuel Kant's "Morality and the Duty of Love Toward Other Men", his "categorical imperative" establishes a test for a social dilemma: "Act as if the maxim of thy action were to become by thy will a universal law of nature".

Note 5: Taylor, Michael. Anarchy and Cooperation. New York: John Wiley & Sons, 1976

Note 6: Gauthier, David. Morals by Agreement. Oxford: Clarendon Press, 1986


Note 7: Parfit, Derek. Reasons and Persons. Oxford: Clarendon Press, 1984

Note 8: "The Affirmations of Humanism: A Statement of Principles and Values", back cover of Free Inquiry, Spring, 1989


Leon Felkins, leonf@cora.net, is a philosopher and retired software engineer. More of his essays can be found on his Web pages.