Sunday, December 29, 2013

Trolley problem

TROLLEY PROBLEM


The gap between people’s convictions and their justifications is also on display in the favorite new sandbox for moral psychologists, a thought experiment devised by the philosophers Philippa Foot and Judith Jarvis Thomson called the Trolley Problem. On your morning walk, you see a trolley car hurtling down the track, the conductor slumped over the controls. In the path of the trolley are five men working on the track, oblivious to the danger. You are standing at a fork in the track and can pull a lever that will divert the trolley onto a spur, saving the five men. Unfortunately, the trolley would then run over a single worker who is laboring on the spur. Is it permissible to throw the switch, killing one man to save five? Almost everyone says “yes.”

Consider now a different scene. You are on a bridge overlooking the tracks and have spotted the runaway trolley bearing down on the five workers. Now the only way to stop the trolley is to throw a heavy object in its path. And the only heavy object within reach is a fat man standing next to you. Should you throw the man off the bridge? Both dilemmas present you with the option of sacrificing one life to save five, and so, by the utilitarian standard of what would result in the greatest good for the greatest number, the two dilemmas are morally equivalent. But most people don’t see it that way: though they would pull the switch in the first dilemma, they would not heave the fat man in the second. When pressed for a reason, they can’t come up with anything coherent, though moral philosophers haven’t had an easy time coming up with a relevant difference, either.

When psychologists say “most people” they usually mean “most of the two dozen sophomores who filled out a questionnaire for beer money.” But in this case it means most of the 200,000 people from a hundred countries who shared their intuitions on a Web-based experiment conducted by the psychologists Fiery Cushman and Liane Young and the biologist Marc Hauser. A difference between the acceptability of switch-pulling and man-heaving, and an inability to justify the choice, was found in respondents from Europe, Asia and North and South America; among men and women, blacks and whites, teenagers and octogenarians, Hindus, Muslims, Buddhists, Christians, Jews and atheists; people with elementary-school educations and people with Ph.D.’s. 

Joshua Greene, a philosopher and cognitive neuroscientist, suggests that evolution equipped people with a revulsion to manhandling an innocent person. This instinct, he suggests, tends to overwhelm any utilitarian calculus that would tot up the lives saved and lost. The impulse against roughing up a fellow human would explain other examples in which people abjure killing one to save many, like euthanizing a hospital patient to harvest his organs and save five dying patients in need of transplants, or throwing someone out of a crowded lifeboat to keep it afloat.

When people pondered the dilemmas that required killing someone with their bare hands, several networks in their brains lighted up. One, which included the medial (inward-facing) parts of the frontal lobes, has been implicated in emotions about other people. A second, the dorsolateral (upper and outer-facing) surface of the frontal lobes, has been implicated in ongoing mental computation (including nonmoral reasoning, like deciding whether to get somewhere by plane or train). And a third region, the anterior cingulate cortex (an evolutionarily ancient strip lying at the base of the inner surface of each cerebral hemisphere), registers a conflict between an urge coming from one part of the brain and an advisory coming from another.


But when the people were pondering a hands-off dilemma, like switching the trolley onto the spur with the single worker, the brain reacted differently: only the area involved in rational calculation stood out. Other studies have shown that neurological patients who have blunted emotions because of damage to the frontal lobes become utilitarians: they think it makes perfect sense to throw the fat man off the bridge. Together, the findings corroborate Greene’s theory that our nonutilitarian intuitions come from the victory of an emotional impulse over a cost-benefit analysis.

The findings of trolleyology — complex, instinctive and worldwide moral intuitions — led Hauser and John Mikhail (a legal scholar) to revive an analogy from the philosopher John Rawls between the moral sense and language. According to Noam Chomsky, we are born with a “universal grammar” that forces us to analyze speech in terms of its grammatical structure, with no conscious awareness of the rules in play. By analogy, we are born with a universal moral grammar that forces us to analyze human action in terms of its moral structure, with just as little awareness.

The idea that the moral sense is an innate part of human nature is not far-fetched. A list of human universals collected by the anthropologist Donald E. Brown includes many moral concepts and emotions, including a distinction between right and wrong; empathy; fairness; admiration of generosity; rights and obligations; proscription of murder, rape and other forms of violence; redress of wrongs; sanctions for wrongs against the community; shame; and taboos.

Though no one has identified genes for morality, there is circumstantial evidence they exist. People given diagnoses of “antisocial personality disorder” or “psychopathy” show signs of morality blindness from the time they are children.

When anthropologists like Richard Shweder and Alan Fiske survey moral concerns across the globe, they find that a few themes keep popping up from amid the diversity. People everywhere, at least in some circumstances and with certain other folks in mind, think it’s bad to harm others and good to help them. The exact number of themes depends on whether you’re a lumper or a splitter, but the psychologist Jonathan Haidt counts five — harm, fairness, community (or group loyalty), authority and purity — and suggests that they are the primary colors of our moral sense.


The five spheres are good candidates for a periodic table of the moral sense not only because they are ubiquitous but also because they appear to have deep evolutionary roots. ... Fairness is very close to what scientists call reciprocal altruism, where a willingness to be nice to others can evolve as long as the favor helps the recipient more than it costs the giver and the recipient returns the favor when fortunes reverse. The analysis makes it sound as if reciprocal altruism comes out of a robotlike calculation, but in fact Robert Trivers, the biologist who devised the theory, argued that it is implemented in the brain as a suite of moral emotions. Sympathy prompts a person to offer the first favor, particularly to someone in need for whom it would go the furthest. Anger protects a person against cheaters who accept a favor without reciprocating, by impelling him to punish the ingrate or sever the relationship. Gratitude impels a beneficiary to reward those who helped him in the past. Guilt prompts a cheater in danger of being found out to repair the relationship by redressing the misdeed and advertising that he will behave better in the future (consistent with Mencken’s definition of conscience as “the inner voice which warns us that someone might be looking”).
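The arithmetic behind Trivers’s condition is easy to make concrete. The toy sketch below is my own illustration rather than anything from the essay; the payoff values b (benefit to the recipient) and c (cost to the giver) and the number of encounters are made-up assumptions. It simply tallies what each partner ends up with when the two take turns needing help, which is why reciprocity can pay as long as b exceeds c, and why being cheated by an ingrate is the worst outcome of all.

```python
# A toy model of reciprocal altruism (illustrative only; b, c and the
# number of rounds are assumptions, not figures from the essay).

def lifetime_payoff(i_reciprocate: bool, partner_reciprocates: bool,
                    b: float = 3.0, c: float = 1.0, rounds: int = 10) -> float:
    """Total payoff to 'me' when the two partners take turns needing a
    favor ('fortunes reverse' every round)."""
    payoff = 0.0
    for round_no in range(rounds):
        if round_no % 2 == 0:            # my turn to give
            if i_reciprocate:
                payoff -= c              # I pay the cost of the favor
        else:                            # my turn to receive
            if partner_reciprocates:
                payoff += b              # I receive the larger benefit
    return payoff

print(lifetime_payoff(True, True))    # mutual reciprocity: 5 * (b - c) = 10.0
print(lifetime_payoff(False, False))  # no favors at all:                0.0
print(lifetime_payoff(True, False))   # helping an ingrate: 5 * (-c)  =  -5.0
```

The last line is the situation the moral emotions supposedly guard against: anger and the urge to punish or cut off a cheater keep a reciprocator from being stuck in that losing position.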

Community, the very different emotion that prompts people to share and sacrifice without an expectation of payback, may be rooted in nepotistic altruism, the empathy and solidarity we feel toward our relatives (and which evolved because any gene that pushed an organism to aid a relative would have helped copies of itself sitting inside that relative). In humans, of course, communal feelings can be lavished on nonrelatives as well. Sometimes it pays people (in an evolutionary sense) to love their companions because their interests are yoked, like spouses with common children, in-laws with common relatives, friends with common tastes or allies with common enemies. And sometimes it doesn’t pay them at all, but their kinship-detectors have been tricked into treating their groupmates as if they were relatives by tactics like kinship metaphors (blood brothers, fraternities, the fatherland), origin myths, communal meals and other bonding rituals.  
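The parenthetical point about genes helping copies of themselves in relatives is usually written down as Hamilton’s rule; the notation below is the standard one from evolutionary biology, not anything stated in the essay:

$$ r\,b > c $$

Here r is the coefficient of relatedness between helper and recipient (one half for full siblings, one eighth for first cousins), b is the benefit to the recipient and c is the cost to the helper. A gene for nepotistic altruism can spread whenever the benefit, discounted by relatedness, outweighs the cost, which is also why kinship metaphors that inflate perceived relatedness can recruit the same emotions for nonrelatives.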


All this brings us to a theory of how the moral sense can be universal and variable at the same time. The five moral spheres are universal, a legacy of evolution. But how they are ranked in importance, and which is brought in to moralize which area of social life — sex, government, commerce, religion, diet and so on — depends on the culture.

The ranking and placement of moral spheres also divides the cultures of liberals and conservatives in the United States. Many bones of contention, like homosexuality, atheism and one-parent families from the right, or racial imbalances, sweatshops and executive pay from the left, reflect different weightings of the spheres. In a large Web survey, Haidt found that liberals put a lopsided moral weight on harm and fairness while playing down group loyalty, authority and purity. Conservatives instead place a moderately high weight on all five. It’s not surprising that each side thinks it is driven by lofty ethical values and that the other side is base and unprincipled.

CRITICS


Evolutionary psychologists seem to want to unmask our noblest motives as ultimately self-interested — to show that our love for children, compassion for the unfortunate and sense of justice are just tactics in a Darwinian struggle to perpetuate our genes. The explanation of how different cultures appeal to different spheres could lead to a spineless relativism, in which we would never have grounds to criticize the practice of another culture, no matter how barbaric, because “we have our kind of morality and they have theirs.” And the whole enterprise seems to be dragging us to an amoral nihilism, in which morality itself would be demoted from a transcendent principle to a figment of our neural circuitry. Now, if the distinction between right and wrong is also a product of brain wiring, why should we believe it is any more real than the distinction between red and green? And if it is just a collective hallucination, how could we argue that evils like genocide and slavery are wrong for everyone, rather than just distasteful to us?
 

This throws us back to wondering where those reasons, the grounds for calling evils like genocide and slavery wrong for everyone, could come from, if they are more than just figments of our brains. They certainly aren’t in the physical world like wavelength or mass. The only other option is that moral truths exist in some abstract Platonic realm, there for us to discover, perhaps in the same way that mathematical truths (according to most mathematicians) are there for us to discover. On this analogy, we are born with a rudimentary concept of number, but as soon as we build on it with formal mathematical reasoning, the nature of mathematical reality forces us to discover some truths and not others. (No one who understands the concept of two, the concept of four and the concept of addition can come to any conclusion but that 2 + 2 = 4.) Perhaps we are born with a rudimentary moral sense, and as soon as we build on it with moral reasoning, the nature of moral reality forces us to some conclusions but not others (MORAL REALISM).
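The “forced” character of the mathematical half of the analogy can be made vivid with a proof assistant; this one-liner is my own aside, not part of the essay. Once “two,” “four” and “addition” are defined, the checker accepts the statement by simply unfolding the definitions; there is no second conclusion it could reach.

```lean
-- Lean 4: with the natural numbers and addition defined, 2 + 2 = 4
-- holds by pure computation (definitional unfolding), so `rfl` suffices.
example : 2 + 2 = 4 := rfl
```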


Morality, then, is still something larger than our inherited moral sense, and the new science of the moral sense does not make moral reasoning and conviction obsolete.

From: The Moral Instinct (STEVEN PINKER)