But we’re better off believing in it anyway.
For centuries, philosophers and theologians have almost unanimously held that civilization as we know it depends on a widespread belief in free will—and that losing this belief could be calamitous. Our codes of ethics, for example, assume that we can freely choose between right and wrong. In the Christian tradition, this is known as “moral liberty”—the capacity to discern and pursue the good, instead of merely being compelled by appetites and desires. The great Enlightenment philosopher Immanuel Kant reaffirmed this link between freedom and goodness. If we are not free to choose, he argued, then it would make no sense to say we ought to choose the path of righteousness.
Today, the assumption of free will runs through every aspect of American politics, from welfare provision to criminal law. It permeates the popular culture and underpins the American dream—the belief that anyone can make something of themselves no matter what their start in life. As Barack Obama wrote in The Audacity of Hope, American “values are rooted in a basic optimism about life and a faith in free will.”
So what happens if this faith erodes?
The sciences have grown steadily bolder in their claim that all human behavior can be explained through the clockwork laws of cause and effect. This shift in perception is the continuation of an intellectual revolution that began about 150 years ago, when Charles Darwin first published On the Origin of Species. Shortly after Darwin put forth his theory of evolution, his cousin Sir Francis Galton began to draw out the implications: If we have evolved, then mental faculties like intelligence must be hereditary. But we use those faculties—which some people have to a greater degree than others—to make decisions. So our ability to choose our fate is not free, but depends on our biological inheritance.
Galton launched a debate that raged throughout the 20th century over nature versus nurture. Are our actions the unfolding effect of our genetics? Or the outcome of what has been imprinted on us by the environment? Impressive evidence accumulated for the importance of each factor. Whether scientists supported one, the other, or a mix of both, they increasingly assumed that our deeds must be determined by something.
In recent decades, research on the inner workings of the brain has helped to resolve the nature-nurture debate—and has dealt a further blow to the idea of free will. Brain scanners have enabled us to peer inside a living person’s skull, revealing intricate networks of neurons and allowing scientists to reach broad agreement that these networks are shaped by both genes and environment. But there is also agreement in the scientific community that the firing of neurons determines not just some or most but all of our thoughts, hopes, memories, and dreams.
We know that changes to brain chemistry can alter behavior—otherwise neither alcohol nor antipsychotics would have their desired effects. The same holds true for brain structure: Cases of ordinary adults becoming murderers or pedophiles after developing a brain tumor demonstrate how dependent we are on the physical properties of our gray stuff.
Many scientists say that the American physiologist Benjamin Libet demonstrated in the 1980s that we have no free will. It was already known that electrical activity builds up in a person’s brain before she, for example, moves her hand; Libet showed that this buildup occurs before the person consciously makes a decision to move. The conscious experience of deciding to act, which we usually associate with free will, appears to be an add-on, a post hoc reconstruction of events that occurs after the brain has already set the act in motion.
The 20th-century nature-nurture debate prepared us to think of ourselves as shaped by influences beyond our control. But it left some room, at least in the popular imagination, for the possibility that we could overcome our circumstances or our genes to become the author of our own destiny. The challenge posed by neuroscience is more radical: It describes the brain as a physical system like any other, and suggests that we no more will it to operate in a particular way than we will our heart to beat. The contemporary scientific image of human behavior is one of neurons firing, causing other neurons to fire, causing our thoughts and deeds, in an unbroken chain that stretches back to our birth and beyond. In principle, we are therefore completely predictable. If we could understand any individual’s brain architecture and chemistry well enough, we could, in theory, predict that individual’s response to any given stimulus with 100 percent accuracy.
This research and its implications are not new. What is new, though, is the spread of free-will skepticism beyond the laboratories and into the mainstream. The number of court cases, for example, that use evidence from neuroscience has more than doubled in the past decade—mostly in the context of defendants arguing that their brain made them do it. And many people are absorbing this message in other contexts, too, at least judging by the number of books and articles purporting to explain “your brain on” everything from music to magic. Determinism, to one degree or another, is gaining popular currency. The skeptics are in ascendance.
This development raises uncomfortable—and increasingly nontheoretical—questions: If moral responsibility depends on faith in our own agency, then as belief in determinism spreads, will we become morally irresponsible? And if we increasingly see belief in free will as a delusion, what will happen to all those institutions that are based on it?
In 2002, two psychologists had a simple but brilliant idea: Instead of speculating about what might happen if people lost belief in their capacity to choose, they could run an experiment to find out. Kathleen Vohs, then at the University of Utah, and Jonathan Schooler, of the University of Pittsburgh, asked one group of participants to read a passage arguing that free will was an illusion, and another group to read a passage that was neutral on the topic. Then they subjected the members of each group to a variety of temptations and observed their behavior. Would differences in abstract philosophical beliefs influence people’s decisions?
Yes, indeed. When asked to take a math test, with cheating made easy, the group primed to see free will as illusory proved more likely to take an illicit peek at the answers. When given an opportunity to steal—to take more money than they were due from an envelope of $1 coins—those whose belief in free will had been undermined pilfered more. On a range of measures, Vohs told me, she and Schooler found that “people who are induced to believe less in free will are more likely to behave immorally.”
It seems that when people stop believing they are free agents, they stop seeing themselves as blameworthy for their actions. Consequently, they act less responsibly and give in to their baser instincts. Vohs emphasized that this result is not limited to the contrived conditions of a lab experiment. “You see the same effects with people who naturally believe more or less in free will,” she said.
In another study, for instance, Vohs and colleagues measured the extent to which a group of day laborers believed in free will, then examined their performance on the job by looking at their supervisor’s ratings. Those who believed more strongly that they were in control of their own actions showed up on time for work more frequently and were rated by supervisors as more capable. In fact, belief in free will turned out to be a better predictor of job performance than established measures such as self-professed work ethic.
Another pioneer of research into the psychology of free will, Roy Baumeister of Florida State University, has extended these findings. For example, he and colleagues found that students with a weaker belief in free will were less likely to volunteer their time to help a classmate than were those whose belief in free will was stronger. Likewise, those primed to hold a deterministic view by reading statements like “Science has demonstrated that free will is an illusion” were less likely to give money to a homeless person or lend someone a cellphone.
Further studies by Baumeister and colleagues have linked a diminished belief in free will to stress, unhappiness, and a lesser commitment to relationships. They found that when subjects were induced to believe that “all human actions follow from prior events and ultimately can be understood in terms of the movement of molecules,” those subjects came away with a lower sense of life’s meaningfulness. Early this year, other researchers published a study showing that a weaker belief in free will correlates with poor academic performance.
The list goes on: Believing that free will is an illusion has been shown to make people less creative, more likely to conform, less willing to learn from their mistakes, and less grateful toward one another. In every regard, it seems, when we embrace determinism, we indulge our dark side.
Few scholars are comfortable suggesting that people ought to believe an outright lie. Advocating the perpetuation of untruths would breach their integrity and violate a principle that philosophers have long held dear: the Platonic hope that the true and the good go hand in hand. Saul Smilansky, a philosophy professor at the University of Haifa, in Israel, has wrestled with this dilemma throughout his career and come to a painful conclusion: “We cannot afford for people to internalize the truth” about free will.
Smilansky is convinced that free will does not exist in the traditional sense—and that it would be very bad if most people realized this. “Imagine,” he told me, “that I’m deliberating whether to do my duty, such as to parachute into enemy territory, or something more mundane like to risk my job by reporting on some wrongdoing. If everyone accepts that there is no free will, then I’ll know that people will say, ‘Whatever he did, he had no choice—we can’t blame him.’ So I know I’m not going to be condemned for taking the selfish option.” This, he believes, is very dangerous for society, and “the more people accept the determinist picture, the worse things will get.”
Determinism not only undermines blame, Smilansky argues; it also undermines praise. Imagine I do risk my life by jumping into enemy territory to perform a daring mission. Afterward, people will say that I had no choice, that my feats were merely, in Smilansky’s phrase, “an unfolding of the given,” and therefore hardly praiseworthy. And just as undermining blame would remove an obstacle to acting wickedly, so undermining praise would remove an incentive to do good. Our heroes would seem less inspiring, he argues, our achievements less noteworthy, and soon we would sink into decadence and despondency.
Smilansky advocates a view he calls illusionism—the belief that free will is indeed an illusion, but one that society must defend. The idea of determinism, and the facts supporting it, must be kept confined within the ivory tower. Only the initiated, behind those walls, should dare to, as he put it to me, “look the dark truth in the face.” Smilansky says he realizes that there is something drastic, even terrible, about this idea—but if the choice is between the true and the good, then for the sake of society, the true must go.
Smilansky’s arguments may sound odd at first, given his contention that the world is devoid of free will: If we are not really deciding anything, who cares what information is let loose? But new information, of course, is a sensory input like any other; it can change our behavior, even if we are not the conscious agents of that change. In the language of cause and effect, a belief in free will may not inspire us to make the best of ourselves, but it does stimulate us to do so.
Illusionism is a minority position among academic philosophers, most of whom still hope that the good and the true can be reconciled. But it represents an ancient strand of thought among intellectual elites. Nietzsche called free will “a theologians’ artifice” that permits us to “judge and punish.” And many thinkers have believed, as Smilansky does, that institutions of judgment and punishment are necessary if we are to avoid a fall into barbarism.
Smilansky is not advocating policies of Orwellian thought control. Luckily, he argues, we don’t need them. Belief in free will comes naturally to us. Scientists and commentators merely need to exercise some self-restraint, instead of gleefully disabusing people of the illusions that undergird all they hold dear. Most scientists “don’t realize what effect these ideas can have,” Smilansky told me. “Promoting determinism is complacent and dangerous.”
Yet not all scholars who argue publicly against free will are blind to the social and psychological consequences. Some simply don’t agree that these consequences might include the collapse of civilization. One of the most prominent is the neuroscientist and writer Sam Harris, who, in his 2012 book, Free Will, set out to bring down the fantasy of conscious choice. Like Smilansky, he believes that there is no such thing as free will. But Harris thinks we are better off without the whole notion of it.
“We need our beliefs to track what is true,” Harris told me. Illusions, no matter how well intentioned, will always hold us back. For example, we currently use the threat of imprisonment as a crude tool to persuade people not to do bad things. But if we instead accept that “human behavior arises from neurophysiology,” he argued, then we can better understand what is really causing people to do bad things despite this threat of punishment—and how to stop them. “We need,” Harris told me, “to know what are the levers we can pull as a society to encourage people to be the best version of themselves they can be.”
According to Harris, we should acknowledge that even the worst criminals—murderous psychopaths, for example—are in a sense unlucky. “They didn’t pick their genes. They didn’t pick their parents. They didn’t make their brains, yet their brains are the source of their intentions and actions.” In a deep sense, their crimes are not their fault. Recognizing this, we can dispassionately consider how to manage offenders in order to rehabilitate them, protect society, and reduce future offending. Harris thinks that, in time, “it might be possible to cure something like psychopathy,” but only if we accept that the brain, and not some airy-fairy free will, is the source of the deviancy.
Accepting this would also free us from hatred. Holding people responsible for their actions might sound like a keystone of civilized life, but we pay a high price for it: Blaming people makes us angry and vengeful, and that clouds our judgment.
“Compare the response to Hurricane Katrina,” Harris suggested, with “the response to the 9/11 act of terrorism.” For many Americans, the men who hijacked those planes are the embodiment of criminals who freely choose to do evil. But if we give up our notion of free will, then their behavior must be viewed like any other natural phenomenon—and this, Harris believes, would make us much more rational in our response.
Although the scale of the two catastrophes was similar, the reactions were wildly different. Nobody was striving to exact revenge on tropical storms or declare a War on Weather, so responses to Katrina could simply focus on rebuilding and preventing future disasters. The response to 9/11, Harris argues, was clouded by outrage and the desire for vengeance, and has led to the unnecessary loss of countless more lives. Harris is not saying that we shouldn’t have reacted at all to 9/11, only that a coolheaded response would have looked very different and likely been much less wasteful. “Hatred is toxic,” he told me, “and can destabilize individual lives and whole societies. Losing belief in free will undercuts the rationale for ever hating anyone.”
Whereas the evidence from Kathleen Vohs and her colleagues suggests that social problems may arise from seeing our own actions as determined by forces beyond our control—weakening our morals, our motivation, and our sense of the meaningfulness of life—Harris thinks that social benefits will result from seeing other people’s behavior in the very same light. From that vantage point, the moral implications of determinism look very different, and quite a lot better.
What’s more, Harris argues, as ordinary people come to better understand how their brains work, many of the problems documented by Vohs and others will dissipate. Determinism, he writes in his book, does not mean “that conscious awareness and deliberative thinking serve no purpose.” Certain kinds of action require us to become conscious of a choice—to weigh arguments and appraise evidence. True, if we were put in exactly the same situation again, then 100 times out of 100 we would make the same decision, “just like rewinding a movie and playing it again.” But the act of deliberation—the wrestling with facts and emotions that we feel is essential to our nature—is nonetheless real.
The big problem, in Harris’s view, is that people often confuse determinism with fatalism. Determinism is the belief that our decisions are part of an unbreakable chain of cause and effect. Fatalism, on the other hand, is the belief that our decisions don’t really matter, because whatever is destined to happen will happen—like Oedipus’s marriage to his mother, despite his efforts to avoid that fate.
When people hear there is no free will, they wrongly become fatalistic; they think their efforts will make no difference. But this is a mistake. People are not moving toward an inevitable destiny; given a different stimulus (like a different idea about free will), they will behave differently and so have different lives. If people better understood these fine distinctions, Harris believes, the consequences of losing faith in free will would be much less negative than Vohs’s and Baumeister’s experiments suggest.
Can one go further still? Is there a way forward that preserves both the inspiring power of belief in free will and the compassionate understanding that comes with determinism?
Philosophers and theologians are used to talking about free will as if it is either on or off; as if our consciousness floats, like a ghost, entirely above the causal chain, or as if we roll through life like a rock down a hill. But there might be another way of looking at human agency.
Some scholars argue that we should think about freedom of choice in terms of our very real and sophisticated abilities to map out multiple potential responses to a particular situation. One such scholar is Bruce Waller, a philosophy professor at Youngstown State University. In his new book, Restorative Free Will, he writes that we should focus on our ability, in any given setting, to generate a wide range of options for ourselves, and to decide among them without external constraint.
For Waller, it simply doesn’t matter that these processes are underpinned by a causal chain of firing neurons. In his view, free will and determinism are not the opposites they are often taken to be; they simply describe our behavior at different levels.
Waller believes his account fits with a scientific understanding of how we evolved: Foraging animals—humans, but also mice, or bears, or crows—need to be able to generate options for themselves and make decisions in a complex and changing environment. Humans, with our massive brains, are much better at thinking up and weighing options than other animals are. Our range of options is much wider, and we are, in a meaningful way, freer as a result.
Waller’s definition of free will is in keeping with how a lot of ordinary people see it. One 2010 study found that people mostly thought of free will in terms of following their desires, free of coercion (such as someone holding a gun to your head). As long as we continue to believe in this kind of practical free will, that should be enough to preserve the sorts of ideals and ethical standards examined by Vohs and Baumeister.
Yet Waller’s account of free will still leads to a very different view of justice and responsibility than most people hold today. No one has caused himself: No one chose his genes or the environment into which he was born. Therefore no one bears ultimate responsibility for who he is and what he does. Waller told me he supported the sentiment of Barack Obama’s 2012 “You didn’t build that” speech, in which the president called attention to the external factors that help bring about success. He was also not surprised that it drew such a sharp reaction from those who want to believe that they were the sole architects of their achievements. But he argues that we must accept that life outcomes are determined by disparities in nature and nurture, “so we can take practical measures to remedy misfortune and help everyone to fulfill their potential.”
Understanding how to do that will be the work of decades, as we slowly unravel the nature of our own minds. In many areas, that work will likely yield more compassion: offering more (and more precise) help to those who find themselves in a bad place. And when the threat of punishment is necessary as a deterrent, it will in many cases be balanced with efforts to strengthen, rather than undermine, the capacities for autonomy that are essential for anyone to lead a decent life. The kind of will that leads to success—seeing positive options for oneself, making good decisions and sticking to them—can be cultivated, and those at the bottom of society are most in need of that cultivation.
To some people, this may sound like a gratuitous attempt to have one’s cake and eat it too. And in a way it is. It is an attempt to retain the best parts of the free-will belief system while ditching the worst. President Obama—who has both defended “a faith in free will” and argued that we are not the sole architects of our fortune—has had to learn what a fine line this is to tread. Yet it might be what we need to rescue the American dream—and indeed, many of our ideas about civilization, the world over—in the scientific age.
(Source: The Atlantic)