How the attention economy is subverting our decision-making and our democracy.
It’s not that James Williams, a doctoral candidate at the Oxford Internet Institute’s Digital Ethics Lab (motto: “Every Bit as Good”), had a “God, what have I done?” moment during his time at Google. But it did occur to him that something had gone awry.
Williams joined Google’s Seattle office when it opened in 2006 and went on to win the company’s highest honor, the Founder’s Award, for his work developing advertising products and tools. Then, in 2012, he realized that these tools were actually making things harder for him. Modern technology platforms, he explained to me, were “reimposing these pre-Internet notions of advertising, where it’s all about getting as much of people’s time and attention as you can.”
By 2011, he had followed his literary and politico-philosophical bent (he is a fan of George Orwell’s 1984 and Aldous Huxley’s Brave New World) to Oxford, while still working at Google’s London office. In 2014, he co-founded Time Well Spent, a “movement to stop technology platforms from hijacking our minds,” according to its website. Partnering with Moment, an app that tracks how much time you spend in other apps, Time Well Spent asked 200,000 people to rate the apps they used the most—after seeing how much screen time each app demanded of them. They found that, on average, the more time people spent in an app, the less happy they were with it. “Distraction wasn’t just this minor annoyance. There was something deeper going on,” he told me. “That’s why I came over here to start my Ph.D. on that stuff.”
Williams has most recently been in the media spotlight for his essay, “Stand Out of Our Light: Freedom and Persuasion in the Attention Economy,” which won the $100,000 Nine Dots Prize and scored him a book deal with Cambridge University Press.
Nautilus caught up with Williams to discuss the subversive power of the modern attention economy.
How do the Internet and social media apps threaten democracy?
Democracy assumes a set of capacities: the capacity for deliberation, understanding different ideas, reasoned discourse. This grounds government authority, the will of the people. So one way to talk about the effects of these technologies is that they are a kind of denial-of-service (DoS) attack on the human will. Our phones are the operating system for our lives. They keep us looking and clicking. I think this wears down certain capacities, like willpower, by having us make more decisions. A study showed that repeated distractions lower people’s effective IQ by up to 10 points. That’s over twice the IQ drop you get from long-term marijuana usage. There are certainly epistemic issues as well. Fake news is part of this, but it’s more about people having a totally different sense of reality, even within the same society or on the same street. It really makes it hard to achieve the common sense of what’s at stake that is necessary for an effective democracy.
How have these technologies transformed news media?
What’s happened is, really rapidly, we’ve undergone this tectonic shift, this inversion between information and attention. Most of the systems that we have in society—whether it’s news, advertising, even our legal systems—still assume an environment of information scarcity. The First Amendment protects freedom of speech, but it doesn’t necessarily protect freedom of attention. There wasn’t really anything obstructing people’s attention at the time it was written. Back in an information-scarce environment, the role of a newspaper was to bring you information—your problem was lacking it. Now it’s the opposite. We have too much.
If you get distracted by the same thing in the same way every day, it adds up to a distracted week, distracted months.
How does that change the role of the newspaper?
The role of the newspaper now is to filter, and help you pay attention to, the things that matter. But if the business model is advertising, and a good article is one that gets the most clicks, you get things like clickbait, because those are the metrics aligned with the business model. When information becomes abundant, attention becomes scarce. Advertising has dragged everybody down, even the wealthiest organizations with noble missions, to competing on the terms of clickbait. Every week there are these outrage cascades online. Outrage is rewarding to us, because it fulfills a lot of psychological needs we have. It could be used to help us move forward, but often it’s used to keep us clicking and scrolling and typing. One of the first books about web usability was actually called Don’t Make Me Think. It’s this idea of appealing to our impulsive selves, the automatic part of us, and not the considerate, rational part.
Tristan Harris, with whom you co-founded Time Well Spent, said tech steers the thoughts of 2 billion people with more influence than the world’s religions or governments. Would you agree?
I think I would agree with that. I don’t know of any governmental or religious mechanism that’s comparable to the smartphone and social media, in the sense that people give so much attention to it, and it operates with such frequency and duration. I think it certainly intervenes at a lower level, closer to people’s attention, than governmental or religious systems. I think it’s closer to being like a chemical, or a drug of some sort, than it is to being like a societal system. Snapchat has this thing called Snapstreak, for example, where it says, “Here’s how many days in a row you’ve exchanged snaps with someone.” You can brag to your friends how long you’ve gone. There are a ton of these kinds of methods and non-rational biases—social comparison is a huge one. Nir Eyal wrote a book called Hooked, in which he teaches designers how to pull a user into a system.
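To make the streak mechanic concrete, here is a minimal sketch of the logic behind a daily-streak counter. It is a hypothetical simplification written for illustration, not Snapchat’s actual implementation (real Snapstreaks, for instance, require both users to snap within a rolling 24-hour window); the point is how little machinery it takes to turn social comparison into a daily habit loop.

from datetime import date, timedelta

def update_streak(last_snap_day: date, streak: int, today: date) -> int:
    """Advance a daily-streak counter (hypothetical Snapstreak-style logic).

    Snapping on consecutive calendar days extends the streak;
    missing a day resets it to 1. The visible, ever-growing number
    is what creates the pressure to come back every single day.
    """
    if last_snap_day == today:
        return streak                      # already counted today
    if last_snap_day == today - timedelta(days=1):
        return streak + 1                  # consecutive day: streak grows
    return 1                               # a missed day: back to square one

# e.g., a 5-day streak, last snap yesterday, another snap today:
# update_streak(date(2017, 8, 1), 5, date(2017, 8, 2)) returns 6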
In your essay, you argue that the way these technologies indulge our impulsive selves breaks three kinds of attention necessary for democracy. What are they?
This is more a heuristic that I use. It’s not a scientific argument. First, the “spotlight” of attention is how cognitive scientists tend to talk about perceptual attention. The things that are task-salient in my environment. How I select and interact with those, basically. Second, the “starlight.” If the spotlight is about doing things, the starlight is who I want to be, not just what I want to do. It’s like those goals that are valuable for their own sake, not because they’re instrumental toward some other goal. Also, over time, how we keep moving toward those, and how we keep seeing the connections between the tasks we’re doing right now, and those higher-level or longer-term goals. Third, the “daylight.” In the philosopher Harry Frankfurt’s terms, it’s wanting what you want to want—the domain of metacognition. Basically, if the “spotlight” and the “starlight” are about pursuing some goal, some end, some value, the “daylight” is about the capacities that enable us to discern and define what those goals, those ends, are to begin with.
It’s easy to see how persuasive tech disrupts our “spotlight” of attention. But what about the other two?
I think one way, in general, is by the way it can create habits for us. If you get distracted by the same thing in the same way every day, it adds up to a distracted week, distracted months. Either by just force of repetition, or whatever, it has the effect of making us forget about those stars that we want to live by, or not reflect on them as much. We start treating lower-level goals as if they had inherent value—which is essentially what pettiness is. It’s the idea that, if my team wins, it doesn’t matter if the entire political climate becomes more toxic.
How are these technologies affecting our politics?
What we’re seeing, at least across Western liberal democracies, is a pretty consistent move toward populist tendencies. It seems to me that the one thing all these societies have in common is their major form of media. To me, that suggests the common factor is what’s doing the amplifying. These aren’t new dynamics, but they’re supercharged in ways we’ve never been able to supercharge them in the past. It’s hard for me to imagine that this sort of thing would be happening in the same way in the era of the telegraph or the newspaper, or even television.
But wasn’t radio criticized by print media for supercharging our anti-democratic tendencies back in the 1930s?
Radio was a huge factor in Hitler’s rise to power. It’s why he put one in every house. I think that’s an interesting comparison. Marshall McLuhan, the Canadian media theorist, talked about this: He said that when a new technology comes out, and we still don’t know how to wrap our heads around it, there’s an initial period where our sensory ratios, our perception, are re-acclimating, a kind of hypnosis moment. He makes the point that the hypnotic effect of Hitler’s style of oratory was amplified by the hypnotic effect of this new medium, which was a type of information overload in people’s lives.
Choice is such a messy thing to dive deep into, because then you realize that nobody knows what it means to choose.
Don’t we get used to new media technologies with time?
If you think about how long we had to come to terms with the dynamics of radio, the telephone, etc., it was almost one to two human generations. As electric media have advanced, the time it takes to reach 150 million users has kept shrinking. I think radio took 60-something years, maybe 70, and television maybe 30 or 40. Today, for a technology like an app to reach 150 million users, it can be a matter of days. I think what happens is that we never actually get to that place of stability and mastery of technology. We’re always on this learning curve of incompetence. We can use it well enough, but not so well that we can master it before the next thing comes along.
Isn’t it our own fault that we’re too easily distracted? Maybe we just need more self-discipline.
That kind of rhetoric implicitly grants the idea that it’s okay for technology to be adversarial against us. The whole point of technology is to help us do what we want to do better. Why else would we have it? I think part of the open door that these industries have walked through is the fact that, when we adopt a new technology, we don’t typically ask “What is it for?” If we were to ask what a smartphone is for, it would almost be a ridiculous question. It’s for whatever it can do now!
Does personal responsibility matter at all?
I don’t think personal responsibility is unimportant. I think it’s untenable as a solution to this problem. Even people who write about these issues day to day—even I, and I worked at Google for 10 years—need to remember the sheer volume and scale of resources going into getting us to look at one thing over another, click on one thing over another. This industry employs some of the smartest people, thousands of Ph.D. designers, statisticians, engineers. They go to work every day to get us to do this one thing, to undermine our willpower. It’s not realistic to say you need to have more willpower. That’s the very thing being undermined!
Do you think information technology is on our side?
To the extent that the goal of the design is just to capture and keep our attention, it’s predominantly not on our side. If it’s not even equipped to know what our goals are a lot of the time, I don’t see how it can be. I think that kind of information exchange would be necessary for it to move in the right direction. One standard I use is GPS. If a GPS distracted us in physical space in the ways that other technologies distract us in informational space, no one would keep using that GPS.
How do we get persuasive tech to stop indulging our impulsive selves?
I think there are a lot of things that need to happen at the level of business models, regulation, corporate organizational design and operation, and prioritization. I think one of the most important things we can do in the near term is come up with good ways of talking about the nature of the problem, because it’s harder to advocate for change without the right language. Sometimes it’s talked about in terms of distraction or attention, but we tend to associate those with more immediate types of attention, not longer-term life effects.
How long will that take?
I don’t think it will happen overnight, because a lot of it involves changing the way we talk about human nature and interaction. So much of the way we talk about it, especially in the U.S., is rooted in discussions of freedom of choice. My intuition, and this is just intuition, is the more we can get away from talking about it in terms of choice and start talking about it in terms of chance—which outcome was preferable and which actually happened—the better. Choice is such a messy thing to dive deep into, because then you realize that nobody knows what it means to choose.
What’s one concrete thing companies could do now to stop subverting our attention?
I would just like to know the ultimate design goal of a site or system that’s shaping my behavior or thinking. What are they really designing my experience for? Companies will say that their goal is to make the world open and connected, or whatever. These are lofty marketing claims. But if you were to actually look at the dashboards they’re designing, the high-level metrics they’re designing for, you probably wouldn’t see those things. You’d see other things, like frequency of use, time on site, this type of thing. If there were some way for an app to say to the user, “Here’s generally what this app wants from you, from an attentional point of view,” that would be huge. It would probably be the primary way I would decide which apps I download and use.
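As a thought experiment, here is a minimal sketch of what such an attentional disclosure might look like if an app restated its internal engagement targets in plain language to the user. Everything in it is hypothetical and invented for illustration—the metric names, the numbers, the wording; no real app exposes an interface like this today.

from dataclasses import dataclass

@dataclass
class AttentionalDisclosure:
    # Hypothetical engagement targets a product team might optimize for,
    # restated as a plain-language demand on the user's attention.
    sessions_per_day: float       # how often the design aims to bring you back
    minutes_per_day: float        # how long it aims to hold you each day
    notifications_per_day: float  # how often it interrupts you

    def summary(self) -> str:
        return (
            f"This app is designed for roughly {self.sessions_per_day:.0f} "
            f"visits and {self.minutes_per_day:.0f} minutes of your day, "
            f"interrupting you about {self.notifications_per_day:.0f} times."
        )

# A user could compare apps on these terms before installing one.
print(AttentionalDisclosure(12, 45, 20).summary())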
Are you optimistic?
In terms of individuals working at these companies, I’m still heartened and optimistic, because everybody who’s a designer or engineer is also a user at the end of the day. Nobody goes into design because they want to make life worse. The challenges, generally, are structural, whether it’s the existing business models of companies or the way certain forms of corporate legal structure don’t give people the space to balance some of these more petty, immediate goals against more noble kinds of things. It’s hard to say whether the longer-term evolution of tech gives grounds for optimism. I’m hoping there will be a point where, if we don’t restrain things or turn the battleship around, we at least realize the unsustainability of it, from a business point of view but also in our own lives.
(Source: Nautilus)