Friday 1 May 2020

What if Covid-19 isn't our biggest threat?

Experts who assess global peril saw a pandemic coming, but they have worse worries for humanity

When eventually the coronavirus crisis begins to recede and we return to an approximation of normality – no matter how socially distanced or how much handwashing it involves – we can expect some kind of international initiative to prevent, or at least limit, the spread of future lethal viruses. As a species we are pretty good at learning from recent experience. It’s what’s known as the availability heuristic – the tendency to estimate the likelihood of an event based on our ability to recall examples.

But as the moral philosopher Toby Ord argues in his new book, The Precipice, we are much less adept at anticipating potential catastrophes that have no precedent in living memory. “Even when experts estimate a significant probability for an unprecedented event,” he writes, “we have great difficulty believing it until we see it.”
A row of Pepper robots developed by SoftBank Group Corp. The humanoid robots can be programmed to do many things – often replacing or augmenting human jobs. Photograph: Bloomberg via Getty Images

This was precisely the problem with the coronavirus. Many informed scientists predicted that a global pandemic was almost certain to break out at some point in the near future. Aside from the warnings of legions of virologists and epidemiologists, the Microsoft founder, Bill Gates, gave a widely disseminated Ted Talk in 2015 in which he detailed the threat of a killer virus. For a while now, a pandemic has been one of the two most prominent catastrophic threats in the UK government’s risk register (the other is a massive cyberattack).

But if something hasn’t yet happened, there is a deep-seated temptation to act as if it’s not going to happen. If that is true of an event like this pandemic, which will kill only a tiny fraction of the world’s population, it’s even more the case for what are known as existential threats. There are two definitions of existential threat, though they often amount to the same thing. One is something that will bring a total end to humanity, removing us as a species from this planet or any other. The other, only slightly less troubling, is something that leads to an irrevocable collapse of civilisation, reducing surviving humanity to a prehistoric state of existence.

An Australian based at Oxford’s Future of Humanity Institute, Ord is one of a tiny number of academics working in the field of existential risk assessment. It’s a discipline that takes in everything from stellar explosions right down to rogue microbes, from supervolcanoes to artificial superintelligence.

Ord works through each potential threat and examines the likelihood of it occurring in the next century. For example, he estimates the probability of a supernova causing a catastrophe on Earth to be less than one in 50 million. Even adding together all the naturally occurring risks (viruses among them), Ord contends that they do not amount to the existential risk presented individually by nuclear war or global heating.
A health worker waits to accept an Ebola patient in the Democratic Republic of the Congo, 2018. The virus had a 50% case fatality rate. Photograph: John Wessels/AFP via Getty Images

For the most part, the general public, governments and other academics are content to neglect these risks. Few of us, after all, enjoy contemplating the apocalypse.

In any case, governments, as former Conservative minister Oliver Letwin reminds us in his recent book Apocalypse How?, are usually preoccupied with more pressing issues than humanity’s demise. Everyday problems like trade agreements demand urgent attention, whereas hypothetical future ones such as being taken over by machines can always be left for tomorrow.

But given that we’re living through a global pandemic, now is perhaps an opportune moment to think about what can be done to avoid a future cataclysm. According to Ord, the period we inhabit is a critical moment in the history of humanity. Not only are there the potentially disastrous effects of global heating but in the nuclear age we also possess the power to destroy ourselves in a flash – or at least to leave the question of civilisation’s survival in the balance.

Thus Ord believes the next century will be a dangerously precarious one. If we make the right decisions, he foresees a future of unimaginable flourishing. If we make the wrong ones, he maintains that we could well go the way of the dodo and the dinosaurs, exiting the planet for good.

When I speak to Ord over Skype I remind him of the unsettling odds he gives humanity in this life-and-death struggle between our power and our wisdom. “Given everything I know,” he writes, “I put the existential risk this century at around one in six.”

In other words, the 21st century is effectively one giant game of Russian roulette. Many people will recoil from such a grim prediction, while for others it will fuel the anxiety that is already rife in society.

He agrees but says that he has tried to present his modelling in as calm and rational a fashion as possible, making sure to take into account all the evidence that suggests the risks are not large. One in six is his best estimate, assuming we make a “decent stab” at dealing with the threat of our own destruction.
Tewkesbury Abbey in Gloucestershire surrounded by floodwater in February this year. The increased frequency of floods is thought to be connected to global heating. Photograph: Christopher Furlong/Getty Images

If we really put our minds to it and mount a response equal to the threat, the odds of our extinction, he says, come down to something more like one in 100. But, equally, if we carry on ignoring the threat represented by advances in areas like biotech and artificial intelligence, then the risk, he says, “would be more like one in three”.

Martin Rees, the cosmologist and former president of the Royal Society, co-founded the Centre for the Study of Existential Risk in Cambridge. He has long been involved in raising awareness of looming disasters and he echoes Ord’s concern.

“I’m worried,” he says, “simply because our world is so interconnected, that the magnitude of the worst potential catastrophes has grown unprecedentedly large, and too many have been in denial about them. We ignore the wise maxim ‘the unfamiliar is not the same as the improbable’.”

Letwin warns of an overdependence on the internet and satellite systems, allied with limited stocks of goods and long supply chains. These are ideal conditions for sabotage and global breakdown. As he writes, ominously: “The time has come to recognise that more and more parts of our lives – of society itself – depend on fewer and fewer, more integrated networks.”

Complex global networks certainly increase our vulnerability to viral pandemics and cyberattacks, but neither of those outcomes qualifies as a serious existential risk in Ord’s book. The pandemics he is concerned about are not of the kind that break out in the wet markets of Wuhan, but rather those engineered in biological laboratories.

Although Ord draws a distinction between natural and anthropogenic (human-made) risks, he argues that this line is rather blurry when it comes to pathogens, because their proliferation has been significantly increased by human activity such as farming, transport, complex trade links and our congregation in dense cities.

Yet like so many aspects of existential threat, the idea of an engineered pathogen seems too sci-fi, too far-fetched, to grab our attention for long. The international body charged with policing bioweapons is the Biological Weapons Convention. Its annual budget is just €1.4m (£1.2m). As Ord points out with due derision, that sum is less than the turnover of the average McDonald’s restaurant.

If that’s food for thought, Ord has another gastronomic comparison that’s even harder to swallow. While he’s not sure exactly how much the world invests in measuring existential risk, he’s confident, he writes, that we spend “more on ice-cream every year than on ensuring that the technologies we develop don’t destroy us”.

Ord insists that he is not a pessimist. There are constructive measures to be taken. Humanity, he says, is in its adolescence, and like a teenager that has the physical strength of an adult but lacks foresight and patience, we are a danger to ourselves until we mature. He recommends that, in the meantime, we slow the pace of technological development so as to allow our understanding of its implications to catch up and to build a more advanced moral appreciation of our plight.

He is, after all, a moral philosopher. This is why he argues that, if humanity is to survive, we need a much larger frame of reference for what is right and good. At the moment we hugely undervalue the future, and have little moral grasp of how our actions may affect the thousands of generations that could – or alternatively, might not – come after us.

Our descendants, he says, are in the position of colonised peoples: they’re politically disenfranchised, with no say in the decisions being made that will directly affect them or stop them from existing.


“Just because they can’t vote,” he says, “doesn’t mean they can’t be represented.”

Of course, there are also concrete issues to address such as global heating and environmental degradation. Ord acknowledges that climate change may lead to “a global calamity of unprecedented scale”, but he’s not convinced that it represents an actual existential risk to humanity (or civilisation). That’s not to say that it isn’t an urgent concern: only that our survival isn’t yet on the line.

Perhaps the biggest immediate threat is the continued abundance of nuclear weapons. Since the end of the cold war, the arms race has been reversed and the number of active warheads cut from more than 70,000 in the 1980s to about 3,750 today. Start (the Strategic Arms Reduction Treaty), which was instrumental in bringing about the decrease, is due to lapse next year. “From what I hear at the moment,” says Ord, “the Russians and Americans have no plan to renew it, which is insane.”

Sooner or later all questions of existential risk come down to global understanding and agreement. That’s problematic, because while our economic systems are international, our political systems remain almost entirely national or federal.

Problems that affect everyone are consequently owned by no one in particular. If humanity is to step back from the precipice, it will have to learn how to recognise its common bonds as greater than its differences.

There are many predictions currently being made about how the world might be changed by the coronavirus. The philosopher John Gray recently declared that it spelt the end of hyperglobalisation and the reassertion of the importance of the nation state.

“Contrary to the progressive mantra,” Gray wrote in an essay, “global problems do not always have global solutions… the belief that this crisis can be solved by an unprecedented outbreak of international cooperation is magical thinking in its purest form.”

But nor can individual countries afford to turn their backs on the world, at least not for long. The pandemic may not engender deeper international cooperation and a keener appreciation of the fact that we are, so to speak, all in it together. Ultimately, though, we will have to arrive at that kind of unity if we’re to avoid far greater afflictions in the future.

(Source: The Guardian)
