Before 2020, very few of us spent much time worrying about pandemics. Now, we have all experienced — or are still experiencing — what it's like to have a pandemic affect our daily lives. As of November 2021, COVID-19 has killed over five million people and destroyed tens of trillions of dollars of economic value.
Despite how terrible COVID-19 has been for human health and the world economy, it's possible that a future pandemic could be even more devastating. This is why we think preparing for pandemics is among the best ways we can improve the long-term future.
Biosecurity, broadly defined, is a set of methods designed to protect populations against harmful biological or biochemical substances. This could refer to a wide range of biological risks, but this page is specifically focused on biosecurity that reduces global catastrophic biological risks (GCBRs). A GCBR is a biological event of unprecedented scale that poses a threat to humanity's survival or its long-term potential, such as a pandemic that kills a sizable fraction of the world's population.
Ensuring we are prepared for future pandemics could be a matter of life and death for humanity, making biosecurity a cause with an extremely large scale. Worryingly, much of this cause is neglected and there is significantly more we should be doing — especially as there are potentially tractable solutions we could pursue to make us safer. We therefore think biosecurity is a high-priority cause in which your support could make a significant difference.
There are a number of different types of GCBRs. At the broadest level, we distinguish between two types of pandemics: those caused by naturally occurring pathogens and those caused by human-engineered pathogens. Engineered pathogens can be further distinguished by whether they are released accidentally or intentionally.
The likelihood and potential harm of these different kinds of pathogens vary, so below we analyse the pandemics each may cause.
The deadliest event in recorded history, relative to the world's population at the time, was likely a natural pandemic: the bubonic plague ravaged Europe and parts of Asia and Africa between 1346 and 1353, killing an estimated 38 million people — roughly 10% of humans alive at the time.
Natural pandemics pose a small but significant risk of killing a sizable fraction of the world's population, but a much smaller risk — roughly a hundred times smaller — of killing everyone alive.
In an informal survey of participants at a 2008 conference about global catastrophic risks, the median respondent estimated that by 2100, a natural pandemic had a 60% chance of killing at least one million people, a 5% chance of killing at least one billion people, and a 0.05% chance of causing human extinction.
This assessment is broadly in line with what independent lines of evidence suggest: that extinction from a natural pandemic is possible, but very unlikely for a couple of reasons.
First, infectious diseases account for only a small fraction of extinctions in non-human animal species: in mammals, there is only one confirmed case of a species going extinct due to a natural pathogen.
Second, if the risk of human extinction from infectious disease is roughly constant over time, the risk of extinction per century must be very low, as humans have been exposed to such risk for 300,000 years, and have not yet gone extinct. A probability of extinction higher than 0.05% per century implies a less than one-in-four chance that our species would have made it as far as we have.
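The survival-probability arithmetic behind this argument can be checked in a few lines. The sketch below assumes a constant per-century extinction risk, as the argument does; it is an illustrative calculation, not a claim from any underlying study:

```python
# If natural-pandemic extinction risk were a constant 0.05% per century,
# what is the chance our species survives ~300,000 years (3,000 centuries)?
p_per_century = 0.0005            # 0.05% extinction risk per century
centuries = 300_000 / 100         # ~300,000 years of human history

survival_prob = (1 - p_per_century) ** centuries
print(f"{survival_prob:.3f}")     # -> 0.223, i.e. less than one-in-four
```

Since even a 0.05% per-century risk implies only about a 22% chance of surviving this long, any substantially higher constant risk would make our track record of survival quite improbable.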
But the above argument doesn't take into account that some aspects of modern living conditions are quite different from those prevailing over most of human history. Some parts of modern living put us at greater risk, but others decrease it:
- Modern living conditions that increase risk include: Greater human population density, much larger domestic animal reservoirs, anthropogenic climate change (which increases the likelihood of zoonotic disease), and global interconnectedness and interdependence (which may lead to greater civilisational fragility).
- Modern living conditions that decrease risk include: Vastly improved hygiene and sanitation; the potential for effective diagnosis, treatment, and vaccination; an understanding of the mechanisms of disease development and transmission; and greater population dispersion, including exceptionally isolated groups like Antarctic scientific researchers and nuclear submarine crews.
Taking all this into consideration, it's unclear whether our cumulative risk has increased or decreased. However, even if the risk has in fact increased, it likely hasn't done so to a degree that would put the above assessment in doubt. Therefore, while we still face considerable uncertainties surrounding the estimation of GCBRs from natural pathogens, our conclusion that risk is relatively low is reasonably robust.
Several researchers within the effective altruism community believe engineered pathogens pose a more serious biological risk than natural pathogens. A sufficiently capable group or government could alter a pathogen's disease-causing properties — such as transmissibility, lethality, incubation time, and environmental survival — to increase its potential damage. By contrast, a naturally evolving organism is constrained by selection pressures to strike a balance between its own fitness and that of its host.
These concerns are exacerbated by trends suggesting that opportunities to cause widespread harm with engineered pathogens are becoming increasingly available, such as:
- Increased availability of genetic sequences of dangerous organisms, including viral strains responsible for past pandemics.
- Progress in gene-editing technology.
- Reduced costs of DNA and RNA synthesis.
Such harm may result either from an accidental laboratory release (as a consequence of research on potential pandemic pathogens) or from an intentional release by a hostile group or agent.
In a presentation at a 2011 conference in Malta, Dutch virologist Ron Fouchier described how his team had successfully created a novel, contagious strain of H5N1, the virus subtype responsible for bird flu. Although H5N1 kills about half of the people it infects, it's fortunately not transmissible between humans. Fouchier told his audience, however, that his team had "mutated the hell out of H5N1" and then passed the mutated strain through a series of ferrets (animals often used to model influenza transmission in humans). After 10 ferrets, the virus had acquired the ability to spread from one animal to another, just like seasonal flu.
Such experiments are seriously worrying, primarily because of the possibility of an accidental release. Fouchier's group worked in a biosafety level 3 (BSL-3) laboratory, the level required for work involving microbes with the potential to cause serious or lethal disease through inhalation. But the track record involving past laboratory escapes indicates a probability of accidental release much higher than would be acceptable on any reasonable cost-benefit analysis — Marc Lipsitch and Thomas Inglesby estimate a chance of accidental release in a BSL-3 facility of 0.2% per laboratory-year. Such a "low" probability can translate into very high expected costs if it risks a pandemic comparable to COVID-19, which has a case fatality rate at least ten times lower than H5N1. Indeed, the authors estimate that each laboratory-year of experimentation on virulent, transmissible influenza virus has an expected death toll of 2,000 to 1.4 million people.
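The expected-cost arithmetic behind this estimate can be sketched briefly. The conditional death tolls below are back-calculated from the figures quoted above (expected deaths divided by release probability), purely for illustration:

```python
# Back-of-the-envelope expected deaths per laboratory-year, using the
# 0.2% release probability quoted above. The conditional death tolls
# (if a release sparks a pandemic) are inferred from the authors' stated
# range, purely for illustration.
p_release = 0.002  # 0.2% chance of accidental release per lab-year

for pandemic_deaths in (1_000_000, 700_000_000):
    expected = p_release * pandemic_deaths
    print(f"{pandemic_deaths:,} deaths if released "
          f"-> {expected:,.0f} expected deaths per lab-year")
```

A 0.2% chance per laboratory-year of a pandemic killing between one million and 700 million people works out to the 2,000 to 1.4 million expected deaths per laboratory-year cited above.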
Dangerous pathogens can be and have been studied and modified in the context of military research. The Soviet Union's Biopreparat programme was the largest, most effective, and most sophisticated offensive biowarfare programme in history. It employed tens of thousands of scientists and technicians in a network of clandestine research institutes and production facilities, and stockpiled and weaponised over a dozen viral and bacterial agents — including smallpox and the causative agents of anthrax, plague, and tularaemia. The programme was associated with a significant number of accidental pathogen escapes, including an aerosolised anthrax release that killed over 60 people in a nearby town.
If a bioweapons programme of comparable scale existed in the future, Lipsitch and Inglesby's estimates would predict an accidental release of weaponised agents within decades. The Biopreparat escapes failed to result in a global pandemic because the Soviet programme focused almost exclusively on non-pandemic agents. This focus was driven by the limitations of 1980s technology rather than self-restraint. With contemporary biotechnology, pursuit of bioweapons by a resourceful state actor should be regarded as one of the most concerning sources of GCBR.
Various radicalised groups and terrorist organisations have expressed their intent to use bioweapons or dangerous pathogens for destructive purposes. Some have acted on this intention, such as the doomsday cult Aum Shinrikyo, which attempted, unsuccessfully, to deploy anthrax and botulinum toxin before killing 13 people and injuring thousands in a 1995 sarin gas attack (a chemical, rather than biological, agent) on the Tokyo Metro.
While the risk of accidental release can be estimated from frequency data on past laboratory escapes, estimating the risk of intentional release is far more speculative. As Lipsitch and Inglesby note:
> Such a calculation would require reliable, quantitative data on a variety of probability assessments: the probability that a person, group, or country intends to release potential pandemic pathogens (PPP); that a person, group, or country has the means to obtain the pathogen, or has the capacity to generate one from published data; and, that a person, group, or country has the means of distributing a PPP in a way that would start an epidemic. Those kinds of data are not presently available, nor will they be in the foreseeable future.
The survey mentioned earlier found that the median respondent believed an engineered pandemic had a 30% chance of killing at least one million people, a 10% chance of killing a billion people, and a 2% chance of causing human extinction by 2100.
A sense of the risk posed by the intentional release of engineered pathogens can also be found on Metaculus, a website which aggregates probability estimates from its users on various questions (and has a reasonably good track record so far). As of November 2021, Metaculus makes the following forecasts:
- There is a 17% chance that a global catastrophe will kill at least 10% of the world's population within a period of five years by 2100.
- There is a 21% chance that, if such a catastrophe occurs, it will be biological in nature.
- There is a 5% chance that, if such a biological catastrophe occurs, it will kill at least 95% of the world's population.
Considered together, these forecasts imply an estimated unconditional probability of a near-extinction biological event by 2100 of around 0.2% (0.17 × 0.21 × 0.05 ≈ 0.0018).
We can also estimate the likelihood of intentional deployment of engineered pathogens using statistics about fatalities from terrorism and warfare. In a seminal 1935 paper, Lewis Fry Richardson observed that deadly conflict at all scales — from individual homicides to world wars — can be summarised by a simple model, with timing described by a Poisson process and severity described by a power law. Because fatalities fit this regular pattern, Richardson's model can be used to estimate the probability of events of unprecedented severity.
One study has done just that, and estimates a risk of human extinction from biological terrorism of 0.014% per century, and a corresponding risk from biological warfare of 0.005% per century. However, this method probably underestimates risk for a number of reasons, such as the conservative assumptions made by the authors and the increasing risk expected from the trends noted earlier.
Superficially, biosecurity as typically defined does not appear to be a particularly neglected cause area. Even before the COVID-19 pandemic, the United States federal government allocated around $3 billion USD annually to biosecurity. As Gregory Lewis notes, this figure stands in remarkable contrast to the approximately $10 million spent on AI safety back in 2017.
However, this picture is somewhat misleading for several reasons:
- There is more interest in funding AI safety today, though the area still receives extremely little compared to its scale.
- Biosecurity spending does not equal spending on efforts to reduce GCBRs. As noted above, "biosecurity" may describe interventions aimed at preventing events with little or no potential to severely harm or destroy humanity. Within the effective altruism community, biosecurity and AI risk receive comparable levels of funding.
- Certain sub-areas within biosecurity, however, are highly neglected. For example, the Biological Weapons Convention (BWC) of 1972, which is the main disarmament treaty of its kind and is considered to have established a near-universal norm against the use of biological weapons, has been run since 2006 by an Implementation Support Unit with only four employees and a budget smaller than that of the average McDonald's restaurant.
- Until recently, there was a taboo surrounding discussions of biological events involving millions of fatalities. As a result, the field of biosecurity was largely not directly focused on GCBRs.
The bottom line is that biosecurity, and especially work on GCBRs, receives far less funding than it would if humanity were adequately prioritising its own safety. The cause is therefore neglected.
The tractability of biosecurity seems comparable to our other high-priority causes safeguarding the long-term future, such as AI safety and nuclear security. 80,000 Hours rates biosecurity as "moderately tractable," and experts characterise the reduction of GCBRs as "potentially tractable."
We face two important obstacles to significant progress in biosecurity.
The first is that biosecurity is often dual-use: the same work can be used to do good as well as to cause harm. Research or development with the potential to cure disease or extend life can frequently also be put to the service of malicious or destructive purposes. This is less true of some other risks: nuclear programmes, for instance, require facilities for uranium enrichment that have few other legitimate uses.
The second is that the technology needed to engineer dangerous pathogens is becoming cheaper and easier to access:
- Progress in genetic synthesis makes it increasingly feasible to create organisms just from information about the structure of their DNA or RNA.
- Development in this area does not require complex and expensive equipment (such as nuclear facilities), so a malicious actor only needs to learn about a technology's destructive potential to increase risk substantially. For example, Al-Qaeda decided to initiate a bioterror programme only because, as Ayman al-Zawahiri recounts, "the enemy drew our attention to [bioweapons] by repeatedly expressing concern that they can be produced simply."
There are several promising interventions to reduce GCBRs. Open Philanthropy wrote a comprehensive report on GCBRs, largely focused on viral pathogens. (Although viruses are not the only organisms capable of posing a GCBR, they are especially worrying because of their transmissibility and virulence, and because we have limited treatments against them.) The report identifies several goals which, if met, would make us safer:
- Wider availability of broad-spectrum antiviral compounds with low potential for the development of resistance.
- Reducing the time from identifying a new pathogen to being able to confer immunity against it (such as through a vaccine) to fewer than 100 days.
- Improving our ability to reliably contain potentially dangerous pathogens.
- Widespread metagenomic sequencing that allows us to reliably detect outbreaks.
Of these, metagenomic sequencing is an especially promising long-term approach to GCBR management. Carl Shulman has argued that implementing continuous and ubiquitous surveillance for new pathogens sufficient to virtually eliminate GCBRs should be affordable to most governments within a few decades, based on trends in the falling costs of DNA sequencing noted earlier.
Other broader interventions to reduce GCBRs have been proposed, including:
- Improving scenario planning for GCBRs.
- Fostering awareness and a culture of safety among biotechnology researchers.
- Improving methods of risk assessment and requiring potentially dangerous research to undergo such assessment prior to approval.
- Developing and strengthening international biosafety norms.
It is unclear whether supporting biosecurity is currently the most promising option for safeguarding humanity's future. You may be one of many who think that AI poses a higher risk of existential catastrophe. As noted earlier, AI safety receives far less funding than biosecurity overall, though it's unclear whether there's much difference between the two if we compare only funding explicitly focused on reducing existential risks.
The case for prioritising biosecurity involves a great deal of speculation and subjective judgement on a number of key questions, including:
- Do current living conditions increase or decrease GCBRs from natural pathogens compared to the levels prevalent during most of our history?
- How likely is it that a group, agent, or state decides to use bioweapons?
- How impactful is work in traditional biosecurity relative to work focused specifically on GCBRs?
- How serious does a biological event need to be before it threatens global catastrophe?
If you prefer a higher level of certainty that your support is having a positive impact, you may want to support other causes with better-understood solutions.
Because biosecurity is fertile ground for information hazards, discussion in this field will inevitably be less transparent than in other areas, and charity evaluators may exhibit less reasoning transparency than some donors would prefer.
We recommend donating to Founders Pledge's recommended charities in this area.
If you are a US citizen, you may consider supporting Guarding Against Pandemics, which is lobbying Congress to provide further funding for pandemic preparedness. Unlike the organisations listed above, this donation opportunity is available exclusively to small donors in the US, with a $5,000 USD limit per person. If successful, the campaign would lead to a $30 billion USD pandemic preparedness plan; as a result, donations may be especially impactful because they are highly leveraged (each dollar donated may influence how many times that amount is ultimately spent). Giving What We Can has not formally evaluated this giving opportunity, which is why it is not on our list of recommended charities. We recommend you read more about the organisation to inform your donation.
To learn more about biosecurity, we recommend the following resources.
- Reducing global catastrophic biological risks (80,000 Hours)
- Biosecurity and Pandemic Preparedness (Open Philanthropy)
- Biosecurity (Open Philanthropy)
- Research and Development to Decrease Biosecurity Risks from Viral Pathogens (Open Philanthropy)
- Safeguarding the Future Executive Summary and Giving Recommendations (Founders Pledge)
- Nuclear Threat Initiative's biosecurity programs (Founders Pledge)
- Center for Health Security at Johns Hopkins University (Founders Pledge)
- Pandemic Pathogens (Effective Altruism)
- Technologies to Address Global Catastrophic Biological Risks (Johns Hopkins Center for Health Security)
- Benefits & Risks of Biotechnology (Future of Life Institute)
Please help us improve our work — let us know what you thought of this page and suggest improvements using our content feedback form.
The deadliest event in absolute terms was probably World War II (66 million fatalities), although the second-deadliest event was also a natural pandemic: the 1918 influenza pandemic (60 million fatalities). All fatality estimates are taken from Luke Muehlhauser's investigation. ↩︎
Survey responses discussed here and later in the report are useful summaries of expert opinion, but should not be interpreted too literally, as they represent nothing more than the respondents' best guesses. The precise probability estimates given may give the impression of high certainty, when in fact there is considerable disagreement even among experts, as can be seen in the distribution of individual responses (plotted as dots over the shaded bars.) ↩︎
This method of estimation may underestimate risk because of an observation selection effect: we could see ourselves surviving thus far not because the risk is low, but because this is the only observation possible — if we become extinct, there won’t be anyone around to notice it. However, the magnitude of this bias can be estimated by considering that, as risk grows, so does the fraction of observers who should expect to find themselves in a species with a comparatively short evolutionary history. A group of researchers recently examined this issue and concluded that the bias is likely to be very small. ↩︎
Some factors appear to change the risk, but it is not clear in what direction. For example, increased travel frequency is sometimes cited as increasing the risk of a serious pandemic, but may in fact decrease it. ↩︎
The Metaculus algorithm relies on a sophisticated model to calibrate and weight each user, and generates somewhat better forecasts than the median user. It is important to emphasise, however, that these forecasts ultimately depend on the quality of the individual predictions that serve as inputs to it. Although it is reassuring that, historically, Metaculus has performed reasonably well, and even outperformed infectious disease modelers, it should be stressed that these figures are subject to great uncertainty. Moreover, forecasters are incentivised to predict accurately so they can build their reputations as accurate forecasters. This means predictions of global catastrophic and existential risks may be biased towards saying such events are extremely unlikely. The reason, put bluntly, is that if you incorrectly predict these catastrophes won't occur, nobody will be alive to see that you were wrong. ↩︎
One recent analysis focused on 95 inter-state wars that occurred between 1823 and 2003 and found confirmation for both aspects of Richardson's model. Other studies, however, have been less supportive. ↩︎
Some of these reasons are documented in Gregory Lewis' report. Note that the (plausible) assumption of increasing risk over time contradicts the first component of Richardson's model, and so indirectly makes the model less trustworthy as a method for estimating GCBR. ↩︎
This fact is documented in Toby Ord’s The Precipice and refers to [this budget]. Sadly, it is not unprecedented for an organisation engaged in efforts to protect our species from threats to its survival to struggle with funding. For example, despite playing a very important role in enabling a number of nuclear weapons limitation treaties in the 1960s and 1970s, and in shaping Mikhail Gorbachev's views about nuclear strategy in the 1980s, the Pugwash Conferences on Science and World Affairs was throughout that period apparently almost constantly in danger of insolvency. For more detail, see Open Philanthropy’s writeup on the history of philanthropy. ↩︎
From page 323 of “Global Catastrophic Biological Risks: Toward a Working Definition.” ↩︎