Improving biosecurity and pandemic preparedness

Before 2020, very few of us spent much time worrying about pandemics. Now, we have all experienced — or are still experiencing — what it's like to have a pandemic affect our daily lives. As of November 2021, COVID-19 has killed over 5 million people and destroyed tens of trillions of dollars of economic value.

Despite how terrible COVID-19 has been for human health and the world economy, it's possible that a future pandemic could be even more devastating. This is why we think preparing for pandemics is among the best ways we can improve the long-term future.

Biosecurity, broadly defined, is a set of methods designed to protect populations against harmful biological or biochemical substances. This could refer to a wide range of biological risks, but this page is specifically focused on biosecurity that reduces global catastrophic biological risks (GCBRs). A GCBR is a biological event of unprecedented scale that poses a threat to humanity's survival or its long-term potential, such as a pandemic that kills a sizable fraction of the world's population.

Why is biosecurity important?

Ensuring we are prepared for future pandemics could be a matter of life and death for humanity, making biosecurity a cause with an extremely large scale. Worryingly, much of this cause is neglected and there is significantly more we should be doing — especially as there are potentially tractable solutions we could pursue to make us safer. We therefore think biosecurity is a high-priority cause in which your support could make a significant difference.

What is the scale of biosecurity?

There are a number of different types of GCBRs. At the broadest level, we distinguish between two types of pandemics: those caused by naturally occurring pathogens and those caused by human-engineered pathogens. Engineered-pathogen pandemics can be further divided according to whether the pathogen is released accidentally or intentionally.

These kinds of pathogens vary in their likelihood and potential for harm, so below we analyse the pandemics each may cause.

Natural pathogens

The deadliest event in recorded history was likely a natural pandemic: the bubonic plague ravaged Europe and parts of Asia and Africa between 1346 and 1353, killing an estimated 38 million people — roughly 10% of humans alive at the time.1

Natural pandemics pose a small but significant risk of killing a sizable fraction of the world's population, but a far smaller risk — roughly 100 times lower — of killing everyone alive.

In an informal survey of participants at a 2008 conference about global catastrophic risks, the median respondent estimated that by 2100, a natural pandemic had a 60% chance of killing at least 1 million people, a 5% chance of killing at least 1 billion people, and a 0.05% chance of causing human extinction.2

This assessment is broadly in line with what independent lines of evidence suggest: that extinction from a natural pandemic is possible, but very unlikely for a couple of reasons.

  • First, infectious diseases account for only a small fraction of extinctions in nonhuman animal species.3 In mammals, there is only one confirmed case of a species going extinct due to a natural pathogen.
  • Second, if the risk of human extinction from infectious disease is roughly constant over time, the risk of extinction per century must be very low, as humans have been exposed to such risk for 300,000 years and have not yet gone extinct. A probability of extinction higher than 0.05% per century would imply a less than one-in-four chance that our species would have survived as long as it has (see the worked calculation below).4
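
To make that second point concrete, here is a minimal sketch of the survival calculation, using only figures from the text: 300,000 years is 3,000 centuries, and the risk considered is 0.05% per century.

```python
# A minimal check of the survival argument above.
# If extinction risk per century is p, and the risk is constant and
# independent across centuries, the chance of surviving n centuries
# is (1 - p) ** n.

p = 0.0005   # 0.05% extinction risk per century
n = 3_000    # 300,000 years of human history = 3,000 centuries

survival_probability = (1 - p) ** n
print(f"Chance of surviving {n} centuries: {survival_probability:.1%}")
# -> 22.3%, i.e. less than one in four: any per-century risk above 0.05%
# makes our long track record of survival look surprisingly lucky.
```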

But the above argument doesn't take into account that some aspects of modern living conditions are quite different from those prevailing over most of human history. Some parts of modern living put us at greater risk, but others decrease it.

Modern living conditions that increase risk include:

  • Greater human population density.
  • Much larger domestic animal reservoirs.
  • Anthropogenic climate change (which increases the likelihood of zoonotic disease).
  • Global interconnectedness and interdependence (which may lead to greater civilisational fragility).

Modern living conditions that decrease risk include:

  • Vastly improved hygiene and sanitation.
  • The potential for effective diagnosis, treatment, and vaccination.
  • An understanding of the mechanisms of disease development and transmission.
  • Greater population dispersion, including exceptionally isolated groups like Antarctic scientific researchers and nuclear submarine crews.5

Taking all this into consideration, it's unclear whether our cumulative risk has increased or decreased. However, even if the risk has in fact increased, it likely hasn't done so to a degree that would put the above assessment in doubt. Therefore, while we still face considerable uncertainties surrounding the estimation of GCBRs from natural pathogens, our conclusion that risk is relatively low is reasonably robust.

Engineered pathogens

Several researchers within the effective altruism community believe engineered pathogens pose a more serious risk than natural pathogens.6 A sufficiently capable group or government could alter a pathogen's disease-causing properties — such as transmissibility, lethality, incubation time, and environmental survival — to increase its potential for damage. By contrast, a naturally evolving pathogen is constrained by selection pressures to strike a balance between its own fitness and that of its host.

These concerns are exacerbated by trends suggesting that opportunities to cause widespread harm with engineered pathogens are becoming increasingly available, such as the falling costs of DNA synthesis and sequencing and the growing accessibility of biotechnology.

Such harm may result either from an accidental laboratory release (as a consequence of research on potential pandemic pathogens), or from an intentional release by a hostile group or agent.

Accidental release

In a presentation at a 2011 conference in Malta, Dutch virologist Ron Fouchier described how his team had successfully created a novel, contagious strain of H5N1, the virus subtype responsible for bird flu. Although H5N1 kills about half of the people it infects,7 it's fortunately not transmissible between humans. Fouchier told his audience, however, that his team had "mutated the hell out of H5N1" and then passed the mutated strain through a series of ferrets (animals often used to model influenza transmission in humans). After 10 such passages, the virus had acquired the ability to spread from one animal to another, just like seasonal flu.

Such experiments are seriously worrying, primarily because of the possibility of an accidental release. Fouchier's group worked in a biosafety level 3 (BSL-3) laboratory, the level required for work involving microbes that can cause serious or lethal disease through inhalation. But the track record of past laboratory escapes indicates a probability of accidental release much higher than would be acceptable on any reasonable cost-benefit analysis: Marc Lipsitch and Thomas Inglesby estimate a 0.2% chance of accidental release per laboratory-year in a BSL-3 facility. Such a "low" probability translates into very high expected costs when what is risked is a pandemic — and COVID-19, for comparison, has a case fatality rate at least ten times lower than H5N1's. Indeed, the authors estimate that each laboratory-year of experimentation on virulent, transmissible influenza virus has an expected death toll of 2,000 to 1.4 million people.
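
As a rough illustration of how such a small release probability supports such a large expected death toll, consider the back-of-the-envelope calculation below. The 0.2% release probability is Lipsitch and Inglesby's; the conditional probability of a pandemic and the fatality figure are illustrative assumptions of ours, not inputs from their paper.

```python
# Back-of-the-envelope expected deaths per laboratory-year.
# p_release is Lipsitch and Inglesby's estimate quoted above; the other
# two inputs are illustrative assumptions, not figures from their paper.

p_release = 0.002                 # 0.2% chance of release per lab-year
p_pandemic_given_release = 0.1    # assumed: 1 in 10 releases seeds a pandemic
deaths_if_pandemic = 100_000_000  # assumed: 100 million deaths in that case

expected_deaths = p_release * p_pandemic_given_release * deaths_if_pandemic
print(f"Expected deaths per laboratory-year: {expected_deaths:,.0f}")
# -> 20,000, comfortably inside the 2,000 to 1.4 million range cited above.
```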

Dangerous pathogens can be and have been studied and modified in the context of military research. The Soviet Union's Biopreparat programme was the largest, most effective, and most sophisticated offensive biowarfare programme in history. It employed tens of thousands of scientists and technicians in a network of clandestine research institutes and production facilities, and stockpiled and weaponised over a dozen viral and bacterial agents — including smallpox and the causative agents of anthrax, plague, and tularaemia. The programme was associated with a significant number of accidental pathogen escapes, including a release of aerosolised anthrax that killed over 60 people in a nearby town.8

If a bioweapons programme of comparable scale existed in the future, Lipsitch and Inglesby's estimates would predict an accidental release of weaponised agents within decades. The Biopreparat escapes failed to result in a global pandemic because the Soviet programme focused almost exclusively on non-pandemic agents. This focus was driven by the limitations of 1980s technology rather than self-restraint. With contemporary biotechnology, pursuit of bioweapons by a resourceful state actor should be regarded as one of the most concerning sources of GCBR.

Intentional release

Various radicalised groups and terrorist organisations have expressed their intent to use bioweapons or other dangerous pathogens for destructive purposes. Some have acted on this intention, such as the doomsday cult Aum Shinrikyo, which attempted several attacks with biological agents before killing 13 people and injuring thousands in its 1995 sarin gas attack on the Tokyo subway.9

While the risk of accidental release can be estimated from the frequency of past laboratory escapes, estimating the risk of intentional release is far more speculative. As Lipsitch and Inglesby note:10

Such a calculation would require reliable, quantitative data on a variety of probability assessments: the probability that a person, group, or country intends to release potential pandemic pathogens (PPP); that a person, group, or country has the means to obtain the pathogen, or has the capacity to generate one from published data; and, that a person, group, or country has the means of distributing a PPP in a way that would start an epidemic. Those kinds of data are not presently available, nor will they be in the foreseeable future.

The survey mentioned earlier found that the median respondent believed that, by 2100, an engineered pandemic had a 30% chance of killing at least 1 million people, a 10% chance of killing at least 1 billion people, and a 2% chance of causing human extinction.

A sense of the risk posed by the intentional release of engineered pathogens can also be gleaned from Metaculus, a website that aggregates its users' probability estimates on a wide range of questions (so far with a reasonably good track record). As of November 2021, Metaculus hosts several forecasts relevant to this risk.11

Considered together, these forecasts imply an estimated unconditional probability of a near-extinction biological event by 2100 of around 0.01%.
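
To see how several conditional forecasts can combine into a single unconditional figure like this, consider the sketch below. The individual probabilities are hypothetical placeholders chosen purely for illustration — they are not the actual Metaculus forecasts; only the roughly 0.01% product comes from the text.

```python
# Combining conditional forecasts via the chain rule:
# P(A and B and C) = P(A) * P(B|A) * P(C|A and B).
# All three inputs below are hypothetical placeholders, NOT the actual
# Metaculus forecasts; only their ~0.01% product matches the text.

p_engineered_pandemic = 0.10                 # hypothetical: pandemic by 2100
p_catastrophe_given_pandemic = 0.02          # hypothetical: it becomes a GCBR
p_near_extinction_given_catastrophe = 0.05   # hypothetical: near-extinction

p_unconditional = (p_engineered_pandemic
                   * p_catastrophe_given_pandemic
                   * p_near_extinction_given_catastrophe)
print(f"Unconditional probability by 2100: {p_unconditional:.4%}")
# -> 0.0100%
```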

We can also estimate the likelihood of intentional deployment of engineered pathogens using statistics about fatalities from terrorism and warfare. In a seminal 1935 paper, Lewis Fry Richardson observed that deadly conflict at all scales — from individual homicides to world wars — can be summarised by a simple model, with timing described by a Poisson process and severity described by a power law.12 Because fatalities fit this regular pattern, Richardson's model can be used to estimate the probability of events of unprecedented severity.
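
As a sketch of how such a model yields estimates for unprecedented events, the snippet below combines a power-law severity distribution with Poisson timing. The onset rate, exponent, and minimum severity are illustrative assumptions, not Richardson's fitted values.

```python
import math

# Richardson-style tail estimate: conflicts arrive as a Poisson process,
# and their severities follow a power law. All parameters here are
# illustrative assumptions, not Richardson's fitted values.

rate_per_year = 0.5        # assumed onset rate of deadly conflicts
alpha = 2.0                # assumed power-law exponent for severity
x_min = 1_000              # assumed minimum fatalities covered by the fit
threshold = 8_000_000_000  # an "everyone alive" level of severity

# Tail probability of a single conflict exceeding the threshold:
# P(X >= x) = (x / x_min) ** (1 - alpha)
p_severity = (threshold / x_min) ** (1 - alpha)

# Probability of at least one such event over a century of arrivals
p_century = 1 - math.exp(-rate_per_year * 100 * p_severity)
print(f"Probability per century of exceeding the threshold: {p_century:.4%}")
```

The output is extremely sensitive to the assumed exponent and rate; the study discussed next fits these parameters to historical data rather than stipulating them.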

One study has done just that, and estimates a risk of human extinction from biological terrorism of 0.014% per century, and a corresponding risk from biological warfare of 0.005% per century.13 However, this method probably underestimates risk for a number of reasons, such as the conservative assumptions made by the authors and the increasing risk expected from the trends noted earlier.14

Is biosecurity neglected?

Superficially, biosecurity as typically defined does not appear to be a particularly neglected cause area. Even before the COVID-19 pandemic, the United States federal government allocated around $3 billion annually to biosecurity. As Gregory Lewis notes, this figure stands in remarkable contrast to the approximately $10 million spent on AI safety in 2017.

However, this picture is somewhat misleading for several reasons:

  • There is more interest in funding AI safety today,15 though the area still receives extremely little compared to its scale.
  • Biosecurity spending does not equal spending on efforts to reduce GCBRs. As noted above, "biosecurity" may describe interventions aimed at preventing events with little or no potential to severely harm or destroy humanity. Within the effective altruism community, biosecurity and AI risk receive comparable levels of funding.
  • Certain sub-areas within biosecurity, however, are highly neglected. For example, the Biological Weapons Convention (BWC) of 1972, the main disarmament treaty of its kind and widely credited with establishing a near-universal norm against the use of biological weapons, has been run since 2006 by an Implementation Support Unit with only four employees and a budget smaller than that of the average McDonald's restaurant.16
  • Until recently, there was a taboo surrounding discussions of biological events involving millions of fatalities. As a result, the field of biosecurity was largely not directly focused on GCBRs.

The bottom line is that biosecurity, and especially work on GCBRs, receives far less funding than it would if humanity were adequately prioritising its own safety. The cause is therefore neglected.

Is biosecurity tractable?

The tractability of biosecurity seems comparable to that of our other high-priority causes aimed at safeguarding the long-term future, such as AI safety and nuclear security. 80,000 Hours rates biosecurity as "moderately tractable," and experts characterise the reduction of GCBRs as "potentially tractable."17

We face two important obstacles to significant progress in biosecurity.

The first is that biotechnology is often dual-use — that is, it can be used to do good as well as to cause harm. Research or development with the potential to cure disease or extend life can frequently also be put to the service of malicious or destructive purposes. This is less true of other types of risks: nuclear programmes, for instance, require facilities for uranium enrichment with no other legitimate use.18

The second is that public discussion of research in biotechnology can often constitute an information hazard — that is, a risk arising from the spread of true information.19 This is for a couple of reasons:

  • Progress in genetic synthesis makes it increasingly feasible to create organisms just from information about the structure of their DNA or RNA.
  • Development in this area does not require complex and expensive equipment (such as nuclear facilities), so merely learning of a technology's destructive potential can substantially increase the risk that a malicious actor exploits it. For example, al-Qaeda decided to initiate a bioterror programme only because, as Ayman al-Zawahiri recounts, "the enemy drew our attention to [bioweapons] by repeatedly expressing concern that they can be produced simply."

Potential solutions

There are several promising interventions to reduce GCBRs. Open Philanthropy wrote a comprehensive report on GCBRs, which largely focused on viral pathogens. (Although viruses are not the only pathogens capable of posing a GCBR, they are especially worrying because of their transmissibility and virulence, and because we have limited treatments against them.) The report identifies several promising goals which, if met, would make us safer:

  • Ensuring greater availability of a range of broad-spectrum antiviral compounds with low potential for the development of resistance.
  • Reducing the time from finding a new pathogen to being able to confer immunity to it (such as through a vaccine) to fewer than 100 days.
  • Improving our ability to reliably contain potentially dangerous pathogens.
  • Enabling widespread metagenomic sequencing that allows us to reliably detect outbreaks.

Of these, metagenomic sequencing is an especially promising long-term approach to GCBR management. Carl Shulman has argued that implementing continuous and ubiquitous surveillance for new pathogens sufficient to virtually eliminate GCBRs should be affordable to most governments within a few decades, based on trends in the falling costs of DNA sequencing noted earlier.
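
As a toy illustration of the kind of extrapolation behind this claim, the sketch below projects an exponentially falling per-sample sequencing cost and asks when a large surveillance programme would fit within a fixed budget. Every parameter is a hypothetical assumption chosen for illustration, not a figure from Shulman or Open Philanthropy.

```python
# Toy projection of metagenomic surveillance costs under exponentially
# falling sequencing prices. Every parameter is a hypothetical assumption.

cost_per_sample_today = 100.0   # assumed: dollars per metagenomic sample
halving_time_years = 3.0        # assumed: per-sample cost halves every 3 years
samples_per_year = 50_000_000   # assumed: samples a national programme needs
budget = 100_000_000.0          # assumed: an "affordable" annual budget

def annual_cost(years_from_now: float) -> float:
    """Projected yearly programme cost after the given number of years."""
    per_sample = cost_per_sample_today * 0.5 ** (years_from_now / halving_time_years)
    return per_sample * samples_per_year

year = 0
while annual_cost(year) > budget:
    year += 1
print(f"Programme fits the budget in roughly {year} years")
# -> 17 with these placeholder numbers: the shape of the argument
# ("within a few decades"), not a real forecast.
```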

Other broader interventions to reduce GCBRs have been proposed, including:

  • Improving scenario planning for GCBRs.
  • Fostering awareness and a culture of safety among biotechnology researchers.
  • Improving methods of risk assessment and requiring potentially dangerous research to undergo such assessment prior to approval.
  • Developing and strengthening international biosafety norms.20

Why might you not prioritise biosecurity?

There are even more serious risks

It is unclear whether supporting biosecurity is currently the most promising option for safeguarding humanity's future. You may be one of many who think that AI poses a higher risk of existential catastrophe.21 As noted earlier, AI safety receives far less funding than biosecurity overall, though it's unclear whether there's much difference between the two if we compare only funding explicitly focused on reducing existential risks.

Robustness of evidence

The case for prioritising biosecurity involves a great deal of speculation and subjective judgement on a number of key questions, including:

  • Do current living conditions increase or decrease GCBRs from natural pathogens compared to levels prevalent during most of our history?
  • How likely is it that a group, agent, or state decides to use bioweapons?
  • How impactful is work in traditional biosecurity, relative to work focused specifically on GCBRs?
  • How serious does a biological event need to be before it threatens global catastrophe?

If you prefer a higher level of certainty that your support is having a positive impact, you may want to support other causes with better-understood solutions.

Transparency

Because biosecurity is fertile ground for information hazards, discussion in this field will inevitably be less transparent than in other areas, and charity evaluators may exhibit less reasoning transparency than some donors would prefer.

What are the best biosecurity charities, organisations, and funds?

You can donate to several promising programmes working in this area via our donation platform. For our charity and fund recommendations, see our best charities page.

Learn more

To learn more about biosecurity, we recommend the following resources.

Our research

This page was written by Pablo Stafforini. You can read our research notes to learn more about the work that went into this page.

Your feedback

Please help us improve our work — let us know what you thought of this page and suggest improvements using our content feedback form.