EOD teams detonate expired ordnance in the Kuwaiti desert (2002).  U.S. Navy photo by Photographer’s Mate 2nd Class Aaron Peterson. http://commons.wikimedia.org/wiki/File:US_Navy_020712-N-5471P-010_EOD_teams_detonate_expired_ordnance_in_the_Kuwaiti_desert.jpg

If you are dead, you cannot mount a comeback. If all life on Earth were destroyed by, say, a large comet impact, there would be no revival. Ruin is forever.

The destruction of all life on Earth is not 10 times worse than the destruction of one-tenth of all life on Earth. It is infinitely worse. A fall of 1 foot is not one-tenth as damaging to the human body as a fall of 10 feet, nor is it one-hundredth as damaging as a fall of 100 feet (which is very likely to be lethal). Walking down a stairway with one-foot-high steps, we are typically immune to any damage at all. Thus, we can say in both instances above that the harm rises dramatically (nonlinearly) as we move toward any 100 percent lethal limit.
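The point about nonlinear harm can be made concrete with a toy convex damage curve. The scale and exponent below are invented purely for illustration (they are not from the paper); what matters is the shape: damage grows far faster than the size of the shock as we approach a lethal limit.

```python
# Toy illustration of nonlinear (convex) harm from falls.
# The lethal height (100 ft) and cubic exponent are illustrative
# assumptions, not empirical values from the Taleb et al. paper.

def harm(height_ft, lethal_ft=100.0, exponent=3):
    """Fraction of maximum (lethal) harm for a fall from height_ft,
    capped at 1.0 (certain death) at the lethal height."""
    return min(1.0, (height_ft / lethal_ft) ** exponent)

for h in (1, 10, 100):
    print(f"{h:>3} ft fall -> {harm(h):.6f} of lethal harm")

# With a convex curve, a 10 ft fall is 1000x as harmful as a
# 1 ft fall here, not 10x.
ratio = harm(10) / harm(1)
```

Any convex curve gives the same qualitative result: small shocks (one-foot stair steps) are nearly harmless, while harm explodes as the shock nears the 100 percent lethal limit.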

It is just these properties, scope and severity, that most humans seem blind to when introducing innovations into society and the environment, according to a recent paper entitled "The Precautionary Principle: Fragility and Black Swans from Policy Actions." The paper comes from the Extreme Risk Initiative at the New York University School of Engineering, and one of its authors, Nassim Nicholas Taleb, is well-known to my readers.

The concepts in the paper are applicable to systemic problems such as climate change. But the paper addresses only two specific issues, genetically modified organisms (GMOs) and nuclear power, to illustrate its main points.

The precautionary principle refers to a policy that demands proof that an innovation is not broadly harmful to humans or the environment before it is deployed. We are referring here to public policy issues, not decisions by individuals. The question the paper tries to answer is: When should this principle be invoked in public policy?

The answer the authors give is surprisingly simple: when the risk of ruin is systemic. That doesn’t mean they suggest taking no steps to mitigate risk when ruin would only be local, say, the explosion of a fireworks factory. But they feel that such an event falls within the realm of ordinary risk management. An explosion at one fireworks factory cannot set off a chain reaction around the world. Individuals in and around the plant might be ruined. But all of humanity would not be ruined.

In the two examples covered in the paper, GMOs and nuclear power, the authors come to the surprising conclusion that nuclear power on a small scale does not warrant invoking the precautionary principle. Small-scale nuclear power does warrant careful risk management and cost/benefit analysis. Whether the damaged reactors at Fukushima would fall into the category of small-scale nuclear power isn’t clear. Their effects were worldwide, even if small in most places.

GMOs, however, offer a classic case of unforeseeable systemic ruin. We will know we are ruined by this untried technology after the ruin happens (perhaps in the form of famine or widespread human health and/or environmental effects). The authors categorically reject the notion that modern genetic engineering of plants is no more dangerous than traditional selective breeding.

This is because traditional methods are tried on a small scale and achieve large-scale acceptance and use over time only if they are successful, that is, if they demonstrate no drastic side effects or failures. This mimics nature’s bottom-up approach to evolution; the changes effected this way are gradual, not drastic, and, of course, they don’t involve transferring genetic material between completely different species, say, from a fish into a tomato.

Proponents will say that cross-species transfer of genetic material takes place in nature as well. But its scope is limited and its survivability and evolutionary fitness are tested over long periods during which these changes either thrive or disappear.

The top-down approach of the GMO industry introduces GMO crops everywhere across the world in a short period and combines one risk, untested genetic combinations, with another grave risk, monoculture. The long-term product of these two risks is unknown. But it is rightly categorized as systemic. GMO crops are now deployed worldwide, and they can and do contaminate non-GMO crops and wild plants through pollination.

Crops created through selective breeding have long histories of success and toxicities that are well understood and unlikely to change suddenly. As each new GMO crop is deployed, we cannot know ahead of time whether it will lead to systemic health and/or environment problems because there is little testing and, in any case, the amount of experience we have with GMO crops is far, far shorter than for the products of traditional selective breeding.

With each step we take in the production and deployment of new GMO seeds, we are playing a game of Russian roulette. The first few times we’ve pulled the trigger, we did not get catastrophic systemic effects, not yet, at least. But since there is a nonzero risk of such effects with each pull, the probability of a catastrophic outcome approaches certainty over time. The risk is virtually 100 percent that we will ultimately reach the chamber in the Russian roulette gene gun that causes catastrophic and widespread damage to humans and/or the environment.
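The arithmetic behind the Russian roulette claim is simple: if each deployment carries even a tiny independent probability p of systemic ruin, the chance of avoiding ruin over n deployments is (1 − p)^n, which decays toward zero. A quick sketch, where p = 0.001 is an arbitrary illustrative number and not an estimate of actual GMO risk:

```python
# Cumulative probability of at least one ruin event after n
# independent trials, each with small per-trial ruin probability p.
# p = 0.001 is an arbitrary illustrative value, not a risk estimate.

def ruin_probability(p, n):
    """P(at least one ruin in n trials) = 1 - (1 - p)**n."""
    return 1.0 - (1.0 - p) ** n

p = 0.001
for n in (10, 1000, 100000):
    print(f"n = {n:>6}: P(ruin so far) = {ruin_probability(p, n):.4f}")
```

However small p is, as long as it is nonzero the cumulative probability of ruin climbs toward 1 as trials accumulate, which is the authors' point: repeated exposure to even a tiny systemic risk guarantees eventual ruin.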

Saying that there is no evidence so far that this will happen is a failure to understand that hidden systemic risk can often only show up on very long time scales. And, of course, when that risk does show up, it’s too late to do anything. Remember: when we manipulate a gene or genes inside a plant, we are not doing just one thing. Without knowing it, we are affecting multiple systems in the plant and in the environment the plant lives in. We are creating multiple possible pathways to ruin.

This is just a short preview of the article cited above. The article is quite accessible to a lay reader and, in places, even entertaining. I encourage you to read the whole thing. It is the most rigorous statement to date concerning the precautionary principle and risk in that it outlines clear criteria for judging when that principle should be invoked and when it should not be.
