

The following is mirrored with permission from http://www.i-sis.org.uk/prec.shtml


Institute of Science in Society

Science, Sustainability, Society

Use And Abuse Of The Precautionary Principle

ISIS submission to the US Advisory Committee on International Economic Policy (ACIEP) Biotechnology Working Group, 13 July 2000



 

The precautionary principle is accepted as the basis of the Cartagena Biosafety Protocol, agreed in Montreal in January 2000 and already signed by 68 nations that attended the Convention on Biological Diversity Conference in Nairobi in May 2000. The principle is to be applied to all GMOs, whether used as food or as seeds for environmental release.

The precautionary principle states that when there is reasonable suspicion of harm, lack of scientific certainty or consensus must not be used to postpone preventative action. There is indeed sufficient direct and indirect scientific evidence to suggest that GMOs are unsafe for use as food or for release into the environment, which is why more than 300 scientists from 38 countries are demanding a moratorium on all releases of GMOs (World Scientists' Statement and Open Letter to All Governments).

The precautionary principle is actually part and parcel of sound science. Science is an active knowledge system in which new discoveries are made almost every day, so scientific evidence is always incomplete and uncertain. The responsible use of scientific evidence, therefore, is to err on the side of precaution. This is all the more important for technologies, such as genetic engineering, which can neither be controlled nor recalled.

Dr. Peter Saunders, Professor of Applied Mathematics at King's College London and co-founder of ISIS, has written an article showing that the precautionary principle is simply codified common sense: the same sense that people have accepted in courts of law and that mathematicians have adopted in the proper use of statistics. It begins to clarify how scientific evidence is to be interpreted in a way that is socially responsible and also in accord with sound science.

Dr. Mae-Wan Ho
Director
Institute of Science in Society
C/o Dept of Biological Sciences
Open University
Walton Hall, Milton Keynes
MK7 6AA, UK


Key words: scientific evidence, burden of proof, statistics, p-values, law



Use And Abuse Of The Precautionary Principle

Peter T. Saunders
Mathematics Department, King's College, London


Proponents of biotechnology have been busy attacking the precautionary principle lately. Why? Because it holds the key to protecting health and the environment: it requires the industry to prove beyond reasonable doubt that a technology or a product is safe before it can be adopted. Peter Saunders, Professor of Mathematics and co-founder of ISIS, shows how the precautionary principle is just codified common sense that people have accepted in courts of law, and that mathematicians have likewise accepted in setting the burden of proof in statistics. But pro-biotech scientists have been abusing science as well as the precautionary principle. A version of this article has been submitted to the US Senate Committee on Biotechnology.


There has been a lot written and said about the precautionary principle recently, much of it misleading. Some have stated that if the principle were applied it would put an end to technological advance. Others argue that it fails to take science properly into account, though in fact it relies more heavily on scientific evidence than other approaches to the problem. Still others claim to be applying the principle when clearly they are not. From all the confusion, you might think that it is a deep philosophical idea that is very difficult for a lay person to grasp [1].

In fact, the precautionary principle is very simple. All it actually amounts to is a piece of common sense: if we are embarking on something new, we should think very carefully about whether it is safe or not, and we should not go ahead until we are convinced it is. It's also not a new idea; it already appears in national legislation in many countries (including the United States), and in international agreements such as the 1992 Rio Declaration and the Cartagena Biosafety Protocol agreed in Montreal in 2000.

Those who reject the precautionary principle are pushing forward with untested, inadequately researched technologies and insisting that it is up to the rest of us to prove that they are dangerous before they can be stopped. At the same time, they also refuse to accept liability, so if the technologies do turn out to be hazardous, as in many cases they already have, someone else will have to pay the costs of putting things right.

The precautionary principle is about the burden of proof, a concept that ordinary people have been expected to understand and accept in the law for many years. It is also the same reasoning that is used in most statistical testing. In fact, as a lot of work in biology depends on statistics, neglect or misuse of the precautionary principle often arises out of a misunderstanding and abuse of statistics.

The precautionary principle does not provide us with an algorithm for decision making. We still have to seek the best scientific evidence we can obtain and we still have to make judgements about what is in the best interest of ourselves and our environment. Indeed, one of the advantages of the principle is that it forces us to face these issues; we cannot ignore them in the hope that everything will turn out for the best whatever we do. The basic point, however, is that it places the burden of proof firmly on the advocates of new technology. It is for them to show that what they are proposing is safe. It is not for the rest of us to show that it is not.

 

The Burden of Proof

The precautionary principle states that if there are reasonable scientific grounds for believing that a new process or product may not be safe, it should not be introduced until we have convincing evidence that the risks are small and are outweighed by the benefits. It can also be applied to existing technologies when new evidence appears suggesting that they are more dangerous than we had thought, as in the cases of cigarettes, CFCs, lead in petrol, greenhouse gases and now genetically modified organisms (GMOs) [2]. In such cases it requires that we carry out research to gain a better assessment of the risk and, in the meantime, that we should not expand our use of the technology but should put in train measures to reduce our dependence on it. If the dangers are considered serious enough, the principle may require us to withdraw the products or impose a ban or moratorium on further use.

The principle does not, as some critics claim, require industry to provide absolute proof that something new is safe. That would be an impossible demand and would indeed stop technology dead in its tracks, but it is not what is being demanded. The precautionary principle does not deal with absolute certainty. On the contrary, it is specifically intended for circumstances in which there is no absolute certainty. It simply puts the burden of proof where it belongs, with the innovator. The requirement is to demonstrate, not absolutely but beyond reasonable doubt, that what is being proposed is safe.

A similar principle applies in the criminal law, and for much the same reason. In the courtroom, the prosecution and the defence are not on equal terms. The defendant is not required to prove his innocence and the jury is not asked to decide merely whether they think it is more likely than not that he committed the crime. The prosecution must establish, not absolutely but beyond reasonable doubt, that the defendant is guilty.

There is a good reason for this inequality, and it has to do with the uncertainty of the situation and the consequences of taking a wrong decision. The defendant may be guilty or not and he may be found guilty or not. If he is guilty and convicted, then justice has been done, as it has if he is innocent and found not guilty. But suppose the jury reaches the wrong verdict, what then?

That depends on which of the two possible errors was made. If the defendant actually committed the crime but is found not guilty, then a crime goes unpunished. The other possibility is that the defendant is wrongly convicted of a crime, in which case his whole life may be ruined. Neither of these outcomes is satisfactory, but society has decided that the second is so much worse than the first that we should do as much as we reasonably can to avoid it. It is better, so the saying goes, that a hundred guilty men should go free than that one innocent man should be convicted.

In any situation in which there is uncertainty, mistakes will occur. Our aim must be to minimise the damage that results when they do.

Just as society does not require a defendant to prove his innocence, so it should not require objectors to prove that a technology is harmful. It is up to those who want to introduce something new to prove, not with certainty but beyond reasonable doubt, that it is safe. Society balances the trial in favour of the defendant because we believe that convicting an innocent person is far worse than failing to convict someone who is actually guilty. In the same way, we should balance the decision on risks and hazards in favour of safety, especially in those cases where the damage, should it occur, is serious and irredeemable.

The objectors must bring forward evidence that stands up to scrutiny, but they do not have to prove there are serious dangers. The burden of proof is on the innovators.

 

The Misuse of Statistics

You have an antique coin that you want to use for deciding who will go first in a game, but you are worried that it might be biased in favour of heads. You toss it three times, and it comes down heads every time. Naturally, this does nothing to reassure you. Then along comes someone who claims to know about statistics. He carries out a short calculation and informs you that as the "p-value" is 0.125, you have nothing to worry about. The coin is not biased.

Now this must strike you as nonsense, even if you don't understand statistics. Surely if a coin comes down heads three times in a row, that can't prove it is unbiased? No, of course it can't. But this sort of reasoning is being used to prove that GM technology is safe.

The fallacy, and it is a fallacy, comes about through either a misunderstanding of statistics or a total neglect of the precautionary principle -- or, more likely, both. In brief, people are claiming to have proven that something is safe when what they have actually done is to fail to prove that it is unsafe. It's the mathematical way of claiming that absence of evidence is the same as evidence of absence.

To see how this comes about, we have to appreciate the difference between biological and other kinds of scientific evidence. Most experiments in physics and chemistry are relatively clear cut. If we want to know what will happen when we mix zinc and sulphuric acid, we really only have to try it once. We may repeat the experiment to make sure it worked properly, but we expect to get the same result, even to the amount of hydrogen that is produced from a given amount of zinc and acid.

Organisms, however, vary considerably and don't behave in closely predictable ways. If we spread fertiliser on a field, not every plant will increase its growth by the same amount, and if we cross two lines of maize, not all the resulting seeds will be the same. We often have to use some sort of statistical argument to tell us whether what we have observed represents a real effect or is merely due to chance.

The details of the argument will vary depending on exactly what it is we want to establish, but the standard ones follow a similar pattern.

Suppose that plant breeders have come up with a new variety of maize and we want to know if it gives a better yield than the old one. We plant one field with each of them, and we find that the new variety does actually produce more maize.

That's encouraging, but it doesn't prove anything. After all, even if we had planted both fields with the old strain, we wouldn't have expected to get exactly the same yield in both. The apparent improvement might be just a chance fluctuation.

To help us decide whether the observed effect is real, we carry out the following calculation. We suppose that the new strain is actually no better than the old one. This is called the "null hypothesis", because we assume that nothing has changed. We then estimate as best we can the probability that the new strain would perform as well as it did simply on account of chance. We call this probability the p-value.

Obviously, the smaller the p-value the more likely it is that the new strain really is better, though we can never be absolutely certain. What counts as a small enough value of p is arbitrary, but over the years statisticians have adopted the convention that if p is less than 5% we should reject the null hypothesis, i.e. we may infer that the new strain is better. Another way of saying this is that the increase in yields is "significant".
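To make the procedure concrete, here is a minimal sketch of a permutation test for the maize example. The yield figures are invented purely for illustration, and the one-sided test assumes we only care whether the new strain is better; the logic, not the numbers, is the point.

```python
import random
from statistics import mean

# Hypothetical plot yields in tonnes per hectare (invented for this sketch).
old_strain = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.0, 4.7]
new_strain = [5.4, 5.1, 5.6, 5.2, 5.0, 5.5, 5.3, 5.2]

observed_diff = mean(new_strain) - mean(old_strain)

# Null hypothesis: the strains are equivalent, so the labels "old" and "new"
# are arbitrary. Shuffle the labels many times and count how often chance
# alone produces a difference at least as large as the one observed.
pooled = old_strain + new_strain
n_new = len(new_strain)
random.seed(1)
trials = 50_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:n_new]) - mean(pooled[n_new:]) >= observed_diff:
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed_diff:.2f} t/ha, p-value approx. {p_value:.3f}")
# A p-value below the conventional 0.05 lets us reject the null hypothesis
# (infer the new strain really is better); a larger value means we have
# simply failed to demonstrate an improvement.
```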

Why have statisticians fastened on such a small value? Wouldn't it be reasonable to say that if there is less than an even chance (i.e. p=0.5) of such a large increase then we should infer that the new strain is better?

No, and the reason why not is simple. It's a question of the burden of proof. Remember that statistics is about taking decisions in the face of uncertainty. It is a serious business advising a company to change the variety of seed it produces or a farmer to switch from one he has grown for years. There could be a lot to lose if we are wrong. We want to be sure beyond reasonable doubt that we are right, and that's usually taken to mean a p-value of 0.05 or less.

Suppose we obtain a p-value of greater than 0.05. What then? We have failed to prove that the new strain is better. We have not, however, proved that it is no better, any more than by finding a defendant not guilty we have proved that he is innocent.
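To illustrate this asymmetry, here is a hedged simulation with entirely invented effect sizes and plot counts, assuming SciPy is available for the t-test: the new strain genuinely yields more, yet a small trial usually returns p > 0.05, so "not significant" is very far from "no better".

```python
import random
from scipy import stats  # assumed to be installed; any standard two-sample test would do

random.seed(2)
TRUE_GAIN = 0.15   # the new strain really is 0.15 t/ha better (invented figure)
NOISE_SD = 0.25    # plot-to-plot variation (invented figure)
PLOTS = 4          # a deliberately small trial: four plots of each strain
EXPERIMENTS = 10_000

missed = 0
for _ in range(EXPERIMENTS):
    old = [random.gauss(5.0, NOISE_SD) for _ in range(PLOTS)]
    new = [random.gauss(5.0 + TRUE_GAIN, NOISE_SD) for _ in range(PLOTS)]
    res = stats.ttest_ind(new, old)   # ordinary two-sample t-test
    if res.pvalue > 0.05:
        missed += 1

print(f"A real improvement went undetected (p > 0.05) in "
      f"{missed / EXPERIMENTS:.0%} of these small trials")
# Failing to reach significance here clearly does not show the strains are
# equivalent; the trial was simply too small to detect the difference.
```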

In the example of the antique coin, the null hypothesis was that the coin was fair. If that were the case, then the probability of a head on any one throw would be 0.5, so the probability of three heads in a row would be (0.5)^3 = 0.125. This is greater than 0.05, so we cannot reject the null hypothesis. Thus we cannot claim that our experiment has shown the coin to be biased.

Up to that point, the reasoning was correct. Where it went wrong was in the claim that the experiment has shown the coin to be fair. It did no such thing.
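A few lines of arithmetic, offered only as a sketch of the point, show why: three heads in a row is even more probable from a biased coin than from a fair one, so an outcome that cannot reject fairness certainly cannot demonstrate it.

```python
# Probability of three heads in three tosses under various assumptions about
# the coin; pure arithmetic, no data beyond the example in the text.
for p_head in (0.5, 0.7, 0.9):
    print(f"P(heads) = {p_head}: P(three heads in a row) = {p_head ** 3:.3f}")

# Fair coin:        0.125  (> 0.05, so the null "the coin is fair" is not rejected)
# Biased, P = 0.7:  0.343
# Biased, P = 0.9:  0.729
# The same observation is far more likely from a biased coin, so failing to
# reject fairness is nothing like proving it.
```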

Yet that is precisely the sort of argument that we see in scientific papers defending genetic engineering. A recent report, "Absence of toxicity of Bacillus thuringiensis pollen to black swallowtails under field conditions" [3], claims by its title to have shown that there is no harmful effect. In the discussion, however, the authors state only that there were "no significant weight differences among larvae as a function of distance from the corn field or pollen level." In other words, they have only failed to demonstrate that there is a harmful effect. They have not proven that there is none.

A second paper [4] claims to show that transgenes in wheat are stably inherited. The evidence for this is that the "transmission ratios were shown to be Mendelian in 8 out of 12 lines." In the accompanying table, however, six of the p-values are less than 0.5 and one is 0.1. That is not sufficient to prove that the genes are unstable and therefore inherited in a non-Mendelian way, but neither is it sufficient to prove that they are stably inherited, which is what was claimed.
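For readers who want to see what lies behind such transmission-ratio claims, here is a hedged sketch of the usual chi-square test against a 3:1 Mendelian ratio. The segregation counts are invented, not taken from the cited paper; the point is only that a non-significant result fails to reject Mendelian inheritance rather than establishing it.

```python
from math import sqrt
from statistics import NormalDist

# Invented segregation counts for one transgenic line: 160 progeny carrying
# the transgene and 40 without, out of 200 scored.
observed = [160, 40]
expected = [150, 50]   # what a 3:1 Mendelian ratio predicts for 200 progeny

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
# With one degree of freedom, the chi-square tail probability equals a
# two-sided normal tail evaluated at sqrt(chi2).
p = 2 * (1 - NormalDist().cdf(sqrt(chi2)))
print(f"chi-square = {chi2:.2f}, p = {p:.2f}")

# Here p is about 0.10: we cannot reject the 3:1 ratio at the 5% level, but
# that is a failure to prove non-Mendelian inheritance, not a proof that the
# transgene is stably inherited.
```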

The way to decide if the antique coin is biased is to toss it more times and see what happens. In the case of the safety and stability of GM crops, more and better experiments should be carried out.
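As a rough illustration of what tossing the coin more times buys, the exact binomial tail probability below shows how quickly the evidence becomes decisive one way or the other; the toss counts are chosen only for illustration.

```python
from math import comb

def p_at_least(heads, tosses, p_head=0.5):
    """Exact probability of at least `heads` heads in `tosses` tosses of a
    coin whose chance of heads on each toss is `p_head`."""
    return sum(comb(tosses, k) * p_head ** k * (1 - p_head) ** (tosses - k)
               for k in range(heads, tosses + 1))

print(f"3 heads in 3 tosses:    p = {p_at_least(3, 3):.3f}")    # 0.125 -- inconclusive
print(f"15 heads in 20 tosses:  p = {p_at_least(15, 20):.3f}")  # ~0.021 -- a fair coin now looks doubtful
print(f"11 heads in 20 tosses:  p = {p_at_least(11, 20):.3f}")  # ~0.412 -- entirely consistent with a fair coin
```

Either way, the extra tosses settle the question that three tosses could not.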

 

The Anti-Precautionary Principle

The precautionary principle is so obviously common sense that we might expect it to be universally adopted. That would still leave room for debate about how big the risks and benefits are likely to be, especially when those who stand to gain if things go right and those who stand to lose if they do not are not the same. It is significant that the corporations are implacably opposed to proposals that they should be liable for any damage caused by the products of GM technology. They are demanding a one-way bet: they pocket any gains and someone else pays for any losses. It also gives us an idea of how confident they are about the safety of the technology.

What is harder to understand is why our regulators are still so reluctant to adopt the precautionary principle. They tend to rely instead on what we might call the anti-precautionary principle: When a new technology is proposed, it must be approved unless it can be shown conclusively to be dangerous. The burden of proof is not on the innovator; it is on the rest of us.

The most enthusiastic supporter of the anti-precautionary principle is the World Trade Organisation (WTO), the international body whose task it is to promote free trade. A country that wants to restrict or prohibit imports on grounds of safety has to provide definite proof of hazard, or else be accused of erecting artificial trade barriers. A recent example is the WTO's judgement that the European Union's ban on US growth-hormone-injected beef is illegal.

By applying the anti-precautionary principle in the past, we have allowed corporations to damage our health and our environment through cigarette smoking, lead in petrol, and high levels of toxic and radioactive wastes that include hormone disrupters, carcinogens and mutagens. The costs in human suffering and environmental degradation and in resources to attempt to put these right have been very high indeed. Politicians should bear this in mind.

 

Conclusion

There is nothing difficult or arcane about the precautionary principle. It is the same reasoning that is used every day in the courts and in statistics. More than that, it is just common sense. If we have genuine doubts about whether something is safe, then we should not use it until we are convinced it is. And how convinced we have to be depends on how much we really need it.

As far as GM crops are concerned, the situation is clear. The world is not short of food. Where people are going hungry, it is because of poverty. Hardly anyone believes that there will be a real shortage within 25 years, and a recent FAO report predicts that improvements in conventional agriculture and reductions in the rate of increase of the world's population will allow us to continue to feed ourselves indefinitely.

On the other side, there is both direct and indirect evidence that gene biotechnology may not be safe for health and the environment. The benefits of GM agriculture remain hypothetical.

We can easily afford a five-year moratorium to support further research into improving the safety of gene biotechnology and making it more precise and more effective. We should also use the time to develop better methods of sustainable farming, organic or low-input, which do not have the same potentially disastrous risks.

 

Notes and references

  1. See, for example, S. Holm and J. Harris, Nature 400 (1999) 398. Compare C.V. Howard & P.T. Saunders, Nature 401 (1999) 207, and C. Raffensperger et al., Nature 401 (1999) 207-208.

  2. We are now told that in the case of tobacco and lead, many in the industry knew about the hazards long before the public did. It is not always wise to accept broad and unsupported assurances about safety from those who have a very strong interest in continuing the technology.

  3. A.R. Wraight et al., Proceedings of the National Academy of Sciences (2000), early edition. Quite apart from the use of statistics, it generally requires considerable skill to design and carry out an experiment to provide a convincing demonstration that an effect does not occur. It is all too easy to fail to find something even when it is there.

  4. M.E. Cannell et al., Theoretical and Applied Genetics 99 (1999) 772-784.



The Institute of Science in Society
PO Box 32097, London NW1 0XR
Tel: +44 (0)20 7380 0908

Material on this site may be reproduced in any form without permission, on condition that it is accredited accordingly and contains a link to http://www.i-sis.org.uk/




