The European Food Safety Authority (EFSA) has published guidance on how to incorporate uncertainty into scientific assessment[i]. On the plus side, it is a thorough attempt to bring objectivity to the description of uncertainty and to minimise subjective opinion. On the negative side, it could push the opinions of scientists out of the policy debate and, where uncertainty exists, encourage risk-aversion in policy-making.
As a scientist, I believe it is vital that public policy is underpinned by a foundation of evidence. However, scientists must also acknowledge that policy-makers look through many lenses when making their decisions, and science needs to play its part as one of those lenses. It is therefore important to understand the relationship between uncertainty in the evidence and risk in policy.
While scientists are used to dealing with the uncertainties inherent in their evidence, those uncertainties create a real tension when used to underpin the more black-and-white, yes-or-no world of policy. Government departments, like Defra, use evidence to guide rather than to determine policy in areas of uncertainty.
Scientific uncertainty comes in two basic forms: aleatory and epistemic. Aleatory uncertainty is the natural variability in a system and is often irreducible, even through research. For example, the yield of wheat per hectare from British farms varies from year to year. In contrast, epistemic uncertainty is what we do not know, the gaps in our knowledge, and it can be reduced through research. For example, wheat yields from British farms have, on average, been static for about the last decade and we do not know why. It is important to understand the difference between these two forms of uncertainty when assessing evidence for policy-making.
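The distinction can be made concrete with a minimal simulation. All the numbers below are invented for illustration: a notional "true" mean wheat yield and its year-to-year spread. The sketch shows that gathering more data shrinks our uncertainty about the mean (the epistemic component) while the year-to-year variability (the aleatory component) stays put however much we sample.

```python
import random
import statistics

random.seed(42)

# Hypothetical numbers for illustration only.
TRUE_MEAN = 8.0   # notional mean yield (t/ha); research could pin this down (epistemic)
YEAR_SD = 0.6     # natural year-to-year variability (aleatory, irreducible)

def simulate_yields(n_years):
    """Draw n_years of yields around the true mean with natural variability."""
    return [random.gauss(TRUE_MEAN, YEAR_SD) for _ in range(n_years)]

# The standard error of the mean (epistemic) shrinks as n grows;
# the sample standard deviation (aleatory) hovers around YEAR_SD regardless.
for n in (5, 50, 500):
    yields = simulate_yields(n)
    sd = statistics.stdev(yields)   # estimate of the aleatory spread
    sem = sd / n ** 0.5             # epistemic uncertainty about the mean
    print(f"n={n:3d}  mean={statistics.mean(yields):.2f}  SD={sd:.2f}  SEM={sem:.3f}")
```

Research buys down the SEM column, but no amount of data removes the SD column: that variability is a property of the system itself.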
This is well illustrated by the recent EFSA document, which is aimed mainly at documenting epistemic uncertainty. Evidence assessments are now widely used to produce 'scientific opinion' in an attempt to advise policy-makers of the scientific consensus view on a subject. EFSA uses them extensively, for example in assessing the safety of pesticides or GM organisms. The Intergovernmental Panel on Climate Change (IPCC) is another body that has done this, on a massive scale, to assess the evidential basis for anthropogenic climate change.
These assessments needed to include opinion because the way evidence is generated through the scientific process is itself subject to aleatory uncertainty. For example, the results of many experimental studies in psychology and biomedicine are known to be unreliable[ii]. Including only the epistemic component of uncertainty drawn from this literature could therefore produce a biased assessment. Among all the studies done in a particular field, it can be impossible to discriminate the reliable from the unreliable using systematic, rule-based assessment. In the environmental sciences, where studies are often impossible to replicate and less reliable inferential methods are often used, this problem is probably even more profound.
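A toy simulation makes the bias mechanism explicit. The fractions and effect sizes below are entirely invented: suppose a field has a true effect of 0.2, but some share of published studies overstates it, and every study also carries ordinary sampling noise. A rule-based pooling of the literature then lands away from the truth, and no individual study can be flagged as unreliable from its reported value alone.

```python
import random
import statistics

random.seed(1)

# Hypothetical parameters for illustration only.
TRUE_EFFECT = 0.2          # the real effect in the field
BIAS = 0.5                 # how much an unreliable study overstates it, on average
UNRELIABLE_FRACTION = 0.4  # share of published studies that are unreliable
NOISE_SD = 0.3             # ordinary sampling noise in any single study

def published_effect():
    """One published result: true effect, plus bias (sometimes), plus noise."""
    bias = BIAS if random.random() < UNRELIABLE_FRACTION else 0.0
    return random.gauss(TRUE_EFFECT + bias, NOISE_SD)

studies = [published_effect() for _ in range(200)]
pooled = statistics.mean(studies)

# The pooled estimate sits near 0.2 + 0.4 * 0.5 = 0.4, not the true 0.2,
# yet the noise makes reliable and unreliable studies indistinguishable.
print(f"true effect: {TRUE_EFFECT}, naive pooled estimate: {pooled:.2f}")
```

This is why a purely rule-based synthesis of a literature with hidden unreliability can be confidently wrong, and why expert judgement about study quality still matters.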
Within this context, the EFSA attempt to corral and upgrade the assessment process by being clearer about how uncertainty is handled is commendable. However, nobody should imagine that it will solve the problem of how scientific evidence is used to define the risk associated with food in Europe. Beliefs and values are as prevalent among scientists carrying out assessments as they are among non-scientists. The processes EFSA suggests, while necessary, should not exclude scientific opinion. The guidance risks systematising the expression of uncertainty by focussing purely on the state of knowledge, the epistemic component. Recognising the aleatory component of uncertainty in scientific assessments as well is essential: it brings humanity to the discourse between science and society, and between science and policy.
[i] Guidance on Uncertainty in EFSA Scientific Assessment, EFSA Scientific Committee, doi:10.2903/j.efsa.20YY.NNNN, http://www.efsa.europa.eu/sites/default/files/consultation/150618.pdf
[ii] Nosek, B.A. et al. (2015) Estimating the reproducibility of psychological science. Science 349. DOI: 10.1126/science.aac4716