Heroes clean-up Salisbury

At the end of last month, I closed a meeting in Defra in London in which we agreed to advise ministers that the final site identified in Salisbury as needing specialist clean-up was safe. This was the final chapter in the follow-up to the novichok nerve agent attack on 4 March 2018. For almost a year many dedicated people had laboured to reach this objective and we were closing down one of the most shocking incidents in recent British history.

I am the Chief Scientific Adviser at Defra and it was my job to constitute and oversee the process which would provide assurance that sites were safe to be released back into public use following extensive testing.

On 4 March 2018 Yulia and Sergei Skripal were poisoned by novichok, a liquid nerve agent.

It threatened their lives, and those of other citizens in Salisbury, including Detective Sergeant Nick Bailey. On Saturday 30 June 2018 two people were admitted to Salisbury District Hospital having been taken unwell in a private property in Amesbury. Dawn Sturgess tragically died on the evening of Sunday 8 July, while Charlie Rowley survived thanks to the heroic efforts of the first responders and emergency services. Both were further victims of the novichok nerve agent.

I want to tell the story of how we have cleaned up the 12 sites identified across Salisbury and Amesbury as needing some level of specialist decontamination. Public Health England advice is that the risk to the public remains low, with no further cases of illness linked to this incident.

Many people will have heard of COBR, the top emergency response function within government. Alongside COBR is an advisory committee called SAGE (the Scientific Advisory Group for Emergencies), which can be stood up in a matter of minutes to hours depending on the need. This COBR-SAGE structure swung into action within minutes of confirmation that a nerve agent had been used in Salisbury.

Government is not caught off guard when these things happen. We regularly exercise this system of response because sadly we know this is a threat which exists.

Novichok itself is highly dangerous and difficult to detect. SAGE considered the evidence about novichok to inform it about what the effects were likely to be. This knowledge was critical to our clean-up operation.
Events like the Salisbury poisoning have two phases: the response phase, led by the relevant government department, and the recovery phase which follows it. In this case the Home Office coordinated the immediate response, after which it fell to Defra to marshal the recovery, including the clean-up.

We brought together all the expertise, both military and civilian, which could be brought to bear on this difficult problem. This capability exists to protect public health and is always ready to swing into action. It was last used operationally in 2006 when Alexander Litvinenko was poisoned with polonium-210, a radioactive material.

The activity was supervised by a body known as the Decontamination Science Assurance Group (DSAG) which I chaired, bringing together experts in chemistry, statistics, toxicology and public health. DSAG’s role was to assure ministers that each contaminated site was clean and could be released back to public use.

The use of a nerve agent in a British city was unprecedented and required a process to be established from the outset to make sure each site could be declared safe. Our priority was to get South Wiltshire back to normal as quickly as possible, but we would not compromise on safety and took a highly precautionary approach to our decontamination process. This involved testing for the presence of novichok, cleaning and re-testing for presence. This was repeated until we saw no more novichok. The procedure was technically challenging and rigorous, often involving experimenting with different cleaning methods and understanding how the chemical interacted with different surfaces and how it moved around.

To create even more rigour I asked an independent observer from outside government and outside the immediate circle of experts to audit and challenge our decisions and also to observe how we worked.
Dstl had the technical ability to detect extremely small quantities of novichok using methods which were independently verified by the Organisation for the Prohibition of Chemical Weapons (OPCW). Our ability to detect novichok got progressively better and this allowed us to develop a detailed picture of the patterns of contamination.

While we obviously wanted to minimise disruption, the thoroughness of our procedures led to some of the most contaminated locations being substantially deconstructed. This was necessary to comply with our exacting standards of cleanliness.

In total we decontaminated 12 sites across Salisbury and Amesbury. The process also included the decontamination of personal property and vehicles. The methods we used placed the protection of public health as the absolute priority.

It says a great deal for the effectiveness of the standard barrier and cleaning methods protecting the NHS first responders, who attended the casualties and later treated them in hospital, that none of them suffered any ill-effects from the agent.

I was often tempted to join those doing the work on the ground to carry out the testing but, in practice, this would have been a bad idea. DSAG needed to maintain a distant and objective relationship with those undertaking cleaning. DSAG’s priority was to safeguard the standards.

At the centre of this was our military-led taskforce, carrying out the cleaning and sampling with the utmost professionalism. Mostly clad in extremely uncomfortable suits which gave them the protection they needed to work with this highly toxic substance, these heroes worked through the heat of the summer of 2018, cleaning and sampling, cleaning and sampling. Both the response and recovery operations were supported by the significant expertise of Dstl, including the laboratory technicians who analysed many thousands of samples in high-containment laboratories, again clad in protective gear.

There are many unseen heroes in this story. From the first responders who had the insight to understand what they were beginning to see, to the nurses and doctors who worked tirelessly to save lives and the military and civilian experts who did the chemical analysis and cleaning. It is hard to overstate my admiration for these people who swung into action as a team. If those who carried out the despicable act of deploying a nerve agent in the middle of an iconic and beautiful English city thought they could break the spirit of those who would respond then they were seriously mistaken.

We have learned a lot as a result of this experience. It should never have happened, but the good news is that we will be even better prepared if there is a next time, and we will share our knowledge through the OPCW in ways which help others also to resist this kind of belligerent attack.


The “feed the world” dilemma: a philosophical tilt at the ethics of researching pesticides

Research scientists are often faced with ethical dilemmas because there are almost always both up- and down-sides to each scientific discovery. For example, research on viruses can result in better treatments but sometimes it could be used to make the viruses themselves even more deadly. This issue of “dual use” of research has been a point of debate for a long time. Regulation sometimes comes to the rescue but it is a blunt instrument which can only act as a broad guide to ethical decision-making in research.

Researchers understandably try to focus on the good that the results of their work can do, but is there enough attention given to the potential for bad outcomes? Those who study basic science are probably least sensitised to these issues because they are at arm's length from the use of research, but surely all research cannot be a universal good as the advocates for basic research often argue?

This interesting subject came to my attention recently when I received an e-mail from a leading researcher who was worried about the morality of his research. In his case, he was asking me: “Is it ethical for a scientist to work on developing new pesticides?” To my shame, I had never thought about this question before either in the context of pesticides or anything else. I was operating under the assumption that since my research had the potential to produce good then it must be good, and I had absolved myself of further moral responsibility. He was asking me, in this case, because I have expressed views about pesticides which have questioned their current pattern and scale of use (http://science.sciencemag.org/content/357/6357/1232; https://www.nature.com/articles/s41559-018-0557-8).

Pesticides are often in the news for bad reasons, whether as the close relations of chemical weapons like ‘Novichok’, as possible carcinogens like glyphosate or because of suspected or actual environmental impacts like neonicotinoids. The dilemma about pesticides arises because they also bring advantages for wider society. Pesticides could be a totemic test of the morality of research.

Researchers are often fêted for their capacity to make discoveries which lead to good changes in society, but do they carry some responsibility for the bad outcomes too? Many, I am sure, would say that this is going too far. Apart from potentially stifling the free-rolling creativity of science which powers economic growth (widely assumed to be a “good”), researchers have no way of knowing how their research will be used in future, so how can they be held accountable? Discoveries which are seen as “goods” in the present often come back to haunt us. Was the invention of the internal combustion engine really a “good”? I am sure that various classes of pesticides, like organochlorines, were seen as a great innovation when they were developed, but over-use and over-reliance on them as they were built into the economy turned them into a blight.

There is something deeply personal about the response to my correspondent. Here was a researcher who really cared and I wanted to try to help. What were his own motivations for researching new pesticides? Was the research the source of a lucrative contract from a chemical company? Or was it just another route for sustaining his flow of publications to tick boxes on the academic career ladder? Or was he trying to satiate his own intellectual vanity? Or was he genuinely convinced that his new discoveries would help people and the planet?

Most of us would look at this set of possibilities and probably settle quickly on the last one as the only motivation which is completely morally defensible, even if all the others could be defensible as routes to achieving universal goods as well. But they only really work if we see ourselves as cogs in the wheels of a much larger, trusted system which we do not control and which expects us to behave by its moral code. But does this absolve us from our individual responsibilities?

Looking at this based on its philosophical roots, research scientists are probably mostly consequentialists at heart in that they try to maximise the utility derived from their endeavours. For example, in the UK, those who use animals in research are obliged by the licensing process to face up to the ethical question of balancing the welfare of the animals involved with the benefits coming from the research. This “for the greater good” approach is a typically utilitarian view of the world.

Interestingly, few of the many other people who use animals, for example for food, are obliged by society to confront these moral questions.

As somebody who once held an animal research licence within the UK system, I was always concerned about getting the right balance. Even if researchers are utilitarian at heart, are they honest in their assessments of how their actions will, on balance, lead to good outcomes? Is it not arrogant to think that we can make an informed ethical choice in the present without true knowledge of what is going to happen in the future? Is the personalised utility often linked to how scientists are rewarded really correlated with the utility in the long term at the level of wider society? I suspect not, or, at least not very often.

This is why the consequentialism which is so strong within the ethical foundations of science needs to be built upon a duty to follow ethical standards. For example, we need to know what is “good” and we have a duty to work towards this. Otherwise, the temptation will be for researchers to rationalise about the future use of their discoveries without much consistency or justification and, as Robert Pirsig has said, rationality is a ghost. The end does not always justify the means and in some circumstances the end may never be justified whatever the means. In the current reward structure it is difficult for researchers to admit that “the right thing” might be to ignore the discovery altogether, not publish the scientific paper, not talk about it at all and to decide that some areas of research are off-limits because the risks of letting a genie out of its bottle are too great. This is associated in particular with “dual use”, but even non-nefarious use of research could have negative impacts.

In the context of pesticides, the grand counter-claim of the big agri-chemical companies to this kind of conservatism is that they feed the population of the world, largely through their support for chemically-based agriculture. This is built upon scientific research; the research budgets of some of these companies can be greater than those of developed nations. This has a strong positive moral resonance with the idea of research in the service of humanity. For them, the end – feeding the world – is the ultimate in utility but are these companies making ethical decisions along the way? I suggested to my correspondent that he could apply four tests to understand whether the method being used to deliver the moral imperative to feed the world was itself moral. Perhaps these kinds of tests could apply to any organisation funding research, not just those involved in agri-chemicals.

The first test is to ask how much profit those companies which are developing new pesticides might be willing to forego if they were to learn of a new technology which replaced their chemicals and which rendered them unnecessary. This is not completely hypothetical. There are other emerging technical and business models for food production, using other technologies like intensive horticulture, which could greatly reduce the use of chemical pesticides. Are the companies investing their profits in these alternative paradigms rather than inventing new chemicals?

The second test is to ask how directed the company is towards a pre-specified outcome. In other words, is the company open to new and surprising results which could change the course of their search for a better moral outcome? For example, if their interest is in a new generic test of pesticide toxicity to animals, would they be as open to having the researcher test some of their currently marketed pesticides, which might have gone through older and less advanced screening? There is risk in this for the company that some of those older products might be found to be unsafe, but they should still want to know.

Then the third test, which follows from the second, concerns the willingness of the company to declare openly, and publicly, contra-indications from new research which places doubts on one of their products already on the market.

The fourth test is a question for the researcher himself. Is there a strong argument to justify the research, beyond the vanity of the research itself? This means carefully following through potential resulting scenarios. For example, will the research genuinely benefit people in most need or will it add to the growing distance between rich and poor? This is a very hard question to answer for any researcher focussed on basic processes, but I think it is important and all researchers should continuously challenge themselves on this front. Would the researcher be prepared to defend robustly the research in front of a sceptical set of inquisitors?

I am sure others could find a different and perhaps better set of tests, but the point is that they need to develop specific tests to understand whether the pathway to the outcome of “feed the world” is moral, irrespective of the virtuous nature of the outcome.

One additional, and worrying, aspect of pesticide research involves what could be called negative consequentialism. This is where, no matter what the evidence might tell us, researchers have decided that the costs of pesticides to the environment and human health are too great and all they then need to do is design the experiments to show this. This questionable moral position is often driven by influencers who are rarely scientists but for whom negative consequentialism is deeply doctrinal. Some researchers are taken in by this and the moral imperative of “do no harm” can be more attractive to them than “feed the world”. In this case one virtue is allowed to undermine another, largely because the utility of each is undermined by the moral pathway used to achieve it.

In the end, individual scientists have to make their own ethical choices and be held accountable for them. As Sartre said, “Man is condemned to be free; because once thrown into the world he is responsible for everything he does.” Scientific research really is at the extreme and very raw end of existentialism.

With every innovation comes a risk and only those who are closest to the innovations – i.e. the researchers themselves – are able to assess the risks, but they rarely do so. Scientists cherish the ideals of being the arbiters of good science (something known as the Haldane Principle); in the same way, they need to assess the risks posed by science. Is this lack of explicit consideration of the risk itself unethical? In animal and some biomedical research it is seen in this way, so why not also in all other areas of research? Why should animal sentience or human embryos be the only stimulus for cost-benefit assessment? The community of researchers studying gene drives have demonstrated a strong sense of moral responsibility for their work by considering a priori what risks could emerge from their work and how they could be managed. Perhaps we need more of this.

A question still in my mind is how much residual responsibility should researchers, or more likely the professional bodies or institutions which represent the research community, have for calling out unethical uses of the technologies they have originated? They are very good at selectively emphasising the “goods” coming from scientific research[1] but do not take on responsibility for the “bads”. Like agri-chemical companies, it is not in their interests to do this, but is this a morally-justifiable position? Are the institutions which represent or teach science just as culpable?

So, are pesticides intrinsically bad? Probably not. It is how we develop and use them that is often bad. Perhaps this is a challenge to the regulatory process, but if ethically-founded research scientists had a greater say in how their technologies are used, then we might be in a better place than we are now. Asking whether it is ethical to research new pesticides, or anything else for that matter, will set us on a better moral pathway.

[1] See for example: The Scientific Century: securing our future prosperity. Royal Society Policy document 02/10, March 2010, DES1768. ISBN 978-0-85403-818-3.

The UK’s Latest Climate Projections

Ian L Boyd

We sometimes have a tendency toward trivialising important issues, while on the other hand it can be difficult to transmit the true gravity of some problems.

The latest Met Office analysis has shown that the temperature over the most recent decade (2008-2017) has been on average 0.3 °C warmer than the 1981-2010 average and 0.8 °C warmer than the 1961-1990 average. These seem like very small amounts because, as we all know, temperature can fluctuate by much larger amounts than this in just a single day. But when we hear that this means nine of the ten warmest years have occurred since 2002, the gravity of this suddenly becomes a lot more real.

Temperature effects tend to be cumulative. Putting this in simple terms, it might be difficult to tell the difference in temperature between two cups of tepid water where one is only 0.8 °C warmer than the other. However, if you immersed the roots of two identical plants in these two cups, the plant in the warmer water would grow measurably faster.

That’s because the rates of many chemical reactions in nature are temperature dependent – and even small temperature changes, applied over a long time, can make big differences.
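The compounding effect described above can be sketched numerically. The following is purely an illustrative toy, not anything from the Met Office analysis: it assumes a hypothetical daily growth rate and a Q10 of 2 (the common rule of thumb that biological rates roughly double for a 10 °C rise) to show how a sustained 0.8 °C difference accumulates over a growing season.

```python
# Illustrative toy (assumptions: 2% daily growth rate, Q10 = 2).
# Shows how a small, sustained temperature difference compounds
# when the underlying rate is temperature dependent.

def growth_after(days, daily_rate=0.02, delta_t=0.0, q10=2.0):
    """Relative size after `days` of compound growth, with the daily
    rate scaled by the Q10 rule for a temperature offset `delta_t` (degC)."""
    rate = daily_rate * q10 ** (delta_t / 10.0)
    return (1.0 + rate) ** days

baseline = growth_after(180)              # one ~180-day season at baseline
warmer = growth_after(180, delta_t=0.8)   # same season, 0.8 degC warmer

print(f"baseline: {baseline:.1f}x, warmer: {warmer:.1f}x, "
      f"difference: {100 * (warmer / baseline - 1):.0f}%")
```

With these assumed numbers the 0.8 °C offset changes the daily rate by only a few percent, yet over a season the warmer plant ends up on the order of a fifth larger — the point being the compounding, not the particular values.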

This example rings true as we publish the UK Climate Projections 2018 (UKCP18), the most up-to-date picture of how our climate could change over the next century – and the first update in nearly ten years.

These projections suggest that if we go on much as we are now, then by 2070 warming is likely to be 0.9 °C to 5.4 °C in summer, and 0.7 °C to 4.2 °C in winter. Again, this does not seem like much but it also shows that in future winters will be wetter, summers will be drier and weather extremes will be more common and possibly more severe.

If we use the hot summer of 2018 as our benchmark, there was a <10% chance of seeing this between 1981 and 2000. The chance has already increased to 10-20%. By mid-century this could be ~50%.

It is tempting to pick the lower or higher value in the range of these estimates, depending on how one feels about climate change. My advice is not to do this. The lower and higher values in these ranges are the least likely to occur. The most likely values lie somewhere in the middle.

The project which produced these projections has used the Met Office Hadley Centre supercomputer to run simulations of climate both forward and backward in time. Comparison with the historical climate, which has been measured independently, suggests how good the calculations are at tracking climate and this provides an insight into the reliability of projections to the end of the century.

The calculations are applied to a grid of 12 km squares covering the surface of the planet. This is necessary because of the connectedness of the climate across the globe; it’s impossible to accurately calculate the climate for the UK without also calculating the climate elsewhere as well.

The calculations are then repeated many times to create an ensemble of projections and this is what produces the range in expected values. The randomness in the climate means that we cannot be sure of exactly how the climate is going to evolve but we can be fairly certain that it will remain within a particular range.
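The ensemble idea just described can be illustrated with a minimal sketch. This is an assumption-laden toy, not the Met Office model: a simple "projection" of a fixed warming trend plus random year-to-year variability is repeated many times, and the spread of outcomes, rather than any single run, gives the range.

```python
# Toy ensemble (assumptions: 0.03 degC/yr trend, 0.1 degC annual noise).
# Repeating the same calculation with different random variability
# yields a range of outcomes rather than a single number.
import random
import statistics

def toy_projection(years=50, trend=0.03, noise_sd=0.1, seed=None):
    """Cumulative warming (degC) after `years` of a fixed trend plus
    random year-to-year variability. Entirely illustrative."""
    rng = random.Random(seed)
    return sum(trend + rng.gauss(0.0, noise_sd) for _ in range(years))

# An ensemble: 200 runs, each with different random variability.
ensemble = [toy_projection(seed=i) for i in range(200)]
cuts = statistics.quantiles(ensemble, n=10)
low, mid, high = cuts[0], statistics.median(ensemble), cuts[-1]

print(f"10th-90th percentile range: {low:.2f} to {high:.2f} degC "
      f"(median {mid:.2f} degC)")
```

Every run shares the same underlying trend, yet no two runs agree exactly; the honest statement is the range, which is why the projections above are quoted as intervals rather than single values.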

Of most significance is that the backwards projections of climate are reasonably consistent with the climate as we have measured it through the past 100 years. Challenging the calculated outcome with this real measurement of climate increases confidence that the calculations are valid. It follows logically that projections into the future should also have similar levels of validity.

It is comforting, although not terribly surprising, therefore, that the calculations end up making a reasonable prediction of climate trends and the emergent message is that the climate is warming and will go on warming. The main caveat is that we have to assume that the basic processes we use in the calculations are the same in the future as they have been in the past.

There is lots of debate among climate scientists about whether the physics and chemistry are correctly represented in these calculations. For example, there is uncertainty about how to include the effects of declines in Arctic sea ice. As in many other communities of scientists, climate scientists are involved in constant debate about these details and this leads to incremental refinements in the calculations.

These debates happen at all levels and include individuals who fundamentally disagree with each other as well as institutions which carry out the calculations slightly differently from each other because their own scientists hold a different view from those in other institutions. The good thing about these debates and disagreements is that they add to the rich picture of possible futures within the projections.

I was the chairman of the Board which provided governance oversight of the project which produced the latest climate projections for the UK. This board appointed a peer review panel with a chairman – Sir Brian Hoskins – who was very challenging towards the Met Office and its methods. He, in turn, made sure his panel was composed of climate scientists who could understand the complexities of the calculations but also challenge the projections where needed. It is not in the character of these scientists to hold back if they disagree about a technical or philosophical point in the methods.

I was keen to see the Met Office having its feet held to the fire and the Peer Review Panel did this very well. This robustness of underlying process and unfettered peer criticism is another reason why I can have high confidence in the projections.

Many different climate calculations have now been done, but, in spite of all the debate and disagreement on details, all these calculations project warming trends into the future. There is something important in this level of consistency across a diverse range of global scientific expertise.

This idea of consensus emerging in the context of divergent views is quite powerful. It suggests robustness to the projections of climate. Of course, it is possible that all climate scientists are cut from the same intellectual cloth and are, therefore, blind to other possibilities. But my experience of the process is that this is unlikely. Alongside these other independent assessments, UKCP18 is saying that the UK, as well as the rest of the world, is facing an increasingly difficult climate challenge. We need to adapt to this more quickly than most people realise, and this also involves changing our lifestyles to use less energy and, therefore, produce far less of the greenhouse gases such as CO2 and methane.

These projections show us a future we could face without further action, and will help businesses, industry, investors, local authorities and individuals plan for these changes and make decisions accordingly. We need to take heed, use them and adapt.


Reward and recognition for quality in science

I was recently invited to sit on a panel at a conference about research cultures run by the Royal Society.

The debate among the panellists showed general agreement about what a good research culture involves. The word “openness” was used a lot as was “trust”.

But the discussion exposed a paradox about quality in science. Whereas competitiveness was being praised as the driver of scientific inventiveness, a consequence of this competition – gaming, domineering, mendacious and sometimes dishonest behaviours – was being rejected. These behaviours compromised the openness and trust of science.

It was good to see these issues surfacing but I observed another problem connected with the paradox. Few people at the meeting seemed to clock the absence of government scientists, who are a large section of the scientific community and whom I represent. This suggested that science has a diversity problem which goes beyond BAME and gender bias. I suspect the commercial sector was equally under-represented.

Like many professions, science is full of narcissism and tribalism. It is dominated by self-defined elites and has institutions which are intentionally structured to pass this culture on down the generations. Prizes and other badges of recognition are handed out as stereotype reinforcement. The system is designed to sustain strong discrimination as a badge of quality.

Is this discrimination actually functional? To address this question perhaps we could start by agreeing what quality looks like. As the philosophical argument goes, you know quality when you see it. So who is making the judgement about quality?

The concept of “excellence” is used almost universally in science as the basis for decisions about who and what is supported. Indeed, influencing who defines the badge of excellence becomes important in science. Positioning oneself professionally closer to this centre of control is part of the game.

This concept of excellence is at the root of the quality definition problem. It says that only scholarship itself has self-recognition; only people who know can know, or (turning this around) those who are not classed by their peers as scholars themselves cannot recognise scholarship and are therefore ignorant about quality. It is mainly scientists of a certain type – the non-government, non-industry type – who judge the quality produced by scientists of all types.

I think many people would see this as being wrong in principle. There is a suspicion, for example, that the reward system is rigged to benefit some types of scientists as opposed to all types. For example, the current system for allocating public funding to scientific research is based mainly on a peer-based judgement of quality made by those who benefit from the funding. Arguably, if this existed in the commercial sector it would be branded as corrupt.

There are perhaps two different, but equally plausible, models for how to deliver quality in science. One of these models, which is generally the model rewarded by our current culture, involves peer-recognition of high scholarly quality and ground-breaking discovery.

The other model is much more egalitarian. There is a significant group of scientists delivering huge public good through their daily activities. They often operate in the government or public service but are rarely recognised. Others in industry do the same but are delivering public goods through the market.

These models – one involving the elitism of peer recognition and the other public service – could be seen as opposite ends of a continuum which also captures the contrasting motivation of scientists to generate mostly private as opposed to public goods. The whole spectrum is important.

However, the reward system, and the incentives for scientists, favour the elites – the competitive science entrepreneurs, or those who had a lucky break – while the rest of the talent needed to support the social benefits from science is undervalued. How could we change this?

First, we could ensure that all people, irrespective of their employment or background, have the right to apply for public money to conduct high-quality research. It may come as a surprise that, no matter how brilliant your ideas might be, you have to be a signed-up member of an elite to qualify for access to those opportunities. Strong vested interests lock others out of funding and out of the judgement about what constitutes quality. Even professional scientists who work for the public service are largely excluded.

Second, we need to be better at recognising and rewarding quality in science in its many forms. This will involve traditional performance metrics as well as peer recognition. But it needs to cover those who are at the start of their research careers (when most people have some of their best ideas), deliver synthesis, organise and fund research, stitch together research into innovative solutions, or who make sure scientific knowledge is properly integrated into the policies and actions of governments. We need a public debate about what actually constitutes quality in science.

Science needs to be a truly diverse profession providing equality of opportunity for everybody and this is very far from being the case at present. People should be judged on the quality of their ideas irrespective of who they are.

Reform needs to start with our institutions. A significant corrective intervention is needed to re-orientate these institutions toward supporting scientists of all types. This should include the learned societies but it needs to also involve government funding bodies and universities.

This isn’t about lowering standards. It is about broadening the church of science; it is about recognising that “excellence” reflects a lot of different qualities and is not purely self-defined by an elite. Society should be the judge of who is good and worthy of recognition in science, not just scientists themselves.

Can government scientists speak freely?

This is a speech delivered on 31st October 2018 to the Science Media Centre at the Wellcome Collection, London.

I want to start by stating categorically that if anybody thinks I am here to defend the suppression of free speech by anybody, including government scientists, then they would be dead wrong.

I stand here as somebody who works for government and who is unencumbered by any such constraint. What I am saying here has not been through any government filter.

But I want to provide a reasoned argument as to why government scientists need to be careful about what they say in public and why, in general, they are careful.

Government employs scientists to help it understand how to develop better policies and to help those policies function well. Some scientists perform particular operational tasks, such as carrying out fish stock assessments, whereas others have a broader, more advisory role.

A few are synthesisers, networkers and organisers who draw on the knowledge in the scientific literature and the skills in the wider scientific community. Many do research, because this is important as a way of sustaining skills and continuously improving the knowledge base upon which policies sit. Some even sit at the very top of the Civil Service.

Like any employer, government expects its scientists to abide by certain rules of behaviour. As in any work environment, scientists have both a contractual obligation to their employer and a social obligation to those with whom they work. They need to balance candour inside the work environment with candour outside it. Scientists need to know how to build trust on both sides of this divide.

These are also moral judgements based on the balance between duty towards one’s institution, and one’s colleagues, and duty towards informing the wider public about one’s work. This duty is balanced differently among industry, academia and government, depending on whether scientists are generating public or private goods.

In all cases it comes down to an individual morally-based judgement about whether it is right to speak openly about one’s work.

This type of balancing of commitments is part of our social contract, as John Locke or Jean-Jacques Rousseau would have explained it. Like all citizens, scientists accept obligations which restrict individual freedoms so that they can work for the good of civil society. Where scientists are prevented, against their better judgement, from speaking then the social contract has been broken. But it can be equally broken if they divulge information which leads to bad outcomes, even if they were unintentional.

The UK places no constraints on government scientists speaking about their work, other than for those who hold security clearances or during pre-election periods. Scientists are only expected to abide by the Civil Service Code, with its values of integrity, honesty, objectivity and impartiality.

Like it or not, government scientists are linked to a political process which is strongly scrutinised. In this environment, some issues can be magnified or twisted. How should scientists view the risks of this happening and how should they then mitigate those risks?

My own view, based on experience, is that one needs to be very cautious indeed. For scientists in government there is a very high risk that what they say will be used to manufacture a case against current government policy. This most likely politicises government scientists and potentially sets them against their own employer.

If government scientists could be sure that their views would be reported as straight, unabridged pieces then I am sure many more would step forward to talk openly. But the probability of this happening is quite small.

As a general rule, practising scientists need to stay out of politics. This applies as much to government as to non-government scientists. Scientists occupy a special position as the custodians and communicators of knowledge. If that knowledge is interpreted as advocacy for one view or another then the messages from science will not be heard by those who would benefit most from listening.

Being listened to, and believed, especially behind closed doors in government requires trust that the harsh messages sometimes being delivered will not reach the public domain.

Government employs communication professionals to advise about the interaction between government issues and the public. My advice to any scientist is to listen to them because they know a lot more about this than we do.

I can see why this dynamic could be portrayed as “gagging scientists”. But those who say this are unable to put themselves in the position of the government scientist, are actively trying to dredge for dirt, have little concern for the wider perturbations which could materialise (including misrepresentations of the true message), or care little about the human cost involved. For most people there is nothing worse than being at the centre of a political storm and I would not wish that on any fellow scientist.

I have a duty of care towards those who work for, and with, me and that duty means I have to be very cautious in my advice about how fellow scientists should engage with the press. I don’t think this is “gagging”; it is just good sense.

I am not denying that sometimes in government a line is crossed between supporting scientists to make their own judgements and forcing them to keep quiet because it is not in the interest of government to hear them speaking. I am also not denying that some other countries have systems of control which I would not agree with and which I would refuse to work within, but that is not the case in the UK.

Of course, there is a danger that systematic risk-aversion can become oppressive, pervasive and rooted in institutionalised cultures. Some of this does exist but I think its effect depends hugely on the quality and confidence of the leadership of those institutions.

Overall, government institutions find a good balance between supporting scientists’ freedom to speak, if they want to, and holding them to account if they break the rules of integrity, honesty, objectivity and impartiality. I am impressed by the way in which government scientists in the UK listen to advice, intelligently assess the risks and the moral arguments and come to their own decisions about how to behave.



Peering into the Arctic from a Defra Perspective



A photo of me in Ny Ålesund.


I spent a few days last month in Svalbard, bringing back memories of past Arctic experience doing research in Iceland and Alaska. Svalbard had been a place where many of my colleagues had worked. The outpost of Ny Ålesund, where I was headed, lies at nearly 80°N, tucked into a fjord on the north-western coast of the archipelago. It is the most northerly permanently inhabited settlement in the world and was the starting point for many early 20th Century expeditions to reach the North Pole.


An aerial photo of Ny Ålesund.





Many of the buildings in Ny Ålesund are of historical importance.



Ny Ålesund isn’t somewhere just anybody can stay. The settlement is an international research station built within an early 20th Century coal mining settlement. Although cruise ships visit and disembark passengers for an hour or two, it takes an invitation from one of the 11 countries which have a presence at Ny Ålesund if you want to stay there. Surrounded by mountains, glaciers and the Arctic Ocean, Ny Ålesund is a microcosm of both the Arctic environment and Arctic politics.

Norway has sovereignty over the islands but, through the Svalbard Treaty of 1920, other countries, including the UK, have a right of access. The strategic importance of the Arctic is such that lots of countries want to have a foothold there even if they have no land or coastline which impinges on the Arctic itself.

The politics of the Arctic stand out as one of the major drivers for the Ny Ålesund community but the scientific research being done there is at least as important and it is a common objective which brings together all the national interests. The natural gregariousness and openness of scientists makes for a relaxed and welcoming atmosphere.

Being close to the North Pole, Ny Ålesund acts as a location for downloading information from the polar-orbiting satellites monitoring the Earth’s surface. These tell us about atmospheric pollution, forest cover, ocean currents and much more. On a mountain above the station is a building perched precariously on a sharp ridge which can only be reached by a small cable car. This is used to sample truly clean air – one of very few places on Earth where this is possible. And further around the bay are a couple of radio dishes pointing to the heavens to measure the regular beat of pulsars in the distant universe. These act like beacons – reference points used to measure the drift of the continents to an accuracy of millimetres.

Then there are the glaciers nearby, which are retreating fast, both under the influence of climate change and because the glaciers in Svalbard go through regular multi-decadal cycles of surge and retreat. But their walls of blue ice are a dominant presence at Ny Ålesund – as are the icebergs which calve off them and float down the fjord past the settlement.

The fjord, which once froze over most winters, is now influenced by warm water from the Atlantic pushing its way north into the Arctic, so it rarely freezes over these days. Harbour, bearded and ringed seals, and beluga whales eat the polar cod which enter the fjord to feed on the rich food sources resulting from the fertilisation of the ocean by glacial dust. These are all subjects of study to unravel how the Arctic is changing as the climate gets warmer.

On shore, the permafrost, which stretches to depths of many tens of metres, is also warming and showing signs of melting. As the soils are churned by freeze-thaw for the first time in millions of years they release more greenhouse gases, thus exacerbating global warming. The immediate practical consequence at Ny Ålesund is that the foundations of buildings are beginning to move. It is perhaps ironic that the reason which first brought people to this remote spot – coal mining – is also the reason they keep coming back in the present day: to observe the effects of burning these fossil fuels on the planet.

By the time I visited towards the end of August, the bright summer flowers of the tundra had largely passed over and had been cropped flat by the unusual, short-legged variety of wild reindeer which inhabits Svalbard (see photo). The many waterbirds – geese, ducks and waders like purple sandpipers – which breed in these parts had largely departed for their wintering grounds, many around Britain, and were a reminder of the close connection which exists between this cold, treeless landscape and the biodiversity of Britain. The barnacle geese which breed on the offshore islands in the fjord are the very same ones that we value so much at the Caerlaverock nature reserve on the Solway Firth.


One of the wild reindeer in Svalbard looking very healthy at the end of the Arctic summer. Svalbard reindeer are very much smaller in stature than those elsewhere.


The links between Svalbard and Britain go even deeper. How we respond to the challenge of climate change is going to depend on what happens here in the Arctic. British scientists are investigating the effects of warming on this delicate ecosystem. This includes experiments to understand how the tundra will change as the Arctic warms up, potentially releasing more greenhouse gas into the atmosphere. The west coast of Svalbard is especially good as a sentinel because it is warming unusually rapidly. This is because warm Atlantic water coming through the Iceland-Faroes Channel far to the south is making its way further north and starting to bathe the west coast of Svalbard around Ny Ålesund. When the manager of the UK’s research station first started coming to Ny Ålesund, rain was unknown – all precipitation fell as snow. Nowadays rain is common.


The UK’s Arctic Research Station in Ny Ålesund.





The UK’s Arctic Research Station is operated by the Natural Environment Research Council.




There could be nowhere better to understand how climate change will change the Arctic. Sediment and ice cores taken locally place the current climate trends in the long-term historical context and show how unusual they are.

All this is affecting the economy of the Arctic as well. Tourism is beginning to turn Longyearbyen, the frontier town which is the capital of Svalbard, into a thriving centre. Only a few weeks before I visited, a Danish shipping company announced that it would be testing the idea of sending cargo vessels through the north-east passage, something which can only happen because of the retreat of the Arctic sea ice.

The icon of the Arctic, the polar bear, is also likely to be affected, and this is something people really care about. At Ny Ålesund, polar bears are respected if not feared. I had the mixed fortune to run into a mother and cub (see photo). This is the ultimate predator, truly at the top of the Arctic food chain, where humans are relegated to second place – a powerful message to me as a representative of a species so dominant elsewhere.


Polar bears in Ny Ålesund.


By now, I hope it is clear why Svalbard is important to Defra. The credibility of the UK as a leader in tackling climate change needs to be underpinned both by research examining its effects and by the presence of the UK in the international fora where decisions are made. Moreover, important components of the biodiversity of Svalbard are shared with the UK. And I have yet to meet anybody in the UK who does not care about the fate of polar bears. At present, they seem to be doing well in Svalbard, partly because of reduced hunting, but they are being confined to land more than in the past, which means they hunt birds and their eggs. This will probably affect those populations eventually.

This is why we should care about Svalbard and the Arctic and why the UK needs to remain interested in its future. The Foreign and Commonwealth Office, together with the Natural Environment Research Council, maintains the UK’s research station at Ny Ålesund, but I think Defra should be fully engaged too as it develops a more expansive leadership role in international environmental stewardship in future.


Seeking Hi-Brazil

In some senses we all seek a promised land, like the medieval fable of Hi-Brazil, a phantom paradise island. For some, this might look something like “Love Island”, a fly-on-the-wall reality show currently screening on UK television (I’m pleased to confess I’ve never watched it). For others, like me, it represents something a little grittier.

In my view the operation of government – in terms of how it comes up with ideas about how to fix problems, experiments with solutions, evaluates the outcome and then modifies the solution based on experience – is much the same as the scientific process. Building the scientific process into government and making it the backbone of how government functions is my Hi-Brazil.

Like all promised lands it will be fictional and like the medieval mariners who hunted for Hi-Brazil, CSAs like me also hunt, mostly in vain, for their nirvana. But even CSAs can sometimes spy a distant land through a thick fog and can start to believe that it might actually exist. What does this land look like? On this land, there is a completely harmonious, seamless relationship between academia and government.

By way of confirmation of the sighting of this land, the Institute for Government (IFG), with part sponsorship from the Arts and Humanities Research Council, recently published a report called “How government can work with academia”. It looked at how government can improve the way it uses academic evidence and expertise in informing policy.

It is always helpful to have an external view, such as that given by the IFG, on how the policy-science interface is working. Those looking in from the outside are often well placed to offer challenge to government. The report was based on interviews in 10 government departments; it sheds light on what works and what doesn’t, and makes a series of recommendations. Overall, Defra fares very well in the report and is seen as an exemplar in a number of areas, including its use of structured and responsive expert networks and committees, its systems for managing university relationships, and its approach to bringing in secondments to deliver valuable work (including evidence statements) while ensuring they develop insights into policy.

This is all very encouraging and is a tribute to the hard work and a gradually shifting sense of shared responsibility between the people who work in Defra – including those from my office and other scientists, analysts, social scientists and economists who are working in the same teams as policy professionals. This embedded model, which places specialists at the heart of policy-making and empowers them to have an equal share of the responsibility for policy development and delivery, is beginning to shine through in terms of better relationships with academia.

It takes years, perhaps decades, to turn around the massive ship of government, by changing cultures and ensuring that diversity of expertise is valued and built into decision-making. This shift isn’t the same in all parts of government but I think it’s particularly strong in Defra.
For example, Defra’s engagement with UK Research and Innovation, the new organisation responsible for overseeing the health of research in Britain, together with its component Research Councils, has influenced the way scientists are thinking about how they might address questions which are scientifically interesting but which also address the issues vital to making better policy. Individuals from Defra, who have a strong sense of what great science looks like, participate in Research Council programme expert and advisory groups, knowledge exchange network events and other fora. They also work closely with senior researchers to shape their own ideas and those of the research community.

Gone are the days (I hope) when Defra’s representatives came to these discussions with an agenda. Those sitting on the policy-academia interface now include both academics and the embedded specialists who work in Defra. They are exploring cognitively complex issues in a way which eventually leads to the co-design of research and policy; one alongside the other rather than one subservient to the other. We have moved in this direction recently on pollinators and valuing natural assets, and there will be more on landscape decisions and air quality in the near future.

This is all about having interactive, engaging and influential conversations. The short-term rewards for academics can be pathways to impact which they can exploit within the Research Excellence Framework, but the end point is much more significant. This involves better outcomes supported by a very influential set of thinkers and networkers who feel that they are part of delivering those outcomes. The outcomes are valued.

I still come across some crusty academics who see their role as a battle to keep government in check, and policy specialists who see academics as mendacious meddlers. But they are fewer and further between now, and getting rarer all the time. While there is still much to do to make government think and function much more like a scientific process, I see the IFG report as a partial endorsement of progress. My Hi-Brazil is still partially in the fog, but not as much as in the past.