Category Archives: environment

Tough choices around the costs and benefits of nanotechnology

A new report on a public dialogue on nanotechnologies has been published today, 26 May.

Technological innovation depends on science, both to provide the innovation itself and to give assurance that its benefits outweigh its costs. But when does an innovation become a risk? For most of the long pathway from an innovation emerging to its mainstream adoption in our lives, we tend to focus on the benefits. Often it is only at the eleventh hour that some of the costs become apparent. But does it have to be that way? In my view, greater investment in understanding the basic science of risk and its communication is needed well in advance, to head off this problem.

Nanotechnology is grounded in an understanding of how materials behave at very small sizes, and has had a long lead time. In 1857, Michael Faraday investigated the action of light on very thin films of gold and noticed that the fluid used to wash these films became ruby red, deducing that this was suspended gold. The particles were about 50 nanometres in diameter – about 1/2000th the width of a human hair. The fact that they were red, rather than gold-coloured, shows how nanomaterials can behave differently to larger pieces of the same material.

Compared with larger particles, nanoparticles can interact differently with light, have different electrical properties, or show different chemical reactivity. Their surface area is huge compared with their volume, so most of their mass interacts directly with the outside world. This is what makes them so reactive. Their small size also allows them to reach places other particles simply cannot, such as the inside of individual cells of organisms.
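To put a number on that surface-to-volume effect, here is a minimal sketch in Python. The 50-nanometre figure echoes the Faraday example above; the millimetre-scale comparison grain is an arbitrary assumption for illustration.

```python
import math

# Surface-area-to-volume ratio of a sphere:
# SA/V = 4*pi*r^2 / ((4/3)*pi*r^3) = 3/r,
# so the ratio grows rapidly as a particle shrinks.
def sa_to_v(radius_m: float) -> float:
    """Return surface area divided by volume for a sphere of the given radius."""
    surface = 4 * math.pi * radius_m ** 2
    volume = (4 / 3) * math.pi * radius_m ** 3
    return surface / volume  # simplifies to 3 / radius_m

nano = sa_to_v(25e-9)     # a 50 nm diameter particle, as in the text
coarse = sa_to_v(0.5e-3)  # a 1 mm diameter grain, chosen only for comparison
print(f"The nanoparticle has {nano / coarse:,.0f} times more surface per unit volume")
```

For a 50 nm particle against a 1 mm grain this works out at 20,000 times more surface per unit volume, which is why so much of a nanoparticle's material sits at the surface, available to react.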

Nanoparticles are derived from a range of metals, alloys and compounds. They have applications in everything from medicine to helping integrated circuit designers increase memory storage capacity on computer chips. Nanotechnology is becoming an integral part of our lives and we hardly know it.

The potential of nanotechnology is enormous, but what are the risks? If nanoparticles are capable of entering cells, or of disappearing into the environment never to be recovered, how can we be sure that the benefits they bring will not rebound on us with some negative impact? It’s also one thing to produce nanoparticles intentionally and to control their release; it’s quite another to produce them unintentionally, as a by-product of some other process.

There is a clear need to understand what people think about these issues and where the challenges lie. It is the combined role of government, industry, researchers and NGOs not only to communicate science to a broad audience, but to engage citizens in dialogue and capture what we understand to be the potential benefits and costs of these technologies. People are often content to pay for initial research into technologies like ‘nano’ because they understand where the benefits might lie. It is much harder to persuade people to fund research into what the downsides of the technology might be, even when the uncertainties are truly daunting.

A new, qualitative public dialogue commissioned by Defra and carried out in conjunction with (and co-funded by) the organisation Sciencewise, as well as industry, set out to find out how comfortable people are with specific applications of nanotechnology. By focusing on nano-based products, such as sunscreens and paints, the deliberation process sought to explore the motivation behind people’s views and perceptions.

The report, released today (26 May), highlights the importance of communicating to the consumer what is in a product. People like to know what they’re buying, and don’t like to be forced to consume ‘by stealth’. Nanoparticles have been used in sunscreens for many years, yet sunscreens are one of the applications consumers are most wary of. Citing a lack of clarity over what such products contain, participants were concerned that something used on the skin, especially when applied to young children by their parents, could be taken up by the body. It was also thought that nanoparticles from sunscreens could enter watercourses and behave in unknown ways.

This negative opinion of nanoparticles in sunscreens stemmed largely from the fact that the negatives were not sufficiently balanced by the positives (prevention of skin cancers). Consumers could not see why nanoparticles should be more effective at blocking UV rays, revealing a deficit of understanding about why nanoparticles work in such a product.

Nanoparticles can also be used to remediate contaminated land, and this too raised perceptions of risk. While participants agreed that removing contamination was a worthwhile purpose, there was a concern that nanoparticles would remove one deleterious substance only to replace it with another, even though there was nothing in this case to validate the concern. The future impact was felt to be difficult to predict. The lesson of CFCs weighed heavily in people’s views: CFCs were once ubiquitous in refrigeration and as aerosol propellants, but were subsequently discovered to be the main cause of stratospheric ozone breakdown.

Participants were much more positive and accepting about the use of nanoparticles in paints and coatings, especially if new properties, such as antimicrobial action or greater durability, could be introduced. Their concerns over disposal were no greater than for other, non-nanoparticle paints, which often require careful disposal anyway. The onus was seen as being on the consumer to read product labels and advice, and to dispose of waste paint properly. Likewise, nanoparticles used as a fuel additive to reduce emissions were welcomed: pollution from cars was perceived as such a large problem that any risks of reducing it using nanotechnology were, in the view of the participants, outweighed by the benefits.

Participants judged that responsibility for dealing safely with nanotechnologies, as with any technology, is shared between government, industry and the individual. Outside this triangle, NGOs provide scrutiny. Crucial to any dialogue, however, are robust and clear channels of communication that serve not only to educate audiences, but also to seek their voice when formulating policy and regulation.

One issue that does concern me, however, is the extent to which we have the capacity to control the uptake of new technologies such as nano-based paints and sunscreens. The Montreal Protocol showed, in the case of CFCs, that concerted global action is possible when presented with overwhelming evidence of negative impact. But in cases where evidence of potential damage is lacking, or where there are significant asymmetries between the winners and losers concerned with a new technology, the power of profit motivations could overwhelm any wish to be precautionary. If only we invested as much in environmental science as we do in developing new technologies, we might be in a better position to judge where the costs and benefits of those technologies lie, and to design their use in ways that maximise the pay-off.

These kinds of open dialogues provide rich and nuanced insights for scientists, industrialists and regulators about how much more work they need to do to communicate what is and is not known about the risks and benefits of emerging technologies. Honesty in this communication is vital. Ideally, we need to communicate information to people in ways that allow them to make informed decisions and choices. When the costs and benefits are too difficult to express in these ways, government needs to adopt precaution and regulate based on information derived from dialogues like this one.


UK’s cutting-edge science informs government response to ash dieback

The government often has to deal with difficult problems, and ash dieback disease has been no exception. Ash dieback is a fungal disease likely to have arrived in the UK through a mixture of infected planting material and spores blown over from infected trees in continental Europe.

The pathogen causing this disease, Hymenoscyphus fraxineus, was not formally identified until after it began seriously affecting trees in eastern Europe in the early 1990s. Even then, little was known about the pathogen that might help develop management strategies. Meanwhile, the disease continued to spread across Europe before being identified in East Anglia in 2012. Its arrival and the subsequent public interest demonstrate that trees, woodlands and forests hold a special place in our nation’s hearts.

There are an estimated 126 million ash trees in British woodlands over half a hectare in size, and many more in our parklands, hedgerows and cities. Ash is the third most prevalent broadleaved species in GB woodlands, at 9%, and the fifth most prevalent of all trees, at 4%. Forests are estimated to contribute £1 billion to the UK economy, with even greater environmental and social benefits. As one of our native trees, ash is an important part of the forest ecosystem, supporting a huge range of biodiversity from lichens and mosses to invertebrates and birds; forty-six species are found only on ash trees. So protecting ash is about more than just protecting a single species.

After the disease was discovered, Defra worked with the Biotechnology and Biological Sciences Research Council to establish two research projects to improve our understanding of it. Experience in Europe showed that some trees were more susceptible to the disease, developing symptoms and dying more quickly, while others were less affected. This gave hope that some trees in the UK might be tolerant of the disease, and identifying them became one of Defra’s commitments in its response.

The Nornex project, which published its final report last Friday (22 April), used molecular approaches to improve our understanding not only of the disease but of the ash tree itself. The research has enabled us to develop genetic markers that signal tolerance to the disease, just as quality of plumage can signal biological fitness in birds. Tolerance was assessed in a selection of 182 Danish ash trees, which were scored for visual signs of disease; these scores were then set against the extent to which specific genes were active. Three genetic characteristics appear to be important signals of resistance, and variability in susceptibility may be caused by how two genes interact.
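To illustrate the kind of association test involved, here is a minimal sketch using entirely hypothetical data: a visual damage score per tree is compared against the expression level of a single candidate gene. The variable names and numbers are illustrative assumptions, not values from the Nornex study.

```python
# Illustrative sketch: testing whether a candidate gene's expression is
# associated with visual disease scores across a panel of trees.
# All data below are simulated, not from the Nornex study.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_trees = 182  # size of the Danish ash panel described above

# Hypothetical inputs: a 0-100 visual damage score per tree, and a
# relative expression level (arbitrary units) for one candidate gene.
damage_score = rng.uniform(0, 100, n_trees)
expression = 50 - 0.3 * damage_score + rng.normal(0, 10, n_trees)

# Rank correlation is robust to non-linear but monotonic relationships.
rho, p_value = spearmanr(damage_score, expression)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")
```

A strong negative correlation in data like these would suggest that trees expressing the gene more highly show less visual damage, which is the pattern a tolerance marker would be expected to show.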

This new knowledge is a great step forward and illustrates the benefits that cutting-edge science can bring to real-life problems, made all the more impressive considering the project ran for only about two years. Two advances made this possible: the reduced time needed to sequence a genome, from years to hours and at a fraction of the cost; and the open and collaborative approach taken by the research team.

The project also made use of a Facebook game, Fraxinus, designed to harness human pattern-recognition skills to identify DNA sequence variations. The game was played more than 63,000 times and yielded many reliable new sequence variants.

The research, led by Professor Allan Downie from the John Innes Centre in Norwich, was delivered by a consortium including: the University of York; The Genome Analysis Centre; the University of Exeter; Fera Science Ltd; the University of Copenhagen; Forest Research; the Sainsbury Laboratory; East Malling Research; the Forest and Landscape Institute Norway; and the University of Edinburgh.

The Nornex project’s research report can be found here: http://oadb.tsl.ac.uk/?page_id=964

A forensic approach to the environment

Nearly one quarter of the UK’s net worth is accounted for by the environment, so understanding how we assess it, its benefits as well as its risks, is vital to preserving it. This process – which we call ‘environmental forensics’ – was the subject of my recent contribution to the Government Chief Scientific Adviser Sir Mark Walport’s annual report on forensics.

In my chapter I summarised recent work under the National Ecosystem Assessment and the Natural Capital Committee to improve how we evaluate the environment. We all carry the costs of the environmental decisions of those around us – we eat, drink and breathe other people’s pollution on a daily basis. At the same time, we rarely consider when driving our cars or firing up the wood-burning stove that our actions could lead to premature deaths. We take the benefits without thinking of the costs. That is why regulation is important: without it there would be a large asymmetry between the private benefits gained from the environment and the public costs. It has become the responsibility of governments to sustain an appropriate balance between these public and private costs and benefits. But as governments are often reluctant to place cost burdens on those who cast votes, we need a mechanism that transfers responsibility for paying the costs to the individuals who benefit.

The rationale for setting environmental standards and measuring compliance is strongly driven by the concept of equity. Around half of the air pollutants in the UK come from other parts of Europe – and, of course, the UK contributes to the air quality problems of other European countries. Water contaminated by sewage washed out to sea can contaminate seafood, which could be distributed widely through the food chain. The choices people make about how to dispose of waste can have widespread effects, sometimes with long time lags between the release of pollutants and their ultimate effect, and this has become an issue driving global politics when it comes to different national responses to the need to reduce carbon emissions.

Government regulation to prevent the misallocation of environmental resources is therefore a very blunt instrument. Regulation has spawned an industry in environmental data measurement: the UK is mandated to measure an immense amount of information about everything from the chemistry of rivers to the number of birds on farmland and the noise emitted by human activity in the ocean. Efforts to focus attention on measuring only those features of the environment that matter have been hampered by a lack of underlying knowledge of how these relate to the benefits we gain from the environment. The rationale for actions like this hinges on the risk-avoidance approach commonly used today, which suggests that changes caused by human presence must be avoided even if they lie within the normal range of natural variability.

Seen in this context, the direction of travel in environmental forensics towards measuring and controlling more and more – at finer and finer levels of detail just in case this might be important in future – is clearly untenable.

The measurement and monitoring of environmental indicators was initially driven by a sincere search for those surrogate indicators within the environment which most effectively represented societal valuation. But this has gradually mutated into a process of measuring and reporting data as an end in itself.

In future, the balance needs to shift towards risk- and market-based methods. New technology has the capacity to drive this change because it puts the power of information in the hands of individuals, so they can make informed decisions.

There will always be a need for regulation and statute in this field, and a strong role for government, but the nature of environmental forensics needs to change: the current system is arguably unaffordable. Technological innovation will come to the rescue to some extent by delivering more precise data at the point where it affects behavioural choices.

The downside associated with the interpretative nature of decision-making needs to be addressed through sophisticated information-delivery processes. Micro-innovation at the source of environmental variables needs to be matched by macro-economic innovation to build market-based solutions. Internalising the economic costs of alternative actions for the environment and accounting for them, including providing the forensic evidence to support this method, is the most likely way forward.

Earth Observation: on the cusp of a revolution

New technology tends to trickle into our lives. It arrives with an explosion of excitement and promise, but a steady journey then ensues as the much-vaunted tech becomes developed and ubiquitous enough to transform our expectations and truly revolutionise our world. When it comes to satellites and the data we get from them, we have made stunning progress on many such journeys, with pause-able high-definition TV and navigation systems on phones now very much the norm. However, after its beginnings in the 1970s, the Earth Observation journey – the journey to use data from above the clouds to revolutionise our understanding of our planet – is so far less travelled. But this may be about to change…

A few TV sets ago I took the plunge and installed a satellite dish on my roof (mine is discreetly hidden behind a vigorous Clematis montana). Satellite TV was new and exciting, but in truth, when I plugged the dish into my TV and turned it on, the fundamentals hadn’t changed – it was still more or less the same experience, just with more channels and marginally better picture quality. Now, in 2015, the massive increase in the data we can get from satellites, coupled with vastly increased data flow on the internet, has transformed our TV watching: it’s the norm to have hundreds of channels of high-definition pictures beamed to our TVs, we can pause and rewind live TV, and we can catch up on programmes ‘on demand’ whenever we want. While the fanfare came as satellite dishes were first installed on our roofs, it is far more recently that satellites have truly ‘revolutionised’ our TV watching.

It’s just the same with satellite navigation. In the early days it was just a privileged few who could (just about) rely on sat nav systems built into their high-end cars to get them from A to B. But now, in 2015, the sat navs most of us have built into our smartphones have capabilities far exceeding the original cumbersome in-car systems, from telling us when the next bus is coming to integrating live traffic information to tell us, at each turn, the current quickest route to our destination. To me, the revolution really came when sat navs became ubiquitous, reliable and highly featured, not when they first arrived on the scene.

So satellites have steadily transformed how we access information and how we get around, but communications and positioning are just two of the three major functions supplied by satellite space technologies. The third is observation, and this is an area where we haven’t yet seen the same sort of seismic shift in capability, the same revolution.

Generally known by the jargon term Earth Observation, or just EO, this is about using data from satellites, and even from unmanned aerial vehicles (drones), to help us understand more about our world. The journey began with the launch of the US Landsat system of satellites in 1972. Once positioned, it began collecting pictures that gave an eagle-eye view of what covered the surface of the Earth – crops, grasslands, forests, lakes, rivers, mountains and ice – as we’d never seen it before. The possibilities and opportunities opened up by this data seemed limitless, providing invaluable information about natural resources, land, roads and infrastructure to help us build capabilities in the most efficient ways possible and help us protect our environment.

But while the journey started in 1972, we’ve been struggling ever since to know how to deal with this avalanche of data and to turn it into useful information. We have launched more and more EO satellites in the belief that, one day, our ability to assimilate and process all the data they chuck at us will catch up. Now, finally, I think we have. The reason? A willingness to share.

A willingness to share

In the past, the only way to access the information within the data transmitted from EO satellites was to obtain a digital image, often by paying a lot of money for it, and then give it to another kind of techno-geek to process the information it contained. This was expensive, and the end result did not always answer the need. However, the world of EO has changed. Thanks partly to enlightened attitudes on the part of those now responsible for operating these EO satellites, most of the data from them is now being made free at the point of use. For example, all the data from the new Copernicus satellite system funded by the EU and the updated Landsat system funded by the US is now freely available to anybody who wants it, and China is taking a similar approach. While previously this would just have led to the problem of data overload (or ‘data poisoning’ as I sometimes call it), the simultaneous revolution of cloud computing enables the multiple petabytes of data that emerge from these systems (Copernicus chucks around 8 terabytes of data at us each day) to be stored online and be available, anywhere in the world, at the press of a button.
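As a rough sense of scale, here is a back-of-the-envelope sketch of how the 8 terabytes a day quoted above accumulates into petabytes (using decimal units; the annual total is approximate):

```python
# Rough arithmetic for the Copernicus data rate quoted above.
TB_PER_DAY = 8                    # terabytes per day (figure from the text)
tb_per_year = TB_PER_DAY * 365    # ~2,920 TB per year
pb_per_year = tb_per_year / 1000  # ~2.9 PB per year
print(f"{tb_per_year} TB/year, or about {pb_per_year:.1f} PB/year")
```

So a single system at this rate adds roughly three petabytes a year, which is why cloud storage, rather than shipping images to individual users, is what makes the data practically usable.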

The new culture of ‘sharing’ has of course not emerged solely as a result of ‘caring’. The market, including many small companies but also some of the big international aerospace and data companies, is latching on to new business models for delivering the data. In the past, when you accessed an image of the surface of the Earth, most of the data you bought was irrelevant and would be thrown away. In the near future, users will only need to pay for the data they actually use, which could reduce the cost of the same piece of information by many thousands of times. The development of new apps will mean there are many more users, so, rather than charging a very small number of specialist users a lot of money for access to the information, the business model is for those supplying the services to recover their costs by spreading micro-payments across many millions of users – payments so small that each individual user will hardly notice them. Information that probably cost many tens of thousands of pounds to produce in the past, and was in the hands of just a few people, will cost fractions of a penny in the future and be in the hands of millions of people.
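To make the cost-spreading arithmetic concrete, here is a minimal sketch with purely hypothetical figures; neither the production cost nor the user count comes from the text:

```python
# Hypothetical micro-payment arithmetic: both figures below are
# illustrative assumptions, not actual market prices or user counts.
production_cost_gbp = 50_000   # assumed cost to produce one dataset
users = 10_000_000             # assumed number of app users sharing that cost
cost_per_user = production_cost_gbp / users
print(f"£{cost_per_user:.4f} per user, i.e. {cost_per_user * 100:.1f}p")
```

Under these assumptions each user pays half a penny: small enough to go unnoticed, while still recovering the full cost of production across the user base.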

A simple change of attitude and approach has turned EO on its head. While the Earth Observation revolution may have officially started in the 1970s, I think it is now, thanks to the new spirit of openness, that Earth Observation data can truly start to revolutionise our understanding of our world.