Welcome to SESYNC
The National Socio-Environmental Synthesis Center (SESYNC) is dedicated to accelerating scientific discovery at the interface of human and ecological systems. We support new interdisciplinary collaborations that pursue data-driven solutions to pressing socio-environmental problems. SESYNC features a range of services from project inception through results dissemination, including supporting the team science process, meeting planning and facilitation, travel and logistical support, and cyberinfrastructure resources. SESYNC is funded by an award to the University of Maryland from the National Science Foundation. Learn more about SESYNC.

SESYNC Welcomes New Advisory Board

The National Socio-Environmental Synthesis Center (SESYNC) recently welcomed a prominent group of academic, nongovernmental, and corporate leaders to its external advisory board. Composed of leaders in the social and natural sciences, the advisory board provides guidance and support to the Center and its mission.

The External Advisory Board members include:

High Atmospheric Carbon Dioxide Levels Threaten Coral Reefs and People

As atmospheric carbon dioxide (CO2) levels rise, very few coral reef ecosystems will be spared the impacts of ocean acidification or sea surface temperature rise, according to a new analysis. The damage will cause the most immediate and serious threats where human dependence on reefs is highest.

A Co-creation Challenge: Aligning Research and Policy Processes

November 3, 2016

Katrin Prager, guest contributor and member of SESYNC's Co-creative Capacity synthesis team.

By Katrin Prager

This blog post originally appeared in the Integration and Implementation Insights blog (http://I2Insights.org) as “A co-creation challenge: Aligning research and policy processes,” and is reposted with the author’s permission.

How does the mismatch between policy and research processes and timelines stymie co-creation? I describe an example from a project in the German state of Sachsen-Anhalt, along with lessons learnt.

The project, initiated by researchers, aimed to use a more participatory approach to developing agri-environmental schemes, in order to improve their effectiveness. Officers from the Agricultural Payments department of the Sachsen-Anhalt Ministry for Agriculture were invited to participate in an action research project that was originally conceived to also involve officers from the Conservation department of the same ministry, farmer representatives and conservation groups.

An initial meeting with the Agricultural Payments officers, to determine the focus of the participatory study, identified a problem with payments for grazing special conservation areas as their key concern. They needed to find a way to maintain payments to shepherds to graze special conservation areas. Shepherds relied on these agri-environmental payments to earn a living from grazing management, but recent changes in regulations no longer allowed payments for grazing in protected areas – where paradoxically the benefit of grazing for conservation outcomes was highest.

However, the research team had no expertise in legal issues or scheme design. What we could offer was a tool to optimise the allocation of budgets. Even though this could not help the government officials with maintaining payments to shepherds, the officials recognised potential benefits of the tool for a different problem, namely in anticipated negotiations with farmers’ associations to redistribute and reduce agri-environmental scheme budgets in the next planning period. This seemed to be their key motivator to cooperate with the researchers and to make internal budget figures available.

The researchers had to compromise by allowing the workshop participants and timeline to be determined by the ministry. The Agricultural Payments department needed the negotiation process with farmer representatives to be undertaken shortly after the project started in order to meet the timelines for scheme revisions set by the European Commission, the federal ministry and the state ministry.

This impacted the research process, which aimed to combine facilitated communication with a highly structured mathematical model in a series of workshop meetings. The facilitated communication was intended to support fairness and transparency in the process, and to resolve any potential conflicts. The purpose of the mathematical tool was to structure and visualise the issue (budget allocation), scrutinise different scenarios, and thereby increase the transparency and efficiency of the process.

There was simply no time for the initially planned analysis of the ex ante situation and relevant stakeholders, nor for running more than two joint workshops. This meant that many decisions had already been taken before the first workshop, such as selecting the individual measures to consider in the model and setting some restrictions (e.g., upper and lower budget limits per measure). However, scheme objectives and further model restrictions were jointly discussed and agreed at the first workshop, and weightings for the model were developed through a Delphi-style exercise.
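
The post does not describe the tool itself in detail, but as a rough, hypothetical sketch, a weighted budget-allocation problem of this general kind can be posed as a small linear programme: maximise a weighted score over per-measure allocations, subject to lower and upper limits for each measure and an overall budget ceiling. The measure names, weights, and limits below are invented for illustration.

    # Hypothetical sketch of a weighted budget-allocation model of the kind
    # described above: maximise the weighted value of per-measure allocations,
    # subject to lower/upper limits per measure and a total budget ceiling.
    # Measure names, weights, and limits are invented for illustration.
    from scipy.optimize import linprog

    measures = ["grazing", "buffer_strips", "late_mowing"]
    weights = [0.5, 0.3, 0.2]      # e.g., agreed through a Delphi-style exercise
    lower = [1.0, 0.5, 0.5]        # minimum budget per measure (million EUR)
    upper = [4.0, 3.0, 2.0]        # maximum budget per measure (million EUR)
    total = 6.0                    # overall budget ceiling (million EUR)

    # linprog minimises, so negate the weights to maximise the weighted allocation.
    result = linprog(
        c=[-w for w in weights],
        A_ub=[[1.0, 1.0, 1.0]],    # sum of allocations...
        b_ub=[total],              # ...must not exceed the total budget
        bounds=list(zip(lower, upper)),
        method="highs",
    )

    for name, allocation in zip(measures, result.x):
        print(f"{name}: {allocation:.2f} million EUR")

Even a toy version like this makes the point concrete: the tool’s value lay in structuring and visualising trade-offs between measures and scenarios, not in prescribing a single answer.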

Instead of genuine co-creation, the process was shaped by the particularities inherent in bureaucratic organisations, especially at the state level. As well as the restrictions already described, the flow and distribution of information from the ministry was poor and constrained what could be entered into the model; power issues played out in terms of what information ministry staff shared at the workshops and with whom; and the hierarchical, sector-oriented focus of the bureaucracy meant that some relevant stakeholders (especially the Conservation department and conservation groups) were not invited to participate.

Nevertheless, the participants were initially satisfied with the process, which can be attributed to the facilitation generating a level playing field during workshops and to the transparency afforded by the mathematical model. Beyond the workshops, however, disappointment set in for the non-governmental stakeholders as the usual power structures came into play, with ministry officials choosing to disregard the recommendations produced at the workshops.

We concluded that research is not set up to accommodate the requirements of policy making, in at least three ways:

  1. There may be an inherent difficulty in matching the actual needs of policy makers (in this case the payments to shepherds) with the interests and expertise of researchers;
  2. Researchers usually cannot deliver evidence within the short time frames of policy making, which are often a matter of days; and
  3. Introducing the ‘ideal’ inclusive communication process may clash with policy timelines and with an unwillingness to cross departmental boundaries.

We learnt that co-creation between researchers and bureaucratic organisations needs supportive gatekeepers and the opportunity for longer-term involvement so that trust can be built and opportunities for mutually beneficial co-creation can be seized.

It is extremely useful if researchers are able to recognise power structures and their impact on co-creation, although there will be cases where there is little researchers can do to mitigate this impact. For example, the relevant stakeholders are unlikely to be motivated to contribute to a co-creation process for which the initiators have already decided the result.

Have you found ways to align research and policy processes, and to create the necessary flexibility in research project funding? I’d love to hear about them.

For more information:

Prager, K. and Nagel, U. J. (2008). Participatory decision making on agri-environmental programmes: A case study from Sachsen-Anhalt (Germany). Land Use Policy, 25(1): 106–115.

Biography: Katrin Prager is a senior social scientist at the James Hutton Institute in Aberdeen, Scotland. She is involved in inter- and transdisciplinary research on agri-environmental policy making and implementation, collaborative landscape management, community engagement and farmer adoption of conservation practices. Katrin investigates these topics through the lens of institutional analysis, knowledge management, adaptive capacity and organisational behaviour. She is a member of the Co-Creative Capacity Pursuit funded by the US National Socio-Environmental Synthesis Center (SESYNC).

Should I Trust that Model?

November 3, 2016

Val Snow, guest contributor and member of SESYNC's Core Modelling Practices synthesis team.

By Val Snow

This blog post originally appeared in the Integration and Implementation Insights blog (http://I2Insights.org) as “Should I Trust that Model?” and is reposted with the author’s permission.

How do those building and using models decide whether a model should be trusted? While my thinking has evolved through modelling to predict the impacts of land use on losses of nutrients to the environment – such models are central to land use policy development – this under-discussed question applies to any model.

In principle, model development is a straightforward series of steps:

  • Specification: what will be included in the model is determined conceptually and/or quantitatively by peers, experts and/or stakeholders and the underlying equations are decided
  • Coding: the concepts and equations are translated into computer code and the code is tested using appropriate software development processes
  • Parameterisation: here the values that go into the equations are determined by a variety of methods
  • Testing: the model is compared against data using any of a wide range of metrics, the comparisons are examined and the fitness of the model for the intended purpose or scope is decided. Bennett and colleagues (2013) provide an excellent overview of the variety of statistical approaches that can be used for this purpose; a minimal sketch of such metrics follows this list.
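
As a minimal sketch of what such a quantitative comparison can look like (with invented numbers, and not taken from Bennett and colleagues), the testing step might compute a few common metrics such as bias, root mean square error, and Nash–Sutcliffe efficiency:

    # Minimal sketch of quantitative model testing: compare simulated values
    # against observations using a few common metrics. The data are invented;
    # real testing would use data that were not used for parameterisation.
    import numpy as np

    observed = np.array([2.1, 3.4, 4.0, 5.2, 6.1])
    simulated = np.array([2.3, 3.1, 4.4, 5.0, 6.5])

    residuals = simulated - observed
    bias = residuals.mean()                     # mean error
    rmse = np.sqrt((residuals ** 2).mean())     # root mean square error
    # Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than
    # simply predicting the observed mean.
    nse = 1 - (residuals ** 2).sum() / ((observed - observed.mean()) ** 2).sum()

    print(f"bias={bias:.2f}, RMSE={rmse:.2f}, NSE={nse:.2f}")

As noted below, which values of such statistics count as acceptable remains a subjective, purpose-dependent judgement.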

In reality, of course, these steps do not take place in an orderly progression: there are many loops backward, some of the parameterisation and testing occurs in parallel with the coding, and the first step is often revisited many times.

It is mostly assumed that assessment of ‘trust’ or ‘confidence’ in a particular model should be based on the metrics or statistics resulting from the comparison of the model outputs against experimental datasets. Sometimes, however, the scope of the testing data and whether the model has been published in a good journal are also taken to imply confidence in the model. These criteria largely refer to that last testing step, and this focus is understandable. Of the steps above, testing is the one most readily documented against accepted standards with the results made available externally. However, even with a quantitative approach to testing, Bennett and colleagues note that the actual values of the statistics that are considered to be acceptable are a subjective decision.

While I agree with the approach and need for quantitative testing, the testing results themselves have very little to do with my confidence or trust in a model. My confidence will evolve over time as I become more familiar with the model. By the time I am prepared to make any statements about the specific reasons for my degree of trust, the reasons for that trust will largely have become tacit knowledge – and that makes it very difficult for me to explain to someone else why I have confidence (or not) in that model.

Here I have attempted to tease out the factors that influence my confidence in a model. I should note that my trust in the models I have been involved in developing, or that I use at an expert level, can fluctuate quite widely and wildly over time; so, for me, developing trust is not a linear process and is subject to continual revision. I assess four key areas concerning the model using a range of questions, as follows:

Area 1. The nature of the problem domain: Are the ‘correct’ outputs even measurable? How mature is the science community’s understanding of, and agreement on, the conceptual and quantitative processes that must be included in the model? What constraints and deliberate assumptions have been included? Are these assumptions likely to constrain error or to allow (or even encourage) it to blossom?

Area 2. Software development and parameterisation: Who did the work and do I have a favourable opinion of their other modelling activities? What documented software development processes did they use? Do they use a reliable version control system and can I compare older versions of the model to the current version? Is the documentation sufficient and sufficiently well-presented that I can, for the large part, understand the workings of the model and its implementation assumptions? If I need more detail can I (or can I get someone else to) dive into the code to understand more detail? How open/transparent does the process appear to be? Can it be readily reviewed by others?

Area 3. Developer’s testing: What have the developers done with respect to testing? Does it feel robust (e.g., basic things like not reusing data used for parameterisation, but also whether they have delved into and explained reasons for poor performance)? Have they relied mostly on reporting statistical values, or are there extensive graphs that are appropriate for the domain of the model?

Area 4. User’s experience: Is the model user interface set up in such a way that I can investigate the model’s behaviour as inputs, settings and parameters are changed? When I do this investigation, how often does the model ‘surprise’ me? How many of those are “Wow!” surprises (meaning I thought the model would be unlikely to behave well but it did), how many are surprising surprises (the model outputs can be rationalised and even make sense once investigated) and how many are “Really!?!” surprises (the model outputs do not make sense in any way that I can explain and/or they seem to be in conflict with the developer’s testing or documentation)? When I get the last type of surprise: is the model constructed in such a way that I can understand the extent to which that surprise will flow through to outputs that matter, or is the effect of any such surprises likely to be minimised or cancelled out by the way the model is constructed?
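
As a purely hypothetical illustration of the kind of behavioural probing described under Area 4, the sketch below sweeps one input of a placeholder run_model function and flags outputs that fall outside a range I would consider unsurprising; the function, input and numbers are stand-ins, not any real model.

    # Hypothetical sketch of probing a model's behaviour by sweeping one input
    # and flagging "surprising" outputs. run_model() is a placeholder for
    # whatever interface the real model exposes.
    import numpy as np

    def run_model(nitrogen_rate_kg_ha):
        # Placeholder response curve; a real model would be far richer.
        return 2.0 + 0.05 * nitrogen_rate_kg_ha - 0.0001 * nitrogen_rate_kg_ha ** 2

    expected_range = (1.0, 8.0)   # outputs I would consider unsurprising (t/ha)

    for rate in np.linspace(0, 400, 9):
        yield_t_ha = run_model(rate)
        flag = "" if expected_range[0] <= yield_t_ha <= expected_range[1] else "  <-- surprise: investigate"
        print(f"N rate {rate:5.1f} kg/ha -> yield {yield_t_ha:5.2f} t/ha{flag}")

Whether a flagged output turns out to be a “Wow!” surprise, a surprise that can be rationalised, or a “Really!?!” surprise is then a matter for the kind of investigation described above.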

These questions are how I develop trust in a model. Do my questions align with your criteria or have I missed critical points? Do you have a completely different process for developing trust in a model? My approach is probably strongly tuned by my experience with mechanistic or process-based models (where the model is intended to represent an expert’s opinion of how the system works rather than being driven by data). Given that, if you work with a different type of model, does your approach to developing trust work differently? Might you place more reliance on comparison to data? I’d value your thoughts.

Reference:

Bennett, N. D., Croke, B. F. W., Guariso, G., Guillaume, J. H. A., Hamilton, S. H., Jakeman, A. J., Marsili-Libelli, S., Newham, L. T. H., Norton, J. P., Perrin, C., Pierce, S. A., Robson, B., Seppelt, R., Voinov, A. A., Fath, B. D. and Andreassian, V. (2013). Characterising performance of environmental models. Environmental Modelling & Software, 40: 1–20. DOI: 10.1016/j.envsoft.2012.09.011

Biography: Val Snow is a systems modeller at AgResearch in New Zealand and comes from a soil physics and agricultural science background. Her research focuses on the development and use of simulation models to support technological innovation in pastoral agricultural systems and assessment of the impacts of land use. Application areas include land use policy, future farming systems, greenhouse gas mitigation and climate change adaptation.

Could Climate Change Keep Kids Out of School? A Q&A with Environmental Sociologist and Demographer Heather Randell

November 1, 2016

SESYNC Postdoctoral Fellow Heather Randell

By Lisa Palmer

SESYNC Fellow for Socio-Environmental Understanding

Education is seen as a key tool for building resilience to climate change in the developing world. But new research shows that climate change could also make it harder to keep kids in school and ensure they get the best out of their time in the classroom. 

Heather Randell, a postdoctoral fellow at the National Socio-Environmental Synthesis Center (SESYNC), studies the relationships between environmental change, development, and human health and wellbeing. Her research focuses on the social processes underlying migration, the links between development and rural livelihoods, and the social and health impacts of environmental change.

In the November issue of Global Environmental Change, Randell and co-author Clark Gray of the University of North Carolina at Chapel Hill published the results of a study on how climate variability competes with schooling in Ethiopia and could lower adaptive capacity for generations. “Investments in education serve as an important pathway out of poverty,” they write, “yet reduced agricultural productivity due to droughts or temperature shocks may affect educational attainment if children receive poorer nutrition during early childhood, are required to participate in household income generation during schooling ages, or if households can no longer pay for school-related expenses.”

SESYNC fellow Lisa Palmer talked with Randell about the study, why it’s important, and what comes next. The Wilson Center published an edited excerpt of their conversation on the New Security Beat blog.
