Chris Barrett
Planning exercises are often designed to explain how things should work in a perfect world rather than to capture how things really work. Further, models often do not address what decision makers need or want to know.
One of the meeting goals is to understand how our research does and does not address the policy makers' issues.
We would also like to understand how the focus of policy changes, often independent of rationality (e.g., anthrax morphed into smallpox) and understand how policy makers react to different presentations of information (e.g., when information is presented as mortality vs survival, you get different responses).
Richard Hatchett
DHHS is planning to establish a modeling hub that will incorporate, aggregate, and coordinate various models and data. This suggests that modeling could have a far bigger impact in the future, but this does depend on our ability to speak "policy."
The most helpful parts of the Targeted Layered Containment or Combined Scenario analyses were the sensitivity analyses, the intermodel comparisons, and the iterative work to understand how the models were operating. Bob Glass's simple models were very helpful because of their speed and their value in refining the questions. Historical analyses were used to test some hypotheses suggested by modeling. Telling stories was very helpful, especially when real people were involved.
In the future, presentations may focus on responses at different organizational levels - individual, community, and international.
Sally Phillips
Models can condense large quantities of difficult data into simple representations, generate graphical depictions, and allow no-cost planning scenarios. Models cannot give the right answer, predict the correct scenario consequences and actions, accommodate all possible confounding variables (especially human behavior), guarantee that the best action will be the one set into policy, or assure that interpretations will be accurate.
There are several reasons for doing model research:
The Bioterrorism and Epidemic Outbreak Response Model (BERM) was developed for New York City in response to anthrax. The tool provides information for mass prophylaxis calculations: how many people need prophylaxis, staffing requirements, timing, length of campaign, etc. Such tools let people play with the numbers and support allocation of scarce resources and other planning activities.
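To illustrate the flavor of arithmetic such tools automate, a minimal staffing sketch follows; this is not BERM's actual formulation, and the parameter names and example numbers are hypothetical.

```python
# Minimal sketch of a mass-prophylaxis staffing calculation.
# Parameter names and default values are hypothetical, for illustration only.
import math

def staff_required(population: int,
                   campaign_hours: float,
                   patients_per_staff_hour: float,
                   shift_hours: float = 8.0) -> dict:
    """Rough staffing estimate for a point-of-dispensing campaign."""
    # Total staff-hours needed to move the whole population through dispensing.
    staff_hours = population / patients_per_staff_hour
    # Staff who must be on duty at any moment to finish within the campaign window.
    concurrent_staff = math.ceil(staff_hours / campaign_hours)
    # Number of shifts in the campaign, hence distinct staff slots to fill.
    shifts = math.ceil(campaign_hours / shift_hours)
    return {"concurrent_staff": concurrent_staff,
            "total_staff_slots": concurrent_staff * shifts}

# Example: 1,000,000 people, 48-hour campaign, 10 patients per staff-hour, 8-hour shifts.
print(staff_required(1_000_000, 48, 10, 8))
# -> {'concurrent_staff': 2084, 'total_staff_slots': 12504}
```

Changing the campaign length or throughput and rerunning is exactly the "play with the numbers" exercise described above.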
The National Mass Patient, Regulating, and Tracking System is for disasters that require a large Federal response (e.g., who is affected, what types of patients there are and where they are going, and how many people are in shelters).
John Sterman
For most models of complex systems, validation is impossible; instead, confidence is built through iterative testing. Modelers must strike a balance between scope and detail.
Modeling may be protective rather than reflective: models may be used to prove a point, hide motives, use data selectively, support preconceptions and then conceal them, or promote the authority of the modeler and build client dependence.
Testing models at extreme parameter values may expose assumptions that bias the model.
Example: economic models of energy production that assumed a fixed fraction of energy would be delivered by certain sectors, even when their cost was reduced to zero.
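This kind of check can be automated. The sketch below is a hedged illustration, not the actual energy model: a toy function with a deliberately hard-coded fixed fraction, and a test that drives its cost parameter to an extreme to expose the assumption.

```python
# Extreme-value test on a deliberately flawed toy model.
# The function and parameter names are invented for illustration.

def energy_share_flawed(relative_cost: float) -> float:
    # Hidden assumption: the sector always supplies a fixed 20% share,
    # no matter how cheap it becomes relative to alternatives.
    return 0.2

def test_extreme_cost() -> None:
    # If the sector were free, its share should approach the maximum feasible
    # share rather than remain at the historical fraction.
    share_at_zero_cost = energy_share_flawed(relative_cost=0.0)
    assert share_at_zero_cost > 0.9, (
        f"share is {share_at_zero_cost} even at zero cost; "
        "a fixed-fraction assumption is baked into the model"
    )

test_extreme_cost()  # raises AssertionError, surfacing the biased assumption
```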
Examining modeling assumptions is crucial, continuous work. On several occasions, whole communities of modelers have been shut out of the policy process for years because of bad assumptions in some models. These assumptions had not come to light during internal or external reviews. Braithwaite also emphasized similar points.
Reflective modeling promotes inquiry, exposes assumptions, motivates a range of empirical tests, challenges preconceptions, supports multiple viewpoints, and involves the community of stakeholders. It empowers clients and enhances their capabilities.
It might be very important to parameterize a set of equation-based models for vest-pocket (quick, back-of-the-envelope) use.
Perhaps the most common among abuses are situations where mathematical models are constructed with an excruciating abundance of detail in some aspects, whilst other important facets of the problem are misty or a vital parameter is uncertain to within, at best, an order of magnitude.
It makes no sense to convey a beguiling sense of reality with irrelevant detail when other equally important factors can only be guessed at. (Robert May, Science 303:790-793)
Steven Bankes
In engineering, the specification of the system boundary and model boundary are both under our control. In classical practice, we choose boundaries carefully to facilitate predictive analysis. In policy analysis, the boundary of the system is generally imposed on us, is open, and is analytically difficult. In general, the intersection of situations where decisions make a difference and situations where we can accurately forecast is empty.
Validation, strictly understood, fails as a strategy for two reasons:
Our tendency is to ignore information we understand poorly, akin to looking for keys where the light is good. In doing so, we make other, often implicit, assumptions about the behaviors of those pieces.
Inference is key to understanding the role of modeling, and an analysis of uncertainty is important. In the policy arena, uncertainties are usually large, and the implications of policies are often unstudied, controversial, and data-free. Sensitivity analyses are not enough because they can miss important phenomena and underestimate the uncertainties.
A helpful approach is to ask "What is the universe of stuff we want to know and don't?" Defining the unknowns can help us think more explicitly about the universe of real world situations we want to address.
One can also delineate the ways that the model could fail and use this information to refine the model or the question, or to develop a challenge set of issues. Robust software should allow for testing of robustness and failure modes. Every project should include a section on how to break the model or argument.
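One way to make "break the model" a routine part of a project, sketched below under purely hypothetical assumptions (a toy outbreak model and invented parameter ranges), is to sample the joint parameter space rather than vary one input at a time, recording both outright failures and combinations where the policy conclusion no longer holds.

```python
# Sketch of an exploratory sweep over joint parameter ranges to find failure modes.
# The model, parameter names, and ranges are hypothetical placeholders.
import random

def containment_fails(r0: float, vaccine_efficacy: float, coverage: float) -> bool:
    # Crude stand-in for a full simulation: containment fails when the
    # effective reproduction number stays at or above 1.
    return r0 * (1.0 - vaccine_efficacy * coverage) >= 1.0

def break_the_model(n_samples: int = 10_000, seed: int = 0):
    rng = random.Random(seed)
    crashes, conclusion_fails = [], []
    for _ in range(n_samples):
        params = {"r0": rng.uniform(1.0, 10.0),
                  "vaccine_efficacy": rng.uniform(0.3, 0.99),
                  "coverage": rng.uniform(0.0, 1.0)}
        try:
            fails = containment_fails(**params)
        except Exception as exc:
            crashes.append((params, exc))    # the model breaks outright
            continue
        if fails:
            conclusion_fails.append(params)  # the policy conclusion does not hold here
    return crashes, conclusion_fails

crashes, fails = break_the_model()
print(f"{len(crashes)} crashes, {len(fails)} parameter combinations where containment fails")
```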
A trick for presenting material is to bound the unknown variables and assess where the value matters. For example, the value of a life is between x and y, but with respect to some question, the only place it matters for this study is above y-1.
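A small numerical illustration of that trick, using an invented intervention and invented dollar figures, might look like this:

```python
# Bounding an unknown and reporting only where its value changes the decision.
# All numbers below are invented for illustration.
LIVES_SAVED = 100          # lives the hypothetical intervention saves
INTERVENTION_COST = 450e6  # dollars

def intervention_preferred(value_of_statistical_life: float) -> bool:
    # Prefer the intervention when its benefit exceeds its cost.
    return value_of_statistical_life * LIVES_SAVED > INTERVENTION_COST

# Suppose the contested value is only known to lie between $1M and $10M.
low, high = 1e6, 10e6
threshold = INTERVENTION_COST / LIVES_SAVED   # $4.5M: the only point that matters

print(intervention_preferred(low))    # False
print(intervention_preferred(high))   # True
print(f"The decision flips only at ${threshold/1e6:.1f}M; the briefing can focus on "
      "whether the true value exceeds that threshold rather than on its exact value.")
```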
Karl Braithwaite
Braithwaite has helped scientists create presentations for policy makers, especially around the Clean Air Act and auto emission standards. Policy makers are often confused by the plethora of models that groups create to advance their own interests. Developing good presentation skills is critical if one is to have an impact.
Pick something you can score on at the start. Figure out where you can have an impact and throw yourself into it. The following guidelines for scientists interacting with policymakers were taken from John Kingdon's book, Agendas, Alternatives, and Public Policies.
It is impossible to get on the policy agenda until there is an alignment of several important interests. A window of opportunity occurs when the problem, policy interests, and politics are aligned. Progress requires a champion to advocate for the cause.
This process can be very frustrating to scientists.
Doing homework on the stances and responsibilities of many groups is essential, including Congress, Federal agencies, interest groups, the media, and trusted advisors. The Iron Triangle refers to a coalition of a congressional subcommittee, an agency, and an advocacy group. More recently, we see the emergence of advocacy coalitions, which include businesses, associations, and others.
Prepare a 1-page summary, a 3-page briefing, and appendices. The legislative aide reads the 1-page summary, the senator reads the 3-page briefing, and the appendices add credibility.
The person being briefed should be asking tough questions; bring them up yourself if they do not.
Do your homework to ensure you know the positions of the people you are briefing and prepare what they need to know.
These may be the most important part of the presentation.
States have considerable power, and one must be aware of what various levels of government and various agencies really can do.
Braithwaite's handout is an appendix to these minutes.
MIDAS's connection to policy and decision makers is vital, although we need to ensure that there is a safe haven inside MIDAS for basic science, social networks, and computational and statistical problems.
Is it time to create a meta-model that consolidates strategies and can be run quickly to address scenarios?
After discussion, there was a feeling that this project would be quite difficult and expensive, though the time is right to start. Points include: