“Truth speaks to power, and power implements”

is still how many scientists understand their role (if they talk to decision makers at all). However, in a knowledge society expertise is dispersed, and knowledge is no longer an academic privilege, let alone a monopoly (if it ever was). Science has knowledge to offer, but not truths, and to be relevant that knowledge must answer questions or solve problems defined not by science but by society at large, and by the stakeholder groups within it. When producing policy briefs, three (or four) ‘C’s matter: content relevance, comprehensible form, and caution & caveats.

Content relevance

A policy brief must answer the questions that society, politics or administration actually asks, and answer them in the context from which they arise. Abstract models with weak links to real-world processes are not a suitable tool for generating advice meant to be applied in reality; less sophisticated methods that are better grounded in reality may offer better results. Transparency about how the advice was generated, and the legitimacy of the assumptions made, the tools used and the partners involved, are critical.

Comprehensible form

Decision makers are usually highly trained, intelligent experts, even if many scientists do not recognise this (because their qualifications differ from those academics recognise). Policy briefs have to be written to meet their demands: in plain language, without jargon, no longer than one or at most two pages (for junior staff and assistants, background papers with extended information should be available), and in the reverse order of a scientific publication. They start by explaining the relevance of the findings (the discussion section), then present the data (results section), explain the method (methods section) and finally the context (introduction section). Design matters: the brief must catch the eye without distracting from the message, and it should reference additional sources for those who want to go deeper into the issue (or to send someone there).
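To make the reversed order concrete, a one-page brief might be organised roughly as follows (a sketch only; the headings are illustrative, not prescribed by any standard):

    Why it matters now – relevance of the findings (the paper’s discussion)
    What we found – the key data (the paper’s results)
    How we know – the method in brief (the paper’s methods)
    Background – the wider context (the paper’s introduction)
    Further reading – background papers and sources for staff who want more detail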

Caution & caveats

Caution is important when giving advice: high-ranking decision makers are usually experienced multi-criteria assessors, adequate to their responsibilities, while scientists come from a narrower background and are usually not held responsible for the consequences. Scientists therefore have to reconsider what they say in this context, in particular regarding type 1 errors (false positives) and type 2 errors (false negatives). Type 1 errors, detecting an effect that is not present, are what science is most concerned to avoid; it cares much less that this caution generates type 2 errors, failing to detect an effect that is present. For scientific purposes it may be good enough to say that something is not proven with sufficient certainty and to treat it as absent, but in politics, where precaution is a mandatory principle, type 2 errors are at least as dangerous as type 1 errors. Scientists have to take that into account when issuing policy advice (as, for instance, the IPCC and IPBES try to do, with some, but not complete, success).
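The weight of this trade-off can be made concrete with a small simulation. The sketch below is illustrative only; the z-test, the effect size of 0.5, the sample size of 20 and the thresholds are assumptions chosen for the example, not taken from the source. It shows that tightening the significance threshold reduces type 1 errors while inflating type 2 errors:

    import math
    import random

    random.seed(0)

    def z_test_p(sample, sigma=1.0):
        # Two-sided p-value for H0: true mean == 0, with known sigma (z-test).
        n = len(sample)
        z = (sum(sample) / n) * math.sqrt(n) / sigma
        return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

    def error_rates(alpha, effect=0.5, n=20, trials=5000):
        # Type 1 rate: how often a no-effect sample is declared significant.
        # Type 2 rate: how often a real effect of size `effect` is missed.
        false_positives = 0
        misses = 0
        for _ in range(trials):
            null_sample = [random.gauss(0.0, 1.0) for _ in range(n)]
            effect_sample = [random.gauss(effect, 1.0) for _ in range(n)]
            if z_test_p(null_sample) < alpha:
                false_positives += 1
            if z_test_p(effect_sample) >= alpha:
                misses += 1
        return false_positives / trials, misses / trials

    for alpha in (0.10, 0.05, 0.01):
        t1, t2 = error_rates(alpha)
        print(f"alpha={alpha:.2f}: type 1 rate ~ {t1:.2f}, type 2 rate ~ {t2:.2f}")

The stricter the threshold, the fewer effects are wrongly claimed, but the more real effects go undetected; that is exactly the asymmetry that matters where precaution is mandatory.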