
Level 1 · Foundations

The Output

Writing the Evidence Brief

How do you turn a literature review into a policy brief that a decision-maker will read, trust, and act on — in the time they actually have?

The brief-to-decision gap, revisited

Session 1 named the brief-to-decision gap as one of the four critical bottlenecks in WHO India's evidence pipeline. This is where reviews go to die: a thorough 40-page synthesis that no one in a position to act on it will read before the window closes. The evidence brief is not a shorter version of the review. It is a different document with a different purpose, a different reader, and a different measure of success.

A policy brief succeeds when a decision-maker who reads only the first half-page understands the issue, the evidence, and the recommended action — and when the rest of the document exists to defend that recommendation if challenged. Structure, not volume, is what gets evidence used.

The anatomy of a WHO India evidence brief

The structure below is calibrated for health financing and UHC briefs directed at senior government, Ministry, or WHO country-office audiences. Each section has a strict function. The brief generator at the end of this session uses this template.

Issue
The policy problem — one sentence
State the problem the decision-maker faces, not the research question you answered. Frame it in terms of what is at stake: lives, equity, fiscal pressure, programme performance.
e.g. "Despite PM-JAY coverage of 500M beneficiaries, 55% of enrolled households in rural Uttar Pradesh report continuing out-of-pocket expenditure exceeding 10% of household income at inpatient contact."
Evidence
What the literature shows — three to five findings
Numbered, plain-language findings drawn directly from your reviewed studies. Each finding should state the direction of effect, the population, the setting, and the quality caveat if material. No citations in this section — those go in the appendix.
e.g. "1. Community-based health insurance reduces catastrophic expenditure by 30–45% among informal workers in comparable LMIC settings (6 studies; moderate quality)."
Gaps
What the evidence does not tell us
One short paragraph. State explicitly what questions remain unanswered and why — not as a weakness but as a guide to where further investment is warranted and where the recommendation rests on inference rather than direct evidence.
e.g. "No studies directly measure financial protection outcomes for Scheduled Tribe populations under PM-JAY. Evidence is extrapolated from analogous sub-Saharan African contexts."
Options
Two or three actionable policy options
Not a single recommendation. Decision-makers distrust single-option briefs — they read them as advocacy. Present two or three options with trade-offs stated: cost, equity impact, implementation complexity, political feasibility. The F dimension of PECO-F lives here.
e.g. "Option A: Expand PM-JAY outpatient coverage (higher cost, highest equity gain). Option B: Strengthen claims settlement at district level (lower cost, moderate equity gain, faster implementation)."
Recommendation
One preferred option with conditions
A single, clear, conditional recommendation — not a hedge. State the preferred option, the conditions under which it is recommended, and the monitoring indicator that would signal it is working. This is the section that gets read first and quoted later.
e.g. "Given the fiscal envelope and state-level implementation capacity, Option B is recommended as a first step, with a 12-month review of claims settlement rates before considering Option A."
Methods
Search protocol log — one paragraph or table
The protocol log from Session 4. Databases searched, date ranges, search strings, number of papers screened and included. This is what makes the brief defensible when the methodology is challenged. Attach as an appendix or final section.
e.g. "PubMed (147 results, 18 included), Cochrane (9, 4 included), HTAIn (6 reports, 3 included). Date: November 2024. Full log attached."
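For teams that assemble many briefs, the six-section anatomy above can be sketched as a simple rendering template. This is an illustrative sketch only — the section names and their order come from the anatomy above, but the function, its signature, and the MISSING placeholder convention are assumptions, not part of the WHO template itself.

```python
# Illustrative sketch: render the six-section brief skeleton in fixed order,
# flagging any section that has not yet been drafted.

BRIEF_SECTIONS = [
    ("Issue", "The policy problem -- one sentence"),
    ("Evidence", "What the literature shows -- three to five findings"),
    ("Gaps", "What the evidence does not tell us"),
    ("Options", "Two or three actionable policy options"),
    ("Recommendation", "One preferred option with conditions"),
    ("Methods", "Search protocol log -- one paragraph or table"),
]

def brief_skeleton(content: dict) -> str:
    """Render a plain-text brief; missing sections get a visible placeholder."""
    parts = []
    for name, purpose in BRIEF_SECTIONS:
        body = content.get(name, f"[MISSING -- {purpose}]")
        parts.append(f"{name.upper()}\n{body}")
    return "\n\n".join(parts)

# Start from the Issue and fill the rest as the review is distilled:
draft = brief_skeleton({
    "Issue": "55% of enrolled households in rural UP report continuing "
             "out-of-pocket expenditure at inpatient contact."
})
```

Keeping the order fixed in one place enforces the structural discipline the anatomy describes: the Recommendation can never drift above the Evidence, and an empty Methods section is visible rather than silently dropped.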

The four fatal errors

These are the mistakes that most commonly prevent a well-researched brief from reaching the decision it was written for:

📚
Leading with the literature
Opening with "A systematic review of 47 studies found…" is the single fastest way to lose a senior reader. Lead with the policy problem. The evidence is the defence, not the headline.
🌍
Unadapted LMIC extrapolation
Citing CBHI evidence from Rwanda or Ghana without acknowledging India-specific constraints — federal structure, informal economy scale, state capacity variance — destroys credibility with anyone who knows the context.
⚖️
Missing the equity dimension
Aggregate findings that do not disaggregate by income quintile, gender, caste, or geography tell a policymaker nothing about distributional impact. The F element of PECO-F must appear in the Options section.
🔒
The single undisclosed option
A brief that presents only one option without acknowledging alternatives reads as advocacy, not evidence synthesis. It loses the trust of the reader and the credibility of WHO as a neutral technical body.

Generate your draft brief

🎯 AI brief generator — paste your findings, get a structured first draft

Fill in the fields below with your policy context and key findings. The generator will produce a structured first draft of a WHO India evidence brief using the anatomy above, calibrated for a Ministry or WHO country-office audience. Edit and verify before use.

Policy question: the decision the brief must inform
Target audience: who will read and act on this brief
Key findings from your review: paste 3–6 findings in plain language — numbers, populations, direction of effect. The more specific the better.
Key evidence gap: what the literature does not answer
Equity and feasibility constraints: budget, implementation, political, equity considerations
🎯 Draft evidence brief — review and edit before use
AI-generated first draft. Verify all findings against source studies before circulation. Do not cite without human review. Search protocol log from Session 4 should be attached as appendix.
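The five input fields map one-to-one onto the anatomy. A minimal sketch of how such a generator might assemble its prompt is shown below — the function name, field wording, and instruction text are all assumptions for illustration, not the actual generator's implementation.

```python
def build_brief_prompt(policy_question, audience, findings, gap, constraints):
    """Assemble an LLM prompt from the five generator fields (illustrative only)."""
    return (
        "Draft a WHO India evidence brief with sections: Issue, Evidence, "
        "Gaps, Options, Recommendation, Methods.\n"
        f"Policy question: {policy_question}\n"
        f"Target audience: {audience}\n"
        f"Key findings: {findings}\n"
        f"Evidence gap: {gap}\n"
        f"Equity and feasibility constraints: {constraints}\n"
        "Present two or three options with stated trade-offs before "
        "recommending one, and lead with the policy problem, not the literature."
    )
```

Note that the prompt itself encodes the two discipline rules from the fatal-errors list: multiple options with trade-offs, and the problem before the evidence. Whatever the draft returns, the findings must still be verified against the source studies before circulation.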
✅ Level 1 complete
You have the foundations.
Level 2 is where it gets harder.
You can now frame a health financing question using PECO-F, choose the right tools for the job, build a reproducible search protocol, and structure an evidence brief that decision-makers will use. Level 2 — Exploration — is about the harder skills: spotting bias in the evidence you retrieve, prompting AI for complex UHC comparisons, and critically appraising cost-effectiveness studies under LMIC conditions.
Begin Level 2 — Exploration →

🎯 Level 1 takeaway

The full Level 1 arc is: diagnose your pipeline bottlenecks (Session 1) → choose tools by job not by name (Session 2) → frame the question with PECO-F (Session 3) → translate to a reproducible search protocol (Session 4) → convert findings to a decision-ready brief (Session 5). Each session feeds the next. The brief generator in this session uses everything that came before it. Keep the search protocol log from Session 4 — it becomes the methodology appendix that makes this brief defensible.