How fast can a credible evidence review be produced with AI assistance — and what exactly must you never cut, no matter how little time you have?
The 48-hour reality
In a WHO country office operating in India's policy environment, evidence requests do not arrive with comfortable timelines. A Joint Secretary's office requests a brief before a Thursday morning meeting. A state health secretary wants evidence on a financing reform before a budget submission next week. A donor asks for a rapid synthesis before a mission debrief. These are not exceptional circumstances — they are the normal operating tempo of the work.
A rapid review produced under these conditions is not a lesser review. It is a different kind of review, with explicit scope limitations and explicit quality commitments. What separates a credible rapid review from an indefensible one is not the time available — it is whether the team knew in advance what they could and could not cut, and disclosed those limitations honestly.
AI tools change the calculus significantly. Tasks that previously took days — abstract screening, data extraction, first-draft synthesis — now take hours. But the ceiling of what is credible does not change: a rapid review is only as defensible as its weakest unacknowledged assumption. Speed bought by hiding limitations is not speed — it is deferred liability.
What you can compress — and what you cannot
How much rigour each timeline buys:

  3–6 months: pre-registered, dual screening, full grey lit, GRADE
  2–4 weeks: structured search, single screener, key grey lit
  1 week: focused search, AI screening, targeted grey lit

Step by step — full systematic review versus the 48-hour rapid review, and what is lost in each compression:

PECO-F scoping
  Full review: Locked, pre-registered question
  Rapid (48h): PECO-F locked before searching — cannot be cut
  What is lost: None — this step takes 30 minutes with AI and cannot be skipped

Database search
  Full review: 5–7 databases, dual independent searches
  Rapid (48h): PubMed + Dimensions.ai + HTAIn manual check only
  What is lost: Lower recall — some relevant studies missed. Must be disclosed.

Abstract screening
  Full review: Dual independent screening, adjudication
  Rapid (48h): Single screener with AI pre-screening; human reviews all AI flags
  What is lost: Higher false-negative risk. Acceptable if the AI pre-screening prompt is well specified.

Full-text review
  Full review: All included studies read in full
  Rapid (48h): AI-assisted extraction from abstracts + key sections; human verifies top 5
  What is lost: Methodology details not in the abstract are missed. Must be noted in limitations.

Quality appraisal
  Full review: Full 10-question economic appraisal (Session 3)
  Rapid (48h): Three critical questions only: cost perspective, equity, informal economy
  What is lost: Less detail on uncertainty and transferability. The three-question minimum is non-negotiable.

Equity analysis
  Full review: Full disaggregation by all relevant dimensions
  Rapid (48h): Equity check required — cannot be cut
  What is lost: None — a brief without equity analysis is not a WHO brief, regardless of timeline

Synthesis
  Full review: Full narrative synthesis, with meta-analysis if appropriate
  Rapid (48h): Narrative synthesis only; no pooling under time pressure
  What is lost: Less precision on effect sizes. Acceptable — point estimates without pooling are honest.

Hallucination check
  Full review: All statistics verified against primary sources
  Rapid (48h): All cited statistics verified — cannot be cut
  What is lost: None — an unverified statistic in a circulated brief is a liability with no timeline excuse

Disclosure note
  Full review: Full methods section
  Rapid (48h): Disclosure note required — cannot be cut
  What is lost: None — takes 3 minutes; the rapid review's credibility depends on it
The 48-hour protocol — hour by hour
This is the compressed workflow for a WHO India health financing rapid review with a 48-hour deadline. Every step is time-boxed. Overrunning any block means the next block must be compressed further — not skipped.
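The overrun rule can be sketched in code. The block names and hours mirror the protocol below; the pro-rata absorption policy is an illustrative assumption (the text only requires that later blocks are compressed, never skipped), and the buffer is marked protected per the protocol's own rule.

```python
# Sketch of the overrun rule: a block that runs long absorbs its overrun by
# shrinking the later, non-protected blocks pro rata -- never by skipping one.
# Hours mirror the 48-hour protocol; the buffer is marked protected (see text).

def absorb_overrun(blocks, index, overrun_h):
    """Extend blocks[index] by overrun_h and shrink later adjustable blocks."""
    adjustable = [b for b in blocks[index + 1:] if not b.get("protected")]
    available = sum(b["hours"] for b in adjustable)
    if overrun_h > available:
        raise ValueError("Overrun exceeds adjustable time: rescope the review")
    for b in adjustable:
        b["hours"] -= overrun_h * b["hours"] / available
    blocks[index]["hours"] += overrun_h
    return blocks

protocol = [
    {"name": "Question scoping",         "hours": 2},
    {"name": "Database search",          "hours": 3},
    {"name": "AI screening",             "hours": 4},
    {"name": "Extraction & appraisal",   "hours": 5},
    {"name": "Synthesis & brief draft",  "hours": 6},
    {"name": "Verification",             "hours": 6},
    {"name": "Peer review & disclosure", "hours": 4},
    {"name": "Buffer",                   "hours": 18, "protected": True},
]
```

For example, a one-hour overrun in the database search leaves the buffer untouched and trims roughly 14 minutes from the verification block.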
Hours 0–2
Question scoping
Lock the PECO-F
Write out all five PECO-F elements. Do not begin searching until the question is precise. If the request is vague, spend 20 minutes clarifying with the requestor — it will save four hours later. Use the PECO-F reframe tool from Session 3 if needed.
Hours 2–5
Database search
Run the three-source search
PubMed with Boolean string (Session 4 builder), Dimensions.ai with LMIC filter, HTAIn manual browse. Set a result limit: take the top 50 most relevant from PubMed, all results from HTAIn. Log date, source, result count in the protocol table as you go.
Cut: Embase, Cochrane, WHO IRIS — note as limitation
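Keeping the protocol table as you go is easier with a minimal search log. A sketch, assuming a simple CSV annex; the queries and result counts below are illustrative placeholders, not real search results.

```python
# Minimal search log for the protocol table: date, source, query, result count.
# Queries and counts here are illustrative placeholders, not real results.
import csv
import datetime
import io

def log_search(log, source, query, n_results):
    log.append({
        "date": datetime.date.today().isoformat(),
        "source": source,
        "query": query,
        "results": n_results,
    })

log = []
log_search(log, "PubMed", '"health financing" AND India AND (insurance OR "out-of-pocket")', 50)
log_search(log, "Dimensions.ai", "health financing India (LMIC filter)", 37)
log_search(log, "HTAIn", "manual browse", 4)

# Render the log as CSV for the methods annex
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["date", "source", "query", "results"])
writer.writeheader()
writer.writerows(log)
```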
Hours 5–9
AI screening
Paste abstracts — AI pre-screens, human reviews flags
Paste all abstracts into Claude with a structured screening prompt specifying your PECO-F inclusion criteria. Ask it to return: Include / Exclude / Uncertain for each, with one-line rationale. Review all "Include" and "Uncertain" yourself. Do not accept "Exclude" without spot-checking 10% of that set.
Tool: Claude with structured screening prompt · human review of flagged abstracts
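A minimal sketch of this screening step, assuming a plain-text reply in a fixed line format. The PECO-F criteria, the `<id> | verdict | rationale` format, and the function names are illustrative; the model call itself is omitted.

```python
# Sketch of the structured screening step: build the prompt, parse verdicts,
# and sample 10% of the "Exclude" set for human spot-checking. The PECO-F
# criteria and the line format are illustrative; the model call is omitted.
import random

def screening_prompt(pecof, abstracts):
    criteria = "\n".join(f"- {k}: {v}" for k, v in pecof.items())
    numbered = "\n\n".join(f"[{i}] {a}" for i, a in enumerate(abstracts, 1))
    return (
        "Screen each abstract against these PECO-F inclusion criteria:\n"
        f"{criteria}\n\n"
        "For each abstract return exactly one line:\n"
        "<id> | Include, Exclude, or Uncertain | one-line rationale\n\n"
        f"{numbered}"
    )

def parse_verdicts(response):
    """Map abstract id -> verdict from the model's line-per-abstract reply."""
    verdicts = {}
    for line in response.splitlines():
        parts = line.split("|")
        if len(parts) >= 3:
            verdicts[int(parts[0].strip().strip("[]"))] = parts[1].strip()
    return verdicts

def spot_check_sample(verdicts, rate=0.10, seed=0):
    """Deterministic sample of 'Exclude' ids for human review (default 10%)."""
    excluded = sorted(i for i, v in verdicts.items() if v == "Exclude")
    if not excluded:
        return []
    k = max(1, round(rate * len(excluded)))
    return sorted(random.Random(seed).sample(excluded, k))
```

The fixed seed makes the spot-check sample reproducible, so the disclosure note can state exactly which excluded abstracts were re-checked.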
Hours 9–14
Extraction & appraisal
Extract key data; apply three-question appraisal
For each included study: extract intervention, population, comparator, outcome, effect direction, and quality note using Elicit or Claude. Then apply the three non-negotiable appraisal questions from Session 3: cost perspective, equity disaggregation, informal economy. Studies failing all three are cited only with explicit caveats.
Tool: Elicit or Claude · Session 3 appraisal prompt
Cut: Full 10-question appraisal — three-question minimum only
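The extraction fields and the three-question appraisal can be captured in one small record type. A sketch; the class and field names are illustrative.

```python
# Illustrative record type for extraction plus the three-question appraisal.
# Field names follow the extraction list above; a study failing all three
# appraisal questions may be cited only with explicit caveats.
from dataclasses import dataclass

@dataclass
class StudyRecord:
    citation: str
    intervention: str
    population: str
    comparator: str
    outcome: str
    effect_direction: str  # e.g. "positive", "negative", "mixed", "null"
    quality_note: str = ""
    # Three non-negotiable appraisal questions (Session 3)
    states_cost_perspective: bool = False
    disaggregates_equity: bool = False
    considers_informal_economy: bool = False

    def citable_without_caveat(self) -> bool:
        """True only if the study passes at least one appraisal question."""
        return any([self.states_cost_perspective,
                    self.disaggregates_equity,
                    self.considers_informal_economy])
```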
Hours 14–20
Synthesis & brief draft
Generate first draft; run equity check
Use the Session 5 brief generator with your extracted findings. Before accepting the draft, run the equity check: are findings disaggregated? Is the India-specific gap named? Then run the bias detector from Session 1 on the AI's synthesis. Revise the draft based on flags before proceeding.
Cut: No statistical pooling — narrative synthesis only
Hours 20–26
Verification
Verify every cited statistic — no exceptions
For each specific number in the draft brief, find the primary source and confirm it. This step cannot be compressed. A hallucinated statistic in a brief that reaches MoHFW is not a methodological limitation — it is a credibility crisis. Remove any number you cannot locate in a primary source within 10 minutes of looking.
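Finding the primary source for every number is easier if the numbers are first pulled out mechanically. A minimal sketch, assuming a deliberately broad pattern: false positives are cheaper than a missed unverified statistic.

```python
# Pull every specific number out of the draft so each can be matched to a
# primary source. The pattern is deliberately broad: false positives are
# cheaper than a missed unverified statistic.
import re

NUMBER = re.compile(r"\d[\d,]*(?:\.\d+)?\s?(?:%|crore|lakh)?", re.IGNORECASE)

def statistics_to_verify(draft):
    """Return (sentence, number) pairs for the verification checklist."""
    pairs = []
    for sentence in re.split(r"(?<=[.!?])\s+", draft):
        for match in NUMBER.finditer(sentence):
            pairs.append((sentence.strip(), match.group().strip()))
    return pairs
```

Keeping the enclosing sentence alongside each number gives the verifier the claim in context, not just the figure.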
Hours 26–30
Peer review & disclosure
Fresh-eyes read; attach the disclosure note
A second person reads the brief with fresh eyes — not for content expertise but for logical coherence, undisclosed assumptions, and anything that reads as more certain than the evidence supports. Attach the disclosure note from Session 1, adapted for this rapid review. State the search limitations explicitly in the methods note.
Human peer review · Session 1 disclosure template
Hours 30–48
Buffer
Contingency, formatting, delivery
Eighteen hours of buffer. Use it for requestor clarifications, formatting to WHO India house style, senior sign-off, and translation needs. Do not use it to run a more comprehensive search than disclosed — that changes the methods note and creates inconsistency. The rapid review is what it is; the limitations note makes it honest.
Buffer is structural — protect it, do not borrow against it
Four things you never cut — at any speed
⚡
PECO-F scoping. A rapid review without a locked question is not faster — it is uncontrolled. It produces a longer search, more irrelevant results, and a brief that cannot be defended because no one agreed on what it was answering. Thirty minutes upfront saves six hours of rework.
⚖️
Equity check. "We didn't have time to disaggregate" is not an acceptable WHO India brief limitation. A brief that aggregates across income quintiles, gender, and geography and presents the result as evidence for policy affecting India's most marginalised populations has violated its institutional mandate regardless of how fast it was produced.
🔍
Hallucination verification. Every specific statistic, every cited cost-effectiveness ratio, every named coverage figure must be traceable to a primary source. There is no time pressure that justifies an unverified number in a brief that will be used in a government meeting. If you cannot verify it in ten minutes, remove it.
📋
Disclosure note. The rapid review's limitations — databases not searched, studies not appraised in full, pooling not conducted — must be named in the methods note. A brief without a disclosure note is not a rapid review; it is an undisclosed incomplete review. The disclosure is what makes the speed defensible.
Generate your rapid review protocol
⚡ Rapid review protocol generator
Enter your policy question, deadline, and available resources. The generator will produce a time-boxed, step-by-step protocol for your specific rapid review — including what to cut, what to keep, and the disclosure language for your limitations note.
Policy question: In plain language — will be converted to PECO-F in the protocol
Deadline: How much time do you have?
Team available: Who can work on this?
Audience: Who receives this brief?
Existing evidence: What do you already have on this topic?
⚡ Rapid review protocol — time-boxed and ready to run
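A sketch of the generator's core logic: rescale the 48-hour template to the stated deadline while holding the non-negotiable steps at full length. The step proportions follow the hour-by-hour protocol above; treating scoping, verification, and peer review as fixed-length is an assumption.

```python
# Sketch of the generator's core: rescale the 48-hour template to a deadline,
# holding the non-negotiable steps (scoping, verification, peer review) at
# full length. Step names and hours follow the hour-by-hour protocol.
TEMPLATE = [  # (step, hours in the 48h version, protected?)
    ("Question scoping (PECO-F)",  2,  True),
    ("Database search",            3,  False),
    ("AI screening",               4,  False),
    ("Extraction & appraisal",     5,  False),
    ("Synthesis & brief draft",    6,  False),
    ("Verification",               6,  True),
    ("Peer review & disclosure",   4,  True),
    ("Buffer",                     18, False),
]

def scale_protocol(deadline_h):
    """Return (step, start_h, end_h) tuples fitted to the deadline."""
    fixed = sum(h for _, h, p in TEMPLATE if p)
    if deadline_h <= fixed:
        raise ValueError("Deadline below the non-negotiable minimum: decline or rescope")
    factor = (deadline_h - fixed) / sum(h for _, h, p in TEMPLATE if not p)
    plan, start = [], 0.0
    for step, hours, protected in TEMPLATE:
        length = hours if protected else hours * factor
        plan.append((step, round(start, 1), round(start + length, 1)))
        start += length
    return plan
```

With a 24-hour deadline this keeps the twelve protected hours intact and cuts every other block, buffer included, to a third of its 48-hour length.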
🎯 Key takeaway
A rapid review is not a compromised review — it is a scoped review with transparent limitations. AI compresses abstract screening, data extraction, and first-draft synthesis from days to hours. The four non-negotiables — PECO-F scoping, equity check, hallucination verification, and disclosure note — cannot be traded for speed at any deadline. What can be compressed is database breadth, appraisal depth, and synthesis format. Every compression must be named in the limitations note. A rapid review with honest scope limitations is more useful to a decision-maker than a full review that arrives after the decision has been made. Session 4 addresses the hardest challenge: stress-testing a polished recommendation against India's specific contextual realities before it circulates.