🌱 The Existing Process · Level 1 · Session 1
🌱 Level 1 of 5 · Session 1 of 5

Level 1 · Foundations

The Existing Process

How Evidence Becomes Policy — and Where It Doesn't

What is the current pipeline for literature review at WHO India, and which steps are consuming time that AI could return to you?

v4 · TMVES compression chain · five phases
The statistical backbone of this curriculum — from raw world-model θᵗ through composite loss, gradient policy, federated correction, to prognosis loop L(θᵗ⁺¹). Skip if equations aren't your thing. Open if you want to see why.

Where were you headed?

In our first session, before we looked at a single tool or pipeline diagram, we asked one question: when you set off on a literature review, what is your destination?

The answers were spread across the full length of the evidence pipeline — and every single one was correct. They just weren't all the same destination. That gap is what this curriculum is designed to close.

Click the answer that most closely matched yours.

Session 0 · Your answer
"When you set off on a literature review — what is your destination?"
A database search result — PubMed hits, Scopus exports, a set of titles to read · Phase I · Tensor · θᵗ
A screened list of papers — titles and abstracts triaged for relevance · Phase II · Matrix · L₀+Σwᵢ·Lᵢ
An extracted dataset — key findings, numbers, methods pulled from papers · Phase III · Vector · f(σ²,λ,ε)
A synthesis or summary — what the literature says, in aggregate · Phase IV · Eigenmode · γ|ε_FGT|²
A policy brief or recommendation — a decision-ready document for leadership · Phase V · Scalar · L(θᵗ⁺¹)
Where your destination sits in the compression chain
Your boss pointed here: the scalar — a policy brief or recommendation. That's Phase V: L(θᵗ⁺¹), the prognosis loop. Everything else is a step on the way, not the destination. This session maps exactly what happens between where most people stop and where decisions get made.
Why this matters
Every answer in that first session was honest. But the further left your destination sat, the more work was being left on the table — work that AI can now do. The pipeline below shows where time disappears, and the rest of this curriculum shows you how to get it back.

The pipeline, honestly described

Every evidence-to-policy cycle at WHO India moves through a recognisable sequence of steps. The process is not broken — it has produced genuine results, from informing PM-JAY design to shaping national NCD guidelines. But it is slow, and the slowness is not random. It concentrates at specific, identifiable points that are exactly where AI tools offer the most leverage.

Here is the pipeline as it typically runs for a health financing or UHC evidence brief:

Primary pipeline · five-phase compression chain · TMVES
I · Tensor · θᵗ — Policy question defined → Database search
II · Matrix · L₀+Σwᵢ·Lᵢ — Title / Abstract screening
III · Vector · f(σ²,λ,ε) — Full-text retrieval
IV · Eigenmode · γ|ε_FGT|² — Data extraction → Synthesis → Brief drafted
V · Scalar · L(θᵗ⁺¹) — Policy decision
Red steps — Database Search (I), Title/Abstract Screening (II), and Data Extraction (IV) — account for 60–70% of total review time. They map directly to Phases I, II, and IV of the TMVES chain. Session 2 maps 20 AI tools against exactly these three steps. The algebra terms are optional reading; the red blocks are not.
Session 2 tool map →

Deep dive: the v4 compression loss chain θᵗ → L₀+Σwᵢ·Lᵢ → f(σ²,λ,ε) → γ|ε_FGT|² → L(θᵗ⁺¹) — five phases mapping each pipeline stage to a statistical operator and an AI class, with ex-ante / ex-post separation
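Written out in one line, the five named operators form a single chain. The sketch below assumes the notation suggested by the phase labels on this page — L₀ a base loss, wᵢ component weights, and ε_FGT an equity error term in the Foster–Greer–Thorbecke style — since the exact definitions are not given here:

```latex
% The v4 compression chain, one operator per phase.
% Notation assumed from the phase labels; definitions are not given on this page.
\theta^{t}
  \;\longrightarrow\; L_{0} + \textstyle\sum_{i} w_{i}\, L_{i}
  \;\longrightarrow\; f(\sigma^{2}, \lambda, \varepsilon)
  \;\longrightarrow\; \gamma\,\lvert \varepsilon_{\mathrm{FGT}} \rvert^{2}
  \;\longrightarrow\; L(\theta^{t+1})
% Phase I   (Tensor):    current world-model parameters \theta^{t}
% Phase II  (Matrix):    composite loss, base term plus weighted components
% Phase III (Vector):    gradient policy in variance, regularisation, and error
% Phase IV  (Eigenmode): equity-weighted squared error (FGT-style term, assumed)
% Phase V   (Scalar):    loss of the updated parameters, closing the prognosis loop
```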


Five layers of AI in evidence work

Before mapping tools to tasks, it helps to know what kind of AI you're dealing with. The taxonomy below — drawn from how AI actually enters health systems — maps each layer to its role in a literature review pipeline. Read it against the pipeline above: World AI and Perception AI operate at the search end (Phases I–II); Agentic AI handles extraction (Phase III); Generative AI produces the synthesis (Phase IV); Embodied AI audits the whole chain for bias, equity, and compliance (closing the loop into Phase V).

AI Layer Taxonomy · Evidence Synthesis
From embodied wisdom to the world's raw data — five layers, five roles

Layer · Role in Evidence Synthesis · v4 Phase
Embodied AI — audits the entire workflow: bias detection, equity checks, PRISMA compliance · Phase V · Scalar · L(θᵗ⁺¹)
Generative AI — synthesizes findings into reports, policy briefs, and insights · Phase IV · Eigenmode · γ|ε_FGT|²
Agentic AI — automates screening, extraction, and repetitive tasks · Phase III · Vector · f(σ²,λ,ε)
Perception AI — ingests specific literature across journals, languages, and grey sources · Phase II · Matrix · L₀+Σwᵢ·Lᵢ
World AI — understands general science across all domains · Phase I · Tensor · θᵗ

No standard classification exists — this taxonomy is inspired by how AI actually enters health system procurement and workflow. Full reference →

The red steps account for roughly 60–70% of total review time in a standard health economics evidence review. They are also the steps most likely to be done inconsistently under time pressure, which means the quality of the final brief is often determined not by the synthesis but by what got dropped during screening.

The four bottlenecks specific to WHO India

Generic reviews of "how literature review works" are not what this session is for. The friction that WHO India financing and economics staff experience is specific. Here are the four most consequential pressure points:

01
The grey literature gap
A significant share of the evidence most relevant to Indian health financing — NSSO household surveys, state NHA accounts, NITI Aayog working papers, HTAIn assessments — never reaches PubMed or Cochrane. Standard database searches systematically miss it. Screening 200 journal abstracts while missing the 2023 HTAIn report on PM-JAY equity outcomes is not a search strategy; it is a selection bias.
02
The WEIRD-data bias
The indexed literature over-represents high-income country health systems. A search for evidence on community health insurance and catastrophic expenditure returns studies from the Netherlands, Canada, and South Korea before it surfaces comparable work from Kerala, Andhra Pradesh, or Rwanda. Without active filtering, AI tools inherit this bias — and can amplify it by defaulting to the most-cited results.
03
The clinical-economics confusion
Most literature review tools — and many default AI prompts — are designed around clinical questions. Searching for "impact of health insurance on health outcomes India" returns RCT literature on specific diseases rather than economic evaluations of insurance coverage, financial risk protection, or utilisation equity. The question framing determines the evidence retrieved. This is addressed directly in Session 3.
04
The brief-to-decision gap
Even well-conducted reviews regularly fail to reach decision-makers in usable form. A 45-page systematic review of health financing in LMICs does not serve a Joint Secretary at MoHFW under a two-day deadline. The translation step — from evidence synthesis to a one-page brief calibrated to budget constraints, political context, and equity commitments — is where most reviews stall. This is the subject of Session 5.
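The framing problem in bottleneck 03 is concrete enough to sketch in code. Below is a minimal, illustrative way to assemble a boolean search string from economic rather than clinical concept blocks — the `build_query` helper and the term lists are examples for this page, not a validated WHO search strategy:

```python
# Illustrative only: assemble a boolean search string from concept blocks.
# The helper and term lists are examples, not a validated search strategy.

def build_query(concept_blocks):
    """AND together concept blocks, where each block is an OR-group of synonyms."""
    groups = ["(" + " OR ".join(f'"{t}"' for t in terms) + ")"
              for terms in concept_blocks]
    return " AND ".join(groups)

# Economic framing: insurance exposure, financial-protection outcomes, Indian setting.
economic_query = build_query([
    ["health insurance", "PM-JAY", "community health insurance"],
    ["catastrophic health expenditure", "financial risk protection",
     "out-of-pocket expenditure"],
    ["India"],
])

print(economic_query)
```

Swapping the middle block for disease-outcome terms would reproduce the clinical framing the bottleneck describes — the structure of the query, not the tool, is what steers retrieval.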

What AI can and cannot fix

It is worth being precise about where AI tools intervene in this pipeline — and where they do not.

AI accelerates: database search construction, title and abstract screening, full-text data extraction, structuring a synthesis across multiple studies, drafting a first version of a brief. A task that takes two researchers three weeks of screening can often be reduced to a focused afternoon of prompt refinement, AI-assisted screening, and human verification.

AI does not replace: the judgment call about which question matters, the contextual knowledge that a particular Indian state has no functional secondary care infrastructure, the equity lens that asks who benefits and who is excluded, the political reading of what a Ministry will act on. These remain yours. This curriculum is about using AI to protect your time for the parts that only you can do.

A 2023 analysis of India's National Health Programme guidelines found that systematic review evidence was formally cited in fewer than half of NHP guideline documents reviewed — not because the evidence did not exist, but because the pipeline from production to decision was too slow and too fragmented to be useful at the moment of decision.

That gap is the reason this curriculum exists. The tools in the next session are not there to make literature review more academic — they are there to close the distance between evidence and the decision that needs it, before the window closes.

Audit your own process

🧪 Self-audit — where does your review time actually go?

For a recent or ongoing evidence brief, estimate how much of your total time falls at each stage. This takes 90 seconds and generates a personalised read on where AI will give you the most back.

Defining and scoping the question — clarifying the policy context, agreeing on the research question, setting inclusion criteria
Database search & grey literature hunt — building search strings, running searches, tracking down reports and government documents
Title / Abstract screening — reading through results to determine relevance; typically the single largest time sink
Data extraction — pulling numbers, methods, and findings from included papers into a usable format
Writing the synthesis and brief — turning evidence into a usable document for decision-makers
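The ninety-second audit above amounts to a few lines of arithmetic. A minimal sketch — the stage names follow the list above, and the hours are placeholders to be replaced with your own estimates:

```python
# Minimal self-audit: share of review time per stage, flagging the three
# stages this session identifies as AI-compressible. Hours are placeholders.

stage_hours = {
    "Defining and scoping the question": 10,
    "Database search & grey literature hunt": 18,
    "Title / Abstract screening": 32,
    "Data extraction": 20,
    "Writing the synthesis and brief": 20,
}
ai_compressible = {
    "Database search & grey literature hunt",
    "Title / Abstract screening",
    "Data extraction",
}

total = sum(stage_hours.values())
for stage, hours in stage_hours.items():
    flag = "  <- AI-compressible" if stage in ai_compressible else ""
    print(f"{stage}: {hours / total:.0%}{flag}")

compressible_share = sum(stage_hours[s] for s in ai_compressible) / total
print(f"Time AI could help return: {compressible_share:.0%}")
```

With these placeholder hours the compressible share comes out at 70% — inside the 60–70% band the session quotes for the red steps.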

🎯 Key takeaway

The existing process is not the enemy — it is the baseline. The three steps that consume the most time — database search, title/abstract screening, and data extraction — correspond to Phases I, II, and IV of the TMVES chain: θᵗ, L₀+Σwᵢ·Lᵢ, and γ|ε_FGT|². These are precisely the steps Session 2 maps to 20 AI tools. The goal of this curriculum is not to replace your judgment but to compress the mechanical steps so that more of your working week is spent on the parts that require it: defining the right question, applying the equity lens, and translating findings into something a decision-maker can act on — Phase V.