What are the ethical limits of AI in evidence-based policy, and what does WHO's own ethical framework require of you when AI touches a decision that affects a billion people?
Level 3 assumes you can build a search, appraise a study, and write a brief. The Integration question is different: how do you use these tools responsibly when the stakes are institutional, the populations are vulnerable, and the political constraints are real? Ethics is not the last item on the checklist; it is the frame inside which all the technical skill operates. This session gives you that frame in concrete, applicable terms for WHO India work.
The stakes are asymmetric
When a WHO India evidence brief informs a health financing decision, such as an expansion of PM-JAY, a revision to state insurance benefit packages, or a resource allocation across districts, the consequences are not evenly distributed. A recommendation that optimises for cost-efficiency at the aggregate level can simultaneously reduce financial protection for the poorest quintile. An AI tool that systematically under-retrieves evidence on scheduled tribe populations will produce a brief that is technically defensible and distributionally unjust.
The ethical obligation is not simply to be accurate. It is to be accurate for the populations most at risk of being left out of the evidence base: in India, the poorest households, women, rural populations, and marginalised communities whose health-seeking behaviour and financial vulnerability are least represented in the published literature AI tools search.
WHO's six ethical principles, applied to India
WHO's 2021 Ethics and Governance of Artificial Intelligence for Health framework establishes six principles for AI use in health. They are not aspirational statements; they are operational requirements that apply directly to how this team uses AI for evidence synthesis.
01
Transparency
WHO principle: AI processes must be explainable
Anyone who receives a WHO India brief should be able to understand how the evidence was retrieved, what tools were used, and what their limitations are. AI-assisted synthesis that is not disclosed is not transparent.
India application: Every brief using AI-assisted retrieval must include a methods note stating which tools were used and their known limitations for LMIC evidence. The protocol log from Session 4 is this disclosure.
02
Accountability
WHO principle: Human responsibility cannot be delegated to AI
The brief's author is accountable for every finding and recommendation, regardless of which tool produced the first draft. "The AI said so" is not a defence when a recommendation causes harm. Human review and sign-off are non-negotiable.
India application: No AI-generated finding should appear in a WHO India brief without human verification against a primary source. The brief generator in Session 5 produces a first draft, not a final document.
03
Inclusiveness
WHO principle: AI must not perpetuate exclusion
AI tools that fail to retrieve evidence on marginalised populations do not just miss them; they actively perpetuate their exclusion from policy by making it appear the evidence does not exist. Absence of evidence is not evidence of absence.
India application: Every brief must explicitly state which populations are and are not covered by the retrieved evidence. If evidence on ST/SC populations, women in informal employment, or rural households is absent, that absence must be named, not silently treated as a zero.
04
Equity
WHO principle: AI must actively advance health equity
Equity is not served by neutral evidence synthesis. It requires actively seeking disaggregated evidence, weighting recommendations toward populations facing structural disadvantage, and flagging when aggregate findings mask distributional harm.
India application: The F dimension of PECO-F is an equity obligation, not an optional add-on. A brief on health insurance that does not disaggregate by income quintile has not met WHO's equity standard regardless of its methodological rigour.
05
Sustainability
WHO principle: AI use must be sustainable and context-appropriate
Tools and workflows must be usable by the team over time: not dependent on a single expert, not priced out of reach, not requiring infrastructure that India's health system cannot sustain. Evidence workflows that break when a staff member leaves are not sustainable.
India application: The surveillance plans and protocol templates built in this curriculum are designed to be transferable, documented, and team-maintained, not locked to any one person's prompt library or paid subscription tier.
06
Data Protection
WHO principle: Health data must be protected
Pasting sensitive health data (individual patient records, programme beneficiary lists, unpublished government data) into commercial AI tools may violate data protection obligations. The tool's training and storage practices determine the risk.
India application: Only publicly available or explicitly cleared data should be pasted into Claude, GPT, or Gemini for WHO India work. NSSO microdata, beneficiary records, and ministry working documents require explicit data governance clearance before AI processing.
Five lines AI must not cross
Beyond the six principles, there are five specific uses of AI in evidence synthesis that are ethically off-limits for WHO India work: not because the technology cannot do them, but because doing them would violate accountability, equity, or transparency obligations that WHO carries as a normative body.
1
AI as sole author of a policy recommendation
A recommendation that reaches a Ministry or WHO country office must carry human analytical judgment, not just a prompt output. AI can draft; it cannot decide. The moment a brief's recommendation is written by AI without substantive human revision, accountability has been delegated to a system that cannot be held responsible.
WHO India context: this applies even when the AI output looks authoritative, cites plausible sources, and is formatted correctly. Plausibility is not accuracy. Format is not review.
2
Undisclosed AI use in a submitted brief
Submitting an AI-assisted brief without disclosing the AI contribution violates WHO's transparency principle and, in some contexts, may violate institutional integrity policies. Disclosure does not require apologising for using AI; it requires naming the tools and their role honestly.
WHO India context: the methods note template in the disclosure box below satisfies this requirement. It takes 90 seconds to complete and should be standard in every AI-assisted brief.
3
Treating AI silence on a population as evidence of absence
If an AI tool returns no evidence on a specific population (scheduled tribes, women in informal employment, migrants), that is a retrieval gap, not a research finding. Stating "no evidence was found" without noting the retrieval limitations implies the literature was adequately searched when it was not.
WHO India context: for any brief that will inform resource allocation across India's demographic diversity, the phrase "evidence is limited" must be followed by an explanation of why: retrieval tool limitations, publication gaps, or genuine absence of research.
4
Using unverified AI-generated statistics in a circulated brief
AI tools hallucinate specific numbers (cost-effectiveness thresholds, coverage rates, expenditure percentages) with the same confident tone they use for verified facts. An unverified statistic in a brief that reaches a Ministry is a liability that cannot be recalled once circulated.
WHO India context: every specific number in a brief (a percentage, a cost ratio, an incidence figure) must be traceable to a named, locatable source. If it cannot be found, it must be removed before circulation.
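The traceability rule can be partially machine-checked before circulation. A minimal sketch, assuming a team convention of bracketed source tags such as [NHA 2023]; the tag format, regexes, and `untraced_numbers` helper are illustrative, not an existing WHO tool, and a human review still follows:

```python
import re

# Flag every sentence in a draft that contains a number but no bracketed
# source tag like [NHA 2023]. Tag convention is an assumed team norm.
SOURCE_TAG = re.compile(r"\[[^\]]+\d{4}\]")       # e.g. [NHA 2023]
NUMBER = re.compile(r"\d+(?:\.\d+)?\s*(?:%|percent)?")

def untraced_numbers(draft: str) -> list[str]:
    """Return sentences containing a numeric claim but no source tag."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", draft):
        if NUMBER.search(sentence) and not SOURCE_TAG.search(sentence):
            flagged.append(sentence.strip())
    return flagged

draft = ("OOP expenditure fell to 47.1% [NHA 2023]. "
         "Coverage reached 62% of eligible households.")
for claim in untraced_numbers(draft):
    print("Trace or remove before circulation:", claim)
```

A check like this cannot verify that a cited source is real or correct; it only guarantees that no bare number slips through unexamined.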
5
Allowing AI efficiency to compress the equity review
The speed of AI-assisted retrieval creates a new pressure: because a search can be done in hours instead of days, less time is allocated to the equity review that should follow it. Speed is not an excuse for skipping disaggregation. A faster brief that omits equity analysis is not better; it is faster and worse.
WHO India context: the time saved by AI retrieval should be reinvested in equity analysis, not absorbed by the next deliverable. This is a team norm, not a methodological suggestion.
AI disclosure: standard methods note
This template satisfies WHO's transparency requirement. Copy it into the methods section or appendix of any AI-assisted brief before circulation. Adapt the bracketed fields.
Standard AI disclosure note: copy and adapt
AI assistance disclosure
This evidence brief was produced with AI-assisted literature retrieval and synthesis. The following tools were used: [list tools, e.g. Elicit for structured extraction, Claude for document interrogation and first-draft synthesis]. AI-generated content was reviewed and verified by [name/role] against primary sources before inclusion. All specific statistics cited in this brief have been traced to named sources; unverified AI-generated figures were excluded.
Known limitations of AI retrieval for this brief: [e.g. grey literature not indexed by Elicit; indexing lag of approximately 4–6 weeks; LMIC evidence under-represented relative to OECD literature]. These limitations are addressed through supplementary manual searches of HTAIn, NHA, and WHO IRIS (see search protocol log, Appendix [X]).
Human accountability: The findings, evidence gaps, and recommendations in this brief reflect the analytical judgment of the named author(s). AI tools assisted retrieval and drafting; they did not determine the recommendation.
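Teams producing many briefs may prefer to render the note programmatically so no bracketed field is ever left unfilled. A minimal sketch in Python; the `disclosure_note` helper, field names, and example values are assumptions for illustration, not part of the WHO template itself:

```python
# Hypothetical helper: fill the standard disclosure note from named fields.
# str.format raises KeyError if any field is missing, so an incomplete
# note fails loudly instead of shipping with a blank.
DISCLOSURE = """AI assistance disclosure
This evidence brief was produced with AI-assisted literature retrieval and
synthesis. The following tools were used: {tools}. AI-generated content was
reviewed and verified by {reviewer} against primary sources before inclusion.

Known limitations of AI retrieval for this brief: {limitations}. These are
addressed through supplementary manual searches of {manual_sources}
(see search protocol log, Appendix {appendix}).

Human accountability: the findings, evidence gaps, and recommendations in
this brief reflect the analytical judgment of {authors}."""

def disclosure_note(**fields: str) -> str:
    """Render the methods note; raises KeyError on any missing field."""
    return DISCLOSURE.format(**fields)

note = disclosure_note(
    tools="Elicit (structured extraction), Claude (first-draft synthesis)",
    reviewer="the lead analyst",
    limitations="grey literature not indexed; LMIC evidence under-represented",
    manual_sources="HTAIn, NHA, and WHO IRIS",
    appendix="B",
    authors="the named author(s)",
)
print(note)
```

The fail-loudly behaviour is the point: a KeyError at drafting time is cheaper than an undisclosed tool at circulation time.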
Stress-test your planned AI use
Ethical stress-test: describe how you plan to use AI
Describe a specific way you are planning to use AI in a WHO India evidence synthesis task. The tool will audit it against WHO's six ethical principles and the five red lines, and identify any obligations you need to address before proceeding.
AI tool(s) you plan to use: which tools, for which steps
Population affected by the policy: who will the brief's recommendation affect?
Planned AI use (describe specifically): what will AI do, what will humans do, what will be disclosed?
Ethical audit: WHO principles applied
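The core of the stress-test can be sketched as a checklist over the five red lines: any planned use that answers "no" to a gate question has an obligation to address before proceeding. The question phrasings, dictionary keys, and `audit` helper below are an illustrative paraphrase of this session's red lines, not the actual tool's logic:

```python
# Five yes/no gates, one per red line. Answering False to any gate means
# the planned AI use crosses that line. Phrasings paraphrase this session.
RED_LINES = {
    "sole_author": "Will a human substantively revise every AI-drafted recommendation?",
    "disclosure": "Will the brief include the AI disclosure methods note?",
    "silence_as_absence": "Will retrieval gaps be named rather than reported as 'no evidence'?",
    "unverified_stats": "Is every statistic traced to a named, locatable source?",
    "equity_compression": "Is time saved by AI reinvested in equity analysis?",
}

def audit(answers: dict[str, bool]) -> list[str]:
    """Return the gate questions a planned use fails (any False answer)."""
    return [RED_LINES[key] for key, ok in answers.items() if not ok]

planned = {
    "sole_author": True,
    "disclosure": True,
    "silence_as_absence": False,  # draft currently says "no evidence found"
    "unverified_stats": True,
    "equity_compression": True,
}
for obligation in audit(planned):
    print("Address before proceeding:", obligation)
```

Passing all five gates is a floor, not a ceiling: the six principles still require judgment that no checklist can automate.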
Key takeaway
The ethical frame is not a constraint on good work; it is what makes the work defensible when it is challenged. WHO carries normative authority in India's health system. That authority depends on the trust that WHO evidence is produced with transparency, human accountability, and genuine attention to equity. AI tools that accelerate retrieval and synthesis are assets inside that frame. Outside it (undisclosed, unverified, equity-blind) they are liabilities. The disclosure note, the five red lines, and the stress-test in this session are the minimum ethical infrastructure for AI-assisted evidence work at WHO India. Session 2 takes up the next question: how do you embed this into the institutional workflows of a country office?