🔭 Level 2 of 3 · Session 3 of 5

Level 2 · Exploration

Critical Appraisal

Reading Studies That Read Back

How do you appraise AI-retrieved health economics evidence rigorously — and use AI as a sparring partner rather than an authority?

Clinical appraisal is not enough

Most critical appraisal training in public health is built around clinical questions: is the randomisation adequate, is blinding maintained, is the outcome validated? These are important. But for health economics and financing work, they are necessary but not sufficient. A study that passes every GRADE or CONSORT criterion can still be worthless for a WHO India brief if its cost perspective is wrong, its discount rate is not reported, its outcomes are not transferable, or its equity assumptions are hidden.

This session introduces the additional appraisal layer that health economists need — the questions that GRADE does not ask but that determine whether a cost-effectiveness finding, a financial risk protection estimate, or a health insurance evaluation is usable for policy in India.

Clinical appraisal asks
Is this study internally valid?
Was the intervention well-defined? Was randomisation adequate? Was follow-up complete? Were outcomes measured consistently? These questions establish whether the study's findings are credible within its own context.
Economic appraisal also asks
Is this study usable for India?
What cost perspective was used — societal, payer, household? Were costs adjusted for Indian price levels? Was the informal economy accounted for? Were equity weights applied? Is the intervention transferable to a federal LMIC with fragmented insurance coverage?

The ten appraisal questions for health economics studies

These ten questions cover both the internal validity layer (shared with clinical appraisal) and the economic transferability layer (specific to financing and UHC work). For each retrieved study, work through these in order. AI can help you answer them quickly β€” the tool at the end of this session does exactly that β€” but the questions themselves are yours to ask.

01 · Is the research question clearly stated?
What to look for: Explicit population, intervention, comparator, outcome — not buried in the introduction.

02 · What type of economic evaluation is this?
What to look for: Cost-effectiveness, cost-benefit, cost-utility, or cost analysis only? Full evaluations compare costs AND outcomes. Partial analyses are weaker evidence for allocation decisions.
India-specific flag (Check): HTAIn reference case requires full CEA or CUA for priority-setting submissions.

03 · What is the cost perspective?
What to look for: Societal (all costs), payer (government/insurer), or household (out-of-pocket)? The perspective determines what costs are included and whose financial protection is being measured.
India-specific flag (Critical): Payer-perspective studies systematically miss household OOP — the primary financial risk measure for WHO India work.

04 · Are costs reported with unit prices and quantities separately?
What to look for: Bundled cost figures cannot be transferred to another setting. Unit prices × quantities allows adjustment to Indian price levels.
India-specific flag (Critical): Without this, no cost transferability to India is possible.

05 · Is a discount rate reported for future costs and outcomes?
What to look for: Costs and outcomes occurring in future years must be discounted to present value. A missing discount rate makes long-run estimates uninterpretable.
India-specific flag (Check): India's HTAIn reference case recommends a 3% annual discount rate.

06 · Is uncertainty quantified?
What to look for: Sensitivity analysis, probabilistic analysis, or confidence intervals. A single point estimate with no uncertainty range is not a reliable basis for policy.

07 · Are equity outcomes reported?
What to look for: Are results disaggregated by income quintile, gender, geography, or caste? Aggregate findings hide distributional impact — who gains and who is excluded.
India-specific flag (Critical): India's NHP equity mandate requires distributional evidence; aggregate findings are insufficient for most WHO India briefs.

08 · Does the study account for the informal economy?
What to look for: In settings where 80–90% of workers are informal, studies assuming formal employment-linked coverage miss the primary target population for UHC expansion.
India-specific flag (Critical): India's informal sector is ~90% of the workforce — studies ignoring this are not transferable to PM-JAY or CBHI contexts.

09 · Is the comparator realistic for India?
What to look for: The counterfactual matters. "No intervention" is rarely the real alternative — the intervention competes with the existing mix of ESIS, CGHS, state schemes, and out-of-pocket care.
India-specific flag (Check): Studies comparing to "no coverage" overestimate incremental benefit relative to India's fragmented existing coverage.

10 · Are implementation and scale-up costs included?
What to look for: Efficacy-study costs underestimate real-world costs. Claims processing infrastructure, beneficiary identification, provider empanelment, and fraud control are material in India's context.
India-specific flag (Check): PM-JAY implementation evidence shows claims settlement costs adding 15–25% to programme cost in high-volume states.
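Questions 04 and 05 can be made concrete with a short sketch. The snippet below takes a hypothetical study that reports unit prices and quantities separately, re-prices the resource use with local unit costs, and discounts a three-year cost stream at the 3% rate the HTAIn reference case recommends. All unit costs and quantities are invented for illustration, not drawn from any real study.

```python
# Illustrative only: all prices and quantities below are invented.

# A well-reported study gives resource use (quantities) and unit prices separately.
quantities = {"outpatient_visit": 4, "inpatient_day": 2, "drug_course": 1}

# Transferability (Q04): keep the quantities, swap in local unit costs.
india_unit_prices = {  # INR, hypothetical
    "outpatient_visit": 150.0,
    "inpatient_day": 2000.0,
    "drug_course": 500.0,
}

def annual_cost(unit_prices, quantities):
    """Total annual cost as the sum of unit price x quantity per item."""
    return sum(unit_prices[item] * q for item, q in quantities.items())

cost_per_year_inr = annual_cost(india_unit_prices, quantities)

# Discounting (Q05): present value of a 3-year cost stream at 3% per year.
DISCOUNT_RATE = 0.03
present_value = sum(
    cost_per_year_inr / (1 + DISCOUNT_RATE) ** t for t in range(3)  # years 0, 1, 2
)

print(f"Annual cost (INR): {cost_per_year_inr:,.0f}")
print(f"3-year present value at 3% (INR): {present_value:,.0f}")
```

Note that the re-pricing step is only possible because the study separates unit prices from quantities; a single bundled cost figure (the Q04 critical flag) leaves nothing to swap.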

A worked appraisal — PM-JAY financial protection study

Here is how the ten questions apply to a representative type of study that commonly appears in AI-retrieved health financing searches for India:

📄 Representative abstract — apply the ten questions
"This study evaluates the impact of Pradhan Mantri Jan Arogya Yojana (PM-JAY) on out-of-pocket health expenditure among enrolled households in three Indian states. Using difference-in-differences estimation with NSSO 75th round data, we find that PM-JAY enrollment is associated with a 22% reduction in inpatient out-of-pocket expenditure (p<0.01). Subgroup analysis shows larger effects in urban households. The study uses a government payer perspective and reports costs in 2018 Indian rupees. No discount rate is applied as the study period is two years. Sensitivity analysis is not reported."
Q01 — Question stated?
Yes. Population (enrolled PM-JAY households), intervention (PM-JAY), comparator (non-enrolled households, implied by the difference-in-differences design), and outcome (inpatient OOP) are all identifiable.
Q02 — Evaluation type?
Partial — cost analysis only. No outcome measure beyond OOP reduction. Cannot be used for cost-effectiveness comparison or DALY-based priority-setting.
Q03 — Cost perspective?
Critical gap: Government payer perspective. Misses household OOP for outpatient care, informal payments, and travel costs — the dominant financial burden for poor households.
Q07 — Equity outcomes?
Critical gap: Subgroup analysis shows larger urban effects, but no income quintile disaggregation. Cannot determine whether the poorest BPL households benefit.
Q08 — Informal economy?
Partial: PM-JAY targets BPL households, but the study does not report whether enrolled households are formal or informal sector — critical for transferability to non-PM-JAY CBHI contexts.
Q06 — Uncertainty?
Critical gap: No sensitivity analysis. The 22% estimate is a point estimate with no range. Cannot assess robustness to model assumptions.
Overall usability
Useful as directional evidence that PM-JAY affects inpatient OOP. Not usable for equity analysis, cost-effectiveness comparison, or outpatient financial protection conclusions. Cite with explicit caveats on perspective and equity disaggregation gaps.
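The difference-in-differences logic behind the abstract's 22% estimate, and why Q06 flags an unqualified point estimate, can be sketched with invented numbers. All figures below are hypothetical, not taken from the study or from NSSO data.

```python
# Hypothetical mean inpatient OOP spending (INR); every number is invented.
enrolled_pre, enrolled_post = 20_000.0, 17_000.0   # PM-JAY enrolled households
control_pre, control_post = 21_000.0, 20_500.0     # non-enrolled comparison households

# Difference-in-differences: the change among the enrolled, net of the
# change among controls, which absorbs common time trends.
did = (enrolled_post - enrolled_pre) - (control_post - control_pre)
pct_reduction = -did / enrolled_pre * 100

print(f"DiD estimate: {did:,.0f} INR ({pct_reduction:.1f}% reduction)")

# This is exactly what Q06 flags: a single point estimate. Without a
# confidence interval or sensitivity analysis, its robustness is unknown.
```

A negative `did` indicates enrolled households' OOP fell relative to controls; the percentage expresses that fall against the enrolled baseline, mirroring how the abstract reports its headline figure.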

Use AI as the appraiser

🔭 Economic appraisal tool — paste any abstract or study description

Paste the abstract or a description of a health economics study. Add the policy context you are using it for. The tool will apply all ten appraisal questions and flag the critical gaps for WHO India use.


🎯 Key takeaway

Clinical appraisal answers "is this study valid?" Economic appraisal for WHO India work adds: "is this study usable here?" The three critical flags — cost perspective, equity disaggregation, and informal economy accounting — disqualify more retrieved studies than any clinical quality criterion. AI can apply the ten questions faster than any human reviewer, but you must supply the questions. The appraisal tool in this session is the fastest way to triage a set of retrieved studies before synthesis. Session 4 takes the studies that pass appraisal and asks when their numbers can legitimately be pooled.