The cash benchmark
Rees Calder · 27 April 2026 · 7 min read
Before GiveDirectly existed, the implicit benchmark for charity evaluation was "better than nothing." A programme that improved some outcome for some people was considered successful. The bar was low, and most charities cleared it without much scrutiny.
GiveDirectly changed the question. The new benchmark: does this programme do more good per dollar than simply handing the dollar to a poor person? That reframing has quietly revolutionised how the most rigorous corners of the giving world evaluate impact. And most charities fail the test.
The benchmark in practice
IDinsight, a data analytics organisation that works with GiveWell and other evidence-based funders, ran a landmark study in 2019 comparing how recipients in Kenya valued different interventions versus cash. The methodology was elegant: ask recipients how much cash they would need to receive to be just as well off as they would be with the programme intervention.
The findings were uncomfortable. Recipients valued cash transfers at roughly 80-90 cents per dollar (the gap being transaction costs and uncertainty). They valued most programme interventions at 60-80 cents per dollar. In plain terms: for the majority of development programmes, recipients would prefer to just receive the money.
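The cash-equivalence logic can be sketched in a few lines. The numbers below are illustrative points within the ranges quoted above, not the study's actual estimates:

```python
# Hedged sketch of the recipient-valuation ("cash equivalence") comparison.
# value per dollar = the cash a recipient would accept as equivalent to the
# intervention, divided by what the intervention cost to deliver.

def value_per_dollar(cash_equivalent: float, delivery_cost: float) -> float:
    """Recipient's valuation of each dollar spent on the intervention."""
    return cash_equivalent / delivery_cost

# A $100 programme slot the recipient would trade for $70 in cash:
programme = value_per_dollar(cash_equivalent=70.0, delivery_cost=100.0)  # 0.70
# A $100 cash transfer valued at $85 (transaction costs, uncertainty):
cash = value_per_dollar(cash_equivalent=85.0, delivery_cost=100.0)       # 0.85

print(programme < cash)  # True: the recipient would rather have the money
```

Whenever the programme's value per dollar falls below cash's, the recipient would prefer the money, which is the study's headline result for most programmes.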
This doesn't mean all programmes are worthless. Some interventions, particularly health interventions with large externalities (bed nets, vaccines, deworming), were valued higher than their cost because recipients can't easily purchase public health infrastructure with cash. But the general finding held: the default assumption that expert-designed programmes outperform cash is wrong more often than it's right.
The Uganda experiment
The most rigorous head-to-head comparison comes from Blattman, Fiala and Martinez, published in the American Economic Review: Insights in 2020. The Ugandan government randomly assigned young adults to receive either cash grants (roughly $400 per person) or vocational training programmes (costing roughly $1,000 per participant including programme overhead).
At 4 years: Cash recipients had higher earnings, more assets, and better subjective wellbeing than training recipients. Cash was both cheaper and more effective.
At 9 years: Training recipients had caught up on most economic measures, and the two groups converged. But critically, the training programme cost 2.5x as much per participant. Even at convergence, cash was more cost-effective because it achieved similar long-term outcomes at a fraction of the price.
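The cost-effectiveness argument is just a ratio. A minimal sketch, using the rough per-participant costs from the study and a placeholder outcome gain (the gain is an illustrative number, not a figure from the paper):

```python
# Hedged sketch: why convergent outcomes at different costs favour cash.
# Costs ($400 cash, $1,000 training) come from the article; the outcome
# gain is an arbitrary placeholder to show the arithmetic.

def cost_effectiveness(outcome_gain: float, cost_per_participant: float) -> float:
    """Outcome gain achieved per dollar spent."""
    return outcome_gain / cost_per_participant

similar_gain = 100.0  # both arms converge to a similar long-run gain

cash = cost_effectiveness(similar_gain, cost_per_participant=400.0)
training = cost_effectiveness(similar_gain, cost_per_participant=1_000.0)

print(f"cash:     {cash:.3f} units/$")                         # 0.250
print(f"training: {training:.3f} units/$")                     # 0.100
print(f"cash is {cash / training:.1f}x more cost-effective")   # 2.5x
```

The ratio is independent of the placeholder gain: as long as the two arms end up at the same outcome, the cheaper arm wins by exactly the cost ratio.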
The paper's concluding line is worth quoting: "Programs should be benchmarked against cash transfers, and the default assumption should be that cash is more cost-effective unless proven otherwise."
What passes the benchmark
Three categories of interventions consistently outperform cash.
Public health interventions with externalities. Bed nets don't just protect the person sleeping under them. They reduce the mosquito population, protecting neighbours too. Vaccines create herd immunity. Deworming reduces environmental contamination. These positive externalities mean the social return exceeds the private return, which is exactly the condition under which expert-directed spending should beat individual cash decisions.
GiveWell's 2024 analysis estimates that their top health charities produce 5-10x more welfare per dollar than GiveDirectly. The gap is large enough to survive substantial uncertainty in the cost-effectiveness estimates.
Information interventions that correct beliefs. Some people make suboptimal decisions because they lack information, not money. Dupas (2011, American Economic Journal: Applied Economics) found that giving Kenyan teenagers information about the relative HIV risk of partners of different ages shifted their behaviour towards safer choices. Cash alone wouldn't have corrected the mistaken belief.
Coordination goods. Roads, wells, electricity grids, legal systems: these require collective investment that individual cash recipients can't coordinate. No amount of individual cash transfers builds a functioning road network. The CGD's "Cash or Condition?" working paper series (2018-2023) argues that the strongest case for programme-based aid is precisely in areas where collective action problems prevent individuals from purchasing the relevant goods.
What fails the benchmark
Most vocational training programmes. The evidence from multiple RCTs (McKenzie, 2017, World Bank Research Observer) shows that traditional classroom-based vocational training produces minimal long-term earnings gains relative to cost. Cash transfers to start businesses produce comparable or better results at lower cost.
Many microfinance programmes. The six landmark microfinance RCTs synthesised by Banerjee, Karlan and Zinman (2015, American Economic Journal: Applied Economics) found modest positive effects on business creation but no measurable impact on income, consumption, or welfare. Given that microfinance involves lending (with interest) while cash transfers are gifts, the comparison strongly favours cash for welfare improvement.
Most community development programmes with high overhead. Any programme where 40%+ of costs go to staff, offices, vehicles, and administration faces a steep hill. If only 60 cents of every dollar reaches beneficiaries in the form of services, and those services are worth 80 cents on the dollar to recipients, the effective value is 48 cents per dollar donated. Cash transfers, delivering 83-88 cents per dollar, win easily.
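The overhead arithmetic above can be checked in a few lines. The figures are the ones in the paragraph, not data from any specific charity:

```python
# Hedged sketch of the overhead arithmetic: value delivered per donated dollar
# when overhead eats a share and recipients discount the remaining services.

def effective_value(share_reaching: float, value_per_service_dollar: float) -> float:
    """Fraction of each donated dollar that turns into recipient-valued welfare."""
    return share_reaching * value_per_service_dollar

# 60% of costs reach beneficiaries as services, valued at 80 cents per dollar:
programme = effective_value(share_reaching=0.60, value_per_service_dollar=0.80)
cash_low, cash_high = 0.83, 0.88  # GiveDirectly delivery range from the text

print(f"programme: {programme:.2f} per donated dollar")        # 0.48
print(f"cash:      {cash_low:.2f}-{cash_high:.2f} per dollar")
```

Because the two discounts multiply, a programme with 40% overhead would need its services valued well above a dollar per dollar of cost just to match cash.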
The philosophical tension
The cash benchmark creates a genuine tension in the charity sector.
For donors: It simplifies decision-making enormously. If you can't evaluate a programme's impact relative to cash, just give cash. GiveDirectly becomes the default, and the burden of proof shifts to programme-based charities to demonstrate they beat it.
For charities: It raises the bar. Claiming "we help people" is no longer sufficient. The question becomes "do we help people more than cash would?" This makes some charity leaders deeply uncomfortable, because the honest answer for many programmes is "we don't know."
For the sector: It creates a healthy competitive pressure. Charities that can demonstrate they beat the cash benchmark have a powerful fundraising message. Those that can't are incentivised to either improve their cost-effectiveness or redirect their resources. The Center for Global Development has argued that widespread adoption of cash benchmarking would improve the effectiveness of the aid sector more than any other single reform.
What this means for you
Default to cash if unsure. If you can't evaluate a charity's cost-effectiveness relative to GiveDirectly, give to GiveDirectly. You're guaranteed to deliver 83-88 cents of every dollar directly to someone in extreme poverty. That's a floor, not a ceiling, but it's a very good floor.
Fund what beats cash. GiveWell's top recommendations (Against Malaria Foundation, Malaria Consortium, Helen Keller International, New Incentives) all beat the cash benchmark by estimated margins of 5-10x. If you want to maximise impact, these are the current best bets.
Ask the question. When evaluating any charity, ask: "Has this programme been compared to cash transfers?" If the answer is no, that's not disqualifying, but it should reduce your confidence in claimed impact. The charities that welcome this question are usually the ones worth funding.
One sentence
The most important question in charity evaluation isn't "does this work?" but "does this work better than just giving the money to poor people?" The answer changes where you should donate.
Sources used: IDinsight recipient valuation study for GiveWell (2019), Blattman, Fiala and Martinez "The Long-Term Impacts of Grants on Poverty" (American Economic Review: Insights, 2020), Dupas "Do Teenagers Respond to HIV Risk Information?" (American Economic Journal: Applied Economics, 2011), McKenzie "How Effective Are Active Labour Market Policies in Developing Countries?" (World Bank Research Observer, 2017), Banerjee, Karlan and Zinman "Six Randomized Evaluations of Microcredit" (American Economic Journal: Applied Economics, 2015), CGD "Cash or Condition?" working paper series (2018-2023), GiveWell cost-effectiveness analysis and GiveDirectly comparison (2024).