Do Gooder
The Receipts

The impact washing problem

Rees Calder · 26 April 2026 · 7 min read


A charity's website says it has "reached 2 million people." What does "reached" mean? It might mean treated, trained, housed, or fed. It might mean sent a text message. It might mean existed in a geographic area where a programme operated. The word "reached" is the charity sector's most versatile accounting trick, and it's everywhere.

Impact washing, the practice of overstating charitable impact through vague or misleading metrics, is the nonprofit equivalent of greenwashing. It's not usually deliberate fraud. It's the natural result of a sector that needs to demonstrate impact to attract funding but lacks standardised ways to measure it.

How impact washing works

Four common patterns, drawn from the Hewlett Foundation's "Fixing Philanthropy" report (2024) and the Stanford Social Innovation Review's analysis of nonprofit reporting.

The reach inflate. Report the largest possible number of people who could conceivably have been affected by your work. A health education programme that trains 50 community health workers who each serve a village of 1,000 people reports "reaching 50,000 people," even though the actual impact on individual behaviour is unmeasured and almost certainly modest.

UNICEF's 2024 evaluation methodology review acknowledged this pattern, noting that "reach figures in organisational reports exceed independently verified beneficiary counts by an average of 3-8x across evaluated programmes."

The output-outcome swap. Report outputs (things you did) as outcomes (changes you caused). "We distributed 100,000 textbooks" is an output. "Children in our programme gained an additional 0.3 years of learning" is an outcome. The first requires a printer. The second requires evidence. Most charity reports are full of the first and empty of the second.

The Center for Global Development (2023) reviewed the annual reports of 50 major international development charities and found that 76% reported primarily on outputs rather than outcomes. Only 12% included any form of counterfactual analysis (what would have happened without the programme).

The attribution stretch. Claim credit for changes that would have happened anyway. A job training programme reports that "85% of graduates found employment within 6 months," without noting that the employment rate for demographically similar people who didn't participate was 78%. The programme's actual contribution was 7 percentage points, not 85.

This is the most technically complex form of impact washing because assessing it requires counterfactual reasoning. GiveWell's entire methodology is built around this question: what happens because of your donation that wouldn't have happened otherwise? Most charities never ask it.
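To make the arithmetic concrete, here is a minimal sketch of the adjustment in Python. The cohort size and both employment rates are hypothetical, borrowed from the job training example above; they are not from any real evaluation.

```python
# Illustrative counterfactual adjustment using the hypothetical
# job training figures above. All numbers are made up; a real
# evaluation would also need a comparison group that is actually
# comparable (matched or randomised), which is the hard part.

participants = 2_000                 # hypothetical cohort size
employed_rate_participants = 0.85    # "85% of graduates found employment within 6 months"
employed_rate_comparison = 0.78      # similar non-participants who found jobs anyway

# Headline claim: every employed graduate is counted as programme impact.
claimed = employed_rate_participants * participants

# Counterfactual-adjusted claim: only the difference over the comparison
# group is plausibly caused by the programme.
attributable = (employed_rate_participants - employed_rate_comparison) * participants

print(f"Headline figure:            {claimed:,.0f} graduates employed")
print(f"Attributable to programme:  {attributable:,.0f} additional jobs")
print(f"Share of headline that is real impact: {attributable / claimed:.0%}")
```

The subtraction itself is trivial. The expensive part is obtaining a credible comparison group, which is where the measurement cost problem below comes in.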

The cherry-pick. Report the most flattering metric from a mixed bag of results. A microfinance programme reports repayment rates (which are typically high, 95%+) rather than income gains (which are typically modest and sometimes negative). An education programme reports enrollment increases rather than learning outcomes.

The Innovations for Poverty Action review of microfinance evaluations (2024, synthesising six RCTs across four countries) found that "high repayment rates bear essentially no relationship to borrower welfare outcomes," yet repayment rates remain the primary metric reported by most microfinance institutions.

Why it persists

Impact washing isn't (mostly) malicious. Three structural forces drive it.

Funder incentives. Foundations and major donors want to report that their grants created large-scale impact. They pass this pressure to grantees, who inflate numbers to maintain funding. The Hewlett Foundation called this the "impact-measurement-industrial complex" in their remarkably candid 2024 self-assessment.

Competitive pressure. If Charity A reports "reaching 2 million people" and Charity B reports "improving learning outcomes for 12,000 children by 0.3 standard deviations," most donors give to Charity A. Honest reporting is penalised in a market that rewards big numbers. This is a classic race to the bottom.

Measurement cost. Rigorous impact evaluation is expensive. A randomised controlled trial costs $100,000 to $2 million. A quasi-experimental evaluation costs $50,000+. Even a decent monitoring and evaluation system costs 5-15% of programme budgets. For small charities operating on thin margins, rigorous measurement is genuinely unaffordable.

The Bridgespan Group's 2024 analysis found that the median US nonprofit spends 3% of its budget on monitoring and evaluation. The minimum needed for credible outcome measurement is roughly 8-12%.
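A back-of-the-envelope sketch of that gap, assuming a hypothetical small charity with a $500,000 programme budget (the percentages and evaluation costs are the ones cited above):

```python
# Back-of-the-envelope M&E affordability check. The programme budget is
# hypothetical; the percentages and evaluation costs are those cited above.

programme_budget = 500_000    # hypothetical annual programme budget for a small charity

current_spend = 0.03 * programme_budget   # Bridgespan: median nonprofit spends ~3% on M&E
credible_low = 0.08 * programme_budget    # ~8-12% needed for credible outcome measurement
credible_high = 0.12 * programme_budget
quasi_experimental = 50_000               # lower-bound evaluation cost cited above; an RCT starts around $100k

print(f"Current M&E spend (3%):      ${current_spend:,.0f}")
print(f"Credible M&E range (8-12%):  ${credible_low:,.0f}-${credible_high:,.0f}")
print(f"One quasi-experimental evaluation: ${quasi_experimental:,.0f} "
      f"({quasi_experimental / programme_budget:.0%} of the entire budget)")
```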

How to spot it

Five red flags in charity reporting.

Vague verbs. "Reached," "served," "supported," "impacted," "touched." If the charity can't specify what happened to the people it claims to have helped, the number is probably an output dressed as an outcome.

Missing denominators. "We trained 5,000 teachers" means nothing without knowing how many teachers exist in the target area, how many needed training, and what percentage of trained teachers actually changed their practice.

No counterfactual. "90% of participants reported improvement" is meaningless without knowing what percentage of non-participants also improved. Self-reported improvement after receiving attention is expected regardless of programme quality (this is the social desirability effect, documented extensively by Paulhus in 1984).

Suspiciously round numbers. Real data is messy. "Exactly 1 million children" is almost certainly an estimate or a target, not a measured outcome. Precision signals measurement. Round numbers signal approximation.

Metrics that only go up. If every annual report shows larger numbers than the last, the charity may be measuring cumulative outputs rather than annual outcomes. "We have now reached 10 million people since 2015" is a cumulative output. It says nothing about current effectiveness or whether impact per dollar is improving or declining.

What good reporting looks like

Three charities that report honestly, and what makes their reporting credible.

GiveDirectly. Reports transfer amounts, recipient counts, and links to every independent evaluation. Publishes negative findings (some recipients showed no lasting gains) alongside positive ones. Reports cost per dollar transferred, not "people reached."

Against Malaria Foundation. Reports nets distributed, nets verified in use (at 6, 12, 24, and 36 months), estimated cases averted (using WHO models, with uncertainty ranges), and cost per net delivered. The monitoring data is published online in full.

Deworm the World (Evidence Action). Reports children treated, cost per child, and explicitly links to the original Miguel and Kremer evidence base, including the contested Cochrane review. Acknowledges the uncertainty in the evidence and explains why they still recommend the intervention.

The common thread: specificity, transparency about limitations, and explicit connection between outputs and the evidence base for why those outputs should produce outcomes.

What to do

When evaluating a charity's claimed impact:

Ask "compared to what?" Every impact claim needs a comparison. More than last year? More than a control group? More than what would have happened without the programme? If there's no comparison, the number is descriptive, not evaluative.

Look for independent evaluation. Has the programme been evaluated by anyone other than the charity itself? GiveWell, J-PAL, 3ie (the International Initiative for Impact Evaluation), and IPA (Innovations for Poverty Action) all publish independent evaluations that are more reliable than self-reported metrics.

Prefer charities that report honestly about failure. Organisations that publish negative or mixed results are more trustworthy than those that report only successes. Honest reporting about what doesn't work is a stronger signal of organisational quality than impressive numbers about what allegedly does.

One sentence

When a charity says it "reached" a million people, ask what "reached" means, compared to what, and who measured it. The honest answers are usually smaller, messier, and more valuable than the headline.

Sources used: Hewlett Foundation "Fixing Philanthropy" self-assessment (2024), Stanford Social Innovation Review nonprofit reporting analysis (2024), UNICEF evaluation methodology review beneficiary count discrepancy finding (2024), Center for Global Development annual report review of 50 charities (2023), GiveWell counterfactual methodology documentation (2024), Innovations for Poverty Action microfinance RCT synthesis (2024), Bridgespan Group nonprofit M&E spending analysis (2024), Paulhus "Two-Component Models of Socially Desirable Responding" (Journal of Personality and Social Psychology, 1984). Full links in the planning doc.


Keep reading

Get the next one in your inbox: one thoughtful email, Tuesday mornings, free. Unsubscribe anytime, no guilt.