Smart Buys Methods and Best Practices

April 9, 2026

Smart Buys Lists provide decision-makers with concrete guidance on which interventions are most likely to be impactful and cost-effective for achieving a particular outcome, and under what conditions they can expect high cost-effectiveness. They categorize interventions based on their cost-effectiveness and the strength of the evidence supporting them.

Donors such as FCDO, GIZ, USAID, and the World Bank have commissioned Smart Buys lists to inform funding decisions. For example, the Global Education Evidence Advisory Panel (GEEAP) classifies interventions to improve learning in low- and middle-income countries as "Great Buys," "Good Buys," "Promising," or "Bad Buys" based on evidence of their effectiveness and cost-effectiveness.

This memo sets out best practices for conducting and assessing Smart Buys lists, including the common assumptions and decisions involved in constructing the comparative analysis that produces one. These lists serve various purposes for distinct audiences (e.g., guiding donor investments in implementation, assessing the generalizability of an intervention across contexts, identifying priority gaps in research). No single set of assumptions will be appropriate for all reviews. However, by laying out a framework for the different decision points, this memo seeks to 1) guide producers of Smart Buys lists toward high-quality evidence synthesis, 2) help end users of Smart Buys lists better interpret and act on the results, and 3) provide a framework for comparing Smart Buys lists to one another.

What are Smart Buys Lists?

At their core, Smart Buys Lists present an organized framework for comparing the available evidence on the costs and benefits of important and/or relevant development interventions. They group interventions targeting a particular outcome by their relative impact per monetary unit and the strength of their evidence base. There is no single, accepted method for compiling a list of smart buys, but the analyses typically share several common features. Organizations use a variety of names for these analyses (e.g., Best Buys, Best Bets, ImPACT Reviews, Smart Buys).

First, they start with a shared outcome of interest or a small set of outcomes (e.g., literacy and numeracy learning levels) and consult the global evidence base of rigorous evaluative research. The set of interventions considered in the analysis is typically selected based on a combination of: a literature review of evaluations measuring the outcome of interest, input from donors and implementers on priority interventions, and the availability of good data on impact and costs.

Second, Smart Buys Lists are a comparative cost-effectiveness analysis based on a review of the global impact evidence base. They consider both impacts and costs, and ultimately group interventions by their typical impact per monetary unit (e.g., additional years of schooling per $100) into several categories (e.g., Best Buy, Good Buy, Promising Buy, Bad Buy).
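The grouping step can be sketched as a simple bucketing rule. A minimal sketch follows; the interventions, effect sizes, evidence labels, and category thresholds are entirely hypothetical, chosen only to illustrate the logic, and real exercises combine cost-effectiveness estimates with qualitative judgment about evidence strength.

```python
# Hypothetical coding-sheet records: intervention name, an illustrative
# effectiveness metric (additional years of schooling per $100), and a
# coarse evidence-strength label.
studies = [
    {"intervention": "Targeted instruction", "impact_per_100usd": 1.2, "evidence": "strong"},
    {"intervention": "Generic hardware",     "impact_per_100usd": 0.1, "evidence": "strong"},
    {"intervention": "New pilot program",    "impact_per_100usd": 0.9, "evidence": "limited"},
]

def categorize(record, great=1.0, good=0.5):
    """Assign a Smart Buys category from cost-effectiveness and evidence strength.

    The thresholds `great` and `good` are invented for illustration.
    """
    if record["evidence"] == "limited":
        return "Promising"  # encouraging early results, but a thin evidence base
    if record["impact_per_100usd"] >= great:
        return "Great Buy"
    if record["impact_per_100usd"] >= good:
        return "Good Buy"
    return "Bad Buy"

for s in studies:
    print(s["intervention"], "->", categorize(s))
```

The point of the sketch is that the categorization depends on both dimensions at once: a high point estimate with limited evidence lands in "Promising" rather than "Great Buy."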

No investment can be cost-effective in and of itself, but rather only in comparison to others, for a particular outcome. The power of the Smart Buys method is to consider an outcome that you might want to improve, look at the available evidence on many different interventions that could move that outcome, and then, from there, make a comparative judgment on what set of interventions typically has the best relative cost-effectiveness.

Smart Buys lists are distinct from other types of impact research synthesis and cost analyses, such as cost-benefit analysis, literature reviews, systematic reviews, or other quantitative meta-analyses. In the case of cost-benefit analysis, all costs and benefits are monetized. Literature reviews typically scan the literature and identify key themes, but usually start with a set of interventions rather than an outcome, do not include costing information, and typically do not rank interventions by effectiveness. Systematic reviews emphasize transparency and reproducibility of study inclusion to avoid study selection bias. Quantitative meta-analyses harmonize data from multiple studies that measure the same outcome; these sometimes include cost data, but are usually focused on studies of a single intervention.

Smart Buys efforts aim to provide donors and implementers with timely guidance informed by available evidence. Because responsiveness is a priority, they are often limited in how systematic and representative they are. Smart Buys assemble evidence from impact evaluations, syntheses, and cost-effectiveness analyses, but do not attempt to harmonize or collect new cost or cost-effectiveness evidence. Smart Buys should include a framework for users to assess the quality of the assembled evidence, as well as guidance on how they can be used. Additionally, the lists can be updated as new evidence becomes available.

Use Cases for Smart Buys Lists

1. Providing an up-to-date picture of the evidence for consistently cost-effective interventions 

Smart Buys lists are primarily a public good that identifies the most consistently cost-effective interventions for a particular outcome. The lists can provide a policymaker-friendly picture of the state of the evidence on moving a specific outcome, and the potential cost-effectiveness at scale.

2. As an input to procurement language for institutional funders releasing requests for proposals

Smart Buys lists can inform procurement processes. Donors can integrate Smart Buys language into their requests for proposals to indicate a preferred set of interventions. Likewise, the lists can be valuable for donors when evaluating received proposals, both to assess the likely cost-effectiveness of the proposed intervention itself and to assess the appropriateness of evidence use and additional impact evaluation within the proposed activities. Beyond the recommended interventions alone, this can include a set of questions in a request for proposals to assess the appropriateness of the intervention for the context (e.g., government capacity, data on the presence of the underlying issue, etc.) or the feasibility of implementation.

This can also give prospective implementers a better idea of what is in scope for an RFP and where to include (or not!) more evidence generation as part of their proposal. For example, if an implementer proposes to implement a promising buy, where there is some encouraging early evidence but with key open questions, they could include an impact evaluation. Conversely, if a proposed activity is categorized as an unpromising buy, the implementer would need to provide more detail as to why they expect the intervention to be successful in this setting when it has not been in others. 

3. Identifying promising areas for further evidence generation

Smart Buys can help guide further investment towards the most promising areas for generating additional cost-effectiveness evidence. This could include recommending more quantitative synthesis for the most promising interventions. For example, a quantitative meta-analysis on an intervention categorized as a best buy could help identify the key drivers of impact and assess generalizability in new settings.

Methodological Decisions for a Smart Buys List

No single set of assumptions will be appropriate for all analyses. For example, development banks and governments may treat increased tax revenue as a negative cost, whereas donors may only be concerned with the implementation costs. However, for transparency and reproducibility, it is essential to include a clear description of all methodological decisions in a review. This section aims to outline the key decision points for conducting a comparative cost-effectiveness analysis. 

1. Defining the outcome of interest and the target population

Outcome of interest

To compare interventions based on their impact on a priority outcome, it is essential to define the outcome clearly, both to identify relevant evidence and to harmonize results across studies. The analysis can focus on a single outcome or consider several. For programs targeting multiple outcomes, one approach is to break the cost-effectiveness question into components and consider which intervention is likely to be most cost-effective for each outcome.

For example, the GEEAP describes its choice of outcome variable with the following: 

Outcome variable: This synthesis focuses on identifying interventions that are most cost-effective in improving learning in basic education, measured in terms of learning core cognitive skills (typically, literacy and numeracy in school-aged children, and cognitive proxies in early childhood). In addition, the Panel analyzed the impact of interventions on enrollment, attendance, and dropout.

USAID’s Improved Activity Cost Effectiveness (ImPACt) Review on women’s agricultural income describes their approach as: 

In order to compare interventions based on their impact on a priority outcome, this review adopted a specific definition of “women’s agricultural income” that helped harmonize results from many studies. This definition had to be flexible enough to cover the many types of women farmers that USAID targets, including woman-headed households, women engaged in sole production in male-headed households, or women producing jointly with a male partner.

This review focused on impacts on two aspects:

  • Income (of households, or women individually): Income is defined as the revenue, or ideally the profits, earned due to a woman’s engagement in agricultural production, either independently or jointly with other members of her household, in on-farm or off-farm activities.
  • Control of income (for women in male-headed households): The fact that a woman helped to generate income does not guarantee she will be able to decide how the income is used on her or her household’s needs. In addition to focusing on women’s ability to generate income, this review also prioritizes increasing women’s control of her own and her household’s income.

Target populations

In addition to the outcome of interest, it is important to consider the target population. In some cases, this may be a particular country context, sub-group of the population, or time frame for results. In others, it may be global. Regardless, the target population should be clearly defined, with the review considering trade-offs in cost-effectiveness for that group. 

From USAID’s Cost-Effectiveness Position Paper:

Best Practice 1.1: Identify the desired target outcome clearly in program design. The definition of cost-effectiveness relies on knowing an intervention’s impact, but impact on what outcome? Stakeholders’ priorities (in particular, local stakeholders), needs assessments, and other factors drive the selection and prioritization of outcomes. This outcome selection can be specific to target populations to improve equity, or the timeframe of impacts. Once those have been determined, cost-effectiveness enables an assessment of trade-offs in furthering those outcomes, usually in program design or early in implementation.

2. Inclusion criteria for impact evaluations

To bound the analysis and ensure the rigor and credibility of results, Smart Buys lists should include transparent criteria for the quality of the evidence used. Typically, the inclusion criteria for a Smart Buys list include some consideration of the evaluation methodology, the publication status of research, and the date of publication.

Evaluation methodology

In some cases, Smart Buys limit their analysis to results of randomized evaluations. This has the advantage of making the comparison of the credibility of results across studies easier. However, in some cases, there may be limited RCT evidence on a particular outcome of interest. In other cases, Smart Buys may also include other evaluation methodologies with credible counterfactuals, such as quasi-experimental evaluations, meta-analyses, etc. A best practice is to tag studies by methodology in the underlying coding sheets so that the robustness of results to restrictions by study type can be tested.
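A minimal sketch of what such a methodology tag in the coding sheet makes possible; the study records and effect sizes below are invented purely for illustration:

```python
# Hypothetical coding-sheet rows, each tagged with the evaluation methodology.
coding_sheet = [
    {"study": "A", "method": "RCT",                "effect_size": 0.20},
    {"study": "B", "method": "quasi-experimental", "effect_size": 0.35},
    {"study": "C", "method": "RCT",                "effect_size": 0.10},
]

def mean_effect(rows, methods=None):
    """Average effect size, optionally restricted to the given methodologies."""
    if methods is not None:
        rows = [r for r in rows if r["method"] in methods]
    return sum(r["effect_size"] for r in rows) / len(rows)

pooled = mean_effect(coding_sheet)                    # all studies
rct_only = mean_effect(coding_sheet, methods={"RCT"})  # robustness check
```

Because every row carries a `method` tag, re-running the analysis restricted to RCTs (or any other subset) is a one-line change rather than a new data-collection exercise.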

Smart Buys Lists may also utilize quantitative impact evaluations as a basis for the assessment of relative cost-effectiveness, and—where helpful—consult qualitative research to add detail to theories of change, causal mechanisms, and potential drivers of impact within intervention descriptions. 

Publishing status

Smart Buys exercises may choose to use only published evaluations, to include publicly available working papers, or a broader range of grey literature (such as internal reports). Using only papers published in peer-reviewed journals may hedge against results changing in the future, but, given the long timelines for publication, it may exclude more recent relevant evidence. One approach is to include only “clickable” working papers, which are publicly available on the internet. 

For example, the GEEAP describes their approach as: 

The team prioritized academic papers, including both those that had been published in academic journals and working papers. In some cases, the team included reports if they met the research standards. The team did not include studies if they did not have a publicly available write up. For example, the team did not include studies which had only been presented at a conference if the study did not have a written paper.

Date of publication

Typically, Smart Buys lists limit the papers consulted to those published after a cut-off date, for example, only papers published since 2000. This choice can reflect the team's capacity to produce the analysis. Additionally, because relevant contextual factors may change significantly over time, authors may choose to include only more recent papers.

3. Literature Search

The effort to identify "best buys" necessitates a search process whose extensiveness, transparency, and reproducibility vary based on the goals, the existing evidence base, and the volume of published material. Search activities for Smart Buys can encompass a wide range of options. These include:

  • Identifying anchor studies: e.g., meta-analyses, or systematic reviews.
  • Expanding the search: e.g., utilizing reference-mining, expert contact, and snowball sampling. 

Regardless of the search method, these activities can be complemented by:

  • Forming an expert advisory board to help pinpoint high-quality studies.
  • Eliciting insights from implementers when evidence is limited. This is crucial for identifying new and emerging best buys that warrant further impact evidence generation.

Furthermore, involving end users in the search process can be highly beneficial. This can be achieved through their participation in the advisory board, a project steering committee, or as a separate dedicated effort. Understanding end users' current perspective on the most effective interventions for the target outcomes and populations will help appropriately frame the findings of the Smart Buys review.

4. Costing

Smart Buys require good estimates of the cost of implementing an intervention, typically sourced from the underlying papers rather than new cost collection. 

For the analysis, it is important to decide which costs (including costs for whom) will be included or excluded. This may vary based on the primary audience or target population (for example, a program officer at a donor agency may be primarily interested in the costs incurred directly by the fund, whereas a development bank building a project into a loan may consider full costs to the government, but not transfers).
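The costing-perspective decision can be made concrete with a small sketch. The line items, payers, and figures below are entirely hypothetical; the point is only that the same program yields different cost totals depending on whose costs count and whether transfers are included.

```python
# Hypothetical cost line items for one program.
line_items = [
    {"item": "Teacher training",    "cost": 40_000, "payer": "donor",      "transfer": False},
    {"item": "Government salaries", "cost": 25_000, "payer": "government", "transfer": False},
    {"item": "Cash transfers",      "cost": 10_000, "payer": "donor",      "transfer": True},
]

def total_cost(items, payers=None, include_transfers=True):
    """Sum costs under a chosen perspective (which payers, whether transfers count)."""
    total = 0
    for i in items:
        if payers is not None and i["payer"] not in payers:
            continue
        if not include_transfers and i["transfer"]:
            continue
        total += i["cost"]
    return total

donor_view = total_cost(line_items, payers={"donor"})        # donor-incurred costs only
bank_view = total_cost(line_items, include_transfers=False)  # full costs, excluding transfers
```

Under the donor perspective the program costs $50,000; under the bank perspective it costs $65,000. Since cost-effectiveness is impact divided by cost, the choice of perspective can change the ranking of interventions, which is why it must be stated explicitly.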

Best practices for Smart Buys Lists

Producers of Smart Buys Lists can take several steps to ensure the final analysis is as useful as possible. 

A detailed description of methodological decisions increases transparency and can make the results more credible by making the analysis easier to reproduce. For example, the GEEAP report includes an appendix outlining its approach to classifying interventions in detail.

Include clear definitions of each intervention based on the programs studied. This should include a description of the common features of the intervention across studies and any indication of the key components driving impact.

In some cases, policymakers may want to look at only a subset of interventions that are feasible in their context, or build additional outcomes onto an existing analysis. For example, a funder with resources earmarked for primary-age education might be interested in only a subset of the interventions assessed by the GEEAP. Publishing the underlying data and coding sheets from a Smart Buys exercise can facilitate this type of use and enable replication of the analysis and deeper quantitative analysis.

For interventions identified as Smart Buys, additional work can help donors and implementers adopt the recommendations: for example, case studies or implementation guides that lay out the key considerations and steps for implementing the intervention in a new context.

Most Smart Buys lists include an independent advisory panel, typically made up of a combination of experts in the field: economists with experience evaluating relevant programs, academic sector experts, and policymakers.

 
