2022 Federal Index


U.S. Agency for International Development

Score
9
Leadership

Did the agency have senior staff members with the authority, staff, and budget to build and use evidence to inform the agency’s major policy and program decisions in FY22?

1.1 Did the agency have a senior leader with the budget and staff to serve as the agency’s Evaluation Officer or equivalent (example: Evidence Act 313)?
  • In compliance with the Foundations for Evidence-Based Policymaking Act, the administrator of USAID appointed the agency evaluation officer through an internal executive message that was shared with the agency on June 4, 2019.
  • The agency’s evaluation officer is a senior staff member who works directly with the director of the Office of Learning, Evaluation and Research (LER) in the Bureau for Policy, Planning and Learning (PPL), who, together with that office, helps the agency build a body of evidence from which to learn and adapt programs. The LER director has the authority, staff, and budget to ensure agency evaluation requirements are met, including that all projects are evaluated at some level and that decision-making is informed by evaluation and evidence. The LER director oversaw approximately 40 staff and an estimated $8,800,000 budget in FY21.
  • The Bureau for PPL aligns policy, resources, and evidence-based programming. Through LER, it elevates evaluation as a source of evidence by focusing on the agency’s capacity to generate, manage, and use evidence. The office plays a leadership role in implementing Title I of the Evidence Act, including creating and developing the Agency Learning Agenda and the Annual Evaluation Plan and assessing how USAID staff manage and use evidence in implementing policies and strategies. In 2022, the office developed and published a new Agency Learning Agenda that incorporated the Biden administration’s priorities.
1.2 Did the agency have a senior leader with the budget and staff to serve as the agency’s Chief Data Officer or equivalent [example: Evidence Act 202(e)]?
  • In compliance with the Foundations for Evidence-Based Policymaking Act, USAID established the role of the chief data officer in 2019. The chief data officer manages the USAID Data Services team, which focuses on improving the use of data and information to ensure that the agency’s development outcomes are supported and enhanced by evidence. The chief data officer’s team includes four direct hire data science and information technology professionals along with a budget for contract professionals who provide a comprehensive portfolio of data services in support of the agency’s mission. The chief data officer oversaw approximately 83 contract staff and an estimated $106,000,000 budget in 2022. The chief data officer is a senior career civil servant, and the USAID Data Services team is regularly called upon to generate products and services to support the agency’s highest priorities. The agency also invests in other complementary positions, including the chief innovation officer, chief geographer, chief economist, chief scientist, and other key roles that enhance the use of evidence across the agency.
1.3 Did the agency have a governance structure to coordinate the activities of its evaluation officer, chief data officer, statistical official, performance improvement officer, and other related officials in order to support Evidence Act implementation and to improve and evaluate the agency’s major programs?
  • The agency currently uses several governance structures and processes and will be updating these in accordance with guidance from the U.S. Office of Management and Budget (OMB) related to the Foundations for Evidence-Based Policymaking Act. Some notable current examples include:
    1. Data Board: In September 2019, USAID established a Data Administration and Technical Advisory (DATA) Board, as mandated by the Foundations for Evidence-Based Policymaking Act of 2018 (Evidence Act) and subsequent guidance from the OMB in Memoranda M-19-18 and M-19-23. The DATA Board acts as USAID’s data governance body. It serves as a central venue for seeking input from agency stakeholders regarding data-related priorities and best practices to support agency objectives. It informs data-related policy, procedures, and standards for the agency. It supports the work of the agency evaluation officer by directing data services to facilitate evaluations. In addition to the agency evaluation officer, chief data officer, and statistical official, its membership includes the performance improvement officer, the chief financial officer, the chief technology officer, the senior agency official for privacy, and the USAID geographer as well as broad representation from across the agency including overseas missions. The USAID chief data officer, agency evaluation officer, and statistical official confer regularly to coordinate policy and activities.
    2. Management Operations Council: The agency also uses a Management Operations Council as the platform for agency leadership to assess progress toward achieving the strategic objectives in USAID’s Strategic Plan, cross-agency priority goals, and additional management issues. Established in 2014, the Management Operations Council provides agency-wide leadership for initiatives and investments to reform USAID business systems and operations worldwide. It also provides a platform for senior leaders to learn about and discuss improving organizational performance, efficiency, and effectiveness. It is co-chaired by the assistant administrator for the Bureau for Management and the agency’s chief operating officer. Membership includes, among others, all the agency’s chief officers (e.g., the senior procurement executive, chief human capital officer, chief financial officer, chief information officer, performance improvement officer, and project management improvement officer). Depending on the agenda, the council may also include the chief data officer, agency evaluation officer, and agency senior statistical official.
    3. Weekly/Monthly Meetings among the Chief Data Officer, Chief Evaluation Officer, and Statistical Official: The agency established a standing meeting between the chief data officer’s team and leadership from the Office of LER, which manages agency requirements on performance monitoring, evaluation, and organizational learning. Because this meeting predated the first meetings of the federal Chief Data Officers Council and Evaluation Officer Council, it was critical for information sharing and addressing priorities. The chief data officer’s team also maintains an internal dashboard that is shared with the evaluation officer and statistical official to help track progress against milestones on an ongoing basis.
    4. Privacy Council Meetings: The agency holds monthly Privacy Council meetings to address necessary actions and raise any privacy and confidentiality concerns. Representation includes the senior agency official for privacy, the agency statistical official, and the chief privacy officer, among others.
Score
10
Evaluation & Research

Did the agency have an evaluation policy, evaluation plan, and learning agenda (evidence building plan) and did it publicly release the findings of all completed program evaluations in FY22?

2.1 Did the agency have an agency-wide evaluation policy [example: Evidence Act 313(d)]?
  • The agency-wide USAID Evaluation Policy, published in January 2011 and updated in October 2016 and April 2021, incorporates changes that better integrate with USAID’s Program Cycle Policy and ensure compliance with the Foreign Aid Transparency and Accountability Act and the Foundations for Evidence-Based Policymaking Act of 2018. The 2021 changes to the evaluation policy updated evaluation requirements to simplify implementation and increase the breadth of evaluation coverage, dissemination, and utilization.
  • The 2021 changes also established new requirements that will subject the majority of program funds to external evaluations. The requirements include (1) at least one evaluation per intermediate result defined in the operating unit’s strategy; (2) at least one evaluation per activity (contracts, orders, grants, and cooperative agreements) with a budget expected to be $20,000,000 or more; and (3) an impact evaluation for any new, untested approach anticipated to be expanded in scale and scope (see the illustrative sketch below). These requirements are communicated primarily through the USAID Automated Directives System (ADS) 201.
  • The Evaluation Policy treats consultation with in-country partners and beneficiaries as essential and requires that evaluation reports include sufficient local contextual information. To make the conduct and practice of evaluations more inclusive and relevant to the country context, the policy requires that evaluations be consistent with institutional aims of local ownership through respectful engagement with all partners, including local beneficiaries and stakeholders, while leveraging and building local capacity for program evaluation. Accordingly, the policy expects that evaluation specialists from partner countries who have appropriate expertise will lead and/or be included on evaluation teams. In addition, USAID focuses its priorities within its sectoral programming on supporting partner government and civil society capacity to undertake evaluations and use the results generated. Data from the USAID Evaluation Registry indicate that annually about two-thirds of evaluations were conducted by teams that included one or more local experts. However, while local experts are often included on teams, it remains rare for a local expert to lead a USAID evaluation.
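As a rough illustration of the evaluation triggers described in 2.1 (a sketch only; the field names and the expression of the rules as code are assumptions for illustration, not USAID’s actual systems):

```python
# Illustrative sketch of the ADS 201 evaluation triggers described above.
# Field names and this rules-as-code framing are assumptions, not USAID's systems.

EVALUATION_BUDGET_THRESHOLD = 20_000_000  # USD, per the 2021 policy update

def required_evaluations(activity: dict) -> list[str]:
    """Return which evaluation requirements apply to a hypothetical activity record."""
    required = []
    if activity.get("budget_usd", 0) >= EVALUATION_BUDGET_THRESHOLD:
        required.append("external evaluation (budget of $20,000,000 or more)")
    if activity.get("new_untested_approach") and activity.get("planned_to_scale"):
        required.append("impact evaluation (untested approach to be expanded in scale)")
    return required

print(required_evaluations({"budget_usd": 25_000_000,
                            "new_untested_approach": True,
                            "planned_to_scale": True}))
```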
2.2 Did the agency have an agency-wide evaluation plan [example: Evidence Act 312(b)]?
  • Since implementation of the Evidence Act began, USAID has produced two agency-wide annual evaluation plans: the Annual Evaluation Plan for FY22 and the Annual Evaluation Plan for FY23. These plans fulfill the Evidence Act requirement that all federal agencies develop an annual evaluation plan describing the significant evaluation activities the agency plans to conduct in the fiscal year following the year in which the plan is submitted. The plans contain significant evaluations that each address a question from the agency-wide Learning Agenda; performance evaluations of activities with budgets of $40,000,000 or more; impact evaluations; and ex-post evaluations.
  • In addition, USAID has an agency-wide evaluation registry that collects information on all evaluations planned to commence within the next three years (as well as tracking ongoing and completed evaluations). Currently, this information is accessible and used internally by USAID staff and is not published. To meet the Evidence Act requirement, in March 2022, USAID published its Annual Evaluation Plan for FY23 on the Development Experience Clearinghouse (DEC). A draft agency-wide evaluation plan for FY24 will also be submitted to OMB in September 2022 as part of the Evidence Act deliverables.
  • In addition, USAID’s Office of LER works with bureaus to develop internal annual Bureau Monitoring, Evaluation and Learning Plans that review evaluation quality and evidence building and use within each bureau and identify challenges and priorities for the year ahead.
2.3 Did the agency have a learning agenda (evidence building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to, the general public, state and local governments, and researchers/academics in the development of that agenda (example: Evidence Act 312)?
  • The agency-wide learning agenda for USAID was first established in 2018, prior to the passing of the Evidence Act. Traditionally, USAID adopts a strongly consultative process with internal and external stakeholders to inform its priority learning needs in developing its learning agendas. Throughout the implementation of its learning agenda, USAID continues to engage external stakeholders through learning events and invitations to share evidence and by making learning agenda products and resources publicly available.
  • As priorities shift, the Agency Learning Agenda must adapt to continue to meet the learning needs of the agency. A new Agency Learning Agenda, developed and published in 2022, incorporates current agency priorities and aligns with the FY22-26 Joint Strategic Plan. This learning agenda contained questions in key agency priority and policy areas including operational effectiveness; resilience to shocks; climate change; anti-corruption; affirmative development; migration; diversity, equity, inclusion, and accessibility; locally led development; and sustainability. Through implementation of the Agency Learning Agenda, the agency is committed to furthering the generation and use of evidence to inform its policies, programs, and operations in these critical priority areas.
  • Stakeholder consultations with internal and external stakeholders were central to the learning agenda development process. These consultations captured a small, prioritized set of agency learning needs related to agency policy priorities and identified opportunities for collaboration with key stakeholders on this learning. The Agency Learning Agenda team also consulted mission staff from across all of the regions in which USAID operates, as well as Washington operating units, to capture a diversity of internal voices. Consultations with external stakeholders included a selection of congressional committees, interagency partners (e.g., the Department of State), other donors, think tanks, nongovernmental researchers, and development-focused convening organizations. The Agency Learning Agenda incorporates feedback gathered through these stakeholder consultations, inputs from the joint strategic planning process with the Department of State, and a stocktaking of the previous learning agenda’s implementation to arrive at a prioritized set of questions that will focus agency learning on top policy priorities from 2022 through 2026.
2.4 Did the agency publicly release all completed program evaluations?
  • To increase access to and awareness of completed evaluation reports, USAID has created an Evaluations at USAID dashboard of completed evaluations starting from FY16. The dashboard includes an interactive map showing countries and the evaluations completed in each, by fiscal year. Using filters, completed evaluations can be searched by operating unit, sector, evaluation purpose, evaluation type, and evaluation use. The dashboard also reports the percentage of USAID evaluations conducted by teams that include local evaluation experts. The information for FY21 is being finalized and will be used to update the dashboard. The dashboard has also served as a resource for USAID missions. For example, in USAID/Cambodia and USAID/Azerbaijan, the dashboard was used to provide annotated bibliographies to inform the design of civic engagement activities.
  • In addition, all final USAID evaluation reports are published on the DEC, except for a small number of evaluations that receive a waiver of public disclosure (typically less than 5% of the total completed in a fiscal year). The process to seek a waiver of public disclosure is outlined in the document Limitations to Disclosure and Exemptions to Public Dissemination of USAID Evaluation Reports and includes exceptions for circumstances such as those when “public disclosure is likely to jeopardize the personal safety of U.S. personnel or recipients of U.S. resources.”
  • A review of evaluations as part of an equity assessment report to OMB (in response to the Racial and Ethnic Equity Executive Order) found that evaluations that include analysis of racial and ethnic equity are more likely to be commissioned by USAID’s Africa Bureau and by USAID programs in Ethiopia, Tanzania, Kenya, Liberia, Ghana, Uganda, Malawi, Indonesia, India, Cambodia, Kosovo, and Colombia. Reports on agriculture, education, and health programs most often use the words “race” and “ethnicity” in their evaluation findings.
2.5 Did the agency conduct an Evidence Capacity Assessment that addressed the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts [example: Evidence Act 315, subchapter II (c)(3)(9)]?
  • The agency recognizes that sound development programming relies on strong evidence that enables policymakers and program planners to make decisions, improve practice, and achieve development outcomes. As one of the deliverables of the Evidence Act, a capacity assessment was submitted to OMB and published in March 2022. This report provided an initial overview of the coverage, quality, methods, effectiveness, and independence of statistics, evaluation, research, and analysis functions and activities within USAID. The report demonstrated that evaluations conducted by operating units cover the range of program areas of USAID foreign assistance investment. Economic growth; health; and democracy, human rights, and governance accounted for more than three-quarters of evaluations completed by the agency in FY21.
  • The Capacity Assessment for Statistics, Evaluation, Research, and Analysis found that USAID staff use evidence from a variety of sources when they design USAID activities. Using quantitative data from a staff survey and qualitative data from key informant interviews, focus group discussions, and a data interpretation workshop, the capacity assessment used a maturity matrix benchmarking tool to assess USAID’s capacity to generate, manage, and use evidence. The tool was used to rate the agency’s maturity on five elements that are most critical for evidence generation, management, and use: (1) resources, (2) culture, (3) collaborating, (4) learning, and (5) adapting.
  • USAID staff also review evaluation quality on an ongoing basis and review the internal Bureau Monitoring, Evaluation and Learning Plans referenced in 2.2 above. Most recently, USAID completed a review of the quality of its impact evaluations, assessing all 133 USAID-funded impact evaluation reports published between FY12 and FY19. In addition, several studies have examined parts of this question over the past several years. These include GAO reports, such as Foreign Assistance: Agencies Can Improve the Quality and Dissemination of Program Evaluations and From Evidence to Learning: Recommendations to Improve Foreign Assistance Evaluations; reviews by independent organizations, like the Center for Global Development’s Evaluating Evaluations: Assessing the Quality of Aid Agency Evaluations in Global Health, Working Paper 461; and studies commissioned by USAID, such as Meta-Evaluation of Quality and Coverage of USAID Evaluations 2009-2012. These studies generally show that USAID’s evaluation quality is improving over time, with room for continued improvement.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
Score
9
Resources

Did the agency invest at least 1% of program funds in evaluations in FY22 (examples: impact studies; implementation studies; rapid cycle evaluations; evaluation technical assistance; and rigorous evaluations, including random assignments)?

3.1 USAID invested at least $211,600,000 on evaluations, evaluation technical assistance, and evaluation capacity-building, representing 1.1% of the agency’s $18.4 billion FY22 budget.
  • In FY21 and prior years, USAID invested at least $211,600,000 in a combination of evaluations completed in FY21, evaluations that are ongoing in FY22, evaluation technical assistance, and evaluation capacity building, representing 1.1% of the agency’s $18,400,000,000 FY22 budget.
3.2 Did the agency have a budget for evaluation and how much was it (were there any changes in this budget from the previous fiscal year)?
  • In FY21 and prior years, USAID operating units invested approximately $84,400,000 on 94 evaluations that were completed in that fiscal year. Another 181 evaluations were ongoing in FY22 (many spanning more than one year), with total ongoing evaluation budgets estimated at $121,000,000. The Office of LER’s budget for evaluation technical assistance and evaluation capacity building in FY21 was $6,600,000, resulting in a budgeted total of $211,600,000, or 1.1% of the agency’s $18,400,000,000 FY22 budget (see the arithmetic check below). This total does not include evaluation capacity building by other agency offices or field missions, or other research, studies, analysis, or data collection that is often used for evaluation, such as USAID’s investment in the Demographic and Health Surveys Program or some of the assessments done by third parties across USAID’s innovation portfolio. It also does not include funding by agency subcomponents for evaluation technical assistance.
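As a back-of-envelope check on the figures in 3.1 and 3.2 (a sketch only; the component amounts are rounded, which is why their sum differs slightly from the reported total):

```python
# Arithmetic check of the figures reported above. All values are taken from
# the text; component figures are rounded, so the sum is approximate.
completed_fy21  =  84_400_000     # 94 evaluations completed in FY21
ongoing_fy22    = 121_000_000     # 181 evaluations ongoing in FY22 (multi-year budgets)
ler_ta_capacity =   6_600_000     # Office of LER technical assistance and capacity building
reported_total  = 211_600_000     # total as reported
fy22_budget     = 18_400_000_000  # agency FY22 budget

print(f"component sum: ${completed_fy21 + ongoing_fy22 + ler_ta_capacity:,}")  # ~ reported total
print(f"share of FY22 budget: {reported_total / fy22_budget:.2%}")             # 1.15%, reported as 1.1%
```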
3.3 Did the agency provide financial and other resources to help city, county, and state governments or other grantees build their evaluation capacity (including technical assistance funds for data and evidence capacity building)?
  • While specific data on this topic are limited, USAID estimates that investment in contracts or grants that provide support to build local organizational or governmental capacity in data collection, analysis, and use could be as high as $250,000,000.
  • For example, USAID’s Data for Impact (D4I) activity helps low- and middle-income countries, primarily in sub-Saharan Africa, to increase their capacity to use available data and generate new data to build evidence for improving health programs and health policies and for decision-making. The goal of D4I is to help low-resource countries gather and use information to strengthen their health policies and programs and improve the health of their citizens.
  • In another example, the MEASURE Evaluation project, funded by USAID, has a mandate to strengthen health information systems (HIS) in low-resource settings. This project enables countries to improve lives by strengthening their capacity to generate and use high-quality health information to make evidence-informed strategic decisions at local, subregional, and national levels.
Score
10
Performance Management / Continuous Improvement

Did the agency implement a performance management system with outcome-focused goals and aligned program objectives and measures and did it frequently collect, analyze, and use data and evidence to improve outcomes, return on investment, and other dimensions of performance in FY22?

4.1 Did the agency have a strategic plan with outcome goals, program objectives (if different), outcome measures, and program measures (if different)?
  • USAID partners with the U.S. Department of State to jointly develop and implement clear strategic goals, strategic objectives, and performance goals, which are articulated in the FY22-26 U.S. Department of State-USAID Joint Strategic Plan. The FY22-26 joint strategic plan incorporates issues of racial equity, diversity, and inclusion as part of the planning process. It includes five goals and nineteen objectives, including modernizing information technology and leveraging data to inform decision-making and support mission delivery. The FY22-26 joint strategic plan also includes a section on evidence building, and USAID’s and the Department of State’s respective learning agendas are included in the annex.
  • The agency measures progress toward its own strategic goals, strategic objectives, and performance goals using data from across the agency, including annual performance plans and reports completed by operating units, and uses that information to report on performance externally through the Annual Performance Plan/Annual Performance Report and the Agency Financial Report.
  • To aggregate and track performance in key sectors, USAID works with the U.S. Department of State to develop and manage more than 100 standard foreign assistance indicators that have common definitions and defined collection methods. Once finalized, illustrative indicator data are published on a publicly available website known as Dollars to Results. Finally, USAID reports on agency priority goals and cross-agency priority goal progress on www.performance.gov.
4.2 Did the agency use data/evidence to improve outcomes and return on investment?
  • Most of USAID’s innovation or co-created programs and those done in partnerships reflect a data-driven pay for results model, where milestones are approved by all parties and payments are made when milestones are achieved. This means that, for some programs, if a milestone is unmet, funds may be reapplied to an innovation or intervention that is achieving results. This rapid and iterative performance model means that USAID more quickly understands what is not working and can move resources away from it and toward what is working.
  • Prizes, Grand Challenges, and Development Innovation Ventures (DIV) can also be constructed to use pay for results approaches that have the potential to inform future USAID programming. Prizes set a results target, incentivize innovators to hit that target, and are paid only after assessors determine that the result has been achieved. A recent competition, the Intelligent Forecasting Competition, incentivized competitors to use data from health care facilities in Côte d’Ivoire to develop intelligent forecasting methods for family planning commodities and to see if those models outperformed traditional pen-and-paper forecasts. They did. Insights from the prize-winning model are now being tested in a grant to implement intelligent forecasting methods in Côte d’Ivoire’s health facilities. If evidence from the field suggests that intelligent forecasting methods outperform historical forecasts, this approach will be mainstreamed in USAID’s global health commodities procurements. Most challenges, Grand Challenges, and DIV grants are fixed amount awards, a type of federal grant instrument that is tailor-made for pay for results approaches. Fixed amount awards pay for milestones achieved, which emphasizes performance (not just compliance) and reduces some administrative burden for all parties. In addition, interventions such as development impact bonds, like Instiglio’s Village Enterprise Development Impact Bond, supported by DIV, are used to create approaches where USAID pays only for outcomes, not inputs or attempts. The agency believes this model will pave the way for much of USAID’s work to be aligned with a pay for results approach. The agency is also piloting the use of the impact per dollar of cash transfers as a minimum standard of cost effectiveness for applicable program designs. Most innovations funded at USAID have a clear cost-per-impact ratio.
  • Additionally, USAID missions develop Country Development Cooperation Strategies (CDCSs) with clear goals and objectives and a Performance Management Plan that identifies expected results, performance indicators to measure those results, plans for data collection and analysis, and regular review of performance measures to use data and evidence to adapt programs for improved outcomes. The agency also promotes data-informed operations performance management to ensure that it achieves its development objectives and aligns resources with priorities. It uses its Management Operations Council to conduct an annual strategic review of progress toward achieving the strategic objectives in its strategic plan.
  • To improve linkages and break down silos, USAID continues to develop and pilot the Development Information Solution, an enterprise-wide management information system that will enable it to collect, manage, and visualize performance data across units, along with budget and procurement information, and thereby more efficiently manage and execute programming. The agency is currently in the process of worldwide deployment of the performance management module, with almost half of its operating units using the system.
4.3 Did the agency have continuous improvement or learning cycle processes to identify promising practices, problem areas, possible causal factors, and opportunities for improvement (examples: stat meetings, data analytics, data visualization tools, or other tools that improve performance)?
  • The agency’s Program Cycle policy (ADS 201.3.2.18) requires that missions conduct at least one portfolio review per year that focuses on progress toward strategy-level results. Missions must also conduct a mid-course stocktaking at least once during the course of implementing their CDCS, which typically spans five years.
  • The agency developed an approach, called collaborating, learning, and adapting (CLA), to explicitly ensure adaptation through learning. It is incorporated into USAID’s Program Cycle guidance: “Strategic collaboration, continuous learning, and adaptive management link together all components of the Program Cycle.” Through CLA, USAID ensures that its programming is coordinated with others, grounded in a strong evidence base, and iteratively adapted to remain relevant throughout implementation.
  • In the 2022 capacity assessment, the agency was rated as advanced in four elements of adaptive management: (1) scheduled processes for systematic review of evidence; (2) the extent to which agency staff work with stakeholders to identify successes, challenges, and related issues; (3) the frequency with which staff raise and document decisions; and (4) the ability of teams and operating units to implement decisions for changes in programming. An advanced level of maturity suggests that processes are often in place for systematic review and that findings from evidence-generation activities are used in programmatic decisions. Pause and reflect opportunities are often hosted for staff and partners. Operating units often work with partners to identify successes, challenges, and subjects that warrant further exploration. Where findings and conclusions are raised, they are often aligned to specific programmatic and operational decisions, and decisions are often documented. Additionally, at this maturity level, planned actions are often tracked and implemented. Operating units often use data to inform decisions on maintaining, adapting, or discontinuing current approaches and often take action to adapt strategy, projects, or activities as appropriate.
  • The chief data officer’s team maintains an internal dashboard that is shared with the evaluation officer and statistical official to help track progress against milestones on an ongoing basis. This helps ensure that data needs are being met and that intended results are achieved.
  • In addition to this focus through its programming, USAID has two senior bodies that oversee enterprise risk management and meet regularly to improve the accountability and effectiveness of USAID programs and operations through holistic risk management. The agency tracks progress toward strategic goals and annual performance goals during data-driven reviews at Management Operations Council meetings. Also, through input from the Management Operations Council, an annual agency-wide customer service survey, and other analysis, USAID regularly identifies opportunities for operational improvements at all levels of the agency as part of its operational learning agenda as well as the agency-wide learning agenda. The initial set of learning questions in the Agency Learning Agenda includes four questions that focus on operational aspects of the agency’s work, influencing everything from internal policy to design and procurement processes, program measurement, and staff training. It also includes key operational questions to support continuous learning and program improvement.
Score
8
Data

Did the agency collect, analyze, share, and use high-quality administrative and survey data, consistent with strong privacy protections, to improve (or help other entities improve) outcomes, cost effectiveness, and/or the performance of federal, state, local, and other service providers’ programs in FY22 (examples: model data-sharing agreements or data-licensing agreements, data tagging and documentation, data standardization, open data policies, and data use policies)?

5.1 Did the agency have a strategic data plan, including an open data policy [example: Evidence Act 202(c), Strategic Information Resources Plan]?
  • The agency’s data-related investments and efforts are guided by its Information Technology Strategic Plan. This includes support for the agency’s Development Data Policy, which provides a framework for systematically collecting agency-funded data, structuring the data to ensure usability, and making the data public while ensuring rigorous protections for privacy and security. In addition, this policy sets requirements for how USAID data are documented, submitted, and updated. Guidance for USAID’s Open Data Policy is available in the user guide, FAQs, and help videos.
  • In 2020 USAID revised the Development Data Policy to require development activities to create and submit data management plans before collecting or acquiring data. The Development Data Library (DDL) is the agency’s repository of USAID-funded machine-readable data created or collected by the agency and its implementing partners. The DDL, as a repository of structured and quantitative data, complements the DEC, which publishes qualitative reports and information. The agency’s data governance body, the DATA Board, is guided by annual data roadmaps that include concrete milestones, metrics, and objectives for agency data programs. A variety of stakeholder engagement tools are available on USAID’s DDL, including open data community questions and video tutorials on using the DDL.
  • People-level indicators for development data have traditionally been disaggregated by sex (male or female), sometimes by age, and occasionally by other demographic markers. In 2022, the DATA Board organized a Data Disaggregation Working Group to address data disaggregation issues including but not limited to how to better measure disability status, sex vs. gender identity definitions, and collection requirements and disaggregation standards for development programming and research purposes.
  • In many countries it may be politically complicated or potentially unsafe to collect these data or data that ask about racial or ethnic identity. However, data can often be disaggregated by geographic location, region, or state and mapped with other demographic data to build a picture of geographic disparities. Country expertise can then be applied to analyze racial and ethnic equity dimensions, as described in ADS 205. Also in 2022, USAID’s GeoCenter developed a Geospatial Strategy, currently under review, that will guide the agency in collecting, analyzing, interpreting, and using geospatial data.
  • Conducted in August 2021, USAID’s equity assessment acknowledges the urgency of addressing diversity, equity, inclusion, and accessibility through an agency-wide approach. It recommended that USAID use a consistent approach to incorporate racial and ethnic equity and diversity into policy, planning, and learning. To address this issue, the agency is working toward increased participation of local stakeholders in the evaluation/learning process by recommending, where possible and appropriate, that USAID evaluation contractors use local experts, especially those from marginalized or underrepresented communities, as external evaluation team leaders for designing and conducting evaluations.
5.2 Did the agency have an updated comprehensive data inventory (example: Evidence Act 3511)?
  • Launched in November 2018 as part of the Development Information Solution, USAID’s public-facing DDL provides a comprehensive inventory of data assets available to the agency. The agency has posted its data inventory as a JavaScript Object Notation (JSON) file since 2015. Following the passage of the Foundations for Evidence-Based Policymaking Act, and in preparation for specific guidance expected in the upcoming release of phase 2 guidance for the act, USAID will make any necessary changes to its Comprehensive Data Inventory and continue reporting with quarterly updates as required. The DDL’s JSON data catalog is also harvested on an ongoing basis for further distribution on the federal Data.gov website (see the illustrative sketch below). Currently 566 USAID data assets are in the Comprehensive Data Inventory, available via USAID’s DDL, a 24% increase since the last Results for America report.
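A minimal sketch of how such a JSON catalog can be read programmatically; the URL reflects the federal convention of hosting the inventory at <agency>.gov/data.json and is an assumption here, not a documented endpoint:

```python
# Illustrative sketch: reading a Project Open Data catalog (data.json)
# and counting its data assets. The URL is an assumption based on the
# federal convention, not a confirmed endpoint.
import json
from urllib.request import urlopen

CATALOG_URL = "https://www.usaid.gov/data.json"  # assumed location

with urlopen(CATALOG_URL) as response:
    catalog = json.load(response)

datasets = catalog.get("dataset", [])  # Project Open Data schema: top-level "dataset" array
print(f"{len(datasets)} data assets in the inventory")
for entry in datasets[:5]:
    print("-", entry.get("title"))
```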
5.3 Did the agency promote data access or data linkage for evaluation, evidence-building, or program improvement [examples: model data-sharing agreements or data-licensing agreements; data tagging and documentation; data standardization; downloadable machine-readable, de-identified tagged data; Evidence Act 3520(c)]?
  • The U.S. Agency for International Development provides both internal agency and public access to data through its enterprise digital repository solutions. Agency staff and the public can access USAID-funded reports and publications in the DEC and access USAID-funded data assets in the DDL. Both repositories store and publish information and data with standard metadata (tags), documentation such as data dictionaries, and clearly labeled standard licenses.
  • To strengthen staff and public access to usable data, the agency has established data management planning requirements that promote delivery of high-quality data with rich documentation, standards, and clear licensing and terms of use. These requirements direct USAID staff to work with USAID-funded partners on the creation of activity-level data management plans, which outline the data assets collected during an activity and plan for documentation and the use of standards. These data management plans help ensure that USAID-funded data are submitted to the DDL as high-quality assets that are ready for publication and easy reuse.
  • The agency is also advancing modernization of data access and data linkage solutions. It is exploring an advanced analytics environment called the Development Data Commons (DDC) that will enable staff to access heterogeneous data, merge these data, and analyze and visualize them in a central place for evidence building and program improvement. In addition, USAID is prototyping an Informatica data quality solution that can automate the delivery of standardized data elements and support the ability of staff to link data more efficiently.
  • The USAID Data Services team, located in the Office of the Chief Information Officer in USAID’s Bureau for Management, manages a comprehensive portfolio of data services in support of the agency’s mission. This includes enhancing the internal and external availability and ease of use of USAID data and information via technology platforms such as AidScape; broadening global awareness of USAID’s data and information services; and bolstering the agency’s capacity to use data and information via training and the provision of demand-driven analytical services.
  • The Data Services Team also manages and develops the agency’s digital repositories, including the DDL, the agency’s central data repository. Both USAID and external users can search for and access datasets from completed evaluations and program monitoring by country and sector.
  • USAID staff also have access to an internal database of more than 100 standard foreign assistance program performance indicators and associated baseline, target, and actual data reported globally each year. This database and reporting process, known as the Performance Plan and Report, promotes evidence building and informs internal learning and decisions related to policy, strategy, budgets, and programs.
  • The United States is a signatory to the International Aid Transparency Initiative (IATI) and reports some data to the IATI registry as frequently as monthly. The standard links an activity’s financial data to its evaluations. Partner country governments, civil society organizations, other initiatives, and websites can pull these data into their respective systems or view visualizations of IATI data. This supports the coordination and management of foreign aid and serves as an effective tool in standardizing and centralizing information about foreign aid flows within a country or to a specific topic, such as COVID-19. The agency continues to improve and add to its published IATI data and is exploring ways to make the best use of these data, including using them to populate partner-country systems, fulfill transparency reporting as part of the U.S. commitment to the Grand Bargain, and inform internal decisions, including by using the Development Cooperation Landscape tool to see what other development actors are doing. Throughout FY21 USAID continued to publish financial and descriptive information about its COVID-19 activities.
  • The agency also continues to improve how it communicates data insights. Its GeoCenter uses programmatic and demographic data linked with geospatial data to inform decision-making, emphasizing mapping to identify gaps in service provision and to inform resource allocation (for example, comparing gender-based violence hot spots with access to relevant support services, and identifying geographies and communities disparately impacted by natural disasters).
5.4 Did the agency have policies and procedures to secure data and protect personal, confidential information (example: differential privacy; secure, multiparty computation; homomorphic encryption; or developing audit trails)?
  • The agency’s Privacy Program and privacy policy (ADS 508) direct policies and practices for protecting personally identifiable information and data, while several policy references provide guidance for protecting information to ensure the health and safety of implementing partners. Its Development Data Policy (ADS Chapter 579) details a data publication process that provides governance for data access and data release in ways that ensure protections for personal and confidential information. As a reference to the Development Data Policy, ADS 579maa explains USAID’s foreign assistance data publications and the protection of any sensitive information prior to release. The agency applies extensive statistical disclosure control on all public data before publication or inclusion in the DDL.
5.5 Did the agency provide assistance to city, county, and/or state governments, and/or other grantees on accessing the agency’s datasets while protecting privacy?
  • While specific data on this topic are limited, USAID does invest in contracts or grants that provide support to build local organizational or governmental capacity in data collection, analysis, and use. In addition, to date, 566 USAID data assets are held in the agency’s Comprehensive Data Inventory via USAID’s DDL, a 24% increase over last year. These assets include microdata related to USAID’s initiatives that provide partner countries and development partners with insight into emerging trends and opportunities for expanding peace and democracy, reducing food insecurity, and strengthening the capacity to deliver quality educational opportunities for children and youth around the globe. Grantees are encouraged to use the data in the DDL, which provides an extensive user guide to aid in accessing, using, securing, and protecting data. The Data Services team conducts communication and outreach to expand awareness of the DDL, how to access it, and how to contact the team for support. In addition, the Data Services team has developed a series of videos to show users how to access the data available. The Data Services support mailbox responds to requests for assistance and guidance on a range of data services from within the agency, from implementing partners, and from the public.
  • In late 2022, Data Services’ Data Literacy Learning Series will make learning opportunities designed for both internal and external audiences available on public-facing web pages, helping these public audiences access the agency’s datasets while protecting privacy.
Score
8
Common Evidence Standards / What Works Designations

Did the agency use a common evidence framework, guidelines, or standards to inform its research and funding purposes; did that framework prioritize rigorous research and evaluation methods; and did the agency disseminate and promote the use of evidence-based interventions through a user-friendly tool in FY22 (example: What Works Clearinghouses)?

6.1 Did the agency have a common evidence framework for research and evaluation purposes?
  • The agency has developed a draft agency-level evidence framework to clarify evidence standards for different decisions, including those related to funding. The draft was published for voluntary comment and is being further updated. Once finalized, it will be published as a suggested help guide for USAID staff and will continue to be refined and updated as needed.
  • The agency’s evidence standards are embedded within its policies and include requirements for the use of evidence in strategic planning, project design, activity design, program monitoring, and evaluation. Its Scientific Research Policy sets out quality standards for research across the agency. Its Program Cycle Policy requires the use of evidence and data to assess the development context, challenges, potential solutions, and opportunities in all of its country strategies. As part of the grant awards process, Grand Challenges, such as the Water and Energy for Food Grand Challenge and the Securing Water for Food Grand Challenge, collaborate with innovators to set ambitious results targets and make eligibility for subsequent funding contingent on demonstrated evidence of hitting those targets. Other programs, such as DIV, use evaluation criteria based on evidence of causal impact, cost effectiveness, and pathways to scale and financial sustainability (see grant solicitation DIV Annual Program Statement). As one of USAID’s flagship open innovation programs, DIV helps to find, test, and scale innovative solutions to any global development challenge from anyone, anywhere. By backing proven innovations, driven by rigorous evidence and ongoing monitoring, USAID’s DIV program has improved millions of lives at a fraction of the usual cost. Based on recent research announced in October 2020, a subset of grants from DIV’s early portfolio covering 2010-2012 has produced $17 in social benefit for every dollar spent by USAID. This research was led by Dr. Michael Kremer, a Nobel Prize-winning economist who is DIV’s scientific director. Further, in its December 2019 report Evidence-Based Policymaking: Selected Agencies Coordinate Activities, But Could Enhance Collaboration, the Government Accountability Office found that USAID applies leading practices for collaborating when building and assessing evidence.
6.2 Did the agency have a common evidence framework for funding decisions?
  • The U.S. Agency for International Development has a draft agency-level evidence framework to clarify evidence definitions, principles, and approaches for different decisions, including those related to funding. The framework has been posted for review and comment by external stakeholders.
  • In addition, specific programs at the sub-agency level do use evidence frameworks or standards to make funding decisions. For example, DIV uses a tiered funding approach to find, test, and scale evidence-based innovations (as sketched below). Its grants include stage 1 for piloting (up to $200,000), stage 2 for testing and positioning for scale (up to $1,500,000), stage 3 for transitioning to scale (up to $15,000,000), and evidence generation awards (up to $1,500,000) for research to determine the causal impact of interventions that have already scaled. In particular, for stage 2 grants, DIV requires evidence of impact that is causal and rigorous: the grantee must have rigorous evidence of causal impact or conduct a rigorous evaluation of causal impact during the award. These stages are also common across other USAID-sponsored Challenge and Grand Challenge programs, such as the Mujer Prospera Challenge or the Creating Hope in Conflict Humanitarian Grand Challenge.
  • Evaluation criteria for DIV funding are based on its three core principles, as further outlined in its annual grant solicitation (DIV Annual Program Statement): (1) evidence of impact, (2) cost effectiveness, and (3) potential for scale and financial sustainability. Expectations vary by stage, but every awardee must report against a set of pre-negotiated key performance indicators.
  • In support of Grand Challenges programs, the Exploratory Programs and Innovation Competitions team has developed a sector-agnostic results framework and is developing a cost effectiveness analysis framework to improve the rigor of evidence-based programming for current and future Grand Challenges.
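The tiered funding caps described above, expressed as a simple lookup (an illustrative sketch; the stage labels are shorthand introduced here, not DIV’s own identifiers):

```python
# Hedged sketch: DIV's tiered funding caps as stated in 6.2, as a lookup table.
# Stage labels are abbreviations introduced for illustration only.
DIV_FUNDING_CAPS_USD = {
    "stage_1_pilot": 200_000,                 # piloting
    "stage_2_test_and_position": 1_500_000,   # testing and positioning for scale
    "stage_3_transition_to_scale": 15_000_000,
    "evidence_generation": 1_500_000,         # research on already-scaled interventions
}

def max_award(stage: str) -> int:
    """Return the maximum award size (USD) for a DIV funding stage."""
    return DIV_FUNDING_CAPS_USD[stage]

print(f"${max_award('stage_2_test_and_position'):,}")  # $1,500,000
```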
6.3 Did the agency have a user-friendly tool that disseminated information on rigorously evaluated, evidence-based solutions (programs, interventions, practices, etc.) including information on what works where, for whom, and under what conditions?
  • An agency-wide repository for development information (including evaluation reports and other studies) is available to the public at the DEC. In addition, USAID uses the International Initiative for Impact Evaluations (3ie) database of impact evaluations relevant to development topics (including over 4,500 entries to date), knowledge gap maps, and systematic reviews that pull the most rigorous evidence and data from across international development donors. 3ie also houses a collection of institutional policies and reports that examine findings from its database of impact evaluations on overarching policy questions to help policymakers and development practitioners improve development impact through better evidence.
  • The Agency Programs and Functions policy designates technical bureaus as responsible for being the repository for the latest information in the sectors they oversee; prioritizing evidence needs and taking actions to build evidence; and disseminating that evidence throughout the agency for those sectors. Several USAID bureaus and sectors have created user-friendly tools to disseminate information on evidence-based solutions. These include, but are not limited to:
    • Climatelinks: A global knowledge portal for climate change and development practitioners;
    • Educationlinks: A portal for sharing innovations and lessons learned on implementation of the USAID Education Policy;
    • Natural Resources Management and Development Portal;
    • Urbanlinks: USAID’s sharing platform for resources on sustainable urban development.
  • Finally, USAID recently applied natural language processing text analysis to analyze unstructured data from the previous ten years of evaluation reports published by USAID and to identify where specific language and terminology related to racial and ethnic equity were used. This review included 1,208 evaluation reports and 2,525 final contractor/grantee reports that were available on USAID’s public DEC and converted to machine-readable format. To develop an algorithm to find the most relevant information, the team consulted with experts from across the agency working on inclusive development and diversity, equity, inclusion, and accessibility issues to develop a lexicon of terms that, together with other factors, were tested and found to identify relevant documents (a simplified sketch follows).
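A simplified sketch of this kind of lexicon-based screening (the terms, documents, and threshold below are hypothetical placeholders, not USAID’s actual lexicon or algorithm):

```python
# Illustrative sketch of lexicon-based document screening like the review
# described above. Terms and threshold are hypothetical, not USAID's lexicon.
import re

LEXICON = ["racial equity", "ethnic equity", "ethnicity", "marginalized"]  # hypothetical terms

def relevance_score(text: str) -> int:
    """Count case-insensitive occurrences of lexicon terms in a document."""
    text = text.lower()
    return sum(len(re.findall(re.escape(term), text)) for term in LEXICON)

documents = {
    "report_a.txt": "The evaluation examined racial equity in program access...",
    "report_b.txt": "Crop yields rose 12% over the baseline period...",
}

# Flag documents whose score meets a (hypothetical) relevance threshold
relevant = [name for name, text in documents.items() if relevance_score(text) >= 1]
print(relevant)  # ['report_a.txt']
```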
6.4 Did the agency promote the utilization of evidence-based practices in the field to encourage implementation, replication, and reapplication of evaluation findings and other evidence?
  • The agency’s approach to CLA helps ensure that evidence from evaluation of USAID programming is shared with and used by staff, partners, and stakeholders in the field. The agency requires a dissemination plan and post-evaluation action plan for each evaluation, and USAID field staff are encouraged to co-create evaluation action plans with key stakeholders based on evaluation evidence. The agency collects examples through the annual CLA Case Competition, which recognizes implementers, stakeholders, and USAID staff for their work generating and sharing technical evidence and learning from monitoring and evaluation; the competition is another way the agency encourages evidence-based practices among its stakeholders.
  • The agency also periodically holds large learning events with partners and others in the development community around evidence including, but not limited to, evaluation summits, engagement around the Agency Learning Agenda, and Moving the Needle. These gatherings are designed to build interest in USAID’s evidence, build capacity around applying that evidence and learning, and elicit evidence and learning contributions.
  • The agency created and led the Million Lives Collective coalition, with more than thirty partners, which has identified more than 100 social entrepreneurs with at least a million customers each, in order to share what this successful cohort has learned and to better describe how USAID funding can help more social entrepreneurs grow rapidly and successfully. This unique learning platform brings donors, funders, governments, and the entrepreneurial community to the table together to learn and iterate on successful approaches.
  • Additionally, USAID recently published the Evaluations at USAID dashboard, which provides evidence of evaluation use by missions, as well as opportunities for peer learning.
Score
7
Innovation

Did the agency have staff, policies, and processes in place that encouraged innovation to improve the impact of its programs in FY22 (examples: prizes and challenges, behavioral science trials, innovation labs/accelerators, performance partnership pilots, and demonstration projects or waivers with rigorous evaluation requirements)?

7.1 Did the agency engage leadership and staff in its innovation efforts to improve the impact of its programs?
  • The agency’s Innovation, Technology, and Research Hub (ITR) serves as a central point for promoting and building capacity for innovation throughout development and national security strategies across USAID, the U.S. government, and the international community. This includes helping to centrally coordinate the agency’s innovation-related work with entrepreneurs, implementing partners, universities, donors, and others to discover, test, and scale innovative solutions and approaches to development problems around the world. Key leaders focused on innovation include the agency’s chief innovation officer and the chief of DIV. In addition to finding and supporting innovative solutions, the ITR Hub also works with other USAID bureaus and independent offices to promote a culture of innovation across the agency to enable it to be a more innovative organization. For example, this includes building internal capacity, skills, and outside-the-box thinking to structure and provide funding in more creative and effective ways (e.g., using fixed amount awards as a grant instrument to pay for outcomes, not just inputs).
7.2 Did the agency have programs to promote innovation to improve the impact of its programs?
  • Since 2011, USAID and its partners have launched forty-one innovative programming approaches including prizes, the DIV program, challenges, and Grand Challenges for Development. Across the Grand Challenges portfolio, partners have jointly committed over $619,000,000 ($154,000,000 from USAID) in grants and technical assistance for over 587 innovators in 107 countries. To date, more than $1,000,000,000 in follow-on funding has been catalyzed from external sources, a key measure of success.
  • The agency was honored when the co-founder and scientific director of USAID’s DIV program, Dr. Michael Kremer, received the 2019 Nobel Prize in economics, along with Dr. Esther Duflo and Dr. Abhijit Banerjee. Some of the work that led to this honor was connected to USAID’s DIV program, which was launched in 2010. The program values rigorous testing methods, such as impact evaluations or robust market tests, to measure the impact of USAID innovations. Evidence of clear and measurable outcomes helps demonstrate what is working and what is not. Solutions that demonstrate rigorous evidence of impact can then be scaled to other contexts. Through the DIV program, Dr. Kremer helps USAID use evidence-based approaches to take small risks, identify what works, and scale those approaches for greater impact. Since 2010, the DIV program has made 255 grants to find, test, and scale evidence-based innovations directly affecting more than 55,000,000 lives across forty-seven countries. Based on research by Dr. Kremer and others announced in October 2020, a subset of grants from DIV’s early portfolio covering 2010-2012 has produced $17 in social benefit for every dollar spent by USAID.
  • As a research and development resource for all of USAID, DIV tests early-stage innovations to de-risk them and prepare them for adoption by other agency missions and bureaus. By allocating relatively small amounts of money to generate evidence early in an innovation’s development, DIV has enabled the agency to make evidence-driven funding decisions to back proven solutions. Sixteen missions and operating units have contributed funding to awards managed by DIV; some missions later invested significant resources to scale innovations from DIV’s portfolio.
  • In addition, the Exploratory Programs and Innovation Competitions (EPIC) team uses cutting-edge program design that prioritizes solutions from local communities. It also supports USAID’s uptake of open innovation competitions, including Grand Challenges, challenges, prizes, and hackathons, to incentivize local and global entrepreneurs, academics, civil society, and other passionate problem solvers to address well-defined problems, pilot and test new solutions, and scale the most promising solutions.
  • Among U.S. government agencies, USAID is recognized for having dedicated staff, funding, and authorities to encourage innovation and impact. In May 2022, the White House Office of Science and Technology Policy report on the Implementation of Federal Prize and Citizen Science Authority (covering FY19-20) analyzed how federal agencies use prize competitions and challenges to spur innovation, engage nontraditional solvers, address tough problems, and advance their core missions. The report found that among the federal agencies represented, USAID “stood out as exceptional in both how and why it uses prize competitions and challenges.” The findings emphasized that USAID prize competitions and challenges support USAID’s mission by leveraging resources from diverse partners to identify promising solutions to development problems and support them in reaching scale. The report also drew upon the positive return on investment and development impact generated by the Securing Water for Food Grand Challenge to illustrate the strengths of USAID’s Grand Challenge model. By elevating USAID’s strategic application of prize competition and challenge models, the report showcased to the interagency USAID’s leadership and the expertise of the ITR/I/EPIC and Bureau for Global Health Center for Innovation and Impact (GH/CII) open innovation advisors in designing and managing open innovation competitions.
  • For innovations specific to a particular sector, agency leadership has supported technical staff in surfacing groundbreaking ideas. For example, the Bureau for Global Health’s CII used open innovation approaches to issue the Saving Lives at Birth Grand Challenge and identify promising life-saving maternal and newborn health innovations. Similarly, the Bureau for Humanitarian Assistance’s Office for Private Sector Engagement, Diaspora, and Innovation supports the Creating Hope in Conflict Humanitarian Grand Challenge.
  • As the Bureau for Global Health’s dedicated innovation office, CII takes a business-minded approach to fast-tracking the development, introduction, and scale-up of health innovations that address the world’s most important health challenges, and to assessing and adopting cutting-edge approaches (such as using unmanned aerial vehicles and artificial intelligence).
  • Feed the Future Partnering for Innovation partners with agribusinesses to help them commercialize and scale new agricultural innovations that improve the livelihoods of smallholder farmers by increasing their productivity and incomes. To date, the program has worked with fifty-nine partners in twenty countries, investing more than $43,000,000 in new technologies and services and leveraging nearly $100,000,000 in private sector investment. The program has helped commercialize more than 118 innovations, which have generated an estimated $99,000,000 in sales. It also maintains its own innovation site where partners can easily find and connect with promising innovations and research.
  • In FY20, USAID released its first digital strategy, moving to a “digital by default” position. Since then, USAID’s innovative approaches have helped extend digital access to more than 40,000,000 people in the developing world. Its New Partnerships Initiative will allow USAID to work with a more diverse range of partners, strengthen existing partner relationships, and provide more entry points for organizations to work with the agency. The principles behind this initiative are outlined in the agency’s first-ever Acquisition and Assistance Strategy.
  • The agency’s investment in state-of-the-art geospatial and information intelligence centers means that any program can leverage geospatial analysis and critical data sets to drive innovative solutions based on evidence and data. With more than twenty programs experimenting with artificial intelligence and machine learning, and with its strong work on digital finance and connectivity, the agency is using technology to drive its programs farther and faster. It has also completed more than 1,500 Global Development Alliances, leveraging private sector in-kind or financial investments.
7.3 Did the agency evaluate its innovation efforts, including using rigorous methods?
  • The Monitoring, Evaluation, Research, and Learning Innovations Program (MERLIN) is a USAID endeavor designed to support the Bureau for PPL and is now part of the LER office within PPL. Formerly, the MERLIN program was part of the U.S. Global Development Lab, which became the ITR hub within the new Bureau for Development, Democracy, and Innovation (DDI). The MERLIN program works to innovate on traditional approaches to monitoring, evaluation, research, and learning. While innovative in themselves, these approaches can also be better suited to evaluating an innovation effort. Two examples of MERLIN activities are developmental evaluation, which aims to provide ongoing feedback to managers on implementation through an embedded evaluator, and rapid feedback, which allows implementers to test various methods for reaching targeted results more quickly than through traditional midterm or final evaluations. Both approaches allow adaptive management during implementation to improve program impacts.
  • Many of the agency’s programs, such as Grand Challenges and Development Innovation Ventures, have been reviewed through formal audits and other performance and impact evaluations. In 2021, USAID released the findings of a meta-evaluation encompassing nine Grand Challenges. The evaluation concluded that (1) overall, the Grand Challenges have achieved positive results in varied sectors, many of which are likely to be sustainable, and have supported the scaling of some significant innovations; and (2) the Grand Challenge model, when implemented well, is a results-driven approach that is effective both at supporting innovations to become scale-ready and at strengthening ecosystems. The report also includes practical, actionable recommendations to strengthen Grand Challenge programming: strategic recommendations for USAID policy and Grand Challenge managers, and programmatic recommendations for USAID and partner Grand Challenge managers. The report annexes provide additional methodological information; further supporting analysis undertaken for the evaluation on comparators, cost effectiveness, and gender and social inclusion; and evidence from the grantee survey conducted for the evaluation.
  • Development Innovation Ventures uses a tiered funding approach to find, test, and scale evidence-based innovations. Its grant levels include stage 1 for piloting (up to $200,000), stage 2 for testing and positioning for scale (up to $1,500,000), stage 3 for transitioning to scale (up to $15,000,000), and evidence generation (up to $1,500,000) for research to determine the causal impact of interventions that have already scaled; an illustrative sketch of this tiered structure appears after this list. For stage 2 grants in particular, DIV requires evidence of impact that is causal and rigorous: the grantee must either have rigorous underlying evidence already established, use this funding to run an evaluation with an evaluation partner, or run an evaluation with its own funding during the grant period.
  • Evaluation criteria for DIV funding are based on its three core principles, as further outlined in its annual grant solicitation (DIV Annual Program Statement): (1) evidence of impact, (2) cost effectiveness, and (3) potential for scale and financial sustainability. Expectations vary by stage, but every awardee must report against a set of pre-negotiated key performance indicators. Most Challenge, Grand Challenge, and DIV grants are fixed amount awards, a type of federal grant instrument tailor-made for pay-for-results approaches. Fixed amount awards are structured to pay for milestones achieved, which emphasizes performance (not just compliance) and reduces some administrative burden for all parties.
  • Development Innovation Ventures supports innovative solutions across all countries and development sectors in which USAID operates, including education, agriculture, water, energy, and economic development. Since 2010, it has provided more than $174,000,000 for 255 grants in forty-seven countries, reaching more than 55,000,000 beneficiaries.
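The sketch below is purely illustrative: it represents the DIV stage caps described in this list as a simple lookup to make the tiered structure concrete. The dictionary, function, and stage labels are hypothetical conveniences for illustration, not part of any USAID system.

```python
# Hypothetical sketch of DIV's tiered grant caps (illustration only).
DIV_STAGE_CAPS_USD = {
    "stage 1 (piloting)": 200_000,
    "stage 2 (testing and positioning for scale)": 1_500_000,
    "stage 3 (transitioning to scale)": 15_000_000,
    "evidence generation": 1_500_000,
}

def within_cap(stage: str, requested_usd: int) -> bool:
    """Return True if a hypothetical request fits under the stage's cap."""
    return requested_usd <= DIV_STAGE_CAPS_USD[stage]

if __name__ == "__main__":
    for stage, cap in DIV_STAGE_CAPS_USD.items():
        print(f"{stage}: up to ${cap:,}")
    print(within_cap("stage 2 (testing and positioning for scale)", 1_200_000))  # True
```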
Score
10
Use of Evidence in Competitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its competitive grant programs in FY22 (examples: tiered-evidence frameworks, evidence-based funding set-asides, priority preference points or other preference scoring for evidence, and pay for success provisions)?

8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
  • USAID’s top five program accounts based on actual appropriation amounts in FY22 are:
    1. International Disaster Assistance: $4,400,000,000; eligible grantees: any U.S. or non-U.S. organization, individual, nonprofit, or for-profit entity that meets the requirements described in ADS 303;
    2. Migration and Refugee Assistance: $3,430,000,000; eligible grantees: any U.S. or non-U.S. organization, individual, nonprofit, or for-profit entity that meets the requirements described in ADS 303;
    3. Development Assistance: $3,500,000,000; eligible grantees: any U.S. or non-U.S. organization, individual, nonprofit, or for-profit entity that meets the requirements described in ADS 303;
    4. Global Health (USAID): $3,200,000,000; eligible grantees: any U.S. or non-U.S. organization, individual, nonprofit, or for-profit entity that meets the requirements described in ADS 303;
    5. Economic Support Fund: $3,080,000,000; eligible grantees: any U.S. or non-U.S. organization, individual, nonprofit, or for-profit entity that meets the requirements described in ADS 303.
  • The U.S. Foreign Assistance Reference Guide provides more information on each of these accounts. More information can also be found in the FY22 Congressional Budget Justification. When awarding grants and cooperative agreements, USAID generally does not limit eligibility; eligibility may be restricted for an individual notice of funding opportunity in accordance with the procedures in ADS 303.
8.2 Did the agency use evidence of effectiveness to allocate funds in its five largest competitive grant programs (e.g., were evidence-based interventions/practices required or suggested and was evidence a significant requirement)?
  • The agency is committed to using evidence of effectiveness in all of its competitive contracts, cooperative agreements, and grants, which comprise the majority of its work. The USAID Program Cycle Policy ensures that evidence from monitoring, evaluation, and other sources informs funding decisions at all levels, including during strategic planning, project and activity design, procurement, and implementation.
  • The agency’s Senior Obligation Alignment Review helps ensure that the agency uses evidence to design and approve funding for innovative approaches that provide long-term, sustainable outcomes, and it provides oversight of the use of grant or contract mechanisms and proposed results.
  • The agency weights past performance at 30% of the non-cost evaluation criteria for contracts. As part of determining grant awards, USAID policy requires an applicant to provide a list of all of its cost-reimbursement contracts, grants, or cooperative agreements involving similar or related programs during the past three years. The Grant Selection Committee chair must validate the applicant’s past performance reference information against existing evaluations to the maximum extent possible and must make a reasonable, good faith effort to contact all references to verify or corroborate how well an applicant performed.
  • For assistance, as required by 2 CFR 200, USAID also conducts a risk assessment to review an organization’s ability to meet the goals and objectives outlined by the agency. Internal procedures for conducting the risk assessment are found in ADS 303.3.9, along with guidance on how to look for evidence of effectiveness from potential grantees. According to the ADS, this can be done by reviewing past performance and evaluation/performance reports, such as those in the Contractor Performance Assessment Reporting System (CPARS).
  • Even though there is no federal requirement (as there is with CPARS), USAID also assesses grantee past performance for use when making funding decisions (detailed in ADS 303, p. 66). According to USAID’s ADS 303 policy, before making an award of any grant or cooperative agreement, the agreement officer must state in the memorandum of negotiation that the applicant has a satisfactory record of performance. When making the award, the agreement officer may consider withholding authority to proceed to the next phase of a grant until provided evidence of acceptable performance within a given period.
  • In its report Managing for Results: Government-wide Actions Needed to Improve Agencies’ Use of Performance Information in Decision Making (GAO-18-609SP), published September 5, 2018, GAO recognized USAID as one of four agencies (of 23 surveyed) with proven practices for using performance information. Additionally, USAID was the only CFO Act agency with a statistically significant increase in the Agency Use of Performance Information Index since 2007.
  • To help vaccinate the world, save lives, and provide critical humanitarian assistance, USAID is programming $5,175,000,000 in COVID-19 global response funds received under the American Rescue Plan Act of 2021 in more than 115 countries. The agency is working with local and international partners, partner governments, and civil society to deliver and distribute vaccines; protect and train health workers; provide support for risk communication and community engagement; support infection prevention and control; strengthen diagnostic and surveillance systems; improve case management to include increasing access to oxygen; deliver emergency food assistance, humanitarian services, supplies, and response training; support continuity of basic services; and mitigate social and economic impacts caused by the pandemic.
8.3 Did the agency use its five largest competitive grant programs to build evidence (e.g., requiring grantees to participate in evaluations)?
  • Grantees report on the progress of activities through documentation such as Activity Monitoring, Evaluation, and Learning Plans; periodic performance reporting; and external and internal evaluation reports (if applicable). These reports help USAID remain transparent and accountable, and they also help it build evidence of what does and does not work in its interventions. Any internal evaluation undertaken by a grantee must also be provided to USAID for learning purposes. All datasets compiled under USAID-funded projects, activities, and evaluations are to be submitted by grantees to the USAID Development Data Library (DDL). All final evaluation reports must also be submitted to the agency’s Development Experience Clearinghouse (DEC) unless the grantee receives a waiver of USAID’s public dissemination requirements; such waivers are rare and require the concurrence of the director of the Office of LER.
8.4 Did the agency use evidence of effectiveness to allocate funds in any other competitive grant programs (besides its five largest grant programs)?
  • The agency actively uses evidence of effectiveness to allocate funds. For example, the DIV program uses a tiered funding approach to find, test, and scale evidence-based innovations. Its grants include stage 1 for piloting (up to $200,000), stage 2 for testing and positioning for scale (up to $1,500,000), stage 3 for transitioning to scale (up to $15,000,000), and evidence generation (up to $1,500,000) for research to determine the causal impact of interventions that have already scaled. For stage 2 grants in particular, DIV requires evidence of impact that is causal and rigorous: the grantee must either have rigorous underlying evidence already established, use this funding to run an evaluation with an evaluation partner, or run an evaluation with its own funding during the grant period.
  • As part of the grant awards process, Grand Challenges, such as the Water and Energy for Food Grand Challenge and the earlier Securing Water for Food Grand Challenge, collaborate with innovators to set ambitious results targets and make eligibility for subsequent funding contingent on demonstrated evidence of hitting those targets.
  • Development Innovation Ventures’ evaluation criteria for its funding are based on its three core principles, as further outlined in its annual grant solicitation (DIV Annual Program Statement): (1) evidence of impact, (2) cost effectiveness, and (3) potential for scale and financial sustainability. Expectations vary by stage, but every awardee must report against a set of pre-negotiated key performance indicators. Most DIV grants are fixed amount awards, a type of federal grant instrument tailor-made for pay-for-results approaches. Fixed amount awards are structured to pay for milestones achieved [2 CFR 200.201(b)], which emphasizes performance (not just compliance) and reduces some administrative burden for all parties; a minimal sketch of milestone-based disbursement appears after this list.
  • Development Innovation Ventures supports innovative solutions across all countries and development sectors in which USAID operates, including education, agriculture, water, energy, and economic development. Since 2010, DIV has provided more than $174,000,000 for 255 grants across forty-seven countries, reaching more than 55,000,000 beneficiaries. According to research announced in October 2020 and led by Nobel Prize-winning economist and DIV advisor Dr. Michael Kremer, a subset of grants from DIV’s early portfolio covering 2010-2012 has produced $17 in social benefit for every dollar spent by USAID.
  • As USAID pursues its localization agenda, several notices of funding opportunity have been announced that include funding for organizations in the countries where USAID works: for example, Civic Engagement in Local Governance for Accountability in Sierra Leone; USAID Education: Equity and Inclusion; the New Partnerships Initiative Conflict Prevention and Recovery Program; and the Local Entities Advancing and Driving Health Responses (LEADR) Activity.
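The following sketch is illustrative only: it shows, with invented milestones and amounts, how milestone-based disbursement under a fixed amount award might be tracked. None of the names or figures come from an actual USAID award.

```python
# Hypothetical sketch of pay-for-milestones disbursement (illustration only).
from dataclasses import dataclass, field

@dataclass
class FixedAmountAward:
    milestones: dict[str, int]        # milestone name -> payment in USD
    completed: set[str] = field(default_factory=set)

    def complete(self, milestone: str) -> int:
        """Record a verified milestone and return the payment it triggers."""
        if milestone not in self.milestones:
            raise ValueError(f"unknown milestone: {milestone}")
        self.completed.add(milestone)
        return self.milestones[milestone]

    def disbursed(self) -> int:
        """Total paid so far; funds flow only after results are achieved."""
        return sum(self.milestones[m] for m in self.completed)

award = FixedAmountAward(milestones={
    "pilot launched": 50_000,
    "midline data submitted": 75_000,
    "final evaluation delivered": 75_000,
})
award.complete("pilot launched")
print(award.disbursed())  # 50000
```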
8.5 What are the agency’s 1-2 strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
  • Development Innovation Ventures specifically emphasizes rigorous evidence of causal impact in its official grant solicitation (DIV Annual Program Statement, p. 4): “DIV supports the piloting of early stage innovations, funds research to test new and traditional solutions to development challenges, and helps those innovations that have successfully demonstrated impact to transition to scale. DIV looks for different indicators of impact and requires different degrees of rigor in evaluation methodologies depending on the stage of financing that the applicant is seeking and on the innovation’s proposed pathway to scale.”
  • Evaluation criteria are based on DIV’s three core principles as further outlined in its Annual Program Statement: (1) evidence of impact, (2) cost effectiveness, and (3) potential for scale and financial sustainability.
  • Fenix offers expandable, lease-to-own solar home systems financed through ultra-affordable installments over mobile money. In 2016, Fenix (now part of ENGIE Energy Access) partnered with USAID’s Scaling Off-Grid Energy Grand Challenge team to support its expansion from Uganda into Zambia, a nascent and largely underserved market. By the end of its DIV award, Fenix was the leading solar home system company in Zambia. In 2017, Fenix was acquired by ENGIE, a multinational electric utility company, and expanded into four new countries: Benin, Côte d’Ivoire, Nigeria, and Mozambique. Fenix has delivered clean, affordable energy to 3,500,000 people across six countries in Africa.
  • EarthEnable is a social enterprise that has developed durable adobe floor replacements for traditional dirt floors. EarthEnable flooring minimizes exposure to bacteria and parasites, particularly for children, and is 70% less expensive than other clean floor alternatives. Early investments by DIV supported EarthEnable in testing different business models and scaling up operations, expanding its geographic reach and enabling it to serve lower income households. To date, EarthEnable has replaced more than 5,000 dirt floors and served more than 20,000 people in Rwanda and Uganda.
  • In 2013, DIV funded a randomized controlled trial to evaluate evidence of the causal impact of Teaching at the Right Level, a program implemented by Pratham, an Indian nongovernmental organization. While progress has been made in helping more children attend school, millions of students are not actually learning at their grade level. In response, Teaching at the Right Level helps lagging students catch up by teaching to their skill level rather than to their age or grade. The approach divides children (generally in grades 3 to 5) into groups based on learning needs rather than age or grade, dedicates time to basic skills rather than focusing solely on the curriculum, and regularly assesses student performance instead of relying only on end-of-year examinations. In 2017, DIV further partnered with J-PAL Africa, UNICEF, USAID/Zambia, and the Zambian Ministry of General Education to scale Teaching at the Right Level across Zambia. To date, DIV’s support has helped catalyze more than $25,000,000 in additional funding beyond USAID’s to scale the Teaching at the Right Level model to twelve countries across Africa.
  • The Intelligent Forecasting Competition incentivized competitors to use data from health care facilities in Côte d’Ivoire to develop intelligent forecasting methods for family planning commodities and to see whether those models outperformed traditional pen-and-paper forecasts. They did. Insights from the prize-winning model are now being tested in a grant to implement intelligent forecasting methods in Côte d’Ivoire’s health facilities. If evidence from the field confirms that intelligent forecasting methods outperform traditional forecasts, this approach will be mainstreamed in USAID’s global health commodity procurements, which exceed $16,000,000,000 in U.S. taxpayer investments. A minimal sketch of how such a model-versus-baseline comparison can be scored follows.
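The sketch below is purely illustrative: it scores a simple statistical forecaster against a naive baseline on invented monthly consumption data, to show the kind of model-versus-baseline comparison such a competition involves. The data, methods, and scoring rule are hypothetical and are not the competition’s actual data or methodology.

```python
# Hypothetical walk-forward comparison of two commodity forecasters (illustration only).
monthly_consumption = [120, 135, 128, 142, 150, 138, 145, 160, 155, 148, 152, 158]

def naive_forecast(history):
    # "Pen-and-paper" style baseline: next month equals the last observed month.
    return history[-1]

def moving_average_forecast(history, window=3):
    # Simple statistical model: average of the most recent months.
    recent = history[-window:]
    return sum(recent) / len(recent)

def mean_absolute_error(series, forecaster, start=6):
    # Walk forward through the series, forecasting one month ahead each time.
    errors = [abs(forecaster(series[:t]) - series[t]) for t in range(start, len(series))]
    return sum(errors) / len(errors)

if __name__ == "__main__":
    for name, forecaster in [("naive baseline", naive_forecast),
                             ("moving average", moving_average_forecast)]:
        mae = mean_absolute_error(monthly_consumption, forecaster)
        print(f"{name}: MAE = {mae:.1f}")
```

Under this toy rule the lower error score wins; an actual competition would use held-out facility data and a pre-registered metric.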
8.6 Did the agency provide guidance which makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity building efforts?
  • The agency’s Program Cycle Policy states that “funding may be dedicated within a project or activity design for implementing partners to engage in an internal evaluation for institutional learning or accountability purposes.”
Score
7
Use of Evidence in Noncompetitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its non-competitive grant programs in FY22 (examples: evidence-based funding set-asides; requirements to invest funds in evidence-based activities, and pay for success provisions)?

  • USAID does not administer noncompetitive grant programs (the relative score for criterion #8 applied).
Score
6
Repurpose for Results

In FY22, did the agency shift funds away from or within any practice, policy, or program that consistently failed to achieve desired outcomes (examples: requiring low-performing grantees to re-compete for funding; removing ineffective interventions from allowable use of grant funds; incentivizing or urging grant applicants to stop using ineffective practices in funding announcements; proposing the elimination of ineffective programs through annual budget requests; incentivizing well-designed trials to fill specific knowledge gaps; supporting low-performing grantees through mentoring, improvement plans, and other forms of assistance; and using rigorous evaluation results to shift funds away from a program)?

10.1 Did the agency have policy(ies) for determining when to shift funds away from grantees, practices, policies, interventions, and/or programs that consistently failed to achieve desired outcomes, and did the agency act on that policy?
  • Using USAID’s Rapid Feedback approach, the USAID/Cambodia mission and its implementers developed a theory of change and conducted implementation research and rapid experiments on social and behavior change communication for communities and for donors. The Cambodia Children’s Trust had been working at the community level to discourage families from sending their children to residential care institutions (RCIs). The mission also supported Friends International, which had been working with donors to encourage behavior change in their support to RCIs. Both implementers adopted action plans based on the results: the Cambodia Children’s Trust used the findings to streamline its social behavior change campaign before rolling it out to more villages, and Friends International used the rapid feedback findings to inform its use of social and behavior change communication beyond paid social media.
  • The agency shifts funds away from ineffective grantees. For example, the Water and Energy for Food Grand Challenge and, before it, the Securing Water for Food Grand Challenge were designed with a technical assistance facility that consults and works with grantees to identify specific growth barriers and then connects them with vetted service providers whose expertise and capabilities help grantees overcome those barriers. The technical assistance facility provides tailored financial and acceleration support to help grantees improve their market-driven business development, commercial growth, and scaling.
  • If a grantee is unable to meet specific performance targets, such as number of customers or product sales, further funding is not granted (per the terms of the grant), and the grantee is re-categorized into the program’s group of unsuccessful alumni. The Securing Water for Food Grand Challenge used milestone-based grants to terminate fifteen awards that were not meeting their annual milestones and shifted that money to both grants and technical assistance for the remaining twenty-five awards in the program.
  • Also, USAID’s INVEST program is designed around constant feedback loops on partner performance. Not only are underperforming partners dropped, but new partners can be added dynamically based on demand. This greatly increases USAID’s new partner base and elevates the performance standard across the board.
  • The agency’s Business Ecosystem Project, implemented by Palladium Group, is designed to increase private sector investment in strengthening domestic supply chains and workforce development in North Macedonia. The project’s initial strategy was to mobilize corporate social responsibility funds from investors and large international corporations toward the project’s goal, but it quickly became evident that such investments would be neither strategic nor sustainable. To achieve a lasting impact on North Macedonia’s business ecosystem, the project partnered with companies that were better positioned to recognize the link between local economic development and their own business interests. It learned from its local partners and adapted its private sector engagement strategy to target small, medium, and large enterprises that were more dependent on domestic supply chains and workers. The project no longer focuses only on foreign direct investment companies with corporate social responsibility budgets but approaches all companies that have a real economic incentive to invest in local supply chains and workforce development. This approach proved more effective and allowed the Business Ecosystem Project to co-invest in a diverse range of supply chain and workforce development initiatives, first as a proof of concept and later at scale.
10.2 Did the agency identify and provide support to agency programs or grantees that failed to achieve desired outcomes?
  • USAID/Food for Peace’s Sustainable Action for Resilience and Food Security (Sabal) is a five-year program in Nepal implemented by Save the Children and a consortium of partners. Sabal’s goal is to improve food security and resilience in targeted districts of Nepal by improving livelihoods, health and nutrition, disaster risk reduction, and climate change adaptation. Sabal utilized collaborating, learning, and adapting (CLA) approaches, such as pause and reflect, M&E for learning, and adaptive management, to adapt to the changing context. Devastating earthquakes in 2015 necessitated geographic expansion of the program; budget cuts two years later meant ending implementation in those expansion areas. At that time, CLA approaches were utilized to identify sustainability strategies; assess the level of self-reliance among community groups; tailor interventions based on the data; and gain consensus and buy-in among internal staff, consortium partners, and the local government. As a result, Sabal registered high-performing community groups with the government and linked these groups with local resources and leaders. At the same time, Sabal identified poorly performing groups and built their capacity through targeted training and community capacity building.
  • The agency’s Regional Health Integration to Enhance Services in Eastern Uganda (RHITES-E) activity (2016-2021), implemented by IntraHealth International and its partners, supports the Government of Uganda’s health “surge” strategy to find new HIV-positive patients and enroll them in care and treatment. The data and results from RHITES-E’s first-quarter performance review showed that the activity was far behind its target. The activity’s leadership and USAID decided to shift from a “business as usual” attitude to applying CLA approaches: drawing on and analyzing existing data from a USAID dashboard, reflecting on findings with key stakeholders, and filling identified needs and gaps to improve surge efforts. By the end of fiscal year 2017, the activity had improved its surge performance, achieving better results and outcomes, and had shifted its culture toward that of a learning organization. Together with stakeholders, staff identified ineffective approaches, such as mass HIV testing, and developed and implemented new strategies, including screening clients before testing, for efficient and effective identification and linkage of new HIV-positive clients into care and treatment.
  • The agency’s Empleando Futuros (Employing Futures) program, an at-risk youth program, was launched in Honduras in 2016. During its first year, a pause and reflect event found a significant number of dropouts and a need to strengthen the program’s response to better meet the needs of youth and the labor market. Subsequently, USAID and its implementing partner, Banyan Global, applied USAID’s CLA framework and tools to establish a schedule of strategic pause and reflect events throughout the year, strengthen the program’s performance monitoring system, and develop an online platform for tracking program participants’ progress. These changes helped the implementer revisit the program’s underlying assumptions and theory of change, learn continuously, and inform evidence-based decisions. Preliminary findings suggest that the program has fewer dropouts, that the capacity of local systems and partners has been strengthened, and that private sector engagement has improved.