2022 Federal Index


Millennium Challenge Corporation

Score
9
Leadership

Leadership: Did the agency have senior staff members with the authority, staff, and budget to build and use evidence to inform the agency’s major policy and program decisions in FY22?

1.1 Did the agency have a senior leader with the budget and staff to serve as the agency’s evaluation officer or equivalent (example: Evidence Act 313)?
  • The Monitoring and Evaluation (M&E) managing director, designated in accordance with the Foundations for Evidence-Based Policymaking Act, serves as MCC’s evaluation officer. The managing directorship is a career civil service position with the authority to execute M&E’s budget (an estimated $12,200,000 in due diligence funds in FY22) and a staff of twenty-five.
1.2 Did the agency have a senior leader with the budget and staff to serve as the agency’s chief data officer or equivalent [example: Evidence Act 202(e)]?
  • The director of Data and Product Management in the Office of the Chief Information Officer is MCC’s chief data officer. Designated in accordance with the Foundations for Evidence-Based Policymaking Act, the chief data officer manages a staff of three and an estimated FY22 budget of $6,500,000 in administrative funds.
1.3 Did the agency have a governance structure to coordinate the activities of its evaluation officer, chief data officer, statistical officer, performance improvement officer, and other related officials in order to support Evidence Act implementation and to improve and evaluate the agency’s major programs?
  • The MCC Evaluation Management Committee (EMC) oversees the integration and quality control of the agency’s evaluation and programmatic decision-making in accordance with the Foundations for Evidence-Based Policymaking Act. The EMC integrates evaluation with program design and implementation to ensure that evaluations are designed and implemented in a manner that increases their utility to MCC, in-country stakeholders, and external stakeholders. It includes the agency’s evaluation officer, chief data officer, representatives from Monitoring and Evaluation (M&E), the project lead, sector specialists, the economist, and gender and environmental safeguards staff. For each evaluation, the EMC holds between eleven and sixteen meetings or touchpoints, from the evaluation scope of work through final publication. The EMC plays a key role in coordinating MCC’s Evidence Act implementation.
Score
7
Evaluation & Research

Did the agency have an evaluation policy, evaluation plan, and learning agenda (evidence building plan) and did it publicly release the findings of all completed program evaluations in FY22?

2.1 Did the agency have an agency-wide evaluation policy [example: Evidence Act 313(d)]?
  • The corporation’s Independent Evaluation Portfolio is governed by its publicly available Policy for M&E. This policy requires all programs to develop and follow comprehensive M&E plans that adhere to MCC’s standards. It was revised in March 2017 to ensure alignment with the Foreign Aid Transparency and Accountability Act of 2016. Pursuant to MCC’s M&E policy, every project must undergo an independent evaluation. The policy is currently being updated to reflect best practice in monitoring and evaluation standards and further align with the Evidence Act.
2.2 Did the agency have an agency-wide evaluation plan [example: Evidence Act 312(b)]?
  • Every MCC investment must adhere to MCC’s rigorous Policy for M&E, which requires every MCC program to contain a comprehensive M&E plan and undergo an independent evaluation. For each investment MCC makes in a country, the country’s M&E plan is required to be published within 90 days of entry into force. The M&E plan lays out the evaluation strategy and includes two main components. The monitoring component includes the methodology and process for assessing progress toward the investment’s objectives. The evaluation component identifies and describes the evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed. Each country’s M&E plan represents the evaluation plan and learning agenda for that country’s set of investments.
2.3 Did the agency have a learning agenda (evidence building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including but not limited to the general public, state, and local governments, and researchers/academics in the development of that agenda (example: Evidence Act 312)?
  • To advance MCC’s evidence base and respond to the Evidence Act, MCC is implementing a learning agenda around women’s economic empowerment with short- and long-term objectives. The agency is focused on expanding the evidence base to answer these key research questions:
    • How do MCC’s women’s economic empowerment activities contribute to MCC’s overarching goal of reducing poverty through economic growth?
    • How does MCC’s women’s economic empowerment work contribute to increased income and assets for households—beyond what those incomes would have been without the gendered/women’s economic empowerment design?
    • How does MCC’s women’s economic empowerment work increase income and assets for women and girls within those households?
    • How does MCC’s women’s economic empowerment work increase women’s empowerment, defined through measures relevant to the women’s economic empowerment intervention and project area?
  • These research questions were developed through extensive consultation within MCC and with external stakeholders. Agency leadership has named inclusion and gender as a key priority. In support of this priority, MCC released a new Inclusion and Gender Strategy to further codify ambition around learning on these issues.
2.4 Did the agency publicly release all completed program evaluations?
  • The corporation publishes each independent evaluation of every project, underscoring its commitment to transparency, accountability, learning, and evidence-based decision-making. All independent evaluations and reports are publicly available on the new MCC Evidence Platform. As of September 2022, MCC had contracted, planned, and/or published 236 independent evaluations. All MCC evaluations produce a final report to present final results, and some evaluations also produce an interim report to present interim results. To date, 122 final reports and 45 interim reports have been finalized and published.
  • In FY22, MCC also continued producing Evaluation Briefs, a product that distills key findings and lessons learned from MCC’s independent evaluations; it will produce an Evaluation Brief for each evaluation moving forward. MCC also completed Evaluation Briefs for the backlog of previously completed evaluations. As of September 2022, MCC has published 131 Evaluation Briefs.
2.5 Did the agency conduct an Evidence Capacity Assessment that includes information about the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts [example: Evidence Act 315, subchapter II (c)(3)(9)]?
  • Millennium Challenge Corporation is currently working on a draft capacity assessment in accordance with the Evidence Act. Once a compact or threshold program is in implementation, M&E resources are used to procure evaluation services from external independent evaluators to directly measure high-level outcomes and assess the attributable impact of all of MCC’s programs. It sees its independent evaluation portfolio as an integral tool to remain accountable to stakeholders and the general public, demonstrate programmatic results, and promote internal and external learning. Through the evidence generated by monitoring and evaluation, the M&E managing director, chief economist, and vice president for the Department of Policy and Evaluation are able to continuously update estimates of expected impacts with actual impacts to inform future programmatic and policy decisions. In FY22, MCC began or continued comprehensive independent evaluations for every compact or threshold project at MCC, a requirement stipulated in Section 7.5.1 of MCC’s Policy for M&E. All evaluation designs, data, reports, and summaries are available on MCC’s Evidence Platform.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • The corporation employs rigorous independent evaluation methodologies to measure the impact of its programming, evaluate the efficacy of program implementation, and determine lessons learned to inform future investments. As of September 2022, about 36% of MCC’s evaluation portfolio consists of impact evaluations, and 64% consists of performance evaluations. All MCC impact evaluations use random assignment to determine which groups or individuals will receive an MCC intervention, which allows for a counterfactual and thus for attribution to MCC’s project and best enables MCC to measure its impact in a fair and transparent way. Each evaluation is conducted as prescribed by the program’s M&E plan, in accordance with MCC’s Policy for M&E.
Score
8
Resources

Did the agency invest at least 1% of program funds in evaluations in FY22 (examples: impact studies; implementation studies; rapid cycle evaluations; evaluation technical assistance; and rigorous evaluations, including random assignments)?

3.1 ____ invested $____ on evaluations, evaluation technical assistance, and evaluation capacity-building, representing __% of the agency’s $___ billion FY22 budget.
  • Millennium Challenge Corporation invested $12,200,000 in evaluations, evaluation technical assistance, and evaluation capacity building, representing 1.5% of the agency’s $800,000,000 FY22 budget (minus staff/salary expenses).
3.2 Did the agency have a budget for evaluation and how much was it (were there any changes in this budget from the previous fiscal year)?
  • In FY22 MCC budgeted $12,200,000 for monitoring and evaluation, a decrease of $5,400,000 from FY21 ($17,600,000 total). This decrease is due to the reduced number of countries MCC made eligible for investment in FY22 rather than to a reduction in the value MCC places on monitoring and evaluation.
3.3 Did the agency provide financial and other resources to help city, county, and state governments or other grantees build their evaluation capacity (including technical assistance funds for data and evidence capacity building)?
  • In support of its emphasis on country ownership, MCC also provides substantial, intensive, and ongoing capacity building to partner country M&E staff in every country in which it invests. As a part of this effort, MCC provides training and ongoing mentorship in the local language. This includes publishing select independent evaluations, Evaluation Briefs, and other documentation in the country’s local language. The dissemination of local-language publications helps further MCC’s reach to its partner country’s government and members of civil society, enabling them to fully reference and utilize evidence and learning beyond the program. Data strengthening and national statistical capacity are also included as a part of MCC’s evidence-building investments. This agency-wide commitment to building and expanding an evidence-based approach with every partner country is a key component of MCC’s investments.
  • As a prime example of this work, MCC continues to implement a first-of-its-kind evaluation partnership in its Morocco investment. The local implementing entity, MCA-Morocco, signed MCC’s first cooperation agreement, a funded partnership within a country program, under the new Partnership Navigator Program Partnership Solicitation process. This first MCA-driven partnership agreement brings Nobel Prize-winning economic analysis approaches from the Massachusetts Institute of Technology and Harvard together with a Moroccan think tank to create an employment lab for conducting rigorous research into Moroccan labor market programs and policies. This research is coupled with training and capacity building for key Moroccan policymakers to promote evidence-based decision-making.
Score
7
Performance Management / Continuous Improvement

Did the agency implement a performance management system with outcome-focused goals and aligned program objectives and measures, and did it frequently collect, analyze, and use data and evidence to improve outcomes, return on investment, and other dimensions of performance in FY22?

4.1 Did the agency have a strategic plan with outcome goals, program objectives (if different), outcome measures, and program measures (if different)?
4.2 Does the agency use data/evidence to improve outcomes and return on investment?
  • Millennium Challenge Corporation is committed to using high-quality data and evidence to drive its strategic planning and program decisions. The M&E plans for all programs and tables of key performance indicators for all projects are available online by compact and threshold program and by sector, for use by both partner countries and the general public. Prior to investment, MCC performs a cost-benefit analysis to assess the potential impact of each project and estimates an economic rate of return. It uses a 10% economic rate of return hurdle to more effectively prioritize and fund projects with the greatest opportunity for maximizing impact. It then recalculates economic rates of return at investment closeout, drawing on information from its monitoring data (among other data and evidence) to test original assumptions and assess the cost effectiveness of MCC programs. In an effort to complete the evidence loop, MCC now includes evaluation-based cost-benefit analysis as a part of its independent final evaluations. As a part of the independent evaluation, the evaluators analyze the MCC-produced economic rate of return and associated project assumptions five or more years after investment close to understand whether and how the benefits actually accrued. These evaluation-based economic rates of return add to the evidence base by contributing to a better understanding of the long-term effects and sustainable impact of MCC’s programs.
  • In addition, MCC produces periodic reports that capture the results of its learning efforts in specific sectors and translate that learning into actionable evidence for future programming. Once MCC has a critical number of evaluations in a given sector, it endeavors to draw portfolio-wide learning from that sector in the form of Principles into Practice reports.
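The economic-rate-of-return test described above can be sketched in a few lines: the ERR is the discount rate at which a project's net present value is zero, and a project clears the hurdle when that rate exceeds 10%. This is a simplified illustration only; the cash flows, horizon, and function names below are invented for the sketch, and MCC's actual cost-benefit models are far more detailed.

```python
def npv(rate, cash_flows):
    """Net present value of yearly cash flows, with year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def err(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Economic rate of return: the discount rate where NPV crosses zero,
    found by bisection (assumes a single sign change in NPV over [lo, hi])."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid  # NPV still positive: the break-even rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical project: a year-0 investment of $100M followed by
# ten years of $18M in estimated economic benefits.
flows = [-100] + [18] * 10
rate = err(flows)
print(f"ERR = {rate:.1%}, clears 10% hurdle: {rate >= 0.10}")
```

Recalculating the ERR at closeout, as the section describes, amounts to re-running the same computation with realized rather than projected benefit streams and comparing the result with the original estimate.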
4.3 Did the agency have continuous improvement or learning cycle processes to identify promising practices, problem areas, possible causal factors, and opportunities for improvement (examples: stat meetings, data analytics, data visualization tools, or other tools that improve performance)?
  • To strengthen its credibility on results, transparency, learning, and accountability, MCC continues to implement and expand its reporting system. The Star Report and its associated quarterly business process capture key information to provide a framework for results and to improve the promotion and dissemination of learning and evidence throughout the compact and threshold program lifecycle. For each compact and threshold program, evidence is collected on performance indicators, evaluation results, partnerships, sustainability efforts, and learning, among other elements. Critically, this information is available in one report after each program ends. Each country will have a Star Report published roughly seven months after program completion.
  • Continual learning and improvement are key aspects of MCC’s operating model. The corporation monitors progress toward compact and threshold program results quarterly, using performance indicators specified in the M&E plan for each country’s investments. The M&E plans specify indicators at all levels (process, output, and outcome) so that progress toward final results can be tracked. Every quarter, each partner country submits an indicator tracking table that shows actual performance on each indicator relative to the baseline established before the activity began and the performance targets set in the M&E plan. Key performance indicators and their accompanying data by country are updated every quarter and published online. MCC management and the relevant country team review these data in a formal Quarterly Performance Review meeting to assess whether results are being achieved and integrate this information into project management and implementation decisions.
  • Millennium Challenge Corporation also publishes and produces semiannual updates on an exciting new interactive sector-level learning product: Sector Results and Learning pages. Sector Results and Learning pages are interactive web pages that promote learning and inform program design by consolidating the latest monitoring data, independent evaluation results, and lessons from the key sectors in which MCC invests. Critically, this information is now publicly available in one place, for the first time. An interactive learning database allows practitioners to efficiently retrieve past learning to inform new programs. Sector Results and Learning pages have been published for all six common sectors on which MCC reports: water, sanitation, and hygiene; transportation; agriculture and irrigation; education; energy; and land.
Score
8
Data

Did the agency collect, analyze, share, and use high-quality administrative and survey data, consistent with strong privacy protections, to improve (or help other entities improve) outcomes, cost effectiveness, and/or the performance of federal, state, local, and other service providers’ programs in FY22 (examples: model data-sharing agreements or data-licensing agreements, data tagging and documentation, data standardization, open data policies, and data use policies)?

5.1 Did the agency have a strategic data plan, including an open data policy [example: Evidence Act 202(c), Strategic Information Resources Plan]?
  • In FY22, MCC is continuing to develop a strategic data plan. As detailed on the Digital Strategy and Open Government pages of the MCC website, MCC promotes transparency to provide people with access to information that facilitates their understanding of MCC’s model, MCC’s decision-making processes, and the results of MCC’s investments. Transparency, and therefore open data, is a core principle for MCC because it is the basis for accountability, provides strong checks against corruption, builds public confidence, and supports informed participation of citizens.
  • As a testament to MCC’s commitment to and implementation of transparency and open data, the agency was again the highest-ranked bilateral donor and U.S. government agency for the sixth consecutive Index. In addition, the U.S. government is part of the Open Government Partnership, a signatory to the International Aid Transparency Initiative, and must adhere to the Foreign Aid Transparency and Accountability Act. All of these initiatives require foreign assistance agencies to make it easier to access, use, and understand data. These commitments have created further impetus for MCC’s work in this area, as they establish specific goals and timelines for adopting transparent business processes.
  • Additionally, MCC convenes an internal Data Governance Board, an independent group consisting of representatives from departments throughout the agency, to streamline its approach to data management and advance data-driven decision-making across its investment portfolio.
5.2 Did the agency have an updated comprehensive data inventory (example: Evidence Act 3511)?
  • Through its Open Data Catalog, which includes an enterprise data inventory of all data resources across the agency for release of data in open, machine-readable formats, MCC makes extensive program data, including financials and results data, publicly available. The Department of Policy and Evaluation leads the MCC Disclosure Review Board process for publicly releasing the de-identified microdata that underlie the independent evaluations on the MCC Evidence Platform, following MCC’s Microdata Management Guidelines to ensure appropriate balance in transparency efforts with the protection of human subjects’ confidentiality.
5.3 Did the agency promote data access or data linkage for evaluation, evidence-building, or program improvement [examples: model data-sharing agreements or data-licensing agreements; data tagging and documentation; data standardization; downloadable machine-readable, de-identified tagged data; Evidence Act 3520(c)]?
  • The new Evidence Platform, which links and provides access to all of MCC’s microdata from evaluation packages, offers a first-of-its-kind data enclave for users to access and use public and restricted-use data. The virtual data enclave connects datasets to qualitative reports and results for integrated research and learning. The platform encourages research, learning, and reproducibility and connects datasets to analytical products across the portfolio. In addition to the Evidence Platform, MCC’s Data Analytics Program enables enterprise data-driven decision-making through the capture, storage, analysis, publishing, and governance of MCC’s core programmatic data. It streamlines the agency’s data life cycle, facilitating increased efficiency, and promotes agency-wide coordination, learning, and transparency. For example, MCC has developed custom software applications to capture program data, established the infrastructure for consolidated storage and analysis, and connected robust data sources to end-user tools that power up-to-date, dynamic reporting and streamline content maintenance on MCC’s public website. As a part of this effort, the M&E team has developed an Evaluation Pipeline application that provides up-to-date information on the status, risk, cost, and milestones of the full evaluation portfolio for better performance management.
5.4 Did the agency have policies and procedures to secure data and protect personal, confidential information (example: differential privacy; secure, multiparty computation; homomorphic encryption; or developing audit trails)?
  • The corporation’s Disclosure Review Board ensures that data collected from surveys and other research activities are made public according to relevant laws and ethical standards that protect research participants while recognizing the potential value of the data to the public. The board is responsible for reviewing and approving procedures for the release of data products to the public; reviewing and approving data files for disclosure; ensuring that de-identification procedures adhere to legal and ethical standards for the protection of research participants; and initiating and coordinating any necessary research related to disclosure risk potential in individual, household, and enterprise-level survey microdata on MCC’s beneficiaries.
  • The Microdata Management Guidelines inform MCC staff and contractors, as well as other partners, about how to store, manage, and disseminate evaluation-related microdata. These microdata are distinct from other data MCC disseminates because they typically include personally identifiable information and sensitive data as required for independent evaluations. With this in mind, MCC’s guidelines govern how to manage three competing objectives: sharing data for verification and replication of the independent evaluations, sharing data to maximize usability and learning, and protecting the privacy and confidentiality of evaluation participants. These guidelines were established in 2013 and updated in January 2017. Following these guidelines, MCC has publicly released 117 de-identified public-use microdata files for its evaluations and evidence studies. It also has 25 restricted data packages cleared by the Disclosure Review Board that it can make accessible on the new MCC Evidence Platform. The corporation’s experience developing and implementing this rigorous process for data management and dissemination while protecting human subjects throughout the evaluation life cycle is detailed in Opening Up Evaluation Microdata: Balancing Risks and Benefits of Research Transparency. MCC is committed to ensuring transparent, reproducible, and ethical data and documentation and seeks to further encourage data use through its new Evidence Platform.
5.5 Did the agency provide assistance to city, county, and/or state governments, and/or other grantees on accessing the agency’s datasets while protecting privacy?
  • Both MCC and its partner in-country teams produce and provide data that are continuously updated and accessed. The MCC website is routinely updated with the most recent information, and in-country teams are required to do the same on their websites. As such, all MCC program data are publicly available on MCC’s website and individual MCA websites for use by MCC country partners and other stakeholder groups. As a part of each country’s program, MCC provides resources to ensure that data and evidence are continually collected, captured, and accessed. In addition, each project’s evaluation has an Evaluation Brief that distills key learning from MCC-commissioned independent evaluations. Select Evaluation Briefs have been posted in local languages, including Mongolian, Georgian, French, and Romanian, to better facilitate use by country partners.
  • Millennium Challenge Corporation also has a partnership with the President’s Emergency Plan for AIDS Relief (PEPFAR), referred to as the Data Collaboratives for Local Impact (DCLI). This partnership is improving the use of data analysis for decision-making within PEPFAR and MCC partner countries by working toward evidence-based programs to address challenges in HIV/AIDS and health, empowerment of women and youth, and sustainable economic growth. Data-driven priority setting and insights gathered from citizen-generated data and community mapping initiatives contribute to improved allocation of resources in target communities to address local priorities, such as job creation, access to services, and reduced gender-based violence. The impact of DCLI is being extended through a new partnership in Côte d’Ivoire, where MCC, Microsoft, and others are partnering to develop a women’s data lab and network program. The program will empower women-owned or women-led small and medium enterprises and female innovators and entrepreneurs with digital and data skills to effectively participate in the digital economy and grow their businesses.
Score
8
Common Evidence Standards / What Works Designations

Did the agency use a common evidence framework, guidelines, or standards to inform its research and funding purposes; did that framework prioritize rigorous research and evaluation methods; and did the agency disseminate and promote the use of evidence-based interventions through a user friendly tool in FY22 (example: What Works Clearinghouses)?

6.1 Did the agency have a common evidence framework for research and evaluation purposes?
  • For each investment, MCC’s Economic Analysis Division undertakes a constraints analysis to determine the binding constraints to economic growth in a country. To determine the individual projects in which MCC will invest in a given sector, MCC’s Economic Analysis Division combines root cause analysis with a cost-benefit analysis. The results of these analyses allow MCC to determine which investments will yield the greatest development impact and return on MCC’s investment. Every investment also has its own set of indicators as well as standard agency-wide sector indicators for monitoring during the life cycle of the investment and an evaluation plan for determining the results and impact of a given investment. MCC’s Policy for Monitoring and Evaluation details its evidence-based research and evaluation framework. According to the policy, each completed evaluation requires a summary of findings, now called the Evaluation Brief, to summarize the key components, results, and lessons learned from the evaluation. Evidence from previous MCC programming is considered during the development of new programs. Again, according to the policy, “monitoring and evaluation evidence and processes should be of the highest practical quality. They should be as rigorous as practical and affordable. Evidence and practices should be impartial. The expertise and independence of evaluators and monitoring managers should result in credible evidence. Evaluation methods should be selected that best match the evaluation questions to be answered. Indicators should be limited in number to include the most crucial indicators. Both successes and failures must be reported.”
6.2 Did the agency have a common evidence framework for funding decisions?
  • The corporation uses a rigorous evidence framework to make every decision along the investment chain, from country partner eligibility to sector selection to project choices. It uses evidence-based selection criteria, generated by independent, objective third parties, to select countries for grant awards. To be eligible for selection, World Bank-designated low- and lower-middle-income countries must first submit to MCC a collection of twenty independent third-party indicators that objectively measure their policy performance in the areas of economic freedom, investing in people, and ruling justly. An in-depth description of the country selection procedure can be found in the annual report.
6.3 Did the agency have a user-friendly tool that disseminated information on rigorously evaluated, evidence-based solutions (programs, interventions, practices, etc.) including information on what works where, for whom, and under what conditions?
  • Millennium Challenge Corporation is a leader in the production of evidence on the results of its international development programs. As a data-driven agency, it invests in evidence-generating activities, such as due diligence surveys, willingness to pay surveys, and independent evaluations. It has more room to lead, however, in the accessibility and usability of its evidence. Since 2013, MCC has shared the data, documentation, and analysis underlying its independent evaluations. In terms of accessibility of evaluation materials, users have noted that MCC’s central evaluation and data repository, the Evaluation Catalog, is hard to navigate.
  • Recognizing that transparency is not enough to achieve accountability and learning, MCC developed the MCC Evidence Platform. The Evidence Platform offers first-of-its-kind access to studies and data and encourages use of MCC’s vast library of evidence. The corporation invites researchers, from students to experienced professionals, to use the data and documentation provided on the platform to reproduce and build upon MCC’s evidence base and thereby drive development effectiveness for, and beyond, MCC.
  • The MCC Evidence Platform shares three types of resources:
    • Studies: Users may search by study to find all the related data and documentation associated with each one. Study types include independent evaluations, monitoring, constraints analysis, willingness to pay, due diligence, and country-led studies.
    • Documentation: Users may search by specific documentation associated with MCC-funded studies. This documentation is shared as specific knowledge product types, including design reports, baseline reports, interim analysis reports, final analysis reports, MCC learning documents, evaluation-based cost-benefit analyses, and questionnaires.
    • Data Packages: Users may search by specific data packages associated with MCC-funded studies. Data package types include round (baseline, interim, or final), public, and restricted access.
  • The MCC Evidence Platform encourages the use of MCC’s data, documentation, and analysis as global public goods to support mutual accountability for the agency and its country partners and to encourage learning from measured results. The platform includes information about the level of rigor, research methodology, and population effects for every evaluation.
6.4 Did the agency promote the utilization of evidence-based practices in the field to encourage implementation, replication, and application of evaluation findings and other evidence?
  • As described above, the new MCC Evidence Platform was intentionally designed and launched with utilization as a primary goal. The platform specifically encourages users to apply and reproduce MCC’s learning and evidence to generate new learning. The platform will then aim to also share new learning based on published MCC evidence. As a part of this comprehensive approach, Evaluation Briefs continue to be a cornerstone of promoting utilization across audience groups. Enhanced utilization of MCC’s vast evidence base and learning was a key impetus behind the creation and expansion of the Evaluation Briefs and Star Reports. A push to ensure sector-level evidence use has led to a renewed emphasis on the Principles into Practice series, with recent reports on the transport, education, and water and sanitation sectors.
  • The corporation has also enhanced its in-country evaluation dissemination events to further results sharing and evidence building, adding products in local languages and targeted stakeholder learning dissemination strategies.
Score
7
Innovation

Did the agency have staff, policies, and processes in place that encouraged innovation to improve the impact of its programs in FY22 (examples: prizes and challenges, behavioral science trials, innovation labs/accelerators, performance partnership pilots, and demonstration projects or waivers with rigorous evaluation requirements)?

7.1 Did the agency engage leadership and staff in its innovation efforts to improve the impact of its programs?
  • The Millennium Challenge Corporation supports the creation of multidisciplinary country teams to manage the development and implementation of each compact and threshold program. Teams meet frequently to gather evidence, discuss progress, make project design decisions, and solve problems. Prior to moving forward with a program investment, teams are encouraged to use the lessons from completed evaluations to inform their work.
  • The corporation is implementing innovations to make the compact and threshold program development timeline more rapid, proposals that emerged from the second Millennium Efficiency Challenge (MEC). The MEC was designed to tap into the extensive knowledge of MCC’s staff to identify efficiencies and innovative solutions that can shorten the compact and threshold program development timeline while maintaining MCC’s rigorous quality standards and investment criteria. The corporation also actively encouraged innovation in the evaluation methods of its programs, including through new data collection techniques.
7.2 Did the agency have programs to promote innovation to improve the impact of its programs?
  • The corporation’s approach to development assistance hinges on its innovative and extensive use of evidence to inform investment decisions, guide program implementation strategies, and assess and learn from its investment experiences. As such, MCC’s Office of Strategic Partnerships offers opportunities within its Annual Program Statement that allow MCC division and country teams to tap into the most innovative solutions to new development issues. These partnerships allow MCC evaluation and economic staff to ensure cutting-edge innovation in each new program.
  • These innovations in evidence generation have been even more critical in the past year given the inability to conduct many data collection activities in person. The corporation has utilized local data collection and better technology to maintain evidence generation. For example, MCC partnered with a consortium of the University of Colorado and the technology firm SweetSense Inc. to collect high-frequency monitoring data from satellite-connected sensors on water kiosks built by the Sierra Leone Threshold Program’s Water Project, using emerging and cost-effective technologies to understand the state of water service. The partnership provided significant flexibility to collaboratively determine how available technology can suit MCC’s monitoring needs, including monitoring in data-challenged environments. It also offered an example of how other MCC water projects can capitalize on similar technology tools to collect more reliable data more frequently.
  • The Millennium Challenge Corporation regularly engages in implementing test projects as part of its overall compact programs. A few examples are (1) an innovative pay-for-results mechanism in Morocco to replicate or expand proven programs that provide integrated support; (2) a “call-for-ideas” in Benin for information regarding potential projects that would expand access to renewable off-grid electrical power; and (3) a regulatory strengthening project in Sierra Leone that includes funding for a results-based financing system.
7.3 Did the agency evaluate its innovation efforts, including using rigorous methods?
  • Although MCC rigorously evaluates all program efforts, it takes special care to ensure that innovative or untested programs are thoroughly evaluated. In addition to producing final program evaluations, MCC continuously monitors and evaluates all programs throughout the program life cycle, including innovation efforts, to determine if mid-program course-correction actions are necessary. These interim data help MCC continuously improve its innovation efforts so that they can be most effective and impactful. Although 36% of MCC’s evaluations use random-assignment methods, all of MCC’s evaluations, both impact and performance, use rigorous methods to achieve the three-part objectives of accountability, learning, and results in the most cost-effective way possible.
Score
15
Use of Evidence in Competitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its competitive grant programs in FY22 (examples: tiered-evidence frameworks, evidence-based funding set-asides, priority preference points or other preference scoring for evidence, and pay for success provisions)?

8.1 What were the agency’s five largest competitive programs and their appropriations amount (and were city, county, and/or state governments eligible to receive funds from these programs)?
  • MCC awards all of its agency funds through two competitive grants:
    1. the compact program: $651,000,000 in FY22; eligible grantees: low- and lower-middle-income countries;
    2. the threshold program: $31,000,000 in FY22; eligible grantees: low- and lower-middle-income countries.
8.2 Did the agency use evidence of effectiveness to allocate funds in its five largest competitive grant programs (e.g., were evidence-based interventions/practices required or suggested and was evidence a significant requirement)?
  • For country partner selection, as part of the compact and threshold competitive programs, MCC uses twenty different indicators within the categories of economic freedom, investing in people, and ruling justly to determine country eligibility for program assistance. These objective indicators of a country’s performance are collected by independent third parties. The corporation regularly assesses the scorecard to ensure that indicators use the best possible data. When third party entities publish new or better data, MCC updates the scorecard to ensure that it captures comprehensive policy performance.
  • When considering granting a second compact, MCC further considers whether countries have (1) exhibited successful performance on their previous compact, (2) improved scorecard performance during the partnership, and (3) exhibited a continued commitment to further their sector reform efforts in any subsequent partnership. As a result, the MCC Board of Directors has an even higher standard when selecting countries for subsequent compacts. According to MCC’s policy for Compact Development Guidance (p. 6): “As the results of impact evaluations and other assessments of the previous compact program become available, the partner country must use these data to inform project proposal assessment, project design, and implementation approaches.”
8.3 Did the agency use its five largest competitive grant programs to build evidence (e.g., requiring grantees to participate in evaluations)?
  • According to its Policy for M&E, MCC requires independent evaluations of every project to assess progress in achieving outputs and outcomes and program learning based on defined evaluation questions throughout the lifetime of the project and beyond. As described above, MCC publicly releases all these evaluations on its MCC Evidence Platform and uses findings, in collaboration with stakeholders and partner countries, to build evidence in the field so that policymakers in the United States and in partner countries can leverage MCC’s experiences to develop future programming. In line with MCC’s Policy for M&E, MCC projects are required to submit quarterly indicator tracking tables showing progress toward projected targets.
8.4 Did the agency use evidence of effectiveness to allocate funds to any other competitive grant programs (besides its five largest grant programs)?
  • Millennium Challenge Corporation uses evidence of effectiveness to allocate funds in all its competitive grant programs as noted above.
8.5 What are the agency’s one or two strongest examples of how competitive grant recipients achieved better outcomes and/or built knowledge of what works or what does not?
  • Millennium Challenge Corporation’s $358,000,000 Lesotho Compact funded a $17,900,000 land administration reform activity within the Private Sector Development Project. The activity supported land policy and regulatory reform, streamlined land administration procedures, created the Land Administration Authority, conducted public outreach, and supported systematic land regularization. The activity was based on the theory that strengthened tenure and efficient land administration would reduce land conflict, drive formal land transactions, and increase investment and use of land as collateral, which would contribute to private sector development, especially for women.
  • An independent evaluation of the investment found that:
    • Women’s tenure improved, with a 55-percentage-point increase in women’s sole or joint ownership of newly registered parcels.
    • Systematic land regularization decreased land conflict concerns by 5 percentage points for female-headed households but had no effect on the fear of losing land.
    • The adoption of the 2010 Land Act and establishment of the Land Administration Authority were associated with significant reductions in transaction time. The average time to register a land transfer and mortgage fell by more than 67% and 80%, respectively.
    • The activity catalyzed an increase in land transfers, from 165 a year in 2010 to 1,075 a year in 2019.
    • Policy and institutional reforms activated credit markets by more than doubling mortgages and increasing the share of mortgages issued jointly or in the name of women.
    • Systematic land regularization in informal settlements did not result in land investment.
  • The corporation is applying the evidence from these outcomes to a subsequent investment in Lesotho as well as to other land sector projects around the world.
8.6 Did the agency provide guidance that makes clear that city, county, and state government, and/or other grantees can or should use the funds they receive from these programs to conduct program evaluations and/or to strengthen their evaluation capacity building efforts?
  • As described above, MCC develops an M&E plan for every grantee, describing the independent evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed. As such, grantees use program funds for evaluation.
  • MCC’s Policy for Monitoring and Evaluation stipulates that the “primary responsibility for developing the M&E Plan lies with the MCA [grantee] M&E Director with support and input from MCC’s M&E Lead and Economist. MCC and MCA Project/Activity Leads are expected to guide the selection of the indicators at the process and output levels that are particularly useful for management and oversight of activities and projects.” The M&E policy is intended primarily to guide MCC and partner country staff decisions to utilize M&E effectively throughout the entire program life cycle in order to improve outcomes. All MCC investments also include M&E capacity building for grantees.
Score
10
Use of Evidence in Non-Competitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its non-competitive grant programs in FY22 (examples: evidence-based funding set-asides, requirements to invest funds in evidence-based activities, and pay for success provisions)

  • MCC does not administer non-competitive grant programs (relative score for criterion #8 applied).
Score
8
Repurpose for Results

In FY22, did the agency shift funds away from or within any practice, policy, or program that consistently failed to achieve desired outcomes (examples: requiring low-performing grantees to re-compete for funding; removing ineffective interventions from allowable use of grant funds; incentivizing or urging grant applicants to stop using ineffective practices in funding announcements; proposing the elimination of ineffective programs through annual budget requests; incentivizing well-designed trials to fill specific knowledge gaps; supporting low-performing grantees through mentoring, improvement plans, and other forms of assistance; and using rigorous evaluation results to shift funds away from a program)?

10.1 Did the agency have policy(ies) for determining when to shift funds away from grantees, practices, policies, interventions, and/or programs that consistently failed to achieve desired outcomes and did the agency act on that policy?
  • MCC’s Policy on Suspension and Termination specifies the reasons for which MCC may suspend or terminate assistance to partner countries, including if a country “engages in a pattern of actions inconsistent with the MCC’s eligibility criteria” by exhibiting negative outcomes such as:
    • a decline in performance on the indicators used to determine eligibility;
    • a decline in performance not yet reflected in the indicators used to determine eligibility; or
    • actions by the country which are determined to be contrary to sound performance in the areas assessed for eligibility for assistance, and which together evidence an overall decline in the country’s commitment to the eligibility criteria.
  • Of 64 compact selections by MCC’s Board of Directors, including regional compacts, 17 have had their partnerships or a portion of their funding ended due to concerns about country commitment to MCC’s eligibility criteria or a failure to adhere to their responsibilities under the compact. The Policy on Suspension and Termination also allows MCC to reinstate eligibility when countries demonstrate a clear policy reversal, a remediation of MCC’s concerns, and an obvious commitment to MCC’s eligibility indicators, including achieving desired results.
  • In a number of cases, MCC has repurposed investments based on real-time evidence. In MCC’s first compact with Lesotho, MCC canceled the Automated Clearing House sub-activity within the Private Sector Development Project after monitoring data indicated that it would not accomplish the economic growth and poverty reduction outcomes envisioned during compact development. The remaining $600,000 in the sub-activity was transferred to the Debit Smart Card sub-activity, which targeted expanding financial services to people living in remote areas of Lesotho. In Tanzania, the $32,000,000 Non-Revenue Water activity was re-scoped after the final design estimates on two of the activity’s infrastructure investments indicated higher costs that would significantly impact their economic rates of return. As a result, $13,200,000 was reallocated to the Lower Ruvu Plant Expansion activity, $9,600,000 to the Morogoro Water Supply activity, and $400,000 to other environmental and social activities. In all of these examples, the funding was either reallocated to activities with continued evidence of results or returned to MCC for investment in future programming.
10.2 Did the agency identify and provide support to agency programs or grantees that failed to achieve desired outcomes?
  • For every investment in implementation, MCC undertakes a Quarterly Performance Review with senior leadership to review, among many issues, quarterly results, including indicator tracking tables. If programs are not meeting evidence-based targets, MCC undertakes mitigation efforts to work with the partner country and program implementers to achieve desired results. These efforts are program- and context-specific but can take the form of increased technical assistance, reallocated funds, and/or new methods of implementation. For example, MCC reallocated funds in its compact with Ghana after the country failed to achieve approved policy reforms to ensure the sustainability of the investments. Upon program completion, if a program does not meet expected results targets, MCC works to understand and document why and how this occurred, beginning with program design, the theory of change, and program implementation. The results and learning from this inquiry are published through the country’s Star Report.
  • The corporation also consistently monitors the progress of compact programs and their evaluations across sectors, using the learning from this evidence to make changes to its operations. For example, in Côte d’Ivoire, MCC is currently implementing the Abidjan Transport Project that builds on critical learning from sixteen completed roads projects. The corporation learned that projects must be selected based on a complete road network analysis and that any transport program must address policy and institutional issues in the transport sector up front to ensure the sustainability of road investments. As such, the Abidjan Transport Project will focus on the rehabilitation of up to 32 kilometers of critical roadway and adjoining infrastructure in the central corridor of Abidjan and will invest in educational and training resources for road asset management, develop road asset and safety resources and management tools, and develop mechanisms to support more efficient use of road maintenance funds.
  • In Morocco, MCC is implementing a Workforce Development activity that builds on the results and learning from eleven completed technical and vocational education and training (TVET) investments. MCC synthesized learning from past TVET programs and concluded that MCC’s TVET investments should have two primary goals: placing graduates in higher income jobs and supplying the private sector with in-demand skills. Based on this learning, the Morocco Compact’s Workforce Development activity aims to increase the quality and relevance of TVET by supporting private-sector-driven governance as well as the construction/rehabilitation of fifteen training centers, together with targeted investments in policy reform of the sector. This activity is also investing in improvements to job placement services through a results-based financing mechanism as well as improvements to the availability and analysis of labor market data.
Back to the Standard

Visit Results4America.org