2022 Federal Index


Evaluation & Research

Did the agency have an evaluation policy, evaluation plan, and learning agenda (evidence building plan), and did it publicly release the findings of all completed program evaluations in FY22?

Score
7
Millennium Challenge Corporation
2.1 Did the agency have an agency-wide evaluation policy [example: Evidence Act 313(d)]?
  • The corporation’s Independent Evaluation Portfolio is governed by its publicly available Policy for M&E. This policy requires all programs to develop and follow comprehensive M&E plans that adhere to MCC’s standards. It was revised in March 2017 to ensure alignment with the Foreign Aid Transparency and Accountability Act of 2016. Pursuant to MCC’s M&E policy, every project must undergo an independent evaluation. The policy is currently being updated to reflect best practice in monitoring and evaluation standards and further align with the Evidence Act.
2.2 Did the agency have an agency-wide evaluation plan [example: Evidence Act 312(b)]?
  • Every MCC investment must adhere to MCC’s rigorous Policy for M&E, which requires every MCC program to contain a comprehensive M&E plan and undergo an independent evaluation. For each investment MCC makes in a country, the country’s M&E plan is required to be published within 90 days of entry into force. The M&E plan lays out the evaluation strategy and includes two main components. The monitoring component includes the methodology and process for assessing progress toward the investment’s objectives. The evaluation component identifies and describes the evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed. Each country’s M&E plan represents the evaluation plan and learning agenda for that country’s set of investments.
2.3 Did the agency have a learning agenda (evidence building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including but not limited to the general public, state, and local governments, and researchers/academics in the development of that agenda (example: Evidence Act 312)?
  • To advance MCC’s evidence base and respond to the Evidence Act, MCC is implementing a learning agenda around women’s economic empowerment with short- and long-term objectives. The agency is focused on expanding the evidence base to answer these key research questions:
    • How do MCC’s women’s economic empowerment activities contribute to MCC’s overarching goal of reducing poverty through economic growth?
    • How does MCC’s women’s economic empowerment work contribute to increased income and assets for households—beyond what those incomes would have been without the gendered/women’s economic empowerment design?
    • How does MCC’s women’s economic empowerment work increase income and assets for women and girls within those households?
    • How does MCC’s women’s economic empowerment work increase women’s empowerment, defined through measures relevant to the women’s economic empowerment intervention and project area?
  • These research questions were developed through extensive consultation within MCC and with external stakeholders. Agency leadership has named inclusion and gender as a key priority. In support of this priority, MCC released a new Inclusion and Gender Strategy to further codify ambition around learning on these issues.
2.4 Did the agency publicly release all completed program evaluations?
  • The corporation publishes each independent evaluation of every project, underscoring its commitment to transparency, accountability, learning, and evidence-based decision-making. All independent evaluations and reports are publicly available on the new MCC Evidence Platform. As of September 2022, MCC had contracted, planned, and/or published 236 independent evaluations. All MCC evaluations produce a final report to present final results, and some evaluations also produce an interim report to present interim results. To date, 122 final reports and 45 interim reports have been finalized and published.
  • In FY22, MCC also continued producing Evaluation Briefs, an MCC product that distills key findings and lessons learned from MCC’s independent evaluations. MCC will produce an Evaluation Brief for each evaluation moving forward, and in FY22 it also completed Evaluation Briefs for the backlog of previously completed evaluations. As of September 2022, MCC has published 131 Evaluation Briefs.
2.5 Did the agency conduct an Evidence Capacity Assessment that includes information about the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts [example: Evidence Act 315, subchapter II (c)(3)(9)]?
  • The Millennium Challenge Corporation is currently drafting a capacity assessment in accordance with the Evidence Act. Once a compact or threshold program is in implementation, M&E resources are used to procure evaluation services from external independent evaluators to directly measure high-level outcomes and assess the attributable impact of all of MCC’s programs. MCC sees its independent evaluation portfolio as an integral tool to remain accountable to stakeholders and the general public, demonstrate programmatic results, and promote internal and external learning. Through the evidence generated by monitoring and evaluation, the M&E managing director, chief economist, and vice president for the Department of Policy and Evaluation are able to continuously update estimates of expected impacts with actual impacts to inform future programmatic and policy decisions. In FY22, MCC began or continued comprehensive independent evaluations for every compact or threshold project, a requirement stipulated in Section 7.5.1 of MCC’s Policy for M&E. All evaluation designs, data, reports, and summaries are available on MCC’s Evidence Platform.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • The corporation employs rigorous independent evaluation methodologies to measure the impact of its programming, evaluate the efficacy of program implementation, and determine lessons learned to inform future investments. As of September 2022, about 36% of MCC’s evaluation portfolio consists of impact evaluations, and 64% consists of performance evaluations. All MCC impact evaluations use random assignment to determine which groups or individuals will receive an MCC intervention, which allows for a counterfactual and thus for attribution to MCC’s project and best enables MCC to measure its impact in a fair and transparent way. Each evaluation is conducted as prescribed by the program’s M&E plan, in accordance with MCC’s Policy for M&E.
Score
10
U.S. Department of Education
2.1 Did the agency have an agency-wide evaluation policy [example: Evidence Act 313(d)]?
  • The department’s evaluation policy is posted online at ed.gov/data. Key features of the policy include the department’s commitment to: (1) independence and objectivity, (2) relevance and utility, (3) rigor and quality, (4) transparency, and (5) ethics. Special features include additional guidance to ED staff on considerations for evidence building conducted by ED program participants, which emphasize the need for grantees to build evidence in a manner consistent with the parameters of their grants (e.g., purpose, scope, and funding levels), up to and including rigorous evaluations that meet IES’s What Works Clearinghouse™ (WWC) standards without reservations.
2.2 Did the agency have an agency-wide evaluation plan [example: Evidence Act 312(b)]?
2.3 Did the agency have a learning agenda (evidence building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda (example: Evidence Act 312)?
  • The Department’s FY22-FY26 Learning Agenda is posted at https://www.ed.gov/data under “Evidence-Building Deliverables,” as well as on evaluation.gov. The Learning Agenda describes both how stakeholders were engaged in the Agenda’s development, and how stakeholders are to be engaged after the Agenda’s publication.
2.4 Did the agency publicly release all completed program evaluations?
  • The Institute of Education Sciences publicly releases all peer-reviewed publications from its evaluations on the IES website and in the Education Resources Information Center (ERIC). Many IES evaluations are also reviewed by its What Works Clearinghouse. The institute also maintains profiles of all evaluations, both completed and ongoing, on its website, including key findings, publications, and products. It regularly conducts briefings on its evaluations for ED, OMB, congressional staff, and the public.
2.5 Did the agency conduct an Evidence Capacity Assessment that addressed the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts [example: Evidence Act 315, subchapter II (c)(3)(9)]?
  • The department’s FY22-FY26 Capacity Assessment is part of the agency’s FY22-FY26 Strategic Plan. It is also available at evaluation.gov.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • The IES website includes a searchable database of planned and completed evaluations, including those that use experimental, quasi-experimental, or regression discontinuity designs. All impact evaluations rely upon experimental trials. Other methods, including matching and regression discontinuity designs, are classified as rigorous outcomes evaluations. The institute also publishes studies that are descriptive or correlational, including implementation studies and less rigorous outcome evaluations. The Department of Education’s evaluation policy outlines its commitment (p. 5) to leveraging a broad range of evaluation tools, including rigorous methods to best suit the needs of the research question.
Score
10
U.S. Agency for International Development
2.1 Did the agency have an agency-wide evaluation policy [example: Evidence Act 313(d)]?
  • The agency-wide USAID Evaluation Policy, published in January 2011 and updated in October 2016 and April 2021, incorporates changes that better integrate with USAID’s Program Cycle Policy and ensure compliance with the Foreign Aid Transparency and Accountability Act and the Foundations for Evidence-Based Policymaking Act of 2018. The 2021 changes to the evaluation policy updated evaluation requirements to simplify implementation and increase the breadth of evaluation coverage, dissemination, and utilization.
  • The 2021 changes also established new requirements that will allow for the majority of program funds to be subjected to external evaluations. The requirements include (1) at least one evaluation per intermediate result defined in the operating unit’s strategy; (2) at least one evaluation per activity (contracts, orders, grants, and cooperative agreements) with a budget expected to be $20,000,000 or more; and (3) an impact evaluation for any new, untested approach anticipated to be expanded in scale and scope. These requirements are communicated primarily through USAID’s Automated Directives System (ADS) 201.
  • The Evaluation Policy treats consultation with in-country partners and beneficiaries as essential and requires that evaluation reports include sufficient local contextual information. To make the conduct and practice of evaluations more inclusive and relevant to the country context, the policy requires that evaluations be consistent with institutional aims of local ownership through respectful engagement with all partners, including local beneficiaries and stakeholders, while leveraging and building local capacity for program evaluation. As a result, the policy expects that evaluation specialists from partner countries who have appropriate expertise will lead and/or be included in evaluation teams. In addition, USAID focuses its priorities within its sectoral programming on supporting partner government and civil society capacity to undertake evaluations and use the results generated. Data from the USAID Evaluation Registry indicate that annually about two-thirds of evaluations were conducted by teams that included one or more local experts. However, while local experts are often included on evaluation teams, it remains rare for a local expert to serve as the evaluation team lead for USAID evaluations.
2.2 Did the agency have an agency-wide evaluation plan [example: Evidence Act 312(b)]?
  • Since beginning to operationalize the Evidence Act, USAID has produced two agency-wide annual evaluation plans: the Annual Evaluation Plan for FY22 and the Annual Evaluation Plan for FY23. These plans fulfill the Evidence Act requirement that all federal agencies develop an annual evaluation plan describing the significant evaluation activities the agency plans to conduct in the fiscal year following the year in which the plan is submitted. The plans contain significant evaluations that each address a question from the agency-wide Learning Agenda; performance evaluations of activities with budgets of $40,000,000 or more; impact evaluations; and ex-post evaluations.
  • In addition, USAID has an agency-wide evaluation registry that collects information on all evaluations planned to commence within the next three years (as well as tracking ongoing and completed evaluations). Currently, this information is accessible and used internally by USAID staff and is not published. To meet the Evidence Act requirement, in March 2022, USAID published its Annual Evaluation Plan for FY23 on the DEC. A draft agency-wide evaluation plan for FY24 will also be submitted to OMB in September 2022, as part of the Evidence Act deliverable.
  • In addition, USAID’s Office of LER works with bureaus to develop internal annual Bureau Monitoring, Evaluation and Learning Plans that review evaluation quality and evidence building and use within each bureau and identify challenges and priorities for the year ahead.
2.3 Did the agency have a learning agenda (evidence building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda (example: Evidence Act 312)?
  • The agency-wide learning agenda for USAID was first established in 2018, prior to the passing of the Evidence Act. Traditionally, USAID adopts a strongly consultative process with internal and external stakeholders to inform its priority learning needs in developing its learning agendas. Throughout the implementation of its learning agenda, USAID continues to engage external stakeholders through learning events and invitations to share evidence and by making learning agenda products and resources publicly available.
  • As priorities shift, the Agency Learning Agenda must adapt so that it continues to meet the learning needs of the agency. A new Agency Learning Agenda that incorporates current agency priorities and aligns with the FY22-26 Joint Strategic Plan was published in May 2022. This learning agenda contains questions in key agency priority and policy areas, including operational effectiveness; resilience to shocks; climate change; anti-corruption; affirmative development; migration; diversity, equity, inclusion, and accessibility; locally led development; and sustainability. Through implementation of the Agency Learning Agenda, USAID is committed to furthering the generation and use of evidence to inform agency policies, programs, and operations related to these critical priority areas.
  • Stakeholder consultations with internal and external stakeholders were central to the learning agenda development process. These consultations captured a small, prioritized set of agency learning needs related to agency policy priorities and identified opportunities for collaboration with key stakeholders on this learning. The Agency Learning Agenda team also consulted mission staff from all of the regions in which USAID operates, as well as Washington operating units, to capture a diversity of internal voices. Consultations with external stakeholders included a selection of congressional committees, interagency partners (e.g., the Department of State), other donors, think tanks, nongovernmental researchers, and development-focused convening organizations. The Agency Learning Agenda incorporates feedback gathered through these stakeholder consultations, inputs from the joint strategic planning process with the Department of State, and a stocktaking of the previous learning agenda’s implementation to arrive at a prioritized set of questions that will focus agency learning on top policy priorities from 2022 through 2026.
2.4 Did the agency publicly release all completed program evaluations?
  • To increase access to and awareness of completed evaluation reports, USAID has created an Evaluations at USAID dashboard of completed evaluations starting from FY16. The dashboard includes an interactive map showing countries and the respective evaluations completed for each fiscal year. Using filters, completed evaluations can be searched by operating unit, sector, evaluation purpose, evaluation type, and evaluation use. The dashboard also reports the percentage of USAID evaluations conducted by teams that include local evaluation experts. The information for FY21 is being finalized and will be used to update the dashboard. The dashboard has also served as a resource for USAID missions; for example, USAID/Cambodia and USAID/Azerbaijan used it to produce annotated bibliographies that informed the design of civic engagement activities.
  • In addition, all final USAID evaluation reports are published on the DEC, except for a small number of evaluations that receive a waiver of public disclosure (typically less than 5% of the total completed in a fiscal year). The process to seek a waiver of public disclosure is outlined in the document Limitations to Disclosure and Exemptions to Public Dissemination of USAID Evaluation Reports and includes exceptions for circumstances such as those when “public disclosure is likely to jeopardize the personal safety of U.S. personnel or recipients of U.S. resources.”
  • A review of evaluations as part of an equity assessment report to OMB (in response to the Racial and Ethnic Equity Executive Order) found that evaluations that include analysis of racial and ethnic equity are more likely to be commissioned by USAID’s Africa Bureau and USAID Programs in Ethiopia, Tanzania, Kenya, Liberia, Ghana, Uganda, Malawi, Indonesia, India, Cambodia, Kosovo, and Colombia. Reports on agriculture, education, and health programs most often utilize the words race and ethnicity in the evaluation findings.
2.5 Did the agency conduct an Evidence Capacity Assessment that addressed the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts [example: Evidence Act 315, subchapter II (c)(3)(9)]?
  • The agency recognizes that sound development programming relies on strong evidence that enables policymakers and program planners to make decisions, improve practice, and achieve development outcomes. As one of the deliverables of the Evidence Act, a capacity assessment was submitted to OMB and published in March 2022. This report provided an initial overview of coverage, quality, methods, effectiveness, and independence of statistics, evaluation, research, and analysis functions and activities within USAID. The report demonstrated that evaluations conducted by operating units cover the range of program areas of USAID foreign assistance investment. Economic growth, health, democracy, human rights, and governance accounted for more than three-quarters of evaluations completed by the agency in FY21.
  • The Capacity Assessment for Statistics, Evaluation, Research, and Analysis found that USAID staff use evidence from a variety of sources when they design USAID activities. Using quantitative data from a staff survey and qualitative data from key informant interviews, focus group discussions, and a data interpretation workshop, this capacity assessment used a maturity matrix benchmarking tool to assess USAID’s capacity to generate, manage, and use evidence. This tool was used to develop maturity levels of the agency around five elements that are most critical for evidence generation, management, and use: (1) resources, (2) culture, (3) collaborating, (4) learning, and (5) adapting.
  • USAID staff also review evaluation quality on an ongoing basis and review the internal Bureau Monitoring, Evaluation and Learning Plans referenced in 2.2 above. Most recently, USAID completed a review of the quality of its impact evaluations, assessing all 133 USAID-funded impact evaluation reports published between FY12 and FY19. In addition, several studies over the previous several years have looked at parts of this question. These include GAO reports, such as Foreign Assistance: Agencies Can Improve the Quality and Dissemination of Program Evaluations and From Evidence to Learning: Recommendations to Improve Foreign Assistance Evaluations; reviews by independent organizations, such as the Center for Global Development’s Evaluating Evaluations: Assessing the Quality of Aid Agency Evaluations in Global Health, Working Paper 461; and studies commissioned by USAID, such as Meta-Evaluation of Quality and Coverage of USAID Evaluations 2009-2012. These studies generally show that USAID’s evaluation quality is improving over time, with room for continued improvement.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
Score
9
AmeriCorps
2.1 Did the agency have an agency-wide evaluation policy [example: Evidence Act 313(d)]?
  • AmeriCorps has an evaluation policy that presents five key principles that govern the agency’s planning, conduct, and use of program evaluations: rigor, relevance, transparency, independence, and ethics.
2.2 Did the agency have an agency-wide evaluation plan [example: Evidence Act 312(b)]?
  • AmeriCorps developed and implemented its FY22-26 strategic plan as well as its learning agenda, which describes anticipated evidence generated from ongoing and planned evaluations.
2.3 Did the agency have a learning agenda (evidence building plan) and did the learning agenda describe the agency’s process for engaging stakeholders, including but not limited to the general public, state and local governments, and researchers/academics, in the development of that agenda (example: Evidence Act 312)?
  • AmeriCorps uses the terms learning agenda, evaluation plan, and strategic evidence building plan synonymously. AmeriCorps has an evergreen learning agenda; the plan was updated and approved by the U.S. Office of Management and Budget (OMB) in FY22. As part of its stakeholder engagement process for the strategic plan, AmeriCorps invited stakeholders to provide input on the learning agenda, and listening sessions covered the learning agenda. AmeriCorps State and National also invited state commissions to share feedback through various calls. Internal stakeholders (e.g., staff working with grantees) may provide feedback on the learning agenda using a link on the Office of Research and Evaluation’s SharePoint home page.
2.4 Did the agency publicly release all completed program evaluations?
  • All completed evaluation reports are posted to the Evidence Exchange, an electronic repository for evaluation studies and other reports. This virtual repository was launched in September 2015.
2.5 Did the agency conduct an Evidence Capacity Assessment that addressed the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts [example: Evidence Act 315, subchapter II (c)(3)(9)]?
  • A comprehensive portfolio of research projects has been built to assess the extent to which AmeriCorps is achieving its mission. As findings emerge, future studies are designed to continuously build the agency’s evidence base. The Office of Research & Evaluation relies on scholarship in relevant fields of academic study; a variety of research and program evaluation approaches, including field, experimental, and survey research; multiple data sources, including internal and external administrative data; and different statistical analytic methods. AmeriCorps relies on partnerships with universities and third-party research firms to ensure independence and access to state-of-the-art methodologies. It supports its grantees with evaluation technical assistance and courses to ensure that their evaluations are of the highest quality, and it requires grantees receiving $500,000 or more in annual funding to engage an external evaluator. These efforts, along with a suite of AmeriCorps resources for evaluations, have produced a robust body of evidence showing that (1) national service participants experience positive benefits, (2) nonprofit organizations are strengthened, and (3) national service programs effectively address local issues.
  • While AmeriCorps is a non-CFO agency and is therefore not required to comply with the Evidence Act, including the mandated Evidence Capacity Assessment, it procured a third party to support an analysis of its workforce capacity, inclusive of its evaluation, research, statistical, and analysis workforce capacity. Findings from this workforce analysis will be submitted to the agency in FY22 and used for continuous improvement.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • AmeriCorps uses the research design most appropriate for addressing the research question. When experimental or quasi-experimental designs are warranted, the agency uses them and encourages its grantees to use them, as noted in the agency evaluation policy: “AmeriCorps is committed to using the most rigorous methods that are appropriate to the evaluation questions and feasible within statutory, budget and other constraints.” As of August 2022, AmeriCorps has received 47 grantee evaluation reports that use experimental design and 144 that use quasi-experimental design. AmeriCorps has also funded a mixed-methods longitudinal study of National Civilian Community Corps (NCCC) members that includes a matched comparison group. This member development study will conclude in FY23.
Score
10
U.S. Department of Labor
2.1 Did the agency have an agency-wide evaluation policy [example: Evidence Act 313(d)]?
  • The Department of Labor has an Evaluation Policy that formalizes the principles that govern all program evaluations in the department, including methodological rigor, independence, transparency, ethics, and relevance. The policy represents a commitment to using evidence from evaluations to inform policy and practice. It states that “evaluations should be designed to address DOL’s diverse programs, customers, and stakeholders; and DOL should encourage diversity among those carrying out the evaluations.”
2.2 Did the agency have an agency-wide evaluation plan [example: Evidence Act 312(b)]?
  • The Chief Evaluation Office develops, implements, and publicly releases evidence building plans and assessments and annual evaluation plans. These plans are based on the agency learning agendas as well as the department’s Strategic Plan priorities, statutory requirements for evaluations, and priorities of the Secretary of Labor and the presidential administration. The evaluation plan includes the studies the office intends to undertake in the next year using set-aside dollars. Appropriations language requires the chief evaluation officer to submit a plan to the U.S. Senate and House Committees on Appropriations outlining the evaluations that will be carried out using dollars transferred to the office; the DOL evaluation plan serves that purpose. The Chief Evaluation Office also works with agencies to undertake evaluations and evidence building strategies to answer other questions of interest identified in learning agendas but not undertaken directly by the Chief Evaluation Office.
2.3 Did the agency have a learning agenda (evidence building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including, but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda (example: Evidence Act 312)?
  • The department’s evidence building plans and assessments outline the process for internal and external stakeholder engagement. Specifically, the Chief Evaluation Office has made explicit outreach efforts with state and local workforce agencies as well as academic scholars, including outreach to historically Black colleges and universities and Hispanic-serving institutions.
  • The department publishes multi-year evidence building plans (learning agendas) publicly. Further, in May 2022, the Chief Evaluation Office hosted a public event introducing the office as well as providing an opportunity for attendees to learn about upcoming research activities funded by DOL, including how individuals and organizations can engage with the office and provide input into future research priorities. The evaluation officer provided an overview of the office’s mission and activities, and staff provided an overview of DOL’s new strategic planning documents, the FY22-23 Evaluation Plan and FY22-26 Evidence Building Plan.
2.4 Did the agency publicly release all completed program evaluations?
  • All DOL program evaluation reports and findings funded by the Chief Evaluation Office are publicly released and posted on the complete reports section of the website of the Office of the Assistant Secretary for Policy. Department agencies, such as ETA, also post and release their own research and evaluation reports. Some program evaluations include data and results disaggregated by characteristics such as race, ethnicity, and gender. The department’s website also provides accessible summaries and downloadable one-pagers on each study. Its research development and review process includes internal and external working groups and reviews.
  • The Chief Evaluation Office publishes a quarterly newsletter and sends email campaigns on large relevant evaluations and other opportunities for academics and researchers; public events are also published on the website.
2.5 Did the agency conduct an Evidence Capacity Assessment that addressed the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts [example: Evidence Act 315, subchapter II (c)(3)(9)]?
  • The Chief Evaluation Office sponsored an assessment of DOL’s baseline capacity to produce and use evidence, with the aim of helping the department and its agencies identify key next steps to improve evidence capacity. It developed technical requirements and contracted with the American Institutes for Research/IMPAQ International, LLC (the research team) to design and conduct this independent third-party assessment, which included the sixteen DOL agencies in the department’s Strategic Plan. The assessment reflects data collected through a survey of targeted DOL staff, focus groups with selected DOL staff, and a review of selected evidence documents. The capacity assessment is publicly available on DOL’s website.
  • The department’s Evaluation Policy touches on its commitment to high-quality, methodologically rigorous research through funding independent research activities. Further, Chief Evaluation Office staff have expertise in research and evaluation methods as well as in DOL programs and policies and the populations they serve. For the majority of evaluation projects, the office also employs technical working groups whose members have deep technical and subject matter expertise. The office leveraged the FY20 learning agenda process to create an interim capacity assessment, per Evidence Act requirements, and has conducted a more detailed assessment of individual agencies’ capacity, as well as DOL’s overall capacity in these areas, to be published in 2022.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • The department employs a full range of evaluation methods to answer key research questions of interest, including impact evaluations when appropriate. Among DOL’s active portfolio of approximately fifty projects, study types range from rigorous evidence syntheses to implementation studies to quasi-experimental outcome studies and impact studies. Examples of DOL studies with a random-assignment component include an evaluation of a Job Corps demonstration pilot, the Cascades Job Corps College and Career Academy, and the Ready-to-Work Partnership Grant evaluation. An example of a multi-arm randomized controlled trial is the Reemployment Services and Eligibility Assessments evaluation, which assessed a range of strategies to reduce unemployment insurance duration and improve employment as well as wage outcomes.
Score
10
Administration for Children and Families (HHS)
2.1 Did the agency have an agency-wide evaluation policy [example: Evidence Act 313(d)]?
  • The Administration for Children and Families Evaluation Policy confirms its commitment to conducting evaluations and using evidence from evaluations to inform policy and practice. ACF seeks to promote rigor, relevance, transparency, independence, and ethics in the conduct of evaluations. It established an evaluation policy in 2012 and updated it in 2021. It published the updated version, which includes a focus on equity throughout all five principles, in the Federal Register on November 11, 2021. In late 2019, ACF released a short video about the policy’s five principles and how it uses them to guide its work.
  • As ACF’s primary representative to the HHS Evidence and Evaluation Council, the ACF deputy assistant secretary for planning, research, and evaluation co-chaired the HHS Evaluation Policy Subcommittee, the body responsible for developing an HHS-wide evaluation policy, which was released in 2021.
2.2 Did the agency have an agency-wide evaluation plan [example: Evidence Act 312(b)]?
  • In accordance with guidance from the U.S. Office of Management and Budget (OMB), ACF contributes to the HHS-wide evaluation plan. The Office of Planning, Research, and Evaluation also annually identifies questions relevant to the programs and policies of ACF and develops an annual research and evaluation spending plan. This plan focuses on activities that OPRE plans to conduct during the following fiscal year.
2.3 Did the agency have a learning agenda (evidence building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda (example: Evidence Act 312)?
  • In accordance with OMB guidance, HHS developed an agency-wide evidence-building plan. To develop this document, HHS asked each sub-agency to submit examples of its priority research questions, potential data sources, anticipated approaches, challenges and mitigation strategies, and active engagement strategies with those affected by this work. The Administration for Children and Families drew from its existing program-specific learning agendas and research plans to contribute priority research questions, and its learning activities appear under Strategic Goal 3 (Strengthen Social Well-Being, Equity, and Economic Resilience) in the HHS evidence building plan.
  • In 2020, ACF released a research and evaluation agenda describing research and evaluation activities and plans in nine ACF program areas with substantial research and evaluation portfolios: adolescent pregnancy prevention and sexual risk avoidance, child care, child support enforcement, child welfare, Head Start, health profession opportunity grants, healthy marriage and responsible fatherhood, home visiting, and welfare and family self-sufficiency.
  • In addition to fulfilling requirements of the Evidence Act, ACF has supported and continues to support systematic learning and active engagement activities across the agency.
2.4 Did the agency publicly release all completed program evaluations?
  • The Administration for Children and Families Evaluation Policy requires that “ACF will release evaluation results regardless of findings. Evaluation reports will present comprehensive findings, including favorable, unfavorable, and null findings. ACF will release evaluation results timely–usually within two months of a report’s completion.” ACF has publicly released the findings of all completed evaluations to date. In 2021, OPRE released over 220 research publications. These publications are publicly available on the OPRE website.
2.5 Did the agency conduct an Evidence Capacity Assessment that addressed the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts [example: Evidence Act 315, subchapter II (c)(3)(9)]?
  • In accordance with OMB guidance, ACF contributed to an HHS-wide capacity assessment, which was conducted in early 2022.
  • Additionally, OPRE launched the ACF Evidence Capacity Support project in 2020. This project provides support to ACF’s efforts to build and strengthen programmatic and operational evidence capacity, including supporting learning agenda development and the development of other foundational evidence through administrative data analysis. To operationalize “evidence capacity” and guide engagement at the ACF level, the project developed a research-based conceptual framework that will be publicly available in late 2022.
  • Given the centrality of data capacity to evidence capacity, ACF partnered with the HHS Office of the Chief Data Officer to develop and pilot test a tool to conduct an HHS-wide data capacity assessment, consistent with Title II Evidence Act requirements. In support of specifically modernizing ACF’s data governance and related capacity, ACF launched the Data Governance Consulting and Support project. The Data Governance Support project is providing information gathering, analysis, consultation, and technical support to ACF and its partners to strengthen data governance practices within ACF offices and between ACF and its partners at the federal, state, local, and tribal levels.
  • The Administration for Children and Families also continues to support the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts as follows:
    • Quality: The Administration for Children and Families’ Evaluation Policy states that ACF is committed to using the most rigorous methods that are appropriate to the evaluation questions and the populations with whom research is being conducted and feasible within budget and other constraints. Rigor is necessary not only for impact evaluations but also for implementation/process evaluations, descriptive studies, outcome evaluations, and formative evaluations, and in both qualitative and quantitative approaches.
    • Methods: The Administration for Children and Families uses a range of evaluation methods. It conducts impact evaluations as well as implementation and process evaluations, cost analyses and cost-benefit analyses, descriptive and exploratory studies, research syntheses, and more. It also develops and uses methods that are appropriate for studying diverse populations, taking into account historical and cultural factors and planning data collection with disaggregation and subgroup analyses in mind. ACF is committed to learning about and using the most scientifically advanced approaches to determining the effectiveness and efficiency of ACF programs; to this end, OPRE annually organizes meetings of scientists and research experts to discuss critical topics in social science research methodology and how innovative methodologies can be applied to policy-relevant questions.
    • Effectiveness: Its evaluation policy states that ACF will conduct relevant research and disseminate findings in ways that are accessible and useful to policymakers, practitioners, and the diverse populations that ACF programs serve. The Office of Planning, Research, and Evaluation engages in ongoing collaboration with ACF program office staff and leadership to interpret research and evaluation findings and to identify their implications for programmatic and policy decisions such as ACF regulations and funding opportunity announcements. For example, when ACF’s Office of Head Start significantly revised its program performance standards, the regulations that define the standards and minimum requirements for Head Start services, the revisions drew from decades of OPRE research and the recommendations of the OPRE-led Secretary’s Advisory Committee on Head Start Research and Evaluation. Similarly, ACF’s Office of Child Care drew from research and evaluation findings related to eligibility redetermination, continuity of subsidy use, use of funds dedicated to improving the quality of programs, and other information to inform the regulations accompanying the reauthorization of the Child Care and Development Block Grant.
    • Independence: The Administration for Children and Families’ Evaluation Policy states that independence and objectivity are core principles of evaluation. Agency and program leadership, program staff, service providers, populations and communities studied, and others should participate actively in setting evaluation priorities, identifying evaluation questions, and assessing the implications of findings. However, it is important to insulate evaluation functions from undue influence and from both the appearance and the reality of bias. To promote objectivity, ACF protects independence in the design, execution, analysis, and reporting of evaluations. To this end, ACF will conduct evaluations through the competitive award of grants and contracts to external experts who are free from conflicts of interest.
  • The deputy assistant secretary for planning, research, and evaluation reports directly to the assistant secretary for children and families, serves as ACF’s chief evaluation officer, has authority to approve the design of evaluation projects and analysis plans, and has authority to approve, release and disseminate evaluation reports.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • The Administration for Children and Families’ Evaluation Policy states that, in assessing the effects of programs or services, its evaluations will use methods that isolate to the greatest extent possible the impacts of the programs or services from other influences and that, for causal questions, experimental approaches are preferred. As of April 2021, at least twenty-two ongoing OPRE projects included one or more random assignment impact evaluations. In FY22, OPRE released randomized controlled trial impact findings related to Health Profession Opportunity Grants and TANF job search assistance strategies.
Score
10
Substance Abuse and Mental Health Services Administration
2.1 Did the agency have an agency-wide evaluation policy [example: Evidence Act 313(d)]?
  • In FY22, SAMHSA developed and approved an Evaluation of SAMHSA Programs Policies and Procedures document, which incorporates guidance provided by the Foundations for Evidence-Based Policymaking Act of 2018 (Evidence Act). In recognition of the need to formalize a systematic approach to planning, managing, and overseeing programmatic and policy evaluation activities within SAMHSA, this document provides guidance to imbue core principles of consistency, quality, and rigor into all SAMHSA evaluations of programs and policies while ensuring that they are conducted in an ethical manner and that the dignity, respect, rights, and privacy of all participants are zealously safeguarded. Completed significant evaluations will be posted publicly (https://www.samhsa.gov/data/program-evaluations/evaluation-reports).
  • In accordance with this policy document and the Evidence Act, SAMHSA created an agency-wide Evidence and Evaluation Board. Building on the Evaluation of SAMHSA Programs Policies and Procedures documents, the board drafted a SAMHSA FY23 Evaluation Plan that includes ongoing and planned evaluations for FY23. An Evaluation Plan will be drafted annually, with the Evaluation of SAMHSA Program Policies and Procedures document reviewed every two years. A Learning Agenda is under development by the Evidence and Evaluation Board and will be annually reviewed and updated, as needed.
  • In addition to the Policies and Procedures documents, during this fiscal year SAMHSA dedicated staff and resources from all offices and centers to update its disparity impact statement (DIS). The statement is required of discretionary grant programs and is designed to support greater diversity, equity, and inclusion among those impacted by SAMHSA grants by raising awareness of, and intention to include, populations that are underrepresented or experience health disparities. This revised DIS template for grantees was implemented in early FY23.
2.2 Did the agency have an agency-wide evaluation plan [example: Evidence Act 312(b)]?
  • As part of the Evidence Act, agencies within the U.S. Department of Health and Human Services (HHS) submitted a plan that lists and describes the specific evaluation activities the agency plans to undertake in the fiscal year following the year in which the evaluation plan is submitted (referred to as the HHS Evaluation Plan). The HHS Evaluation Plan and Evidence Building Plan are organized based on priority areas drawn from HHS’s departmental priorities, proposed strategic plan goals, and proposed agency priority goals. The Substance Abuse and Mental Health Services Administration contributed to both the HHS Evaluation Plan and the Evidence Building Plan and plays an active role in HHS monthly meetings.
2.3 Did the agency have a learning agenda (evidence building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda (example: Evidence Act 312)?
  • As an agency within HHS and an active participant in HHS Evidence and Evaluation Policy Council, SAMHSA contributed to the HHS learning agenda, participated in monthly meetings, and contributed to developing the Evidence Building Plan. One example of a SAMHSA contribution to the HHS strategy was its support for evidence building for the first HHS priority area: protect and strengthen equitable access to high-quality and affordable health care. During the COVID-19 pandemic, to avoid placing additional burden on state and local governments and representatives of non-governmental research, HHS engaged a range of stakeholders with various expertise across the department, utilizing existing communication channels and bodies, such as the HHS Evidence and Evaluation Council.
  • Through the Evidence and Evaluation Board, SAMHSA is working on an agency-specific evidence building plan that will include ongoing and proposed evaluations, performance monitoring for discretionary and block grants, foundational fact finding (through discretionary grant program profiles), and policy and evidence-based practices (through the SAMHSA Policy Lab). The engagement strategy SAMHSA will use is still under development but will likely include input from SAMHSA’s National Advisory Councils and through partnerships with SAMHSA regional offices.  Internal stakeholders will be engaged through the Evidence and Evaluation Board as well as cross-agency activities conducted by CBHSQ, such as data parties (activities designed to examine SAMHSA data for problem solving and sharing of diverse perspectives and to promote opportunities for SAMHSA to discuss ways to improve data collection, data quality, and data use) and individual outreach to key internal informants and champions.
  • Similar to the HHS plan, SAMHSA’s evidence building plan and learning agenda will include the agency’s five priority areas: overdose prevention; enhancing access to suicide prevention and crisis care; promoting resilience and emotional health for children, youth, and families; integrating behavioral and physical health care; and strengthening the behavioral health workforce. The cross-SAMHSA areas include equity, trauma-informed approaches, and commitment to data and evidence. The evidence building plan will also include conclusions from legislatively mandated activities and process evaluations, such as the triennial report required for the Projects for Assistance in Transition from Homelessness (PATH).
2.4 Did the agency publicly release all completed program evaluations?
  • SAMHSA has website architecture to support the impending posting of the approved/cleared program evaluation reports that are currently undergoing 508 compliance conversion. The Programs Evaluations page provides access to Evaluation Reports, along with Evaluation Policies (directly linked to Evidence Act requirements), Ongoing Evaluations, and Evidence-Based Resources pages. SAMHSA is working to populate these pages with an archive of previous evaluations for purposes of transparency and to post current and future evaluation results as they are completed.
  • Publicly available evaluations analyze data by race, ethnicity, and gender, among other elements such as social determinants of health (e.g., stable housing and employment). SAMHSA strives to share program data whenever possible to promote continuous quality improvement. For example, SAMHSA’s PATH funds services for people with serious mental illness experiencing homelessness; annual data may be found online. Similarly, comparative state mental health data from block grants can be found in the SAMHSA Uniform Reporting System output tables.
  • SAMHSA shared evaluation and performance measurement data on all programs in its publicly available FY23 Congressional Justification.
  • SAMHSA is in the process of sharing several evaluations either in full or through a spotlight. For example, SAMHSA’s evaluation report for PATH has been posted to Evaluation Reports. The Strategic Prevention Framework-Prescription Drugs (SPF-Rx) program evaluation has been approved for public release and will be posted to Evaluation Reports upon completion of the 508 compliance process.
  • SAMHSA has many ongoing evaluations with results not yet available for release. Once these program evaluation reports and related materials have been finalized and cleared and have successfully completed the 508 process, they will be posted to Evaluation Reports. SAMHSA has also developed a process to not only publicly share program evaluations but also to ensure that future evaluations include a discussion of the dissemination plan during the early stages of development.
  • As a foundational fact-finding activity rather than an evaluation, CBHSQ, in partnership with SAMHSA centers, developed annual project profiles for discretionary grants covering a set of performance indicators (such as client demographics, changes in social determinants of health, and pre/post changes in substance use) to track and monitor performance.
  • In FY22, SAMHSA created a cross-center workgroup to systematically share data collected through discretionary grant programs. The workgroup developed a strategy for sharing data and evidence with both internal and external stakeholders, including newsletters highlighting topics (such as the Minority AIDS Initiative and women’s health month) and a SAMHSA Stats e-blast listserv that shares Government Performance and Results Act data monthly with over 80,000 registered users. Data were also shared on SAMHSA’s blog.
2.5 Did the agency conduct an Evidence Capacity Assessment that addressed the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts [example: Evidence Act 315, subchapter II (c)(3)(9)]?
  • As part of the HHS Evidence and Evaluation Council, all agencies within the department conducted an internal capacity assessment. The SAMHSA assessment was included in the HHS report. In FY22, SAMHSA created an Evidence and Evaluation Board. This board is composed of the directors of each of SAMHSA’s centers and offices as well as the chief data officer, evaluation officer, and statistician. This board will examine agency capacity, quality, and evaluation efforts. For FY22, the board served as the lead body to assess evidence capacity, as well as taking the lead to ensure that conclusions and recommendations from evaluations and evidence activities are included in discussions regarding future funding activities.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • Instead of applying one strategy for all evaluations, SAMHSA employs a variety of models, including performance monitoring and formative, process, and summative evaluations, using primarily quantitative data and mixed methods when appropriate and available. In FY22, SAMHSA developed an Evaluation Policies and Procedures document that articulates these principles for evaluation. In recognition of the need to formalize a systematic approach to planning, managing, and overseeing programmatic and policy evaluation activities within SAMHSA, it provides guidance to imbue core principles of consistency, quality, and rigor into all SAMHSA evaluations of programs and policies while ensuring that evaluations are conducted in an ethical manner and that the dignity, respect, rights, and privacy of all participants are zealously safeguarded. This document is reviewed every two years by the Evidence and Evaluation Board and updated as needed.
  • SAMHSA strives for a balance between the need for collecting data and the desire to minimize grantee burden. For example, in FY21, an evaluation of SAMHSA’s Naloxone Education and Distribution Program used a mixed methods approach, examining qualitative data from key informant interviews and focus groups, coupled with SAMHSA’s discretionary grant data collected through the SAMHSA Performance Accountability and Reporting System (SPARS). Another example is a final report for SAMHSA’s SPF-Rx program that included several sources of primary and secondary quantitative data (for example, from SAMHSA and the Centers for Disease Control and Prevention) mixed with interviews, all in response to three primary evaluation questions. This evaluation utilized a quasi-experimental model (a difference-in-differences design) and external administrative data to compare grant-funded areas to comparison counties.
  • In addition, recognizing that one size does not fit all, SAMHSA has developed a draft evaluation plan that includes a dissemination strategy for each of its current evaluation projects. The plan is still under review by the Evidence and Evaluation Board. As part of this work, all proposed and ongoing evaluations will be required to share the evaluation model and data to be included in the evaluation work. These evaluations will be encouraged to consider a mixed methods approach and to employ the most rigorous methods possible. Evaluation work must also consider how the findings will be shared with internal and external stakeholders (e.g., full report shared on SAMHSA’s website or a spotlight highlighting key findings).
  • The Substance Abuse and Mental Health Services Administration is partnering with the National Institute on Drug Abuse to support the HEALing Communities Study, which is a research initiative that intends to enhance the evidence base for opioid treatment options. Launched in 2019, this study aims to test the integration of prevention, overdose treatment, and medication-based treatment in select diverse communities hard hit by the opioid crisis. This comprehensive treatment model will be tested in a coordinated array of settings, including primary care, emergency departments, and other community settings. Findings will establish best practices for integrating prevention and treatment strategies that can be replicated by communities nationwide.
  • SAMHSA has also supported the National Study on Mental Health, which intends to provide national estimates of mental health and substance use disorders (SUDs) among U.S. adults aged eighteen to sixty-five. For the first time, this study will include adults living in households across the U.S. as well as in prisons, jails, state psychiatric hospitals, and homeless shelters. Data will be available in 2023.
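As a purely illustrative aside, the sketch below shows the basic arithmetic behind a difference-in-differences comparison of the kind described above: the change in an outcome for grant-funded counties before and after an intervention is compared with the change over the same period in comparison counties. The data, column names, and values here are hypothetical and are not drawn from the SPF-Rx evaluation.

```python
# Minimal difference-in-differences sketch (hypothetical data and column names;
# not taken from any SAMHSA evaluation). Each row is one county-period
# observation with an outcome rate, a treatment indicator (grant-funded vs.
# comparison county), and a period indicator (pre vs. post intervention).
import pandas as pd

data = pd.DataFrame({
    "county":  ["A", "A", "B", "B", "C", "C", "D", "D"],
    "treated": [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = grant-funded county
    "post":    [0, 1, 0, 1, 0, 1, 0, 1],   # 1 = after the intervention
    "outcome": [12.0, 9.0, 14.0, 10.5, 11.0, 10.8, 13.0, 12.6],
})

# Mean outcome in each of the four treated-by-period cells.
cell_means = data.groupby(["treated", "post"])["outcome"].mean()

# DiD estimate: (treated post - treated pre) - (comparison post - comparison pre).
did = (cell_means.loc[(1, 1)] - cell_means.loc[(1, 0)]) - (
    cell_means.loc[(0, 1)] - cell_means.loc[(0, 0)]
)
print(f"Difference-in-differences estimate: {did:.2f}")
```

In practice an evaluation of this type would fit a regression with controls and clustered standard errors, but the comparison of changes across the two groups is the core of the design.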
Score
10
U.S. Dept. of Housing & Urban Development
2.1 Did the agency have an agency-wide evaluation policy [Example: Evidence Act 313(d)]?
  • In 2016, HUD published a Program Evaluation Policy establishing core principles and practices for the evaluation and research activities of its Office of Policy Development and Research (PD&R). Rigor, relevance, transparency, independence, ethics, and technical innovation are set as core values, and the policy applies to all evaluations and important analyses supported by HUD.
  • In August 2021, PD&R updated the 2016 Program Evaluation Policy to address issues that had arisen since 2016, as well as stakeholder input received at a town hall PD&R hosted to discuss its experience sponsoring and publishing evaluations. Specifically, the updated HUD Program Evaluation Policy enhances the transparency of evaluation results by publishing interim results, makes greater use of data sharing licenses, and ensures compliance with data privacy requirements. The updates also reflect HUD’s focus on additional analysis relevant to underserved and underrepresented groups, demonstrating an agency-wide evaluation policy that couples rigorous analysis with attention to diversity and inclusion.
2.2 Did the agency have an agency-wide evaluation plan [Example: Evidence Act 312(b)]?
  • HUD published its FY23 Annual Evaluation Plan (AEP) in March 2022 as an annual update to its 2022 Evaluation Plan. The FY23 AEP includes new evaluation activities to be started in FY23 as well as ongoing evaluation activities. Rather than listing all activities, it presents selected significant evaluation activities that satisfy three criteria: (1) addressing pressing questions (topical relevance), (2) requiring substantial planning and cooperation (coordination), and (3) having secured funding from appropriations in a prior year or using dedicated in-house resources (commitment of resources). As guided by the Evidence Act, HUD’s Annual Evaluation Plan aligns with the goals identified in the department’s FY22-26 Strategic Plan. The department’s annual performance documents (the FY23 Annual Performance Plan and FY21 Annual Performance Report) provide specific information on program milestones.
2.3 Did the agency have a learning agenda (evidence building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda (example: Evidence Act 312)?
  • Since 2014, HUD has actively invested in developing and publishing the Research Roadmap, which served as an integrated document combining research questions and an evidence-building plan. With the enactment of the Evidence Act, HUD published its FY22-26 Learning Agenda, which replaced the research planning portion of the Research Roadmap. Stakeholder engagement is key to HUD’s learning agenda development process. Stakeholders include program partners in state and local governments and the private sector, researchers and academics, policy officials, and members of the general public who frequently access the HUDuser.gov portal. Outreach mechanisms for Roadmap development include email, web forums, conferences and webcasts, and targeted listening sessions. Appendix A of the recent Learning Agenda details the process of compiling ideas and organizing research questions.
  • Furthermore, in response to the executive order on Advancing Racial Equity and Support for Underserved Communities (Executive Order 13985), HUD has formed an Equity Leadership Committee composed of staff, along with an Equity Working Group with participation from various HUD offices. To ensure equity is integrated into the department’s work, HUD’s Equity Action Plan has prioritized stakeholder engagement as an area for immediate analysis by all program offices. The equity assessment seeks to identify and draw on the lived and professional knowledge of stakeholders who have been historically underrepresented in the federal government and underserved by, or subject to discrimination in, federal policies and programs. Findings from this assessment will further inform HUD’s long-term “equity transformation,” which aims to sustainably embed and improve equity throughout all of HUD’s work. The department’s long-term Equity Action Plan was released in April 2022, demonstrating a commitment to stakeholder engagement in developing agendas.
2.4 Did the agency publicly release all completed program evaluations?
  • The Program Evaluation Policy of the Office of Policy Development and Research requires timely publication and dissemination of all evaluations that meet standards of methodological rigor. Completed evaluations and research reports are posted on PD&R’s website, HUDUSER.gov. Additionally, the policy requires that research and evaluation contracts include language allowing researchers to publish results independently, even without HUD approval, after no more than six months. HUD’s publicly released program evaluations typically include data and results disaggregated by race, ethnicity, and gender where the data permit such disaggregation. For example, in 2020 HUD expanded the detail of race and ethnicity breakouts in the Worst Case Housing Needs reports to Congress to the full extent permitted by the data. Executive summaries highlight disparate impacts when they are statistically significant; otherwise, such findings may appear in the main body of the report or its appendices.
  • The Office of PD&R is reorganizing HUD’s published research and enhancing search capabilities on HUDUSER.gov. These steps are intended to make HUD’s research resources more usable for researchers, policymakers, and the general public.
2.5 Did the agency conduct an Evidence Capacity Assessment that addressed the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts [example: Evidence Act 315, subchapter II (c)(3)(9)]?
  • The Office of Policy Development and Research is HUD’s independent evaluation office, with a scope spanning all of the department’s program operations. In March 2022, PD&R published the HUD Capacity Assessment for Research, Evaluation, Statistics and Analysis. As required by the Office of Management and Budget (OMB), the assessment applies five criteria (coverage, quality, methods, effectiveness, and independence) to four evidence categories (statistics, evaluation, research, and analysis). The assessment is a collaborative product based on extensive input from multiple perspectives and from personnel with hands-on experience in HUD’s programs. For each criterion, the report notes the considerations used in the assessment (e.g., for the coverage criterion, whether HUD programs meet the expected level of comprehensiveness, appropriateness, and targeting). The assessment used the National Research Council’s 2008 review of HUD’s capacity development as an external reference, and survey results from HUD senior managers and from the Government Accountability Office’s survey of federal managers were also used as additional data points to support the internal assessment.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
Score
10
Administration for Community Living (HHS)
2.1 Did the agency have an agency-wide evaluation policy [Example: Evidence Act 313(d)]?
  • The agency’s public evaluation policy confirms its commitment to conducting evaluations and using evidence from evaluations to inform policy and practice. As addressed in this policy, ACL seeks to promote rigor, relevance, transparency, independence, and ethics in the conduct of evaluations. The policy was updated in 2021 to better reflect U.S. Office of Management and Budget (OMB) guidance provided in OMB memo M-20-12 and to more explicitly affirm ACL’s commitment to equity in evaluation.
2.2 Did the agency have an agency-wide evaluation plan [example: Evidence Act 312(b)]?
  • An agency-wide evaluation plan was submitted to HHS in support of HHS’s requirement to submit an annual evaluation plan to OMB in conjunction with its Agency Performance Plan. The agency’s annual evaluation plan includes evaluation activities related to the learning agenda and any other “significant” evaluation, such as those required by statute. The plan describes the systematic collection and analysis of information about the characteristics and outcomes of programs, projects, and processes as a basis for judgments, to improve effectiveness, and/or to inform decision-makers about current and future activities.
2.3 Did the agency have a learning agenda (evidence building plan) and did the learning agenda describe the agency’s process for engaging stakeholders including but not limited to the general public, state and local governments, and researchers/academics in the development of that agenda (example: Evidence Act 312)?
  • Based on the learning agenda approach that it adopted in 2018, ACL published an FY20–FY22 learning agenda in FY20. In developing the plan, ACL engaged stakeholders through meetings with program staff and grantees as required under OMB guidance provided in memo M-19-23. Most meetings with stakeholder groups, such as those held in conjunction with conference sessions, were put on hold for 2020 due to COVID-19 travel restrictions. In 2021, ACL engaged stakeholder groups to contribute to its learning activities. For example, ACL worked with members of the RAISE Family Caregiving Advisory Council and a range of stakeholders to inform changes to the 2021 data collection under the National Survey of Older Americans Act Participants. In 2021, ACL also released a request for information directed at small businesses to solicit research approaches related to its current research priorities.
2.4 Did the agency publicly release all completed program evaluations?
  • The Administration for Community Living releases all completed evaluation reports and studies, ongoing studies, and evaluation design projects according to its evaluation policy.
2.5 Did the agency conduct an Evidence Capacity Assessment that addressed the coverage, quality, methods, effectiveness, and independence of the agency’s evaluation, research, and analysis efforts [example: Evidence Act 315, subchapter II (c)(3)(9)]?
  • Staff from OPE play an active role in HHS’s capacity assessment efforts, serving on the Capacity Assessment and Learning Agenda Subcommittees of the HHS Evidence and Evaluation Council. The HHS 2023-2026 Capacity Assessment discusses ACL’s contributions to the coverage, quality, methods, effectiveness, and independence of the agency’s statistics, evaluation, research, and analysis efforts. The agency’s self-assessment results were provided to HHS to support its submission of the required information to OMB. These results provided information about planning and implementing evaluation activities, disseminating best practices and findings, incorporating employee views and feedback, and carrying out capacity-building activities to use evaluation, research, and analysis approaches and data in day-to-day operations. Based on this information, in 2021 ACL focused on developing educational materials for its staff and data improvement tools for ACL grantees. In 2021 the ACL Data Council published a guide to evaluation system change initiatives, as well as additional documents to promote responsible data usage: Data Quality 201: Data Visualization and Data Quality 202: Data Quality Standards. While designed initially for ACL staff, these documents are available on the ACL website and have been promoted through several industry conferences.
2.6 Did the agency use rigorous evaluation methods, including random assignment studies, for research and evaluation purposes?
  • Starting in 2020 and continuing into 2021, ACL funded contracts to design the most rigorous evaluations appropriate for measuring the return on investment of the Aging Network, the extent to which ACL services address social determinants of health, and the value of volunteers to ACL programs. The agency sometimes funds evaluation design contracts, such as those for the Older Americans Act Title VI Tribal Grants Program evaluation and the Long Term Care Ombudsman evaluation, to determine the most rigorous evaluation approach that is feasible given the structure of a particular program. While the Ombudsman program is a full-coverage program, for which comparison groups are not possible, ACL most frequently uses propensity score matching to identify comparison group members (see the illustrative sketch following these bullets). This was the case for the Older Americans Act Nutrition Services Program and National Family Caregivers Support Program evaluations and the Wellness Prospective Evaluation Final Report conducted by the Centers for Medicare & Medicaid Services in partnership with ACL.
  • The agency’s  NIDILRR funds the largest percentage of its randomized control trials (151 of 659  or 23%) for research projects employing a randomized clinical trial. To ensure adequate quality, NIDILRR adheres to strict peer reviewer evaluation criteria in the grant award process. In addition, ACL’s evaluation policy states that “in assessing the effects of programs or services, ACL evaluations will use methods that isolate to the greatest extent possible the impacts of the programs or services from other influences such as trends over time, geographic variation, or pre-existing differences between participants and non-participants. For such causal questions, experimental approaches are preferred. When experimental approaches are not feasible, high-quality quasi-experiments offer an alternative.”