Steps in Applying the Cost-Utility Decision-Making Framework

I. Decision Problem

Resources to help you identify the problem, define your goals, and define the decision problem

DecisionMaker® will help you address decision problems in which a choice needs to be made between several alternative educational strategies, programs or tools. You can also use it for other types of decisions such as which budget items to continue funding, how much to scale up a promising strategy or program, or which candidate to hire from among a number of potential candidates. Examples of decision problems to which DecisionMaker® can be applied include the following:

School Decisions

  • Which Social and Emotional Learning (SEL) curriculum should our school purchase?
  • What is the best strategy for providing out-of-school (OOS) instruction for our students who are out of school for discipline or health reasons?
  • Is an existing after-school program worth continuing to fund at our school?

School District Decisions

  • Which elementary literacy program is most appropriate for our students whose reading skills are below the 750 Lexile level?
  • Should we scale up a promising Gifted and Talented Identification program to a few grades at all schools or to all grades at a few schools?
  • Which digital device should we provide district-wide for our students and teachers?
  • What professional development should we provide teachers in order to improve pedagogical practices targeted at helping our K-2 African American students improve their math achievement?

State Education Agency Decisions

  • What is the best way to improve computer education in our state to meet the demands of the 21st Century?
  • Should we have a centralized or distributed student information system across our districts?
  • Which standardized tests should we require across the state?

University/College Decisions

  • Which instructional technology tool should we integrate into our classes?
  • How do we best serve students who need remediation in math?
  • What strategy should we pursue to improve on-time graduation for at-risk students?

II. Identify Stakeholders

Resources to help you identify which stakeholders to include and how

Stakeholders are people who may be affected by a decision. In DecisionMaker®, stakeholders are more specifically those who are invited to participate in making the decision being addressed. Relevant stakeholders will vary depending on what the decision is about, but potential stakeholders in educational contexts include the following:

School, District, College/University or State Administrative Leaders

  • Commissioner
  • President/Chancellor
  • Superintendents and Assistant Superintendents
  • Division Chiefs
  • Department Chairs/Heads/Directors
  • Principals and Assistant Principals

School or College/University Staff

  • Faculty/Teachers and other instructional staff
  • Instructional coaches
  • Guidance Counselors
  • Administrative and other support staff

Other Stakeholders

  • School, District or State Education Board members
  • Parents
  • Students
  • Community members
  • Community partners
  • Colleges
  • Employers

The National Research Council of the National Academies » provides a list of questions relevant to the inclusion of stakeholders in environmental decisions; many of the same issues apply to education decision-making. Click here » to download the free PDF as a guest, and see page 194 for this set of questions. Chapter 8 of the document provides a full discussion of issues related to stakeholder participation in decision-making.


III. Solution Options

Resources to help you identify possible programs, tools or strategies to address the decision problem you have articulated

Solution Options are programs/strategies/tools/individuals that could potentially address the problem that prompted this decision-making process, i.e., they are potential “answers” or solutions to your problem. For example, if you are looking for a Social and Emotional Learning (SEL) program to implement in K-12 schools to address high suspension rates and conflict between students, two Solution Options you might consider are RULER and Character First.

Where Can I Find Solution Options?

You may have ideas for Solution Options right away, or you may want to ask stakeholders to help you generate them.

To identify potential Solution Options for your decision problem, you can consult a variety of resources, including the repositories of evaluated programs listed at the end of this section.

Sometimes, establishing a list of Screening Criteria (non-negotiable requirements) first can help you develop a list of viable Solution Options, especially if there could be many different Solution Options and/or you need some guideposts to facilitate brainstorming. For example, imagine that your decision problem is how to provide out-of-school instruction to students who are on long-term suspension or medical leave. Before brainstorming Solution Options, you could check your state’s regulations on what is required under these circumstances. If you are in a state such as New Jersey, you would discover that any solution to this problem must involve at least 10 hours per week of face-to-face and one-to-one interactions between the student and a certified teacher. This immediately rules out some potential Solution Options like having these students participate in regular classes by teleconference or take online courses without teacher supervision.

How Many Solution Options Should I Consider?

The optimal number of Solution Options to consider will depend on the decision problem at hand and the time and capacity you have available for identifying and evaluating the options. You can add new Solution Options to consider at any time in the decision-making process and “put away” those Solution Options that are clearly not meeting your criteria.

For example, if you are trying to select a reading curriculum for struggling readers, gathering relevant information on each Solution Option and thoroughly evaluating each potential curriculum could be a very time-consuming process. In this situation, you would want to ensure that the number of options considered is not so large as to preclude a thorough evaluation: 3-6 options might be a feasible number.

However, a different kind of decision problem, such as ranking budget items to prioritize them for funding, might involve several hundred options. In this case, you would want to devise a less onerous process of gathering relevant information and evaluating each budget item.

Note that if you start with a long list of Solution Options, you can use Screening Criteria to narrow down the list until you have a more manageable set to consider thoroughly.

How do I know which information sources are likely to yield suitable Solution Options?

The source of your information may determine the extent to which the potential Solution Option is applicable to your context, and also how reliable the information is. Imagine that you are making a decision to select a reading curriculum for elementary school students and a large majority of your students are English Language Learners (ELLs). Looking for potential reading curricula (Solution Options) in peer-reviewed journals may produce very reliable information about the efficacy of each curriculum in the context studied, but your context may be quite different. Perhaps the study populations contained a very low proportion of ELL students, so even though you trust that the curricula worked well for students in the studies, you are not sure whether that will translate to your own students. You can use DecisionMaker®’s Relevance Index to help you decide whether a study is relevant enough to your context to give you confidence that the study’s results might apply in your situation.

Alternatively, a curriculum recommended by someone from another school in your district with a similar proportion of ELL students may appear to be more suitable for your context but may not have been evaluated rigorously for effectiveness at improving reading outcomes. You will need to consider this trade-off between suitability and reliability when identifying potential Solution Options. It is also important to consider how well each Solution Option interacts with existing policies and regulations in your context. Hence, an option used by another school in your district subject to similar policies and regulations may be the most viable to implement in your context. As a result, you might decide to score a curriculum recommended by a neighboring school higher on “fit with local context” than a curriculum you find in a peer-reviewed journal, but lower on evidence of effectiveness.

Bearing these considerations in mind, any of the avenues discussed above or listed below is a viable place to start looking for information. We recommend that, once you have some Solution Options in mind, you use repositories of research to check whether rigorous studies of effectiveness have been conducted in the past and whether they found positive impacts. If not, you may want to consider piloting some Solution Options to evaluate them in your own context.

Repositories of Research or Databases of Programs that have been Evaluated

You can use the links below to find reports of educational programs and strategies that have been rigorously studied. These may come in the form of individual evaluation reports or studies, or systematic reviews, which synthesize findings from many studies of programs under one theme.

  • Best Evidence Encyclopedia (BEE) » Provides summaries of scientific reviews produced by many authors and organizations and links to the full texts of each review.
  • Campbell Collaboration » Provides systematic reviews of research evidence in several areas including education, health, social welfare, crime and justice, and environment and conservation.
  • Cochrane Library » Provides systematic reviews of health-related interventions.
  • Digital Promise Research Map » Provides links to reviews and articles on educational research in 12 teaching and learning topics, and organizes them in Network View and Chord View to show connections among them.
  • Evidence for ESSA » Free database of math and reading programs that align to the ESSA evidence standards.
  • Google Scholar » You can search by key word for scholarly publications on any topic including peer-reviewed articles, evaluation reports, and white papers. These have not been vetted by Google, so you will need to carefully assess the sources and credibility of any documents you find here. Note that peer-reviewed publications often prefer that authors avoid naming specific programs and products. Also, some peer-reviewed journals may be behind a paywall. Sometimes you can find a free, pre-print version of the same paper on the internet by copying and pasting the title of the paper into your search engine.
  • LearnPlatform® Product Library » Freely available information and ratings on over 6,000 edtech tools. Includes rubric-based feedback from educators and privacy information on products.
  • Master Evidence Repository » This is our own database of educational programs and strategies that our collaborating districts have implemented. We document implementation details and summarize evidence of effectiveness from a number of other sources.
  • What Works Clearinghouse » Provides reviews of existing rigorous research on different programs, products, practices, and policies in education, and summaries of findings.

IV. Screening Criteria

Resources to help you identify any absolute or non-negotiable requirements that can be applied to narrow down the list of Solution Options to a number that will be feasible to evaluate fully

Screening Criteria are non-negotiable requirements that can be quickly and easily assessed with yes/no answers. They can be helpful for kicking off a brainstorming session to identify potential Solution Options or for narrowing down a list of potential Solution Options you already have. If further information is needed to determine whether a Solution Option meets one of these Screening Criteria and that information cannot be obtained immediately, the Solution Option should generally move forward for further evaluation rather than be screened out.

Examples of Screening Criteria

  • Fits within available budget
  • Can be implemented by date required
  • Comports with privacy standards
  • Evidence of effectiveness exists
  • Fits within school schedule
  • Meets content requirements or learning objectives
  • Meets state code or other regulations
  • Serves target population (grade level, ELL, etc.)

For example, if you are trying to decide which literacy program to implement in your school, and you already have a list of 6 possible literacy programs (Solution Options), you may decide to use the following Screening Criteria:

  1. Fits within available budget: the purchase price of the program must be less than $100 per student.
  2. Fits within school schedule: the program must be able to be implemented for 3 hours per week during school hours.

If you already have the necessary information about purchase price and how the programs could fit into your school schedule, you can quickly eliminate any program that has a purchase price greater than $100 per student and/or that cannot be implemented for 3 hours per week during school hours. Hopefully, this initial screening will leave you with a more manageable list of literacy programs to move forward for full evaluation. DecisionMaker® allows you to put away Solution Options that do not meet your Screening Criteria and reinstate them later if you change your mind.


V. Screen Solution Options

Applying the Screening Criteria to your Solution Options

The table below demonstrates how you might map each Solution Option against each Screening Criterion to decide whether to keep or eliminate each option.

| Screening Criterion | Option 1 | Option 2 | Option 3 | Option 4 | Option 5 | Option 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Fits within available budget: program must cost less than $100/student | Yes | Yes | No | No | Yes | Yes |
| Fits within school schedule: program must be able to be implemented for 3 hours/week during school hours | Yes | Yes | No | Yes | Yes | No |
| "Keep option" or "Put it away for now" | Keep option | Keep option | Put it away for now | Put it away for now | Keep option | Put it away for now |
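The keep/put-away rule in the table amounts to requiring a "Yes" on every Screening Criterion. A small Python sketch of that logic (the option names and criterion keys are illustrative, not taken from DecisionMaker® itself):

```python
# Hypothetical screening data: each option's yes/no answers per criterion.
options = {
    "Option 1": {"fits_budget": True,  "fits_schedule": True},
    "Option 2": {"fits_budget": True,  "fits_schedule": True},
    "Option 3": {"fits_budget": False, "fits_schedule": False},
    "Option 4": {"fits_budget": False, "fits_schedule": True},
    "Option 5": {"fits_budget": True,  "fits_schedule": True},
    "Option 6": {"fits_budget": True,  "fits_schedule": False},
}

# Keep an option only if it meets every Screening Criterion.
kept = [name for name, checks in options.items() if all(checks.values())]
put_away = [name for name in options if name not in kept]

print("Keep:", kept)          # Options 1, 2 and 5
print("Put away:", put_away)  # Options 3, 4 and 6
```

Because the criteria are strict yes/no requirements, a single "No" is enough to put an option away; there is no partial credit at this stage.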


VI. Evaluation Criteria

Resources to help identify factors you will consider in evaluating each of the possible Solution Options

Evaluation Criteria are factors that you will consider to help determine which of the potential Solution Options best meet(s) the needs of your stakeholders. Unlike Screening Criteria, which require only a Yes/No answer, the goal with Evaluation Criteria is to assess how well each Solution Option meets each criterion. It may take you or your colleagues some time and effort to gather the information and evidence needed to make these assessments.

It is important that each Evaluation Criterion you use in DecisionMaker® can be assigned a numerical value, for example, a test score, a number on a scale, a number corresponding to a rubric rating or other descriptive rating, or a 1/0 rating for Yes/No. By quantifying each item you are using to judge the Solution Options, it is possible to calculate an overall value for each option, allowing the options to be compared across all factors at once.
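Once every criterion is quantified, each option's scores can be combined into a single overall value. A minimal Python sketch of one such weighted-sum combination (the criteria, weights, and utility scores are invented for illustration; DecisionMaker®'s exact formula may differ):

```python
# Hypothetical final importance weights (they sum to 1).
importance_weights = {
    "academic_impact": 0.59,
    "teacher_support": 0.29,
    "online_materials": 0.12,
}

# Hypothetical utility score (0-10) of each program on each criterion.
utilities = {
    "Program A": {"academic_impact": 8.0, "teacher_support": 6.5, "online_materials": 9.0},
    "Program B": {"academic_impact": 6.0, "teacher_support": 9.0, "online_materials": 4.0},
}

# Overall value = sum over criteria of (importance weight x utility score).
overall = {
    option: sum(importance_weights[c] * u for c, u in scores.items())
    for option, scores in utilities.items()
}
# Program A scores higher overall (~7.69 vs ~6.63) despite weaker teacher support.
```

The weighted sum makes the trade-off explicit: an option can lose on one criterion and still win overall if it performs well on the criteria stakeholders weighted most heavily.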

To help facilitate this, we provide examples of Evaluation Criteria from which you can choose in addition to entering your own criteria. These are presented in two layers: first the “Overarching evaluation criteria” which are the overarching issues against which you might want to evaluate Solution Options, such as “Impact on student academic achievement,” and second, the “Granular evaluation criteria,” which are the more specific and measurable facets against which you might assess each Solution Option, such as “Impact on college admission.” You can view examples of commonly used Overarching and Granular Evaluation Criteria here ».

Examples of Overarching Evaluation Criteria

  • Addresses the identified need
  • Equity
  • External recommendations
  • Feasibility of implementation
  • Fit with local context
  • Impact on student academic performance
  • Impact on student or staff engagement
  • Meets required standards and regulations
  • Impact on student socio-emotional development
  • Improves teacher performance
  • Quality of implementation (for programs/strategies/tools already in place)
  • Support from stakeholders

For example, if you are selecting a literacy program to implement at your school:

  • You might include impact on student academic performance as one Evaluation Criterion. For each Solution Option, you could either review existing studies that indicate a percentage point improvement in performance on a standardized test of reading or conduct your own pilot studies to compare the efficacy of the programs.
  • You might include the level of support from teachers as a second criterion. To assess this, you could conduct demonstration lessons from each curriculum during professional development sessions and ask each teacher to indicate their level of support for each program on a scale of 0 to 10. The average teacher rating for each program would be used as the overall metric for teacher support for each program.

How many Evaluation Criteria should I use?

There is no fixed number that is right. There should be enough Evaluation Criteria to capture all considerations that are important to you and other stakeholders, but not so many that it would be hard to find the time to gather all the information needed to evaluate each Solution Option against each criterion. In past applications of this framework, six criteria is the average number we have seen used by education decision-makers. But this can vary from 2 to 15. At the extreme, we have also come across a decision that involved an RFP for a new learning management system (LMS) in which over 200 criteria were used to evaluate each LMS being considered. As you can imagine, it took many people several months to evaluate each Solution Option against all these criteria.

Note that if the Solution Options do not vary at all in how well they meet a particular Evaluation Criterion, there is no need to include that criterion in the decision analysis. The point of Evaluation Criteria is to help you distinguish between options. In the LMS example above, all of the LMSs being considered met most of the criteria, so in reality the decision-makers only needed to focus their attention on the few items on which the LMSs varied.
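The "drop criteria that don't discriminate" check is easy to automate. A sketch in Python (criterion names and scores are hypothetical):

```python
# Each criterion maps to the scores of all Solution Options on that criterion.
scores_by_criterion = {
    "meets_privacy_standards": [1, 1, 1, 1],  # identical across options -> drop
    "teacher_support":         [3, 9, 5, 6],  # varies across options -> keep
}

# Keep only criteria on which the options differ; a criterion with a single
# distinct value cannot help distinguish between options.
discriminating = {
    c: vals for c, vals in scores_by_criterion.items() if len(set(vals)) > 1
}
print(list(discriminating))  # ['teacher_support']
```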


VII. Importance Scores

What are Importance Scores and how should they be assigned?

Different stakeholders care to a greater or lesser extent about different things. Allowing stakeholders to assign an Importance Score between 0 and 100 to each Evaluation Criterion provides a way to factor different levels of concern about different issues into a decision about which Solution Options to adopt.

For example, if you are selecting a literacy program to implement at your school and you are considering the following Evaluation Criteria:

  • Impact on student academic performance
  • Support from teachers
  • Availability of materials online

You can assign Importance Scores to each Evaluation Criterion to indicate its relative importance as a consideration in the decision. If you believe that impact on student academic performance is of utmost importance, you might assign it an Importance Score of 100. If support from teachers is about half as important as impact on student academic performance, you would assign it an Importance Score of 50. Finally, whether the materials are available online may be of little importance to you if few of your students have internet access after school, so you could assign it a lower score, such as 10 or 20, indicating that it is one-fifth to one-tenth as important as impact on student academic performance.

Integrating Importance Scores from Multiple Stakeholders

If you invite several stakeholders to contribute Importance Scores, DecisionMaker® will average the scores to provide an overall Importance Score for each Evaluation Criterion. If you are the Project Administrator (Facilitator) for this decision, you will be able to see who contributed which scores. If scores vary widely, it may help to have the stakeholders discuss why different things matter so much (or so little) to them and then revisit the scoring. Averaging scores can seem democratic, but it can also paper over differences that should be understood and addressed to avoid an unnecessarily controversial decision.

Assigning Different Numbers of Votes to Different Stakeholders

If you invite stakeholders to contribute Importance Scores, you can allow the scores of certain stakeholders to count more heavily in the decision analysis by giving different numbers of “votes” to different people. For example, you may want to give a student representative a voice in which criteria are most important, while at the same time making sure that a teacher’s inputs count more heavily in the analysis. Each stakeholder starts with 10 votes but you can redistribute them so that, in this situation, the teacher could be assigned 15 votes while the student gets 5 votes.

Calculating Importance Scores

First, the PA (Project Administrator) assigns votes to stakeholders who are contributing Importance Scores to Evaluation Criteria:

[Weights example table: votes assigned to each stakeholder]

Note that the default number of votes per person is 10, i.e., all stakeholders have an equal number of votes unless the PA changes this. A stakeholder’s vote-weight is the number of votes assigned to them divided by the total number of votes across all stakeholders. Take Stakeholder 2 as an example:

vote-weight (Stakeholder 2) = votes assigned to Stakeholder 2 ÷ total votes across all stakeholders
Then, the PA and all stakeholders should individually assign Importance Scores to each Evaluation Criterion.

[Importance Scores example table: each stakeholder’s scores for each Evaluation Criterion]

The Importance Score weighted by votes for each Evaluation Criterion is a weighted sum of stakeholders' individual Importance Scores for that Evaluation Criterion. The weight for each stakeholder is determined by the proportion of votes they are assigned. Take Evaluation Criterion 4 as an example:

weighted Importance Score (EC4) = Σ over stakeholders [ vote-weight × that stakeholder’s Importance Score for EC4 ]
The final importance weight for each criterion is its vote-weighted Importance Score divided by the sum of all vote-weighted Importance Scores. Take Evaluation Criterion 4 (EC4) as an example:

final importance weight (EC4) = weighted Importance Score (EC4) ÷ Σ over all criteria [ weighted Importance Score ]
Note that discrepancies may occur due to rounding.
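The vote-weighting arithmetic above can be sketched in a few lines of Python. All vote counts and Importance Scores below are hypothetical examples chosen only to make the calculation concrete; they are not DecisionMaker® values:

```python
# Votes assigned by the PA (default is 10 per person).
votes = {"PA": 10, "Stakeholder 1": 10, "Stakeholder 2": 20}  # total = 40

# Each person's Importance Scores (0-100) for two Evaluation Criteria.
importance = {
    "EC1": {"PA": 100, "Stakeholder 1": 80, "Stakeholder 2": 60},
    "EC2": {"PA": 40,  "Stakeholder 1": 70, "Stakeholder 2": 100},
}

# Vote-weight: a stakeholder's votes divided by the total number of votes.
total_votes = sum(votes.values())
vote_weight = {s: v / total_votes for s, v in votes.items()}  # Stakeholder 2: 20/40 = 0.5

# Vote-weighted Importance Score for each Evaluation Criterion.
weighted = {
    ec: sum(vote_weight[s] * score for s, score in by_person.items())
    for ec, by_person in importance.items()
}  # EC1: 0.25*100 + 0.25*80 + 0.5*60 = 75.0

# Final importance weight: each criterion's weighted score divided by the
# sum of weighted scores across all criteria (the weights sum to 1).
total = sum(weighted.values())
final_weight = {ec: w / total for ec, w in weighted.items()}
```

Note how Stakeholder 2's doubled vote count pulls each criterion's weighted score toward that stakeholder's preferences.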



VIII. Evidence-Gathering to Evaluate Options

Resources to help identify existing evidence or collect new evidence to evaluate how well each Solution Option meets your Evaluation Criteria

Identifying Evaluation Measures and Data to Collect

To assess how well each Solution Option you are considering meets your stated Evaluation Criteria, you will need common measures to evaluate the options against each criterion. For example, if the criterion relates to improving performance on a standardized test, the evaluation measure might be “score on standardized test” or “gain in score on standardized test since last Academic Year.”

DecisionMaker® suggests ways to evaluate Solution Options against many Evaluation Criteria here » but, if you need additional ideas, the resources listed below may be helpful in finding existing data or collecting your own data.

Some examples of existing information you can gather to evaluate Solution Options include:

  • Efficacy studies or process evaluations published in peer-reviewed journals or other publications
  • Information from the program vendor
  • Data collected on outcomes in places where the Solution Options have been implemented previously, either in your own context or in other contexts

You may also choose to collect your own information by:

  • Piloting Solution Options and surveying teachers or students for feedback
  • Collecting data on outcomes
  • Holding focus groups with stakeholders to understand buy-in

If you don’t have much time before a decision must be made, you may need to rely on data from other contexts such as other districts or universities and use this to make an informed estimate of what results you could expect in your own context. Your confidence in being able to replicate the results from another location in your own location might be affected by factors such as the similarity of conditions and populations served, or whether the Solution Option was used in multiple locations and performed similarly across all of them. The Relevance and Credibility section below provides more guidance about this.

If no relevant data are available or can be collected within your timeframe for making the decision, you could elicit professional judgments from experienced staff members about how well each Solution Option will perform against each criterion. For example, you could ask Counselors to review college planning software intended for high schoolers and estimate the percentage increase in high schoolers in your district who would apply to college if you implemented each of several alternative software options in your high schools.

Relevance and Credibility

In some cases, decision-makers may want to use available evidence to evaluate Solution Options against Evaluation Criteria. However, decision-makers may struggle to determine whether the available evidence is relevant or even appropriate to use in their context.

Additionally, in the particular case of assessing evidence of effectiveness, prior studies may exist, but these studies are likely to vary in quality or credibility. Decision-makers may want to consider the results of a less rigorous study to assess whether a Solution Option is effective, but may want to account for the fact that the study could be over- or under-estimating the size of the impact.

To account for these issues, DecisionMaker® provides two optional indices to evaluate existing evidence: a Relevance Index and a Credibility Index.

The Relevance Index helps decision-makers determine how well a study’s findings apply to their own setting by assessing the extent to which various characteristics of the study population and context are similar to the decision-maker’s own context. The Credibility Index helps decision-makers assess how seriously to take the findings of a study by scoring it on the quality of study design. Click below to download the Relevance and Credibility Indices in your preferred format.

The RI and CI template is also available as an Excel workbook that calculates the relevance and credibility indices automatically. For a more detailed explanation of the calculations, technical guidance is available. Click below to download the Excel template.

We have created a four-part video tutorial to help practitioners use the Relevance and Credibility Indices. See the Relevance and Credibility Index Tutorials page in DecisionMaker® to learn more.

Assigning Values to Solution Options for Each Evaluation Criterion

Once you have identified the measures you will use to assess each Solution Option against each Evaluation Criterion and have the data you need to do so in hand, you will need to assign values on each measure for each Solution Option and enter these values into DecisionMaker®.

Data Must be in Numerical Form

Data must be collected in a numerical form in order for DecisionMaker® to calculate a utility value, e.g., points on a 0-10 scale, percentage points, hours per week, days per year, or rubric scores. If you have qualitative data, for example, rubric ratings of “Meets grammar standards to the highest expected level,” “Meets most grammar standards at an acceptable level,” etc., you will need to assign a numerical score to each rubric rating.
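Converting qualitative ratings to numbers can be as simple as a lookup table. A sketch in Python (the rating labels beyond those quoted above, and the point values, are illustrative, not an official rubric):

```python
# Hypothetical mapping from rubric labels to numerical scores.
rubric_to_score = {
    "Meets grammar standards to the highest expected level": 3,
    "Meets most grammar standards at an acceptable level": 2,
    "Meets some grammar standards": 1,
    "Does not meet grammar standards": 0,
}

# Qualitative ratings collected for two Solution Options.
ratings = [
    "Meets most grammar standards at an acceptable level",
    "Meets grammar standards to the highest expected level",
]
numeric = [rubric_to_score[r] for r in ratings]
print(numeric)  # [2, 3]
```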

Likely Lowest and Likely Highest Scores

DecisionMaker® uses your expectations about likely best and worst case scenarios with respect to each evaluation measure to establish the high and low bounds of utility. The best case scenario you provide is set at 10 out of 10 for utility, assuming that you will be totally satisfied if a Solution Option performs at this level. Similarly, the worst case scenario is set at 0 out of 10 for utility, assuming you will be very dissatisfied with a Solution Option that performs at the lowest likely level. For example, if the evaluation measure is a test of reading comprehension scored between 0 and 40, you would enter the lowest score you think a student in the decision context might feasibly earn, and the highest score. If the population you are trying to serve is struggling readers, you might enter a likely lowest score of 10 and a likely highest score of 25. If they are your advanced readers, the range might be 30-40. These ranges might be based on your past experiences reviewing data from this test, or based on benchmarks provided by the test’s developer/vendor.

Setting the Direction of Your Preferences

For some evaluation measures such as test scores, graduation rate, or time on task, you are likely to prefer higher values. For others, such as days of absence, dropout rate, or suspensions, you are likely to prefer lower values. DecisionMaker® asks you to indicate whether higher values are better for each Evaluation Criterion in order to establish whether utility increases for you and your stakeholders as the values go up or down.
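Putting the last two ideas together, the likely lowest/highest scores and the direction of preference determine how a raw measure maps onto 0-10 utility. The sketch below assumes a simple linear mapping; DecisionMaker®'s exact formula is not stated here:

```python
def utility(value, likely_low, likely_high, higher_is_better=True):
    """Rescale a raw measure to 0-10 utility between the likely bounds."""
    value = max(likely_low, min(likely_high, value))  # clamp to expected range
    fraction = (value - likely_low) / (likely_high - likely_low)
    if not higher_is_better:        # e.g. days of absence: lower is better
        fraction = 1 - fraction
    return 10 * fraction

# Reading-comprehension example from above: struggling readers, range 10-25.
print(utility(25, 10, 25))    # 10.0  (likely best case)
print(utility(10, 10, 25))    # 0.0   (likely worst case)
print(utility(17.5, 10, 25))  # 5.0   (midway)

# A "lower is better" measure, e.g. 0-10 expected days of absence.
print(utility(3, 0, 10, higher_is_better=False))  # fewer absences -> higher utility
```

Choosing realistic bounds matters: if the likely range is too wide, all options bunch together near the middle of the utility scale and differences between them are understated.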

Resources to Help You Evaluate Solution Options

The free, online resources listed below can be used to help find existing evidence on educational programs, strategies and tools, or to help you produce new evidence to evaluate your Solution Options. Using Evidence to Strengthen Education Investments » is a useful introduction to thinking about evaluating educational programs and strategies.

+ Resources to help you conduct your own evaluations of Solution Options
  • Best Practices in Survey Design from Qualtrics » Some tips on how to design questions for your own surveys.
  • California Evidence-Based Clearinghouse » Provides descriptions and information on research evidence on child welfare programs, and guidance on how to choose and implement programs.
  • Digital Promise’s Ed Tech Pilot Framework » Provides an 8-step framework for district leaders to plan and conduct pilots on educational technology products, along with research-based tools and resources, and tips collected from 14 districts.
  • Mathematica’s e2i Coach » Facilitates the design and analysis of evaluations of educational programs/strategies and the interpretation of the results using data on the group using the program/strategy and a comparison group.
  • Practical Guide on Designing and Conducting Impact Studies in Education » A guide from American Institutes for Research (AIR) about designing and conducting impact studies in education. This guide can also help research users assess the quality of research and the credibility of the evidence it produces.
  • RCT-Yes » Facilitates the estimation and reporting of program effects using randomized controlled trials or other evaluation designs with comparison groups. Note that users need to download and install the software as well as R or Stata.
  • Regional Education Laboratories » Regional Education Laboratories (RELs) can serve as a “thought partner” for evaluations of educational programs or initiatives. They also offer training, coaching, and technical support (TCTS) for research use “in the form of in-person or virtual consultation or training on research design, data collection or analysis, or approaches for selecting or adapting research-based interventions to new contexts.”

+ Resources to help you find existing evidence on educational programs, strategies and tools
  • Best Evidence Encyclopedia (BEE) » Provides summaries of scientific reviews produced by many authors and organizations and links to full texts of each review.
  • Campbell Collaboration » Provides systematic reviews of research evidence.
  • Cochrane Library » Provides systematic reviews of the effectiveness of health-related interventions and diagnostic tests.
  • Digital Promise Research Map » Provides links to reviews and articles of educational research on 12 teaching and learning topics, and organizes them in Network View and Chord View to show connections among them.
  • ERIC » Education Resources Information Center. An online library of education research and information, sponsored by the Institute of Education Sciences (IES) of the U.S. Department of Education.
  • ERIC for Policymakers – A Gateway to Free Resources » Webinar recording of an April 16, 2019 presentation by ERIC staff.
  • Evidence for ESSA » Free database of math and reading programs that align to the ESSA evidence standards.
  • Google Scholar » You can search by key word for scholarly publications on any topic including peer-reviewed articles, evaluation reports, and white papers. These have not been vetted by Google, so you will need to carefully assess the sources and credibility of any documents you find here. Note that peer-reviewed publications often prefer that authors avoid naming specific programs and products. Also, some peer-reviewed journals may be behind a paywall. Sometimes you can find a free, pre-print version of the same paper on the internet by copying and pasting the title of the paper into your search engine.
  • Regional Education Laboratories » School districts or State Education Agencies can contact their REL for help with gathering evidence on educational strategies and programs. Upon request, RELs will perform reference searches and informal reviews of the existing research on interventions, and/or of specific studies against What Works Clearinghouse standards.
  • What Works Clearinghouse » Provides reviews of existing research on different programs, products, practices, and policies in education, and summaries of findings.
← Back


IX. Costs

Resources to help identify the resource requirements and estimate the costs for each Solution Option

Ingredients Method

We use the ingredients method to estimate the costs of educational programs, strategies or tools (see Levin, 1975; Levin, 1983; Levin & McEwan, 2001; or Levin, McEwan, Belfield, Bowden, & Shand, 2018, for full details on this method). This method defines costs as the resource requirements for implementing a program, strategy or tool (i.e., a Solution Option), regardless of how they are budgeted or financed. These may include personnel time such as teacher and school administrator time, training, facilities, materials, technology and other equipment, services, and other resources.
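The logic of the ingredients method can be sketched in a few lines of code. In this illustration (all ingredient names and dollar amounts below are hypothetical, not drawn from any real program), each resource a Solution Option requires is enumerated and priced, regardless of whether it shows up as a new line in the budget:

```python
# Ingredients method sketch: enumerate every resource a Solution Option
# requires and price it, whether or not it appears as a new budget line.
# All ingredient names and dollar amounts below are hypothetical.

def total_cost(ingredients):
    """Sum the priced ingredients for one Solution Option."""
    return sum(ingredients.values())

sel_program = {
    "teacher time (hours x loaded hourly rate)": 52_000,
    "administrator time": 9_500,
    "initial training": 4_200,
    "curriculum materials": 1_300,
    "facilities (prorated classroom use)": 2_000,
}

print(total_cost(sel_program))  # total implementation cost in dollars
```

Note that personnel time dominates the hypothetical total, which is typical of the examples discussed below: the largest "ingredients" are often ones that never appear as new expenditures.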

Calculating Costs

Expenditures Versus Full Costs of Implementation

You may instead choose to enumerate costs by counting only new expenditures. However, such an analysis may neglect important resource demands such as the reallocation of teacher and administrator time.

For example, expenditures for a new program might include items such as the initial costs of training staff, and any curriculum or additional materials you need to purchase in order to implement the program. However, to understand the full costs of implementing the program, you should consider how teacher and administrator time is being used now, and how that will change when implementing the new program. If one program is very time-consuming for teachers to implement, and another program is not, that may be an important consideration for you, in addition to expenditures. However, if the primary concerns for key decision-makers are the additional start-up costs and new demands on the school/district budget, then considering expenditures only may be useful.

*Note that if you are only focused on expenditures, then you will be producing expenditure-utility ratios rather than cost-utility ratios in the analysis.


Below is a table comparing implementation costs vs. expenditures for a school that is considering adopting a social-emotional learning (SEL) program. The school considered two SEL curricula, RULER and Character First. A third Solution Option considered was maintaining the status quo, which involved students attending advisory sessions each day.

As you can see from the table, maintaining the status quo required no new expenditures, while Character First required over $1,300 for training and curriculum materials, and RULER required almost $11,000, primarily because the training is more intensive and involves travel.

While the expenditure amounts for the two SEL programs were not large, the amount of personnel time involved in implementing an SEL curriculum is quite considerable, especially in the first year. As shown in the table, this amounts to over $100,000 in personnel time (i.e., salary and fringe benefits for the number of hours collectively spent by personnel on this activity). SEL curricula tend to encourage developing a core implementation team and getting the entire community involved in understanding and practicing the principles taught by the curriculum. If personnel are spending time on these SEL activities, they must be spending less time on other school activities (or working longer hours!). As a result, considering the personnel time involved in implementing each program may be important for this particular decision. Note, however, that the status quo of running advisory sessions is also using teacher and administrator time, albeit in different ways. Costs of personnel time for the first year of implementing RULER are almost $10,000 more than for implementing advisory, but the school would actually save $7,400 in personnel time in the first year of implementing Character First.

When enumerating costs associated with personnel time, you should always consider how resource requirements will change relative to business as usual, as the status quo is also utilizing important resources in another way. You can see that implementing advisory already demands a lot of personnel time. RULER requires even more time, at least in the first year because it involves a dedicated implementation team and intensive initial training, while Character First uses fewer personnel resources, partly because the availability of sample lesson plans reduces the amount of time teachers need to prepare their own lessons.
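The incremental-cost logic described above can be sketched as follows. The hours and loaded hourly rate are hypothetical (chosen to roughly mirror the figures quoted from the table); the key point is that each option's personnel cost is compared against the status quo, not against zero:

```python
# Incremental personnel-cost sketch. Hours and the loaded hourly rate
# (salary plus fringe benefits) are hypothetical. The point is to cost
# each option *relative to* the status quo (advisory sessions), which
# also consumes staff time.

HOURLY_RATE = 50  # dollars per hour, salary plus fringe (hypothetical)

personnel_hours = {
    "status quo (advisory)": 2000,
    "RULER": 2196,
    "Character First": 1852,
}

baseline = personnel_hours["status quo (advisory)"] * HOURLY_RATE
for option, hours in personnel_hours.items():
    delta = hours * HOURLY_RATE - baseline
    print(f"{option}: {'+' if delta >= 0 else ''}{delta} vs status quo")
```

With these hypothetical inputs, RULER costs $9,800 more in personnel time than advisory and Character First saves $7,400, echoing the pattern described above.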

Cost Implementation Table

+ Tools and Guidance on Cost Estimation Methods
  • E$timator » A free, online tool to help estimate the costs and cost-effectiveness of educational or other social programs.
  • Cost Analysis in Practice (CAP) Project » Free tools, guidance and one-on-one technical assistance (funded by Institute of Education Sciences, U.S. Department of Education) on conducting cost analysis of educational programs.

+ Guidance on cost estimation methods

+ Reports and articles illustrating cost analyses and cost-effectiveness analyses of educational programs by researchers who have rigorously estimated costs

← Back

Make a Decision

X. Make a Decision

How to think about the results and apply them to decision-making

Using the information you have entered about the Solution Options you are considering, DecisionMaker® provides summary metrics that you can use to compare the Solution Options and inform your decision about which, if any, of the solutions to pursue.

Utility Values

These values should reflect how well each Solution Option meets your stakeholders’ criteria and, consequently, how satisfied they will be with this Solution Option. A Solution Option with an overall utility value of 10 would, in theory, meet your stakeholders’ criteria perfectly and a Solution Option scoring 0 would not meet their Evaluation Criteria at all. Clearly, choosing options with higher overall utility values should make your stakeholders happier.

If you want to understand more about how utility is calculated, click here ».

If none of the Solution Options scores very well on overall utility, you may need to think about whether there is time to start over and consider a new set of options. Alternatively, now that you have a good idea about what your stakeholders care about, it may be possible to modify one of the current options to make it more acceptable to stakeholders.


Importance Weights

Note that these overall utility values incorporate stakeholders’ Importance Scores which indicate how much they care about each Evaluation Criterion used in the analysis. These scores are rescaled into the Importance Weights shown on the final results page. A bigger weight means stakeholders care more about the issue. It is worth checking that the Solution Option(s) you decide to adopt score(s) well on the criteria that have the highest weights.

Cost-Utility Ratios

DecisionMaker® combines the overall utility value for each Solution Option with the cost estimate you provided to show you a return on investment metric called a cost-utility ratio. This ratio is the cost per unit of stakeholder satisfaction (costs divided by utility value). A low cost-utility ratio indicates high return on investment because it means a Solution Option costs little per unit of stakeholder satisfaction. While, in theory, you should choose the Solution Option with the lowest cost-utility ratio, there may be reasons not to. One reason would be if the utility value is low, as explained above: paying very little does not help if no one is going to be happy with the Solution Option chosen. Another reason could be that several options fit within your budget, so you may decide to choose the one with the highest utility value knowing that you can afford it and make your stakeholders happier. Satisfied stakeholders are more likely to implement the chosen solution with care, which in turn means it is more likely to be successful in achieving your original goals.
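The ratio itself is simple division. In the sketch below, the utility values come from the worked example later in this guide, while the dollar costs are hypothetical, invented purely to illustrate the trade-off just described:

```python
# Cost-utility ratio sketch: cost per unit of stakeholder satisfaction.
# A lower ratio means a higher return on investment. Utility values are
# from the worked example later in this guide; the costs are hypothetical.

def cost_utility_ratio(cost, utility):
    return cost / utility

options = {
    "Program A": {"cost": 25_000, "utility": 8.36},
    "Program B": {"cost": 12_000, "utility": 5.73},
}

for name, d in options.items():
    ratio = cost_utility_ratio(d["cost"], d["utility"])
    print(f"{name}: ${ratio:,.0f} per utility point")
```

In this hypothetical, Program B has the lower ratio (about $2,094 per utility point versus about $2,990) despite its lower utility value, illustrating why the lowest ratio is not automatically the right choice.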

We recommend you share the result of the analysis with stakeholders before making a final decision. For example, you can share some or all of the Summary Report which you can download by clicking the “Summary Report” button under the relevant decision flowchart. If stakeholders express dissatisfaction with the Solution Options that show the best results, ask them to think about whether they have other important considerations that they did not initially express. If they do, this may require identifying an additional evaluation criterion and re-evaluating the Solution Options against this new criterion to see whether the overall results change. If the issues they care about were already adequately captured in the Evaluation Criteria, it is likely that the options they favored did not perform well on some evaluation measures. Reviewing the results of the evaluation with them may help them understand the reasons some Solution Options did not fare well in the decision analysis. As with any evaluation, the numerical results should be helpful in promoting discussions based on evidence, but should not make the decision for you.

"What-If?" Analysis

It is also helpful to see how the results of your analysis might change under different possible circumstances. What if a reading program does not produce test score results as high as you expected? What if the costs of implementing an option double next year? What if you conducted the analysis separately with teachers and administrators? Would the same Solution Option come out on top in each case, or would different ones rise to the top? If the same Solution Option performs best under different scenarios, you can be quite confident it is a good choice. If not, think carefully about how likely it is that circumstances will change in a way that could lead to less-than-desirable outcomes.
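A what-if check can be as simple as re-running the ranking under perturbed inputs. The sketch below (using the same hypothetical costs and the utility values from the worked example later in this guide) asks whether the best option by cost-utility ratio changes if one option's costs double:

```python
# "What-if?" sketch: rank options by cost-utility ratio under the
# baseline scenario and under a doubled-cost scenario for one option.
# Costs are hypothetical; utilities are from the worked example.

options = {
    "Program A": {"cost": 25_000, "utility": 8.36},
    "Program B": {"cost": 12_000, "utility": 5.73},
}

def best_option(opts):
    """Return the option name with the lowest cost-utility ratio."""
    return min(opts, key=lambda n: opts[n]["cost"] / opts[n]["utility"])

print("Baseline winner:", best_option(options))

# What if Program B's costs double next year?
scenario = {name: dict(d) for name, d in options.items()}
scenario["Program B"]["cost"] *= 2
print("Doubled-cost winner:", best_option(scenario))
```

Here the ranking flips under the doubled-cost scenario, which is exactly the signal to think harder about how likely that scenario is before committing.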

Evaluating Whether You Made a Good Decision

At some point after you have implemented one or more Solution Options, revisit the analysis to see whether, in practice, each option has performed as well as expected when it was first selected. You can update the data in the evaluation measures and cost tables to see how the options rank now with up-to-date performance data. This will help you decide whether to continue with the same Solution Options going forward, or whether you might need to change focus. It is also possible that stakeholders change over time or that new factors may need to be considered in the decision, for example, if a new regulation is imposed or if the composition of the student population changes. The analysis can be modified iteratively to ensure that you are continuously re-evaluating the options in light of changing conditions.

← Back

Utility Values

Resources to help you identify existing evidence or collect new evidence to evaluate how well each Solution Option meets your Evaluation Criteria

In DecisionMaker®, utility is a measure of stakeholder satisfaction or “usefulness” reported on a scale of 0-10. A utility value of 0 would indicate that the Solution Option provides no stakeholder satisfaction while a value of 10 would indicate the perfect solution.

Utility is based on subjective and/or objective valuations of each Solution Option. Users indicate how well each Solution Option performs against each Evaluation Criterion and also how important each of these Evaluation Criteria is relative to the others. We multiply criterion-level utility values derived from the performance rating/score earned by each Solution Option by the importance weights that stakeholders have assigned to the Evaluation Criteria, and sum these to obtain overall utility values for each Solution Option. We assume that utility increases or decreases in a straight line between the highest and lowest likely values for each evaluation measure used.

The term “utility” was introduced in 1738 by Daniel Bernoulli, a Swiss mathematician and physicist, to refer to the total satisfaction received by a consumer from consuming a good or service. Philosophers such as Jeremy Bentham defined utility as the “property in any object, whereby it tends to produce benefit, advantage, pleasure, good, or happiness” (1789, I.4.). Utility is hard to measure in practice but economists have developed sophisticated ways to calculate expected utility using a combination of decision-maker preferences and probabilities of outcomes. Henry Levin, an economist of education, illustrated practical applications of utility analysis in education starting in 1980.

How the Utility Values are Calculated

DecisionMaker® uses the data you entered in the Evaluation Measures table to calculate utility. The overall utility value earned by a Solution Option is the sum of the utility scores it earns on each of the Evaluation Criteria (Criterion-level unweighted utility values) multiplied by the importance weights assigned by stakeholders to the criteria.


Suppose you, as the principal of Everglades Elementary School, need to identify an educational strategy to help K-3 students improve reading comprehension. You are deliberating between two reading programs, A and B. You invite your school’s Reading Specialists and all the K-3 teachers to participate in the decision as stakeholders. As a team, they develop a list of three Evaluation Criteria and assign Importance Scores to each of them. Your assistant principal (AP) helps you identify an evaluation method for each Evaluation Criterion and collect data to assess how well Program A and Program B perform against it (see table below).

Program table

Criterion-Level Unweighted Utility Value

Each measure you used in your evaluation is rescaled to convert your results to a common utility scale with a minimum of 0 and a maximum of 10. The likely lowest score and the likely highest score you entered for each measure are used to set the extremes of the scale, and a straight line connects the two points. This assumes that utility changes in direct proportion to the changes in the evaluation measure.

When the rating/score on an evaluation measure is positively associated with the utility values (i.e., higher scores are better), the likely lowest score you entered is assumed to provide 0 utility and the likely highest score you entered is assumed to provide a utility value of 10. The criterion-level unweighted utility value for a Solution Option is:

Utility value = 10 * (score - likely lowest score) / (likely highest score - likely lowest score)

When the rating/score on an evaluation measure is negatively associated with the utility values (i.e., lower scores are better), the likely lowest score you entered is assumed to provide a utility value of 10 while the likely highest score you entered is now assumed to provide 0 utility. The criterion-level unweighted utility value for a Solution Option is:

Utility value = 10 * (likely highest score - score) / (likely highest score - likely lowest score)

Take the criterion-level unweighted utility value of Program A for “evidence of effectiveness in improving reading comprehension” as an example. The evaluation measure is the change in % of students who perform below grade level in reading. Clearly, you want fewer students to be below grade level in reading, so the evaluation measure in this case is negatively associated with utility.

The likely lowest score is -20 (i.e., a 20 percentage point reduction in the percentage of students who perform below grade level).

The likely highest score is 10 (i.e., a 10 percentage point increase in the percentage of students who perform below grade level). Lower scores are better, so the likely lowest score of -20 is assigned a utility value of 10 and the likely highest score of 10 is assigned a utility value of 0.

A straight line is used to connect the two points, under the assumption that utility changes in direct proportion to the changes in the evaluation measure (see figure below). The score/rating of -10 for Program A is transformed into a utility value of 6.7 using the following formula:

Utility value = 10 * (10 - (-10)) / (10 - (-20)) = 6.7
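Expressed in code, the rescaling works as follows (a minimal sketch using the likely lowest/highest scores and Program A's score of -10 from the example; the function name is ours, not DecisionMaker®'s):

```python
# Rescale an evaluation-measure score to a 0-10 utility value.
# When higher scores are better, utility rises linearly from 0 at the
# likely lowest score to 10 at the likely highest score; when lower
# scores are better, the assignment is reversed.

def unweighted_utility(score, lowest, highest, lower_is_better=False):
    if lower_is_better:
        return 10 * (highest - score) / (highest - lowest)
    return 10 * (score - lowest) / (highest - lowest)

# Program A: change in % of students below grade level is -10;
# likely lowest = -20, likely highest = 10; lower scores are better.
print(round(unweighted_utility(-10, -20, 10, lower_is_better=True), 1))  # 6.7
```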


Overall Utility Value

The overall utility value is the sum of the criterion-level utility values multiplied by the importance weights. The overall utility values for Program A and Program B are calculated using the following formulas:

  • Overall utility value for Program A = (6.7 * 0.17) + (9 * 0.33) + (8.5 * 0.50) = 8.36
  • Overall utility value for Program B = (5 * 0.17) + (6 * 0.33) + (5.8 * 0.50) = 5.73
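The same weighted sum can be sketched in code, using the criterion-level utility values and importance weights from the example above:

```python
# Overall utility = sum of criterion-level utility values, each
# multiplied by its importance weight. Values and weights are taken
# from the Program A / Program B example above.

weights = [0.17, 0.33, 0.50]

def overall_utility(criterion_utilities, weights):
    return sum(u * w for u, w in zip(criterion_utilities, weights))

program_a = overall_utility([6.7, 9.0, 8.5], weights)
program_b = overall_utility([5.0, 6.0, 5.8], weights)

print(round(program_a, 2))  # 8.36
print(round(program_b, 2))  # 5.73
```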

Cost-effectiveness: A Primer » by Henry M. Levin (1983) provides examples of how utility can be derived for educational programs or strategies, and the results combined with cost estimates to provide cost-utility ratios.

← Back

General resources

XI. General Resources

Miscellaneous other resources

The education agencies who helped us develop DecisionMaker® named a variety of resources they use to gather information about educational programs, strategies and tools. These include the following:


  • CORE Districts » (California-specific)
  • Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement by John Hattie (2008)
  • Newspapers and Social Media

← Back