Angoff Method
The Modified Angoff Method
The Modified Angoff method is the most basic form of criterion-based standard setting, perhaps because of its relatively simple process for determining cut-off points. Judges review each test item, and the passing score is computed from estimates of the probability that a borderline candidate would answer each item correctly. It is a straightforward procedure and requires only simple calculations.
After discussing and reaching consensus on the characteristics of a borderline candidate, each judge independently estimates, for each item, the probability that a borderline candidate will answer it correctly. The judges’ estimates for an item are averaged to determine the probability of a correct response for that item, and the item averages across the exam form are then averaged to obtain the passing point. The benefits of the Angoff method are that it has held up in court, is relatively straightforward, and does not require exam data.
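As a rough sketch of the arithmetic (the ratings below are hypothetical placeholder values, not data from any actual panel), the pass point is simply the mean of the per-item means of the judges’ estimates:

```python
# Sketch of the Modified Angoff calculation with hypothetical ratings.
# Each inner list holds one item's ratings (0-100) from every judge.
ratings_by_item = [
    [70, 75, 80],   # item 1: three judges' estimates (hypothetical)
    [55, 60, 65],   # item 2
    [85, 90, 80],   # item 3
]

# Average the judges' estimates for each item...
item_means = [sum(r) / len(r) for r in ratings_by_item]

# ...then average the item means to obtain the passing point.
pass_point = sum(item_means) / len(item_means)

print(item_means)   # [75.0, 60.0, 85.0]
print(pass_point)   # 73.33...
```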
Before any judges assign ratings, it is important to discuss the concept of minimum competence. Because the difficulty judgments in this method rest on “borderline” candidates, the committee must identify the minimum level of skill required to be considered competent on the job. Borderline candidates are those who would just barely pass the examination.
This characterization of minimally competent candidates then guides each rater’s judgment. For each item on an exam form, judges ask themselves a simple question: out of one hundred minimally competent candidates, how many would answer the item correctly? It is important to be realistic here; some judges have an inflated view of how many candidates should answer a particular item correctly.
How to Assign Angoff Ratings
- Read the question and answer it using your own knowledge and experience.
- Check your answer and evaluate whether it is correct or incorrect.
- Think about the logic that you used to answer the question.
- Ask whether a minimally competent candidate would employ the same logic.
- Consider if the wording or structure of the question provides clues to candidates who are not knowledgeable.
- Estimate how many people considered minimally competent would get the question correct.
Raters’ scores on the typical scale range from 0 to 100. The higher the number, the more candidates they estimate would answer the item correctly; the lower the number, the fewer candidates they expect to get it right, and thus the more difficult the item.
We suggest that raters restrict the scale to the range of 25–90. There are several reasons for this. First, if the item uses a four-option multiple-choice format, the most common format, there is a 25 percent chance that a candidate will choose the correct answer simply by guessing. Second, if more than 90 percent of candidates are expected to answer the item correctly, the item has little use on an exam because it will not differentiate between candidates who know the information and those who do not. So, if your raters suggest a number higher than 90 for an item, that item should be replaced with a more meaningful one.
We have also listed the suggested ranges for items with three to six response options (a brief sketch for checking ratings against these ranges follows the list):
- 33 – 90 (three-option items)
- 20 – 90 (five-option items)
- 15 – 90 (six-option items)
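As a small illustration (the function name and the example calls below are our own, not part of any standard tool), these ranges can be used to flag ratings that fall outside the suggested bounds for a given item format:

```python
# Suggested rating ranges by number of response options (from the text above).
SUGGESTED_RANGES = {
    3: (33, 90),
    4: (25, 90),
    5: (20, 90),
    6: (15, 90),
}

def is_rating_in_range(rating: int, num_options: int) -> bool:
    """Return True if an Angoff rating falls inside the suggested range.

    The lower bound roughly tracks the chance of guessing correctly
    (about 100 / num_options); the upper bound of 90 screens out items
    that would not differentiate between candidates.
    """
    low, high = SUGGESTED_RANGES[num_options]
    return low <= rating <= high

print(is_rating_in_range(95, 4))  # False: the item may be too easy to be useful
print(is_rating_in_range(60, 4))  # True
```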
Here is a simple but effective visual aid showing how the procedure works. Five judges (column headers) each rate ten items on a test form (rows). In the far-right column, the mean scores for the individual items are tabulated. Then, in the far right-hand corner cell, the mean of those item means is determined to be 75.
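For readers viewing this without the graphic, the same layout can be reproduced in a few lines; the ratings below are hypothetical placeholders used only to show the structure (judges across the columns, items down the rows, item means in the final column, and the mean of those means at the end):

```python
# Hypothetical ratings: 10 items (rows) x 5 judges (columns).
ratings = [
    [75, 80, 70, 75, 80],
    [60, 65, 70, 60, 65],
    [85, 90, 80, 85, 90],
    [70, 75, 70, 65, 70],
    [80, 85, 80, 75, 80],
    [65, 60, 70, 65, 60],
    [90, 85, 90, 85, 90],
    [75, 70, 75, 80, 75],
    [55, 60, 55, 60, 55],
    [80, 85, 80, 85, 80],
]

# Mean of each row (item), then the mean of those means (the pass point).
item_means = [sum(row) / len(row) for row in ratings]
overall_mean = sum(item_means) / len(item_means)

for i, (row, mean) in enumerate(zip(ratings, item_means), start=1):
    print(f"Item {i:2d}: {row}  mean = {mean:.1f}")
print(f"Pass point (mean of item means) = {overall_mean:.1f}")
```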
Benefits of the Modified Angoff Method
- It yields appropriate classification information.
- It is sensitive to student performance.
- It is sensitive to instruction and training.
- It is judged in the measurement literature to be statistically sound.
- It takes measurement error into account.
- It is generally easy to explain to laypeople.
- It is generally credible to laypeople.
- It can be applied to many item formats.
Limitations of the Modified Angoff Method
The disadvantages of the Angoff method involve the panel of subject matter experts (SMEs) applying it. If the SMEs are not familiar with the statistics involved, error can be introduced, though this is true of any method. Further, because the method begins with ratings of individual items, SMEs may become focused on those individual ratings rather than on candidates’ overall performance on the exam.
The Traditional Angoff Method
The “traditional” version of the Angoff method is sometimes referred to as the Yes/No method.
Instead of providing a rating from 0 to 100 for each item, judges review each item and answer the question: “Would a borderline candidate be able to answer this item correctly?” Items a borderline candidate should answer correctly are assigned 1 (yes), and items they should not be able to answer correctly are assigned 0 (no). The passing point is then calculated by averaging the scores. Some regard the traditional Angoff as much easier than estimating the proportion correct, as in the Modified Angoff.
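A minimal sketch of the Yes/No calculation, again with hypothetical judgments, shows how the 0/1 ratings roll up into a pass point:

```python
# Hypothetical Yes/No ratings: 1 = a borderline candidate should answer the
# item correctly, 0 = they should not. One inner list per item, one entry per judge.
yes_no_by_item = [
    [1, 1, 0],   # item 1
    [1, 1, 1],   # item 2
    [0, 1, 0],   # item 3
    [1, 0, 1],   # item 4
]

# Average across judges for each item, then across items, just as in the
# Modified Angoff -- the ratings are simply restricted to 0 or 1.
item_means = [sum(r) / len(r) for r in yes_no_by_item]
pass_point_proportion = sum(item_means) / len(item_means)

print(pass_point_proportion)   # 0.666..., i.e. about 67 percent of the items
print(round(pass_point_proportion * len(yes_no_by_item)))   # raw-score cut of about 3 items
```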
How does the Modified Angoff differ from the traditional Angoff method?
Here are the characteristics of the traditional Angoff method, also known as the “Yes/No” method:
- Does not rate the probability that a borderline candidate will select the right answer
- Requests that raters answer Yes (1) or No (0) depending on whether a borderline candidate would answer the item correctly
- Passing point is calculated by averaging scores
- Regarded as slightly easier to estimate than the Modified Angoff