Certification Design · 9 min read

How to Set a Defensible Passing Score for Your Certification Exam (Without a PhD in Psychometrics)

The cut score is one of the most consequential decisions in certification design — and one of the most commonly set by gut feel. Here's how to use a structured, documented process that produces a defensible passing standard.


Every certification exam has a passing score — a threshold below which a candidate is deemed not yet competent. The question is how that threshold was determined. 'We felt 70% was reasonable' is the most common answer in independent certification programs. It is also one of the weakest possible defenses of a cut score, particularly in a dispute.

Setting a defensible passing score — one derived from a structured, documented process grounded in professional judgment — is not as technically demanding as most program operators assume. It requires a clear process, a panel of qualified subject matter experts, and careful documentation. It does not require a psychometrician.

Why the Cut Score Matters

The cut score is the line between 'certified' and 'not yet certified.' Set it too low, and you certify people who have not met the standard — undermining the credential's credibility and potentially exposing the certification body to liability. Set it too high, and you create an unnecessarily exclusionary barrier that reduces your certified community without improving its quality.

The cut score is also the primary target in exam appeals. When a candidate challenges a fail result, the first questions are: what was the passing standard, how was it set, and can you demonstrate it reflects a genuine competency threshold rather than an arbitrary number? A documented standard-setting process answers these questions directly.

The Modified Angoff Method

The Modified Angoff method is the most widely used standard-setting approach in professional certification. It is comprehensible to non-specialists, produces defensible results, and can be conducted by a small SME panel without expensive external consultants.

  1. Assemble a panel of 6–12 subject matter experts who are competent practitioners in the certified domain. They should represent the diversity of the field and should not be the exam developers.
  2. Define the 'minimally competent candidate': a practitioner who has just enough knowledge and skill to meet the competency standard — not a beginner, but not an expert. Ask panelists to hold this imagined candidate in mind throughout the exercise.
  3. For each exam item, each panelist independently estimates what percentage of minimally competent candidates would answer that item correctly.
  4. Collect all estimates, share them with the panel, and allow discussion. Panelists who are outliers are invited to explain their reasoning.
  5. After discussion, each panelist provides a final estimate for each item. Average the estimates across all panelists and all items to produce the recommended passing score (a minimal arithmetic sketch follows this list).
  6. Apply a policy adjustment if appropriate: the panel's output is a starting point. The certification body may adjust slightly based on program goals, candidate demographics, and standard error considerations.
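To make the arithmetic in steps 5 and 6 concrete, here is a minimal sketch in Python, assuming each panelist's final-round estimates are recorded as one list per panelist. The ratings and variable names are invented for illustration and are not part of any required tooling.

```python
# Minimal sketch of the Modified Angoff arithmetic. Each row is one panelist's
# final-round estimates, one value per exam item, expressed as the percentage
# of minimally competent candidates expected to answer that item correctly.
from statistics import mean, stdev

ratings = [
    [70, 55, 80, 65],  # panelist 1
    [75, 60, 85, 60],  # panelist 2
    [65, 50, 90, 70],  # panelist 3
]

# Each panelist's implied passing score is the mean of their item estimates.
panelist_cuts = [mean(row) for row in ratings]

# The panel's recommendation is the average across panelists: the expected
# percent-correct score of a minimally competent candidate.
recommended_cut = mean(panelist_cuts)

# The spread of panelist-level cuts gives a rough standard error of the
# recommendation, one input to the policy adjustment in step 6.
standard_error = stdev(panelist_cuts) / len(panelist_cuts) ** 0.5

print(f"Recommended cut score: {recommended_cut:.1f}% (standard error ±{standard_error:.1f})")
```

The standard error shown here is simply the spread of the panelist-level cuts divided by the square root of the panel size. It is one common way to quantify how much the recommendation might move with a different panel, and a reasonable basis for a small, documented policy adjustment.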

Other Standard-Setting Methods

  • Bookmark method: panelists review exam items ordered by difficulty and mark the point where minimally competent candidates transition from likely-correct to likely-incorrect. More efficient for large item banks but requires item difficulty data from prior administrations.
  • Contrasting groups method: a sample of candidates is classified by experts as clearly competent or clearly not-yet-competent, and the cut score is set at the score that best separates the two groups (a minimal sketch of this decision rule follows the list). Requires a large enough candidate sample.
  • Borderline group method: SMEs identify candidates they consider borderline, and the median score of this group becomes the passing standard.
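To illustrate the contrasting groups decision rule, here is a minimal sketch that chooses the score minimizing total misclassifications between the two expert-classified groups. The scores below are invented; locating the point where the two score distributions cross, or taking the midpoint between group means, are other common ways to place the line.

```python
# Illustrative sketch of the contrasting groups method: choose the cut score
# that best separates expert-classified groups. Scores are invented examples.
competent_scores = [78, 81, 74, 88, 70, 85, 76]
not_yet_competent_scores = [62, 58, 71, 65, 69, 55, 60]

def misclassifications(cut):
    # Candidates judged competent who would fail at this cut, plus candidates
    # judged not yet competent who would pass.
    false_fails = sum(1 for s in competent_scores if s < cut)
    false_passes = sum(1 for s in not_yet_competent_scores if s >= cut)
    return false_fails + false_passes

# Search all possible percent-correct cut scores and keep the best separator.
best_cut = min(range(0, 101), key=misclassifications)
print(f"Cut score that best separates the groups: {best_cut}%")
```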

For most independent certification programs conducting their first standard-setting exercise, the Modified Angoff is the right starting point: the most established, the most defensible, and the most accessible to practitioners without psychometric training.

Documenting the Process

At minimum, your standard-setting documentation should include:

  • The date of the standard-setting study
  • The panel composition: number of panelists, their qualifications, how they were selected
  • The method used and a description of the process
  • Individual and aggregate panelist ratings for each item
  • The statistical result and any policy adjustments applied, with rationale
  • The final adopted cut score
  • Signatures or acknowledgment from panelists confirming the process
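As a sketch of what such a record might look like in practice, here is one possible structured format. The field names and values are invented examples, not a prescribed schema; a spreadsheet or signed report serves the same purpose as long as every element above is captured.

```python
# Illustrative only: one way to store the standard-setting record as a
# structured file. Field names and values are invented, not a required schema.
import json

study_record = {
    "study_date": "2024-03-15",
    "method": "Modified Angoff",
    "panel": [
        {"panelist": "A", "qualifications": "10 years in practice", "selection": "open call"},
        # ...one entry per panelist
    ],
    "item_ratings": {
        "item_001": {"round_1": [70, 75, 65], "round_2": [70, 70, 65]},
        # ...one entry per exam item, one estimate per panelist
    },
    "recommended_cut_percent": 68.3,
    "policy_adjustment_percent": -1.3,
    "adjustment_rationale": "Set one standard error below the panel mean by board decision.",
    "final_cut_score_percent": 67.0,
    "panelist_acknowledgments": "Signed acknowledgment sheet on file, dated 2024-03-15",
}

with open("standard_setting_record.json", "w") as f:
    json.dump(study_record, f, indent=2)
```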

Standard-setting studies should be repeated on a regular cycle — typically every 5 years — or when there is a significant change in the profession. A cut score derived from a panel review several years ago may not reflect current professional standards. Document the date of your study prominently, and build a review cadence into your program governance.
