A Job Task Analysis (JTA), sometimes called a practice analysis, role delineation study, or occupational analysis, is the foundational research that answers a simple but critical question: what does a competent practitioner in this field actually do?
The answer to that question is what your certification should measure. Without it, your exam measures what you believe practitioners need to know — which may or may not align with the realities of professional practice. For small, early-stage programs, that gap is often tolerable. For programs seeking accreditation, market credibility, or regulatory standing, it is not.
What a JTA Produces
- A task inventory: a structured list of the tasks, duties, and responsibilities that define competent practice in the role, organized by domain and weighted by frequency and criticality
- A knowledge and skill map: the specific knowledge areas and skills required to perform those tasks at a competent level
- An exam blueprint: a content outline showing which domains should be represented on the exam, and in what proportions, derived directly from the task weights
The exam blueprint is the practical output designers use most directly. It specifies '25% of exam items should address Domain 1: Assessment; 20% should address Domain 2: Planning' — based on data about frequency and criticality in practice, not on the program designer's intuitions.
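Those percentages eventually have to become whole item counts on a fixed-length exam form. Below is a minimal sketch of that step in Python; the domain names, weights, and the 150-item form length are all illustrative, and largest-remainder rounding is one common way to keep the per-domain counts summing exactly to the form length.

```python
# Sketch: turning blueprint domain weights into per-domain item counts.
# Domain names and weights are illustrative, not from any real JTA.

def items_per_domain(weights: dict[str, float], total_items: int) -> dict[str, int]:
    """Allocate items to domains with largest-remainder rounding,
    so the counts sum exactly to total_items."""
    raw = {d: w * total_items for d, w in weights.items()}
    counts = {d: int(r) for d, r in raw.items()}        # floor each allocation
    shortfall = total_items - sum(counts.values())      # items lost to flooring
    # Hand leftover items to the domains with the largest fractional remainders.
    for d in sorted(raw, key=lambda d: raw[d] - counts[d], reverse=True)[:shortfall]:
        counts[d] += 1
    return counts

blueprint = {
    "Domain 1: Assessment": 0.25,
    "Domain 2: Planning": 0.20,
    "Domain 3: Implementation": 0.35,
    "Domain 4: Evaluation": 0.20,
}
print(items_per_domain(blueprint, 150))
# Assessment picks up the leftover item: 38, 30, 52, 30 (sums to 150)
```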
How a JTA Is Conducted
Stage 1: SME Panel Development
A panel of Subject Matter Experts — typically 8–15 people who are active, competent practitioners — is convened to develop the initial task inventory. The panel should represent diversity in geography, practice setting, years of experience, and specialization. Their job is to articulate what the role actually involves, not what it theoretically should involve.
Stage 2: Task Inventory Construction
The SME panel produces a structured list of tasks organized into domains. Each task is written as an observable, specific action (e.g., 'Conduct an initial needs assessment with the client to identify performance gaps') rather than a general description. The specificity matters — it anchors the exam to real practice behaviors.
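As a concrete (and entirely invented) illustration, a task inventory is straightforward to represent as structured records. The sketch below assumes hypothetical task IDs and domain names; the later survey stages join practitioner ratings back to these IDs.

```python
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str    # stable identifier, used later to join survey ratings
    domain: str     # the domain the SME panel assigned the task to
    statement: str  # observable, specific action phrasing

# Illustrative entries only; a real inventory comes from the SME panel.
inventory = [
    Task("T-01", "Domain 1: Assessment",
         "Conduct an initial needs assessment with the client to identify performance gaps"),
    Task("T-02", "Domain 2: Planning",
         "Develop a project plan that sequences interventions against the identified gaps"),
]
```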
Stage 3: Practitioner Survey
The task inventory is distributed as a survey to a representative sample of practitioners — ideally several hundred. Practitioners rate each task on two dimensions: how frequently they perform it, and how critical it is to competent practice. These ratings generate the data that determines domain weights.
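A sketch of how a single practitioner's ratings might be captured, assuming a 0–4 frequency scale and a 1–4 criticality scale. Both scales are illustrative choices; the actual rating scales are a design decision for the survey instrument.

```python
from dataclasses import dataclass

@dataclass
class TaskRating:
    task_id: str      # joins back to a Task in the inventory
    frequency: int    # illustrative scale: 0 = never performed ... 4 = performed daily
    criticality: int  # illustrative scale: 1 = minor consequence of error ... 4 = severe

# One practitioner's response (values invented for illustration).
response = [
    TaskRating("T-01", frequency=3, criticality=4),
    TaskRating("T-02", frequency=2, criticality=3),
]
```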
Stage 4: Analysis and Blueprint Development
Survey results are analyzed to produce weighted domain scores. Tasks that are performed frequently and are highly critical receive higher weights. These weights are then translated into an exam blueprint specifying the number of items per domain.
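Building on the Task and TaskRating sketches above, here is one minimal way that aggregation could look. The mean-frequency times mean-criticality composite is a common choice but only one of several defensible weighting schemes; nothing here is a prescribed formula.

```python
from collections import defaultdict
from statistics import mean

def domain_weights(inventory, all_responses):
    """Normalized domain weights from practitioner ratings.

    inventory:     list of Task records (task_id, domain, statement)
    all_responses: one list of TaskRating per surveyed practitioner
    Composite: mean frequency x mean criticality per task, summed by domain.
    """
    domain_of = {t.task_id: t.domain for t in inventory}
    freq, crit = defaultdict(list), defaultdict(list)
    for response in all_responses:
        for r in response:
            freq[r.task_id].append(r.frequency)
            crit[r.task_id].append(r.criticality)

    # Per-task composite score, then roll up to the domain level.
    task_score = {tid: mean(freq[tid]) * mean(crit[tid]) for tid in freq}
    by_domain = defaultdict(float)
    for tid, score in task_score.items():
        by_domain[domain_of[tid]] += score

    total = sum(by_domain.values())
    return {d: s / total for d, s in by_domain.items()}  # proportions sum to 1.0
```

The proportions this returns can be fed directly into the items_per_domain sketch earlier to produce the per-domain item counts on the blueprint.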
When You Actually Need a JTA
A rigorous JTA, with a full practitioner survey and statistical analysis, is required for programs pursuing NCCA accreditation, ANSI/ANAB accreditation under ISO/IEC 17024, or recognition by regulatory bodies. It is also expected by any sophisticated employer or professional association evaluating whether to endorse your credential.
For programs at an earlier stage, a lighter-touch approach is defensible: a structured SME panel review, documented in writing, that produces a task inventory and exam blueprint without a full practitioner survey. This is not as rigorous as a full JTA, but it is significantly better than no validation at all — and it creates a documented basis for your exam content that can be built on as the program grows.
The JTA is not a one-time exercise. As professions evolve, the tasks that define competent practice change. Best practice is to conduct a formal JTA review every 5–7 years, or sooner if there are significant changes in technology, regulation, or professional standards. An outdated JTA is almost as problematic as no JTA — it anchors your exam to a version of the profession that no longer exists.
The JTA as a Credibility Signal
Beyond its technical function, the JTA is a credibility signal. Being able to tell employers, accreditors, and candidates that your exam content is derived from research into actual professional practice, not from the opinions of your founding team, is a meaningful differentiator. Publishing a summary of your JTA methodology and the resulting blueprint adds significant perceived legitimacy to your program at little additional cost, since the underlying work has already been done.