Many wonder how an “adaptive” exam truly works, and why it is used for the NREMT exam. The concept of an adaptive exam is based on principles of Item Response Theory (IRT), a way of evaluating exam (or other research) results based on individual questions rather than the exam as a whole. By evaluating a candidate from one question to the next, the exam can adapt and change to challenge the candidate based on the information received from each question answered. So you may be asking, “How does IRT work?”
There are essentially three models of this theory, each using more discriminating parameters than the last to evaluate a candidate’s success in answering questions on given topics. The first model uses only item (or question) difficulty: the test takes into account how challenging a question is, uses that information to select the difficulty of the next question, and in turn uses the result to gauge the candidate’s level of understanding of that topic. The second model uses question difficulty plus question discrimination (the relationship between rate of success and ability for each candidate) to determine a candidate’s level of understanding of a given topic. The third model uses item difficulty, discrimination, and guessing. The guessing parameter takes into account how easily a candidate could simply guess the correct answer to a question. By combining these parameters, the NREMT is able to quickly and accurately determine which candidates have truly mastered the material.
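To make the three models a little more concrete, here is a minimal sketch in Python of the three-parameter version of the model. The function name, the example parameter values, and the printed result are purely illustrative and are not the NREMT’s actual item parameters; dropping the guessing and discrimination parameters recovers the two simpler models described above.

```python
import math

def p_correct(theta, a, b, c):
    """Probability that a candidate with ability `theta` answers an item
    correctly under the three-parameter logistic (3PL) IRT model.

    a -- discrimination: how sharply the item separates candidates near b
    b -- difficulty: the ability level at which the item is most informative
    c -- guessing: the floor probability from guessing alone
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Dropping parameters recovers the simpler models described above:
#   second model: c = 0           (difficulty + discrimination)
#   first model:  c = 0, a = 1    (difficulty only)
print(p_correct(theta=0.5, a=1.2, b=0.0, c=0.25))  # ~0.73 for these made-up values
```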
Let’s take a look at how IRT is implemented in the NREMT exam. The exam begins by giving candidates questions “just below” the passing standard, and as a candidate answers correctly, the questions continue to get more difficult until the candidate has either demonstrated that they are at or above the necessary level of understanding, or until the test stumps them. This is the core idea behind Computer Adaptive Testing, delivered through Computer Based Testing.
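A rough illustration of that adaptive step is sketched below, using a hypothetical item bank and a simple “closest difficulty” selection rule. A real adaptive engine typically selects the item that is most statistically informative at the current ability estimate, but the flow is the same: estimate ability, pick the next question accordingly, repeat.

```python
# A minimal sketch of the adaptive step, assuming a hypothetical item bank
# where each item carries its 3PL parameters (a, b, c). Item values are invented.
def pick_next_item(theta_estimate, item_bank, answered):
    """Choose the unanswered item whose difficulty b is closest to the
    candidate's current ability estimate."""
    candidates = [item for item in item_bank if item["id"] not in answered]
    return min(candidates, key=lambda item: abs(item["b"] - theta_estimate))

item_bank = [
    {"id": 1, "a": 1.0, "b": -1.0, "c": 0.2},  # easy item
    {"id": 2, "a": 1.2, "b": 0.0, "c": 0.2},   # item near the passing standard
    {"id": 3, "a": 0.9, "b": 1.5, "c": 0.2},   # hard item
]

# Start just below the passing standard (theta = 0 here), then raise the
# estimate after each correct answer and lower it after each incorrect one.
theta = -0.6
print(pick_next_item(theta, item_bank, answered=set()))  # selects the easier item
```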
Once the test algorithm determines with 95% confidence that the candidate has either mastered or failed to master the material, the test ends. Because the questions delivered depend on how a specific candidate answers, the exam is unique each time it is taken, which makes cheating nearly impossible and further improves the quality and accuracy of the exam.
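A hedged sketch of that stopping rule might look like the following, assuming the exam keeps a running ability estimate and its standard error, both updated after every answered item. The variable names and the passing standard of 0 are assumptions for illustration, not the NREMT’s actual implementation.

```python
def should_stop(theta, se, passing_standard=0.0, z=1.96):
    """Stop once the 95% confidence interval around the ability estimate
    falls entirely above (pass) or entirely below (fail) the standard."""
    lower, upper = theta - z * se, theta + z * se
    if lower > passing_standard:
        return "pass"
    if upper < passing_standard:
        return "fail"
    return None  # keep testing: the interval still straddles the standard

print(should_stop(theta=0.8, se=0.3))  # 'pass' (interval roughly 0.21 to 1.39)
print(should_stop(theta=0.2, se=0.3))  # None   (interval still includes 0)
```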
IRT allows for more accurate measurements of ability not only in the NREMT exam, but in many other forms of research and testing. This model will likely be used more and more frequently as testing continues to shift from paper to computer.
