Validity, Reliability, and Fairness Evidence for the JD-Next Exam
Document Type
Issue/Research Brief/Blog
Publication Date
4-2024
Keywords
admission criteria, law school diversity, standardized test scores, law school performance, 1L GPA
Abstract
At a time when institutions of higher education are exploring alternatives to traditional admissions testing, they are also seeking to better support students and prepare them for academic success. Under such an engaged model, one may seek to measure not just the accumulated knowledge and skills that students bring to a new academic program but also their ability to grow and learn through that program. JD-Next is a fully online, noncredit, 7- to 10-week course that trains prospective juris doctor students in case reading and analysis skills, helping prepare them for law school before they matriculate. This study builds on work with previous JD-Next cohorts by introducing new scoring and reliability estimation methodologies based on a recent redesign of the assessment for the 2021 cohort, and it presents updated validity and fairness findings using first-year grades rather than only first-semester grades, as in prior cohorts. Results support the claim that the JD-Next exam is reliable and valid for predicting law school success, providing a statistically significant increase in predictive power over baseline models that include entrance exam scores and grade point averages. In terms of fairness across racial and ethnic groups, score disparities are smaller with JD-Next than with traditional admissions assessments, and the exam is shown to be equally predictive for students from underrepresented minority groups and for first-generation students. These findings, together with those from previous research, support the use of the JD-Next exam for both preparing and admitting future law school students.
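The abstract's incremental-validity claim lends itself to a compact illustration. The sketch below, in Python with statsmodels, simulates data and compares a baseline regression of first-year GPA on an entrance exam score and undergraduate GPA against a model that adds a JD-Next-style score, testing the R-squared gain with a nested-model F-test. The variable names (ugpa, lsat, jdnext, l1_gpa, first_gen) and all coefficients are hypothetical stand-ins for illustration, not the study's actual data or model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data with hypothetical effect sizes; real analyses would use
# admissions records and first-year (1L) grades.
rng = np.random.default_rng(0)
n = 500
ugpa = rng.normal(3.3, 0.4, n).clip(2.0, 4.0)
lsat = rng.normal(155, 8, n)
jdnext = 0.3 * (lsat - 155) / 8 + rng.normal(0, 1, n)  # correlated with the entrance exam
l1_gpa = (2.9 + 0.25 * (ugpa - 3.3) + 0.02 * (lsat - 155)
          + 0.10 * jdnext + rng.normal(0, 0.35, n))
df = pd.DataFrame({"ugpa": ugpa, "lsat": lsat, "jdnext": jdnext, "l1_gpa": l1_gpa})

# Baseline model (entrance exam score + undergraduate GPA) vs. the same
# model augmented with the JD-Next-style score.
baseline = smf.ols("l1_gpa ~ ugpa + lsat", data=df).fit()
augmented = smf.ols("l1_gpa ~ ugpa + lsat + jdnext", data=df).fit()

# Incremental validity: change in R^2 plus an F-test for the nested models.
f_stat, p_value, df_diff = augmented.compare_f_test(baseline)
print(f"Baseline R^2:  {baseline.rsquared:.3f}")
print(f"Augmented R^2: {augmented.rsquared:.3f}")
print(f"Delta R^2:     {augmented.rsquared - baseline.rsquared:.3f}")
print(f"F({df_diff:.0f}, {augmented.df_resid:.0f}) = {f_stat:.2f}, p = {p_value:.4f}")

# Differential-prediction check: a group-by-score interaction near zero is
# consistent with the score being equally predictive across groups.
df["first_gen"] = rng.integers(0, 2, n)  # hypothetical group indicator
inter = smf.ols("l1_gpa ~ ugpa + lsat + jdnext * first_gen", data=df).fit()
print(f"Interaction: b = {inter.params['jdnext:first_gen']:.3f}, "
      f"p = {inter.pvalues['jdnext:first_gen']:.4f}")
```

The closing interaction test sketches the standard way to probe the abstract's "equally predictive" fairness claim: if the slope of the score on first-year GPA does not differ significantly by group, prediction is comparable across groups.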