Abstract

Objective: This study attempted to establish a consistent technique for measuring the readability of a state-wide Certified Nursing Assistant (CNA) certification exam.

Background: Monitoring the readability of an exam helps ensure that no test version exceeds the exam's maximum reading level and that knowledge of the subject matter, rather than reading ability, is being assessed.

Method: A two-part approach was used to specify and evaluate readability. First, two methods (Microsoft Word® (MSW) software and published readability formulae) were used to calculate Flesch Reading Ease (FRE) and Flesch-Kincaid Reading Grade Level (FKRGL) scores for multiple standardized tests as well as a state-wide CNA certification exam, and statistics calculated by hand were compared with those computed by MSW. Second, because of inconsistencies in the readability calculations, a single method was developed for calculating readability in order to create tests at or below an eighth-grade reading level.

Results: There were significant differences between readability statistics calculated by hand and those calculated using MSW for both the standardized tests and the CNA certification exam; hand calculations indicated an easier-to-understand document than did MSW. After identifying values (e.g., numbers and letters) were removed, calculated reading levels were consistent across test versions.

Conclusion: Reading grade levels calculated via unpublished formulae should be used with caution because of their inconsistent results. Further, creating a standardized format for the CNA exams will help ensure that the readability statistics of the document fall within the certification exam's guidelines.

Application: The reading grade level calculation should be used to ensure that the maximum reading level of a certification exam is not exceeded. Evaluating influences that affect reading level calculations should be an integral part of creating standardized tests.
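
For reference, both statistics named in the Method are computed from three counts: words, sentences, and syllables. The sketch below applies the published Flesch formulas; the syllable counter is a rough heuristic of our own (an assumption, since neither the study's hand method nor MSW's internal counting is specified here), and such counting differences are exactly the kind of variation that can produce the discrepancies the Results describe.

```python
import re

def count_syllables(word: str) -> int:
    # Simplified vowel-group heuristic (an assumption for illustration;
    # the Flesch formulas do not prescribe a syllable-counting algorithm).
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1  # treat a trailing 'e' as silent
    return max(n, 1)

def flesch_stats(text: str) -> tuple[float, float]:
    # Count sentences by terminal punctuation and words by letter runs.
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(len(words), 1)
    syllables = sum(count_syllables(w) for w in words)
    wps = n_words / sentences   # average words per sentence
    spw = syllables / n_words   # average syllables per word
    # Published Flesch Reading Ease (higher scores = easier text)
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    # Published Flesch-Kincaid Reading Grade Level
    fkrgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkrgl

fre, fkrgl = flesch_stats("The nurse checks vital signs. She records them.")
print(f"FRE = {fre:.1f}, FKRGL = {fkrgl:.1f}")
```

One plausible mechanism for the Results, under these assumptions: stand-alone identifying values such as "1." or "A." can be tokenized as extra one-word sentences, shortening the apparent average sentence length, which is consistent with the abstract's finding that removing such values stabilized the calculated reading levels.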
