Introduction

In this case study, Thomas Hays, a data coordinator for an elementary school district, uses Remark Office OMR to help teachers understand student reading skills and adjust instruction to improve them.

Background

After proving the feasibility and usability of Remark Office OMR for collecting data, we took on another challenge in improving instruction in the elementary schools. Language Arts instruction was being assessed at the school level, but no data collection and analysis were being performed. Teachers assessed the students using an assessment known as a holistic test, composed of brief constructed response (BCR) items and two types of selected response items. The selected response items are those whose answers are either stated explicitly in the text or can be inferred from it. The brief constructed response items are two items requiring several sentences that demonstrate the student's understanding of the text. The BCRs are not scored for spelling, grammar, word choice, or creativity; they are scored only for understanding of the text and the textual support provided in the response. These assessments were scored by the teachers and used to adjust instruction as each teacher deemed necessary. There was no attempt to provide a consistent score from teacher to teacher, much less from school to school; that was not feasible with the scoring method used.

The main reason for assessing the students in this manner was to help teachers understand student reading skills and adjust instruction to improve these skills. In Maryland, the reading assessment used by the Maryland State Department of Education (MSDE) to evaluate students as well as schools was composed of these same two types of questions (selected response and constructed response). Our Title I schools had the additional responsibility of reporting student progress in reading three times a year. The Title I reports were intended to evaluate the Title I program, not specific students, and consisted of aggregated numbers of students meeting expectations. To produce them, teacher specialists took the scores of individual students and aggregated them by hand. The reports did not track individual students, only the number of students in each reporting category (below level, satisfactory, and advanced). Needless to say, this was a very tedious and time-consuming process, but it was the best that could be done without an electronic system. All of this changed when the Title I teacher specialists and I began our work together.

This project required the creation of a data management and evaluation system. As with all data projects, three areas had to be considered: the reports required, the data available, and the calculations to be performed. The Title I teacher specialists provided the beginning prototype reports. The data came from two sources: the student demographic database maintained by the school system and the assessment data scanned using Remark Office OMR. The calculations and the linking of the two data sets were accomplished using Microsoft Access™.

The assessments were reformatted to accommodate the scanner and the Remark Office OMR software, which allowed the student identification information to occupy very little space on the page. The identifying information included the student identification number, student name, teacher name, and school, merged from the school system's database into the assessment document. The document contained the questions, answer choices, bubbles for students to mark, areas for the BCRs and student responses, and the bubbles for the teacher to mark the scores for the BCRs. This made for a user-friendly assessment document: the teachers only had to read and score the BCRs. Scanning the documents collected the student responses and the teacher-assigned values for the BCRs. All of the data was transferred to the data system for the assessments, where the “magic” of report generation took place. More about the “magic” later…
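To give a concrete picture, the exported scan data might land in a results table shaped roughly like the generic SQL sketch below. The table and field names are hypothetical illustrations, not the actual schema; the real fields depend on how the Remark Office OMR template was defined.

    CREATE TABLE AssessmentResults (
        StudentID  TEXT,     -- read from the barcode on each sheet
        Item1      TEXT,     -- student's marked answer for question 1
        Item2      TEXT,     -- ...one field per selected response item...
        Item10     TEXT,
        BCR1Score  INTEGER,  -- teacher-assigned rubric score, bubbled on the sheet
        BCR2Score  INTEGER
    );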

As an aside, the classroom teachers in the Title I schools had to be trained on scoring the BCRs to provide scoring consistency between teachers. This was handled by the Title I teacher specialists. For the most part, the BCRs were scored by the grade-level teachers with assistance from the Title I teacher specialists. They met to discuss what the rubric expected of student responses, then read and scored the papers together. This practice made for consistent results from teacher to teacher. Because the teacher specialists were responsible for multiple schools, they also helped keep BCR scores consistent from school to school.

Formatting the assessments was not difficult. Microsoft Word® was used to construct and print the scan sheets. The assessments were formatted to best suit the age of the students being assessed: grades 1 and 2 used a larger 14- or 16-point type, while grades 3 through 5 used a smaller type (usually 12-point). Another real advantage of reformatting the assessment documents was the ability to place the questions and answer selections right next to each other. This eliminated the problem of students transferring answers to a separate answer sheet and sometimes marking the wrong item. It also allowed space for each constructed response to appear next to its question, which made scoring these items easier for the teachers. Naturally, the student identification number, which is linked to a demographic database, appeared as a barcode on each student's individual assessment sheet. It should be noted that an individual assessment sheet had to be printed for each student in each of the then eight Title I schools; with approximately 1,000 students in each grade, this was quite a printing task! The necessary student demographic data (school, grade, teacher, ID number, and student name) were merged onto each individual assessment sheet from the school system's student demographic database.
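As a rough sketch, the merge data source for those sheets could come from a query along these lines. This is generic SQL with hypothetical table and field names; the actual demographic database and its schema belonged to the school system.

    SELECT School, Grade, TeacherName, StudentID, StudentName
    FROM StudentDemographics            -- hypothetical name for the district's table
    WHERE TitleISchool = TRUE           -- limit the run to the eight Title I schools
    ORDER BY School, Grade, TeacherName, StudentName;

Sorting by school, grade, and teacher keeps the printed sheets in distribution order, which matters when roughly 1,000 sheets per grade come off the printer.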

The “magic” of report generation is the result of using a relational database system. In my office, Microsoft Access provided the ability to do the “magic.” The data system allowed the results to be aggregated and disaggregated to produce the required reports. This was accomplished by linking the student demographic data table with the assessment results data table created from the scanned assessments. (Again, another wonderful feature of Remark Office OMR is the ability to export the scanned data into almost any database or spreadsheet.) Once the data tables were linked via an Access query, all sorts of aggregation and disaggregation could take place. The data could be sorted by school, teacher, race, gender, poverty status (based on Free and Reduced Meal status), special education status, ESL status, and so on, depending on the fields in the school system's student demographic database. Additionally, the reports provided an analysis that was consistent from teacher to teacher and school to school.
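As a minimal sketch of that linking step, again in generic SQL with the hypothetical names used above (the production version was an Access query):

    -- join demographics to scanned results so every answer is tagged
    -- with school, teacher, and subgroup information
    SELECT d.School, d.TeacherName, d.StudentName, d.Race, d.Gender,
           d.FARMStatus, d.SpecialEd, d.ESLStatus, r.*
    FROM StudentDemographics AS d
    INNER JOIN AssessmentResults AS r
        ON d.StudentID = r.StudentID;

Saving such a query (call it LinkedResults, another hypothetical name) gives every downstream report a single, consistent source to draw from.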

The reports that do the most to change instruction are the classroom reports, which list each student in the class, how the student did on each item, and the overall score for the assessment. This report needed to be delivered within 48 hours of receiving the scan sheets to make an impact on daily instruction. The reports were formatted so that the classroom teacher could immediately see how each student was doing in reading. Other reports eliminated the need for the Title I teacher specialists to manually calculate the number of students achieving at the various levels in reading.
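For illustration, the per-item results and an overall selected response score could be pulled for one classroom with something like the following sketch, using Access-style IIF to check each answer against the key. The key values, teacher name, and query name are all hypothetical.

    SELECT StudentName,
           Item1, Item2, Item3,                -- the raw marks, shown item by item
           IIF(Item1 = 'B', 1, 0)
             + IIF(Item2 = 'D', 1, 0)
             + IIF(Item3 = 'A', 1, 0)
               AS SelectedResponsePoints,      -- items 4 through 10 continue the pattern
           BCR1Score + BCR2Score AS BCRPoints
    FROM LinkedResults
    WHERE TeacherName = 'Sample Teacher'       -- one classroom per report
    ORDER BY StudentName;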

Because these assessments were given just quarterly, and each had only about 10 selected response items and two BCR items, no single assessment could be used with a high degree of confidence by itself. However, when two or three of the assessments were aggregated for each student, the level of confidence regarding student ability rose significantly. The aggregated results were a very good predictor of how a student would score on the MSDE annual reading assessment. By using these aggregated predictive results at the end of January, specific students were identified as needing additional instruction. Targeting instruction to specific student deficiencies helped these students gain the needed skills earlier than ever before. Results from the MSDE annual assessment were not available until the end of the school year, too late to make any changes to student learning until the next school year. So not only were more students able to score well on the MSDE reading assessment, they were able to get the specific help they needed to be successful readers for the rest of their lives. Isn't that what assessment is supposed to do: identify areas that need improvement so instruction can be provided in a timely manner? That is what we in the Title I Office thought, and we were very pleased to impact instruction and learning in such a significant way.
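The aggregation itself is simple in form. A hedged sketch, with hypothetical names, of combining each student's quarterly totals into the predictive score used at the end of January:

    -- combine each student's quarterly totals into a year-to-date score
    SELECT StudentID,
           SUM(SelectedResponsePoints + BCRPoints) AS YearToDatePoints,
           COUNT(*) AS QuartersTaken
    FROM QuarterlyScores         -- hypothetical table: one row per student per quarter
    WHERE Quarter <= 2           -- the assessments given before the end of January
    GROUP BY StudentID;

Students whose year-to-date points fell below a chosen cutoff could then be flagged for additional instruction.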

Attached to this article is a sample of the grade 3, quarter 1 holistic assessment form used by students and teachers, along with two reports. The assessment sample shown was printed on 8½-by-14-inch paper in landscape format, which allowed for four pages of assessment when printed front and back.

The first report shown is the one that makes the most difference in instruction: the classroom report delivered to the classroom teacher. The colors immediately flag each student's performance and the areas needing the most attention, and they indicate the degree of intervention needed. Yellow indicates the student is having some difficulty and would benefit from some assistance; pink indicates extreme difficulty and requires more attention in that area of instruction. It is through this kind of report that students were identified for extra help and kept from slipping through the cracks.
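The color rule itself amounts to a pair of thresholds. A sketch with invented cutoff values and hypothetical names (the real cutoffs are not reproduced here):

    SELECT StudentName, OverallPercent,
           IIF(OverallPercent < 50, 'Pink (extreme difficulty)',
               IIF(OverallPercent < 70, 'Yellow (some difficulty)', 'No flag'))
               AS InterventionFlag
    FROM ClassroomScores       -- hypothetical query holding each student's overall percent
    ORDER BY StudentName;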

The second report is a sample of the kind of report required for Title I. It counts the number of students in each subgroup at each performance level on the assessment. This report page is for one grade at one school; it shows performance by race group and, within each race group, by gender. Previously, teacher specialists had to hand-tabulate this data to comply with the Title I requirements. Now, thanks to Remark Office OMR and a Microsoft Access relational database, all of that work is performed electronically (a sketch of the grouping query follows below), and teacher specialists are free to do what they do best: help teachers and students improve student learning.
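The subgroup tabulation is a straightforward grouping query. A generic SQL sketch, assuming each student's grade and performance level (below level, satisfactory, advanced) have been computed and stored alongside the linked results:

    -- count students at each performance level, by race and gender,
    -- for one grade at one school
    SELECT Race, Gender, PerformanceLevel, COUNT(*) AS StudentCount
    FROM LinkedResults
    WHERE School = 'Sample Elementary' AND Grade = 3
    GROUP BY Race, Gender, PerformanceLevel;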

Thomas Hays is a retired educator who served a 50,000 student school district in Maryland as assistant supervisor for accountability and data coordinator for compensatory education from 1996 through June 2007.