Assessment of Struggling Elementary Immersion Learners: The St. Paul Public Schools Model
The ACIE Newsletter, February 2006, Vol. 9, No. 2
By Aline S. Petzold, School Psychologist, St. Paul Public Schools, St. Paul, Minnesota
Students who struggle in the immersion classroom are typically assessed for possible special education help following the traditional “severe discrepancy” formula, used nationally since the 1970s. That is, the student’s intellectual potential and actual academic functioning are compared on standardized comprehensive tests given in English. The student is found eligible for service if there is a “severe discrepancy,” or a significant gap, between his or her current level of achievement and where he or she should be functioning.
In the St. Paul Public Schools (SPPS) district, however, educators believe that, if a student does not have the same experiences as those in the norming sample of a standardized test, the test should not be used to make decisions regarding that student’s schooling. In such cases, it is unfair and professionally unsound to use English-medium tests on students in the immersion classroom who spend their day being instructed in a language other than English. Instead, SPPS assessment tools are created from the students’ curriculum (Curriculum-Based, or General Outcome, Measures), comparing a student’s progress to that of peers.
To create these more program-appropriate norms, entire grade levels of immersion students are tested for reading fluency and math computation once every three years. Results are analyzed to determine a median score, which is then used to establish immersion-specific norms. Students qualify for special education services only if they are at least two times discrepant (i.e., they make twice as many errors as their peers) in both English and the immersion language. This methodology is adapted from the district’s model for assessing English language learners, in use since 1990. The model requires quite a bit of planning and time initially, but we find that it leads to more accurate placement decisions in the end.
Research Support for the Model
Research support for comprehensive and alternative approaches to assessment such as curriculum-based measurements when dealing with linguistically and culturally diverse learners is strong (Baker, Plascencia-Peinado, & Lezcano-Lytle, 1998; Shinn, 1989). The references and resources listed below deal with Curriculum-Based Measurement (CBM) theory and its use with native English speakers or English language learners. The administration and scoring instructions are for English language materials. At this time, there is no literature that focuses specifically on the use of CBM materials in the immersion setting.
Implementing the SPPS Assessment Model: A Step-by-Step Guide
The following outlines the steps for implementing the SPPS Assessment Model for language immersion students.
What do I need?
People - to read instructions for the group tests; people to monitor the groups during test administration; people to listen to individual readers; people to score the tests. The more people on hand, the faster the data can be collected and analyzed.
Time - to administer and to score tests.
Materials - including test packets with a math computation task, a written language story starter, a comprehension task, grade level-appropriate oral reading passages and a common reading passage; pencils, stop watches or watches with second hands, administration and scoring directions for helpers.
What Do I Do for Testing?
Before the Test Day
Decide the scale of your project. Will you be focusing on one struggling student in one classroom or several struggling students across one grade level? Do you want information about typical rate of progress from grade to grade at one school? Do you have more than one program district-wide to compare?
Schedule blocks of time to gather two sets of norms (one in English and one in the immersion language). Group testing of math and written language takes about 20 minutes; oral reading takes five minutes per child. (Note: Gathering norms is like making a quilt – allot double the amount of time you think you will need in case of last minute glitches!)
Create probe packets for grade level(s) to be tested. Each packet should contain: a front page with space for identifiers (student’s name, grade, school, date) and a scoring table showing each test with one column for the student’s scores and one with the median score of peers for comparison; two math computation pages (one at grade level, one with mixed calculations to sample knowledge of functions over several grades); one written language story starter; four oral reading passages (three at grade level, from basal texts or the equivalent; one common passage at about 4th grade level). These passages should be 150-250 words, long enough so that the average reader cannot finish before the one-minute time limit. Numbering these passages will make scoring easier. (A discussion in support of oral reading fluency measures as a determinant of second language reading ability can be found in Shinn, Good, Knutson, Tilly & Collins, 1992). Samples of the oral reading probe, story starter, cloze passage and math problems can be found in Figure 1, available online.
Upper grade passages should be longer (250-350 words) and packets should also contain a cloze passage, where certain deleted words must be replaced, to test reading comprehension. The cloze procedure or maze task has been used to test reading comprehension in second language learners since the 1970s (see, for example, Oller, J.W., 1973; Shin et al., 2000).
Create a master packet of all the oral reading passages, both grade-level and common, to be used for one-on-one reading. These do not need to be numbered.
Create instruction and scoring packets for helpers to use as reference guides. In SPPS we have adapted our own version of the scoring system originally designed by Deno and Shinn at the University of Minnesota (see, for example, Shinn, 1989; see also AIMSweb Curriculum Based Measurement online site).
Recruit people to help you on the test day. The more helpers you have the better!
On the Test Day
Arrive early! Bring extra timers. Gather all helpers. Make sure all helpers have been trained beforehand about the process and have a copy of the instruction/scoring packet. Begin norming your group(s). If you are working with more than one class, consider starting with one-on-one reading and ending with the whole group testing of written language and math, as long as all students are present for the group test.
For group testing: Have one person read the instructions and answer questions before the test, have other helpers (up to three) circulate and monitor the group during testing. Collect the packets and keep them in a safe place until scoring time.
For one-on-one reading: Take students, one at a time, to a quiet place to read out loud for you. This is the most time consuming part of collecting the data. The more helpers you have listening to readers, the shorter this phase will be. Have the student read from the appropriate unnumbered passages in your master packet. Clock their reading for one minute. Mark their errors and the end point on the corresponding numbered passage in the student’s packet. Collect the student packets for scoring later. It is a good idea to have a prearranged activity to occupy the students who are waiting to read to you; for example, reading at their desks, drawing, doing an assignment.
How Do I Develop Local Norms?
Remember, you need to do the norming twice, once in English and once in the immersion language.
Have copies of the scoring instructions available for all helpers. Scoring instructions need to include some examples of errors.
Start scoring the packets soon after they are administered, if you have time. For example, you could test in the morning, break for lunch and re-group to tackle scoring in the afternoon. Input from the helpers is invaluable when scoring, especially when trying to decipher words and phrases in the written language story segments.
Calculate the scores for each test and complete the scoring table in the front of each student packet. A packet typically takes about ten minutes to score. See Figure 2, available online.
When Tabulating Data
Create a list of scores for each student in each area tested, then determine the median score and range of scores for each area. To find the median, first rank order the scores from lowest to highest. If your sample size is odd, the median is the middle score; if it is even, the median is the average of the two middle scores.
Chart these for use as a reference guide (Available online, Figure 3 shows a sample norm grid by grade; Figure 4 shows norms in a multi-grade format). The median score is used to compare your struggling student against immersion peers. In SPPS, immersion students qualify for special education instruction if their error score is at least twice the median in both English and the language of instruction. For struggling students who do not meet the strict 2X discrepant criterion, the range of scores helps you decide whether they are nevertheless at risk relative to peers or are simply still developing skills. The range of scores also gives a sense of the level of difficulty of your testing material. That is, if too many scores are high (or low), you may want to adjust your probes accordingly.
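For readers who tabulate their norms in a spreadsheet or script, the arithmetic above can be sketched in a few lines. This is an illustrative sketch only: the error counts, dictionary keys, and function names are hypothetical, not part of the SPPS materials.

```python
# Sketch of the norming arithmetic described above (illustrative only;
# the scores and names are hypothetical).

def median(scores):
    """Median by the rank-order rule: the middle score for an odd-sized
    sample, the average of the two middle scores for an even-sized one."""
    ordered = sorted(scores)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

def is_discrepant(student_errors, peer_errors, factor=2):
    """SPPS criterion: the student's error score is at least `factor`
    times the peer median."""
    return student_errors >= factor * median(peer_errors)

def qualifies(student, peers):
    """A student qualifies only if discrepant in BOTH languages."""
    return (is_discrepant(student["english"], peers["english"])
            and is_discrepant(student["immersion"], peers["immersion"]))

# Hypothetical oral-reading error counts for one grade level:
peers = {"english": [2, 3, 3, 4, 5, 6, 8],    # median = 4
         "immersion": [1, 2, 2, 3, 4, 5, 7]}  # median = 3
student = {"english": 9, "immersion": 6}

print(qualifies(student, peers))  # True: 9 >= 2*4 and 6 >= 2*3
```

Note that the criterion must hold in both languages at once; a student discrepant only in English would not qualify under this model.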
When and How Often Do I Assess?
Carry out this norming in the fall and again in the spring to gather information about learning growth over time. The data should be updated every three to five years to maintain usefulness, but until then, you can relax and enjoy the fruits of these labors.
It takes effort to develop local norms, but the outcome is a more program-appropriate evaluation tool, since students are compared to their immediate peers and test content is taken directly from their curriculum. We have used this alternative assessment model in SPPS immersion programs since 1993 as one part of a comprehensive learner-centered approach. As a result, we have had greater success in minimizing the likelihood that struggling students are falsely identified as candidates for special education.