On November 29, Education Week ran a free webinar with Joe Willhoft, executive director of SMARTER Balanced (SB), and Laura Slover, senior vice president of PARCC. As you know, SB and PARCC are the two assessment consortia receiving federal funds to develop the new assessments for the Common Core State Standards (CCSS). Though Ohio recently announced its intention to join the PARCC group, along with 18 other governing states and 6 participating states, 28 states (21 governing and 7 advisory) have joined SB. In the interest of being informed, I tuned in to the webinar (archived here and accessible after a brief, free registration) for both the SB and PARCC pieces.
To receive federal funding, both assessment consortia had to show in their applications how they would meet certain requirements. They must assess the entire spectrum of the CCSS, including the difficult-to-assess standards. Their assessments must also be equitable and accessible for all students, including English Language Learners (ELL) and students with disabilities (SWD). When asked in the webinar about their use of accommodations, neither consortium could give a clear-cut answer as to how its assessments would be equitable and accessible for all; however, both maintain that ELL and SWD considerations are constants in the development process. SB has received some federal grant money through “enhancement funding” to add a translation component to its assessments, and PARCC says there is money set aside to cover translations on the math assessment.
The two groups also sought to include teachers throughout the development and implementation of their programs, which is important since another requirement of federal funding is that the assessments be usable as a factor in teacher evaluations. Both consortia plan to provide extensive professional development opportunities for teachers, including instructional modules. While SB will focus its professional development on training teachers to use the assessments and on formative/summative practices, PARCC intends to extend these learning opportunities to include data analysis and how to use the data from the assessments. Both assessments can be used as part of teacher evaluations, but individual states will decide how and to what extent this should be done.
The assessments themselves are similar in structure. Both consortia offer a set of four tests.
SB offers two summative assessments, to be given in the last 12 weeks of school for grades 3-8 and high school. The first is performance-based and will take the equivalent of two class periods to complete. Willhoft specified that “performance” meant “sit down” assessments, so I infer these would take the form of extended response/essay-type questions. The second summative assessment is the computer-adaptive version, which students have the option to retake one time. Additionally, SB would offer two optional interim assessments (also computer-adaptive) that could be used for formative purposes (much like the screeners many districts are using for RtI). SB does not currently include a K-2 component.
The PARCC group also offers a four-part assessment schedule, with two summative assessments for grades 3-11. For K-2, PARCC is working to create an optional set of formative tools that it says could look more like games; these could also involve sets of rubrics, activities, and projects. For grades 3-11, PARCC offers an optional diagnostic assessment for formative purposes that can be used at any time and an optional mid-year assessment to be used for formative purposes at the mid-year mark. The summative assessments include a performance-based assessment, which Slover indicated would be an extended task, and an “innovative” computer-based assessment. The PARCC assessment schedule also includes one required speaking and listening assessment that is to be locally scored and non-summative in nature.
One key difference between the two assessments is the computer-adaptive component of SB’s assessment format. Computer-adaptive tests respond automatically to correct and incorrect responses, adjusting the questions accordingly as the student answers. Scores on these tests indicate more specific and direct results, whereas non-adaptive assessments give a more general picture of achievement. If you will recall, when I discussed screening assessments for RtI in the past, one assessment I mentioned was the Northwest Evaluation Association’s Measures of Academic Progress. This screener is computer-adaptive and intended to pinpoint where a student’s achievement abilities fall using its RIT scale of scoring.
Computer-adaptive tests adjust according to the respondent’s answers, zeroing in on areas of concern by stepping up to more difficult questions after correct answers and down to less difficult ones after incorrect answers, until the test settles on the level of question the respondent is most comfortable with and able to answer.
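As a toy illustration of that up-and-down adjustment, the sketch below steps a difficulty level after each answer. To be clear, this is not SB’s or NWEA’s algorithm (real adaptive tests use item response theory and statistical ability estimates); the function names, step rule, and 1-10 difficulty range are all my own illustrative assumptions.

```python
# Toy sketch of computer-adaptive difficulty adjustment.
# NOT either consortium's actual algorithm; real systems use item
# response theory. All names and the 1-10 range are illustrative.

def adapt_difficulty(difficulty, correct, step=1, lo=1, hi=10):
    """Move the next item's difficulty up after a correct answer,
    down after an incorrect one, clamped to the item bank's range."""
    difficulty += step if correct else -step
    return max(lo, min(hi, difficulty))

def run_adaptive_test(responses, start=5):
    """Simulate a short test: `responses` is a list of True/False
    answers; returns the difficulty level where the test ends up."""
    difficulty = start
    for correct in responses:
        difficulty = adapt_difficulty(difficulty, correct)
    return difficulty

# A student who keeps answering level-6 items correctly but misses
# level-7 items will oscillate between 6 and 7, the comfort zone.
print(run_adaptive_test([True, True, False, True, False, True, False]))
```

The oscillation at the end is the point: once the test finds the band where the student alternates between correct and incorrect, it has located the level described above.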
One question that arises from this primary difference between the SB and PARCC test designs is how the two sets of scores will be comparable to each other. Policymakers will need to look at how achievement is reflected in the cut score of a PARCC assessment versus the cut score of an SB assessment. The aim of the CCSS is to create a common set of expectations for all students, and both assessments will need to reflect some commonality in their scores despite the computer-adaptive nature of SB’s assessment.
Both consortia plan to have their assessments ready for the 2014-2015 school year, though they will be at varied stages of development over the next two years.
My Evernote Notes from Webinar (includes Q&A)