Tag Archives: assessment practices

High-Quality Assessments and Standards-based Grading and Reporting

By Natalie Bolton

Standards-based grading and reporting policies are becoming the norm in PK-12 schools, districts, and states. However, as policies are created calling for shifts in grading and reporting practices, it is imperative that time be spent making sure that classroom assessments, both formative and summative, are of high quality. So, what tools or checks are in place to help teachers ensure their classroom assessments are of high quality before reporting whether a student has met a standard?

I’ve found that using the assessment development cycle as described by Chappuis, Stiggins, Chappuis, and Arter (2012) is a great tool to critique an existing assessment or to provide guidance as an assessment is being designed. Using the assessment development cycle helps ensure I can accurately communicate about student mastery of standards. All assessments, regardless of assessment method, should go through the cycle to ensure they are of high quality. Three stages make up the cycle and are described in Figure 1.

What Happens Before the Reassessment?

By Jeffrey Erickson

The topic of reassessment has spurred many “lively” conversations and debates in schools. Some argue that it isn’t fair that some students get a second chance at learning and believe that second chances don’t reflect the real world (forgetting that many of us would not be able to drive to work if there were no retakes of the driver’s test). Others contend that reassessments provide students an important opportunity to improve their learning and show proficiency. However, what I’ve learned over time as a building principal is that we need to shift the conversation away from reassessment and toward what happens before the first summative assessment is even given.

My school, Minnetonka High School, is an International Baccalaureate (IB) school. As teachers of IB courses complete course assessments, they are required to review all of the assessments, compare them against the rubric, and predict students’ final IB scores (on a 1-7 scale). In turn, IB moderates the teachers’ predicted scores. The process of predicting individual students’ scores is intriguing because of the amount of evidence of learning required to predict them.

A teacher who starts with the end in mind should be able to go around the classroom as he or she passes out the summative assessment and accurately predict each student’s performance. The outcome on the test should be a surprise to neither the teacher nor the student. Sounds simple? In reality, no. To do this, the teacher must have a preponderance of evidence about each student’s performance gathered over the unit of study. There has to have been a series of formative assessments that provide the teacher with accurate feedback about each student’s learning. Each of those formative assessments helps drive and shape the teacher’s instruction so that mid-course corrections can be made. Rather than being reactive after the summative, the goal is to be proactive during the learning process and intervene long before the first test is given. If the evidence of learning shows that a student is not ready, why give the assessment in the first place?

In the end, the testing results should never be a surprise. The criteria for success should be clear to all parties. Students should receive timely, specific, and targeted feedback throughout the learning process. With this information, proactive interventions can happen just in time for remediation—not the day after the summative assessment.

ATI Continues to Grow Its Vision

By Rick Stiggins

Many don’t realize that the social institution we call school in America has undergone a fundamental change in mission. Historically, a primary mission has been to produce a rank order of students based on achievement by the end of high school—that is, to begin the process of sorting us into the various segments of our social and economic system. But over the past 20 years, new missions have been added. Schools also are being held accountable for delivering ever higher levels of achievement, universal lifelong learner competence, narrower achievement gaps, and reduced dropout rates.

Education As a “Cut” Sport

By Jan Chappuis

Basketball is a “cut” sport—players try out and not everybody makes the team. We don’t usually think of our classrooms as places where learning is a cut sport; nobody wakes up in the morning and says, “Today I need to exclude a few students.” Yet some of our traditional assessment practices structure the rules of success so that education becomes a “sport” many students choose to drop.

How does assessment do this? Three typical classroom causes are not allowing students sufficient time to practice, grading for compliance rather than learning, and using assessment practices that distort achievement.

Not allowing sufficient time for practice: Let’s assume that the reason we as teachers have jobs is that students don’t already know what we are teaching. It follows that we can expect a need for instruction accompanied by practice, which will not be perfect at the start. We can expect that we’ll need to monitor the practice and intervene with correctives so students don’t spend time learning it wrong. If practice time is cut short by a pacing guide or other directive about what to “cover,” only those students who need a minimum of practice to improve will succeed. The others will tend to conclude they aren’t very good at the task or subject. But that is the premise we began with: they aren’t good at it yet. Our job is to give them sufficient opportunity to improve through instruction, practice, and feedback. If we cut learning short by assessing for the grade too soon, we have in effect decided to exclude a few students.

Grading for compliance rather than learning: The practice of awarding points for completion tends to cause students to believe that the aim of their effort in school is to get work done. When learning is not the focus of the points received, it matters less who does the work and whether growth has occurred. Points for completion are often given to get students to do the practice, but they miscommunicate the true intention: to practice in order to improve. When “done” is the goal rather than improvement, growth is often marginal. And when we don’t look at the work, we can’t use it as evidence to guide further instruction; we are shutting our eyes to students’ learning needs, thereby shutting a few more students out of the game.

Distorting achievement: Including scores on practice work in the final grade is a common grading procedure that distorts achievement. When students need practice to learn, their beginning efforts are not generally as strong as their later performance. Averaging earlier attempts with later evidence showing increased mastery doesn’t accurately represent students’ true level of learning, and some give up trying altogether when they realize that they can’t overcome the hit to their grade caused by early imperfect trials. This also reinforces the damaging inference that being good means not having to try and that if you have to try, you aren’t good at the subject. If one of our goals is to get students to try, then trying shouldn’t result in the punishment of a low grade assigned too soon.
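
To make the averaging distortion concrete, here is a minimal sketch with hypothetical scores (the numbers and the letter-grade cutoffs in the comments are invented for illustration), comparing a straight average of all attempts against the student’s most recent evidence of learning:

```python
# Hypothetical unit scores for one student, in chronological order:
# early practice attempts followed by later, stronger performance.
scores = [55, 65, 80, 92, 95]

average = sum(scores) / len(scores)  # straight average of every attempt
latest_evidence = scores[-1]         # most recent demonstration of mastery

print(f"Averaged grade:       {average:.1f}")      # 77.4 -> a "C" on a 90/80/70 scale
print(f"Most recent evidence: {latest_evidence}")  # 95   -> an "A" on the same scale
```

The early attempts drag the average nearly twenty points below what the student can now demonstrate, which is exactly the misrepresentation the paragraph describes.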

A less common but equally damaging procedure, used when students don’t do well as a group on a test, is to “curve” the grades by reapplying the grade cutoffs at lower levels, so that, for example, what was a “C” becomes an “A.” This distortion of achievement masks the cause of low performance: Were the results inaccurate because of flaws in certain items? Were items too difficult for the level of instruction preceding the test? Were there items on the test representing learning that wasn’t part of instruction? Or did the results accurately represent learning not yet mastered? Each of these problems has a different solution, and each of them leads to misjudgments about students’ levels of achievement, the most harmful perhaps being the judgments students make about themselves as learners. When we engage in practices that misrepresent achievement, we cut more than a few students out of learning.
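
For illustration only, here is a small sketch of how reapplying cutoffs at lower levels relabels the same score (all cutoff values and the score are hypothetical):

```python
# Map a score to a letter grade given (grade, minimum score) cutoffs,
# checked from highest grade to lowest.
def letter(score, cutoffs):
    for grade, minimum in cutoffs:
        if score >= minimum:
            return grade
    return "F"

# Original scale: A >= 90, B >= 80, C >= 70, D >= 60.
original = [("A", 90), ("B", 80), ("C", 70), ("D", 60)]
# "Curved" scale after a weak class result: every boundary lowered 20 points.
curved = [("A", 70), ("B", 60), ("C", 50), ("D", 40)]

score = 72
print(letter(score, original))  # "C" under the original cutoffs
print(letter(score, curved))    # "A" under the curved cutoffs
```

The score itself never changes; only its label does, which is why the curve hides rather than answers the questions about what went wrong.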

All of these customs can be justified, but if learning suffers we have created a more serious problem than the one we intended to solve. They lead us to ignore students’ learning needs, and they discourage students from seeing themselves as learners.

So what is the antidote? Some key places to start:

  1. Emphasize that learning is the goal of education and focus instruction and activities on clear learning targets.
  2. Ensure that your classroom assessment practices treat learning as a progression and mistakes as a way to learn.
  3. Offer penalty-free feedback during the learning that helps students improve.
  4. Use assessment as a means to know your students and to guide your own actions.

And finally, strive to implement assessment practices that help students see themselves as learners. If learning is truly the intended goal of the education game, we can all play.

More on Late Work

Recently a teacher wrote in who liked our post on late work but had the following question: “How can my feedback be effective if student work isn’t timely? How can I be expected to give late work the same attention as work that comes in on time?” This is a very real concern for teachers considering whether or not to eliminate late penalties in their assessment practices.

Tom Schimmer writes:

The question about teacher time is an important one that can’t be ignored. For any new practice to be successful in the long term, we need sustainable routines or we risk burnout. No late penalties doesn’t mean no deadlines, however. When deadlines are missed, we need both an individual response and a “system” response to make sure that students are as current as possible. Some quick points…I’ll try to be brief.

1) Distinguish between “can’t do” and “won’t do” issues. A “can’t do” means the student actually doesn’t fully understand what to do to complete the work. A “won’t do” is not necessarily outright refusal; however, it does mean that the student knows what to do but hasn’t done it. Each of those requires a slightly different response.

2) “Won’t dos” need a place to go (AM/at lunch/PM) to complete the missing work. Who supervises, who confirms attendance, etc. are all “system” questions that principals and teachers have to come together on. If they “no-show,” then who gets the referral? Is it a “code of conduct” issue? Something else? “Can’t dos” need further instruction and, therefore, need a different response from the teacher.

3) In the schools I’ve worked in, we set an unwritten guideline of two weeks, meaning we wanted work to be missing for no longer than two weeks. We actually preferred one week but knew there would always be extenuating circumstances, and the scope of what’s missing might take longer to address.

4) Ask yourself whether the missing “evidence” is necessary or whether the standard(s) addressed in the missing work will be addressed again very soon. I call this overlapping evidence. Taking a standards-based approach means we look at meeting standards, not necessarily getting everything done. For example, missing homework is likely to be covered on an upcoming quiz, so it might not be necessary for students to complete all of it, since you know you will be assessing the very same standards shortly afterward. The big question with missing work is this: Is this piece of evidence necessary for me to accurately assess the student’s level of proficiency? If yes, then you need it; if no, move on.