Q: This is the 23rd annual ATI Summer Conference. Can you tell us how the conference got started?
Jan Chappuis (JC): Rick Stiggins and his wife, Nancy Bridgeford, had recently founded the Assessment Training Institute here in Portland, Oregon, to address a major gap in preservice education programs: teachers were generally not prepared to engage in effective assessment practices. The summer conference began as a way to bring together like-minded professionals to further ATI’s mission—to develop understanding of how day-to-day classroom assessment can and should serve learning. Today, 23 years later, “going to Portland” has been a transformational experience for thousands of educators in the US and around the world.
By Ben Arcuri
There is no bigger topic in education these days than assessment. Assessment has many definitions, depending on who is doing the talking, and the purposes of assessment and the intended users of assessment information differ tremendously as well. Assessment can guide students, guide the teacher, and drive education policy and reform.
By Natalie Bolton
Standards-based grading and reporting policies are becoming the norm in P/K-12 schools, districts, and states. However, as policies are created calling for shifts in grading and reporting practices, it is imperative that time be spent making sure that classroom assessments, both formative and summative, are of high quality. So, what tools or checks are in place to help teachers make sure their classroom assessments are of high quality before reporting whether a student has met a standard?
I’ve found that using the assessment development cycle as described by Chappuis, Stiggins, Chappuis, and Arter (2012) is a great tool to critique an existing assessment or to provide guidance as an assessment is being designed. Using the assessment development cycle helps ensure I can accurately communicate about student mastery of standards. All assessments, regardless of assessment method, should go through the cycle to ensure assessments are of quality. Three stages make up the cycle and are described in Figure 1.
By Nikki Roorda
“Beginning next year, our district is going to be grading using a standards-based method.” This sentence still evokes a vivid picture in my mind of my teammates and me sitting at a meeting with a district-level Teacher on Special Assignment (TOSA) who was making her way around our large suburban district delivering the message. I can picture one of my teammates nearly falling off her chair when she heard some of the tenets of the new grading system: not grading homework, not using zeros in calculating grades, and allowing multiple attempts to demonstrate learning. These suggested changes were in total contradiction to the way she had taught, assessed, and graded for the first 25 years of her career.
There are few things more sacred to teachers than how they teach, assess, and grade students. The study and implementation of standards-based practices, including teaching, assessing, and grading, evoke spirited conversations as practitioners, administrators, and parents work their way through examining the purpose of a grade. These deep-rooted conversations about why we assess and grade the way we do often stir passion and emotion in teachers. As the conversations unfold, there is a need to develop consensus among the teaching staff, at both the building and district levels, about the purpose of a grade and how that purpose is operationalized in the practices used in our schools.
Conversations about the belief systems underlying standards-based practices need to be thoughtful and need to surface the more controversial departures from a traditional grading approach, such as moving away from averaging scores, not giving students zeros, and treating formative assessments, such as homework, as daily practice. Success comes through meaningful conversation and a thoughtful implementation plan that lays out the current state and the desired state, the skills teachers will need, and a vision for implementation.
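The arithmetic behind the "no zeros" and "no averaging" tenets is easy to demonstrate. A minimal sketch is below; the scores and the 50-point floor policy are illustrative assumptions for this example, not figures from the post:

```python
# Illustrative only: how a single zero distorts an averaged grade.
# All score values and the 50-point floor are hypothetical examples.

def mean(scores):
    """Traditional averaged grade."""
    return sum(scores) / len(scores)

consistent_work = [85, 88, 90, 87]       # a solid B student
with_one_zero = [85, 88, 90, 87, 0]      # the same student, one assignment scored 0

print(mean(consistent_work))  # 87.5
print(mean(with_one_zero))    # 70.0 -- one zero erases more than a letter grade

# One commonly discussed alternative: floor missing work at 50 so a
# single miss cannot dominate the average on a 100-point scale.
with_floor = [85, 88, 90, 87, 50]
print(mean(with_floor))       # 80.0
```

The asymmetry is the point of contention: on a 100-point scale, the interval for an F (0-59) is far wider than for any other grade, so averaging in a zero punishes one missed assignment more heavily than sustained mediocre work would.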
The two sessions that I will be presenting at ATI’s 9th Annual Sound Grading Practices conference deal with building consensus around standards-based grading (Preparing for Standards-Based Teaching and Learning) as well as an overview to Ken O’Connor’s A Repair Kit for Grading: Fifteen Fixes for Broken Grades (Implementing Sound Grading Practices: An Overview). Both sessions are designed to help participants think about implementing a standards-based grading system in their district/system.
In this post from the ATI archives, Rick Stiggins paints a picture of what assessment for learning looks like in the classroom.
When using assessment FOR learning in a proper manner, teachers use the classroom assessment process and the continuous flow of information about student achievement that it provides to advance, not merely check on, student progress. The basic principles of assessment for learning are captured in the following checklist. Teachers who can say that these practices are part of their normal routine are applying the principles of assessment FOR learning:
- I can articulate the achievement targets that my students are to hit before I begin instruction.
- I regularly inform my students about those learning goals in terms that they understand.
- I routinely transform my achievement expectations into assessment exercises and scoring procedures that I am certain accurately reflect student achievement.
- I understand how to use classroom assessment to build student confidence in themselves as learners.
- The feedback that my students receive is frequent and descriptive, giving them information upon which to improve their performance.
- My students regularly assess their own achievement and feel comfortable managing their own improvement over time.
- I continuously adjust instruction based on the results of classroom assessments.
- My students are actively involved in communicating with others about their achievement status and improvement.
- My students are able to predict with some accuracy what comes next in their learning.
In short, the effect of assessment FOR learning, as it plays out in the classroom, is that students remain confident that they can continue to learn at productive levels if they keep trying to learn. In other words, they don’t give up in frustration or hopelessness.
Are you planning to have your students create their own rubrics this year? Here are some tips on how to guide this process from Judy Arter & Jan Chappuis’ Creating & Recognizing Quality Rubrics.
While we’re all in favor of involving students in rubric development, it is not true that anything goes when we do. We have to be ready to lead students to germane criteria. We have to have a clear picture in our own minds of where we want to take students so that we can engage them in activities and show them models that lead them to justified inferences about quality. Teachers generally know more about quality than do students. Even though students always have knowledge to build on, they also can harbor misunderstandings. Our rubrics send a message to students about what is important. Therefore, the rubrics they create have to cover the features that really do define a quality performance or product.
We once saw a rubric developed by third graders to evaluate reading comprehension by producing a poster of the story. Students focused on the quality considerations for an attractive poster—three colors, at least five pictures, neat, readable from a distance, and so on—instead of the quality of the comprehension displayed by the poster.
A solution? How about leading these students to deliberately evaluate two different criteria: comprehension of the story as revealed by the poster and the attractiveness of the poster itself. For the former, have them think about what would indicate that a student has understood the story. For the latter, let them know that it is always important to present work in an engaging manner. Here their criteria for a quality poster might prove sufficient.
Then, if we put two scores in the record book—comprehension and presentation—it would be clear what each score is evidence of. The presentation score would be used in figuring an art grade, not a reading grade, because the rubric for presentation represents art-related learning targets.
Excerpt from Arter, J. & Chappuis, J. (2006). Creating & recognizing quality rubrics. p. 61. Reprinted by permission of Pearson Education, Inc., Upper Saddle River, New Jersey.
By Tom Schimmer
For an assessment to serve a truly formative purpose, it needs to cause some action by the teacher and students. In other words, the information gleaned must have the potential to elicit an instructional change or adjustment going forward. The word potential is important here because the resulting assessment information will not always lead to instructional changes; the assessment may simply confirm that what the teacher has planned for the next fifteen minutes is the most favorable direction to take. The point is that the teacher be in a position to consider those changes in real time: that a teacher have the instructional agility to make the necessary maneuvers in as short a time as possible.
Formative assessment is a verb. When we view formative assessment as a noun, we create two challenges. First, the assessment-as-noun mindset views assessments as a series of events. This event focus creates the illusion that every time teachers assess their students they must create something tangible to hand out. Second, an event-based view of assessment implies that a teacher must “stop teaching” in order to “conduct” formative assessments. While there is nothing inherently wrong with this approach in small, periodic doses, those who view assessments as nouns will find the prospect of day-by-day, minute-by-minute formative assessment daunting as they ponder the number of artifacts they must create and collect. It’s no wonder some teachers proclaim that they “don’t have time for formative assessment.”
By Ken Mattingly
Grades have served many purposes for many people over the years. The general intent, I’ve always believed, has been to represent how students are doing in school. However, there’s often disagreement on the specifics of the grade and exactly “how” it represents student performance. Some feel a grade should reflect the amount of work done by a student. Others view a grade as a representation of when a student learned the material. I would argue that each of these camps is missing a key aspect of a grade.
While a grade can tell teachers, administrators, and parents about a student’s performance, if it doesn’t inform the student, then a key player in the learning environment is being left out. If grades are to serve as communication, then they have to address the person who makes the most learning decisions in the classroom — the student. Grades must tell students where they are in the learning process and what they have to improve on.
By Jan Chappuis
The preservice education my teaching colleagues and I experienced focused primarily on the act of instructing—different ways to deliver information—with no attention to responding to student work. Consequently, I, like many others, began teaching with a repertoire of four steps: plan, instruct, assign, and grade. First I planned what I would do and what my students would do. Then, I prepared the materials and resources. Next, I did what I planned, and they did what I planned. Last, I graded what they did. However, learning and teaching turned out to be far messier than I had been prepared for. Somewhere between “I taught it” and “they learned it,” the straight shot downstream to achievement sprang surprisingly into an array of diverging tributaries. Over the course of that first year, I discovered there are a thousand ways for learners to “not get” a lesson.
The belief underpinning my teacher preparation seemed to be that learning trots right along after good instruction, a sort of stimulus-response system, in which instruction alone will create learning. However, when students have continued learning needs after instruction, it is not necessarily an indication that something went wrong. Learning is an unpredictable process; instructional correctives are part of the normal flow of attaining mastery in any field.
By Cassandra Erkens
In the countries out-performing North America on the PISA report, educational leaders began their school improvement efforts by focusing on increasing rigor (Ripley, 2013). In the US, businesses and colleges alike are clamoring for today’s graduates to function at higher levels than they currently do. Toward that end, we’ve seen an increase in the level of rigor demanded by the newly emerging next generation standards (Common Core, national science standards, new state- or province-created standards, and so on). The question is no longer whether we should increase the levels of rigor for our learners, but rather how we can successfully increase the levels of rigor we offer and exact from our learners today.