Most Recent? Most Frequent? Most Accurate?

by Tom Schimmer

One of the fundamental tenets of standards-based grading is that greater (if not exclusive) emphasis is placed on the most recent evidence of learning. As students move through their natural learning trajectory, it is important that they be credited with their actual levels of achievement. That is, when students reach a certain level of proficiency, what is reported should accurately reflect that level. Averaging new evidence with old, for example, distorts the accuracy of the grade; the resulting grade reflects where the student used to be, not where the student is now, since at some earlier point the student was likely performing at the level the average represents.

What we have collectively realized is that the speed at which a student achieves has inadvertently become a significant factor in determining a student’s grade, especially within a traditional grading paradigm. When averaging is the main (or sole) method of grade determination, success is contingent upon early success, or the average of what was and what is will continue to distort the accuracy of students’ grades. Never forget that every 40 needs an 80 just to get a 60. That’s pure mathematics; the lower the initial level, the more a student has to outperform him- or herself in order to achieve even a minimal level of proficiency.
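The arithmetic above can be made concrete in a few lines. This is a minimal sketch with hypothetical scores (the function names and the 40/80 figures are illustrative, not from any gradebook system), contrasting grade determination by averaging with grade determination by most recent evidence:

```python
def averaged_grade(scores):
    """Traditional approach: the mean of all recorded scores."""
    return sum(scores) / len(scores)

def most_recent_grade(scores):
    """Standards-based emphasis: credit the latest evidence of learning."""
    return scores[-1]

# A student who starts weak but reaches proficiency:
scores = [40, 80]

print(averaged_grade(scores))     # 60.0 -- the "every 40 needs an 80" effect
print(most_recent_grade(scores))  # 80 -- credits the level actually reached
```

The average drags the reported grade back toward the student's starting point, while the most recent score reports the level the student has actually demonstrated.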

More often than not, the most recent evidence of learning is the most accurate. This is especially true when our standards, targets, or performance underpinnings represent foundational knowledge and skills. Foundational knowledge and skills are typically elements of the curriculum with a fairly linear progression, and slipping back is highly unlikely; once students truly know or can do something, it is unlikely that they will, even after an extended period of time, suddenly not know it or not know how. That doesn’t mean mistakes won’t occur. Even the most proficient students make mistakes, but that doesn’t mean they’ve suddenly lost proficiency; errors are inevitable. Does that mean the most recent evidence is always the most accurate? Not always.

Sometimes the most frequent evidence is the most accurate. Generally, the more complex the standard or the demonstration of proficiency, the more likely it is that a teacher will need to consider the most frequent evidence as the most accurate. Take writing, for example. Students are often asked to write in a variety of styles and/or genres, so taking the most recent writing sample may be misguided, since the expected style or genre could be the student’s weakest. If a teacher assigns an argumentative paper as the final paper, and that is the student’s weakest form of writing, then the poorer result may give the appearance that the student’s writing skills have declined. However, if the teacher had simply reordered the assignments and made argumentative writing the first paper, the optics would reveal a very different trajectory. With complex standards/outcomes like writing, accuracy is more effectively achieved when the teacher examines all of the writing samples and looks for the most frequent results as they relate to the intended standards.

Staying with the writing thread, the most recent writing samples may be the most accurate within a particular style; if a student writes multiple argumentative papers, the most recent is likely the most accurate. As the styles change, however, the most frequent may be more accurate. The point is that we need to be more thoughtful about how we apply the concepts of most recent versus most frequent. This is more art than science, and teachers must become comfortable using their professional judgment. Remember, the goal is accurate grading and reporting. The art of grading is the teacher using his or her professional judgment to determine a student’s level of proficiency. Teachers are more than data-entry clerks who enter numbers into an electronic gradebook; they are professionals who understand what quality work looks like, who know what is needed for students to continue to improve, and who know when the numbers don’t tell the full story.

I am looking forward to sharing more on this topic at the Pearson-ATI Sound Grading Practices Conference (Dec. 5-6, 2013) in my session titled Most Recent? Most Frequent? Other sessions I will be leading include Zero Influence-Zero Gained, which examines the misguided logic behind punitive grading, and Effective Leadership for Sound Grading and Reporting, for administrators and teacher-leaders looking to implement more sound, fair, reasonable, and accurate grading practices in their department, school, and/or district.

As well, I will be presenting a keynote session entitled Accurate Grading with a Standards-Based Mindset where I will outline the mindset necessary to begin the shift away from traditional grading toward a more accurate, standards-based approach that maintains student confidence and focuses on learning rather than the simple accumulation of the requisite number of points.

If you’re unable to attend the conference, please take some time to follow the hashtag #ATIcon on Twitter.


Responses to “Most Recent? Most Frequent? Most Accurate?”

  1. I love the idea, here, that it takes some processing to understand what the scores have meant, and that it depends on the type of standard being used. A “writing” standard is different from an “argumentative writing” standard, which is different from a “can use details in the text to accurately support a thesis” standard. This, I believe, will also end the last-minute crush of grading teachers sometimes do at the end of a marking period, in that I fully believe students and teachers should speak a week beforehand to reflect on the work and consider where the “learning” is. If a student has an uncharacteristically weak mark that’s come in later than some strong ones, perhaps a revision is in order in that case. But it is a nuance, a teacher’s artistry and skill, that’s going to make these new grades accurate.

    I’m not sure what the formulas will do with their heads, once spun off in disbelief.

