Cinderella Summatives

Posted in Assessment, Education

It was probably inevitable that when educational experts began highlighting formative assessment as the Golden Child because of its incredibly positive impact on student achievement, summative assessment would be dubbed the evil stepchild and relegated to the drudgery of ‘clean-up’ work.  Summative assessments are tolerated, but distastefully so: today they are popularly referred to as the tools that provide autopsy data.  However, like our diamond-in-the-rough Cinderella, summative assessments are misunderstood, under-appreciated, and chock-a-block full of royal potential.

The misunderstanding of the role and value of summative assessments is evident from the classroom to the boardroom.  The visible disconnect between what is happening and what could be happening with summative assessments suggests we have much room to grow in our understanding of a truly healthy and balanced assessment system embedded in a rich learning environment.

There are many fallacies about summative assessments and it’s time to set the record straight:

  • Summative assessments happen on the last day of a marking period or unit of instruction.  We all seem to believe that, but where is it written?  It’s true that ‘summative’ suggests a ‘summary’ of the final results, but there is no doctrine that says it must happen on the last day.  Generally, the kinds of assessments that can happen on a single day are paper/pencil assessments.  What if the paper/pencil test were to occur a day or two before the end of the unit, allowing follow-up time for enrichments and interventions?  “End” is not the equivalent of “last day.”  What if the summative process were a multi-day event woven into the last week of the unit so we didn’t have to find time outside of the classroom to address gaps?  This strategy, however, would only serve as an interim Band-Aid while we strive to create a more balanced assessment system.  If we truly engaged in the formative process, learners should not require intervention following the summative assessment.  In fact, learners would walk into a summative assessment experience and view it as a knowledge jubilee, for it would only serve to confirm what their formative data have been indicating all along:  they have mastered the critical expectations outlined in the standards.
  • Summative assessments mark the completion of content or processes.  We need to do a much better job of linking our summative assessments together so that learners can grow between summative assessments.  Common Core standards, Next Generation Science Standards, and 21st century skills are helping to pave the pathway for such work because, in all of those frameworks, core processes can be practiced over the course of multiple units of instruction.  Our best examples of summatives over time and across content happen in performance-based disciplines such as Physical Education, Music, and Art.  Core skills in those disciplines thread through multiple assessments across the entire semester or year of learning.  In Language Arts, learners employ the 6 Traits of Writing to improve all aspects of their writing over the course of producing persuasive, narrative, expository, and comparison papers.  Such learning opportunities can happen in all of our disciplines.  For example, with the new science standards, science learners could get better at planning and carrying out investigations, analyzing and interpreting data, and engaging in argument from evidence across the gas lab, the heat and cool lab, the gravity lab, and so on.  Social studies learners could improve their skills in analyzing patterns, making predictions, drawing conclusions, and arguing a perspective across the various political structures, eras, and cultures they study.  Mathematics learners can improve their computational accuracy, communication of math concepts and language, and problem-solving skills no matter the algorithm of a particular unit of instruction.  “Summative” is not synonymous with “done.”  If we don’t revisit early learning with ongoing practice, what are the chances our learners will remember September’s learning on a May exam?
  • Summative assessments offer autopsy data.  Sadly, this belief is based in current reality, but that is merely a reflection of how summatives are most often employed; it doesn’t have to be the reality.  The fundamental purpose of summative assessment is to certify mastery.  Rarely would a single data point be sufficient to certify anything, except maybe in the case of death.  But mastery is about thriving, not dying, and tons of evidence is needed for mastery to be proven true.  In the evolving and connected skill- or process-based assessment system described above, the new data from each new summative should add a new dimension to the full picture.  Consider this:  you’ve been dealing with stomach pains, so you go to the doctor.  The doctor will likely run a series of tests (temperature, weight, x-rays, blood draws, questions about the severity of the pain, etc.).  Each test generates a summative piece of data: the data are the data are the data.  But each new piece of data adds a new dimension to the full picture.  And if the doctor does a second test to confirm or refute an earlier finding, the new data provide the new finding.  The doctor (thankfully) does not average the first result with the last result, because that would create inaccurate data.  Instead, the doctor looks at the most recent result as the current reality.  Some might argue that viewing summative assessments in this light simply means early summative results actually become formative results as time passes, but that would be inaccurate.  A summative assessment is summative because it requires learners to certify current levels of proficiency when standards are integrated in meaningful ways.  But summative assessments offer ‘snapshot’ or ‘moment in time’ data.  The data from such certification processes enter into the full picture and must be considered.  But ‘considered’ does not always mean ‘counted.’  In a true learning culture, averaging is not appropriate.  Learners would strive to demonstrate mastery through all of their summative assessments, and teachers would have to look at later results to determine final competency levels.  In sum, summative data are dynamic, not static like autopsy data.
  • Summative assessments are comprehensive, and therefore lengthy.  A unit generally only has one summative.  In fact, it is possible to have smaller summative assessments and multiple summative assessments within a unit of instruction.  Often, these are ‘gateway’ assessments: mini assessments aimed at mastering a critical skill before the rest of the learning can occur.  It would be critical, for example, for the Family and Consumer Science teacher to ‘sign off’ on a learner’s ability to use knives appropriately before allowing the student into the kitchen to participate in a lab.  The same can be true for using the band saw in Industrial Technology or the Bunsen burner in Science.  It’s even feasible that the Language Arts teacher would want a guarantee that learners understood how to record primary sources before launching into research, or that Social Studies teachers would want to certify that learners understood the anatomy of an argument before setting students loose to prepare for a debate.  A single unit of instruction can have small and large summative assessments, all of which strive to certify a level of proficiency before learners are allowed to advance.
  • Summative assessments must mirror high-stakes assessments and are therefore dreaded and dull.  How many times must one eat a Mars bar before one understands the flavor and texture of a Mars bar?  It’s true that our learners might need to experience various testing formats so they are ready for high-stakes assessment formats, but that certainly doesn’t mean all of our summatives, or for that matter even most of our summatives, must be patterned after high-stakes testing formats.  Summative assessments are like the big game: they are what the learners have been preparing for all along.  As such, they should offer fun, challenge, the dynamic integration of ideas and skills, the chance to solve meaningful problems or create new options, and continued learning through the assessment itself.  In fact, if we’ve built readiness on the foundation of solid formative practices, our learners enter the summative experience with confidence and hope.  The summatives should simply serve as a public celebration of how much learning has happened along the way.  In this light, formative assessments might actually be the ‘dull’ ones, because, like the hard work of daily practice, they represent the little parts or the scaffolding that can only lead to the big game.  If we examine our standards carefully (State, Provincial, and the new Common Core standards), we’ll likely note that most are written at the level of performance.  It is impossible to assess ‘designing and conducting a scientific investigation,’ ‘engaging in the art of argumentation through political debate,’ or ‘investigating a compelling research question’ through a selected-response test.  Assessment expert Dr. Rick Stiggins frequently asks his audiences, ‘What’s the one assessment that you and your learners can’t wait to get to?  That’s the one I want to experience!’  That assessment would be a summative assessment.  It is far from dreaded and dull.

We all know the Cinderella story:  the ugly and unwanted stepchild was really the princess.  It’s time to see summative assessments with new eyes.  They should not be a royal pain that must be tolerated but instead a magnificent opportunity to change lives through provocative nurturing.  And there is majesty in that.

One thought on “Cinderella Summatives”

  1. Hey Pal,

    Just a quick note to say that I LOVED this bit. The content is fantastic, the writing is on point, and the format of dispelling myths in a list is perfect for a blog post.

    The only change that I would make if your goal is to engage more readers is to trim the post down a bit. Maybe take one of your myths out — or write two posts sharing myths in a Part One and Part Two series.

    The truth is that a ton of blog readers are either skimming or reading on their phones — so long form writing doesn’t always resonate in this forum.

    You’ve pushed my thinking, though — and that’s awesome.

    #grateful

    Bill
