Data Notebooks


Assessment experts (e.g., Jan Chappuis, John Hattie, Rick Stiggins, and Dylan Wiliam, to name a few) state that student motivation has been disconnected from the assessment process in the past, but must be reconnected if schools are to create high levels of student achievement.  Ultimately, students must be the decision makers regarding their progress while learning.  In an effort to reconnect student motivation to assessment, many K–12 educators across the US have begun the rigorous work of helping learners of all ages track their own learning progress in the form of student data notebooks.

But a data notebook is simply a tool, and if it is not managed well, it will not impact student motivation or achievement in positive ways.  There are two important questions educators must consider before asking students to create data notebooks:  1) What promotes growth or change over time? and 2) What are the appropriate ingredients to generate motivation?

First, it’s important to understand growth or change over time, because in order for data notebooks to work, they must be framed around making progress over time.  Learners must maintain a growth mindset; otherwise, all the data gathered in the world will not support the desired change.  Carol Dweck (Mindset: The New Psychology of Success, 2006) points out that when learners have a growth mindset, they are willing to take risks, they recognize mistakes as learning opportunities rather than failures, and they direct their efforts toward reducing the discrepancy between where they currently are and where they would like to be in their learning progress.  By description, it might seem as if growth-mindset learners would always be our brightest and our best, but that would be an over-generalization.  True, growth-mindset learners demonstrate the attributes that educators admire and respect, but these learners are also willing to fail.  Failing often and failing well is a risk-taking, learning-based behavior that shouldn’t be a problem (see The Economist’s April 14, 2011 article “Fail Often, Fail Well”), but it never fares well in our grade books.

As a corollary, it is likewise wrong to assume that fixed-mindset learners – those who don’t take risks or apply effort to change their current status – are the traditionally labeled “failing learners.”  There are many, many “A” students who, when faced with a challenge that might impact their grade negatively, would prefer to avoid the challenge in order to maintain an image of being smart.  These learners are rewarded for skating by in our grade books, but they are not engaged in deep learning.

The recording and tracking of data and assessment results over time must inspire continued effort and motivation.  In his research-based book Drive: The Surprising Truth About What Motivates Us (2009), Daniel Pink suggests that motivation is not driven by carrots and sticks; rather, it is driven by three somewhat surprising ingredients:  Autonomy, Mastery, and Purpose.  What does that look like with regard to the assessment tools and resulting data that support learning?

  • Autonomy – The learner is the number one instructional decision maker in every classroom.  He/she must gather meaningful information (not aggregate percentages or total points) following each assessment and then organize the data in a visible manner that shows a trajectory of growth throughout the unit of instruction.  Ultimately, the learner must be able to make quality decisions about what comes next in his/her progression of learning, what skills and strategies he/she will bring to bear on the task ahead, and how he/she will monitor continued progress.
  • Mastery – Formative assessments are used to build hope and foster efficacy.  When formative assessments are managed well, the learner is able to make mistakes during the learning process and still demonstrate mastery by the end of the unit or learning period.  In a rich formative assessment system, the learner can engage in error analysis:  He/she gathers feedback and arranges his/her data and evidence in a manner that creates a clear view of patterns or anomalies in the data.  At that point, he/she can employ the strategies and skills necessary to create improvement in targeted and specific areas from one assessment to the next.  He/she operates under the assurance that success is still possible.  The grades resulting from the summative assessment(s) reflect an accurate score of the learner’s mastery against a given set of standards and achievement level descriptors, not an average of all assessments during the unit.
  • Purpose – The assessments that are tracked in data notebooks are engaging and meaningful.  The learner can see ‘worth’ in the data he/she is tracking.  Most importantly, the culminating data enable the learner to draw healthy and accurate conclusions about himself/herself, developing insights into personal strengths and challenges and reflecting on favored content and learning styles.  When the learning is provocative, engaging, and self-illuminating, the learner is better able to maintain a commitment to take risks and continue learning.

If data notebooks are not set up to nurture the fundamental attributes of learning – growth mindsets and motivation – they simply become laborious, time-consuming tools.  When that happens, data notebooks are soon abandoned because they are ineffective and inefficient – or, more bluntly, useless.

There are some criteria that help data notebooks become the rich learning tools they need to be if they are to increase motivation and growth:

  • Learning goals must be tied to learning standards.
  • Data cannot be perceptual (e.g., self-assessment scores like “I think I’m a 3”).  All tracked data must be evidence-based.
  • Data recording sheets must be manageable in number and meaningful.  When everything is a goal, then there really are no goals.  Learners must track the essentials.
  • Data that are tracked must show growth between assessments.  This extends far beyond simply documenting pre- and post-assessment data.
  • Data must be organized in visual ways – bar graphs, pie graphs, run charts, etc. – so that the learner can see progress being made (a simple sketch of such a record follows this list).
  • The learners must be in their data notebooks regularly.  This might not mean daily, but it does mean often enough to inspire action between assessments.
  • The learners must be the ones making the additions and notations in their notebooks because it’s their data.  When teachers record the data for them, the data have minimal impact on the learners’ motivation or growth mindset.  It does not matter if learners fill in their check boxes or bar graphs sloppily and the notebooks are not pretty for parents.  It matters that the data are owned by the primary stakeholder and that the same stakeholder can then make better decisions as a result.
  • Data must be confidential.  No child should ever be singled out by his/her results in the data notebooks or on the data walls in classrooms and hallways.
  • Best practices in formative assessment are necessary to engage learners meaningfully as they interact with their data while on the learning journey.
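
As a rough sketch of what several of these criteria add up to – standards-based goals, evidence-based scores, and a visible trajectory between assessments – here is one way the underlying record could be pictured.  The standards, scores, and 4-point scale below are invented for illustration, and the notebook itself may of course be kept on paper:

```python
# A minimal, hypothetical sketch of a data-notebook record: a few example standards,
# each with evidence-based scores recorded across several assessments, printed as a
# simple text run chart so growth between assessments is visible at a glance.

notebook = {
    "RL.7.1 Cite textual evidence": [2, 3, 3, 4],  # scores on an invented 4-point scale, oldest first
    "RL.7.2 Determine a theme":     [1, 2, 2, 3],
    "W.7.1  Write arguments":       [2, 2, 3, 3],
}

for standard, scores in notebook.items():
    print(standard)
    for i, score in enumerate(scores, start=1):
        bar = "#" * score                          # one mark per point earned
        print(f"  Assessment {i}: {bar} ({score}/4)")
    print()
```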

When in doubt as to whether data notebooks are being employed effectively in classrooms, teachers should fall back on the educator’s Hippocratic oath:  “Above all, do no harm.”  Data that are used to label, sort, or highlight incompetencies violate the very core of that oath:  do no harm, and maintain absolute regard for learners and the learning process.  After all, the work of transforming another human being – the work for which data notebooks are employed – is nothing short of sacred.

References:

Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2004). Working inside the black box: Assessment for learning in the classroom. Phi Delta Kappan, 86(1), 9–21.

Chappuis, J. (2009). Seven strategies of assessment for learning. Portland, OR: Pearson Assessment Training Institute.

Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York, NY: Routledge.

Hattie, J. (2012). Visible learning for teachers: Maximizing the impact on learning. New York, NY: Routledge.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.

Pink, D. (2009). Drive: The surprising truth about what motivates us. New York, NY: Riverhead Books.

Stiggins, R. (2008). An assessment manifesto: A call for the development of balanced assessment systems. Portland, OR: ETS Assessment Training Institute.

Stiggins, R. (2007, October 17). Five assessment myths and their consequences. Education Week, 27(8), 28–29.

Wiliam, D. (2011). Embedded formative assessment. Bloomington, IN: Solution Tree Press.

Wiliam, D., & Thompson, M. (2007). Integrating assessment with learning: What will it take to make it work? In C. A. Dwyer (Ed.), The future of assessment: Shaping teaching and learning (pp. 53–82). New York: Lawrence Erlbaum Associates.

4 thoughts on “Data Notebooks”

  1. I find with my seventh-graders that if they are motivated by the activity that accompanies the notebook entries, then the students are generally EAGER to do the note-booking. Case in point: today we did the Project WILD activity, Oh Deer! When we returned to the classroom, students jumped right into the data recording and line graph set-up with no complaints.

  2. Are the formatives graded and put into the gradebook officially, or are they just corrected and the student documents how they did, but it doesn’t count toward their grade?

    1. Please forgive the serious delay in my response. I have been experiencing a multitude of technical difficulties.

      You ask a great question. Formative assessments can be graded, but a grade alone is never sufficient information for a learner to make an instructional decision. For example, if a learner scored 16 out of 20 on a formative assessment (which would equate to 80%, or possibly a B-), that score doesn’t tell the learner what he/she mastered or didn’t master. If there were 4 learning targets on the assessment and the learner got one item wrong in each target, that would be entirely different data than if the learner got all 4 wrong in one target. Learners cannot make informed instructional decisions based solely on points or grades. Formative assessment is about activating learners to be informed decision makers and invested partners on the learning journey.
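
      A minimal sketch of that point, assuming (purely for illustration) 4 learning targets of 5 items each with invented target names: the same 16/20 can hide two very different target-level pictures.

```python
# Two hypothetical learners both score 16/20 (80%), but the per-target picture differs.
# Target names, item counts, and score patterns are invented for illustration.

def summarize(name, correct_by_target, items_per_target=5):
    total = sum(correct_by_target.values())
    possible = items_per_target * len(correct_by_target)
    print(f"{name}: {total}/{possible} = {total / possible:.0%} overall")
    for target, correct in correct_by_target.items():
        print(f"  {target}: {correct}/{items_per_target} = {correct / items_per_target:.0%}")

# One item missed in each target: every target sits at 80%.
spread_out = {"Target 1": 4, "Target 2": 4, "Target 3": 4, "Target 4": 4}

# All four misses concentrated in one target: three targets mastered, one at 20%.
concentrated = {"Target 1": 5, "Target 2": 5, "Target 3": 5, "Target 4": 1}

summarize("Learner A (misses spread out)", spread_out)
summarize("Learner B (misses concentrated)", concentrated)
```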

      And, whether you grade it or not, a learner must be able to work beyond his/her early indications of proficiency while in the learning process. This means that when using formative assessment, grading is not as much the culprit as the practice of averaging is. Triangulated data – multiple validating points of evidence at the end of the learning – are far more accurate and reliable evidence of a learner’s concluding levels of mastery.
