The One Question That Will Change Your Data Review Conversation

Ever been to a data review meeting like this?

Data is projected for everyone to see.  

You group students into those who are exceeding, meeting, and not meeting the target expectation.

Everyone gives reasons why students have reached varied levels of proficiency.

In the last 5-10 minutes, you come up with some ideas for what to do for these students, focusing mostly on those who are struggling.  These ideas almost always include the following:  reteaching skills in small groups and/or a “double-dip” with a specialist.

Everyone agrees to these ideas, but they either get pushed aside by the new standards being taught or people end up planning the language and strategies of the reteach lessons on their own.  For the most part, everyone ends up in the same place at the next meeting.

I’ve experienced hundreds of meetings like this as both a participant and a facilitator.  It can be incredibly frustrating and make everyone feel like they’re wasting their time.  However, I’ve also experienced the opposite: conversations that result in specific goals and rich plans for student learning, leading to huge growth for kids.

So what’s the difference?  How can we have meaningful data conversations each time?  

It’s actually a lot simpler than you think, but it requires openness on the part of the participants.  This past week we had a data review of our winter Fountas & Pinnell data planned for each grade level at their weekly 60-minute PLC.  The objective of the meeting was to answer the question, “Is our Tier One instruction meeting the needs of our students?”

We started out similarly to the description at the top of this post.  We looked at data and gave possible reasons for the results.  This year started with 46% of our 2nd-grade students not meeting the grade-level benchmark in literacy, but that percentage had now dropped to 29%, a significant improvement in just a few months.  This growth had truly been a group effort that included:

  • Coteaching with the EL teacher and reading specialist
  • Small group instruction with the reading specialist outside of the classroom
  • A deep data dive into phonics skills using the Core Phonics Inventory, run by our school psychologist, with progress monitored and reviewed with the team every six months.  The results shifted instructional practice and grouping.
  • Parent volunteers who were trained by our reading specialist and one of our 2nd-grade teachers to come and read with ALL kids daily
  • A 5th-grade mentor who also read regularly with students
  • The instructional coach working with the team to develop a Tier One phonics progression with learning experiences
  • A retired teacher from Jefferson regularly volunteering, working with groups and reading with individual students

When we got to the part where we discussed what the classroom teachers were doing instructionally, they attributed the success to their small group instruction.   As has happened hundreds of times before, we could have stopped there.  Everyone knows what small group instruction looks like, so it must be the same, right?

Nope.  

When we were about to move on, I asked one of the teachers a simple question, the question I would recommend asking every time you meet as a group.

“What does your instructional practice actually look like?”  

From this one simple question, we got a variety of answers that resulted in a huge shift in the direction we were going as well as a concrete plan for next steps.  One teacher explained that she has students do the reading for group at their desks, so the time she spends with them is actually devoted to talking about the book and developing instructional strategies.  Another teacher explained that she was working on questioning, which was different from what another teacher on the team was doing.  The third member of the grade-level team said that she gives students at least 10 minutes each day to just read independently.

As we delved more deeply into the specifics of their instruction, we realized as a team that students were frequently meeting with teachers and getting systematic instruction, but that the amount of time students had to read independently varied greatly.  Teachers were honest that they worried many students couldn’t do this for extended periods on their own.  This, they believed, was why the students had come in as such struggling readers at the beginning of the year: they were mostly “fake reading.”

We celebrated as a team how far the students had come since the beginning of the year, but we also really started to push one another’s thinking on independent reading.  Essentially, how could students continue to grow if they were never really reading for longer than 10 minutes on their own?

Instead of leaving with a vague “let’s make sure our students read at least x minutes a day” and no concrete plan for how to do it, we made sure the team was supported with both ideas and resources.  Our instructional coach brought up Jennifer Serravallo’s engagement inventory, which many on the team had used before, and offered to come in and administer it so the teachers could keep working with students.  Another part of the plan was freezing some of the group work so that the teachers could monitor independent reading for “fake reading” as well as for independent strategy use; this would be done through conferring.  They also planned to redo their “Good Fit” book discussion as well as their process for students filling their book boxes, which would now happen outside of independent reading time.

The team ultimately decided to set a goal for students to read independently for 20 minutes a day.  This benchmark would be progress-monitored and discussed regularly at PLC meetings.  Their team conversations would include explicit discussion of conferring strategies and of students who were struggling with independence, followed by specific plans of action moving forward.

Another realization that came out of this conversation was the importance of academic language: students might be missing understanding simply because they didn’t know the vocabulary.  An additional plan was created for this as well.  The meeting finished with a few minutes to spare and a sense of accomplishment.

It is amazing what can be accomplished in a short time when the goal is clear and the participants share deeply.  DuFour created these PLC Questions decades ago:

  • What do we want all students to know and be able to do?
  • How will we know if they learn it?
  • How will we respond when some students do not learn?
  • How will we extend the learning for students who are already proficient?

Each of these questions plays a critical role in the power of a PLC, but if we don’t have deeply explicit conversations about them, they are relegated to a mere discussion tool for organizing a meeting.  The power of a PLC lies in the expertise of its participants: trusting one another and benefiting from one another’s strengths and ideas.   The next time you are planning or participating in a PLC, give explicit time to share how you teach, not just what you did.  Making this tiny shift will create an incredible ripple effect on student learning.

Data Review – A Little Less Talk, a Lot More Action

Data.  It’s a four-letter word.  Especially in education.

Many have argued that it plays a pivotal role in increasing student growth.  The four key PLC questions from DuFour are centered around it.  Even Danielson includes it in Domains 1, 3, and 4 of the teacher evaluation rubric.

And yet, when many educators hear that it’s time for a data review meeting, they either cringe, cry, or circle up their friends to plan who’s bringing what treats to get through the agonizing process.

So what’s wrong with data?  Specifically, what’s wrong with data review and why does it get such a bad rap?  More importantly, what can we do about it?

Problem #1: We Don’t Engage All Stakeholders

When I was a teacher, I remember being invited to countless data review meetings where the reading specialists would project graphs of student growth (or lack thereof) from something called AIMSWEB.  We would painstakingly go through each student, with the specialists all sharing their insights.  Periodically I would be asked for feedback on my students, but in the end the decision to change some sort of intervention would be made by the other “experts” in the room, without my involvement.

I sincerely hated those meetings.  I became an expert head nodder.  Most of the time I was dreaming about what I would eat for, let’s be honest, any meal, or thinking about which members I would take from my favorite boy bands to form the greatest band of all time.  (This is way trickier than you think.  You need at least one bad boy, which means you can’t just take ALL the cute ones.)

From start to finish, each stakeholder, from educator to specialist, needs to be an active creator of and participant in the process and beyond.  The reason data review has gotten such a bad name is that many educators, like me, have experienced it as something being done to them rather than the collaborative process it should be.  No one ever explained to me WHY we were having these meetings or what the expected outcomes were.  No one ever asked me what data I thought would be meaningful to look at, or even to bring data with me.  I was simply told I had a substitute and was to show up to these monthly data meetings.  They were supposed to be all-important events, but I usually left wanting two hours of my life back and needing a coffee.

Problem #2: We Never Get Anywhere

When I was an instructional coach, one of the big parts of my role was Data Coach.  I ran data review meetings, helped teachers look at formative classroom data, and facilitated discussions in PLCs.  DATA was a four-letter word regularly used in my vocabulary.  And (gulp), I liked it.

“Here’s What, So What, Now What” was my jam.  We used it to evaluate everything from exit slips to Fountas & Pinnell assessments to reading responses and everything in between.  The feedback I got from the teachers I worked with was positive.  It was a well-organized way to present the data (Here’s What), talk about causes (So What), and then come up with a plan for what we were going to do about it (Now What).

Unfortunately, as I have been reflecting on this protocol in preparation for some data review with my current staff, I have come to realize that there were some serious flaws in the way we used it.

Wait, what?  Did I just say the mack daddy of data protocols is all wrong?

Yep.  I did.

Here’s why.  I’m a control freak.  And I broke it.

Yep.  I’m a control freak.  Ok, recovering control freak.  Back in the hard-core CF days I thought we needed a list of guiding questions, as well as categories of students to look at, when talking about data.  I needed a predictable structure that would get us from point A to point B to point C each time.  I mean, how would teachers know what to talk about if I (God of Data) didn’t guide them step by step through the process each time?

Recovering CF me realizes how incredibly idiotic this was for two reasons:  

  1.  See Problem #1
  2. There are so many questions and levels of students to talk about that we rarely made it to the Now What (THE MOST IMPORTANT PART) in a 45-minute PLC and would have to continue at the next meeting

Although we had some GREAT conversations about students using this protocol, I have to imagine that my staff walked away feeling frustrated when we had to wait a week to get to the action plan or meet after school to finish it.  Without action, data review is just a pretty little template with some glorious notes about our thoughts, but no real impact on student learning.

Here’s What, So What, Now What is still a great protocol; it just needs to be simplified.  Don’t try to look at all the subgroups at once.  Select the group that is most meaningful for your team to talk about and use the protocol only for that group.  Select a few questions to focus on during your conversation.  Doing “All the Things” is not productive when discussing data.

Problem #3: Meaningless Data

Problem number three could be argued to be a large part of number one.  

Many times we are asked to analyze data that is not very meaningful because it has gone well past its expiration date.  Standardized tests like PARCC or IAR, or whatever it’s being called this year, are thought to be important data to analyze because our school’s success is often judged by that benchmark.  However, when the results come six months into the following year, it’s hard to find any connection between them and current teaching practices.  The kids have grown.  Our teaching has changed.  Nothing is the same.  It’s hard to get buy-in to discuss results so far in the past.

Another way that data can be meaningless is when it doesn’t match our strategic outcomes for students.  If we say that as a school we are trying to foster collaboration, creativity, communication, and the rest of the six C’s, then it is difficult to argue that we should spend hours analyzing a multiple-choice test or any other assessment that shows no evidence of those indicators.   If data is going to be meaningful for analysis, it has to match our intended outcomes.

Other times it really does go back to Problem #1.  If we don’t explain the why or ask everyone involved for feedback in selecting the data to analyze, then there will ultimately be little impact on students.  We have to move beyond the idea that data review has to involve fancy charts, graphs, or percentages.  Coming from a business background, I love me a big fancy spreadsheet with a pie chart or bar graph involved, but if we never move beyond simply looking at numbers, data review is going to continue to lack meaning for many.

My So What

In order to combat the full-fledged groans that usually commence at the mention of the word data, we have to simplify the process.  Let’s stop making it this mystical thing that requires elaborate templates and official numbers.  The whole point of looking at data is to cause growth in students.  The best way to do this is to select meaningful evidence that helps us make instructional decisions we can act on.

That being said, I don’t have all the answers (yet), but here’s where I’m currently at:

  1. Select a facilitator.  Have this person engage all stakeholders prior to the meeting about an area they see a need to discuss.
  2. Decide on some evidence (data) that would demonstrate this need. (exit slip, writing sample, conferring notes etc.) 
  3. Decide as a team, prior to the meeting, how you will assess the data, and come to the meeting with it already “graded.” (Note:  This is not extra work.  This is simply assessing something you would already authentically be doing or have done.)
  4. At the meeting, answer the following questions:
    1. What does this evidence tell us about our students?  What did they do well?  What did they struggle with?
    2. How could we build on their strengths to create success?
    3. What action steps do we need to take so that each student will grow?
    4. What questions do we still have?
  5. Create a plan of action with a follow-up date included.

Albert Einstein allegedly once said, “Everything should be made as simple as possible, but not simpler.” To me this means we need to simplify the process, but not the thinking involved in looking at student data.  It is my hope that through several iterations and feedback from my team, we will be able to further refine these processes and get to the heart of what will move all students forward.

Christina