Sunday, September 6, 2015

Highly Ineffective

As teachers head back to school, we have lots of hoops that we jump through to get ready: getting supplies, attending PD, and creating our classroom spaces...to name just a few. Sadly, a new hoop has been added for those teachers whose evaluations are based on NYS 3-8 test scores. Because the students' scores take
so long to arrive from NYSED, our evaluations aren't finalized until the beginning of September (four months after said tests were taken). The tests comprise 40% of my evaluation, which will soon be 50% once my school's hardship waiver ends and Governor Cuomo's most recent evaluation reforms are put in place. That is, unless the NY Supreme Court's decision in Lederman v. King vindicates all teachers in this numbers game that has dire consequences for many dedicated educators.

Before I left in June, I knew how I had fared with the other 60%. That part is based on my two classroom observations. For the past 15 years, I have always received positive feedback on the learning that takes place in my room. I work hard every year to ensure that my teaching reflects the students who are in front of me. I use feedback from my past year's students to refine my craft and try to improve. I also enjoy the feedback from my supervisor's observations and the conversations we have around teaching. You would think that leaving for the summer knowing that 60% of my evaluation was positive would make the remaining 40% less worrisome. If I have been seen doing good work, deemed an innovator in my building, and praised for the community I foster with my students, how could my final evaluation not be positive?

Our current evaluation system, where teachers are deemed Ineffective, Developing, Effective, or Highly Effective, has been in place for 3 years. In that time I have never received the same rating. The first year I was deemed Developing. This meant I was placed on a Teacher Improvement Plan or TIP for the year. It must have worked because the following year I was given the rating of Effective. And last week, I found out that I was, finally, Highly Effective. I was stunned when I read it. I thought that there was something missing. A mistake had been made. I have been told by administrators not to expect a Highly Effective rating because it was very difficult to achieve within the rubric and scoring bands. But here I was, Highly Effective. Someone might look at my evaluations these past few years and try to argue that it shows the system is working. Look here. This teacher rose to the occasion! She took the critical feedback and data and worked hard to become the best teacher she could be. My ratings showed improvement, so I must have improved, right?

Not quite.

If you look at my evaluations closely, you'll notice that it wasn't really me or my teaching that changed. My observations show the same reflective teacher who tries new things, revises curriculum, and is responsive to her students. So if I didn't change, what did?

First, I would point to the test scores. The year I was given a Developing rating, my district chose to use a different assessment for the growth score portion of my evaluation. My 40% was divided between two tests: my students' achievement on the NYS 7th grade ELA exam and their growth on pre/post tests using a computerized assessment called the NWEA. The latter is a horrific test that asked students questions far removed from the curriculum in my classroom. It was painful watching them sit there trying to do well, asking me why they were being tested on content that we hadn't covered in class. I told them not to worry and to just do their best. After all, their results were only for my evaluation. While we were told that the data would help us learn more about our students' strengths and weaknesses, how could data from a test that didn't actually reflect my teaching help me to improve? Not surprisingly, I received 0/20 possible points towards my evaluation. I was given the label of Developing and a TIP. Happily, my district chose to drop the NWEA the next year, but the damage had been done. My students had been given an unnecessarily stressful and confusing test, precious class time had been wasted administering it in the fall and spring, and my supervisor and I would have to waste more time the next year meeting about my TIP.

Since I received a rating of Effective the next year, people might exclaim that my TIP was a success. No, no. I was deemed Developing not due to NYS test results, not because of the teaching in my classroom, but rather because my district made a poor decision in choosing to administer the NWEA. It is as simple as that. My rating went up because they realized their mistake and figured out a way for my students' NYS test scores to be used for both the achievement and growth score portions that make up the 40%. And this past year, we altered that even more, so that my achievement score wasn't simply based on my own 7th grade students' results but rather on building-wide scores. We as a district decided to do this for most teachers since our students score well on the NYS ELA, Math, and Science exams.

Which brings me to the other factor in my dramatic improvement these past three years: my students. You see, in the world of education reform, I am one of the lucky ones. While our district's demographics have changed over the fifteen years I have taught here to include higher numbers of special education students, English language learners, and kids who receive free or reduced-price lunch, the majority of our kids come from middle/upper class homes and start school ready to learn. In essence, I have won the teacher lottery. Research has shown that these demographics practically guarantee good test scores. Factors beyond my control, therefore, play a critical role in my evaluation. This is just one of the reasons that teacher evaluations based on test scores are not statistically sound. The Value-Added Model, or VAM, has been shown to be an invalid way to determine a teacher's effectiveness. And yet here we are.

I am not proud of my current rating of Highly Effective. I find it insulting to my profession that, no matter what is observed in our classrooms, our rating is ultimately determined by test scores. I also know that no matter what I do or don't do in my teaching, I won't ever be handed an Ineffective rating. This is a luxury for those who teach in suburban or affluent districts with student populations that tip the scales in our favor. It is not fair. It is not equitable. And it surely is not an effective way to evaluate this profession. Hopefully the case of Sheri Lederman will force NYSED to develop a system that we deserve.

PS:  Lederman's case is based on the Ineffective rating she received after years of earning stellar evaluations. This year, she was rated Effective.