For specifics about the framework, be sure to see Dwight Carter’s (@Dwight_Carter) blog after his three-day training.
Today was my first of 3 OTES days, and I’m still feeling pretty good about that “left side” (or the performance piece) of the Ohio Teacher Evaluation System framework. (The “right side,” or the student growth side, is another story entirely.)
What I want to look at in this post is the actual rubric for the performance evaluation. Again, I think those who are saying “NAY” the loudest are probably uninformed about the requirements and likely have not looked at that performance rubric.
Keeping in mind that MOST teachers will “live in” the proficient range and “visit” accelerated, I want to focus on that proficient column. In the training, we were asked to underline key descriptors of proficiency for each standard.
Looking at this standard, I started underlining words/phrases like: “develops measurable goal,” and “aligned with standards” and “explain importance.” I quickly realized that underlining these phrases gave me no real insight into a “proficient” teacher.
In order for me to understand the qualifiers of the different areas, I had to actually compare proficient to accelerated. The differences between the two in Focus for Learning are: 1) the goals must be rigorous to be accelerated, 2) the goal is differentiated to be accelerated, and 3) the teacher can explain how the goal fits the progression of learning to be accelerated. I think having teachers (and admins) really look at these differences and define them clearly can make the evaluation process more transparent–it’s clear to the teacher what s/he must do, and the admin is clearer about the expectations.
I also had a couple of thoughts about how to ease teachers into this framework…
While watching the video with the pre-observation conference, I kept thinking of my early interviewing experiences when I was first entering the profession. The teacher seemed nervous; she seemed like she was trying to answer questions about her lesson plan on the spot (the evaluator had had an opportunity to review the lesson plan in advance). As an admin, I NEVER go into any meetings with any individual teacher or group of teachers without giving them as much info about my intent as possible in advance. I want them to have time to process, to come with their own questions, to really think about the conversation. I think teachers should have this same opportunity with these pre- (AND post- if possible!) observation conferences. The teacher could have had more time to think about her answers if she had received the questions in advance, and the conversation could have been much richer instead of interview-like.
And….there was always a way to “ace” an interview. I distinctly remember keywords I made a note to say at each interview, and I always felt like if I said those keywords, and if they were the right keywords, I stood a chance. (What were they? Oh, the classics: differentiation, belief that all students could learn, communication with parents and community, collaboration with colleagues, data-driven, intervention, etc.) I may not have internalized what those words meant or what they looked like in practice, but I knew I needed to say them.
I kept thinking in the OTES video that all the teacher needed to do was say the right words, but what if the evaluator asked the wrong questions? What if she didn’t probe enough to get the teacher to say the right thing? So, I asked the question, “Am I evaluating someone based on their ability to answer my questions? Am I looking for those keywords and checking them off?” The response from the trainer (and other attendees) was that it is the teacher’s responsibility to come prepared and prove they know their stuff, not the evaluator’s role to drag it out of them.
Ok, fine. I can accept that, BUT admins (myself included) need to make sure teachers have a very clear, very definite, transparent understanding of the expectations. If, using the rubric information above, the difference between proficient and accelerated is that the “goals must be rigorous/challenging,” then the admin and teachers need to know what that means, and the teacher needs to know how to demonstrate that in the lesson plan and/or pre-conference.
Additionally, when trying to evaluate the videos (Side note–WHY ARE WE SCRIPTING?!?!?!?!?! Why are we not just VIDEOTAPING so we can actually pay attention to what is happening? I know, it can be a contractual issue, but if I were getting a performance evaluation, I’d be DARNED if I didn’t want to see myself the way the evaluator saw me!), I found myself focusing so much on how the lesson COULD be and what COULD be different. For example, I thought, “Oh, she’s using a transparency. Well, she could use an app or Smart Board tools and get better results.” But…the students were getting the lesson, they were decently engaged (mildly automaton-like), and I had to keep refocusing myself on what was actually in the lesson, not comparing her to other lessons. Even using the rubric, I kept straying to “If she would’ve done this….” but the focus has to be on what was actually visible.
So, my takeaways so far:
1. Let teachers (and admins) find the differences in the qualifiers to create a common definition
2. Don’t put teachers on the spot to talk about their teaching–send questions in advance
3. Make sure teachers know what they need to show/demonstrate in the conferences
4. Christina Hank does not support scripting.
5. Focus the evaluation on what that teacher is doing/knows, not how s/he compares to others