October Event Q&A video (right-click to download)
Multiple Measures Documents
*download or view the agenda here
Thursday, October 21
Dana Anderson’s Component 1 & 2 presentation video (right-click to download)
Purpose: Establishing new evaluation systems cannot be done in isolation. As Danielson (and many others) note, the evaluation system must be connected to and developed alongside other district components (hiring practices, mentoring, and induction, among many others). It is important that the districts in the pilot consider the district instructional framework as they develop their evaluation models.
Essential Question: How does our locally adopted instructional framework fit within the evaluation process for teachers and principals?
What to bring: Please bring the instructional framework from which your district’s professional development, assessment systems, and data infrastructure are driven. Examples may include: 1) Burke Group’s STAR Protocol; 2) CEL’s 5-Dimensions; 3) Danielson’s Framework for Teaching; 4) Marzano’s Art and Science of Teaching; 5) SIOP; 6) Other?
Purpose: According to E2SSB 6696, the two assurances are that the new evaluation model will be grounded in the new 8 teacher and 8 principal criteria and will be 4-tiered. In addition to those structural and content pieces, the bill addresses student growth. Districts may also wish to consider other evidence to determine teacher and principal placement within the final 4-tier model. These measures may include: 1) Teacher reflections; 2) Teacher Portfolios; 3) Perception Surveys; 4) Student Work Samples; 5) Classroom Observations; 6) Student Achievement Measures; 7) Other?
“Student Growth” is defined as the change in student achievement between two points in time.
When student growth data are used, and if they are available and relevant to the teacher and subject matter, multiple measures must be used.
The districts in the pilot will begin to explore the state, district, school and classroom-based measures by which student growth could be measured. This discussion will serve to inform other districts outside of the pilot as they take on this task in the years to come.
Essential Questions: What evidence will be used to determine teacher and principal placement within our rubric? How will the various measures be weighted? What is the formative nature of these measures, and how will a summative determination be made?
What to bring: Districts attending this event should bring materials and resources to begin establishing examples of multiple measures.
Purpose: The evaluation rubrics developed over the course of the pilot will need some consistency from district to district so that we can (among many other reasons; here is just a sample):
- Aggregate data at the district or state level
- Develop evaluation training for principals, teachers, and district-level administrators
- Establish a foundational common language of quality instruction that can be used across districts.
Friday, October 22
District “Process” Stories
Purpose: In response to feedback from our Kick-Off and K-20 events, we have asked three districts to “tell their stories” about the process of moving to a new evaluation system. We have chosen two districts with mature models (Peninsula and Snohomish) and one district at the beginning stages of evaluation development (Anacortes).
Peninsula Presentation Video (right-click to download)
Anacortes Presentation Video (right-click to download)
Snohomish Presentation Video (right-click to download)
Gary Kipp, AWSP – Looking Into the Relationship Between Teacher and Principal Evaluation
Gary Kipp’s Presentation Video (right-click to download)