Christie Administration Announces Positive Reports on First Year of New Jersey's Teacher Evaluation Pilot Program

Evaluation Reports Detailing First Year of Administration's Pilot Program, Along with Other Research Studies, Show Strong Foundation for Statewide Rollout in September

For Immediate Release
Date: February 5, 2013
Contact: Barbara Morgan
Rich Vespucci
609-292-1126

Trenton, NJ – As part of the ongoing preparation for full statewide rollout of new educator evaluation systems in the 2013-14 school year, the Department of Education today announced the independent findings from the Evaluation Pilot Advisory Committee (EPAC) Interim Report and the Rutgers University Graduate School of Education (RUGSE) Year 1 Report. The Department disseminated the reports to all superintendents across the state in a memo outlining the findings and corresponding Department actions to improve evaluation procedures.

"More than any other aspect of a school, educators have the most significant impact on student learning. We owe it to them to create a system that provides them with meaningful feedback and actionable data that allows each of them, regardless of experience, the opportunity to improve their practice," said Commissioner Cerf. "By developing this new system in partnership with educators across the state, by moving towards full implementation over the course of several years, and by learning together through a multi-year pilot program, we are confident that we are putting the pieces in place to fully launch a system that helps all educators improve their practice by next school year. Though we have certainly met some challenges along the way, we thank the educators in our pilot districts for helping to identify and work through these to help all educators in the state implement a more meaningful evaluation system."

The effort to improve educator evaluation in New Jersey has been a top priority for the Christie Administration since the Governor first convened the Educator Effectiveness Task Force (EETF) in 2010 to recommend an evaluation system based on multiple measures of effectiveness, including student achievement and the demonstrated practices of effective teachers and leaders, along with weights for the various components. Unlike experiences in many other states, the process to develop a new evaluation system in New Jersey has included a multi-year pilot program designed to identify implementation challenges and successes and to make improvements prior to statewide rollout. The multi-year program of research and development included the creation of the EPAC, composed of representatives of major stakeholder organizations and pilot participants across the state, and a partnership with RUGSE to conduct an independent evaluation of the pilot program.

The two reports released today reflect the commitment of the Christie Administration to utilize both internal and external reviews of the pilots, national research, and experiences from other states and districts to inform the creation of New Jersey's system. The Department also shared with districts the results from the previously released "Culminating Findings from the Measures of Effective Teaching (MET) Project's Three-Year Study" funded by the Gates Foundation. While the three reports' authors are different, striking commonalities emerged and many of the recommendations from the first-year pilot echo findings from other sources, including the MET Study. 

In its memo to districts, the Department outlined common findings in four categories: Culture Change; Timing of Initial Implementation and Building Infrastructure; Fidelity and Accuracy; and Collaboration and Communication. The EPAC and RUGSE reports cover only the first year of the teacher evaluation pilot, which ended last summer, and the Department has already incorporated many of the recommendations outlined in each. The MET study, which used randomized, controlled study groups of over 3,000 teachers across seven districts nationally, found that effective teaching can be measured through high-quality evaluations that utilize multiple measures of teacher practice and student outcomes. The study also recommends procedures for two important measures of teacher effectiveness – teacher observations and student perception surveys – and for ensuring quality controls for each.

"While we never expected the first year of the pilot to be perfect, we are motivated by the finding that educators are having more meaningful conversations than ever before about effective teaching, which of course is the first step to helping continuously improve student outcomes," said Commissioner Cerf. 

Some of the common findings among the three reports and the Department's responses to the recommendations are as follows:

Culture Change
All three reports found that the adoption of new evaluation systems has entailed a culture shift in many schools – a shift away from compliance-based, low-impact, and mostly perfunctory evaluations to a focus on educators as career professionals who receive meaningful feedback and opportunities for growth. In many of the pilot schools, the new evaluation system has led to a collective refocus on the elements of effective teaching. The EPAC and RUGSE reports include first-hand examples of teachers and supervisors using common language and definitions of good teaching and engaging in a collaborative process to help all educators continuously improve their practice. The MET Study affirms the core principle for improving evaluations: effective teaching can be measured – and effective educators impact student achievement.

The Department is working to provide common terminology at the state level and encourages districts to build out these definitions relative to their selected evaluation instruments. Additionally, to provide examples of effective strategies for all districts to consider, the Department communicates with districts on a regular basis to share lessons learned and guidance drawn from the pilot and from national examples.

Timing of Initial Implementation and Building Infrastructure
The reports all found that, as expected with the creation of a new system, districts have had to engage in extensive capacity-building at the beginning of implementation. The districts selected for the first year of the pilot program implemented their systems on an abbreviated timeline, as they were not able to begin the pilot until after the start of the school year, and the EPAC and RUGSE reports point to this delayed timeline when explaining why, in most cases, districts did not complete the expected number of observations. The MET Study was designed to occur over three years and demonstrates that the activities required to improve evaluation systems build upon one another over time.

The EPAC recommendation to delay statewide implementation of new evaluations – originally scheduled for 2012-13 – and to provide another year of study and capacity-building was acted upon and codified in the TEACHNJ Act, signed by Governor Christie in August 2012. To address this timing problem, the Department built in a capacity-building year, allowing all districts to begin trainings and other activities in the current school year to prepare for a full launch in September.

Fidelity and Accuracy
All of the reports found that a key challenge of implementing a new system is ensuring that the many components of the new instruments are applied with fidelity and accuracy. The EPAC and RUGSE reports showed mixed results with regard to score differentiation among the pilot districts, potentially stemming from variations in the length and quality of training provided by instrument vendors, district approaches to training, or interpersonal dynamics within schools and districts. The MET Study also found that thorough training and certification of observers is critical, but that the use of additional observations and multiple observers helped increase the reliability of ratings, and that shorter observations (of at least 15 minutes) showed promise for including additional observers without excessive time burdens.

The Department included changes to the training timelines and observation requirements in the second year of the teacher evaluation pilot to ensure thorough training as early as possible in the school year.  In addition, the TEACHNJ Act and related regulations codified training timelines for all school districts prior to full statewide rollout in September 2013.

Collaboration and Communication
The reports also found that, in order to facilitate the engagement and trust of educators, there must be clear and open channels of communication, and educators must be involved with all stages of evaluation improvements. The EPAC and RUGSE reports point to the collaborative and measured process adopted by the Department, inclusive of stakeholder feedback and recommendations, as an area of success to be built upon in the future. While the RUGSE Report survey data indicated that administrators viewed the new system more positively than teachers, many teachers still saw value in the new evaluations. These responses may reflect the different levels of training and exposure that administrators and teachers experienced in the first year, suggesting that the more time educators spend with the new system, the more likely they are to believe that it is fair and accurate.

The TEACHNJ Act and related regulations required the formation of a District Evaluation Advisory Committee (DEAC) and School Improvement Panel to ensure feedback loops at the district and school levels. In the second pilot year, the Department has utilized monthly EPAC meetings to provide information to, and gather feedback and recommendations from, the state advisory group. By facilitating open dialogue and sharing strategies to address common concerns, the Department continues to become better informed about the strengths and weaknesses of the system in order to support all districts statewide.

The Department of Education plans to propose regulations to the State Board with the intent of codifying specific requirements for evaluation systems beginning in the 2013-14 school year. The regulations will also address the evaluation process in untested grades and subjects, an area that has been the source of significant input from the field. The Department will also conduct the first phase of an outreach initiative from March through May, including regional presentations and trainings, to better inform educators in the field about the proposed regulations. As districts launch their evaluation systems in the fall, the Department will continue to gather feedback, analyze research, and engage practitioners in a cycle of continuous improvement. Lessons learned from all districts across the state will inform future plans, including new or modified proposed regulations as needed.

The memo sent to all district leaders, which includes additional information on the findings, recommendations, and the Department's responses, can be found here: http://www.state.nj.us/education/EE4NJ/presources/020513memo.pdf

Full versions of the EPAC and RUGSE reports can be accessed via the Department's website:
EPAC Interim Report (2011-12):
http://www.nj.gov/education/EE4NJ/presources/EPACInterim11-12.pdf

RUGSE Year 1 Report (2011-12):
http://www.nj.gov/education/EE4NJ/presources/RUGSE11-12.pdf

Additionally, the MET study can be accessed here:
http://www.metproject.org/downloads/MET_Ensuring_Fair_and_Reliable_Measures_Practitioner_Brief.pdf