Four-year-olds enrolled in the state's special pilot programs for early-learning collaboratives scored no better on kindergarten readiness tests than their peers in other public pre-K programs, a new state report shows.
The analysis from the Joint Committee on Performance Evaluation and Expenditure Review (PEER) states that students in pre-K collaboratives funded through the Legislature had a 6 percent lower adjusted pass rate than their peers who did not participate in collaboratives.
The 11 statewide pre-K collaboratives launched in the fall of 2014 and currently serve about 1,580 4-year-old students. Funding for the collaboratives came from the Early Learning Collaborative Act of 2013, but that law never established minimum readiness rates for collaboratives to meet. The Mississippi Department of Education (MDE) is researching how to use student growth as a determinant for continued funding eligibility, according to its 2015 annual report, but the PEER committee worries that measuring collaboratives by student growth would at best bring failing collaboratives up to the minimum rate of readiness, and might not even get them to that threshold.
The PEER report also critiqued how MDE sets standards for funding requirements, stating that the department set a low bar by using 2014-2015 test results as a baseline, a year when only 58.6 percent of 4-year-olds in collaboratives reached the target score. The committee also criticized using year-over-year increases in test scores as an indicator of improvement.
As of July 2015, MDE had not set testing benchmarks for 4-year-olds, and the PEER committee questioned funding the pre-K collaborative program without first setting guidelines.
"Establishing standards for a program post hoc (in this case, two years after initial implementation) is procedurally inappropriate," the report states. "It opens the possibility of letting funding determine evaluative methods, rather than evaluative methods determining funding."
MDE officials dispute PEER's findings, however. In a press release and an additional response letter, state Superintendent Carey Wright said that during the 2014-2015 school year, many instructors in the pilot collaboratives had to blend curricula with testing requirements; in other words, comparing the collaboratives, which are still a pilot program, to well-established pre-K programs is unfair.
Wright defended the importance of measuring yearly improvement in the collaboratives, saying that a fair system of evaluation should include "multiple indicators, including both status and growth/improvement."
In her letter, Wright said PEER does not understand the process for setting such standards. She wrote, "The process of setting a minimum rate of readiness is a complicated one that required MDE to first establish a measurement of school readiness."
Legislators on the PEER committee want to see results before putting more money into the collaboratives, but MDE says the PEER report is premature.
"PEER used one year of data to draw its conclusions that students were underperforming," the MDE news release stated. "The national standard for evaluating educational programs is to use trend data over three to five years." PEER's analysis of curricula used in the collaboratives showed only one collaborative curriculum showed "evidence of effectiveness."
The collaboratives are not required to use the same curriculum, but MDE has said it will require collaboratives in fiscal year 2017 to use "evidence-based" curricula.
The Legislature allotted $9 million to fund the early-learning collaboratives for fiscal years 2014 through 2016. MDE asked for $6 million more for fiscal year 2017 during legislative budget hearings in September.
Wright and MDE also contend that the PEER report fails to follow national standards for evaluating educational programs. "Any conclusions or recommendations contained in this PEER report lack merit," Wright said.