VINCE CHHABRIA, District Judge.
As explained in this Court's order on May 18, 2018, the process for determining whether the state is in compliance with its monitoring and enforcement obligations under the Individuals with Disabilities Education Act ("IDEA") is proceeding in four phases. In the first phase, the Court is examining whether the state's annual statewide data collection activities enable it to effectively monitor school districts. The second phase will involve reviewing how the state analyzes that data to identify districts that require more extensive monitoring and enforcement. The third phase will involve reviewing how the state actually conducts that monitoring and enforcement. The fourth phase will involve reviewing the state's written policies governing its monitoring and enforcement functions.
This ruling constitutes the Court's Phase 1 findings — that is, findings about whether the state collects the data necessary to effectively monitor school districts. The Court finds that the California Department of Education is largely in compliance with its obligation to collect the statewide data it needs to fulfill its monitoring and enforcement responsibilities under the IDEA. Although there are many areas of annual data collection where the state could do better as a matter of policy (particularly if it had unlimited resources), for the most part, these do not rise to the level of federal law violations. There is one exception: data collection to help identify school districts that are not providing the services promised in individualized education programs, or "IEPs." Given the centrality of IEPs to the federal-law requirement that school districts provide disabled students with an appropriate education, and given the specific context of this case (including the state's history of inadequately performing its monitoring responsibilities), California must collect statewide data that speaks directly to IEP implementation. Because California currently does not do this, it is out of compliance with the consent decree (and with federal law) in this area. The state will have an opportunity to demonstrate compliance during the fourth phase of court review.
Under the statute and its implementing regulations, school districts are required to provide all students with disabilities an appropriate education. The state is required to make sure its school districts are doing so, and to take enforcement action against districts that are not. But the law is not terribly specific about how the state is supposed to perform its monitoring and enforcement functions.
Obviously, the state cannot monitor school districts without gathering information about what they are doing. The law says as much, requiring states to collect quantitative and qualitative data to evaluate whether school districts are fulfilling their obligations under the IDEA. The law does not specify, however, all the data the state must collect to effectively monitor districts. At a minimum, the statute instructs the state to craft a "state performance plan" to measure how well the IDEA is being implemented in the state. Each year, as part of its implementation of the state performance plan, the state must collect certain kinds of data from school districts to report to the federal Department of Education. For example, the state must collect data about suspension and expulsion rates, and about how often disabled children are taught in regular classrooms.
Everyone agrees that, at a minimum, the state must collect the data necessary to meet its federal reporting obligations under its state performance plan. Everyone also agrees that these data shed light on how the districts are doing, thereby assisting the state in its monitoring and enforcement obligations. But there is a dispute in this case about whether the state must collect additional data to satisfy its obligation to effectively monitor school districts, and if so, what data the state needs to collect.
California's state monitoring system involves multiple rounds of data collection and analysis. In light of the vast number of students and school districts in the state, the California Department of Education first collects certain data across all school districts on an annual basis, and then uses that statewide data to determine which districts to scrutinize further. In the statewide data collection process, school districts submit a large swath of data about all students, disabled and nondisabled, to a particular database. Separately, districts provide data specific to students with disabilities — this goes into another database. Then the state analyzes these data to identify school districts that raise red flags — that is, districts that might be falling short on their obligation to provide an appropriate education to students with disabilities. This initial statewide data collection, which the state has referred to as the "first tier" of its monitoring and enforcement activities, is the focus of Phase 1 of these court proceedings.
Once the state has identified a school district for further scrutiny, the state might request additional information about the district's policies and practices, review the district's records, and/or meet with school officials, teachers, and parents to further investigate the issues that were initially flagged. At this "second tier" of monitoring, the state may engage in different monitoring activities depending on what flags go up during the data collection and analysis at the first tier. For example, if the state discovers that students are not being evaluated to see whether they are eligible for special education services within 60 days of the district receiving consent from the parents to conduct such an evaluation, the state may subject that district to more intensive monitoring through "Data Identified Noncompliance Review," a monitoring activity in which the state looks at the information submitted to the statewide databases during the first tier of monitoring to determine whether districts are complying with the IDEA. Or, if the state identifies districts where students placed in special education are disproportionately members of certain racial or ethnic groups, then it may require the district to undergo a "Disproportionality Review" to assess what is causing the disproportionality. The state's targeted monitoring activities vary in terms of how the state selects districts for further scrutiny and what the state focuses on during this further scrutiny. The most intensive monitoring activity is called "Comprehensive Review" and is used to dig deeper into the problems of the districts identified as the lowest performing during the initial round of statewide data collection. These targeted monitoring activities (and any enforcement actions that follow) will be the focus of later phases of these proceedings.
The tiered structure of the state's monitoring system is relevant to this phase in the court monitoring process for a few reasons. First, it underscores that the state does not collect all possible data from all school districts every year to monitor them equally closely. It probably could not do so even if it tried. And no party in this case presumes the state must do this — everyone appears to be working from the same baseline assumption that in a state as large as California, using different "tiers" of data collection and monitoring makes sense. Having a tiered monitoring system means that if there are some data that cannot be collected or analyzed on a statewide basis to identify "red flag" school districts but that would inform the state's understanding of whether students are receiving an appropriate education, the state can collect that additional data during its targeted monitoring activities. Conversely, if there are data without which the state would be unable to effectively identify "red flag" districts in the first place, that data must be collected annually as part of the state's "first tier" of monitoring.
The Court's May 18 order explains what the state must do in this case to establish compliance with its monitoring and enforcement obligations under the IDEA, including at Phase 1. Dkt. No. 2387.
Under subsection A of the order, the state must show that it collects the data needed to meet the federal reporting requirements under the state performance plan. As discussed earlier, the parties agree that federal law requires the state to collect these data. The remaining subsections of the order identify additional categories of data that the state might be required to collect — and that the plaintiffs believe the state must collect — to effectively monitor school districts. One example is information about the extent to which "individualized education programs" ("IEPs") are being implemented. Another example is data that would raise flags about whether schools are inappropriately removing children with disabilities from the classroom, isolating them within the classroom, or restraining them. Another example is data to help the state assess whether districts are making effective use of mediation to resolve disputes with parents.
There is a great deal of overlap between subsection A of the May 18 order, which explains that the state must collect the data that relates to the state performance plan, and the remaining subsections, which speak in somewhat general terms about the state needing to make sure it collects enough data to allow it to fulfill its monitoring and enforcement responsibilities. For instance, under the state performance plan, the state must report how often children with disabilities participate in statewide tests, so the topic of statewide assessments is covered in significant part by subsection A of the May 18 order. But it is also addressed in subsection B.3 of the order, which discusses the collection of "data necessary to adequately assess student participation in assessments, including alternate assessments." In practical terms, this means that on the issue of student assessments, the question at Phase 1 of this case is whether the state's data collection for the state performance plan is enough, or whether additional data must be collected to ensure that the state is able to adequately fulfill its monitoring and enforcement obligations under the IDEA. This is the inquiry for many of the items in the May 18 order: is the state's data collection for the state performance plan enough, or is additional data collection required to enable the state to adequately monitor a particular issue?
There is language in the May 18 order that could be read to suggest that the Court had already made a determination, by the time that order was issued, that certain data collection activities (beyond those conducted for the state performance plan) are required by law. To clarify, that was not the Court's intent. The purpose of the Phase 1 process (and the purpose of each future phase in the court monitoring process) is to determine what the state must do (and what it need not do) to get into compliance with federal law. Thus, the May 18 order should be viewed, despite the language that admittedly suggests something more, as simply identifying the areas to be covered in the different phases. It is only the hearings themselves, and the written submissions made in connection with them, that put the Court in a position to determine what the state must do to achieve compliance with federal law. And by the same token, if hearings reveal a legal failure by the state in an area that wasn't specified with precision in the May 18 order, that would not prevent the Court from concluding that the state is out of compliance.
For Phase 1, the Court received written submissions from the parties, along with a written report from the court monitor describing his conclusions about whether the state had adequately demonstrated its compliance in the areas identified in the May 18 order. These written submissions were followed by an evidentiary hearing that lasted two days. At the hearing, three top officials from the Special Education Division of the California Department of Education testified under oath: Kristen Wright, the Director, who is in charge of the division; Shiyloh Duncan-Becerril, the Education Administrator, who oversees the division's data collection and analysis; and Alison Greenwood, the Quality Assurance Administrator, who oversees the division's targeted monitoring activities. These officials were questioned extensively by the Court, the court monitor, and counsel for the plaintiffs. After the hearing, the Court received supplemental submissions from the parties, and the court monitor filed a supplemental memorandum updating his conclusions about the state's compliance.
Generally speaking, the state takes the position that it is not required to collect any data beyond what it gathers for the state performance plan. The court monitor believes that the state must collect significantly more data than what it is currently collecting, but that the state is not required to collect every piece of data that might be interesting or even desirable. The plaintiffs largely agree with the court monitor's conclusions about the state's noncompliance in certain areas of data collection listed in the May 18 order, but list additional categories of data that they believe the state must gather; in many cases, the plaintiffs contend that the state must collect more data than even the court monitor says. These perspectives provide the backdrop for the Court's ruling on Phase 1.
It bears emphasis that the purpose of this federal court oversight of the state's monitoring activities is to ensure the state's compliance with the law. It is not to make the state do more than the law requires. As applied to Phase 1, this means the state will not be ordered to collect data on a statewide basis simply because having that data would be interesting from a social scientist's standpoint. Nor will the state be ordered to collect data simply because it seems like it would be good policy. The state will only be deemed out of compliance with federal law (and therefore out of compliance with the consent decree) if its failure to collect certain data on a statewide basis would likely prevent it from effectively fulfilling its monitoring and enforcement obligations under the IDEA.
As explained earlier, California argues that the IDEA does not require a state to collect any data beyond what it collects in connection with the state performance plan. That's not necessarily true. Nothing in the statute or accompanying regulations indicates that the data collected for the state performance plan is, on its own, enough to enable the state to fulfill its monitoring and enforcement responsibilities. To the contrary, the regulations strongly suggest states must do more — they specify that states must fulfill their monitoring responsibilities using both "indicators established by the Secretary for the [s]tate performance plans" and "indicators as are needed to adequately measure performance" in specified priority areas.
Determining what data California must collect is a context-specific inquiry. There may be certain data one state needs to collect — perhaps because of its size, demographics, or enforcement history — that another state need not collect. In particular, it bears emphasis that the pertinent question at this phase — what data the state must collect to comply with its obligations under the IDEA — is being considered against the backdrop of a conclusion that the state's compliance with its monitoring and enforcement obligations was so deficient that a federal consent decree was required. This context affects the decision about whether California should be required to collect certain data (at least in situations where federal law is ambiguous about what must be collected).
The IDEA requires the state to develop a state performance plan. The state performance plan process is one of the primary ways in which the federal government exercises its own oversight of how well states are ensuring students with disabilities are receiving an appropriate education. The performance plan is a set of measurable goals, based on seventeen "indicators" provided by the federal Department of Education. Although the goals set by each state may vary, the federal government has provided official guidance as to how each state should measure each indicator in its annual reporting pursuant to the state performance plan. For example, one federal indicator is the rate at which children with disabilities participate in statewide tests. Another is the extent to which students with disabilities are expelled or suspended.
For all these indicators, the court monitor has concluded, following the evidentiary hearing, that the state is compliant with its data collection obligations related to the federal reporting requirements in the state performance plan.
One of the key questions at Phase 1 is whether the state collects enough data to evaluate whether school districts are implementing individualized education programs ("IEPs"). The May 18 order directs the state to show that it collects the data necessary to assess whether school districts are adequately implementing IEPs, or demonstrate why statewide data collection on this issue is not necessary to effectively monitor school districts.
The IEP is the "centerpiece" of the IDEA. Endrew F. v. Douglas Cty. School Dist., 137 S. Ct. 988, 994 (2017) (quoting Honig v. Doe, 484 U.S. 305, 311 (1988)). It is a comprehensive plan for the education of a child with disabilities that is put together by an "IEP team" consisting of the child's parents, teachers, and school officials. Among other things, the IEP must describe the child's disability, the effects of the disability on the child's ability to participate in general education, goals for the child's educational progress, the measures that will be used to evaluate progress toward those goals, and the special education and related services that will be provided to the child in furtherance of those goals. Generally speaking, "the essential function of an IEP is to set out a plan for pursuing academic and functional advancement." Id. at 999 (citing 20 U.S.C. § 1414(d)(1)(A)(i)(I)-(IV)); see also 20 U.S.C. § 1414(d)(1)(B). The plan must be "tailored to the unique needs" of each child and crafted in a manner consistent with the procedural requirements described in the governing statute and regulations. Endrew F., 137 S. Ct. at 994 (quoting Board of Ed. of Hendrick Hudson Central School Dist. v. Rowley, 458 U.S. 176, 181 (1982)).
The state, for its part, must ensure that these IEPs are not just paper promises. If the IEP says that a child requires certain services (such as a one-on-one aide), and if the school district fails to provide those services, then the district has failed to comply with its most fundamental obligation under the statute.
The state does not currently collect any data specific to this purpose on a statewide basis for the first tier of its monitoring system. The state argues that this is not necessary, because the statewide data it collects includes a great deal of outcome-related information (such as how students are doing on tests and whether students are graduating) that indirectly flags potential failures on the part of school districts to implement IEPs — potential failures that can be investigated during more targeted examination of specific school districts. The court monitor disagrees, as do the plaintiffs. They contend that because IEP implementation is an important indicator of whether a child is receiving an appropriate education, and because the provision of an appropriate education is a priority area for state monitoring under federal law, additional data collection is required.
It bears repeating that the state is not required under federal law to collect all possible data at a statewide level, even if it would improve in some incremental way the state's ability to flag problem school districts for more intensive monitoring. But in light of how important the IEP is to providing an appropriate education to a child with disabilities — an education that is painstakingly negotiated, customized in light of the child's unique needs, and mapped out on paper over the course of months (if not years) by parents, teachers, and school officials — the Court concludes that the state's failure to collect statewide data on IEP implementation prevents it from effectively monitoring school districts, putting the state out of compliance with its obligations under federal law (and therefore out of compliance with this consent decree).
Without information about whether the services promised in IEPs are actually being delivered, the state may run the risk of failing to identify a crucial red flag during its first tier analysis of the statewide data. Returning to the earlier example, suppose a child's IEP promises a one-on-one aide, based on the IEP team's conclusion that the child requires the aide to help manage her behavioral needs and to tailor her school assignments according to those needs. In this scenario, the district's failure to provide the aide would, in the most fundamental way, deprive the child of an appropriate education under the law. It's possible that this failure on the school district's part would be indirectly flagged using "outcome" measures that the state already gathers, such as performance on statewide tests. But it's easy to imagine circumstances where that would not be the case. The child's parents, facing a plainly flawed educational option, may pull her out of the school altogether; that, of course, would not be reflected in aggregate test performance data. And assuming a child not receiving the benefits of her IEP remains in the system, a district's failure to provide the special education services that were promised can impair her progress in ways that are not captured by the outcome measures the state looks at. For example, a child with autism might need a one-on-one aide even if he does well on statewide assessments, and even if he advances from grade to grade. Moreover, in any district where only a few special education students are promised one-on-one aides or small-group instruction each year (a likely possibility, given how resource-intensive such commitments can be), the district's failure to provide these services may not even make it into the aggregate achievement statistics for the district, particularly if it is a large or medium-sized district.
But the failure to provide services like one-on-one aides to a child whose IEP promises him these services is a serious violation of that child's rights. In short, given the centrality of the IEP to ensuring that districts provide children an appropriate education, the state must include in its annual data collection a direct means for flagging districts that may not be adequately implementing IEPs.
It is not enough that the state samples IEPs and studies IEP implementation during Comprehensive Review. Because IEP implementation data will tell the state something different from what it already learns from its existing data collection, these data will help the state identify "red flag" districts that may have otherwise escaped closer scrutiny — in other words, these data are necessary at the first tier of the state's monitoring system. After the hearing, the state submitted a declaration from the administrator of the state's special education monitoring activities in which she asserted that the state "identified 101 findings of noncompliance for IEP implementation in the districts selected for 2017-2018 comprehensive review."
There are presumably different ways in which data collection could help track IEP implementation. Perhaps the state will collect this information through self-reporting by school districts. Perhaps it will use parent surveys (more on those below). Perhaps the state will implement a new system of data collection for IEP implementation, or perhaps it will leverage existing systems.
The court monitor recommends that this ruling be more specific in its prescription and that the state be required to collect data about "student needs, special education and related services, supplementary aids and services, IEP goals, and progress toward those goals," as well as "data about progress toward achievement of IEP goals." But it's not clear that data about each child's IEP goals, the progress made toward them, or the child's unique needs need to or should be collected on an annual statewide basis (indeed, it's not clear how the state would do this). Presumably, it will make more sense to collect more limited and objective data about the various types of services promised in IEPs and the degree to which those services are actually delivered (without regard to the quality of those services, at least at the first tier of data collection). But the policymakers are in the best position to figure this out. At Phase 4 of these court proceedings, the state will be required to return to this issue, demonstrating how it collects statewide data on IEP implementation in a way that allows it to fulfill its monitoring and enforcement responsibilities under the IDEA.
Parent input is another significant area of dispute. The May 18 order requires the state to demonstrate it collects the data necessary to adequately assess parent participation and input, or demonstrate why this data collection is not necessary to effectively monitor school districts as required by federal law.
Currently, the state asks districts to report one measure of parent input — parents' response to the question "Did the school district facilitate parent involvement as a means of improving services and results for your child?" This question is typically asked during IEP meetings, which raises the concern that some parents will hesitate to give a candid response for fear of affecting the special education services their child will receive. In light of this, even if this question is sufficient for the state to meet its state performance plan reporting obligations, it is clearly not an adequate or reliable measure of parent feedback, as even the state's policymakers appeared to acknowledge at the hearing.
The plaintiffs argue that the state's failure to ask parents more than this one question puts it out of compliance with the law. The court monitor has arrived at the same conclusion. The court monitor proposes a more comprehensive parent survey, with several additional questions for parents, submitted to a sample of parents across the state during the first tier of the state's monitoring process. For their part, the plaintiffs argue that the proposed survey should be even more comprehensive than the court monitor proposes, although it remains unclear how broad a survey the plaintiffs envision and what questions it should ask.
For their part, the state's policymakers testified at the hearing that they have a pilot parent survey ready to launch on the state department of education's website. They further testified that they plan to encourage school districts to make this survey available on their own websites, as well as to ask districts to consider making it available to parents by other means. The policymakers testified that this pilot survey has been ready to launch since roughly 2009, but that this court monitoring process has stymied the launch, because the state has been waiting to get sign-off from all parties involved in this process.
The pilot survey, copies of which the policymakers made available at the hearing, contains a long list of questions designed to measure parent satisfaction with the special education services that are provided. The pilot survey asks a total of 95 questions, ranging from questions about whether the parents discussed accommodations for the child at the IEP meeting and were treated as a "team member" in their child's education to questions about whether the school "is a friendly place" and provides services to the child in a timely way. Nevertheless, the plaintiffs and the court monitor remain dissatisfied with this pilot survey, because they believe it does not do a good enough job of probing parents for information.
Recall that this phase of the proceedings involves questions about what data the state must collect on an annual statewide basis to give it the ability to effectively identify school districts for more intensive monitoring. Recall further that the question is not whether certain data collection would be good policy; the question is whether the state's failure to collect certain data statewide during the first tier would prevent it from effectively fulfilling its monitoring and enforcement obligations under the statute. Given the current record and the inquiry at hand, the Court cannot conclude that the state's decision to launch this pilot survey as proposed, rather than crafting a different survey and disseminating it in a different way, puts it out of compliance with federal law.
It is no doubt helpful to seek parent input when conducting a closer review of a particular school district — which the policymakers have testified the state does. And perhaps it could be helpful to collect parent input statewide. But there is reason to wonder if parent surveys done across an entire state — particularly a state as large and diverse as California — would result in valid and reliable data that could be meaningfully aggregated to learn something useful for special education monitoring. Parents in different districts will respond at different rates to surveys asking them how well their schools are providing their children with educational services. Not only will response rates vary, but so will the depth, breadth, and type of responses received, based on factors such as the parents' education, time, and financial resources. For instance, there may be districts where students with disabilities receive an education that meets all the requirements of federal law, but where parents express dissatisfaction on surveys at higher rates because they have higher expectations of their public schools than parents in other districts. It's therefore far from clear what this data collection would tell the state about where it should conduct its targeted monitoring.
Thus, the question of how to assess the results of the pilot survey, and the question of whether to expand on it in the future, will be something for the policymakers to decide. Because it's not clear whether a different kind of parent survey would necessarily improve the state's ability to identify "red flag" school districts, and because the plaintiffs have not offered a feasible alternative for gathering meaningful and usable data about parent input, this is not an issue on which the Court can find the state out of compliance with federal law.
Before turning to the next issue, it's worth addressing an argument by the plaintiffs and the court monitor that applies not just to parent input but to other aspects of the state's data collection activities. In arguing for parent surveys and other data they believe must be collected, the plaintiffs and the court monitor emphasize that the law requires states not merely to use "quantifiable" indicators when measuring school district performance, but "qualitative indicators" as well. Although it's true that federal regulations mention qualitative indicators, they do not say that states must collect them during statewide data collection as part of what California calls the "first tier" of monitoring. In fact, it's not even clear the regulations require states to use qualitative indicators at all, as opposed to just requiring states to consider whether qualitative indicators should be used. The applicable regulation provides that states must monitor districts using quantifiable indicators, and "such qualitative indicators as are needed to adequately measure performance" in priority areas. 34 C.F.R. § 300.600(c), (d) (emphasis added). The regulation offers no guidance on which qualitative indicators are needed, suggesting this may be left to the states' discretion. In any event, what matters now is that the record thus far has not suggested that California would be out of compliance with its monitoring and enforcement obligations as a categorical matter simply because it does not gather qualitative data at the first tier of its monitoring activities.
School discipline is another area of focus for the state's special education monitoring. As part of their obligation to ensure that all students with disabilities receive an appropriate education, school districts are also typically responsible for ensuring that students receive appropriate behavioral supports and services, and for ensuring that students are not removed from the classroom unnecessarily. Pursuant to this goal, the state must report to the federal government each year, in connection with the state performance plan, the incidence and duration of "disciplinary actions" imposed upon children with disabilities, including suspensions of one or more days, expulsions, and removals to alternative educational settings. 20 U.S.C. § 1418(a)(1)(D)-(E).
The May 18 order requires the state to demonstrate it collects the data necessary to adequately assess school discipline of children with disabilities, including suspensions, expulsions, and the degree to which positive behavioral supports are used, or demonstrate why this data collection is not necessary to effectively monitor school districts.
The federal Department of Education's Office for Civil Rights defines a referral to law enforcement as "an action by which a student is reported to any law enforcement agency or official, including a school police unit, for an incident that occurs on school grounds, during school-related events, or while taking school transportation." This includes arrests by the police on school grounds. Dkt. No. 2410-5 at 4.
This is a serious issue. The federal Department of Education conducts a biennial survey that collects data to help ensure that school districts receiving federal funding are not discriminating against certain groups of students. In the most recent school year for which these survey data are available, 28% of all students referred to law enforcement were students with disabilities (roughly 82,800 of the 291,100 students referred to law enforcement), even though students with disabilities comprised only 12% of all students in the survey.
However, the fact that the federal government collects these data cuts against the argument by the plaintiffs and the court monitor that the Court should order the state to collect them as well. These data are publicly available and can be broken down by school district.
Although one could object that this sort of data should be collected every year rather than every other year, this on its own is not enough to hold the state noncompliant. With all the other data the state collects annually to identify school districts that might be overusing discipline, it would be difficult to conclude that less frequent data collection about the specific issue of police referrals prevents the state from adequately fulfilling its monitoring and enforcement responsibilities under the IDEA. The record does not suggest that the state's ability to identify "red flag" school districts would be significantly impeded if the state used biennial data instead of annual data, at least in conjunction with the other data on school discipline that the state gathers.
Moreover, the state's policymakers testified that they have access to additional annual data on behavior that would help flag districts that overuse the police. According to the policymakers, school districts in California report to the state department of education any time an incident occurs for which a student could be suspended or expelled. All qualifying incidents are listed in California Education Code section 48900, and this list includes most incidents that might lead to a student's referral to law enforcement.
All of this is to say that, although it's possible that the state does not do enough analysis of data about police referrals when it decides which districts to monitor more closely (which would be relevant in Phase 2 of this court oversight process), it has access to enough data, which precludes the Court from finding the state out of compliance on this issue.
As discussed in the preceding subsection, the May 18 order directs the state to demonstrate that it collects data necessary to adequately assess school discipline of children with disabilities, including the degree to which positive behavioral supports are used, or demonstrate why this data collection is not necessary to effectively monitor school districts as required by federal law. Setting aside data on disciplinary actions, what remains is the requirement that the state show either that it collects data relating to the use of positive behavioral supports or that such data collection is not required on a statewide basis. The court monitor and the plaintiffs contend that the state must collect statewide data on the use of positive behavioral supports; the state disagrees.
"Positive behavioral support" is a general term used to describe any tool that aims to reinforce a positive behavior.
The question of whether the state must collect data each year to monitor the use of "restraint and seclusion" is another area of dispute. The May 18 order directs the state to demonstrate that it collects data necessary to adequately assess whether restraint or seclusion is used in a way that interferes with the provision of an appropriate education in the least restrictive environment, or demonstrate why this data collection is not necessary to effectively monitor school districts as required by federal law.
Generally speaking, "restraint and seclusion" refers to a situation where school staff either physically or mechanically restricts a child's movement or isolates the child in response to the child's behavior.
The state's policymakers do not dispute the importance of collecting data on restraint and seclusion, and they say the state will begin to collect data from school districts about the use of restraint or seclusion in a way that mirrors the data that districts currently report to the federal Department of Education every two years — namely, the number of students at each school who were subjected to restraint or seclusion, as well as the total number of incidents of restraint and seclusion at each school.
At the hearing, the state explained that there is a bill currently pending in the state legislature on the use of restraint and seclusion in California schools — Assembly Bill No. 2657. This bill articulates a legal standard that limits schools to using restraint or seclusion to control student behavior in certain, extreme circumstances, when less restrictive alternatives are unavailable; establishes a student's right to be free from the use of restraints and seclusion as a form of coercion, discipline, convenience, or retaliation; and requires schools to take certain actions after using restraint or seclusion, such as notifying a student's parents. As relevant to this phase of the proceedings, the bill requires school districts to collect and report on the use of behavioral restraints and seclusion to the state each year, mirroring the existing data collection and reporting requirements of the federal Department of Education Office for Civil Rights. In other words, school districts would be required to report to the state information such as the number of times students were subjected to restraint or seclusion each year, with separate counts for how often restraint or seclusion were used on students with disabilities.
Notwithstanding the above, the plaintiffs and the monitor suggest that the Court should order the state to collect data on restraint and seclusion annually. However, the state's policymakers have already made clear in their testimony that they are taking a significant step forward on data collection relating to restraint and seclusion. The policymakers in the California Legislature are actively considering the matter as well, and perhaps will further ramp up the state's data collection related to restraint and seclusion. Under these circumstances, the current failure to collect these data annually, rather than every other year, does not warrant an order by a federal court to do more.
The state's efforts to collect data for monitoring whether students are receiving an appropriate education are only worthwhile if the data collected are both valid and reliable. Accordingly, the May 18 order requires the state to demonstrate the data it collects are valid and reliable, and relatedly, that the state timely corrects the errors that it identifies in data reporting by school districts.
The state describes the different steps it takes to help ensure the validity of its data: (i) defining data elements in its database manuals and training district officials about how to submit accurate data to the state's databases; (ii) automatically checking, through the software used, for certain kinds of "anomalies" in the data submitted (for instance, checking that the date of an IEP doesn't precede a birthdate); (iii) requiring local education officials to certify that the data submitted by their districts are complete, requiring corrections as needed; (iv) cross-checking data in the state's two statewide databases for consistency (and sometimes cross-checking data that districts submit over time) to identify inconsistencies or failures to correct noncompliance; and (v) conducting "Data Validation Reviews" in districts identified as having significant problems, as described further below.
Obviously, the state cannot be expected to verify that every piece of data that it collects is accurate. Nor should the state be able to get away with making no effort at all to ensure the data are accurate. But in this case, it's clear the state has reasonable systems in place to validate the data it receives from school districts, identify inaccuracies, and attempt to address them. Clear data definitions, automatic software checks of the data, and certifications of accuracy by local educational officials all contribute to ensuring the accuracy of the data the state collects. The state also cross-checks the information that districts submit to the two primary statewide databases (CASEMIS and CALPADS) to ensure that data fields that should match across the two databases do. The state has also provided samples of letters that it sends to school districts that have significant discrepancies between the two databases — including significant discrepancies in important data fields for special education monitoring such as suspensions and expulsions.
Additionally, the state's evidence describing its Data Validation Reviews shows that the state makes some effort to check the accuracy of the data submitted against actual paper records. It is evident that the state checks not just for smaller-scale issues, such as whether the IEP evaluation date precedes the birthdate in the system, but also more significant issues, such as missing information about students' disabilities or dates entered into databases that do not match dates recorded in paper files. And to improve the validity of the data moving forward, the state works with the district to identify reasons for the data's inaccuracy and corrective actions for the district to take.
At the hearing, the state's policymakers testified that they have sought to align their data validation efforts with best practices outlined by the National Center for Education Statistics. This testimony is further supported by the Center's guidance, submitted by the state; the guidance discusses the value of cross-checking data, identifying data submitted in invalid formats, and using clear data definitions to improve the accuracy of the data reported and recorded in large databases.
Therefore, the state has produced enough evidence of its overlapping efforts to ensure data validity. Given the goal — statewide data collection for the purpose of identifying "red flag" school districts for more intensive monitoring — the state's efforts to validate its data are sufficient to show that it is complying with its obligations under federal law.
Finally, on the question of whether the state timely corrects errors in data reporting, the answer is yes. As already discussed, there are numerous mechanisms for rejecting data submitted by school districts on the spot if it appears invalid. As for the correction of historical data, the policymakers explained at the hearing that historical data does not get corrected if errors are later identified. Instead, the state focuses on getting districts into compliance on their data reporting obligations going forward. This is reasonable and consistent with the state's obligation to correct districts' noncompliance (not their historical data), so the failure to correct historical data (timely or otherwise) does not put the state out of compliance with federal law.
The May 18 order identifies a number of other areas of potential data collection associated with key tenets of the IDEA and its implementing regulations. As discussed earlier, for many of these areas, the plaintiffs and the court monitor say the state must do more than what it already does to meet the federal reporting requirements in the same areas. However, the Court concludes that the state is in compliance with its data collection obligations in each of these areas.
The May 18 order addresses the issue of data collection to assess student performance on statewide assessments, including alternate assessments.
The May 18 order addresses data collection to assess whether the state is effectively monitoring school districts.
The May 18 order directs the state to demonstrate it collects data to identify the disproportionate representation of racial and ethnic groups in special education.
The May 18 order addresses the collection of data to assess how well school districts are providing for the transition of children from Part C to Part B services.
The state's policymakers testified that they collect data about IEP timelines, which tell the state whether students are being transitioned to Part B services in a timely way, and analyze that data to flag districts for more intensive monitoring; however, the state does not collect data to analyze whether transitions are "smooth and effective" (nor is it entirely clear what the state should collect, other than data about timeliness). And the state's policymakers testified that some of the more particularized data on transition meetings are collected during targeted monitoring.
The Court concludes that this data collection, at least at the first tier of the state's monitoring process, is enough. The focus of the regulations on transitioning infants and toddlers from Part C to preschool programs is timeliness. The record contains no indication that late referrals from Part C service providers for IEP evaluations and Part B services are a significant problem meriting separate data collection at the first tier of the state's monitoring system, as was suggested at the hearing. Nor has the evidence presented thus far provided reason to believe that this is the kind of data element on the basis of which school districts should be selected (as opposed to data about, for instance, schools' use of restraint and seclusion or police referrals). And to the extent the plaintiffs and court monitor believe federal law requires the state to collect additional data to monitor IEP implementation for preschoolers (since the implementation of IEPs for children transitioned to Part B services on their third birthday is part of the statutory language) the requirement that the state collect data on IEP implementation, discussed above, addresses this concern.
The May 18 order addresses the collection of data to assess the effectiveness of the state's systems for due process complaints and to evaluate the use of resolution meetings and mediations.
The court monitor says the state does not collect sufficient data to evaluate whether its complaint management processes are effective — pointing to various kinds of data he believes the state needs to be collecting, such as information about whether the state conducted an on-site investigation of a complaint. However, it is not apparent from the record that this kind of information needs to be collected. Moreover, at the hearing, the court monitor's comments about the state's complaint management system focused on his concerns that the state had not shown that it was collecting data on whether resolution sessions are held within 15 days of the filing of due process complaints — a concern that the federal Department of Education raised in a 2011 letter to the state. In post-hearing submissions, the state demonstrated that it has addressed this issue and now collects this information.
The May 18 order also addresses data collection relating to the extent to which children with disabilities are placed in a regular education environment.
The May 18 order requires the state to demonstrate that it collects the data necessary to assess whether school districts are adequately identifying children in need of special education services. The law refers to this concept as "child find."
As this list makes clear, several of these categories overlap with categories of data collection that have been discussed. At the hearing, the court monitor identified four primary concerns with the state's data collection related to its obligation to identify students with disabilities who are not yet receiving special education services but should be. The court monitor's first concern related to the state's ability to disaggregate the data it collects on the basis of migrant status and foster child (or "wards of the state") status, so that the state can conduct the necessary data analysis at Phase 2. The state's policymakers confirmed that they collect this information and can disaggregate it accordingly.
The court monitor's fourth and final concern was discussed in greatest depth at the hearing and related to the number and rate of refusals by a school district to evaluate a child for special education services after a parent or staff referral. Although the court monitor and the state's policymakers seemed to agree at the hearing that these data would be fairly easy to collect (with a slight modification to a data field in one of the state's existing databases), it remains unclear how significant an issue this is and whether it merits data collection on a statewide basis at the first tier of the state's monitoring system. Because the evidence in the current record does not suggest that the state's lack of first tier data collection on this topic makes it unable to effectively monitor whether students are receiving an appropriate education, the Court cannot deem the state out of compliance with federal law for not collecting these data.
For the issue where the state is not in compliance, the state will be required to demonstrate compliance at Phase 4 (at the same time that it's submitting its policies and procedures for review). In the areas where the state has been found compliant, the Court assumes that is the end of the matter — there will be no further examination of those issues in this case. As matters stand, it remains unclear whether Section 13.0 of the consent decree contemplates further proceedings on issues where the state has been found in compliance. If so, this portion of the consent decree may be outdated, as discussed at previous status conferences. The parties should be prepared to begin discussing this issue at the next case management conference, which will take place on September 6, 2018 at 10 a.m.
At that conference, the parties should also be prepared to set a schedule for Phase 2. The Court is of the tentative view that the sequence of written submissions and the structure of the evidentiary hearing should remain the same, and the parties therefore must propose a schedule accordingly. However, if the parties have proposals for alternative approaches for the next phase, they may raise those as well.
A case management statement is due seven days before the case management conference. The standard format for case management statements need not be followed, but the parties should try to address everything they wish to raise at the conference. The policymakers need not attend, although they are welcome to do so, either in person or by phone.