One would think that there's a strong correlation between a positive school climate and student performance, right? Maybe not, according to a study by Associate Professor of Sociology Joshua Klugman.
Dr. Klugman found that surveys measuring a school's climate may not be as effective as previously believed. We asked him about his research, what constitutes school climate and whether there's value in measuring it.
What is school climate, and why is it important?
"School climate" envelopes things like student survey reports of safety, respectful peers and caring teachers, as well as teacher survey reports of collaborative colleagues and principals. These surveys play a major role in education. Students and teachers spend time filling them out (these surveys can be long and tedious to take). Principals, teachers, and parents pore over their schools' results. District and state personnel use the surveys to monitor how well schools are doing, and educational researchers use this data to argue that schools should engage in various practices (namely cultivating collaborative relationships among teachers, parents and principals).
States, schools, districts and foundations pay not-for-profit and for-profit organizations to run the surveys and analyze the data. In fact, the school climate measures I am using are branded as the "5Essentials," and the University of Chicago sells its services as a collector and analyzer of 5Essentials data.
The successor legislation to No Child Left Behind, the Every Student Succeeds Act, mandates that states incorporate non-academic factors into their school accountability policies, which will likely lead to expanded use of this kind of survey.
How did you get interested in studying school climate and its effectiveness?
I have always been interested in the issue of how schools affect students. This is a very controversial topic; for the past 50 years, educational researchers have been debating whether or not the kind of school a student goes to affects her outcomes, like test scores or going to college. Researchers have been frustrated by the mixed findings regarding the effects of school characteristics like their demographics (such as schools where most students come from impoverished families or schools where most students come from affluent ones) and resources (namely money). There have been calls to go beyond these factors and focus more on concrete school practices, which school climate is supposed to capture.
From 2014 to 2016, I took a two-year leave of absence from Temple University and worked at the University of Chicago Consortium on School Research. In 2010, Consortium researchers published an influential book, Organizing Schools for Improvement, showing that school climate mattered a lot for schools' "productivity"—meaning, students' test scores. That book used data only on Chicago elementary schools over time. When I came to the Consortium, I was part of a team that worked on a follow-up study that was able to get climate data on most schools in Illinois. Unfortunately, that study was limited because we only had one year of climate data. We did show there were some associations between climate and student outcomes (although for high schools the effects were pretty much limited to Chicago and did not appear elsewhere in the state).
After that report came out, I was allowed to pursue other research questions. I did not set out to test the effectiveness of school climate, because I thought the 2010 book put that matter to rest. But as I did more analyses, it became apparent that I was seeing a different picture of school climate's effects than the book provided.
How does one determine if school climate affects students?
The usual way researchers try to figure out whether or not school climate matters is to make comparisons between students in schools with weaker and stronger school climates.
I take this approach as well. I compare students' test scores at the end of their elementary years, on-time 9th-grade promotion, and on-time high school graduation, depending on whether they attended an elementary school with a weaker or stronger climate. The problem is that there could be other student or school factors that matter.
For example, schools may have stronger climates because they are serving students who are coming into the school with stronger academic backgrounds, and that means it is easier to have better student outcomes as well. Researchers like myself try to get around this by accounting for various student and school characteristics. The University of Chicago data allowed me to get richer measures of the factors that tend to go along with stronger school climates.
But I also go beyond this approach. I compare cohorts of Chicago students who attended the same elementary schools in different years. So we can actually ask: if schools improve their climate, do their students experience better outcomes?
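To make the logic of these two comparisons concrete, here is a minimal, purely illustrative sketch in Python. It is not the study's actual code: the data file and column names (test_score, climate, prior_score, low_income, school_id) are hypothetical stand-ins, and the within-school cohort comparison is approximated here with school fixed effects.

    # Illustrative sketch only; assumes a hypothetical student-level file
    # with one row per student and these made-up columns:
    #   test_score  - end-of-elementary test score
    #   climate     - climate measure for the student's school in that year
    #   prior_score - incoming academic preparation (a likely confounder)
    #   low_income  - indicator for family poverty
    #   school_id   - elementary school identifier
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("students.csv")  # hypothetical dataset

    # 1) Between-school comparison: relate outcomes to climate while
    #    adjusting for student characteristics that travel with climate.
    between = smf.ols("test_score ~ climate + prior_score + low_income",
                      data=df).fit()

    # 2) Within-school comparison: school fixed effects absorb everything
    #    stable about each school, so the climate coefficient reflects
    #    whether cohorts fare better when their own school's climate improves.
    within = smf.ols("test_score ~ climate + prior_score + low_income"
                     " + C(school_id)", data=df).fit()

    print(between.params["climate"], within.params["climate"])

In the second model, the climate coefficient is identified only by changes in a school's own climate across cohorts, which mirrors the question of whether students do better when their school's climate improves.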
Do they? Are student outcomes better when their schools have better climates?
The effects look pretty weak. There are small effects on test scores and high school graduation, and no effect on on-time promotion to high school.
Why does this study get different results from those of previous University of Chicago research, including the report you co-authored last year?
There are a couple of issues at play here. Take the report I co-authored with them last year, showing that school climate was related to student outcomes in Chicago (including graduation rates). Because we were looking at schools across the state of Illinois, we did not have data on the academic preparation students had when they entered their high schools. I think our finding about Chicago public high school graduation rates was due to the fact that academically stronger students enroll in high schools with better climates.
So, one reason my current study finds very little effect is because I am better able to account for student and school factors that matter for student performance.
But there is one other major difference between my approach and that of others, including the more sophisticated studies like Organizing Schools for Improvement. Namely, I am trying to look at outcomes that reflect the cumulative effect of schooling, such as student test scores at the end of their elementary school years or on-time high school graduation. Other studies tend to focus on how well students do in the particular year they experience worse or better climates. Sure, there might be an effect in the same year you attend a school with a stronger climate, but my findings indicate it does not accumulate into any long-term advantage.
So, school climate doesn't matter? It doesn't matter if schools are safe, or if teachers collaborate, or if principals enact high standards?
There are two possibilities. One possibility, implied by the question, is that the school climate argument is overstated and that the concept of school climate is not useful. This is not as far-fetched as it may seem. One school of thought in education research is that schools are "loosely coupled"—that is, there is not a lot of coordination between the policy environment, school administrators and classrooms. Student performance may just be based on how well teachers teach and to what degree teachers inspire students to work hard, while broader sets of relationships—like those among teachers and administrators, among teachers themselves, or even among students—matter less.
The other interpretation is that school climates matter, but we just cannot measure them well with surveys. In other words, the problem is with how we measure the concept, not with the concept itself. It could be that even the most thoughtful survey questions are incapable of getting at the concept.
I cannot say with any certainty which of these scenarios is going on, although my suspicion is that the main problem lies with the measures rather than with the concept.
If one cannot measure a concept well, does that mean it's useless?
Not necessarily. Concepts can be impossible to measure but still be useful. A school intervention could involve strengthening school climate (for example, by exhorting principals and teachers to build trusting and collaborative relationships). If the intervention succeeds in improving student performance, school climate is a useful concept even though we can't measure it well. In fact, that's what happened in Chicago in the 1990s. James Comer's School Development Program (SDP), which focuses exactly on those collaborative relationships, was tried in randomly selected schools. Researchers found that the Comer program did improve student performance in those schools, but this improvement in student performance was not explained by improvements in survey measures of school climate. I think that's a good illustration of the limits of surveys.
So the conclusion is that we should ditch surveys in education settings?
That is the implication of my findings, yes. I say in the article that I welcome attempts to confirm or disconfirm my findings, which could be easily done in Chicago and in other cities. If they are confirmed, the education sector should direct its resources and energies toward something other than these climate surveys.
This goes beyond K-12 education. Colleges and universities, Temple included, rely a lot on student surveys. I am of course talking about the student feedback forms used to evaluate how well we instructors did in each of our courses. It turns out that many studies using rigorous methods find that taking a class with a highly rated instructor has no effect on what students learn; it does affect how well they do in subsequent courses. I think the same issues with school climate surveys also afflict student feedback forms.
What was the University of Chicago's reaction to the article?
I worked at the University of Chicago Consortium on School Research from 2014 to 2016, and I conducted this study during my last year there. I got great feedback from my colleagues, who were very helpful in identifying analytical choices that could have stacked the deck against finding effects of school climate. I think I have addressed their concerns, and I am grateful for their feedback.