What’s Left of What Works
A conversation with Elizabeth Tipton about an IES under siege and the future of federal education research

In February 2025, the now-dissolved DOGE began to gut the Department of Education. The Institute of Education Sciences, the department’s research and statistics division, was devastated. DOGE terminated nearly 90% of IES staff and cancelled $900 million in contracts, many of which were nearly finished. These cuts hit muscle, disrupting key IES tasks and obligations, including the collection and dissemination of vital education data that states and stakeholders rely on. (For example, we’re still waiting on data promised this past December!)
After twelve months of uncertainty, ED released the Reimagining The Institute of Education Sciences report on February 27. Authored by senior advisor Dr. Amber Northern, the report details recommendations for reforming the IES to be more efficient, timely, actionable, and accessible to practitioners. While many of these calls for reform were not new, they took on a new urgency after the severe DOGE cuts.
In the wake of this report, the CEP recently spoke with Dr. Elizabeth Tipton, professor of statistics and data science at Northwestern and past president of the Society for Research on Educational Effectiveness, one of the organizations that have filed suit against ED and Secretary McMahon over DOGE’s cuts. We’d first reached out to Dr. Tipton to better understand how educators should use gold-standard resources on evidence-based instruction, such as the What Works Clearinghouse Practice Guides, but the timing of this new report gave us even more to discuss with her. Before we spoke, she directed us to this lucid commentary from Dr. Betsy Wolf, which we highly recommend you read as well.
[This conversation has been edited for length and clarity.]
You’ve described the funding cuts to the IES as an “existential threat” to education research. Has your assessment changed at all in light of the recent Reimagining The IES report?
I was surprised the report came out as soon as it did, and it was nicer than I thought it could be. I think Amber Northern did a pretty good job given the situation she was in. She took the job seriously. I appreciate that throughout the report there’s an emphasis on the need to meet statutory requirements; maybe they should meet some differently than in the past, but there are statutory requirements that they have not been meeting.
But the report largely echoes what many other reports say, and what many in the research community have been saying should happen. I was on the panel for a National Academies Consensus Study Report in 2022, and we said very similar things about the needs and concerns of communities and schools, and that they needed to do a better job of meeting them. So that’s not radically new.
Right now, it feels like we’re just waiting to see whether there will still be a Department of Ed. And where is IES going to sit if they get rid of the Department of Ed? It’s a nice report, but the Department is being dismantled, so I don’t feel fundamentally different after reading it. I won’t feel good until we have a very clear signal that they have to restaff the IES to capacity and that they have a specific deadline for doing so. That’s about the best we can hope for. At this point, we can’t ask them to bring back the people or contracts that they let go. Too much time has passed.
You know, $900 million in contracts is a third of the federal investment in education research. In one sense it’s not that much; we don’t invest that much in education research at all. But as a share of the total, that hit is a huge amount. And when you take into account the $800 million that was cut from the NSF, together that’s nearly two-thirds of the federal education R&D cut in a year. That’s substantial. We really have lost a field in this.
A recent piece of ours raised questions about how educators can best use the recommendations in the What Works Clearinghouse Practice Guides. The WWC is the best federally funded resource we have for finding out what actually works, and yet even when teachers look to the practice guides, they aren’t always actionable. What explains this disconnect?
I think it’s important to remember that the WWC is not the only such clearinghouse that the government has. There are multiple federal evidence clearinghouses in the US, and they’re common in other countries as well. Critics of the WWC often speak as if it’s this weird thing that IES invented, but the idea of having a clearinghouse of information isn’t unusual. They exist as an enterprise for screening evidence, based upon some criteria, and collecting that evidence so the government knows which interventions are worth investing in. We can’t use “Can you get it published?” as a filter for good research — you can get anything published somewhere. For this reason, most of these clearinghouses will focus on studies with strong research designs, because they’re focused on finding interventions that could change student outcomes and that are worth investing in. That’s a very special part of education research.
That said, the thing I think I’ve learned about the WWC over time is that we haven’t worked hard enough to understand how people make decisions in schools. I like to say my cohort and I ‘grew up’ in IES: I started graduate school in 2006, and so I was in one of the first cohorts of one of the IES pre-doctoral fellowship programs, at some of the first IES meetings, and at the beginning of the SREE. We all grew up in this system. Many things we took for granted as facts, and as time passed it struck me that we needed to ask whether these facts were true. One of them was that teachers are the primary users of the WWC — that teachers go to the WWC and base their decisions on the information they get.
I started becoming even more interested in how and whether people use this data and evidence through a former student of mine, Katie Fitzgerald (Villanova University), and she and I now have a grant focused on that decision-making process, alongside a human-computer interaction researcher, Alex Kale (University of Chicago), and with an advisory board with expertise in school decision making. Largely, what I’ve learned is that teachers don’t use the WWC. In elementary schools, teachers aren’t typically the ones deciding which reading programs or math programs to use. Major curricular decisions for the whole grade level, for the whole school, are often made by the school district. Sometimes the school decides, and maybe once you get to high school teachers could be making decisions about using this versus that book, but for the most part it’s a district higher-up making decisions in a very standardized way.
This realization came after years of conversations — at conferences, with other methodologists and researchers, and even with those designing the WWC — in which we all talked about teachers being the main users of the WWC. We actually had no idea how people used evidence, or that this database was built on a premise of how the world worked that was incomplete, and that therein was a knowledge mobilization problem. There was an idealized vision by some academics that “if we just build this database, like they do in medicine, then teachers will come, look at it, and improve their teaching,” all without knowing teachers aren’t the primary people making these decisions. They of course are not going to go to a database to do that. And so I think that for a long time, the issue with the WWC has been that it was just really divorced from reality.
But as Betsy Wolf says, there has been a real effort in the last few years to become very aware of this. The National Academies report pushed back on this in 2022. In many ways, it wasn’t supposed to be about the WWC; it was really not in our charge, but we couldn’t help but make comments about it. Now this question of how people actually get and use evidence has come to the forefront. The IES knows that one of the things that gets downloaded the most off of its website is the practice guides, and you see that in the Reimagining report. The practice guides are really important; they know that they’re downloaded a lot, that they’re used, and that people talk about them a lot. They combine evidence, expert panels, data, and reviews that say ‘These are ways to think, these are the top interventions, and these are the principles of a good reading program in this area.’ Those tend to be more useful, partly, I think, because we can just never have all the studies necessary to cover every possible combination of grade-level intervention and context that people would need to turn to evidence for every decision.
How much does the criticism aimed at the IES & WWC grow out of a general misunderstanding about how research works? With medicine, there seems to be a better understanding that not every medical intervention that gets brought to trial will work; a lot will fail, and it takes a lot of investment before those failures give way to something that really works. It doesn’t seem like there’s anywhere near that kind of patience in education.
That’s a great question. Part of the problem for IES is that Mark Schneider, the previous director, was constantly complaining that the science was full of nothing but failures. But you’re right that this is exactly what you would expect. First of all, what we expect in science is that we all have a lot of great ideas, and most of them don’t work. That is the nature of science. If you look over at medicine, the number of pharmaceutical trials that make it from Phase 1 through Phase 3 — from the lab to FDA approval — is very low. Second of all, it takes a very long time. For research to go from basic science to efficacy research and finally to the FDA takes something like 17 years.
When you take into account that IES is just 25 years old, and they’ve had to build an entire field of people to do this work, then if at the end of the day the main thing we got out of IES is the science of reading, that might be about what we’d expect for this investment at this point. Over this period, think of how many graduate fellows, postdocs, and early career researchers they funded in this paradigm of thinking. They showed that with substantial investment, you can actually build a new part of a field. You’ve gotten a lot of studies that have collectively come together to build what we now call the science of reading. Many of those were funded by IES. That is a huge success for us! There are other things, too, but that’s about what we would expect — that most things would fail, but a few would be monumentally impactful. That’s how science is.
You’ve said the IES needs to be given the strength to regulate curriculum materials based on what we know works in education; I don’t see anything like new regulatory strength in Reimagining. Even if every recommendation in the report is adopted, the IES is restaffed, and the cancelled contracts are reinstated — without new regulatory powers, are we largely left in the same place?
I honestly think so. The IES has always talked about itself in relation to medicine. Many wanted IES to be modeled on medicine, and it has mirrored medicine’s use of phased trials with phased trials of its own. But there are two major mechanisms that medicine has had that we’re still missing. One is substantially more money. Look at the NIH’s budget, and then look at our budget for the IES. Historically, the NIH’s has been at least 10 times as large. They can do so many more trials, so much more quickly, and as a result their knowledge production is much bigger and much faster. When you have that much money, you can actually also grow the number of people in the field. More people means more research ability and a more robust research economy, and your evidence base grows really fast when you have that.
Two is regulation. We don’t have either of those things. There’s a lot of pressure right now across the government to minimize regulation, even pressure on the FDA. Tech companies want minimal regulation in education so they can make money off of slipping new products into classrooms as quickly as they can; other companies want minimal regulation for the same reason. But the American people need regulation, and I think that’s one of the fundamental tensions here.
Growing up in the IES system, we understood research as first doing a small study to develop an intervention, then doing a pilot study, then doing an efficacy trial, and then — if it works — doing an effectiveness study. This is how we would know stuff worked, and then it would go into the WWC, where people would choose from it. But no publisher’s curriculum has to be evaluated in order to be sold to schools. So although you have careful scientists over here doing due diligence studying interventions, wanting to ensure what they’ve developed is robust and really works before they bring them to schools, anyone else can just take whatever curriculum they want to schools and tell them it works.
IES is a well-meaning system, and in a way I hate how everything’s blamed on IES — like, “Oh, it’s the fault of IES that there’s not enough evidence when I go to the WWC for every decision I need to make.” Well, that’s because you didn’t give enough money to IES, right? IES was only invented 25 years ago — you had to build an entire field, and now that you’ve finally gotten the field, you cut off funding. But you need money to do this stuff! If you’re not going to give it funding, then how can you expect there to be evidence for every decision? There’s this expectation that IES would somehow solve NAEP scores by itself, as if people are required to listen to researchers!
I’ve been thinking of this as a tension at IES. The Institute of Education Sciences was created as a scientific agency tasked with developing a deeper, scientific understanding of how kids learn, with the idea that this deep, scientific knowledge would eventually improve learning outcomes for students. Yet somewhere along the way, it started being held to the standard of successfully changing schools now, in a very urgent way. Are we building a science here, or are we a machine for creating interventions that are marketed to schools? Those are two different enterprises, and I feel like there’s been a bit of a bait and switch, that we created IES for one thing, and now we’re being judged for another. That is not how it was set up. It feels like instead the research community is getting blamed for other people’s mistakes, which just feels unfair.