After years of building up their digital ecosystems, school districts are entering a new phase. The question heading into the 2025-26 school year isn’t whether to use edtech. It’s which tools are working, which ones aren’t, and how to tell the difference.
District leaders are under increasing pressure to improve student outcomes, support teachers, and use limited funds wisely. Technology remains a key part of that strategy, but not all tools contribute equally. The challenge is deciding what stays, what goes, and what truly delivers results.
That challenge is compounded by the sheer volume of available metrics. Edtech companies often present usage dashboards, testimonials, or standards alignment charts. While those indicators can be helpful, they don’t always answer the most important questions:
- Is this helping students learn?
- Is it supporting teachers in practical, sustainable ways?
- Is there evidence that it’s working in classrooms like ours?
The most effective decisions I’ve seen, both as a district administrator and now leading research and analytics at a global edtech company, are grounded in three essentials: how tools are used in context, whether they’re backed by independent research, and whether they deliver measurable gains in student learning.
Usage Data That Informs Instruction
Most digital tools can show how often students log in or how many minutes they spend on a platform. But frequency doesn’t equal effectiveness. The real value lies in how a tool is used within instruction and whether that use leads to deeper engagement and stronger learning outcomes.
That’s where nuanced, actionable usage data comes in. The strongest districts aren’t just reviewing platform activity reports; they’re using data to understand:
- How teachers are embedding tools in daily instruction
- How students are interacting with specific features or content
- How students are performing and where patterns diverge across schools, grades, or student groups
This level of detail allows leaders to spot what’s working and where implementation needs support. For example, if one school sees consistent student growth and high engagement while others lag behind, it may point to a training gap or a difference in how the tool, resource, or intervention is introduced. If a feature designed for remediation is barely used, it could signal that educators aren’t aware of its value or that it’s too difficult to access during a lesson.
Usage and performance data should also drive professional development and tailored coaching that responds to the real-world needs of educators. Is the program being used in ways that drive student understanding and meaning-making? Are there features that boost rigor and could be accessed more often for better results? Are students spending too much time on low-level tasks?
Insightful data can guide targeted improvements that raise the bar for everyone. Ultimately, the data provided by products and programs should support feedback loops between classroom practice and district strategy.
Research That Stands Up to Scrutiny
In an era of increased accountability, claims about being “evidence-based” must be more than marketing language. Districts deserve to know that the tools they’re investing in are grounded in credible, third-party research and that vendors are transparent about what’s known and what’s still being tested.
ESSA’s tiers of evidence continue to be a helpful benchmark. Tools supported by Tier I, II, or III studies, including randomized controlled trials or quasi-experimental designs, offer the strongest validation. But even tools in earlier stages of development should have a clearly articulated logic model, a theory of change, and emerging indicators of impact.
District leaders should ask:
- Who conducted the research, and was it an unbiased, independent team?
- Does the sample size reflect real school environments, including high-need and/or diverse populations?
- Are the outcomes aligned with what district leaders are trying to achieve, such as gains in math or literacy performance, content mastery, or engagement?
Importantly, research is not a one-time effort; it should be ongoing. The strongest edtech partners continue to evaluate, refine, and improve their products. They publish third-party and internal research findings, learn from real-world implementation, and adjust accordingly. That level of transparency builds trust and helps districts avoid tools that rely on glossy brochures rather than genuine results.
Alignment that Leads to Real Gains
Too often, standards alignment is treated as a checkbox: a product or program lists the standards it covers and calls it complete. Content coverage and alignment without a clear tie to grade-level expectations and student outcomes is a hollow promise.
The real test is whether a tool helps students master the skills and knowledge embedded in those standards and whether it supports teachers in helping all students make progress. This requires more than curriculum alignment. It requires outcome alignment.
Districts should look for:
- Evidence that students using the tool show measurable growth on formative, interim, or summative assessments
- Disaggregated results by race, income, English learner status, and special education status to ensure the tool works for all students
- Proof that learning is transferring: are students applying, or could they apply, what they learn in other contexts or on more rigorous tasks?
An edtech product that delivers results for high-performing students but doesn’t address the needs of those who are still on the journey to become expert learners will not help districts close opportunity gaps. Tools that truly align with district goals should support differentiated instruction, provide real-time feedback, and drive continuous improvement for every learner.
Raise the Standard: What the New Baseline for Edtech Should Be
This year, districts are making harder choices about what to fund and what to phase out. Budgets are tighter. Expectations are higher. This moment is not about cutting innovation; it is about clarifying what counts. The baseline for edtech must shift from tools that simply exist in the ecosystem to those that actively elevate it. Districts that succeed in this new landscape are those asking sharper questions and demanding clearer answers:
- How is this being used in classrooms like ours?
- What evidence backs up its impact?
- Does it help our students learn, not just practice?
District leaders, now more than in years past, are less interested in vendor promises and more focused on evidence that learning took place. They are raising the bar, not just for edtech providers but for themselves. The strongest programs, products, and tools do not just work in theory. They work in practice. And in 2025–26, that is the only standard that matters.