This is the first in a series of essays surrounding the EdTech Efficacy Research Symposium, a gathering of 275 researchers, teachers, entrepreneurs, professors, administrators, and philanthropists to discuss the role efficacy research should play in guiding the development and implementation of education technologies. This series was produced in partnership with Pearson, a co-sponsor of the symposium co-hosted by the University of Virginia's Curry School of Education, Digital Promise, and the Jefferson Education Accelerator.
To improve education in America, we must improve how we develop and use education technology.
Teachers and students are increasingly using digital tools and platforms to support learning inside and outside the classroom, and the number of students taking online courses is roughly four times what it was a decade earlier. Technology will touch the 74 million children currently under the age of 18 as they progress through the pre-K-12 education system. The key question is: What can we do to make sure that the education technology being developed and deployed today fits the needs of 21st-century learners?
Our teachers and students deserve high-quality tools that provide evidence of student learning, and the right kind of evidence: evidence that can tell us whether a tool is influencing the intended learning outcomes.
Evidence and efficacy can no longer be someone else's problem, to be solved at some uncertain point in the future. The stakes are too high. We all have a role to play in ensuring that the money spent on K-12 ed tech in 2016 alone lives up to the promise of enabling more educators, schools, and colleges to genuinely improve outcomes for students and help close persistent equity gaps.
Still, education is complex. Regardless of the quality of a learning tool, no single, foolproof ed tech solution will work for every student and teacher across the nation. Context matters. Implementation matters. Outcomes depend not only on the tool itself but also on instructor practices, student experiences, and multiple other contextual factors.
Figuring out what actually works, and why, requires intentional planning, dedicated professional development, thoughtful implementation, and appropriate evaluation. All of this occurs within a context of inconsistent and shifting incentives and, in the U.S., a particularly complex ecosystem of stakeholders. Unfortunately, despite deep vested interests in improving the system, the current ecosystem is often better at supporting the status quo than at introducing a potentially better-suited learning tool.
That's the challenge taken up by the EdTech Efficacy Research Symposium in Washington, D.C., this week, part of the initiative convened by the University of Virginia's Curry School of Education, Digital Promise, and the Jefferson Education Accelerator. People like us rarely have the opportunity to collaborate, but this issue is too important to go it alone.
Over the past six months, 10 working groups consisting of approximately 150 people spent valuable hours together learning about the challenges associated with improving efficacy and exploring opportunities to address them. We've looked at issues such as how ed tech decisions are made in K-12 and higher education, what philanthropy can do to encourage more evidence-based decision-making, and what it will take to make efficacy and transparency of outcomes core to how ed tech companies operate.
Over the next six weeks, we'll explore these themes here, sharing findings and recommendations from the working groups. Our hope is to stimulate not just discussion but also practical action and concrete progress.
Action and progress might look like new ways to use research in decision-making, such as informational sites or tools that make it easier for education researchers to connect with teachers, districts, and ed tech companies. Collaboration is critical to improving how we use research in ed tech, but it's not easy. Building a shared framework takes time. Acting on that framework is harder.
So, as a starting point, here are three broader lessons about efficacy and evidence from our work so far.
1. Everyone wants research and implementation analysis done, but nobody wants to pay more for it.
We know it's not realistic to expect that the adoption of every ed tech product or curricular innovation will be backed by a randomized controlled trial.
Investors are reluctant to fund these studies, and schools and developers rarely want to cover the cost of expensive research. Officials at the U.S. Department of Education's Office of Educational Technology have pointed out that "it wouldn't be economically feasible for most app creators (or schools) to spend $250k (a low price tag for traditional educational research) to evaluate the effectiveness of an app that only cost a total of $50k to build."
We could spend more efficiently, turning the 15,000 small pilots and adoption decisions already underway into new work and new insights without spending more money. This could take the form of a few well-designed initiatives to gather and share relevant information about implementations and efficacy. Critically, we'll need a sustainability model for that kind of rigorous evaluation to ensure it becomes a standard feature of how adoption decisions are made.
2. We need to recognize that evidence exists on a continuum.
Different kinds of evidence can support different kinds of decisions. What is important is that each decision is supported by an appropriate level of evidence. Published frameworks that lay out the different types of evidence, and how each should be weighed, provide a useful reference for educators. It would be wise to look at the scale and cost of a decision and determine the appropriate type of evidence accordingly.
A number of decision-support tools can provide useful help in making adoption decisions and evaluating the use of technology.
It's important to remember that researchers and philanthropists may use education research for different purposes than a college, university system, or district would. Academic researchers may be looking to identify causal connections, learning gains, or retention rates, while a district is often focused on a specific context and implementation (what works for schools similar to mine?).
When possible, traditional randomized controlled trials provide useful information, but they're often not affordable, feasible, or even appropriate. Many districts, schools, and colleges, for example, are not accustomed to or well versed in undertaking this type of research themselves.
3. It's easy to blame other actors for the current lack of evidence-driven decisions in education.
Everyone we spoke to agrees that decisions about ed tech should be made on the basis of merit and fit, not marketing or spin. But nearly everyone believes the problem is caused by other actors in the ecosystem, which means that progress will require hard work and coordination.
For example, investors often don't screen their investments for efficacy, nor do they push their portfolio companies to undertake sufficient research. Not surprisingly, this is partly because such research is costly and doesn't necessarily drive market growth, and partly because market demand is not driven by evidence: the selection of tools and technologies is rarely driven by learning impact or efficacy research. That may be shifting slowly, but much more needs to be done.
Entrepreneurs and organizations whose products are of the highest quality are frustrated that schools are too often swayed by competitors' flashy sales tactics. Researchers feel that their work is underappreciated and underutilized. Educators feel overwhelmed by the volume of products and claims, and are frustrated by a lack of independent information and professional support. These are many moving pieces that must be brought together to improve the system.
Ensuring that ed tech investments truly help close achievement gaps and expand student opportunity will require engagement and commitment from a disparate group of stakeholders to invent a new normal, so that our collective progress is deliberate and meaningful. To make that progress, we must bring the conversation about efficacy and the use of evidence to center stage.
That's what we're hoping to continue with this symposium. We've learned much, but the journey is just beginning. We can't do it alone. Follow and join the conversation on Twitter with #ShowTheEvidence.
Authors:
- Aubrey Francisco, Chief Research Officer, Digital Promise
- Bart Epstein, Founding CEO, Jefferson Education Accelerator
- Gunnar Counselman, Chief Executive Officer, Fidelis Education
- Katrina Stevens, former Deputy Director, Office of Educational Technology, U.S. Department of Education
- Luyen Chou, Chief Product Officer, Pearson
- Mahnaz Charania, Director, Strategic Planning and Evaluation, Fulton County Schools, Georgia
- Mark Grovic, Co-Founder and General Partner, New Markets Venture Partners
- Rahim Rajan, Senior Program Officer, Bill & Melinda Gates Foundation
- Robert Pianta, Dean, University of Virginia Curry School of Education
- Rebecca Griffiths, Senior Researcher, Center for Technology in Learning, SRI International