Opinion: Analysis: Ed Tech Decision Makers Are Under Pressure in Higher Education /article/analysis-ed-tech-decision-makers-are-under-pressure-in-higher-education/
This is the fourth in a series of essays surrounding the EdTech Efficacy Research Symposium, a gathering of 275 researchers, teachers, entrepreneurs, professors, administrators, and philanthropists to discuss the role efficacy research should play in guiding the development and implementation of education technologies. This series was produced in partnership with Pearson, a co-sponsor of the symposium co-hosted by the University of Virginia’s Curry School of Education, Digital Promise, and the Jefferson Education Accelerator. Click through to read the first, second, and third pieces.
In higher education, ed tech decision makers are in the hot seat. They face the demands of end users, ranging from Luddites to technophiles, and the pressure of vendors who have an answer for everything — even when there is no question to start with. Ed tech tools and applications have proliferated in an environment demanding that higher education keep up with the 21st century, serve a wider audience, and better prepare students for careers.

At the same time, we now expect decision makers to ensure that their ed tech choices lead to better student outcomes. These might include higher grades, greater course completion rates, or a faster time to graduation.

These standards are not imposed on many other decisions in higher education. Faculty tenure is not based on solid evidence that students have learned anything from their courses. Ed tech is expected to be the silver bullet for many of the challenges of higher education, so decision makers are under pressure to deliver on these expectations.

As part of this work, we wanted to know how ed tech decisions are being made and what can be done to support and improve that process. To that end, over the past year, we interviewed 52 decision makers in higher education, ranging from presidents and chief information officers to directors of digital and eLearning.

We found a community buffeted by a variety of influences, facing a complex decision-making process that could be improved by tailoring the research gathered to match the context and magnitude of the decision.

Here are just some of our findings.

Decision makers struggle to process an excess of information on ed tech products and trends

They are diligent in constantly gathering information, largely collected from colleagues, whether at their own university or at other institutions of higher education, and at ed tech–related networking events such as conferences and consortium meetings. While there is safety in being a “near-follower,” there is also a risk of becoming trapped in a higher education echo chamber. The challenge is that the information gathered through these informal connections is rarely grounded in rigorous evidence of ed tech effectiveness.

Institutions identified as ed tech opinion leaders, change makers, and innovation leaders were also the ones most likely to step outside higher education circles and talk to startups and other organizations about how to solve challenges with technology and how to overcome impediments to productive implementation.

If we want to see more of this kind of innovative culture in higher education, people will need to be incentivized to take risks. They need support to collect good-enough evidence to make decisions, and room for error as well as trial.

We found that improved decision-making should focus on needs, involve multiple stakeholders, and look for solid evidence

Tension exists at many institutions between starting the decision-making process with needs and starting with solutions. In some instances, institutions follow a (more or less) rational model of decision-making, first identifying needs and subsequently looking for appropriate ed tech tools to address them. Others start with the ed tech tools and try to match them to unsolved problems, whether or not there is evidence to suggest they are an appropriate solution. Still other institutions work from both ends of the spectrum, keeping track of ongoing needs while monitoring available solutions.

A common theme that arose in our interviews is the need to obtain buy-in from all those who will be involved in implementing and using the product. The challenge is balancing buy-in with efficiency and focus. Nonprofits aim to build buy-in during the decision-making process, sometimes spending excessive amounts of time, money, and effort building consensus for choices between only marginally different product options. For-profits are more likely to make a decision centrally, but sometimes too swiftly to allow for adequate involvement of stakeholders or anticipation of implementation challenges.

We have seen some decisions deferred to departments and individual faculty members, particularly for items that cost little but facilitate the work of researchers and educators. This shift brings positives and negatives. Freedom of choice and freedom from “red tape” lead to redundancy in functionality among acquired tools. IT struggles to support countless products without having had a chance to vet them. And buyers unknowingly click through license agreements that violate regulations on issues such as data privacy. Finding ways to standardize and streamline ed tech acquisitions is a priority.

There is little doubt that ed tech decisions should be made collaboratively by a mix of administrative and academic leaders and IT experts. Adequate attention must be paid up front to the potential demands of scaling up desirable applications, including change management and estimating total cost of ownership. It’s not just the purchase price of the product that needs to be considered; there are also ongoing support and training costs, and the need to expand digital infrastructure. Currently, ed tech decision makers rarely ask for evidence that a product will improve student learning. A culture of continuous improvement needs to be built through iterative ed tech decision-making cycles.
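To make the total-cost-of-ownership point concrete, here is a minimal sketch, in Python with entirely hypothetical figures and a made-up helper function, of the kind of multi-year estimate a decision-making team might set beside a vendor’s quoted price.

```python
def total_cost_of_ownership(purchase_price, annual_support, training,
                            infrastructure_upgrade, years):
    """Rough multi-year estimate: the quoted price plus the ongoing support,
    training, and infrastructure costs that price leaves out."""
    return purchase_price + training + infrastructure_upgrade + annual_support * years

# Hypothetical figures for a campus-wide tool over a three-year horizon.
tco = total_cost_of_ownership(purchase_price=60_000, annual_support=15_000,
                              training=10_000, infrastructure_upgrade=20_000, years=3)
print(f"Estimated three-year cost of ownership: ${tco:,.0f}")  # well above the $60,000 sticker price
```

Even rough numbers like these show why the sticker price alone is a poor basis for a decision.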

While research is happening, it should match the context and magnitude of the decision

All ed tech decision makers conduct research, loosely defined, to inform their decision-making. Most commonly, this involves gathering input from faculty, staff, and students about their ed tech–related needs and experiences, and reviewing student outcomes after implementing an ed tech strategy or product. But the emphasis is more on user experience and whether the technology is well implemented than on whether it improves student learning.

An abundance of digital data may yield the perception that ed tech decisions are being made based on evidence, but, as many researchers would argue, data are only as useful as the questions that are asked of them.

Scientifically based research relevant to learning through ed tech is rarely consulted. This is partly because so little exists, but also because there appears to be a strong preference among higher education decision makers for locally produced information.

Duplication of effort occurs as many of the same ed tech products are piloted at multiple institutions. There is clearly room for an online repository for sharing the results of ed tech pilots and studies. A set of guidelines for the robust design of pilot studies would also be helpful, for example recommending the inclusion of comparison groups and an emphasis on measuring actual student learning, as opposed to only grades earned or courses completed. Institutions should also collaborate to conduct multi-site pilots, which could collect common indicators of success at large scale and across diverse users and contexts. To streamline the selection and procurement process, such an online repository could be combined with a platform that facilitates ed tech acquisitions.

Funders could support the production of better research evidence to inform ed tech decision-making by establishing tiered levels of funding for ed tech. The degree of methodological rigor should mirror the level of higher education investment in the product. For example, the acquisition of a software package that costs $20,000 might merit a few faculty and student tests in a user experience lab. On the other hand, adaptive learning systems in which universities might collectively invest hundreds of millions of dollars would merit a large-scale, multi-site, randomized controlled trial to assess impact on student learning. Large investments should also be optimized by a commitment to iterative evidence-gathering to inform continuous product improvement.

Our interviews with these 52 key stakeholders were just the beginning of our work to better align efficacy research in ed tech with the decisions being made by practitioners and institutions. Our hope is that those involved in these critical decisions, which affect students, institutions, and ultimately student success, will engage in our ongoing work in this area and help figure out how to continuously improve the decision-making process for these important tools.

Note: The interviews referred to in this article were part of a study, “EdTech Decision-making in Higher Education,” conducted for the EdTech Efficacy Research Symposium held in Washington, D.C., in May 2017. The full report of the study is available online.

Opinion: Analysis: No ‘Gold Standard’ in Ed Tech; Continuum of Evidence Needed to Ensure Technology Helps Students /article/analysis-no-gold-standard-in-ed-tech-continuum-of-evidence-needed-to-ensure-technology-helps-students/

This is the third in a series of essays surrounding the EdTech Efficacy Research Symposium, a gathering of 275 researchers, teachers, entrepreneurs, professors, administrators, and philanthropists to discuss the role efficacy research should play in guiding the development and implementation of education technologies. This series was produced in partnership with Pearson, a co-sponsor of the symposium co-hosted by the University of Virginia’s Curry School of Education, Digital Promise, and the Jefferson Education Accelerator. Click through to read the first and second pieces.

To improve decision-making in education technology, and to ensure the technology we invest in actually yields positive outcomes in student learning, we need evidence appropriate for the needs and objectives of people using it — a teacher deciding to try an app, an administrator introducing a new reading program, or even a company improving its product to meet the needs of students and teachers.

A general consensus emerged during discussions at the recent EdTech Efficacy Research Symposium. Educators, researchers, funders, and companies all agreed: We need a robust continuum of evidence.

1. Level and kind of evidence should match need and purpose.

“Do not kill a mosquito with a cannon.” —Confucius

Different types of evidence are needed at different levels of district implementation and stages of ed tech product development. Research has shown that districts tend to rely primarily or even exclusively on peer recommendations and pilots, rather than rigorous evidence, to decide what ed tech to purchase (see Figure 1). However, when it comes to evidence, the same level of rigor may not be necessary for all ed tech decisions.

Figure 1. District Reliance on Rigorous Evidence in Ed Tech Purchasing Decisions (source: http://digitalpromise.org/wp-content/uploads/2014/11/Improving_Ed-Tech_Purchasing.pdf)

Useful research begins with proper design. Academic research often consists of traditional large-scale approaches and randomized controlled trials that seek to demonstrate causal connections that can be extrapolated. But large-scale research designs are often inappropriate for informing more rapid, practical decisions.

School leaders are often making smaller-scale decisions, such as whether to allow teachers to use an educational app to enhance the standard curriculum. For these types of decisions, most school leaders don’t require absolute certainty that an app is the key factor in improving student achievement. Instead, they want to know the extent to which teachers and students adopt the technology (e.g., information on usage) and whether an app has the potential to improve outcomes for their students and teachers. Other types of research, such as rapid-cycle evaluations (RCEs), can help school and district leaders produce the evidence they need to inform their decisions on a reasonable timeline and at a reduced cost.

Smaller, more nimble research approaches can also inform other questions educators and school leaders face around context and scope. Often, they want to know which tools work in specific circumstances. If, for example, educators are examining a tool specifically to use in an after-school program, research that offers overall general outcomes might not be the most useful. A targeted RCE could assess whether a tool is accomplishing its intended objectives in this setting and for the specific sample. Evidence collected through smaller studies in local contexts may not enable generalizability, but the collection of some evidence is better than no evidence and is certainly better than a reliance on biased marketing material or the subjective opinions of a small group of peers.

Additionally, traditional research often results in a binary estimate: Either the treatment worked or it didn’t. But learning technologies aren’t static interventions, and contexts differ immensely. Products are built iteratively;  over time, functionality is added or modified. A research approach that studies a product should also cycle with the design iterations of the product and demonstrate the extent to which it is changing and improving over time. Further, snapshot data often fail to fully capture the developmental context of a product’s implementation.

It would be wise for educators to consider the scale, cost, and implications of any given decision when determining the type of evidence that is appropriate for informing it. A continuum of evidence, based on factors like product cost and implementation risks, captures the range of ed tech product research possibilities (see Figure 2). For example, testimonials from several teachers who have used a product in a similar setting may be enough for a teacher to try an app in her classroom. But if a district wants to implement a reading program across all of its elementary schools, stronger forms of evidence should be collected and reviewed before making such a resource-intensive purchase.

Figure 2. Evidence Needed for Ed Tech Decision-Making

Several organizations have created resources to help educators identify and evaluate a range of evidence, and others highlight the hierarchy of rigor in different research approaches.

A number of tools can provide useful support in conducting evaluations of the use of technology. And to provide a common language for these variations in ed tech research approaches, the Learning Assembly has created a resource to classify different types of studies and evidence.

2. Companies should be expected to produce evidence in relation to their stage of development.

Products should neither be developed nor purchased simply because of a “cool idea.” Rather, companies should use learning science to create products; user research and implementation science to refine them; and evaluation research to provide evidence of how well they work in different contexts.

The level of evidence companies should be expected to produce should be related to their stage of development.

Development Stage: All companies should be able to explain how learning science undergirds their products as they are initially being developed. For example, early-stage reading products should be designed using research on how young students learn, and professional development services should be grounded in the science of how adults learn. Even productivity tools should have a foundation in behavioral science or systematic evidence from other fields. Collaborating with researchers is often the ideal way for companies to collect evidence that their products will be effective. To make this easier, the University of Virginia’s Curry School of Education and the Jefferson Education Accelerator are creating a database to help educators and companies connect with researchers more easily: the National Education Researcher Database (NERD).

Early Stage: Early-stage companies should conduct user research that captures feedback that enables them to improve their products and services. This research can take many forms, including interviews, surveys, observations, and focus groups. Additionally, data analytics on a user’s interaction (e.g., clicks, activities, or scores) can inform early-stage development, as well as pilot studies, A/B testing, and RCEs. Companies should consider gathering feedback from multiple user groups and multiple settings to understand a representative range of contexts in which they have the opportunity to be effective. Implementation science and design-based implementation research should also be conducted to help stakeholders understand what actually makes educational interventions effective.
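As one illustration of what such analytics might look like in practice, here is a minimal sketch, in Python with entirely hypothetical variant labels and completion counts, of a simple A/B comparison of completion rates; it is not drawn from any product or study named in this article.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test: did variant B shift the completion rate vs. variant A?"""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)            # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # normal-approximation p-value
    return p_a, p_b, z, p_value

# Hypothetical pilot data: how many students completed an assigned practice set
# under the current design (A) versus a redesigned hint flow (B).
rate_a, rate_b, z, p = two_proportion_z(successes_a=312, n_a=500, successes_b=356, n_b=500)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  z = {z:.2f}  p = {p:.3f}")
```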

Later Stage: At this stage, it’s incumbent on companies to participate in evaluation research that provides evidence of how well their products work in different settings. Ideally, this research includes a sufficiently representative comparison group — that is, a group of students not using the product who are similar to the students who are using it.
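For this later stage, a minimal sketch of the kind of comparison-group summary an evaluation might report is below; the scores are invented for illustration, and a real study would also need careful attention to how the comparison group is selected.

```python
import statistics

def cohens_d(treatment, comparison):
    """Standardized mean difference between the product group and a comparison group."""
    n_t, n_c = len(treatment), len(comparison)
    var_t, var_c = statistics.variance(treatment), statistics.variance(comparison)  # sample variances
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(comparison)) / pooled_sd

# Hypothetical end-of-unit assessment scores (0-100) from a small pilot.
product_group    = [72, 85, 78, 90, 66, 81, 74, 88, 79, 83]
comparison_group = [70, 75, 68, 82, 64, 77, 71, 80, 69, 73]
print(f"Effect size (Cohen's d): {cohens_d(product_group, comparison_group):.2f}")
```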

3. Districts and companies need support to gather the right level of evidence.  

The demand for evidence is increasing. When the U.S. Congress passed the Every Student Succeeds Act (ESSA) in 2015, it strengthened the focus on using evidence to make purchasing and implementation decisions. To help educators and other stakeholders, the U.S. Department of Education released non-regulatory guidance on the use of evidence that highlights identifying local needs, engaging stakeholders, and promoting continuous improvement. The guidance also contains the definitions of “evidence-based” included in ESSA and recommendations on how to identify the level of evidence for various interventions.

Essentially, ESSA outlines four tiers of evidence: 1) strong, 2) moderate, 3) promising, and 4) demonstrates a rationale. In an ideal world, all education technology would have strong evidence to support its use. Practically, most products will not reach that level because of the cost and complexity of performing research to that standard. All products used in schools, however, should at least be able to demonstrate a rationale, meaning that some effort has begun to study the effects of the product and that there is a well-defined logic model informed by research. The Center for Research and Reform in Education at the Johns Hopkins University School of Education recently launched Evidence for ESSA to help educators identify programs that meet ESSA standards.

In conclusion, traditional views on research rigor and scientific evidence have limited the extent to which research informs practice, and vice versa. Rather than taking the narrow view that the “gold standard” randomized controlled trial is the only research design that produces acceptable evidence, the field needs to understand that there is a continuum of evidence and that different types of evidence inform different types of decisions. Approaching product development, adoption, and evaluation through this lens would foster healthy collaborations and relationships among educators, researchers, administrators, and other stakeholders in the education ecosystem, and it would serve the ultimate goal of all of this work: that students can and will learn better.

Authors:

  • Dr. Christina Luke, Project Director — Marketplace Research, Digital Promise
  • Dr. Joshua Marland, Director of Data & Analytics, Highlander Institute
  • Dr. Alexandra Resch, Associate Director, Mathematica Policy Research
  • Dr. Daniel Stanhope, Senior Research Officer, Lea(R)n
  • Katrina Stevens, Education Consultant & Former Deputy Director of the Office of Educational Technology, U.S. Department of Education

Opinion: Analysis: For Ed Tech That Actually Works, Embrace the Science of Learning /article/analysis-for-ed-tech-that-actually-works-embrace-the-science-of-learning/
This is the second in a series of essays surrounding the EdTech Efficacy Research Symposium, a gathering of 275 researchers, teachers, entrepreneurs, professors, administrators, and philanthropists to discuss the role efficacy research should play in guiding the development and implementation of education technologies. This series was produced in partnership with Pearson, a co-sponsor of the symposium co-hosted by the University of Virginia’s Curry School of Education, Digital Promise, and the Jefferson Education Accelerator. Read the first piece here.
As education technology gains an increasing presence in American schools, the push for evidence that it actually improves learning is growing.

But as curricula and learning tools are prepared for rigorous evaluation, we should think about how existing research on teaching and learning has informed their design. Building a movement around research and impact must include advocating for products based on learning research. Otherwise, we are essentially taking a “wait and hope” approach to development: wait until we have something built and hope it works.

When we make a meal, we want to at least have a theory about what each ingredient we include will contribute to the overall meal. How much salt do we put in to flavor it perfectly? When do we add it in? Similarly, when creating a curriculum or technology tool, we should be thinking about how each element impacts and optimizes overall learning. For example, how much and when do we add in a review of already-learned material to ensure memory retention? For this, we can turn to learning science as a guide.

A great deal is already known about how people learn. Our understanding comes from fields as varied as cognitive and educational psychology, motivational psychology, neuroscience, behavioral economics, and computer science. There are research findings that have been replicated repeatedly across dozens of studies. If we want to create educational technology tools that ultimately demonstrate efficacy, we should integrate the insights from decades of research into how students learn and how teachers teach into product design from the beginning.

Existing research on learning

So what do we know about how people learn? You could turn to foundational texts by Clark and Mayer, Dan Schwartz, and Hattie and Yates for detail, or look to the excellent summaries that several organizations have compiled.

Here are a few examples:

Spaced practice: We know that extending practice over time is better than cramming all practice into the few days before an exam. Spaced practice strengthens retention and keeps information fresh over time, interrupting the “forgetting curve.” Implementing spaced practice could be as simple as planning out review time. Technology can help implement spaced practice in at least two ways: 1) prompting students to make their own study calendars and 2) proactively presenting already-learned information for periodic review.
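As a sketch of that second approach, the snippet below implements a simple Leitner-style expanding-interval review schedule in Python; the intervals and the example item are illustrative assumptions, not recommendations drawn from the research described here.

```python
from datetime import date, timedelta

# Expanding review intervals, in days, for a simple Leitner-style schedule.
# The specific intervals are illustrative, not prescriptive.
INTERVALS = [1, 3, 7, 14, 30]

def next_review(box, last_reviewed, recalled_correctly):
    """Promote an item to a longer interval on a correct recall, reset it on a miss,
    and return its new box plus the date it should be resurfaced for review."""
    box = min(box + 1, len(INTERVALS) - 1) if recalled_correctly else 0
    return box, last_reviewed + timedelta(days=INTERVALS[box])

# Example: a student reviews an already-learned item today and recalls it correctly.
box, due = next_review(box=1, last_reviewed=date.today(), recalled_correctly=True)
print(f"Resurface this item on {due} (box {box})")
```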

Retrieval practice: What should that practice look like? Rather than rereading, or reading and highlighting, it is better for students to actually retrieve the information from memory, because the act of retrieval changes the nature of the memory itself. It strengthens and solidifies the learning and provides more paths for accessing it when needed. Learners who create flashcards have long relied on this strategy, and there are resources offering useful information and applications that build on this important principle. There is a potential danger point here for designers not familiar with the learning literature: because multiple-choice activities are easier to score with technology, it is tempting to create these kinds of easy questions for retrieval practice. However, learning will be stronger if students practice freely recalling the information rather than simply recognizing the answer from a set of choices.

Elaboration: Taking new information and expanding on it, linking it to other known information and personal experience, is another way to improve memory for new concepts. Linking new information to information that is already known can make it easy to recall later. In addition, simply expanding on information and explaining it in different ways can make retrieval easier. One way to practice this is to take main ideas and ask how they work and why. Another method is to have students draw or fill in concept maps, visually linking ideas and experiences together. There are a number of online tools that have been developed for creating concept maps, and current research is focusing on how to provide automated feedback on them.

So how many educational technology products actually incorporate these known practices? How do they encourage students to engage in these activities in a systematic way?

Existing research on instructional use of technology

There is also a body of research on the instructional use of technology, which should inform how a product is designed to be used in the classroom.

For example, there is a solid research base on how to design activities that introduce new material prior to formal instruction. It suggests that students should initially be given a relatively difficult, open-ended problem to solve. Students, of course, tend to struggle with this activity, and almost none are able to generate the “correct” approach. However, the effort students put into it has been shown to build a better foundation for subsequent instruction, because students come away with a better understanding of the problem to be solved. It is clearly important that this type of activity be presented to students as a chance to explore, and that failure is accepted, expected, and encouraged. In contrast, an activity meant to be part of practice following direct instruction would likely include more step-by-step feedback and hints. So, if someone wants to design activities to be used prior to instruction, they might 1) select a fundamental idea from a lesson, 2) create multiple cases for which students must find an all-encompassing rule, and 3) situate those cases in an engaging scenario.

Researchers at Stanford University tested this idea with students learning about ratios — without telling them they were learning about ratios. Three cases with different ratios were created, based on the number of objects in a space. This was translated into the number of clowns in different-sized vehicles, and students were asked to develop a “crowded clowns index” to measure how crowded the clowns were in the vehicles. Students were not specifically told about ratios, but had to uncover that concept themselves.

Product developers should consider research like this when designing their ed tech tools, as well as when they’re devising professional development programs for educators who will use those technologies in the classroom.

Product makers must consider these questions when designing ed tech: Will the activity the technology facilitates be done before direct instruction? Will it be core instruction? Will it be used to review? How much professional development needs to be provided to teachers to ensure the fidelity of implementation at scale?

Too often, designers think there is a singular answer to this series of questions: “Yes.” But in trying to be everything, we are likely to end up being nothing. Existing research on instructional uses of technology can help developers choose the best approach and design for effective implementation.

Going forward

With this research as a foundation, though, we still have to cook the dish and taste it. Ultimately, applying learning science at scale to real-world learning situations is an engineering activity. It may take several iterations to get the mix of ingredients “just right” for a given audience or a given challenging learning outcome. We need to carefully understand and tweak our learning environments, using good piloting techniques to find out both whether our learners and teachers can actually execute what we intend as we intended it (Is the learning intervention usable? Are teachers and students able to implement it as intended?) and whether the intervention gives us the learning benefits we hoped for (Is it effective?).

The key is that research should be informing development from the very beginning of an idea for a product, and an evidence-based “learning engineering” orientation should continue to be used to monitor and iterate changes to optimize impact. If we are building from a foundation of research, we are greatly increasing the probability that, when we get to those iterated and controlled trials after the product is created, we will in fact see improvements over time in learning outcomes.

Follow the conversation on social media with #ShowTheEvidence.


Authors:

  • Kristen DiCerbo, Vice President, Education Research, Pearson
  • Aubrey Francisco, Chief Research Officer, Digital Promise
  • Bror Saxberg, Chief Learning Officer, Kaplan
  • Melina Uncapher, Assistant Professor, Department of Neurology, UC San Francisco
Opinion: #ShowTheEvidence: Building a Movement Around Research, Impact in Ed Tech /article/showtheevidence-building-a-movement-around-research-impact-in-edtech/
This is the first in a series of essays surrounding the EdTech Efficacy Research Symposium, a gathering of 275 researchers, teachers, entrepreneurs, professors, administrators, and philanthropists to discuss the role efficacy research should play in guiding the development and implementation of education technologies. This series was produced in partnership with Pearson, a co-sponsor of the symposium co-hosted by the University of Virginia’s Curry School of Education, Digital Promise, and the Jefferson Education Accelerator.
To improve education in America, we must improve how we develop and use education technology.
Teachers and students are increasingly using digital tools and platforms to support learning inside and outside the classroom every day, and roughly four times as many students take online courses as did a decade earlier. Technology will impact the 74 million children currently under the age of 18 as they progress through the pre-K–12 education system. The key question is: What can we do to make sure that the education technology being developed and deployed today fits the needs of 21st-century learners?
Our teachers and students deserve high-quality tools that provide evidence of student learning, and that provide the right kind of evidence — evidence that can tell us whether the tool is influencing the intended learning outcomes.
Evidence and efficacy can no longer be someone else’s problem to be solved at some uncertain point in the future. The stakes are too high. We all have a role to play in ensuring that the money spent on ed tech lives up to the promise of enabling more educators, schools, and colleges to genuinely improve outcomes for students and help close persistent equity gaps.
Still, education is complex. Regardless of the quality of a learning tool, there will be no singular, foolproof ed tech solution that works for every student and teacher across the nation. Context matters. Implementation matters. So does the broader learning environment, which includes instructor practices, student experiences, and multiple other contextual factors.
Figuring out what actually works, and why it works, requires intentional planning, dedicated professional development, thoughtful implementation, and appropriate evaluation. All of this occurs within a context of inconsistent and shifting incentives and, in the U.S., a particularly complex ecosystem of stakeholders. And unfortunately, despite a deep and vested interest in improving the system, the current ecosystem is often better at supporting the status quo than at introducing a potentially better-suited learning tool.
That’s the challenge to be taken up at the EdTech Efficacy Research Symposium in Washington, D.C., this week, and by the working groups convened as part of this initiative by the University of Virginia’s Curry School of Education, Digital Promise, and the Jefferson Education Accelerator. People like us rarely have the opportunity to collaborate, but this issue is too important to go it alone.
Over the past six months, 10 working groups consisting of approximately 150 people spent valuable hours together learning about the challenges associated with improving efficacy and exploring opportunities to address these challenges. We’ve looked at issues such as how ed tech decisions are made in K-12 and higher education, what philanthropy can do to encourage more evidence-based decision-making, as well as what will be necessary to make the focus on efficacy and transparency of outcomes core to how ed tech companies operate.
Over the next six weeks, we’ll explore these themes here, sharing findings and recommendations from the working groups. Our hope is to stimulate not just discussion but also practical action and concrete progress.
Action and progress might look like new ways to use research in decision-making, such as informational sites or tools that make it easier for education researchers to connect with teachers, districts, and ed tech companies. Collaboration is critical to improving how we use research in ed tech, but it’s not easy. Building a shared framework takes time. Acting on that framework is harder.
So, as a starting point, here are three broader issues that we’ve learned about efficacy and evidence from our work so far.
1. Everyone wants research and implementation analysis done, but nobody wants to pay more for it.
We know it’s not realistic to expect that the adoption of each ed tech product or curricular innovation will be backed by a randomized controlled trial.
Investors are reluctant to fund these studies, and schools and developers rarely want to pick up the tab for expensive research. As former staff of the U.S. Department of Education’s Office of Educational Technology have pointed out, “it wouldn’t be economically feasible for most app creators (or schools) to spend $250k (a low price tag for traditional educational research) to evaluate the effectiveness of an app that only cost a total of $50k to build.”
We could spend more efficiently, turning the 15,000 tiny pilots and decisions already underway into new work and new insights without spending more money. This could take the form of a few well-designed initiatives to gather and share relevant information about implementations and efficacy. Critically, we’ll need to find a sustainability model for that type of rigorous evaluation so it becomes a key feature of how adoption decisions are made.
2. We need to recognize that evidence exists on a continuum.
Different types of evidence can support different types of decisions. What is important is that each decision is supported by an appropriate level of evidence. Useful references exist to help educators understand different evidence types and how they should be weighed. It would be wise for educators to look at the scale and cost of a decision and determine the appropriate type of evidence.
A number of tools can provide useful support in making decisions and evaluating the use of technology.
It’s important to remember that researchers and philanthropists may use education research for different purposes than would a college, university system, or district. Academic researchers may be looking to identify causal connections, learning gains, or retention rates, while a district is often focused on a specific context and implementation (what works for schools similar to mine).
When possible, traditional randomized controlled trials provide useful information, but they’re often not affordable, feasible, or even appropriate. For example, many districts, schools, and colleges are not accustomed to, or well versed in, undertaking this type of research themselves.
3. It’s easy to blame other actors for the current lack of evidence-driven decisions in education.
Everyone we spoke to agrees that decisions about ed tech should be made on the basis of merit and fit, not marketing or spin. But nearly everyone thinks that this problem is caused by other actors in the ecosystem, and this means that progress here will require hard work and coordination.
For example, investors often don’t screen their investments for efficacy, nor do they push their portfolio companies to undertake sufficient research. Not surprisingly, this tends to be because such research is costly and doesn’t necessarily drive market growth. It’s also because market demand is not driven by evidence: selection choices for tools or technologies are rarely driven by learning impact or efficacy research. That may be shifting slowly, but much more needs to be done.
Entrepreneurs and organizations whose products are of the highest quality are frustrated that schools are too often swayed by their competitors’ flashy sales tactics. Researchers feel that their work is underappreciated and underutilized. Educators feel overwhelmed by the volume of products and claims, and are frustrated by a lack of independent information and professional support. There are multiple moving pieces that must be brought together in order to improve our system.
Ensuring that ed tech investments truly help close achievement gaps and expand student opportunity will require engagement and commitments from a disparate group of stakeholders to help invent a new normal so that our collective progress is directional and meaningful. To make progress on this, we must bring the conversation of efficacy and the use of evidence to center stage.
That’s what we’re hoping to help continue with this symposium. We’ve learned much, but we know that the journey is just beginning. We can’t do it alone. Feel free to follow and join the conversation on Twitter with #ShowTheEvidence.

Authors:

  • Aubrey Francisco, Chief Research Officer, Digital Promise
  • Bart Epstein, Founding CEO, Jefferson Education Accelerator
  • Gunnar Counselman, Chief Executive Officer, Fidelis Education
  • Katrina Stevens, former Deputy Director, Office of Educational Technology, U.S. Department of Education
  • Luyen Chou, Chief Product Officer, Pearson
  • Mahnaz Charania, Director, Strategic Planning and Evaluation, Fulton County Schools, Georgia
  • Mark Grovic, Co-Founder and General Partner, New Markets Venture Partners
  • Rahim Rajan, Senior Program Officer, Bill & Melinda Gates Foundation
  • Robert Pianta, Dean, University of Virginia Curry School of Education
  • Rebecca Griffiths, Senior Researcher, Center for Technology in Learning, SRI International
This series is produced in partnership with Pearson.

