Analysis: No ‘Gold Standard’ in Ed Tech; Continuum of Evidence Needed to Ensure Technology Helps Students
This is the third installment in a series of essays discussing the EdTech Efficacy Research Symposium, a gathering of professionals in the field of education to explore the importance of efficacy research in the development and implementation of educational technologies. This series is a collaboration with Pearson, one of the co-sponsors of the symposium, alongside the University of Virginia’s Curry School of Education, Digital Promise, and the Jefferson Education Accelerator. If you haven’t already, you can read the first and second parts in this series.
In order to make informed decisions regarding educational technology and ensure that the investments we make yield positive outcomes in student learning, it is crucial to have evidence that aligns with the needs and objectives of the users. Whether it’s a teacher considering the use of a new app, an administrator introducing a reading program, or a company improving its product for students and teachers, the evidence must be appropriate for their specific contexts.
During the recent EdTech Efficacy Research Symposium, there was a consensus among educators, researchers, funders, and companies: we need a comprehensive range of evidence. It is important to match the level and type of evidence with the specific needs and purposes of the stakeholders involved. As a proverb often attributed to Confucius puts it, "Do not kill a mosquito with a cannon."
Different levels of district implementation and stages of ed tech product development require different types of evidence. Research has shown that districts often rely on peer recommendations and pilot programs rather than rigorous evidence when making decisions about which ed tech products to purchase. However, it’s important to recognize that not all ed tech decisions require the same level of rigor in terms of evidence.
Properly designed research is essential for generating useful evidence. While large-scale approaches and randomized controlled trials are often used in academic research to establish causal relationships, they may not always be suitable for informing practical and rapid decision-making. School leaders frequently face smaller-scale decisions, such as whether to use an educational app to supplement the standard curriculum. In these cases, they are more concerned with factors like usage rates and the potential impact on students and teachers. Rapid cycle evaluations (RCEs) can provide timely and cost-effective evidence for school and district leaders to inform their decisions.
Smaller and more agile research approaches can also address specific questions around context and scope. Instead of seeking generalized outcomes, educators and school leaders may want to know which tools are effective in particular circumstances. For example, if educators are considering a tool for an after-school program, research that provides overall general results might not be as helpful. A targeted RCE can assess whether the tool is achieving its intended objectives in that specific setting. While evidence from local studies may not be applicable to all situations, having some evidence is better than having none, especially when compared to relying on biased marketing materials or the subjective opinions of a small group of peers.
Traditional research often presents a binary assessment of whether a treatment worked or not. However, learning technologies are not static interventions and contexts vary greatly. As products are developed iteratively and undergo functional changes over time, research should also adapt and demonstrate how the product is evolving and improving. Additionally, relying solely on snapshot data can fail to capture the dynamic context in which a product is implemented.
Educators should consider factors like scale, cost, and the stakes of the decision when determining the type of evidence necessary. A continuum of evidence that takes into account product cost and implementation risks can provide a range of research possibilities for ed tech products. For example, if several teachers provide positive testimonials about a product in a similar setting, that may be enough for a teacher to try it in her classroom. However, if a district wants to adopt a reading program across all elementary schools, stronger forms of evidence should be collected and reviewed before significant resources are invested.
By understanding the importance of evidence that aligns with specific needs, education professionals can make informed decisions that improve the efficacy of educational technology.
There are several tools available that can provide valuable support in evaluating the use of technology. These tools include the Ed Tech Rapid Cycle Evaluation Coach, LearnPlatform, Edustar, the Ed-Tech Pilot Framework, and the Learning Assembly Toolkit. To help classify different types of studies and evidence in ed tech research, the Learning Assembly has created an Evaluation Taxonomy.
Companies should be expected to provide evidence based on their stage of development. It is not enough to simply have a "cool idea" for a product. Instead, companies should utilize learning science to create their products, conduct user research to refine them, and conduct evaluation research to provide evidence of their effectiveness in different contexts.
The level of evidence produced by companies should correspond to their stage of development. All companies should be able to explain how learning science supports their products during the initial development phase. Early-stage companies should conduct user research and gather feedback to improve their products and services. Later-stage companies should participate in evaluation research to demonstrate the effectiveness of their products in different settings.
Districts and companies require support in gathering the appropriate level of evidence. The emphasis on using evidence to make purchasing and implementation decisions has increased with the passage of the Every Student Succeeds Act (ESSA). To assist educators and stakeholders, the U.S. Department of Education has provided non-regulatory guidance on Using Evidence to Strengthen Education Investments. This guidance includes definitions of "evidence-based" and recommendations on identifying the level of evidence for different interventions.
ESSA outlines four tiers of evidence: strong, moderate, promising, and demonstrates a rationale. While it would be ideal for all educational technology to have strong evidence supporting its use, this is often impractical due to the cost and complexity of conducting rigorous research. At a minimum, however, all products used in schools should be able to demonstrate a rationale: a well-defined logic model grounded in research, along with some effort to study the product's effects.
In conclusion, it is important to move away from the narrow view that only randomized controlled trials provide acceptable evidence. There is a continuum of evidence, and different types of evidence inform different decisions. By approaching product development, adoption, and evaluation with this understanding, educators, researchers, administrators, and other stakeholders in the education ecosystem can build stronger collaborations, ultimately leading to better learning outcomes for students.