Evidence - Tablets in the classroom, 7 Sep. '20
Learning materials
- Cook (2012)
- Van der Ven, Segers, Takashima, and Verhoeven (2017)
Research-practice gap: the difference between the findings of scientific research on
effective educational practices and what takes place in schools and classrooms (Cook,
2012; Porter & McMaken, 2009)
Evidence-based practices (EBPs): instructional approaches shown by research to result
in improved student outcomes, intended to bridge the research-practice gap (a related,
broader term is research-based practices)
The term represents a systematic approach to determining which research-based
practices are supported by many research studies that:
(a) Are of high methodological quality,
(b) Use appropriate research designs that allow for assessment of effectiveness
(c) Demonstrate meaningful effect sizes such that they merit educators’ trust
that the practice works (Cook, 2012; Cook, Tankersley, & Landrum, 2009)
What are the advantages of basing educational practice on scientific evidence and what
are challenges/pitfalls? (Cook, 2012)
Logical Argument for Identifying and Applying Evidence-Based Practices in Education:
1. The most effective instructional practices and programs produce the highest
student outcomes
2. Scientific research is the most reliable method for determining effective
instructional practices and programs
3. Teachers can appropriately apply practices identified as effective by scientific
research
Therefore, the identification and application of practices shown by research to be
effective (e.g., evidence-based practices) can improve student outcomes
In summary, to the degree that the three premises are valid, as we have endeavored to
show, the conclusion is logically entailed. That is, if (a) effective practices exist that are most
likely to improve student outcomes generally, (b) scientific research is the best way to
determine which practices are effective, and (c) teachers can apply EBPs, then it must be the
case that (d) identifying and applying EBPs can improve student outcomes generally.
Criticism of evidence-based education:
(a) Generally effective practices do not exist
(b) Science cannot meaningfully identify effective practices
(c) EBPs will not be implemented broadly
The success of an EBP depends on various factors, such as the teacher's instruction.
A small proportion of students (non-responders or treatment resisters) do not
respond positively to even the most effective instructional practices.
Whether this form of education has positive effects on student outcomes also
depends on student characteristics.
Educational research can be conducted poorly and is clearly not immune to
conscious and unconscious researcher bias
Implementing EBPs without professional wisdom likely leads to disappointing
outcomes, just as exercising professional wisdom while implementing less than
effective programs and practices is unsatisfactory
Challenges for teachers in applying EBPs:
(a) Implementing those practices with fidelity (i.e., as designed)
(b) Flexibly adapting the application of EBPs to meet the needs of different learners
while still preserving implementation fidelity
(c) Maintaining the appropriate use of EBPs over time, which appears even more
daunting
Reasons that EBPs are not broadly embraced and broadly implemented by educators
include
a) teachers’ views of research and researchers,
b) the importance of practice-based evidence, and
c) the general opposition to change and reform in educational institutions.
Which kind of study design will provide the strongest evidence, and why? And why should
we also look at studies with alternative designs? (Cook, 2012)
Reviews of the empirical literature can draw on descriptive, quantitative, and qualitative
research designs as well as group experimental designs (such as randomized controlled trials).
Because an evidence-based review asks whether a certain practice results in
generally improved student outcomes, it focuses on research designs that can
demonstrate causality (i.e., show that the practice causes changes in student
outcomes)
To answer the question of whether a practice works, it seems appropriate to focus
on research designs that support causal claims (e.g., randomized controlled trials).
Reviewing solely experimental studies is appropriate because those studies best
address that research question.
In addition to knowing whether a practice works generally, educators also benefit
from understanding several related issues, such as why, how, for whom, and where it
works; just knowing that it works is not enough for an effective classroom
implementation.
It is therefore also important to look at other types of studies, such as case studies,
content analyses, and qualitative interviews, that adequately address these related
research question(s).
Although questions regarding how a practice works, the nature of a practice, and
the practicality of a practice are clearly important issues, they are technically
separate from the question of whether a practice works; these questions are
best addressed by conducting separate reviews that will likely include other types
of research studies.
Alternative designs (other approaches to experimental research):