Fitzer, K. M., Freidhoff, J. R., Fritzen, A., Heintz, A., Koehler, M. J., Mishra, P., Ratcliffe, J., Zhang, T.,
Zheng, J., & Zhou, W. (2007). Guest editorial: More questions than answers: Responding to the
reading and mathematics software effectiveness study. Contemporary Issues in Technology and
Teacher Education, 7(2), 1-6.




Guest Editorial:
More Questions than Answers:
Responding to the Reading and Mathematics
Software Effectiveness Study
Kimberly M. Fitzer, Joseph R. Freidhoff, Anny Fritzen, Anne Heintz, Matthew J. Koehler,
Punya Mishra, Jim Ratcliffe, Tianyi Zhang, Jinjie Zheng, and Wenying Zhou[1]
Michigan State University

Problems worthy of attack prove their worth by hitting back. —Piet Hein

There have been few large-scale empirical studies of the effectiveness of educational
software in improving student learning, even though educational technology has become
a ubiquitous tool for learning both in and out of the classroom. The recently released
report to Congress, “Effectiveness of Reading and Mathematics Software Products:
Findings from the First Student Cohort” (Dynarski et al., 2007), produced by the National
Center for Education Evaluation and Regional Assistance, aims to fill this gap. (The
complete report can be downloaded from http://ies.ed.gov/ncee/pdf/20074005.pdf.)

The study is impressive in many regards. The sample included 439 teachers and 9,424
students in 33 districts. Sixteen different software products were selected for reading in
first grade and in fourth grade, mathematics in sixth grade, and high school algebra in
ninth grade. Classrooms within participating schools were randomly assigned to
treatment or control conditions. Treatment classrooms used one of the 16 preselected
software titles, while instruction in control classrooms proceeded without interference.

Teachers in the treatment group received software and training on one of the software
products and were expected to use it in their teaching. It is worth noting that teachers in
control classrooms were also permitted to use educational software products and even
introduce new technology products at their discretion. Assessments of effectiveness cast a
wide net by including standardized tests and classroom observations. In all grades,
students were given reading and mathematics tests both at the beginning and end of the
school year. Teachers were interviewed to determine their attitudes toward the
technology products and to assess how these tools were being used.

The study’s key finding is that the use of the preselected educational software products did
not make a statistically significant difference in student test scores between the control
and the treatment groups, though the report suggests that there was substantial variation
between schools regarding the effects on student achievement. The media were quick to
respond to these findings (e.g., Paley, 2007; Trotter, 2007; ZDNet Editorial, 2007a,
2007b; among others). Our concern, however, is that readers might misinterpret the
nuances of the study and the subsequent media reports as being an indictment of all
educational technology. There is some evidence that this is already happening. For
instance, a group opposing a recent technology bond initiative in a Michigan school
district quoted the study and other media reports as evidence to support its case (see
the Common Sense for Okemos Web site at http://commonsenseforokemos.org).

As scholars of educational technology, it is vital that we engage in a serious dialogue
regarding the implications of these findings for research, scholarship, and policy. This
guest editorial is just one small step in this direction. We would like to thank the editors
of CITE Journal for providing us with the opportunity to write this first response, as it
were, allowing us to initiate the conversation. We expect that this dialogue will continue
online, through the mentoring blog sponsored by the Society for Information Technology
and Teacher Education and other venues.

There is much to admire in this study. First, the focus of the study is particularly relevant
given the financial resources that many schools, districts, and local governments are
investing in educational technology. Administrators and policy makers need to know if
technology-based instructional interventions really improve what matters most—student
achievement. Second, the selected focus on low-income schools is consistent with the No
Child Left Behind (NCLB) goal of providing achievement-driven interventions to all
students, particularly traditionally underserved populations. The emphasis on low-income
schools is laudable, but it is complicated somewhat by the fact that most of the software
programs selected for this study were tutorials. This approach is consistent with previous
surveys showing that low-income and otherwise disadvantaged schools tend to use such
software programs, rather than more open-ended programs emphasizing higher order
thinking skills.

Third, investigating technology use in first-, fourth-, sixth-, and ninth-grade reading or
mathematics classrooms provides data that would help us understand how students and
teachers interact with educational software at different levels and in different content
areas.

Fourth, by carefully selecting software products that showed some prior evidence of
effectiveness, the researchers reduced the possibility that nonsignificant outcomes could
be attributed to inferior software products.

Finally, given the difficulties of measuring immediate results within real-world
environments, the longitudinal design of the study is a definite strength. Regardless of
whether the researchers find significant results in Year 2, we admire the foresight of a
longitudinal design that carefully disaggregates the treatment data.

In addressing the question of whether or not software-based interventions are successful,
however, this report raises many more questions for the research community than it
answers. A primary question is how broadly the results of the study apply. Despite its
merits, the study has certain fundamental limitations, only some of which are discussed
in the report. For example, the authors point out that the study “was not designed to
assess the effectiveness of educational technology across its entire spectrum of uses, and
the study's findings do not support conclusions about technology's effectiveness beyond
the study's context, such as other subject areas” (p. xiv). Most media reports of the study,
however, fail to convey these nuances of interpretation.

Another question is how much software use is sufficient to produce a difference in
learning outcomes. For example, the selected software programs were used, on average,
for only about 10-11% of the instructional time.[2] More importantly, what is not clear is
the degree to which the other 90% of instructional time was meaningfully coordinated



