CBE—Life Sciences Education
Vol. 11, 392–401, Winter 2012
Article
Collaborative Testing Improves Performance but Not
Content Retention in a Large-Enrollment Introductory
Biology Class
Hayley Leight, Cheston Saunders, Robin Calkins, and Michelle Withers
Department of Biology, West Virginia University, Morgantown, WV 26506
Submitted April 16, 2012; Revised August 8, 2012; Accepted August 10, 2012
Monitoring Editor: Diane Ebert-May
DOI: 10.1187/cbe.12-04-0048
Address correspondence to: Michelle Withers (Michelle.Withers@mail.wvu.edu).
© 2012 H. Leight et al. CBE—Life Sciences Education © 2012 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0). "ASCB®" and "The American Society for Cell Biology®" are registered trademarks of The American Society for Cell Biology.
Collaborative testing has been shown to improve performance but not always content retention. In this study, we investigated whether collaborative testing could improve both performance and content retention in a large, introductory biology course. Students were semirandomly divided into two groups based on their performances on exam 1. Each group contained equal numbers of students scoring in each grade category ("A"–"F") on exam 1. All students completed each of the four exams of the semester as individuals. For exam 2, one group took the exam a second time in small groups immediately following the individually administered test. The other group followed this same format for exam 3. Individual and group exam scores were compared to determine differences in performance. All but exam 1 contained a subset of cumulative questions from the previous exam. Performances on the cumulative questions for exams 3 and 4 were compared for the two groups to determine whether there were significant differences in content retention. Even though group test scores were significantly higher than individual test scores, students who participated in collaborative testing performed no differently on cumulative questions than students who took the previous exam as individuals.
INTRODUCTION
At large research universities, it is not uncommon for introductory science courses to have enrollments of 200 or more students (Smith et al., 2005). Based on the grading time necessary for such large numbers of students, examinations for these classes tend to be made up primarily of multiple-choice questions and, in this context, provide an objective, time-efficient method for evaluating student performance (Straits and Gomez-Zwiep, 2009). Computer-based testing offers a convenient vehicle for administering multiple-choice exams to large numbers of students and as such is becoming increasingly commonplace in college classrooms (Clariana and Wallace, 2002). Computer-based exams have many attractive features. For example, they facilitate standard testing procedures; allow for accurate, objective scoring; provide a mechanism for quantitative assessment of student learning; and permit the assessment of cognitive and perceptual performances of students, as well as of their content knowledge (Mead and Drasgow, 1993; Rosenfeld et al., 1993; de Beer and Visser, 1998; Metz, 2009). An additional benefit of computer-based exams is the ability to administer them outside class at a variety of times. This offers students the flexibility of scheduling their examinations at times that best fit both their personal preferences and busy schedules, which may lead to reductions in exam anxiety and in the number of students who miss exams (Stowell and Bennett, 2010).
One weakness of computer-based exams can be the lack of postexamination feedback for students, as they typically receive only a numerical grade. When tests are offered at multiple times during an exam window, several equivalent versions of each multiple-choice question are generated to reduce the probability that students testing at later times will receive the exact combination of questions as students testing earlier. Unfortunately, this drastically increases the requisite size of question banks for computer-based tests. Due to their large size, these question banks are time-consuming to create and, therefore, not openly shared with students. As a result, students must review their exams in a monitored environment, which may reduce the number of students who opt to do so. Without postexamination feedback, the exams are primarily tools of evaluation and miss an opportunity to facilitate learning.
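As a rough sketch of why such banks grow quickly (hypothetical names and structure; the paper does not describe the actual testing software), an exam form can be assembled by drawing one variant of each question from the bank:

```python
import random

# Hypothetical bank: each exam question has several equivalent variants.
QUESTION_BANK = {
    "osmosis": ["osmosis-v1", "osmosis-v2", "osmosis-v3"],
    "mitosis": ["mitosis-v1", "mitosis-v2", "mitosis-v3"],
    "natural_selection": ["ns-v1", "ns-v2", "ns-v3"],
}

def assemble_exam_form(bank, rng):
    """Draw one variant of every question, so students testing later in
    the exam window are unlikely to see the same form as earlier testers."""
    return [rng.choice(variants) for variants in bank.values()]

form = assemble_exam_form(QUESTION_BANK, random.Random())
# With v variants of each of q questions there are v**q possible forms,
# but instructors must still author (and maintain) v * q items in total,
# which is why these banks are costly to build and rarely shared.
```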
Assessments such as exams are best used as tools to help instructors better understand the relationship between what we teach and what students learn (Tanner and Allen, 2004) and to help students improve retention and comprehension of content. Testing as a study strategy has been shown to improve content retention due to repeated recall efforts, a phenomenon referred to as the "testing effect" (Roediger and Karpicke, 2006). Multiple-choice exams can also invoke the testing effect, resulting in improvements on subsequent exams (Marsh and Roediger, 2007). However, when students receive a numerical grade only, exams serve primarily as mechanisms for evaluation and have little impact on student learning and content retention (Epstein et al., 2001, 2002). Furthermore, some students succumb to a phenomenon known as the "negative testing effect," in which their recollection of incorrect choices interferes with the learning of correct content (Roediger and Marsh, 2005). Rao et al. (2002) demonstrated that incorporating examination formats that allow students opportunities to receive feedback on mistakes made on multiple-choice questions may reduce or preclude this negative impact on learning.
Group testing is a promising way to bring the power of collaborative learning to bear on the discussion and analysis of exam questions after students have completed an exam once as individuals (Millis and Cottell, 1998; Michaelson et al., 2002; Hodges, 2004). Incorporating group exams into the computer-based testing format could provide a time-efficient mechanism to boost the learning potential of these exams. Collaborative testing improves performance (Stearns, 1996; Sumangala et al., 2002; Giuliodori et al., 2008; Eaton, 2009; Haberyan and Barnett, 2010) and motivation (Hodges, 2004; Kapitanoff, 2009), decreases test anxiety (Muir and Tracy, 1999; Zimbardo et al., 2003; Hodges, 2004; Kapitanoff, 2009), and effectively evaluates student learning (Russo and Warren, 1999). It is also viewed positively by students (Cortright et al., 2003; Lusk and Conklin, 2003; Mitchell and Melton, 2003; Zimbardo et al., 2003; Shindler, 2004; Woody et al., 2008; Sandahl, 2010). While studies consistently have demonstrated improvements in student performance on collaborative exams, the ability of collaborative testing to improve content retention is still in question. Some studies report an improvement in content retention from collaborative testing (Rao et al., 2002; Cortright et al., 2003; Bloom, 2009), while others show no effect (Lusk and Conklin, 2003; Woody et al., 2008; Sandahl, 2010). Given the inconsistency of these findings and the extra time and resources required to add collaborative testing into an existing examination format, we wanted to know whether collaborative testing would indeed improve student learning in an introductory science class. In this study, we investigated whether collaborative examinations can improve both performance and content retention when added to a computer-based testing format for a large-enrollment introductory biology course.

METHODS
Course Context
Biology 115: Principles of Biology is a first-semester, introductory-level course with a laboratory that introduces students to basic concepts in cellular, molecular, and evolutionary biology and fundamental science process skills. It is the first of a five-course series required for biology majors and serves as a specific requirement for several undergraduate science degrees on campus. In addition, this course fills a General Education Curriculum requirement for non–science majors at West Virginia University, Morgantown. The course consists primarily of freshmen seeking degrees in biology, chemistry, or life sciences–related disciplines. A very small proportion of the students are seeking degrees in other science and non–science disciplines. The class is roughly split between males and females.
Course Structure
To examine the impact of collaborative testing on student learning, we used a single section (∼250 students) of Biology 115: Principles of Biology, during the Fall semester of 2010. The section employed an active-learning format and was taught by a discipline-based education researcher trained in scientific teaching by the National Academies Summer Institute on Undergraduate Biology Education who has taught the course since 2007. Group-learning activities, such as personal response system (clicker) questions, case studies, discussion, and problem solving, were employed on a daily basis to engage students with the course material. In addition to the lecture-based component of the course, all students were enrolled in an accompanying laboratory section. Final grades were determined from five course examinations, concept inventory pre- and posttests, formative assessments, and laboratory exercises. The objective course examinations were computer-based and consisted of multiple-choice, multiple-correct, true/false, and sequencing problems.
Research Design
To evaluate the effect of collaborative testing on content retention in this large-enrollment introductory biology course, we employed a randomized cross-over design (Cortright et al., 2003; Sandahl, 2010). We elected to use a randomized cross-over design due to its unique characteristic: each subject has the ability to serve as his or her own control (Rietbergen and Moerbeek, 2011). Essentially, by randomly splitting the class in two, we were able to run the experiment twice during the semester, with each group serving once as the experimental group and once as the control group, which controlled for coincidental differences in the two samples. Based on scores from exam 1, students in each grade category ("A"–"F") were randomly assigned to one of two equally sized groups (A or B). Due either to withdrawal or nonparticipation in the group exams, the group sizes for A and B were 92 and 104 students, respectively, at the conclusion of this study. Power analysis using the effect size of 0.06 calculated from data reported by Cortright et al. (2003), a power of 0.8, and a two-tailed alpha of 0.05 yielded a sample size requirement of 90.
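As a concrete sketch of this semirandom assignment (a minimal illustration with hypothetical data structures; the paper does not publish its procedure as code), each exam-1 grade stratum is shuffled and split evenly between the two groups:

```python
import random
from collections import defaultdict

def semirandom_split(grades, seed=2010):
    """Stratified split: equal numbers of students from each exam-1
    grade category ("A"-"F") land in each of two groups.

    grades: dict mapping student ID -> letter grade on exam 1.
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for student, grade in grades.items():
        strata[grade].append(student)

    group_a, group_b = [], []
    for members in strata.values():
        rng.shuffle(members)              # randomize within the stratum
        half = len(members) // 2
        group_a.extend(members[:half])
        group_b.extend(members[half:])    # any odd leftover goes to B
    return group_a, group_b
```

The reported power analysis can be approximated with statsmodels. The paper does not state which effect-size metric the 0.06 refers to; the sketch below assumes, purely for illustration, that it is an eta-squared and converts it to a Cohen's d before solving for group size, so the result is a ballpark figure rather than a reproduction of the reported 90.

```python
from math import sqrt
from statsmodels.stats.power import TTestIndPower

# Assumption (not stated in the paper): read the reported 0.06 as an
# eta-squared and convert it to Cohen's d for a two-group comparison.
eta_sq = 0.06
d = 2 * sqrt(eta_sq / (1 - eta_sq))   # ~0.51

# Solve for the per-group sample size at power 0.8, two-tailed alpha 0.05.
n_per_group = TTestIndPower().solve_power(
    effect_size=d, power=0.8, alpha=0.05, alternative="two-sided"
)
print(round(n_per_group))  # ballpark group size; the paper reports 90
```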
All students, regardless of group designation, completed each course exam as individuals in the Biology Department
