Complete summary: Psychologie als Wetenschap

This document contains a complete summary for the course Psychologie als Wetenschap at Universiteit Utrecht. It covers notes on the book Science Fictions by Stuart Ritchie, notes on all lectures, notes on the focus literature, and notes on the SPSS assignment.

Psychologie als Wetenschap
Book notes
1. How science works

Science is a social construct. Any claim about the world can only be described as scientific knowledge
after it’s been through this communal process, which is designed to sieve out errors and faults and
allow other scientists to say whether they judge a new finding to be reliable, robust and important.
Because scientists focus so much on trying to persuade their peers, it’s easy to disregard the real
object of science: getting us closer to the truth.

History: scientists first formed letter-writing circles with like-minded peers and published standalone books, written in the courts of wealthy rulers or for private patrons or guilds -> Henry Oldenburg created a journal/newsletter with descriptions of recent experiments and discoveries -> longer articles

In order to get funding for a research question scientists have to apply for a grant.

Desk rejection: when researchers submit a paper to a journal and it is bounced straight back to the authors by the editor, without being sent out for peer review.

A written evaluation of each submitted study wasn’t tried until around 1830. The formal peer review system we know didn’t become universal until well into the twentieth century: it took until the 1970s for all journals to adopt the modern model of sending out submissions to independent experts for peer review, giving them the gatekeeping role they have today.

Mertonian norms:

- Universalism: scientific knowledge is scientific knowledge, no matter who comes up with it
- Disinterestedness: Scientists are not in it for the money, for political or ideological reasons, or to
enhance their ego or reputation.
- Communality: Scientists should share knowledge with each other
- Organised scepticism: Nothing is sacred, and a scientific claim should never be accepted at
face value

Science is self-correcting: Eventually incorrect ideas are overturned by data.

2. The replication crisis

A large consortium of scientists chose 100 studies from three top psychology journals and tried to
replicate them. Only 39 percent of the studies were judged to have replicated successfully. Another
one of these efforts, in 2018, tried to replicate twenty-one social-science papers that had been
published in the world’s top two general science journals, Nature and Science. This time the
replication rate was 62 percent. Further collaborations that looked at a variety of different kinds of
psychological phenomena found rates of 77 percent, 54 percent and 38 percent. Almost all of the
replications, even where successful, found that the original studies had exaggerated the size of their
effects.

Maybe it’s not quite that bad, for two reasons.

1. We would expect some results that are solid to fail to replicate sometimes, merely due to bad
luck.
2. Some replications might have failed due to their being run with slight changes to the
methodology from the original.

The replication rate seems to differ across different areas of psychology: in the 2015 Science paper, cognitive psychology did better than social psychology.

The studies that failed to replicate continued to be routinely cited both by scientists and other
writers: entire lines of research, and bestselling popular books, were being built on their foundation.

Hardly anyone runs replication studies (in economics: 0.1%; in psychology: 1%). So the replication
studies that failed are just the ones we know about. How many other results would prove
unreplicable if anyone happened to make the attempt? And if everyone is constantly marching
onwards to new findings without stopping to check if our previous knowledge is robust, is the above
list of replication failures that much of a surprise?

You’d think that if you obtained the exact same dataset as was used in a published study, you’d be
able to derive the exact same results that the study reported. Unfortunately, in many subjects,
researchers have had terrible difficulty with this seemingly straightforward task. This is a problem
sometimes described as a question of reproducibility, as opposed to replicability (the latter term
being usually reserved to mean studies that ask the same questions of different data).

How’s it possible that some results can’t even be reproduced?

- Sometimes it’s due to errors in the original study.
- Other times, the original scientists weren’t clear enough with reporting their analysis. In
macroeconomics, a re-analysis of 67 studies could only reproduce the results from 22 of
them using the same datasets and the level of success improved only modestly after the
researchers appealed to the original authors for help.

But why does it make a difference that studies can’t be reproduced?

1. Science is crucial to our society, and we mustn’t let any of it be compromised by low-quality,
unreplicable studies. If we let standards slip in any part of science, we risk tarnishing the
reputation of the scientific enterprise more generally.
2. In the field of medical research, the immediate consequence of a lack of replicability is
indisputable. From a random sample of 268 biomedical papers, all but one failed to report
their full protocol. Another analysis found that 54% of biomedical studies didn’t even fully
describe what kind of animals, chemicals or cells they used in their experiment. This means
that you’d need additional details beyond the paper even to try to replicate the study. Beyond
that, it means that treatments end up being based on low-quality research: a charity that
systematically assesses the quality of medical treatments concluded that for 45% of treatments
there’s insufficient evidence to decide whether the treatment in question works or not. A lot
of money is also being squandered.

4. Bias

Biases towards getting clear or exciting results, supporting a pet theory, or defeating a rival’s
argument can be enough to provoke unconscious data-massaging, or in some cases, the out-and-out
disappearance of unsatisfactory results. Biases appear at every stage of the scientific process. Our
tendency to overlook these biases turns the scientific literature, which should be an accurate
summary of all the knowledge we have gained, into a very human amalgam of truth and wishful
thinking.

Scientists are usually looking for positive results rather than null results. In a 2010 study Daniele Fanelli searched through almost 2,500 papers from across all scientific disciplines, totting up how many reported a positive result for the first hypothesis they tested. The lowest rate was space science, at 70.2%; the highest was psychology/psychiatry, at 91.5%. Scientists choose whether to publish studies based on their results.

Publication bias: Journals mostly publish positive results.

File-drawer problem: Scientists mostly submit positive results; null results stay unpublished in the file drawer.

The 0.05 p-value cut-off for statistical significance encourages researchers to think of results below it
as somehow ‘real’, and those above it as hopelessly ‘null’.

If the studies with small or null effects have been removed from the published record, the overall effect that shows up in a meta-analysis of that record will by definition be larger than is justified.
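This inflation is easy to demonstrate numerically. Below is a minimal Python sketch (my own illustration, not from the book or the lectures; the true effect, sample size and number of studies are made-up values) that simulates many small two-group studies, "publishes" only the ones with a significant positive result, and compares the average published effect with the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.2     # assumed true mean difference between the groups
n_per_group = 30      # participants per group in each small study
n_studies = 1000      # number of simulated studies

all_effects, published_effects = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    diff = treatment.mean() - control.mean()
    se = np.sqrt(control.var(ddof=1) / n_per_group + treatment.var(ddof=1) / n_per_group)
    all_effects.append(diff)
    if diff / se > 1.96:              # crude significance check: only 'positive' studies get published
        published_effects.append(diff)

print(f"true effect:                  {true_effect:.2f}")
print(f"average of all studies:       {np.mean(all_effects):.2f}")
print(f"average of published studies: {np.mean(published_effects):.2f}")  # noticeably inflated
```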

Franco and colleagues looked at studies whose authors had applied to a central government program
and checked what happened with each of them. It turned out that 41% of the studies that were completed found strong evidence for their hypothesis, 37% had mixed results, and 22% were null. The percentages among the published articles looked very different: of the articles that were published, the percentages for strong, mixed and null results were 53%, 37% and 9%, respectively.
There was, in other words, a 44-percentage-point chasm between the probability of publication for
strong results versus null ones. From the scientists in question, Franco and her colleagues learned
that 65% of studies with null results had never even been written up in the first place, let alone sent
off to a journal.

There is also a moral case against publication bias. If you’ve run a study that involved human
participants, particularly if it’s one where they’ve taken a drug or undergone an experimental
treatment, it could be argued that you owe it to those participants to publish the results. Otherwise,
all the trouble they went to was for nothing. A similar argument applies to research you’ve done with
someone else’s money.

P-hacking: Because the p < 0.05 criterion is so important for getting papers published, scientists
whose papers show ambiguous or disappointing results regularly use practices that ever so slightly
nudge, or hack, their p-values below that crucial threshold.

HARKing/Hypothesizing After the Results are Known: taking an existing dataset, running lots of ad hoc statistical tests on it with no specific hypothesis in mind, then simply reporting whichever effects happen to get p-values below 0.05 as if they had been predicted all along.

The 5 percent value is for a single test. Some straightforward math shows that in a world where our
hypothesis is false, increasing the number of statistical tests snowballs our chances of obtaining a
false-positive result, so with multiple tests, we go well beyond the 5% tolerance level.
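The "straightforward math" is just the complement rule: if each test is run at the 5% level and the null hypothesis is really true, the chance of at least one false positive across n independent tests is 1 − 0.95^n. A quick sketch (my own illustration, not from the book):

```python
# Chance of at least one false positive across n independent tests,
# each run at the 5% significance level, when the null is actually true.
for n in (1, 5, 10, 20):
    p_any = 1 - (1 - 0.05) ** n
    print(f"{n:2d} tests -> {p_any:.0%} chance of at least one false positive")
```

With 20 tests the chance of at least one spurious "significant" result is already around 64%.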

In 2012 a poll of over 2,000 psychologists asked them if they had engaged in a range of p-hacking
practices. Had they ever collected data on several different outcomes but not reported all of them?
Approximately 65% said they had. Had they excluded particular datapoints from their analysis after
peeking at the results? 40% said yes. And around 57% said they’d decided to collect further data after
running their analyses – and presumably finding them unsatisfactory.

Scientists appear to tweak studies just enough to limbo their results under the 0.05 line, then send them off for publication. So if you collect together all the p-values from published papers and graph them by size, you see a strange, sudden bump just under 0.05: there are a lot more than you’d expect by chance.
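One of the polled practices, collecting more data after peeking at the results, shows how this happens. The sketch below is a hypothetical simulation (not an analysis from the book), assuming a one-sample t-test and a true effect of zero: it keeps adding participants and stops as soon as p drops below 0.05, which pushes the false-positive rate well above the nominal 5% and piles p-values up just under the threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def peek_and_extend(max_batches=5, batch_size=20):
    """Test after every batch of participants; stop as soon as p < 0.05."""
    data = np.empty(0)
    for _ in range(max_batches):
        data = np.concatenate([data, rng.normal(0.0, 1.0, batch_size)])  # true effect is zero
        p = stats.ttest_1samp(data, 0.0).pvalue
        if p < 0.05:
            return p          # 'significant' -> stop collecting and report
    return p                  # never crossed the threshold

pvals = np.array([peek_and_extend() for _ in range(5000)])
print("nominal false-positive rate: 5%")
print(f"actual false-positive rate:  {np.mean(pvals < 0.05):.0%}")
```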

There’s never just one way to analyse a dataset. Endless choices offer endless opportunities for
scientists who begin their analysis without a clear idea of what they’re looking for.

Garden of forking paths: at each point where an analytic decision is required, you might choose any
of the many options that present themselves. Each of those choices would lead to slightly different
results.
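A toy Python sketch of the forking paths (entirely hypothetical: the data are pure noise, and the "defensible" choices, an outlier cut-off and an age split, are invented for illustration). Each reasonable-looking analysis of the same dataset gives a different p-value, and with enough forks one of them will often land below 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 100
group = rng.integers(0, 2, n)        # two groups with no real difference
age = rng.normal(40, 10, n)          # an irrelevant background variable
outcome = rng.normal(0.0, 1.0, n)    # pure noise

def p_between_groups(mask):
    """Compare the two groups on the outcome, within the selected participants."""
    a = outcome[mask & (group == 0)]
    b = outcome[mask & (group == 1)]
    return stats.ttest_ind(a, b).pvalue

forks = {
    "all participants":            p_between_groups(np.full(n, True)),
    "drop outcomes beyond 2 SD":   p_between_groups(np.abs(outcome) <= 2),
    "only participants under 40":  p_between_groups(age < 40),
    "only participants 40 and up": p_between_groups(age >= 40),
}
for label, p in forks.items():
    print(f"{label:30s} p = {p:.3f}")
```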

This is what scientists are unwittingly doing when they p-hack: they’re making a big deal of what is
often just random noise, counting it as part of the model instead of as nuisance variation that should
be disregarded in favor of the real signal. Publication bias and p-hacking are two manifestations of the
same phenomenon: a desire to erase results that don’t fit well with the preconceived theory.

Outcome-switching is when you have a hypothesis, but you also measure some other outcomes for your subjects, and you then present the study as if it had always been about those other outcomes.

From 2005, the International Committee of Medical Journal Editors, recognizing the massive problem
of publication bias, ruled that all human clinical trials should be publicly registered before they take
place – otherwise they wouldn’t be allowed to be published in most top medical journals. Of 67 trials,
only nine reported everything they said they would. Across all the papers, there were 354
outcomes that simply disappeared between registration and publication (it’s safe to assume that
most of these had p-values over 0.05), while 357 unregistered outcomes appeared in the journal
papers ex nihilo. A similar audit of registrations in anesthesia research found that 92% of trials had
switched at least one outcome and that the switching tended to be towards outcomes with
statistically significant results.

In the US, just over a third of registered medical trials in recent years were funded by the
pharmaceutical industry. In a recent review, for every positive trial funded by a government or non-profit source, there were 1.27 positive trials funded by drug companies.

Many scientists forge lucrative careers based on their scientific results, producing bestselling books
and regularly being paid five- or six-figure sums for lectures, business consulting and university
commencement addresses. When a lucrative career rests on the truth of a certain theory, a scientist
gains a new motivation in their day job: to publish only studies that support that theory (or p-hack
them until they do). This is a financial conflict of interest like any other and one aggravated by the
extra reputational concerns.

‘meaning well bias’: the bias of a scientist who really wants their study to provide strong results,
because it would mean progress in fighting a disease, or a social or environmental ill, or some other
important problem.

Although sharing results among a community of researchers can, at least in part, compensate for the
biases of individual scientists, when those biases become shared among a whole community, they
can develop into a dangerous groupthink. There are stories of bullying and intimidation when
researchers challenge current hypotheses. These hint at a field where bias has become collective,
where new ideas aren’t given the hearing they deserve, and where scientists routinely fail to apply
the norm of organized skepticism to their own favored theories. If the vast majority of a community
shares a political perspective, the important function of peer review is substantially weakened. Also
priorities for what to research in the first place might become skewed: scientists might pay disproportionate attention to politically acceptable topics, even if they’re backed by relatively weak evidence, and neglect topics that go against a particular narrative, even if they’re based on solid data.

Everyone is biased and has a standpoint.
