Summary KOM Research Methods: Experimental and Integrity

Summary of the book Research Methods by Beth Morling for Kennismaking onderzoeksmethoden en Statistiek (KOM), covering the Experimental and Integrity part.

Book Ch. 10, pp. 273 - 286
Experiment: the researcher must have manipulated at least one variable and measured
another.
Manipulated variable: a variable the researcher controls, typically by assigning participants to its different levels.
Measured variable: takes the form of records of behavior or attitudes, such as self-reports,
behavioral observations, or physiological measures.
The manipulated variable is also called the independent variable. The levels of an
independent variable are called conditions.
The measured variable is also called the dependent variable, or outcome variable. How a
participant acts on the measured variable depends on the level of the independent variable.
When researchers graph their results, the independent variable is almost always on the x-
axis and the dependent variable is on the y-axis. The independent variable always comes
first in time, and the dependent variable comes second or later.
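As an illustration of this plotting convention, here is a minimal sketch in Python with matplotlib; the condition names and group means are hypothetical, not from the book:

import matplotlib.pyplot as plt

# Hypothetical group means: levels of the independent variable (IV)
# go on the x-axis, the dependent variable (DV) on the y-axis.
conditions = ["control", "treatment"]  # levels of the IV
mean_scores = [12.0, 16.0]             # mean DV score per condition

plt.bar(conditions, mean_scores)
plt.xlabel("Condition (independent variable)")
plt.ylabel("Mean score (dependent variable)")
plt.show()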
Researchers also control third variables (nuisance variables) in their studies by holding all
other factors constant between the levels of the independent variable. Any variable that an
experimenter holds constant on purpose is called a control variable. Control variables are
not really variables at all because they do not vary.

The three rules for an experiment to support a causal claim are:
- Covariance: do the results show that the causal variable is related to the effect variable?
- Temporal precedence: does the study design ensure that the causal variable comes before the outcome variable in time?
- Internal validity: does the study design rule out alternative explanations for the results?
One of the strengths of experiments is that they have comparison groups; this way you
can answer the question: compared to what?
Control group: a level of an independent variable that is intended to represent ‘no
treatment’ or a neutral condition. When a study has a control group, the other level or levels
of the independent variable are usually called the treatment group(s). When the control
group is given an inert treatment, it is called a placebo group or placebo control group.
When a study uses comparison groups, the levels of the independent variable differ in some
intended and meaningful way. All experiments need a comparison group so the researchers
can compare one condition to another, but the comparison group does not need to be a
control group.
The ability to establish temporal precedence is a feature that makes experiments superior to
correlational designs. Experiments unfold over time, and the experimenter makes sure the
independent variable comes first.
For a study to be internally valid, one must ensure that the causal variable, and not other
factors, is responsible for the change in the outcome variable. You can interrogate
this validity by exploring alternative explanations.
For any given research question, there can be several possible alternative explanations,
known as confounds or potential threats to internal validity.
Internal validity is subject to a number of distinct threats:
- Design confound: an experimenter’s mistake in designing the independent variable;
it is a second variable that happens to vary systematically along with the intended
independent variable and therefore is an alternative explanation for the result.

When an experiment has a design confound, it has poor internal validity and cannot support
a causal claim.
Not every potentially problematic variable is a confound. A variable is a problem only if it
varies systematically with the independent variable; if its variability is unsystematic
(random or haphazard), it is not a confound. Unsystematic variability can still lead to other
problems in an experiment: it can obscure differences in the dependent variable, making
them difficult to detect. However, unsystematic variability should not be called a design
confound.
- Selection effect: when the kinds of participants in one level of the independent
variable are systematically different from those in the other.
A selection effect may occur if the experimenters assign one type of person to one condition,
and another type of person to another condition. Well-designed experiments often use
random assignment to avoid selection effects. Assigning participants at random to different
levels of the independent variable controls for all sorts of potential selection effects.
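A minimal sketch of random assignment in Python (the participant labels and condition names are hypothetical): shuffling the pool before dealing participants out gives every person an equal chance of landing in any condition, which spreads individual differences unsystematically across groups.

import random

def randomly_assign(participants, conditions):
    # Shuffle the pool, then deal participants out across the conditions
    # so each person is equally likely to end up in any group.
    pool = list(participants)
    random.shuffle(pool)
    groups = {condition: [] for condition in conditions}
    for i, person in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(person)
    return groups

# Hypothetical example: six participants, two levels of the IV.
print(randomly_assign(["P1", "P2", "P3", "P4", "P5", "P6"],
                      ["treatment", "control"]))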
In the case that researchers wish to be absolutely sure the experimental groups are as equal
as possible before they administer the independent variable, they may choose to use
matched groups or matching. To create a matched group, the researchers would first
measure the participants on a particular variable that might matter to the dependent variable.
They would next match participants up in pairs, starting with the two with the highest scores,
and within that matched set, randomly assign one of them to each of the two conditions.
They would continue this process until they reach the participants with the lowest scores.
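The matching procedure just described can be sketched in Python as follows (the pretest scores are hypothetical): participants are ranked on the matching variable, paired from the top down, and each pair is split at random between the two conditions.

import random

def matched_pairs_assignment(scores):
    # Rank participants on the matching variable, highest score first.
    ranked = sorted(scores, key=scores.get, reverse=True)
    condition_a, condition_b = [], []
    # Pair adjacent participants, then randomly assign one member
    # of each matched pair to each of the two conditions.
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]
        random.shuffle(pair)
        condition_a.append(pair[0])
        condition_b.append(pair[1])
    return condition_a, condition_b

# Hypothetical pretest scores on a variable that might matter to the DV.
scores = {"P1": 92, "P2": 88, "P3": 75, "P4": 74, "P5": 60, "P6": 58}
print(matched_pairs_assignment(scores))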

Pp. 298 - 306
In an experiment, researchers operationalize two constructs, the independent variable and
the dependent variable. When you interrogate the construct validity of an experiment, you
should ask about the construct validity of each of these variables.

Dependent variables: you should start by asking how well the researchers measured their
dependent variables. One aspect of good measurement is face validity.

To interrogate the construct validity of the independent variables, you would ask how well
the researchers manipulated (or operationalized) them. In some studies, researchers need
to use manipulation checks to collect empirical data on the construct validity of their
independent variables. Manipulation check: an extra dependent variable that researchers
can insert into an experiment to convince themselves that their experimental manipulation worked.
Manipulation checks are more likely to be used when the intention is to make the
participants think or feel certain ways.
The same procedure might also be used in a pilot study: a simple study, using a separate
group of participants, that is completed before (or sometimes after) conducting the study of
primary interest.
Experiments are designed to test theories. Therefore interrogating the construct validity of an
experiment requires you to evaluate how well the measures and manipulations researchers
used in their study capture the conceptual variables in their theory.

As with an association or frequency claim, when interrogating a causal claim’s external
validity, you ask how the experimenters recruited their participants.
When asking about external validity, you ask about random sampling.
When asking about internal validity, you ask about random assignment.

In experiments, internal validity is often prioritized over external validity. To get a clean,
confound-free manipulation, researchers may have to conduct their study in an artificial
environment, and these locations may not represent situations in the real world.

When interrogating statistical validity, the first question to ask is whether the difference
between means obtained in the study is statistically significant. A statistically significant
result suggests covariance exists between the variables in the population from which the
sample was drawn.

Knowing a result is statistically significant tells you the result was probably not drawn by
chance from a population in which there is no difference between groups. However, if a
study used a very large sample, even tiny differences might be statistically significant. Asking
about effect size can help you evaluate the strength of the covariance.
The correlation coefficient r can help researchers evaluate the effect size of an association.
In experiments, the indicator of standardized effect size is called d; it represents how far
apart two experimental groups are on the dependent variable and how much the scores
within the groups overlap. It takes into account both the difference between means and the
spread of scores within each group (the standard deviation). When d is larger, it usually
means the independent variable caused the dependent variable to change for more of the
participants in the study. When d is smaller, it usually means the scores of participants in
the two experimental groups overlap more.
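One common way to compute d (Cohen's d, using the pooled standard deviation) is sketched below in Python; the group scores are hypothetical:

import statistics
from math import sqrt

def cohens_d(group1, group2):
    # d = (difference between group means) / (pooled standard deviation).
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    s1, s2 = statistics.stdev(group1), statistics.stdev(group2)
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical DV scores for a treatment and a control group.
treatment = [14, 16, 15, 18, 17]
control = [11, 12, 13, 10, 14]
print(round(cohens_d(treatment, control), 2))  # larger d = less overlap between groups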

Statistical review, pp. 479 - 495
Inferential statistics: a set of techniques that uses the laws of chance and probability to
help researchers make decisions about the meaning of their data and the inferences they
can make from that information. Inferential statistics are performed with the goal of
estimation.
The traditional inferential statistics technique is called null hypothesis significance testing
(NHST). It follows a set of steps to determine whether the result from a study is statistically
significant.
Null hypothesis: the assumption that nothing is going on.
The steps of null hypothesis significance testing:
1. Assume there is no effect (the null hypothesis)
2. Collect data
3. Calculate the probability of getting such data, or even more extreme data, if the null
hypothesis is true
4. Decide whether to reject or retain the null hypothesis
When we reject the null hypothesis, we are essentially saying: data like these could have
come about by chance, but data like these happen very rarely by chance; therefore we are
pretty sure the data were not the result of chance.
When we retain the null hypothesis, we are essentially saying: data like these could have
happened just by chance; in fact, data like these are likely to happen by chance …% of
the time; therefore we conclude that we are not confident enough, based on these data, to
reject the null hypothesis.
Alpha level: the point at which researchers decide whether p is too high (and therefore
retain the null hypothesis) or very low (and therefore reject the null hypothesis). It is
usually set at 5%.
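These steps can be illustrated with an independent-samples t test in Python using SciPy; this is a sketch with hypothetical data, and the book itself does not prescribe this particular test:

from scipy import stats

# Hypothetical DV scores for two conditions.
treatment = [14, 16, 15, 18, 17, 13, 16, 15]
control = [12, 11, 13, 14, 12, 15, 11, 13]

# Steps 1-3: assume the null hypothesis, collect the data, and compute
# p, the probability of data at least this extreme if the null is true.
t_statistic, p_value = stats.ttest_ind(treatment, control)

# Step 4: compare p to the alpha level (conventionally 5%).
alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.3f} >= {alpha}: retain the null hypothesis")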
