Summary ARMS general part - lectures, seminars, Grasple & Workgroups


This comprehensive summary covers the following parts of the general section of ARMS: * Lectures * Grasple lessons * Preparatory workgroup assignments. The summary is in English, because the exam is in English too!


  • 5 May 2020
  • 57 pages
  • 2019/2020
  • Summary

By: NadesjaFijn
Summary ARMS general part
This summary contains the following aspects:
• Lectures and seminars
• Preparatory assignments
• Grasple lessons


Lecture 1: multiple linear regression
A found association (correlation) does not necessarily mean that there is causation.

You must always look very critically at the studies you read. Always consider the following aspects:
Review the way studies were performed.
• Is it a representative sample?
• Do they use reliable measures of the variables?
• Do they use correct analyses and interpret the results correctly?
Consider alternative explanations for the statistical association.
• Is there a third variable that could cause the found relations?
• Association does not mean causation.
• Does the effect remain when additional variables are included?

You can investigate a third variable with a multiple regression (adding variables to your model).

Simple linear regression
• Involves one outcome (Y) and one predictor (X).
• The outcome = the dependent variable (e.g. IQ)
• The predictor = the independent variable (e.g. birth order)

The formula for the simple linear regression is as follows:

Yi = B0 + B1X1i + Ei
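The simple regression equation (Y = B0 + B1X + E) can be estimated with the usual least-squares formulas. A minimal sketch in Python with invented numbers (the birth-order/IQ pairing just mirrors the lecture example):

```python
import numpy as np

# Invented example data: predictor X (e.g. birth order) and outcome Y (e.g. IQ)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([102.0, 101.0, 99.0, 97.0, 96.0])

# Least-squares estimates: B1 = cov(X, Y) / var(X), B0 = mean(Y) - B1 * mean(X)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

y_hat = b0 + b1 * x   # predicted scores (Y-hat)
e = y - y_hat         # residuals Ei = observed minus predicted
```

With these made-up data the fitted line is Ŷ = 103.8 − 1.6·X; note that the residuals sum to zero, as they always do in least squares.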
A multiple linear regression (MLR) is used to look at more than one predictor in a study. Multiple regression is all about adding variables to your model.
The formula for the MLR is as follows:

Yi = B0 + B1X1i + B2X2i + … + Ei
Note: most formulas will not be part of the exam (how you compute things, the equations in Field). However, the basic formulas (such as the MLR equation) are basic knowledge, so you do need to know those.
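The MLR equation (Y = B0 + B1X1 + B2X2 + E) can be fitted by least squares just like the simple case, with one design-matrix column per predictor. A hedged sketch with invented data, constructed so the true coefficients are known:

```python
import numpy as np

# Invented data for two predictors; Y is built as Y = 10 + 2*X1 - 1*X2
# with no error term, so the fit should recover these coefficients exactly.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
y = 10.0 + 2.0 * x1 - 1.0 * x2

# Design matrix: a column of ones (for the intercept B0) plus one column per predictor
X = np.column_stack([np.ones_like(x1), x1, x2])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
```

In real data there is an error term Ei, so the estimates only approximate the population values.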




Important with (multiple) linear regression

1. To what extent is the variation in the data explained (residual → R2)? The closer the dots are to the line in a scatterplot, the better the variance is explained by the model. Can the predictor explain why some people have a high IQ and some people have a low IQ?
2. The slope of the regression line (B1) → is the line horizontal or diagonal? The larger the B value, the steeper the slope; the steeper the slope, the stronger the impact of X on Y.
You always look at both points. That is the key thing of any regression model:
1. R squared (residual) = how well does the model fit the data?
2. B1 = how important is the predictor (X) for predicting the outcome?
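The model-fit part of this pair, R squared, can be computed directly from observed and predicted scores. A small sketch with made-up numbers:

```python
import numpy as np

# Invented observed scores and model predictions
y = np.array([5.0, 7.0, 9.0, 11.0, 14.0])
y_hat = np.array([5.2, 6.8, 9.4, 11.0, 13.6])

ss_res = np.sum((y - y_hat) ** 2)      # residual (unexplained) variation
ss_tot = np.sum((y - y.mean()) ** 2)   # total variation in Y
r_squared = 1.0 - ss_res / ss_tot      # proportion of variance explained
```

Here the predictions sit very close to the observations, so R squared is close to 1; the dots are "close to the line".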

Multiple linear regression (MLR)
• Examines a model in which multiple predictors (X) are included to check their unique linear effect on Y

The MLR model

Yi = B0 + B1X1i + B2X2i + Ei

Y = outcome variable
B0 = intercept = the value of Y when X is 0
B1 = slope of X1
B2 = slope of X2
Ei = residual / error

When an equation is written as Y = …, it means that all the scores are observed. When it is written as Ŷ = …, it means that the scores are not observed but predicted.

Additive linear model = another name for multiple regression.
Each of the predictors is additive: added together, they predict more than either one does separately.

Types of variables in an MLR
• There are 4 measurement levels: nominal, ordinal, interval, ratio
• Categorical: nominal and ordinal
• Continuous: interval and ratio
• MLR always uses ratio or interval variables for both the predictors and the outcome variable; they are continuous and numerical.
• But: categorical predictors can be included as dummy variables.

Dummy coding
• Dummy coding is used when you want to include a variable that is not at interval or ratio level, for instance gender (male/female).
• Dummy coding only ever uses 0 and 1 (not 20 and 30): two possible values.
• When you choose 0 for women, you'll have 1 for men.
• B0 + B1 × 1 = prediction for males
• B0 + B1 × 0 → B0 = prediction for females (the group average is the best prediction)
• You look at B1 because it is the difference between women and men. It is the difference in prediction between the two groups (which is what you want to know: is gender a significant predictor?).

Gender is easy because it has two categories, but colour can have more.
For instance, 4 categories → red, blue, green, yellow.

Then the MLR equation becomes: Y = B0 + B1·red + B2·blue + B3·green.
You always need one dummy variable less than the number of categories!
The category that is not turned into a dummy is called the reference group.
B1 (red) → red or not red.
B2 (blue) → blue or not blue.
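The colour example above can be sketched as plain dummy coding by hand; the data are invented and "yellow" is arbitrarily chosen as the reference group:

```python
# Invented 4-category colour variable; "yellow" is the reference group.
colours = ["red", "blue", "green", "yellow", "red", "green"]

# k categories need k - 1 dummy variables; the left-out category is the reference.
d_red = [1 if c == "red" else 0 for c in colours]
d_blue = [1 if c == "blue" else 0 for c in colours]
d_green = [1 if c == "green" else 0 for c in colours]
# Model: Y = B0 + B1*d_red + B2*d_blue + B3*d_green + E,
# where B0 is the predicted Y for the reference group (yellow),
# and each B is that colour's difference from yellow.
```

Note that each row has at most one dummy equal to 1; a yellow case scores 0 on all three dummies.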

MLR and hierarchical MLR
A hierarchical MLR gives you the opportunity to look at two different research questions/models:
1) one model with the already existing predictors, and
2) a second model in which, given the relations already present in the first model, you add extra variables.
In the second model you add variables to see whether they give a significant improvement. That is the hierarchical MLR (is model 2 significantly better than model 1?).

Hypotheses
For each model you have the following hypotheses:

H0: R2 = 0 vs. H1: R2 > 0
H0: R2 change = 0 vs. H1: R2 change > 0
H0: B1 = 0 vs. H1: B1 ≠ 0

R2 = is there a good fit? Variation explained by the predictors?
R2 change = is the second model significantly better than the first model? Relevant?
B1 = what does the slope do? How does the predictor predict the Y value?

Output
Always read the titles, subtitles and footnotes of SPSS output!

Model summary (second hypothesis)
• R = multiple correlation coefficient: the correlation between Y observed and Y predicted. If you square it, you get R squared.
• R square = proportion of variance in the outcome explained by the model. Note: an output of .135 means 13.5% of the variance is explained by the model.
• Adjusted R square = R square is computed for your sample and is not an excellent estimate for the population. Most of the time it is too optimistic; there is a bit of bias, and the more predictors, the more optimistic it is. Adjusted R square corrects for this bias. If you want to say something about the variation in Y in the population, you need to use the adjusted R square.
• R square change = for the first model this equals its R square, because there is no earlier model to compare with. For the second model it is the change between the first and second model, together with a significance test for that change.
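The bias correction can be sketched with the standard adjusted-R-square formula found in regression texts such as Field; the sample size and predictor count below are invented, and the R square reuses the .135 example:

```python
n = 100       # sample size (invented)
k = 4         # number of predictors (invented)
r2 = 0.135    # R square from the sample (the 13.5% example)

# Adjusted R square shrinks R square more when there are more predictors
# relative to the sample size: 1 - (1 - R2) * (n - 1) / (n - k - 1)
adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
```

Here the adjusted value (about .099) is noticeably below .135, showing how the correction removes the sample's optimism.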

Inferential statistics = using the sample and its results to say something about the population.

ANOVA (first question for each model)
R2 = is there a good fit? Is variation explained by the predictors?
• For the first model the F and significance are the same as in the model summary; for the second model they differ.
• The difference between the model summary and the ANOVA is as follows:
• ANOVA = tests whether R squared is significantly non-zero in both models.
• Model summary = significance of the change: is the second model better than the first model?
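The "is model 2 significantly better" question is answered with the F-test on the R-square change, using the standard hierarchical-regression formula; all numbers below are invented, and this sketch assumes SciPy is available for the p-value:

```python
from scipy import stats

n = 100                    # sample size (invented)
k1, k2 = 2, 4              # predictors in model 1 and model 2 (invented)
r2_1, r2_2 = 0.135, 0.180  # R square of each model (invented)

df1 = k2 - k1              # number of predictors added in model 2
df2 = n - k2 - 1           # residual degrees of freedom of model 2
f_change = ((r2_2 - r2_1) / df1) / ((1.0 - r2_2) / df2)
p_change = stats.f.sf(f_change, df1, df2)  # "Sig. F Change" in the SPSS output
```

If p_change is below .05, model 2 explains significantly more variance than model 1.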

Coefficients
• B (SLOPE!) = what is the relation between a predictor and the outcome? It tells you what the unique prediction is when the other predictors are also in the model. The predictors cannot be compared in importance via B, because each has its own operationalisation (0–10, or 20–1000); one unit on one scale can be something very different on another. When you look at a B, all other predictors are held fixed at one point. (When you want to say something about years of education and life satisfaction, you can for instance say that all participants are 15 years old, age being the other predictor. Then each added year of education comes with a certain increase or decrease in life satisfaction.)
• Beta = if you want to say which predictor is the most important, you have to look at the Beta. The highest Beta means the highest importance. These coefficients are standardised and therefore comparable. Note: a negative value can also be the largest; "biggest" here means furthest away from zero. The minus or plus sign tells you the direction of the relation.
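Standardisation can be sketched by z-scoring predictors and outcome before fitting; the slopes of that fit are the Betas, comparable across scales. The simulated data below are invented, with X2 deliberately on a much larger scale than X1:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)           # predictor on a small scale
x2 = rng.normal(size=200) * 10.0    # predictor on a much larger scale
y = 3.0 * x1 + 0.2 * x2 + rng.normal(size=200)

def zscore(v):
    # Standardise a variable to mean 0 and standard deviation 1
    return (v - v.mean()) / v.std()

# Fit on z-scored variables: the slopes are now the standardised Betas
Xz = np.column_stack([np.ones(200), zscore(x1), zscore(x2)])
beta1, beta2 = np.linalg.lstsq(Xz, zscore(y), rcond=None)[0][1:]
# Raw B2 (0.2) looks tiny next to B1 (3.0), but in standardised units the two
# predictors are of comparable importance (0.2 * 10 = 2 versus 3 per SD of X).
```

This is why SPSS prints both B (raw units, for interpretation) and Beta (standardised, for comparing importance).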
