ISYE6414 FINAL EXAM BUNDLE
Least Squares Estimation (LSE) cannot be applied to GLM models. - ANSWER: False -
it is applicable, but it does not fully use the distributional information in the data.
In multiple linear regression with iid errors and equal variance, the least squares
estimates of the regression coefficients are always unbiased. - ANSWER: True - the
least squares estimates are BLUE (Best Linear Unbiased Estimators) in multiple
linear regression.
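A minimal simulation sketch of this unbiasedness (hypothetical data, Python with numpy assumed; not part of the original notes): averaging the least squares slope over many replications recovers the true slope.

import numpy as np

rng = np.random.default_rng(0)
n, true_slope = 50, 2.0
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
slopes = []
for _ in range(5000):
    y = 1.0 + true_slope * x + rng.normal(size=n)  # iid N(0, 1) errors
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    slopes.append(beta_hat[1])
print(np.mean(slopes))  # close to 2.0: the least squares slope is unbiased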
Maximum Likelihood Estimation is not applicable for simple linear regression and
multiple linear regression. - ANSWER: False - In SLR and MLR, the LSE and MLE are
the same under normal iid errors.
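As a quick check of that equivalence, a sketch on simulated data (assumed setup, statsmodels assumed): fitting by OLS and by Gaussian maximum likelihood yields the same coefficients.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 3.0 + 1.5 * x + rng.normal(size=100)
X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                                 # least squares
mle = sm.GLM(y, X, family=sm.families.Gaussian()).fit()  # Gaussian MLE
print(ols.params, mle.params)  # identical coefficient estimates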
Backward elimination requires a pre-set probability of Type II error. - ANSWER:
False - it requires a pre-set probability of Type I error (the significance level).
The first degree of freedom in the F distribution for any of the three procedures in
stepwise regression is always equal to one. - ANSWER: True - each step adds or
removes a single predictor, so the partial F-test has one numerator degree of freedom.
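A hedged sketch of that one-degree-of-freedom numerator (simulated data, statsmodels assumed): comparing nested models that differ by a single predictor via anova_lm reports df_diff = 1.

import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
df = pd.DataFrame({"x1": rng.normal(size=80), "x2": rng.normal(size=80)})
df["y"] = 1 + 2 * df["x1"] + 0.5 * df["x2"] + rng.normal(size=80)
reduced = ols("y ~ x1", data=df).fit()
full = ols("y ~ x1 + x2", data=df).fit()
print(anova_lm(reduced, full))  # df_diff = 1: one predictor added per step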
MLE is used for GLMs to handle complicated link functions in modeling the X-Y
relationship. - ANSWER: True
In GLMs the link function cannot be nonlinear. - ANSWER: False - it can be linear
or nonlinear, as long as it is a known parametric function.
When the p-value of the slope estimate in SLR is small, the R-squared becomes
smaller too. - ANSWER: False - when the p-value is small, the model fit is more
significant and R-squared tends to be larger.
In GLMs, the main reason one does not use LSE to estimate model parameters is the
potential constraints on the parameters. - ANSWER: False - potential constraints on
the parameters of GLMs are handled by the link function.
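To see how the link function absorbs such a constraint, a minimal Poisson sketch (simulated data, statsmodels assumed): the default log link maps an unconstrained linear predictor to a strictly positive mean, with no explicit constraint on the coefficients.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = rng.normal(size=200)
y = rng.poisson(np.exp(0.2 + 0.8 * x))  # Poisson response: mean must be positive
fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Poisson()).fit()
# The log link guarantees positive fitted means for any coefficient values.
print(fit.fittedvalues.min() > 0)  # True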
The R-squared and adjusted R-squared are not appropriate model comparison
measures for nonlinear regression, but they are for linear regression models. -
ANSWER: True - the underlying assumption of the R-squared calculation is that you
are fitting a linear model.
The decision from the ANOVA table F-test of whether a model is significant depends
on the normal distribution of the response variable. - ANSWER: True - the F-test
relies on normally distributed errors.
When the data may not be normally distributed, AIC is more appropriate for variable
selection than adjusted R-squared. - ANSWER: True
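A small illustration of AIC-based comparison (simulated heavy-tailed data, statsmodels assumed; the setup is hypothetical): the model with the smaller AIC is preferred.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 100
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1 + 2 * x1 + rng.standard_t(df=3, size=n)  # heavy-tailed (non-normal) errors
m1 = sm.OLS(y, sm.add_constant(x1)).fit()
m2 = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
print(m1.aic, m2.aic)  # the model with the smaller AIC is preferred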
The slope of a linear regression equation is an example of a correlation coefficient. -
ANSWER: False - the correlation coefficient is the r value; it has the same sign
(+ or -) as the slope but is not the slope itself.
In multiple linear regression, as the value of R-squared increases, the relationship
between the predictors becomes stronger. - ANSWER: False - R-squared measures how
much of the variability in the response is explained by the model, NOT the strength
of the relationships among the predictors.
When dealing with a multiple linear regression model, the adjusted R-squared can
be greater than the corresponding unadjusted R-squared value. - ANSWER: False -
the adjusted R-squared takes the number of predictors into account, so it is never
greater than the unadjusted R-squared.
In a multiple regression problem, a quantitative input variable x is replaced by
x − mean(x). The R-squared for the fitted model will be the same. - ANSWER: True -
centering is a linear reparameterization, so the fitted values (and hence R-squared)
are unchanged.
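A quick numerical confirmation (simulated data, statsmodels assumed): centering x changes only the intercept, so R-squared is identical.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.normal(loc=10, size=60)
y = 4 + 3 * x + rng.normal(size=60)
r2_raw = sm.OLS(y, sm.add_constant(x)).fit().rsquared
r2_centered = sm.OLS(y, sm.add_constant(x - x.mean())).fit().rsquared
print(np.isclose(r2_raw, r2_centered))  # True: only the intercept changes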
The estimated slope coefficient of a regression line is positive when the coefficient
of determination is positive. - ANSWER: False - R-squared is always non-negative,
regardless of the sign of the slope.
If the outcome variable is quantitative and all explanatory variables take values 0 or
1, a logistic regression model is most appropriate. - ANSWER: False - logistic
regression is for a binary response; with a quantitative response, more investigation
is needed to determine the correct model.
After fitting a logistic regression model, a plot of residuals versus fitted values is
useful for checking if model assumptions are violated. - ANSWER: False - for logistic
regression, use deviance residuals instead.
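A sketch of extracting deviance residuals from a fitted logistic model (simulated data, statsmodels assumed), the appropriate diagnostic quantity here.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
x = rng.normal(size=300)
p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))
y = rng.binomial(1, p)
fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit()
dev_res = fit.resid_deviance  # diagnostic residuals for logistic regression
print(dev_res[:5])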
In a greenhouse experiment with several predictors, the response variable is the
number of seeds that germinate out of 60 that are planted with different treatment
combinations. A Poisson regression model is most appropriate for modeling these
data. - ANSWER: False - Poisson regression models unbounded count or rate data;
here the response is a bounded count out of 60 trials, so a binomial (logistic)
regression is more appropriate.
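A minimal sketch of the more appropriate binomial fit (hypothetical dose variable and simulated counts, statsmodels assumed): the response is passed as (successes, failures) out of 60 trials.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
dose = rng.uniform(0, 1, size=40)                 # hypothetical treatment variable
p = 1 / (1 + np.exp(-(-0.5 + 2.0 * dose)))
germinated = rng.binomial(60, p)                  # successes out of 60 seeds
endog = np.column_stack([germinated, 60 - germinated])  # (successes, failures)
fit = sm.GLM(endog, sm.add_constant(dose), family=sm.families.Binomial()).fit()
print(fit.params)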
For Poisson regression, we can reduce Type I errors in identifying statistical
significance of the regression coefficients by increasing the sample size. - ANSWER:
True - the tests rely on the asymptotic normality of the MLEs, so the approximation
(and hence Type I error control) improves with larger samples.
Both LASSO and ridge regression always provide greater residual sum of squares
than that of ordinary multiple linear regression. - ANSWER: True - the OLS estimates
minimize the RSS, so shrinking the coefficients with a penalty cannot reduce it.
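A quick empirical check (simulated data, scikit-learn assumed): the OLS fit attains the smallest training RSS, so ridge and LASSO can only match or exceed it.

import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso

rng = np.random.default_rng(8)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.5, 3.0]) + rng.normal(size=100)

def rss(model):
    return np.sum((y - model.fit(X, y).predict(X)) ** 2)

print(rss(LinearRegression()), rss(Ridge(alpha=1.0)), rss(Lasso(alpha=0.1)))
# OLS attains the smallest RSS; the penalized fits can only match or exceed it.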
If data on (Y, X) are available at only two values of X, then the model
Y = β₁X + β₂X² + ε provides a better fit than Y = β₀ + β₁X + ε. - ANSWER: False -
with only two distinct X values, both two-parameter models can typically fit the two
group means exactly, so neither provides a better fit.
If the Cook's distance for any particular observation is greater than one, that data
point is definitely a recording error and thus needs to be discarded. - ANSWER: False -
a large Cook's distance flags an influential point to investigate, not automatically a
recording error; compare it against the other observations before deciding whether
1 is too large.
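A sketch for computing Cook's distance with statsmodels (simulated data with one planted influential point): large values flag observations to investigate, not to delete automatically.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
x = rng.normal(size=50)
y = 2 + x + rng.normal(size=50)
x[0], y[0] = 6.0, -10.0  # plant one influential observation
fit = sm.OLS(y, sm.add_constant(x)).fit()
cooks_d = fit.get_influence().cooks_distance[0]  # first element: the distances
print(cooks_d.max(), int(cooks_d.argmax()))  # flag for investigation, not deletion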