Summary: Quantitative Research Methods (all lectures linked with book)

Table of Contents
Lecture 1 .......................................................................................................................... 2
Testing representativeness ...................................................................................................... 3
Data cleaning: Data matrix ..................................................................................................... 4
Lecture 2 .......................................................................................................................... 6
Reliability analysis .................................................................................................................. 6
Factor analysis ........................................................................................................................ 8
Simple structure ............................................................................................................................................. 8
Steps in factor analysis................................................................................................................................... 9

Lecture 3 ........................................................................................................................ 13
Approach ANOVA ................................................................................................................... 14
ANOVA: characteristics and assumptions .................................................................................................... 15
Two way ANOVA .................................................................................................................... 16
Lecture 4 ........................................................................................................................ 18
Assumptions ANCOVA ............................................................................................................ 18
ANCOVA: SPSS ....................................................................................................................... 19
ANCOVA effect covariate ........................................................................................................ 19
MANOVA hypothesis .............................................................................................................. 20
MANOVA characteristics and assumptions .............................................................................. 21
Research on education 1 ........................................................................................................ 21
Assumptions of homogeneity ................................................................................................. 22
Post hoc tests for Discipline .................................................................................................... 23
Lecture 5 ........................................................................................................................ 24
Steps regression analysis ........................................................................................................ 24
2. Estimating baseline model .................................................................................................. 25
3. Distribution of the variables................................................................................................ 25
4. Check model assumptions................................................................................................... 27
Lecture 6 ........................................................................................................................ 28
Interaction model................................................................................................................... 28
Johnson-Neyman procedure ...................................................................................... 30
Mediation model.................................................................................................................... 31
Lecture 7 ........................................................................................................................ 33
Nominal dependent variable: logistic regression ..................................................................... 33





Lecture 1
This is Stephen Toulmin's model of argumentation (shown as a diagram in the original slides). Ground stands for information, warrant stands for rules or principles, and claim stands for a choice, decision, or opinion.

Nominal categories differentiate only in name. Ordinal categories are similar, but they include a rank order. The interval measurement level has different measurement categories, but the distance between the categories is equal → temperature in Celsius. The ratio measurement level has a meaningful zero point → an absence of something. So, temperature in Celsius is not a ratio measurement because its zero does not mean an absence, but a percentage is a ratio measurement since 0% means an absence.

Validity: the extent to which the characteristics measured are the actual characteristics of the objects
involved in the research.

Reliability: repeated measurements under the exact same conditions should give the same result.

Utility: the extent to which the results fit the problem of the client, or the extent to which the results actually contribute to solving the problem.

Questionnaires can be used in surveys and experiments as well as in observation. Asking questions to respondents can take different formats → from oral and open to a closed format on the internet. Observations can be differentiated into open observation and observation with predefined categories → the latter is more appropriate in quantitative research. With content analysis you look at all kinds of written sources and search for typical behavior. Primary data are collected for the current research; no one collected these data before you. Secondary data have been collected before you, for all kinds of other purposes.

Questionnaire types:
- Open-ended / closed questions
- Single / multiple response → with single response you can give only one answer per question; with multiple response you can choose more options, e.g., which sodas do you like? Coca-Cola, 7up, etc.
- Dichotomous questions → yes or no questions
- Scale items → Likert scale


Sampling
Population → all students in NL → it is hard to reach every student in NL, so you make a selection of this population → the operational population → all students in Nijmegen and Utrecht. But you have to make sure that this operational population is an adequate representation of the whole population. There are some decisions you have to make, with argumentation, to see whether this population is good enough. Sampling frame → the list of your operational population. Your initial sample is the sample with which you would prefer to do your research. However, in every research project some participants do not react. So, your final sample is the set of participants who really do participate in the study.
Probability sampling → chance is responsible for the selection of your initial sample. If this is the case, you have to check the representativeness of your sample against the population.

Non-probability sampling → representativeness cannot be assumed → for example, if you choose to pick a convenience sample (people in the Refter who are present at a specific time). It can never be assumed to be representative.

Ethical issues




Research ethics is becoming more and more important. You need to follow informed-consent rules: inform respondents about the purpose of the research and ask whether they want to participate. Furthermore, respect confidentiality and privacy.

Testing representativeness
Representativeness is the degree to which a sample adequately reflects the population for relevant characteristics. If the figures are of a categorical nature, you should use a chi-square test.

When the characteristic is at an interval or ratio level (e.g., age), you should test representativeness with a t-test for one single sample. In such a t-test you compare the mean age of the sample with the age published in data sets of high quality, for example by Statistics Netherlands (CBS). For example, if the mean age in the sample is 40.1 and CBS reports that the average age of employed people is 42.4, you use the one-sample t-test to see whether your sample figure statistically differs from the population figure.
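The one-sample comparison above can be sketched in a few lines of Python (the course itself works in SPSS). The ages below are made up for illustration; only the population value 42.4 comes from the text.

```python
import math

def one_sample_t(sample, mu0):
    """t statistic for testing whether the sample mean differs from mu0."""
    n = len(sample)
    mean = sum(sample) / n
    # Sample variance with n-1 in the denominator (unbiased estimator)
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    se = math.sqrt(var / n)          # standard error of the mean
    return (mean - mu0) / se

# Hypothetical ages of 10 respondents (not the real course data)
ages = [38, 41, 40, 39, 43, 42, 37, 44, 40, 38]
t = one_sample_t(ages, mu0=42.4)     # CBS population figure from the text
print(round(t, 2))                   # negative: sample mean is below 42.4
```

The t statistic is then compared with the critical value of the t distribution (for the chosen alpha and n-1 degrees of freedom) to decide whether the sample figure differs from the population figure.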

Chi-square test (categorical): comparing two categorical distributions with one another. When you test a cross tabulation, you use the chi-square statistic. The observed frequency (Fo) and the expected frequency (Fe) are important.
Frequency observed: (table of observed counts per supermarket, shown in the original slides)
When you have the observed frequencies, you have to calculate the expected frequencies. Get the information from a reliable organization with good statistics; what you will learn is that the distribution of people visiting the supermarkets is a uniform distribution → the same expected count in all three categories.

So, Fo is the observed frequency and Fe is the expected frequency, which is the same for every supermarket (26.7).

1. Calculate the difference between Fo and Fe (Fo – Fe).
2. Square this difference to get rid of the negative numbers.
3. Divide the squared difference by a comparison base (Fe).
4. Sum the values and you get the chi-square.


- H0: distribution in sample = distribution in population
- H1: distribution in sample ≠ distribution in population
- Chi-square (2, N = 80) = 0.77; p = 0.68
- Statistical decision (α = .30): p > α, so H0 is not rejected (the probability is higher than the alpha level, so no rejection)
- Conclusion: the sample is representative for the population.
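Steps 1-4 above can be sketched in plain Python. The three observed counts are hypothetical (the slide's actual table is not reproduced in the text); only N = 80, the uniform expectation, and df = 2 follow the example. For df = 2 the chi-square p-value simplifies to exp(-x/2), which avoids any external library.

```python
import math

def chi_square_uniform(observed):
    """Goodness-of-fit chi-square against a uniform expected distribution."""
    n = sum(observed)
    fe = n / len(observed)                       # same expected count per category
    return sum((fo - fe) ** 2 / fe for fo in observed)

# Hypothetical visitor counts at three supermarkets (N = 80, as in the slide)
observed = [30, 24, 26]
chi2 = chi_square_uniform(observed)
# For df = 2 the chi-square survival function is exactly exp(-x/2)
p = math.exp(-chi2 / 2)
print(round(chi2, 2), round(p, 2))
```

With these invented counts the sample would again be judged representative at α = .30, since p is well above alpha.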

Output from SPSS: (chi-square test output shown in the original slides)

We use a relatively large alpha (.30 in the example above) because we do not want to decide too quickly that the sample is representative: here it is more important to avoid a type 2 error than a type 1 error. Consequently: increase alpha to get a smaller beta.




When you increase alpha, beta decreases and the other way around.


Data cleaning: Data matrix
Data cleaning is checking whether all the data in the data matrix are correct. Suspicious patterns of
answering in questionnaires should be traced in data. For example, only positive/negative answers.

Missing data
Missing values on variables can occur for a variety of reasons. There are two problems:
- The number of cases used in statistical analyses must be sufficient (power)
- The type of cases (respondents) used in statistical analyses (validity)
When a relationship exists in the population, you hope to find it back by means of your statistical testing → e.g., a Pearson correlation. When the power is too low, you do not find an association in your statistical testing although it may be there in the population. Because of missing values, you might lose many respondents, and that will negatively influence the power of your test. Additionally, there may be problems with the validity if the cases with missing values form a selective group.

Procedure:
1. Determine the type of missing data
2. Determine the extent of missing data
3. Diagnose the randomness of the missing data
4. Choose and apply an imputation method

1. Type:
o Ignorable (non-response, routing, censored data / design): you cannot do much about it, so it is ignorable.
▪ Routings in your questionnaire will result in missing values that you can ignore (if, for example, you have the question "Are you employed?" and a "no" skips to question 18, you will of course have missing values from those participants).
▪ Censored data / design → if, for example, you study the army, you will also have some missing data due to their selection criteria; you cannot do anything about it because that is part of what this group looks like.
o Not ignorable (no answers, missing categories)
▪ May be caused by the questions: too long, not clear, etc.
▪ May be caused by interviewers, whose inappropriate behaviour leads to an incorrect questionnaire.
▪ Check whether your routing is correct.
▪ Data entry: you deal with 5-point scales but accidentally type a 6 instead of a 5. This 6 would then be treated as a missing value while it was just a 5. Such errors can easily be recognized.

2. Extent
When are missing data a problem? An amount of less than 10% can be ignored, except when the missing data occur in a nonrandom fashion. When missing data are not spread completely at random but located in specific groups/questions, then it is a problem and should be taken into consideration → look critically at your data.
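The 10% rule of thumb can be checked mechanically. The missing codes 88 and 99 echo the example below; the answer values themselves are made up.

```python
# Codes that mark a missing value in the data matrix (assumed, per the example)
MISSING_CODES = {88, 99}

def missing_share(values):
    """Fraction of values that carry a missing-value code."""
    return sum(v in MISSING_CODES for v in values) / len(values)

answers = [1, 3, 88, 2, 5, 99, 4, 2, 1, 3]   # invented responses, 2 of 10 missing
share = missing_share(answers)
print(f"{share:.0%} missing; problem under the 10% rule: {share > 0.10}")
```

Remember that even a share below 10% can still be a problem if the missing values are concentrated in specific groups.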

Two missing categories, 88 and 99 → 69 respondents with a missing value → 7.1%, which would not be a problem.

However, a closer look at the people with a missing value shows that they are mainly students and persons who do the housekeeping. The missing values also seem to be concentrated among people who have junior vocational education and senior vocational education. So although at first you think it is not a problem, there might be some selectivity at stake → with such selectivity the generalization would be less adequate.
3. Randomness
▪ Missing completely at random → MCAR → missing data are equal in each group.
▪ Missing at random → MAR → more missing data in one group than in others.
An example:





In this table we can see the missing percentage in column 6. The extent of the missing values is in principle no problem. However, we do have to diagnose the randomness of the missing data.

To distinguish whether the missing values are MAR or MCAR, we look at two types of tables, one of which confronts a categorical variable with a metric variable. To see whether the missing values differ for specific groups, you look at the difference between the percentages for these particular groups. Here the biggest difference is between highly rural and urbanized. In large samples, a difference larger than 5% indicates a statistically significant difference, which is not the case here. In smaller samples, the differences have to be larger than 5% to be significant.
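The group comparison described above can be sketched as follows; the group labels echo the example, but the counts are invented, so the output is purely illustrative.

```python
# (n_missing, n_total) per group; hypothetical counts for illustration
groups = {
    "highly rural": (12, 200),
    "urbanized":    (18, 210),
}

def pct_missing(n_missing, n_total):
    """Missing-data percentage for one group."""
    return 100 * n_missing / n_total

rates = {g: pct_missing(m, n) for g, (m, n) in groups.items()}
# Rule of thumb from the lecture: in large samples a gap of more than
# 5 percentage points suggests MAR rather than MCAR.
gap = max(rates.values()) - min(rates.values())
print(rates, "MAR suspected:", gap > 5)
```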


▪ Overall test for MCAR: Little's MCAR test → it tests whether the missing-data patterns deviate from the patterns expected under complete randomness.
o H0: the observed missing-data pattern does not differ from a completely random pattern
o H1: the observed missing-data pattern differs from a completely random pattern
▪ p = .42, α = 0.05; p is not below the threshold of .05, so we do not reject H0 → we can assume that the data are missing completely at random.


4. Imputation methods
▪ MAR: consider the missing cases as a subset of the sample
▪ MCAR: a few ways:
o Listwise deletion: if a test subject has a missing score on one of the variables, he will be removed → costly, because you have to permanently delete a participant
o Pairwise deletion: suppose you were to calculate a correlation; then only respondents with a missing score on one of the variables involved are excluded from that calculation
o Mean substitution: if 1 of the 18 items is forgotten, you substitute the average of the total sample
o Regression techniques: for a missing score on one of the items, the score is estimated from a regression line based on the information in the other items.
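Two of the strategies in this list, listwise deletion and mean substitution, can be sketched in a few lines of Python (the course does this in SPSS). The tiny data matrix is invented; None marks a missing value.

```python
# Hypothetical data matrix: rows = respondents, columns = items
rows = [
    [4, 5, 3],
    [2, None, 4],
    [5, 4, None],
    [3, 3, 3],
]

def listwise_deletion(data):
    """Drop every respondent who has at least one missing score."""
    return [r for r in data if None not in r]

def mean_substitution(data):
    """Replace each missing score with the column mean of observed scores."""
    cols = list(zip(*data))
    means = [sum(v for v in c if v is not None) / sum(v is not None for v in c)
             for c in cols]
    return [[v if v is not None else means[j] for j, v in enumerate(r)]
            for r in data]

print(listwise_deletion(rows))   # only the two complete rows remain
```

Note the trade-off the lecture mentions: listwise deletion costs power (here half the sample disappears), while mean substitution keeps all cases but shrinks the variance of the imputed items.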



Lecture 2
Reliability analysis
A scale is a collection of items measuring a specific construct. A construct is a latent variable: the theoretical, 'true' score. An item is the operational, empirical concept as measured. Advantages of using more items for measuring a scale:
- Broader coverage of what is meant by the particular construct
- Increased reliability (less disturbance by random factors)
