Master BA: Change Management (RUG)
Summary Research & Skills MSc BA (2016/2017)
Exam/test grade: 9
This summary includes:
- Summary of the video lecture
- Summary of the lecture slides
- Summary of the following literature:
- General
Student projects. Chapter 2 from the book Aken, J.E. van, Berends, H., Bij, H. van der
(2012). Problem solving in organizations – A methodological handbook for business and
management students.
Quality criteria for research. Chapter 13 from the book Aken, J.E. van, Berends, H., Bij, H.
van der (2012). Problem solving in organizations – A methodological handbook for business
and management students.
- Literature review
Crossan, M.M., Apaydin, M. (2010). A multidimensional framework of organizational
innovation: A systematic review of the literature. Journal of Management Studies 47 (6):
1154-1191. (Example of the approach)
Song, M., Podoynitsyna, K., Bij, H. van der, Halman, J.I.M. (2008). Success factors in
new ventures: A meta-analysis. Journal of Product Innovation Management 25 (1): 7-27.
(Example of the approach)
- Theory testing
Bacharach, S.B. (1989). Organizational theories: Some criteria for evaluation. Academy of
Management Review 14 (4): 496-515. (Description of the approach)
Song, M., Im, S., Bij, H. van der, Song, L.Z. (2011). Does strategic planning enhance or
impede innovation and firm performance? Journal of Product Innovation Management 28 (4):
503-520. (Example of the approach)
- Theory development
Eisenhardt, K.M. (1989). Building theories from case study research. Academy of
Management Review 14 (4): 532-550. (Description of the approach)
Brown, S.L., Eisenhardt, K.M. (1997). The art of continuous change: Linking complexity
theory and time-paced evolution in relentlessly shifting organizations. Administrative Science
Quarterly 42 (1): 1-34. (Example of the approach)
Chapter 2. Student projects
Two main research paradigms to be used in student field work
- Explanatory paradigm: aims to produce descriptive and explanatory knowledge
o Process structure for doing fieldwork: Empirical cycle
Steps of empirical cycle:
Step 1: Observation - concerns phenomena in the real world and what is written about them in the academic literature
Step 2: Induction - possible explanations for the issue are developed, aided by literature (theory-developing step)
Step 3: Deduction - ideas from the induction step are transformed into hypotheses (hypothesis-generating step)
Step 4: Testing - hypotheses are empirically tested
Step 5: Evaluation - outcomes of the empirical test are examined and interpreted
o Type of problem: generic (academic) and specific (business) knowledge
problems
o Two knowledge generating processes to develop descriptive or explanatory
generic theory (part of empirical cycle):
Theory Development (observation, induction, ends at beginning of
deduction)
Phase 1: Business phenomenon – not explained in academic
literature, generally recognized in companies (so not specific)
Phase 2: Observation of phenomenon in case studies
Phase 3: Develop explanations by comparing findings with existing
theories
Phase 4: Come to propositions (= outcome Theory Development)
Theory Testing (follow-up to the propositions from theory development)
(deduction, testing, evaluation)
Phase 1: driver is a business phenomenon – no conclusive evidence
on explanations in academic literature (gap that needs to be partly
closed), phenomenon is faced by many companies (so not specific)
Phase 2: identification of important variables, generation of
conceptual model and hypotheses
Phase 3: large-scale data collection and statistical data analysis
Phase 4: compare results to hypotheses, draw theoretical and practical
implications, suggest future research; outcome = a newly tested (part of a)
theory
- Design science paradigm: aims to produce solutions to field problems
o Process structure for doing fieldwork: Problem-solving cycle
o Steps of problem-solving cycle:
Driven by a business problem of a company: a mess of interrelated problems
Step 1: Problem definition - identify the problem mess and structure it
Step 2: Analysis and diagnosis - the problem and its context are analyzed for causes of the problem
Step 3: Solution design - a solution is designed to tackle the causes
Step 4: Intervention - the solution is implemented
Step 5: Evaluation - the effects of the implemented solution are assessed
o Type of problem: performance problem (business)
o Academic problem solving
Focus is on both a specific and a generic business problem of a company
Outcome: specific solution to specific business problem and generic design
proposition for solving the type of business problem
Phases/steps:
Phase 1: Business phenomenon – solution to type of business
problem not adequately addressed in academic literature (gap=
‘how’ question)
Phase 2: selection of company’s business problem (linked to
business performance)
Phase 3: analysis and diagnosis, data collection and analysis,
academic literature
Phase 4: solution design, implementation, evaluation
Phase 5: academic reflection, formulation of generic design
proposition, future research.
Mixing the three consistent research processes within one master graduation project is dangerous and will, in general, not lead to consistency. Mixing theory development and theory testing on the same case information is also wrong: it is not valid to develop and test theory on the basis of the same data, and a test on the basis of a few cases is not the large-scale test that is meant in the theory-testing process.
Video lecture:
4 research approaches: Academic problem solving, Theory development, Theory testing, and
Literature review. The first three are empirical approaches.
Chapter 13. Quality criteria for research
Four research-oriented quality criteria for reaching inter-subjective agreement: consensus
between the actors who deal with a research problem.
- Controllability
o Prerequisite for the evaluation of validity and reliability
o Researchers have to reveal how they executed a study
o Controllability requires that results are presented as precisely as possible.
o The detailed description of a study enables others to replicate the study and
check whether they get the same outcomes.
- Reliability
o The results are reliable when they are independent of the particular characteristics
of that study and can therefore be replicated in other studies.
Should be independent of the researcher who conducted the study
Hot bias: refers to the influence of interests, motivations and
emotions of researchers on their results
Cold bias: refers to subjective influences of the researcher that have
a cognitive origin and involve no personal motivation
Inter-rater reliability, standardization and use of tools
Should be independent of the respondents
People within a company can have widely diverging opinions
Respondents are selected (the sample) from the larger group (the
population)
Research results become unreliable when the selection of
respondents leads to results that differ substantially from the
results that would be obtained with other respondents.
How to counter this:
o All of the roles, departments, and groups that are involved
in the problem area need to be represented among the
respondents
o When the group is large: select respondents at random (a random
sample)
o Increase number of respondents
Should be independent of the measuring instrument employed
Outcomes should be replicable with other instruments
Triangulation: using multiple research instruments
Statistical correlation of items represents the degree to which
different items replicate each other's findings; it is measured by
Cronbach's alpha: when alpha is high, the measurement instrument is
reliable (a computational sketch follows this list)
Should be independent of the specific situation in which the study was
carried out
Unreliable when the particular situation leads to results that
cannot be replicated in other circumstances
Counter this by carrying out the study at different moments in time.
- Validity
o A research result is valid when it is justified by the way it is generated
o Validity presupposes reliability
o Reliability does not presuppose validity
o A perfectly reliable measure is not necessarily valid
o Three types of validity:
Construct validity
The extent to which a measuring instrument measures what it is
intended to measure
Refers to the quality of the operationalization of a concept
o The concept should be covered completely
o The measurement should have no components that do not fit
the meaning of the concept
Internal validity
Concerns conclusions about the relationship between phenomena;
the proposed relationship should be adequate
Results of a study are internally valid when conclusions about
relationships are justified and complete
There must be no plausible competing explanations: alternative
explanations must be ruled out
Correlation is a necessary, but not a sufficient condition for
causality
External validity
Refers to the generalizability or transfer of research results and
conclusions to other people, organizations, countries, and
situations
Can be increased by increasing the number of objects studied
- Recognizability
o Less prominent in the traditional methodological literature, but very important in
practice-oriented research: the recognizability of research results by the members
of the client organization
o Refers to the degree to which the principal, the problem owner and other
organization members, recognize research results in problem-solving projects
o Results should sound reasonable, plausible or at least possible to them
o Increases as a result of a member check and when organization members are involved in the study
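The note on Cronbach's alpha above can be made concrete with a small computational sketch. This is an illustrative example, not taken from the book; the function name, example scores, and the use of NumPy are assumptions:

```python
# Illustrative sketch (not from the book): Cronbach's alpha for a k-item instrument,
# alpha = k/(k-1) * (1 - sum of item variances / variance of the total score).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: one row per respondent, one column per item."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each separate item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Example: 5 respondents answering 3 related survey items on a 1-5 scale.
answers = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]])
print(round(cronbach_alpha(answers), 2))  # a value close to 1 suggests a reliable instrument
```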
Video lecture:
Quality criteria for research:
- Controllability (reveal how you did your study): a prerequisite for the other criteria
- Reliability (can other people replicate your study and can they come to the same results).
Basic errors that should be avoided:
o Researcher bias (the study result depends on the particular researchers who carried
out the study)
o Instrument bias (the results of the study depend on the particular instrument that is
used)
o Respondents bias
o Circumstances bias
- Validity (justify your results by the way they have been generated)
o Construct validity (has to do with the measurement of constructs in your study: did
you really measure what you intended to measure)
o Internal validity (has to do with the relationship between phenomena: can you
really prove that B is caused by A)
o External validity (has to do with the generalization of your study results beyond
the study population)
Qualitative Literature Review = Systematic Literature Review (Crossan &
Apaydin 2010)
- Systematic literature review in the area of organizational innovation
- A comprehensive multi-dimensional framework of organizational innovation linking
(innovation) leadership, innovation as a process and innovation as an outcome.
- Was done because reviews and meta-analyses are rare and narrowly focused, either
around the level of analysis (individual, group, firm, industry, consumer group, region,
and nation) or the type of innovation (product, process, and business model).
o An impediment to the systematic analysis was the loose application of the term
innovation, which is often employed as a substitute for creativity, knowledge, or
change.
- Their claim is that this literature field is very scattered
o Studies on organizational innovation are concentrated on different levels of the
organization (firm level studies, group level studies, individual level studies)
o Studies on organizational innovation are concentrated on types of innovation
(process innovation, product innovation)
o Claim: These different concentration points/sets of literature have not been linked
yet.
- A systematic literature review is generally carried out in a less mature, scattered literature field
- A systematic review uses an explicit algorithm to perform a search and critical appraisal
of the literature. It removes the subjectivity of data collection by using a predefined
selection algorithm (a hypothetical sketch of such a selection rule follows at the end of this section).
- The initial step of the project was a review and categorization of the findings. Then they
synthesized the revealed categories into a comprehensive multi-dimensional framework of
organizational innovation consisting of three sequential components: Innovation
leadership; innovation as a process; innovation as an outcome.
- The focus of the paper is on organizational innovation (firm, group, and individual level
of analysis) – it is driven by an intention to be practical in orientation, focusing on
elements that are arguably within the control of the firm.
o By targeting the firm level, they can provide a practical basis on which managers can
build structures and systems that would enable innovation within a firm.
o Isolate the leaders’ influence from organizational level factors.
Although leadership for innovation has been a subject of research, the
mechanisms for its connection with the rest of the innovation process have not
been explicit.
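To make the idea of a predefined selection algorithm concrete, here is a minimal sketch of how such an explicit inclusion rule could be coded. The criteria shown (keywords, journal list, time window) and all names and example records are assumptions for illustration, not Crossan and Apaydin's actual search protocol:

```python
# Illustrative sketch of a predefined, explicit selection rule for a systematic review.
# The criteria below are hypothetical examples, not the authors' actual protocol.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    journal: str
    year: int
    abstract: str

KEYWORDS = ("innovation",)                      # must appear in title or abstract
JOURNALS = {"Journal of Management Studies",    # pre-agreed journal list
            "Academy of Management Review"}
YEARS = range(1981, 2009)                       # fixed publication window

def include(article: Article) -> bool:
    """Apply the same explicit rule to every candidate article."""
    text = (article.title + " " + article.abstract).lower()
    keyword_hit = any(keyword in text for keyword in KEYWORDS)
    return keyword_hit and article.journal in JOURNALS and article.year in YEARS

candidates = [
    Article("Product innovation and firm growth", "Journal of Management Studies",
            2005, "A firm-level study of innovation outcomes."),
    Article("Leadership styles in family firms", "Some Other Journal",
            2005, "Not about the topic of interest."),
]
selected = [a for a in candidates if include(a)]
print([a.title for a in selected])  # only the first article passes the rule
```

Because the rule is fixed before screening starts, any reviewer applying it to the same set of candidate articles should arrive at the same selection, which is what removes the subjectivity from data collection.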