Week 1: Introduction
Marketing Thinking and Doing
marketers and their academic counterparts. Consider weaving as an analogy. Individual fibers
have value separately; when combined, they can produce useful materials or beautiful
tapestries. At present, though, the two strands move separately. Still, the process of creating a
better weave is already underway, so the two parts (academic and practice) are moving towards
each other.
Why weave?
Academic reasons: Better ideas, Better data, New tools, More effective teaching
Practitioner reasons: New lenses, across-industry knowledge, going a mile deep, new tools,
benefit of the cold eye
What can both do? -> Find common problems, share a workbench, celebrate different types of
theory
What can academics do? -> Get out into the field, build a classroom lab, translate marketing
reality into ideas, create value beyond the paper, engage in healthy confrontation, embrace
research diversity
What can practitioners do? -> invite research experts, share your data, identify your theory
The marketing discipline: Inspire, connect, reward
Digital marketing: A framework, review and research agenda
A Thematic Exploration of Digital, Social Media, and Mobile Marketing: Research
Evolution from 2000 to 2015 and an Agenda for Future Inquiry
Digital, Social Media and Mobile (DSMM): (1) DSMM as a facilitator of individual
expression, (2) DSMM as a decision support tool, and (3) DSMM as a market intelligence
source
Week 2: Marketing Meets Data
With Decision-Driven Data Analytics
The authors discuss the power of what they coin "decision-driven analytics" and compare it to
a more traditional data-science approach, "data-driven analytics". They make many good
points, but I find the need for a new term a little too much. In the lecture, I'll use "Gold
Standard" data analytics to mean "decision-driven analytics", because in my mind what they
are advocating is closer to best practices in data-driven decision making than to a whole
paradigm shift.
Marketing and Data Science: Together the Future is Ours.
Pradeep and his co-authors (all of whom are at the "top" of the field in marketing analytics)
discuss the evolution of marketing into a data-driven field. They argue quite hard for
synergies between marketing as a science and computer science / data science. It's a good
article, and it highlights the transition that marketing has been going through over the past 20
or so years. Their push for computer/data science leans a little too far towards problems
of descriptive and predictive analytics for my taste (see for example Box 1) as opposed to a
causal toolkit, but their overarching point is full of merit and is the current forefront of the
field.
Optimizing Digital Marketing with Data Science.
This piece is from a data scientist working in marketing and business analytics. It's included to
give you a sense of how a data scientist (as opposed to a more traditional marketer) thinks
about different domains in the digital marketing space.
Week 3: Regression
Causal Inference: Econometric Models vs. A/B Testing.
This article talks about (i) the difference between observational and experimental data, and
(ii) estimating causal effects with and without experimental data. It's a good summary of
things you should know (or want to know) from a toolkit perspective. When reading, skip the
section(s) on Instrumental Variables because we won't be needing that for this class.
“Unfortunately, a raw correlation between X and Y alone is NOT enough to help us establish
the causal relationships. The complicating factor here is a set of other features, called
Confounding Variables, that affect both X and Y. For example, factors, such as visitor
geolocation, gender, age and interest would affect both the use of the new feature and the
outcome of sales revenue. Therefore, we need to isolate the effect of the new web page design
(X) on the sales revenue (Y) while controlling for these confounding variables.”
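A minimal simulation can make the confounding point concrete. This sketch (with made-up numbers, not from the article) generates a hypothetical confounder Z that drives both X and Y, then compares a naive regression of Y on X with one that controls for Z:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder Z (think: visitor age) affects both
# feature use X and sales revenue Y.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)            # X depends on Z
y = 1.0 * x + 2.0 * z + rng.normal(size=n)  # true effect of X on Y is 1.0

# Naive regression of Y on X alone: biased upward by the confounder.
X_naive = np.column_stack([np.ones(n), x])
beta_naive = np.linalg.lstsq(X_naive, y, rcond=None)[0]

# Regression that controls for Z recovers (roughly) the true effect.
X_ctrl = np.column_stack([np.ones(n), x, z])
beta_ctrl = np.linalg.lstsq(X_ctrl, y, rcond=None)[0]

print(beta_naive[1])  # well above the true 1.0
print(beta_ctrl[1])   # close to 1.0
```

The naive slope absorbs part of Z's effect on Y (here it lands near 2 rather than 1), which is exactly why a raw correlation between X and Y is not a causal estimate.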
In an observational study, we observe and collect the actual data (e.g., X and Y) relevant in
the study without randomly imposing any kind of treatment or restriction on a group.
In an experimental study, we randomly impose treatment to a group, while the other group
doesn’t receive the treatment, so that we can investigate the causal relationship between the
treatment and the outcome variable. Randomization design and intervention make
experimental studies different from observational studies.
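The randomization point can be sketched the same way. In this hypothetical simulation (my own illustration, not from the reading), treatment is assigned by coin flip, so it is independent of the confounder Z by construction, and a simple difference in means already estimates the causal effect:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Same confounder Z as before, but treatment is now randomized,
# so it is independent of Z by design.
z = rng.normal(size=n)
treat = rng.integers(0, 2, size=n)              # random assignment
y = 1.0 * treat + 2.0 * z + rng.normal(size=n)  # true treatment effect = 1.0

# With randomization, a plain difference in means is a valid
# causal estimate; no confounder adjustment is required.
ate = y[treat == 1].mean() - y[treat == 0].mean()
print(ate)  # approximately 1.0
```

This is the core advantage of experiments: randomization breaks the link between treatment and confounders, rather than relying on us to measure and control for them.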
Look through the slides again on regression and interpretation.
Be careful with causal language when interpreting coefficients; instead say: "is associated with".