Summary empirical research
Week 0: lesson 2: update on statistics
Typical setup of data in a paper:
Table 1: sample selection, e.g. which years or which firms are dropped because of missing data, ending with the final sample
Table 2: univariate analysis: look at one variable at a time (descriptive statistics)
Table 3: bivariate analysis: look at two variables together
Table 4: multivariate analysis: full model where you look at multiple variables in one analysis
Table 5, 6, 7, 8, ...: e.g. robustness checks etc.
Descriptive statistics:
- summarize the dataset: give an idea of the variables in the data set and their distribution
- measures of central tendency (mean and median) and spread (std. dev., min., Q1, Q3, max.)
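A minimal sketch of how such a descriptive table could be produced (hypothetical data and column names, Python/pandas):

```python
import pandas as pd

# Hypothetical sample: audit fees and a Big 4 dummy (illustration only)
df = pd.DataFrame({
    "audit_fees": [120, 340, 95, 410, 260, 180],
    "big4":       [1,   1,   0,  1,   0,   1],
})

# Central tendency and spread: count, mean, std. dev., min, Q1, median (50%), Q3, max
print(df["audit_fees"].describe())
print("median:", df["audit_fees"].median())
```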
Scale of variables:
- Non-metric
o Nominal: typically coded 0 or 1, e.g. a dummy that is 0 if Big 4 and 1 if not Big 4
o Ordinal: gives some ordering, but does not give the magnitude of the difference, e.g. 1 = qualified, 2 = unqualified, 3 = GV-paragraph
- Metric
o Interval/ratio: measures quantitative variables, e.g. audit fees or temperature
Bivariate analysis: comparing two variables together
T-test on the mean: e.g. look at the difference in accounting quality between BIG4 and NOTBIG4 clients; given the distributions of both groups, the less the overlap, the more significant the difference.
Pearson/Spearman correlation: Spearman is reported below the diagonal and Pearson above the diagonal; always base the choice on the variable with the lowest level of measurement (so Spearman when one of the two is ordinal).
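A hedged sketch of these bivariate tests with SciPy; the group split and the quality proxy are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical accounting-quality proxy for Big 4 vs. non-Big 4 clients
quality_big4 = rng.normal(0.5, 1.0, 200)
quality_notbig4 = rng.normal(0.3, 1.0, 150)

# T-test on the mean difference: the less the distributions overlap, the more significant
t_stat, p_val = stats.ttest_ind(quality_big4, quality_notbig4, equal_var=False)
print("t =", t_stat, "p =", p_val)

# Pearson (metric) vs. Spearman (rank-based) correlation between two variables
x = rng.normal(size=300)
y = 0.4 * x + rng.normal(size=300)
print(stats.pearsonr(x, y))    # typically reported above the diagonal
print(stats.spearmanr(x, y))   # below the diagonal; use when one variable is ordinal
```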
Multivariate analysis:
Linear regression: the typically used method; a lower p-value means that the overall errors are smaller, i.e. the observations lie closer to the line fitted through the dots.
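A minimal OLS sketch with statsmodels, using made-up regressors, to show where the coefficients and p-values in such a table come from:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)   # e.g. firm size (hypothetical)
x2 = rng.normal(size=n)   # e.g. leverage (hypothetical)
y = 1.0 + 0.5 * x1 - 0.3 * x2 + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x1, x2]))
print(sm.OLS(y, X).fit().summary())   # coefficients, p-values, R-squared
```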
Week 1: lecture 1: Ball and Brown 1968
Value relevance and aggregate beliefs:
GAAP takes a usefulness perspective for users, but in the early days producing accounting information was seen as costly while the benefits were unproven.
RQ: are accounting numbers useful for investors? The paper looks at the content (is the information relevant for the capital market) and the timing (backward looking, after the fact) of that usefulness.
Methodology: measure the relationship between stock prices and accounting information. The measure should capture firm-specific news about accounting information (unexpected earnings) and the stock market reaction to this news (unexpected returns), e.g. the earnings numbers of Apple.
Unexpected earnings = actual earnings - expected earnings
- Naïve model: expected earnings this year = actual earnings last year; naïve because last year's profit is not a good proxy for this year's profit (e.g. corona)
- Example: Apple has an EPS of 3 and last year an EPS of 2, so UE = 1 EPS
- Regression model: regress the change in earnings on the change in the aggregate earnings in the market (M = market average NI); e.g. if Apple outperforms the market, the unexpected earnings are positive
Graph: the difference (error) from the fitted line is the unexpected earnings; good news when the dot is above the line (because the firm outperformed the market)
The two measures:
- Naïve model: last year's earnings proxy for expected earnings
- Regression model: regress the change in earnings on the change in market earnings (the error is the good or bad news)
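A sketch of the two expectation models, with hypothetical numbers (the Apple figures are the ones from the notes; everything else is made up):

```python
import numpy as np
import statsmodels.api as sm

# Naive model: expected EPS this year = actual EPS last year
eps_this_year, eps_last_year = 3.0, 2.0
print("naive UE:", eps_this_year - eps_last_year)   # Apple example: UE = 1 EPS

# Regression model: regress the firm's earnings change on the market's earnings change;
# the residual (distance from the line) is the unexpected good/bad news
rng = np.random.default_rng(2)
d_market_ni = rng.normal(size=30)                          # change in average market NI
d_firm_ni = 0.8 * d_market_ni + rng.normal(scale=0.5, size=30)
fit = sm.OLS(d_firm_ni, sm.add_constant(d_market_ni)).fit()
ue = fit.resid                                             # > 0: above the line, good news
print("unexpected earnings (last obs.):", ue[-1])
```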
Finance literature:
1. Market-wide variations in stock returns are triggered by the release of information that concerns all firms, e.g. a new jobs number in the US
2. For the individual firm, the content of firm-specific information should be assessed relative to changes in the rate of return on the firm's stock net of market-wide effects
The error captures the impact of new information about firm j on the return from holding stock j (realized - expected return).
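The same residual logic applies on the return side; a hedged sketch of a market-model regression whose residual is the firm-specific (abnormal) return, using made-up return series:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
r_market = rng.normal(0.0, 0.02, 250)                       # market-wide return (hypothetical)
r_firm = 0.001 + 1.1 * r_market + rng.normal(0, 0.01, 250)  # firm j's return

# Residual = realized return minus the return expected given market-wide movements;
# it captures the impact of firm-specific news on the return from holding stock j
fit = sm.OLS(r_firm, sm.add_constant(r_market)).fit()
abnormal_return = fit.resid
print(abnormal_return[:5])
```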
Graph lines: averages per unexpected-earnings group; good news means that earnings are higher than last year (cf. the Apple example under the naïve model).
If earnings were completely timely and 100% relevant, the graph on the right would be the result.
Conclusion of the paper:
- Only 10% of the income information has not been anticipated by the reporting month (90% is anticipated, 10% is unexpected)
- Low timeliness of earnings information; otherwise the graph on the right would be the actual one
- Accounting captures relevant information because it lowers the unexpected earnings; the audit adds value for investors
Value relevance: earnings and stock prices are correlated throughout the year
Information content: study the earnings announcement (EA) of earnings
Drift: investors that are late in processing the information that came out during the EA
Book value relevance increases while earnings relevance decreases over time
Week 1: lecture 2: Landsman and Maydew 2002
Information content and individual beliefs
RQ: how has the information content of earnings changed over the past three decades? Daily unit of measurement (short-window study).
Hypotheses:
- Price-based: if a price moves, market participants have revised their beliefs; this increases the variability of price changes; looks at the whole market
- Volume-based: if an individual investor revises their beliefs, they trade; this increases the number of shares traded; looks at individual investors
Clustering is avoided by not only taking companies that have the same fiscal year end (their announcements can influence each other).
Beaver 1968: at the EA, return variability and the number of shares traded increase sharply; the dotted line is the normal amount.
Why a replication?
- The economy changed (more intangibles, e.g. in tech firms)
- The value relevance of earnings numbers declined over time
The paper looks at the volume of the announcement spike relative to x weeks before; the difference is the abnormal volume.
AVOL = abnormal volume; captures the whole change over time (individual-investor perspective)
TIME = change per year, e.g. 72, 73, 74, 75, 76, ...; shows that the relevance of earnings increased over time (contrary to what was expected)
CALDUM = equals 1 if the firm reports at calendar year end (12/31)
CALTIME = tests whether firms reporting at year end have a different time trend
AVAR = abnormal variance (the spike); an increasing trend means that the information content still increases over time (aggregate/market perspective)
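The notes list the variables but not the full specification; a hedged sketch of how such a trend regression could look, using the variable names above on a made-up firm-year panel (AVAR would be modeled the same way):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year observations
df = pd.DataFrame({
    "AVOL":   [1.2, 1.5, 1.1, 1.8, 2.0, 1.7],  # abnormal announcement-window volume
    "TIME":   [1,   2,   3,   1,   2,   3],    # year trend (e.g. 1972=1, 1973=2, ...)
    "CALDUM": [1,   1,   1,   0,   0,   0],    # 1 if the firm has a 12/31 year end
})
df["CALTIME"] = df["CALDUM"] * df["TIME"]       # different trend for year-end reporters?

# A positive TIME coefficient -> information content of earnings increases over time
fit = smf.ols("AVOL ~ TIME + CALDUM + CALTIME", data=df).fit()
print(fit.params)
```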
Conclusion:
- Event study: because we only look at the EA window
- The stock price equals the sum of expected future dividends; a change in expected future dividends causes a change in the stock price
- When new information arrives, these expectations change
Week 1: lecture 3: Dechow, Sloan, Sweeney 1995
RQ: evaluate accrual-based models for detecting earnings management
- 20% of firms use earnings management to steer their earnings
- 5 different models are used in the paper
- Discretionary accruals (DA) = total accruals (TA) - non-discretionary accruals (NDA)
- TA = net income (NI) - cash flow from operations (CFO), scaled by lagged assets
Healy model: NDA = average TA over the estimation period (TA summed over the period divided by the period length, e.g. T = 9 years)
DeAngelo model: NDA = last year's TA; similar to the naïve model, where you ignore everything that happens within the year
Jones model: NDA is modeled with the change in revenues and the level of PPE, which proxy for the size of the firm's operations
Modified Jones: adjusts for the change in receivables, so that only changes in revenue that are not accompanied by a change in receivables (e.g. through channel stuffing) are treated as non-discretionary
Industry model: NDA is the median of total accruals in the industry; the median of the first 10 firms is the proxy for the 11th firm
DA is measured with noise (the error term). Controls are difficult to identify, so they are allocated to the error as well; mu = total error.
A regression is estimated for each firm, which results in firm-specific beta coefficients; afterwards all betas are put together and the paper tests whether they are significantly different from zero. (See the sketch below.)
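A sketch of a per-firm Jones-model regression on made-up data; the scaling by lagged assets and the variable set follow the description above, and the residual is the DA estimate:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_years = 12                                        # hypothetical firm-specific estimation period
assets_lag = rng.uniform(500, 1500, n_years)        # lagged total assets
d_rev = rng.normal(50, 20, n_years)                 # change in revenues
ppe = rng.uniform(200, 600, n_years)                # (gross) PPE
ta = rng.normal(0.02, 0.05, n_years) * assets_lag   # total accruals = NI - CFO

# Jones: TA/A_{t-1} = b1*(1/A_{t-1}) + b2*(dREV/A_{t-1}) + b3*(PPE/A_{t-1}) + e
y = ta / assets_lag
X = np.column_stack([1 / assets_lag, d_rev / assets_lag, ppe / assets_lag])
fit = sm.OLS(y, X).fit()

nda = fit.fittedvalues      # non-discretionary accruals
da = fit.resid              # discretionary accruals
# Modified Jones: use (dREV - dREC)/A_{t-1} instead of dREV/A_{t-1} when computing NDA
print(fit.params)
```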
Conclusion:
- The tests have low power: even in a 100% EM sample, the earnings management is only picked up a few times
- More powerful models are needed to test for EM
- Measurement error: in the Jones model, aggressive revenue recognition is classified as NDA, while it should be DA
- More controls are needed, e.g. for firm performance
Week 1: lecture 4: Dechow and Dichev 2002
RQ: the role of accrual estimation errors
Accruals are estimates of future cash flows: receivables will be converted into cash later on.
The model regresses the change in working capital on past, current, and future cash flow from operations (plus an error term that measures the imperfection of accruals).
Earnings = cash flow (CF) + accruals
Example: cash is received in the period before, but the revenue is recognized in the current period.
Going from a cash flow to an accrual that only shifts an already realized cash flow involves no error term, because it is not an estimate but a fact.
Accruals:
- Accruals are temporary adjustments that delay or anticipate the recognition of realized cash flows, plus an estimation error term
- Accruals are negatively (positively) related to current (past and future) cash flows
- The error term captures the extent to which accruals map into cash flow realizations
The standard deviation of the error term measures the estimation quality of the mapping from accruals to cash flows: a low error means a better mapping (see the sketch below).
So, more losses mean a higher error, which means lower quality.
Persistence: high earnings last year result in high earnings this year (persistence). Higher persistence goes together with a lower error, so better quality.
The mapping of accruals into cash flows is imperfect because of: an uncertain economic environment, (lack of) management expertise, or manipulation.
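A hedged sketch of the mapping regression described above (change in working capital on past, current, and future CFO), on a synthetic series; the standard deviation of the residuals is the accrual-quality measure:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
T = 15                                   # hypothetical time series for one firm
cfo = rng.normal(100, 20, T)
# Synthetic working-capital changes: negative on current CFO, positive on adjacent periods
d_wc = -0.5 * cfo + 0.3 * np.roll(cfo, 1) + 0.2 * np.roll(cfo, -1) + rng.normal(0, 5, T)

# dWC_t = b0 + b1*CFO_{t-1} + b2*CFO_t + b3*CFO_{t+1} + e_t
y = d_wc[1:-1]
X = sm.add_constant(np.column_stack([cfo[:-2], cfo[1:-1], cfo[2:]]))
fit = sm.OLS(y, X).fit()

# Accrual quality: std. dev. of the residuals (lower = better mapping = higher quality)
print("std of residuals:", fit.resid.std())
```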
Week 1: Lecture 5: Burgstahler and Dichev 1997
RQ: earnings management to avoid earnings decreases and losses
Threshold management: managing earnings to achieve goals (thresholds)
Psychology: people tend to perceive continuous data in discrete form, e.g. in blocks
Prospect theory: individuals perceive losses and gains differently; what matters is the number relative to a reference point (e.g. 0). E.g. if analysts predict an EPS of 20, that becomes the reference point: an actual EPS of 19 is perceived as worse than an EPS of 21 is perceived as good. The effect is stronger for small deviations (e.g. 19 vs. 21 compared to 15 vs. 25).
- It emphasizes growth and a consistent pattern of earnings increases
- Concentration of earnings just above zero (more observations than expected); the paper shows why firms avoid reporting earnings decreases and losses
Bell curve: a normal distribution (how the distribution should normally look); measured through eye-ball statistics, formal statistical tests, and smoothness
Graph: if you engage in earnings management, you will do it again next year, so the threshold becomes clearer every year
Looking at the profit side, the number of observations increases, because a lot of firms continuously make a profit
How is the threshold managed?
- Working capital accruals are cheap to manipulate
- A higher level of WCA (e.g. receivables, current assets ...) results in a larger discontinuity at the threshold
- Graph: the higher the current assets, the larger the jump around the threshold; more firms push a small loss to a small profit (see the sketch below)
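A hedged sketch of the discontinuity check: build a histogram of scaled earnings and compare the bin just above zero with the average of its two neighbors (all numbers are simulated):

```python
import numpy as np

rng = np.random.default_rng(6)
earnings = rng.normal(0.02, 0.05, 5000)          # hypothetical scaled earnings
small_loss = (earnings > -0.005) & (earnings < 0)
earnings[small_loss] += 0.006                    # mimic pushing small losses to small profits

counts, edges = np.histogram(earnings, bins=np.arange(-0.15, 0.15, 0.005))
i = int(np.argmin(np.abs(edges[:-1])))           # bin whose left edge is (approximately) zero

# Compare the first bin above zero with the average of its two neighbors;
# a large positive gap suggests a discontinuity at the zero threshold
expected = (counts[i - 1] + counts[i + 1]) / 2
print("actual:", counts[i], "expected from neighbors:", expected)
```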
Week 2: lecture 1: Basu 1997
Market-based conservatism: anticipate no profits but all losses. Basu: earnings reflect bad news more quickly than good news.
E.g. the graph shows that good news is spread over multiple years while bad news is taken in one year; this shows that accounting is conservative.
Findings:
Earnings are timelier for bad news: the reaction of earnings to bad news is stronger than the reaction to good news.
In the regression, a dummy variable takes the value 1 for bad news (negative returns).
Timeliness of earnings vs. cash flows:
Conservatism should not show up in cash flows, because those reflect real decisions.
Graph: (beta1 - beta0)/beta0 is much greater for earnings than for cash flows. (See the sketch below.)
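A hedged sketch of the piecewise (reverse) regression behind these findings, with simulated data: earnings on returns, a bad-news dummy, and their interaction; a positive interaction coefficient means bad news is reflected more strongly, i.e. conservatism:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 1000
ret = rng.normal(0.05, 0.2, n)                 # annual stock return as the news proxy
bad = (ret < 0).astype(int)                    # dummy = 1 for bad news (negative return)
# Simulate earnings that react more strongly to bad news than to good news
earn = 0.02 + 0.05 * ret + 0.25 * bad * ret + rng.normal(0, 0.02, n)

df = pd.DataFrame({"earn": earn, "ret": ret, "bad": bad})
# earn = a0 + a1*bad + b0*ret + b1*bad*ret + e ;  b1 > 0 indicates conservatism
fit = smf.ols("earn ~ bad + ret + bad:ret", data=df).fit()
print(fit.params)
```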
Unexpected earnings increases are persistent.
If there is a negative earnings change (a dip in earnings), it is offset by an increase in the next year.
The ERC is lower for bad news.
Good earnings news is more persistent than bad earnings news.
Conservatism increases when auditors' legal liability is higher.
Week 2: Lecture 2: Ball and Shivakumar 2005
Accounting-based conservatism
RQ: is loss recognition as timely for private firms as for public firms? Private firms can resolve information demands through insider access, while public firms communicate more via the financial statements.
- Regulatory conditions are held constant for public and private firms, while market conditions vary, to test for the difference
A similar specification is estimated with accruals.
Alpha 3 (the interaction term) measures whether losses are recognized more timely than gains; if it is negative, a negative earnings change in the previous year is offset in this year (a negative coefficient times a negative change gives a positive predicted change). A sketch of the specification follows below.
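The notes refer to alpha 3 without writing out the equation; a sketch of the piecewise-linear earnings-change specification this appears to describe (coefficient labels follow the notes):

```latex
\Delta NI_{t} = \alpha_0 + \alpha_1 D_{t-1} + \alpha_2 \Delta NI_{t-1}
              + \alpha_3 \, D_{t-1} \times \Delta NI_{t-1} + \varepsilon_t,
\qquad D_{t-1} = 1 \text{ if } \Delta NI_{t-1} < 0
```

A negative alpha 3 then means that negative earnings changes tend to reverse in the next year (timely loss recognition); the accrual version uses the same piecewise structure.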
Explanations of conservatism: listed firms are more risky, taxes, opportunistic managers, and audit firm size.
Private firms' reporting is of lower quality.
EXAM QUESTION: why is endogeneity important in controlling the results?
Week 2: Lecture 3: Barth, Landsman and Lang 2005
Voluntary IFRS adoption:
RQ: is the application of IAS associated with higher accounting quality? Not obvious: IAS may be of lower quality than domestic standards.
- Flexibility in the standards allows more earnings management
Hypothesis: firms that apply IFRS exhibit higher accounting quality:
- less earnings management: less smoothing and less loss avoidance
- more timely loss recognition: a higher frequency of large losses
- higher value relevance: a higher association between stock prices and earnings/book values
Ideal research design: randomly assign firms to apply IAS (not feasible in practice).
Change in financial reporting quality = change in standards + change in incentives (I) + change in economic environment (E); the paper tries to mitigate the effects of I and E:
- I: mitigate the effects by controlling for factors associated with firms' voluntary accounting choices
- E: use a matched sample (e.g. on country or size) and controls for the economic environment
The error term can be seen as the proxy for accounting quality.
Test 1: do firms with IAS have better quality than firms without IAS?
Test 2: did the firms with better quality already have that better quality before adopting?
Test 3: do the firms that adopted IAS actually show a change in their quality?
Sample: highly concentrated in certain industries, so possibly high internal but low external validity.
Smoothing: less smooth numbers are an indication of higher accounting quality; a higher (less negative) correlation between accruals and cash flows means less smoothing.
Significantly more variability in net income for IAS firms indicates less smoothing.
A significantly lower correlation for NIAS firms indicates more smoothing for NIAS firms (i.e. less smoothing under IAS).
A significant positive coefficient for LNEG indicates more conservative reporting under IFRS.
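A minimal sketch of the two smoothing metrics mentioned above, computed on a made-up net income and cash flow series:

```python
import numpy as np

rng = np.random.default_rng(8)
ni = rng.normal(100, 15, 20)              # hypothetical net income series
cfo = ni + rng.normal(0, 10, 20)          # cash flow from operations
accruals = ni - cfo

# Metric 1: variability of the change in net income (higher = less smoothing)
print("var(dNI):", np.var(np.diff(ni)))

# Metric 2: correlation between accruals and cash flows
# (less negative / higher correlation = less smoothing)
print("corr(accruals, CFO):", np.corrcoef(accruals, cfo)[0, 1])
```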
Results:
- Post-adoption: IAS firms have higher quality than NIAS firms
- Pre-adoption: except for LNEG, no significant differences in accounting quality between IAS and NIAS firms
- Post vs. pre: for IAS firms, half of the differences are significant
Week 2: Lecture 4: Christensen 2013
Mandatory IFRS adoption:
RQ: what are the capital market benefits of IFRS? Difficulties:
- Discretion in the accounting standards and reporting incentives have not changed
- IFRS adoption is clustered in time and bundled with other changes (enforcement)
Hypothesis: firms that apply IFRS exhibit higher accounting quality:
- less earnings management: less smoothing and less loss avoidance
- more timely loss recognition: a higher frequency of large losses
- higher value relevance: a higher association between stock prices and earnings/book values
- If standards improve without the enforcement infrastructure, adopting IFRS is just a label; enforcement should therefore also be a value-adding part
Liquidity is a market outcome: higher liquidity means a better capital market position.
Idea: IFRS adoption leads to higher liquidity (a negative coefficient, consistent with liquidity being measured via an illiquidity proxy).
Result: the mandatory switch to IFRS has capital market benefits, with a stronger effect for countries where the switch is bundled with enforcement changes.
Week 2: Lecture 5: Minnis 2011
RQ: what are the economic effects of audit verification? Verification by a third party is a fundamental aspect of financial reporting (a credible signal); most research treats the audit as exogenous, while this paper looks at voluntary (endogenous) audits.
Hypothesis 1: if you are audited, you have a lower cost of debt (lower interest costs)
Hypothesis 2: if you are audited, ratios (e.g. solvency) are more strongly linked to the cost of debt
Hypothesis 3: if you are audited, income is a better predictor of future cash flows; a voluntary audit therefore improves accounting quality
Table 4: uses the percentage of audits in a state that is tied to exogenous factors
Table 6: shows that there are also endogenous factors that influence the audit choice
- State audit residual: the portion of the choice to be audited that is not related to economic or institutional variation at the state level
Results: audits reduce the cost of debt and enhance contracting (through better predictability of future cash flows and a stronger relation between accounting numbers and interest rates)
Week 2: lecture 6: Francis and Yu 2009
RQ: do larger audit offices provide higher-quality audits? Possible reasons: reduced information asymmetry, more in-house experience, and more colleagues to consult.
Week 3: Lecture 1: introduction to experimental research
Experiment: usually a short computer-based scenario in which different groups experience a small difference in their scenario; abstract setting (no accounting hints).
Experimental studies: strip away reality; isolate specific factors; randomization; can measure anything; but does it translate to reality?
Archival studies: hold onto reality; many things happen at once; quasi-randomization; measurement is limited to the available data; but real.
Internal validity: can we establish causality of a specific effect? High for lab experiments.
External validity: do the findings extrapolate to settings we care about? High for archival studies.
Field experiment: randomization in a natural setting (a mix of both).
- Downsides: expensive, or low external validity
- A missing real-world factor could change how the real world works compared to the experiment, so the external validity could become a lot lower
Week 3: lecture 2: Dietvorst, Simmons and Massey 2015
AI has not taken over, but AI is better at predicting when the situation is:
- Static: when history is likely to repeat (difficult in the economy); it looks for patterns
- NOT novel: there actually is a normal case to compare against
- Measurable: e.g. it is difficult to measure what makes a good leader
Humans work differently and have reasoning flexibility:
- Novel situations: knowing how the world works
- Figuring out why: e.g. being Sherlock Holmes