Cognitive Modeling Class Notes
CM CLASS 1 - COMPUTATIONAL RATIONALITY
– making quick decisions --> calculate expected utility (how useful it is to take certain actions), determine which action has the best utility
– theoretical assumption --> the longer you compute, the closer you get to the optimal decision (classical homo economicus)
– consider the cost of computation, factor it in and find a net value; plotted against computation time, the peak of this net value gives the best expected utility (see the sketch after this list)
– examples of these: autonomous cars, medical decision making
– Kahneman, 2011 - Thinking, Fast and Slow --> interesting reading (Wason selection task, gambler's fallacy --> fallacies in decision making)
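A minimal sketch of the net-value idea, with made-up curves (the diminishing-returns utility and the linear time cost are assumptions, not data): expected utility grows with deliberation time, a cost is subtracted, and the best stopping point is the peak of the difference.

```python
import numpy as np

# Toy net-value-of-computation curve (all numbers are hypothetical):
# more deliberation improves the expected utility of the decision, but
# deliberation has a cost, so the best stopping point is the peak of
# net value = expected utility - time cost.
t = np.linspace(0.0, 10.0, 1001)            # computation time (arbitrary units)
expected_utility = 1.0 - np.exp(-0.5 * t)   # assumed: diminishing returns of thinking
time_cost = 0.08 * t                        # assumed: linear cost per unit of time
net_value = expected_utility - time_cost

best_t = t[np.argmax(net_value)]
print(f"best deliberation time ~ {best_t:.2f}, net value ~ {net_value.max():.3f}")
```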
Computational Rationality:
– used to study why such fallacies in decision making happen, what
causes those fallacies, how they are implemented in the brain
(following marr's analysis)
– for weird sentences --> comp. rat. explains that, based on experience, we expect particular types of sentences; keeping track of all options takes time and memory (cost), so we stop tracking an option once there is significant evidence against it; this helps performance but can lead to errors
– useful for brain, mind and machines
– this is because all three form beliefs and plan actions in support of
maximizing expected utility (MEU)
– ideal MEU may be intractable --> rational algorithms might approximate it (see the sketch below)
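A minimal sketch of MEU and a cheap approximation (the belief, actions and utilities below are invented for illustration): the exact version sums over every state, while the sampled version trades accuracy for computation, which is the kind of rational approximation a bounded agent might use.

```python
import random

# Exact MEU vs. a sampling-based approximation (all values hypothetical).
belief = {"sunny": 0.7, "rainy": 0.3}                    # P(state)
utility = {                                              # U(action, state)
    "leave_umbrella": {"sunny": 1.0, "rainy": -2.0},
    "take_umbrella":  {"sunny": 0.5, "rainy":  0.8},
}

def exact_meu(belief, utility):
    # Full expectation over all states: expensive if the state space is huge.
    return max(utility, key=lambda a: sum(p * utility[a][s] for s, p in belief.items()))

def sampled_meu(belief, utility, n=20):
    # Approximate the expectation with a few samples: trades accuracy for time.
    states = random.choices(list(belief), weights=list(belief.values()), k=n)
    return max(utility, key=lambda a: sum(utility[a][s] for s in states) / n)

print(exact_meu(belief, utility), sampled_meu(belief, utility))
```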
Computational Rationality in Neuroscience:
– animals do learn optimal policies, e.g. for navigating a space or navigating in games --> based on forward thinking and building a model of the problem ahead (see the planning sketch after this list)
– trade off costs and benefits to find the best computational solution
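A toy planning sketch of the "model of the problem" idea (the 5-cell corridor, rewards and discount factor are all made up): value iteration uses the agent's internal model to think ahead, and the resulting values tell it which way to move.

```python
# Value iteration on a tiny hypothetical corridor: the agent's internal model
# (deterministic moves, reward at the rightmost cell) lets it plan ahead.
GOAL, GAMMA = 4, 0.9
states, actions = range(5), (-1, +1)

def step(s, a):
    # Internal model of the environment: move left/right, clipped to the corridor.
    return min(max(s + a, 0), GOAL)

V = [0.0] * 5
for _ in range(50):  # repeated Bellman backups until the values settle
    V = [0.0 if s == GOAL else
         max((step(s, a) == GOAL) + GAMMA * V[step(s, a)] for a in actions)
         for s in states]

# Greedy one-step lookahead with the model gives the planned policy.
policy = [max(actions, key=lambda a, s=s: (step(s, a) == GOAL) + GAMMA * V[step(s, a)])
          for s in states]
print(V, policy)  # values rise toward the goal; every cell plans to step right
```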
Process model:
– describes the steps (process, algorithm) the mind goes through while performing certain tasks (at the algorithmic level and cognitive band), with cross-connections to other levels (see the toy sketch after this list)
– many varieties: ACT-R, SOAR, EPIC, each with different styles
– they're scientific ways to explain certain processes
– describe the process agents go through and check why, when and how errors occur, to find the limitations of the mind; we can try to emulate human behavior
– AI applications --> when do we need human support given an automated task? what type of support should be given? inspiration for robots and autonomous systems to make decisions in a finite amount of time
– other applications: better interface design, tutoring and support
systems
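A toy process-model sketch in the spirit of production systems such as ACT-R (not real ACT-R code; the rules, their conditions and the 50 ms cycle time are invented): each rule matches the current state, fires, and costs a fixed cycle, so the model predicts both the sequence of steps and the total response time.

```python
# Minimal production-system-style process model (all rules and timings made up).
CYCLE_MS = 50  # assumed cost per production firing

rules = [
    ("encode-stimulus", lambda s: "stimulus" in s, lambda s: s | {"encoded"}),
    ("retrieve-answer", lambda s: "encoded" in s,  lambda s: s | {"answer"}),
    ("respond",         lambda s: "answer" in s,   lambda s: s | {"done"}),
]

state, trace, time_ms = {"stimulus"}, [], 0
while "done" not in state:
    # Pick the first rule whose condition matches and that has not fired yet.
    name, cond, act = next(r for r in rules if r[1](state) and r[0] not in trace)
    state, trace, time_ms = act(state), trace + [name], time_ms + CYCLE_MS

print(trace, f"predicted RT ~ {time_ms} ms")
```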
Abstraction continuum:
– newell’s bands
– marr's levels
– typical processing models look at a behavior, make a model and then
measure it and observe it
– sub-optimal behavior (Kahneman and Tversky):
– humans demonstrate problems in (logical and numerical) judgment
& decision making
– mechanistic explanation --> humans have limitations in ability to
calculate optimal solution (not enough time, memory, knowledge)
– rational explanation --> situation is ill-described, people have other goals (in a broader context) and the environment in daily life is different from the experiment
– core of lewis et al paper is to propose a framework for including
information processing bounds in rational analysis
– application of bounded optimality to the challenges of developing
theories of mechanism and behavior
– comp. rat. --> application of bounded optimality
– both environment and agent count towards behavior
– so the idea is --> first have a general program space, then within it a bounded agent program space (smaller, as the agent is bounded in observations and actions (e.g. only front-facing eyes)), then an ecological environment which provides an evaluation setting for a given task --> this defines the set of actions, which is then further constrained by the utility function, mechanism (cognitive/neural machine), environment and rational evaluation (see the sketch after this list)
– generally, introducing bounds in models results in behavior much more similar to human performance, which argues for why bounds are needed in cognitive modeling
– environment + mechanism --> together shape the agent's program
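A rough sketch of the recipe described above (the recall task, the memory bound and the cost term are all invented): enumerate candidate agent programs, keep only those that satisfy the bound on the mechanism, score each against a sampled ecological environment with a utility function, and pick the best bounded program.

```python
import random

def make_program(memory_size):
    # An agent program = "remember the last `memory_size` digits, then report them".
    def program(sequence):
        return sequence[-memory_size:]
    program.memory_size = memory_size
    return program

program_space = [make_program(k) for k in range(1, 10)]           # general space
bounded_space = [p for p in program_space if p.memory_size <= 4]  # mechanism bound

def utility(program, trials=200, seq_len=6):
    # Ecological evaluation: sample tasks, reward recalled digits, charge for memory.
    score = 0.0
    for _ in range(trials):
        seq = [random.randint(0, 9) for _ in range(seq_len)]
        score += len(program(seq)) - 0.5 * program.memory_size    # benefit - cost
    return score / trials

best = max(bounded_space, key=utility)
print("bounded-optimal memory size:", best.memory_size)
```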
Optimality types:
– type 1: optimality --> makes no assumptions about limitations, no bounds at all, neither physical nor mental (no limitations)
– type 2: ecological-optimality --> given the environment, best options
are constrained by the environment itself
– type 3: bounded-optimality --> first comp. rat. approach, where there are cognitive constraints and the machine is bounded; the model captures the steps of a central controller (production rules), explicitly explores different strategies that are serial or parallel, captures individual differences in the duration of each step (fit to data from the longest SOA conditions), and the overall utility of a strategy is the average of all scores (see the sketch after this list)
– type 4: ecological-bounded-optimality --> consider ecological and
cognitive constraints
– why not use type 1 model --> humans deviate from ideal response
time (not always making 100% correct assumptions and decisions)
– why not use type 2 model:
– environment alone is not enough
– assumptions about agent (internal process) are needed, and these
can differ between theories, e.g. ACT-R vs. EPIC
– when to use a type 3 model --> when human details matter (psych. RQs), what humans can and can't do
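An illustrative sketch of the type 3 idea (all step names, durations, SOAs and "human" data points are invented): a serial strategy with assumed step durations predicts task-2 response times at different SOAs, and the predictions can be compared against observed data.

```python
# Serial (bottleneck) strategy with assumed step durations; predictions are
# compared against hypothetical dual-task data at several SOAs.
steps_ms = {"perceive": 100, "decide": 150, "motor": 120}   # assumed step durations

def bottleneck_delay(soa):
    # Serial strategy: task 2's central step waits until task 1's central step is done.
    return max(0, steps_ms["perceive"] + steps_ms["decide"] - soa)

def predicted_rt2(soa):
    return bottleneck_delay(soa) + sum(steps_ms.values())

human_rt2 = {50: 600, 300: 380, 900: 370}    # hypothetical task-2 RTs (ms) per SOA
for soa, observed in human_rt2.items():
    print(f"SOA {soa:>3} ms: predicted {predicted_rt2(soa)} ms, observed {observed} ms")
```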
CM CLASS 2 - PROCESS MODELING AND NEUROSCIENCE
– example of where it can be relevant: intelligent tutoring systems
– quality of processing models is very important --> how well are
processes captured over time?
– confidence in theory is needed to have confidence in how to design
the models
– common classic model-free brain analyses --> fMRI, EEG, ERP
– check purves et al. fig. 1.6 --> neuroscience methods space-time
tradeoffs
– fMRI:
– when the brain is active, oxygenated blood flows to the active regions; a big magnet measures this signal when activity in the brain happens
– problem with fMRI is that it lacks temporal precision (can mitigate
this with a model)
– EEG:
– in and around our neurons there are charged particles, giving a relative positive or negative charge
– measure change in pos. neg. electrical activity
– EEG output can be indicative of different states
– limitation of raw EEG: we know the fluctuations but not where they come from or how to interpret them
– ERP:
– find out by measuring while participants carry out certain tasks: measure how amplitude changes over time, over multiple trials, then average over these trials (see the averaging sketch below)
– but it requires many trials, only works well for processes close to stimulus onset/offset, mostly externally driven processes (over by
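A minimal sketch of ERP-style trial averaging on synthetic data (the component shape, noise level and trial count are all made up): a small stimulus-locked deflection is invisible on single trials but emerges once many trials are averaged.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 300                   # e.g. 300 samples after stimulus onset
t = np.arange(n_samples)
erp = 2.0 * np.exp(-((t - 100) ** 2) / 400.0)    # assumed component peaking at sample 100
trials = erp + rng.normal(0.0, 5.0, size=(n_trials, n_samples))  # noisy single trials

average = trials.mean(axis=0)                    # averaging cancels out the noise
print(f"single-trial SNR ~ {erp.max() / 5.0:.2f}, average peaks near sample {average.argmax()}")
```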