Lecture 6 – future directions
Summary;
- The emergence of complex systems theories
- High-level cognition relies on low-level processes, e.g. decision-making (guided by emotion) and artificial intelligence (guided by perception and learning).
- Brain as predictor (Visual consciousness/Conscious will)
Social-intuitionist model;
- Do we judge behaviour based on rational thought or an emotional knee-jerk? (Haidt,
2001)
- Proposes that moral judgments are driven by intuition rather than rational reflection, via two routes.
Hot route;
- “You stole my car! HULK SMASH!!!”
- Compare to System 1 (Kahneman, 2011)
Cold route;
- “Stealing is against society because property and work are lynchpins of mutually-
beneficial cooperation between men; it’s what separates us from the animals. I shall
call my local law enforcement officials and prosecute you to the full extent of the
law.”
- Compare to System 2 (Kahneman, 2011)
- Overall, the hot route is more direct than the cold route; the model therefore predicts that emotional state influences decision making.
- Moral judgments and emotional stimuli activate overlapping brain regions, such as the amygdala, thalamus, and upper midbrain (Moll et al., 2002)
- Intuitions are shaped by emotion (Wheatley & Haidt, 2005; Schnall et al., 2008; Cushman & Mele, 2009)
Schnall et al., 2008;
- Participants experience disgust (e.g. exposed to a bad smell or watching the toilet scene in Trainspotting)
- Then judge the morality of actions, e.g. first-cousin marriage, eating dogs
- Ps in the disgust condition judge more negatively
Wheatley & Haidt, 2005;
- Ps are induced to feel disgust through
hypnosis
- Then judge actions
- More disgust > more severe judgment
Intentionality judgments;
- Ps rate whether the chairman intended to harm/help the environment
- ‘The vice-president of a company went to the chairman of the board and said, “We are
thinking of starting a new program. It will help us increase profits for this year’s balance
sheet, but in 10 years it will start to (harm/help) the environment.” The chairman answered,
“I don’t care at all about (harming/helping) the environment. I just want to make as much
profit for this year’s balance sheet as I can. Let’s start the new program.” They started the
new program. Sure enough, ten years later, the environment started to be (harmed/helped)’
- Found;
- Ps are more likely to say the chairman’s actions were intentional
when they harmed the environment
- Automatic, emotional reaction influences judgments
- Does this serve a function for assessing responsibility?
Kohlberg;
- States that morality develops as cognitive abilities (rational thought) develop; rationality should therefore matter in how we make moral judgments.
- Haidt's model accommodates this; reflection and considered thought do have effects over the long term, but they occur much less frequently and take longer.
Artificial intelligence;
- Frame problem = well-defined problems are much easier, e.g. games (chess, Go, poker, StarCraft); these are well suited to what computers can do.
- Domain-general intelligence = conversation and navigation are more difficult and are so far only done by humans; these remain outstanding challenges.
- What can computers do?
- Weak AI = Computers can be programmed to act as if they were
intelligent (as if they were thinking)
- Strong AI = Computers can be programmed to think (i.e. they really are
thinking)
ENIAC (1945) was the first electronic digital computer
- Computers inspired models of cognition.
Deep Blue (1997) was the first computer to beat a reigning world chess champion (Garry Kasparov).
Why did they get it wrong?
- The need for knowledge – Expert chess players have experience
- Expert chess players have strategies and heuristics that are difficult to put into a
computer
- Deep Blue mimics performance, but not underlying processes
Watson (2011);
- Supercomputer which gives you a clue and you needed to guess the actor who it
related to.
- A lot harder than playing chess because it involves language which needs processing.
AlphaGo (2016);
- Selects moves using 'Monte Carlo tree search' combined with deep neural networks (see the sketch below)
- Go is played on a 19x19 board
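
A minimal Python sketch of the Monte Carlo tree search idea mentioned above, using plain random rollouts on a hypothetical game interface (legal_moves, apply, is_terminal, and result are assumed placeholder methods, not AlphaGo's actual API); AlphaGo additionally guides this search with deep neural networks, which the sketch omits.

import math
import random

# Toy interface assumed for illustration: a `state` object with legal_moves(),
# apply(move) -> new state, is_terminal(), and result() -> +1/0/-1 from the root
# player's point of view. Perspective flipping between players is omitted for brevity.

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children = []
        self.untried = list(state.legal_moves())
        self.visits = 0
        self.value = 0.0

    def ucb1(self, c=1.4):
        # Trade off exploitation (average value) against exploration (rarely tried nodes).
        return self.value / self.visits + c * math.sqrt(math.log(self.parent.visits) / self.visits)

def mcts(root_state, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down the tree via UCB1 while the node is fully expanded.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one child for a previously untried move.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.state.apply(move), parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. Simulation: play the game out with random moves (a "rollout").
        state = node.state
        while not state.is_terminal():
            state = state.apply(random.choice(state.legal_moves()))
        outcome = state.result()
        # 4. Backpropagation: update statistics along the path back to the root.
        while node is not None:
            node.visits += 1
            node.value += outcome
            node = node.parent
    # Recommend the most-visited move at the root.
    return max(root.children, key=lambda n: n.visits).move

Calling mcts(current_position) returns the most-visited move at the root, which is the usual final-move choice in plain MCTS.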