Philosophy and Neuroethics - all lectures + summary of literature

  • Class notes - 69 pages
  • December 18, 2021 - Academic year 2021/2022
  • Lecturer: Leon de Bruin
Philosophy and Neuroethics




Week 1 - The Mind-Body Problem 2
• Searle, J. (1980) Minds, brains and computers 2
• Clark & Chalmers 1998 The Extended Mind 5
• Lecture 1 (a) - 02/11/21 8
• Lecture 1 (b) - 04/11/21 13

Week 2 - Science and Objectivity 17
• Kent Staley - An Introduction to Philosophy of Science, chapters 2, 5, and 12 17
• Lecture 2 (a) - 09/11/21 - Normative-descriptive, empiricism, falsificationism 25
• Lecture 2 (b) - 11/11/21 - Paradigms & values 28

Week 3 - Foundations of Ethics 31
• Medical ethics: four principles plus attention to scope - Raanan Gillon 31
• Shafer-Landau, The Fundamentals of Ethics, Chapter 9: Consequentialism, Its Nature and Attractions 33
• Lecture 3 (a) - 16/11/21 35
• Lecture 3 (b) - 18/11/21 - Consequentialism & Medical Ethics 39

Week 4 - Consciousness, the Concepts of AI Intelligence 40
• Nagel, T. (1974), What is it like to be a bat? 40
• Lecture 4 (a) - 23/11/21 42
• Lecture 4 (b) - 25/11/21 48

Week 5 - Epistemology, Ethics & Algorithms 51
• C. Thi Nguyen (2018), Escape the Echo Chamber 51
• Lecture 5 (a) - 07/12/21 -> no recording, only slides 53
• Lecture 5 (b) - 09/12/21 56

Week 6 - Free Will and Brainreading 60
• Benjamin Libet, Do we have free will? 60
• Lecture 6 (a) - 07/12/21 62
• Lecture 6 (b) - 09/12/21 66





Week 1 - The Mind-Body Problem

• Searle, J. (1980) Minds, brains and computers

Searle argues that although weak AI, which states that the mind functions somewhat like a
computer, might be correct, strong AI, which states that the appropriately programmed computer
is a mind and has intentions, is false.

What psychological and philosophical significance should we attach to recent efforts at computer
simulation of human cognitive capacities?
A distinction between ‘strong’ AI and ‘weak/cautious’ AI is needed to answer this:
(1) Weak AI
-> principal value in the study of the mind: a powerful tool for humans
(2) Strong AI - the focus of this essay
-> not merely a tool in the study of the mind -> it is a mind, with understanding and cognitive
abilities

Schank’s program: the aim of the program is to simulate the human ability to understand stories.
It is characteristic of human beings’ story-understanding capacity that they can answer questions
about the story even though the information they give was never explicitly stated in the story
(e.g. the burger example - did he eat it or not?).
Schank’s machines can similarly answer questions: they have a representation of the sort of
information that human beings have, which enables them to answer questions about such
stories. When the machine is given the story and then asked the question, it will print
out answers of the sort we would expect human beings to give if told similar stories.
Partisans of strong AI claim that in this question-and-answer sequence the machine is not only
simulating a human ability but also (1) that the machine can literally be said to understand the
story and provide the answers to questions, and (2) that what the machine and its program do
explains the human ability to understand the story and answer questions about it.
-> the author argues these claims are unsupported by Schank’s work
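A drastically simplified, hypothetical sketch of what a script-based program of this kind does (the script contents and the crude cue-matching scheme here are invented for illustration; they are not Schank's actual implementation):

```python
# Toy "script" program: a restaurant script stores default knowledge,
# so the program can answer a question whose answer was never
# explicitly stated in the story itself.
SCRIPT = [
    # (cue in story, scripted inference) - checked in priority order
    ("burnt",   "No, the customer did not eat the food."),
    ("tip",     "Yes, the customer ate the food."),
    ("ordered", "Yes, the customer ate the food."),
]

def answer(story: str) -> str:
    """Return the first scripted inference whose cue appears in the story."""
    for cue, inference in SCRIPT:
        if cue in story:
            return inference
    return "The script offers no answer."

print(answer("A man ordered a hamburger; it arrived burnt and he stormed out."))
# -> No, the customer did not eat the food.
```

The answer looks like story understanding, yet it is produced by matching stored defaults against strings; whether that amounts to understanding is exactly what Searle goes on to dispute.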
—> thought experiment: producing Chinese writing while not knowing Chinese. Suppose the
following is given:
- First batch of Chinese writing — script
- Second batch of Chinese script — story
- Set of rules in English for correlating both batches, enabling one to correlate one set of formal
symbols with another set of formal symbols — program
- Third batch of Chinese symbols — questions
- Instructions in English for correlating the third batch with the first two — program
—> suppose I become so good at following the instructions that my answers become indistinguishable
from those of native Chinese speakers - I am behaving like a computer, performing computational
operations on formally specified elements —> an instantiation of the computer program
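The purely formal character of this rule-following can be made vivid with a toy sketch (the symbol names and the rule table are invented here; no such specific table appears in Searle's paper):

```python
# Toy rulebook: maps shapes of incoming symbols to shapes of outgoing
# symbols. The tokens are deliberately opaque labels - the program
# relates symbols to symbols, never symbols to meanings.
RULEBOOK = {
    ("squiggle-17", "squoggle-04"): "squiggle-91",
    ("squoggle-08",): "squiggle-23",
}

def follow_rules(symbols: tuple) -> str:
    """Emit whatever symbol the rulebook pairs with the input shapes."""
    return RULEBOOK.get(symbols, "squiggle-00")  # fixed default squiggle

# "Correct" output is produced purely by shape-matching:
print(follow_rules(("squiggle-17", "squoggle-04")))  # -> squiggle-91
```

However good the mapping becomes, nothing in the table or the lookup assigns any symbol a meaning - which is Searle's point about the person in the room.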
Claims (1) and (2):
(1) I obviously do not understand a word of the Chinese stories.
(2) The computer and its program do not provide sufficient conditions for understanding, since the
computer and the program are functioning and there is no understanding. -> But does the program
even provide a necessary condition or a significant contribution to understanding? Is understanding
a story in English not exactly the same as what I was doing in manipulating the Chinese symbols?
Suppose that we can construct a program with the same inputs and outputs as native speakers,
and suppose in addition that speakers have some level of description at which they are also
instantiations of a program.





Whatever purely formal principles you put into the computer, they will not be sufficient for
understanding, since a human will be able to follow the formal principles without understanding
anything. But are such principles necessary or even contributory? No reason has been given
to suppose that when I understand English I am operating with any formal program at all.
—> What do I have with English sentences that I do not have with Chinese sentences?
The obvious answer is that I know what the former mean, while I haven't the faintest idea what the
latter mean. But in what does this knowledge consist, and why couldn't we give it to a machine,
whatever it is?

There are different degrees of understanding, clear cases in which the word applies, and clear
cases in which it does not.
We often attribute ‘understanding’ and other cognitive predicates by metaphor and analogy to
cars, adding machines, and other artefacts, but nothing is proved by such attributions. The
reason we make these attributions is that in artefacts we extend our own
intentionality - our tools are extensions of our purposes, and so we find it natural to make
metaphorical attributions of intentionality to them. E.g. the sense in which an automatic door
‘understands instructions’ from its photoelectric cell is not at all the sense in which I understand
English.
Newell and Simon write that the kind of cognition they claim for computers is exactly the same as
for human beings.
-> the author argues that in the literal sense the programmed computer understands what the car and
the adding machine understand, namely, exactly nothing. The computer's understanding is not
just partial or incomplete; it is zero.
-> Could a machine think? Yes, we are precisely such machines.
-> Could an artificial, man-made machine think? Assuming it is possible to produce artificially a
machine with a nervous system, neurons with axons and dendrites, and all the rest of it,
sufficiently like ours, the answer again seems to be obviously yes. If you can
exactly duplicate the causes, you could duplicate the effects.
-> Could a digital computer think? If by ‘digital computer’ we mean anything at all that has a level
of description at which it can correctly be described as the instantiation of a computer program, then
again yes, since we are the instantiations of any number of computer programs, and we can
think.
-> Could something think, understand, and so on solely by virtue of being a computer with the
right sort of program? Could instantiating a program by itself be a sufficient condition of
understanding? No.
-> Why not? Because the formal symbol manipulations by themselves don't have any
intentionality; they are quite meaningless; they aren't even symbol manipulations, since the
symbols don't symbolise anything. Such intentionality as computers appear to have is solely in
the minds of those who program them and those who use them, those who send in the input and
those who interpret the output.
-> If programs are in no way constitutive of mental processes, why have so many people believed
the converse? Why on earth would anyone suppose that a computer simulation of understanding
actually understood anything? For simulation, all you need is the right input and output and a
program in the middle that transforms the former into the latter.

Several reasons why AI does seem able to reproduce and thereby explain mental
phenomena:
- Confusion about the notion of ‘information processing’: many believe that the human brain
does something called ‘information processing’, and that analogously the computer with its
program does information processing; thus when the computer is properly programmed,
ideally with the same program as the brain, the information processing is identical in the two
cases, and this information processing is the essence of the mental. But while people ‘process
information’ when they read and answer questions about stories, the programmed computer
does not do ‘information processing’; it manipulates formal symbols. The computer lacks an
interpretation of its first-order symbols. Thus, either we construe the notion of ‘information
processing’ in such a way that it implies intentionality as part of the process, or we don't. If the
former, then the programmed computer does not do information processing; it only
manipulates symbols. If the latter, then the computer does information processing, but only in the
same sense as adding machines and typewriters - meaning that outsiders interpret the input and
output.
- Residual behaviourism or operationalism: since appropriately programmed computers can
have input-output patterns similar to those of human beings, we are tempted to postulate
mental states in the computer similar to human mental states.
- Residual operationalism is joined to a residual form of dualism: strong AI only makes sense
given the dualistic assumption that, where the mind is concerned, the brain doesn't matter. In
strong AI what matters are programs, and programs are independent of their realisation in
machines. -> unless you accept some form of dualism, the strong AI project hasn't got a
chance. The project is to reproduce and explain the mental by designing programs, but unless
the mind is not only conceptually but empirically independent of the brain, you couldn't carry
out the project, for the program is completely independent of any realisation. -> only a machine
can think, and indeed only very special kinds of machines, namely brains and machines that
have the same causal powers as brains. And that is the main reason strong AI has had little to
tell us about thinking, since it has nothing to tell us about machines. By its own definition, it is
about programs, and programs are not machines. —> basically, this is dualism: one assumes that to
make AI you need only programs to replicate the mind, whereas replicating the mind should
logically mean creating a brain as well.
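Searle's first point above - that the computer only manipulates formal symbols, with any interpretation supplied from outside - can be illustrated with a sketch (my example, not from the text): one and the same formal operation supports incompatible readings.

```python
def transform(tokens: str) -> str:
    """Formally, the program does exactly one thing: reverse the string."""
    return tokens[::-1]

out = transform("ABC")  # -> "CBA"
# Reading 1: the tokens name people in a queue; the program reverses
#            the order of service.
# Reading 2: the tokens are digits of a numeral; the program computes
#            a different number.
# Neither reading is anywhere in the program; each is supplied by an
# outside interpreter of the input and output.
```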






Sold on Stuvia by seller isabellevlug ($11.81).