Summaries: Psychological Assessment & The
Reader for Exam Part 1
Summary Psychological Assessment (CH 1, 2, 3, 4, 5, 10 & 13)
Chapter 1: Introduction
1: Know the central role of the clinician and how to fulfill this role.
2: Know and explain the five important points in evaluating psychological tests (see Table 1.1)
3: Understand the differences between test-retest reliability, alternate forms, internal consistency and inter-scorer reliability.
4: Understand how a low reliability affects the interpretations and predictions based on the test data
5: Understand the differences between content validity, criterion validity, and construct validity (convergent and discriminant validity)
6: Understand how failing to take base rates into account, confirmatory bias, and hindsight bias influence clinical judgment
7: Know and explain the eight phases in clinical assessment (see Figure 1.1)
Role of the clinician
The central role of clinicians conducting assessment should be to answer specific questions and make clear, specific, and reasonable recommendations to help improve functioning. To fulfill this role, clinicians should integrate a wide range of data and bring into focus diverse areas of knowledge. Thus, they are not merely administering and scoring tests. Psychometrists tend to use tests merely to obtain data, and their task is often perceived as emphasizing the clerical and technical aspects of testing. Their approach is primarily data-oriented, and the end product is often a series of trait/ability descriptions. Psychological assessment, in contrast, places data in a wider perspective, with its focus being problem solving and decision making. The ideal role of the clinician can be more clearly defined by briefly elaborating on the historical and methodological reasons for the development of the psychometric approach.
When psychological tests were originally developed, group measurements of intelligence met with early and noteworthy success, especially in military and industrial settings. An advantage of the data-oriented intelligence tests was that they appeared to be objective, which would reduce possible interviewer bias. A further development within the psychometric approach was the strategy of using a test battery. It was reasoned that if a single test could produce accurate descriptions of an ability/trait, administering a series of tests could create a total picture of the person. The goal, then, was to develop a global, yet definitive description of the person using purely objective methods. Behind this approach were the concepts of individual differences/trait psychology.
The objective psychometric approach is most appropriately applicable to ability tests, such as those measuring intelligence/mechanical skills. Its usefulness decreases, however, when users attempt to assess personality traits such as dependence, authoritarianism, or anxiety. Personality variables are far more complex and therefore need to be validated in the context of history, behavioral observations, and interpersonal relationships.
Psychological assessment is most useful in the understanding and evaluation of personality and in elucidating the likely underlying causes of problems in living. These issues involve a particular problem situation having to do with a specific individual. The central role of the clinician performing psychological assessment is that of an expert in human behavior who must deal with complex processes and understand test scores in the context of a person's life. The clinician must have knowledge concerning problem areas and, on the basis of this knowledge, form a general idea regarding behaviors to observe and areas in which to collect relevant data.
In addition to an awareness of the role suggested by psychological assessment, clinicians should be familiar with core knowledge related to measurement and clinical practice. This includes descriptive statistics, reliability (and measurement error), validity (and the meaning of test scores), normative interpretation, selection of appropriate tests, administration procedures, variables related to diversity (ethnicity, race, gender, culture), testing individuals with disabilities, and an appropriate amount of supervised experience. Clinicians should also know the main interpretive hypotheses in psychological testing and be able to identify, sift through, and evaluate a series of hypotheses to determine which are most relevant and accurate.
The above knowledge should be integrated with relevant general coursework, including abnormal psychology, the psychology of adjustment, theories of personality, clinical neuropsychology, psychotherapy, and basic case management. A problem in many training programs is that although students frequently have knowledge of abnormal psychology, personality theory, and test construction, they usually have insufficient training to integrate their knowledge into the interpretation of test results. Their training focuses on developing competency in administration and scoring, rather than on knowledge relating to what they are testing.
Evaluating Psychological Tests
Before using a psychological test, clinicians should investigate and understand the theoretical orientation of the test, practical considerations, the appropriateness of the standardization sample, and the adequacy of its psychometric properties (reliability and validation). Helpful descriptions and reviews that relate to these issues are often found in the test manuals as well as in past and future editions.
Evaluating a Psychological Test
Theoretical Orientation
1. Do you adequately understand the theoretical construct the test is supposed to be measuring?
2. Do the test items correspond to the theoretical description of the construct?
Practical Considerations
1. If reading is required of the examinee, does his/her ability match the required level?
2. How appropriate is the length of the test?
Standardization
1. Is the population to be tested similar to the population the test was standardized on?
2. Was the size of the standardization sample adequate?
3. Have specialized subgroup norms been established?
4. How adequately do the instructions permit standardized administration?
Reliability
1. Are reliability estimates sufficiently high? (clinical decisions: 0.90; research: 0.70)
2. What implications do the relative stability of the trait, the method of estimating reliability, and the test format have on reliability?
Validity
1. What criteria and procedures were used to validate the test?
2. Will the test produce accurate measurements in the context and for the purpose for which you would like to use it?
Theoretical Orientation  Before clinicians can effectively evaluate whether a test is appropriate, they must understand its theoretical orientation. Clinicians should research the construct that the test is supposed to measure and then examine how the test approaches this construct. This information can usually be found in the test manual. If the information is insufficient, the clinician should seek it elsewhere.
Practical Considerations  A number of practical issues relate more to the context/manner in which the test is used than to its construction. First, tests vary in terms of the level of education (reading skill) that examinees must have to understand them adequately. Second, some tests are too long, which can lead to a loss of rapport with, or extensive frustration on the part of, the examinee. Finally, clinicians have to assess the extent to which they need training to administer and interpret the instrument.
Standardization  Another central issue relates to the adequacy of norms. Each test has norms that reflect the distribution of scores by a standardization sample. The basis on which individual test scores have meaning relates directly to the similarity between the individual being tested and the sample. If a similarity exists between the group or individual being tested and the standardization sample, adequate comparisons can be made. Three major questions relating to the adequacy of norms must be answered. First: whether the standardization group includes representation from the population on which the examiner would like to use the test. Second: whether the standardization group is large enough. Third: a test may have specialized subgroup norms as well as broad national norms. Standardization can also refer to administration procedures. A well-constructed test should have clear instructions that permit examiners to give the test in a manner similar to that of other examiners.
Reliability  The reliability of a test refers to its degree of stability, consistency, and predictability. It addresses the extent to which scores obtained by a person are, or would be, the same if the person were re-examined with the same test on different occasions. Underlying the concept of reliability is the possible range of error, or error of measurement, of a single score. This is an estimate of the range of possible random fluctuation that can be expected in an individual's score. Error is always present in the system, because psychological constructs cannot be measured directly. Two main issues relate to the degree of error in a test:
1. First: the inevitable, natural variation in human performance. Typically, variability is less for measurements of ability than for those of personality and states of being. Personality traits/states of mind are much more dependent on factors such as mood.
2. Second: psychological testing methods are necessarily imprecise. In the hard sciences, researchers can make direct measurements, such as the concentration of a chemical solution. In contrast, many constructs in psychology are measured indirectly. Variability in measurement also occurs simply because people have true fluctuations in performance.
A high measure of reliability is generally .80 or more, but the variable being measured also changes the expected strength of the statistic. Ideally, clinicians hope for reliability statistics of .90 or higher in tests. The purpose of reliability is to estimate the degree of test variance caused by error.
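To make the idea of a "range of error" around a single score concrete: in classical test theory the standard error of measurement is derived from the test's standard deviation and its reliability coefficient. The formula and the numbers below are a standard illustration, not taken from this chapter:

    import math

    sd = 15.0           # assumed standard deviation of the test scores (IQ-style scale)
    reliability = 0.90  # assumed reliability coefficient of the test

    # Standard error of measurement: expected random fluctuation around an observed score
    sem = sd * math.sqrt(1 - reliability)
    print(round(sem, 1))  # roughly 4.7 points of expected fluctuation

The higher the reliability, the smaller this band of expected fluctuation, which is one reason clinical decisions call for coefficients of .90 or higher.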
There are four primary methods of obtaining reliability:
Test-retest: the extent to which the test produces consistent results upon retesting.
Alternate forms: the relative accuracy of a test at a given time.
Split-half/coefficient alpha: the internal consistency of the items (see the sketch after this list).
Inter-scorer: the degree of agreement between two examiners.
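For the split-half/coefficient alpha entry above, a minimal sketch of how coefficient alpha is conventionally computed from an item-score matrix; the formula is standard classical test theory, and the scores are made up for illustration:

    import numpy as np

    def cronbach_alpha(item_scores):
        """Coefficient alpha for a matrix with one row per examinee and one column per item."""
        items = np.asarray(item_scores, dtype=float)
        k = items.shape[1]                                # number of items
        item_variances = items.var(axis=0, ddof=1).sum()  # sum of the individual item variances
        total_variance = items.sum(axis=1).var(ddof=1)    # variance of the total test scores
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # Hypothetical responses of five examinees to four items
    scores = [[3, 4, 3, 4],
              [2, 2, 3, 2],
              [4, 4, 5, 4],
              [1, 2, 1, 2],
              [3, 3, 4, 3]]
    print(round(cronbach_alpha(scores), 2))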
Test-Retest Reliability
Test-retest reliability is determined by administering the test and repeating it on a second occasion. The reliability coefficient is calculated by correlating the scores obtained by the same person on the two different administrations. The degree of correlation between the two scores indicates the extent to which the test scores can be generalized from one situation to the next. If the correlations are high, the results are less likely to be caused by random fluctuations in the condition of the examinee or the testing environment.
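Since the test-retest coefficient is simply the correlation between the two administrations, it can be illustrated with a minimal sketch; the scores below are hypothetical:

    import numpy as np

    # Hypothetical scores of the same ten examinees on two administrations of the same test
    first_administration  = [ 98, 105, 110,  87, 120,  95, 102, 115,  99, 108]
    second_administration = [101, 103, 112,  90, 118,  94, 100, 117,  97, 110]

    # Pearson correlation between the two sets of scores = test-retest reliability coefficient
    r_test_retest = np.corrcoef(first_administration, second_administration)[0, 1]
    print(round(r_test_retest, 2))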
Alternate Forms
The alternate forms method avoids many of the problems encountered with test-retest reliability. The logic behind alternate forms is that, if the trait is measured several times on the same individual using parallel forms of the test, the different measurements should produce similar results. The degree of similarity between the scores represents the reliability coefficient of the test. The interval between administrations should always be included in the manual, as well as a description of any likely significant intervening life experiences.