COURSE 6 – PSYCHODIAGNOSTICS – TUTORIALS
TASK 1 – A DIFFICULT PATIENT
1. What is a good psychological test?
- Covers a broad range of functional domains;
- Provides empirically quantified information;
- Has standardized administration and scoring procedures;
- Has norms to compare against and give meaning to results;
- Research shows it is reliable and valid.
Psychological test = a standardized measure of a sample of behaviour that
establishes norms and uses relevant test items that correspond to what the test is
meant to discover about the test taker.
Norms are derived from the scores of the group of test-takers who take a given test;
they establish what is normal in that group.
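As a hypothetical illustration (the normative mean, SD, and raw score below are invented, not taken from any real test manual), a raw score gains meaning by locating it within the norm group's distribution:

```python
# Hypothetical illustration: interpreting a raw test score against norms.
# The normative mean/SD and raw score are made-up values, not from a real test.
from statistics import NormalDist

norm_mean, norm_sd = 100, 15   # assumed normative mean and standard deviation
raw_score = 85

z = (raw_score - norm_mean) / norm_sd        # standard score relative to the norms
percentile = NormalDist().cdf(z) * 100       # % of the norm group scoring lower

print(f"z = {z:.2f}, percentile = {percentile:.1f}")
# → z = -1.00, percentile = 15.9
```

Without the norms, the raw score of 85 is just a number; with them, it says the person scored lower than roughly 84% of the comparison group.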
Uniformity of a psychological test:
- Administrators present the test in the same way;
- Test-takers take the test in the same way;
- Scorers score the test in the same way.
This uniformity helps with validity and reliability.
The purposes and appropriate uses of psychological assessment:
- To describe current functioning (including cognitive abilities, severity of
disturbance, and capacity for independent living);
- Confirm, refute, or modify the impressions formed by clinicians through their
less structured interactions with patients;
- Identify therapeutic needs, highlight issues likely to emerge in treatment,
recommend forms of intervention, and offer guidance about likely outcomes;
- Aid in the differential diagnosis of emotional, behavioural, and cognitive
disorders;
- Conduct a pre-treatment evaluation; assessment is likely to yield the greatest
overall utility when:
a. The treating clinician or patient has salient questions;
b. There are a variety of treatment approaches from which to choose and
a body of knowledge linking treatment methods to patient
characteristics;
c. The patient has had little success in prior treatment;
d. The patient has complex problems and treatment goals must be
prioritized.
- Monitor treatment over time to evaluate the success of interventions or to
identify new issues that may require attention as original concerns are
resolved;
- Manage risk, including minimization of potential legal liabilities and
identification of untoward treatment reactions;
- Provide skilled, empathic assessment feedback as a therapeutic intervention
in itself.
A foundation for understanding testing and assessment validity evidence
Three readily accessible but inappropriate benchmarks can lead to unrealistically
high expectations about effect magnitudes:
1) It is easy to recall a perfect association. However, perfect associations are
never encountered in applied psychological research, making this benchmark
unrealistic.
2) It is easy to implicitly compare validity correlations with reliability coefficients
because the latter are frequently reported in the literature. However, reliability
coefficients evaluate only the correspondence between a variable and itself,
so they cannot provide a reasonable standard for evaluating the association
between two distinct real-world variables.
3) Monomethod validity coefficients appear throughout the psychological
literature (e.g., self-reports are compared with other self-reports).
Because the systematic error of method variance is aligned in such studies,
the results are inflated and do not provide a reasonable benchmark for
considering the real-world associations between two independently measured
variables.
Instead of relying on unrealistic benchmarks to evaluate findings, psychologists
studying highly complex human behaviour should rather be satisfied when they can
identify replicated univariate correlations among independently measured
constructs.
Therapeutic impact is likely to be greatest when:
a. Initial treatment efforts have failed;
b. Patients are curious about themselves and motivated to participate;
c. Collaborative procedures are used to engage the patient;
d. Family and allied health service providers are invited to furnish input;
e. Patients and relevant others are given detailed feedback about results.
Monomethod validity coefficients are obtained whenever numerical values on a
predictor and criterion are completely or largely derived from the same source of
information.
- E.g., a self-report scale that is validated by correlating it with a conceptually
similar scale that is also derived from self-report.
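The inflation from shared method variance can be sketched with a minimal simulation (all numbers are illustrative assumptions, not empirical estimates): two self-report scales share a response-style component, while an observer rating taps the same trait through an independent method.

```python
# Minimal simulation (illustrative assumptions only): shared method variance
# inflates the correlation between two self-report scales relative to the
# cross-method correlation with an independently obtained observer rating.
import random
random.seed(0)

def corr(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

n = 10_000
trait = [random.gauss(0, 1) for _ in range(n)]   # true construct
style = [random.gauss(0, 1) for _ in range(n)]   # shared self-report response style

# Two self-report scales: both contain the same method (style) component.
self_a = [t + s + random.gauss(0, 1) for t, s in zip(trait, style)]
self_b = [t + s + random.gauss(0, 1) for t, s in zip(trait, style)]
# An observer rating: same trait, independent method.
observer = [t + random.gauss(0, 1) for t in trait]

print(f"monomethod r   = {corr(self_a, self_b):.2f}")    # inflated, ~0.67 here
print(f"cross-method r = {corr(self_a, observer):.2f}")  # lower, ~0.41 here
```

Under these assumptions the monomethod coefficient is substantially larger even though both pairs reflect the same underlying trait, which is why monomethod coefficients are a misleading benchmark for real-world associations.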
Distinctions between psychological testing and psychological assessment:
Psychological testing = a relatively straightforward process wherein a particular
scale is administered to obtain a specific score.
- Score has one meaning.
Psychological assessment = concerned with the clinician who takes a variety of
test scores, generally obtained from multiple test methods, and considers the data in
the context of history, referral information, and observed behaviour to understand the
person being evaluated, to answer the referral questions, and then to communicate
findings to the patient, his or her significant others, and referral sources.
- Score can have different meanings (after considering all relevant information).
- Assessment uses test-derived sources of information in combination with
historical data, presenting complaints, observations, interview results, and
information from third parties to disentangle the competing possibilities.
Distinctions between formal assessment and other sources of clinical information:
1) Psychological assessments generally measure a large number of
personality, cognitive, or neuropsychological characteristics
simultaneously. As a result, they are inclusive and often cover a range of
functional domains, many of which might be overlooked during less formal
evaluation procedures.
2) Psychological tests provide empirically quantified information, allowing for
more precise measurement of patient characteristics than is usually obtained
from interviews.
3) Psychological tests have standardized administration and scoring procedures
(in less formal assessments, standardization is lacking).
4) Psychological tests are normed, permitting each patient to be compared with
a relevant group of peers, which in turn allows the clinician to formulate
refined inferences about strengths and limitations.
5) Research on the reliability and validity of individual test scales sets formal
assessment apart from other sources of clinical information. Without this,
practitioners have little ability to measure the accuracy of the data they
process when making judgments.
6) Psychological assessment uses test batteries. In a battery,
psychologists generally employ a range of methods to obtain information and
cross-check hypotheses. These methods include self-reports, performance
tasks, observations, and information derived from behavioural or functional
assessment strategies.
Assessment methods:
- Unstructured interviews elicit information relevant to thematic life
narratives, though they are constrained by the range of topics considered and
ambiguities inherent when interpreting this information;
- Structured interviews and self-report instruments elicit details
concerning patients’ conscious understanding of themselves and overtly
experienced symptomatology, though they are limited by the patient’s
motivation to communicate frankly and their ability to make accurate
judgements;
- Performance-based personality tests elicit data about behaviour in
unstructured settings or implicit dynamics and underlying templates of
perception and motivation, though they are constrained by task engagement
and the nature of the stimulus materials;
- Performance-based cognitive tasks elicit findings about problem solving
and functional capacities, though they are limited by motivation, task
engagement, and setting;
- Observer rating scales elicit an informant’s perception of the patient,
though they are constrained by the parameters of a particular type of
relationship and the setting in which the observations transpire.
These distinctions provide each method with particular strengths for measuring
certain qualities, as well as inherent restrictions for measuring the full scope of
human functioning. Independence among psychological methods can point to
unappreciated complexity. This low cross-method correspondence can indicate
problems with one or both methods. Cross-method correlations cannot reveal what
makes a test distinctive or unique, and they also cannot reveal how good a test is in
any specific sense. Clinicians and researchers should recognize the unique strengths
and limitations of various assessment methods and harness these qualities to select
methods that help them more fully understand the complexity of the individual being
evaluated.
Method disparities and errors in practice
Although a single clinician using a single method to obtain information from a patient
is less expensive, this approach yields an incomplete or biased understanding of that
patient. Patients will be misunderstood, mischaracterized, misdiagnosed, and less
than optimally treated. Over the long term, this should increase health care costs.
Issues at the interface of assessment research and practice
Validity coefficients suggest that psychologists have a limited capacity to make
reasoned, individualized judgments from test scales alone. When one considers the
errors associated with measurement and the infrequent occurrence of most clinical
conditions, validity coefficients are too small to justify testing-based decisions for
individuals. One cannot derive unequivocal clinical conclusions from test scores
considered in isolation. However, failure to appreciate the testing-versus-assessment
distinction has led some to seriously question the utility of psychological tests in
clinical contexts.
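The base-rate problem behind this point can be made concrete with a short calculation (the prevalence, sensitivity, and specificity values are assumed for illustration, not empirical): even a reasonably valid test produces many false positives when a condition is infrequent.

```python
# Illustrative base-rate arithmetic (assumed, not empirical, values):
# when a condition is infrequent, most positive results are false positives,
# so a test score alone cannot justify an individual clinical decision.
prevalence  = 0.05   # 5% of the assessed population has the condition
sensitivity = 0.80   # P(test positive | condition present)
specificity = 0.80   # P(test negative | condition absent)

true_pos  = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)
ppv = true_pos / (true_pos + false_pos)   # P(condition | test positive)

print(f"PPV = {ppv:.2f}")
# → PPV = 0.17: about 5 of every 6 positive results are false positives
```

This is why test scores must be integrated with history, observation, and other data sources rather than interpreted in isolation.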
Because most research studies do not use the same type of data that clinicians do
when performing an individualized assessment, the validity coefficients from testing
research may underestimate the validity of test findings when they are integrated into
a systematic and individualized psychological assessment.
Contextual factors play a very large role in determining the final scores obtained on
psychological tests and must therefore be considered. Contextual
factors associated with each individual contribute to what is known as method
variance. However, it is much more difficult to make such individualized adjustments
when conducting research. As clinicians view all test data in a contextually
differentiated way, the practical value of tests used in clinical assessment is likely
greater than what is suggested by the research on their nomothetic associations.
However, trying to document the validity of individualized, contextually informed
conclusions is very complex.
Conclusions
Formal assessment is a vital element in psychology's professional practice. The
review documents strong, positive evidence for the value of psychological testing
and assessment in clinical practice. The validity of psychological tests is
comparable to the validity of medical tests, indicating that differential limits on
reimbursement for psychological and medical tests cannot be justified on the basis
of the empirical evidence. In addition, distinct assessment methods provide unique
sources of data, and sole reliance on a clinical interview often leads to an
incomplete understanding of patients. On the basis of a large body of evidence, it is
argued that optimal knowledge in clinical practice (as in research) is obtained from
the sophisticated integration of information derived from a multimethod assessment
battery. Finally, critical implications flow from the distinction between testing and
assessment, and future investigations should focus on the practical value of
assessment for clinicians who provide test-informed services to patients and
referral sources.
Source: Meyer
2. What makes a test valid and reliable?