Psychological test: standardized measure of a sample of behavior that establishes norms and uses test
items that correspond to what the test is intended to discover about the test taker.
Norms are derived from the scores of the test-takers who take a given test, and establish what is normal within that group.
A good psychological test covers a range of functional domains. It needs to be broad.
The diagnostic process:
Usually starts with the client’s referral to the diagnostician but might also begin with the client’s direct
question to the diagnostician.
Diagnostician analyzes both the client's request for help and the referrer's request. These do not have to be
the same.
Diagnostician also formulates questions that arise during the first meeting with the client.
On the basis of these questions, the diagnostician will construct a diagnostic scenario that contains a
provisional theory about the client, describing what the problems are and how they can be explained.
Testing the theory requires 5 diagnostic measures:
1. Converting the provisional theory into concrete hypotheses.
2. Selecting a specific set of research tools, which can either support or reject the formulated hypotheses.
3. Making predictions about the results or outcomes from this set of tools, in order to give a clear
indication as to when the hypotheses should be accepted or rejected.
4. Administering the instruments and processing the results.
5. Diagnostic conclusion: reasons for why the hypotheses have been rejected/accepted based on the
obtained results.
There are 5 questions that form the basis for most of the questions that are posed by clients, referrers and
diagnosticians:
1. Recognition: What are the problems? What works and what doesn’t?
o Inventory and description
o Organization and categorization into dysfunctional behavior clusters or disorders.
o Examination of the seriousness of the problem behavior.
2. Explanation: why do certain problems exist and what perpetuates them?
3. Prediction: How will the client's problems develop in the future?
4. Indication: How can the problems be resolved?
5. Evaluation: Have the problems been adequately resolved as a result of the intervention?
Diagnostic cycle: way to regulate and discipline the diagnostic process according to the empirical cycle of
scientific research (De Groot, 1961, 1994). Consists of:
1. Observation: collecting and classifying empirical materials, which provide the basis for forming thoughts
about the creation and persistence of problem behavior.
2. Induction: formulation of theory and hypotheses about the behavior.
3. Deduction: testable predictions are derived from the hypotheses.
4. Testing: new materials are used to determine whether the predictions are correct or incorrect.
5. Evaluation.
Diagnostic process: from application to report
Application: diagnostician’s first task is to analyze and clarify the request and the request for help. This
results in:
o Information about the referrer, details and type of request.
 Important to understand the referrer's frame of reference and vision of the client's behavior and
performance, which has been formed by education and experience.
Analysis of request leads to clarification of the relationship between diagnostician and
referrer.
In some cases it is important to make a distinction between the referrer in name and the
actual referrer.
Referrers differ from each other in terms of the nature and extent of the powers which are
available to them.
Analysis of the request also aims to understand the type and content of the request:
 May follow an open-ended format (no existing hypotheses) or a closed format.
Contents of a request are partially connected to the setting from which the request
originates.
Requests can be classified according to the five basic questions.
Analysis is supported by what the referrer already knows about the client.
Analysis helps to determine whether or not the client presented himself to the referrer and
whether he consents to the examination.
o Analysis of the request for help includes exploration of the client’s mindset.
During the first meeting, the client’s attitude to the examination is evaluated.
Content of the problem is determined.
Client is questioned about their complaints, how they started, developed and what factors
play a role.
Client is asked who can best help him and what the result of an intervention should be.
Reflection of the diagnostician: weight is given to each of the various pieces of information. This will
partly be influenced by the diagnostician’s character.
o Diagnostician should be aware of potential biases in both general clinical judgment and towards
clients.
o Diagnostician also estimates their own knowledge of a problem and may refer the client to a
colleague if necessary.
Diagnostic scenario:
o All of the questions from referrer, client and diagnostician are organized.
o On the basis of this information, an initial, tentative theory about the client’s problematic behavior
is proposed.
o In the diagnostic examination, recognition precedes explanation, which both precede prediction
and indication. Diagnostician ideally works through the steps of the diagnostic cycle, but in
practice all of the basic questions are often examined simultaneously.
o Not all of the basic questions need to be examined in every diagnostic examination.
Diagnostic examination: hypotheses are formed.
o Hypothesis: an assumption about a relationship in reality, which is formulated in such a way that
concrete, verifiable predictions may be derived from it.
Recognition: hypothesis centers on the presence of psychopathology or a differential
diagnosis.
Explanation: hypothesis requires a list of explanatory factors and their predisposing or
perpetuating roles.
 Prediction: hypothesis is based on empirical knowledge of successful predictors.
Indication: assumptions about which treatment and therapists are best suited to a client
with a particular problem.
o Selection of examination tools: determined by the nature of the question, the psychometric quality of
the instruments, and efficiency considerations (duration of examination and scoring
convenience).
 Recognition: diagnostician has access to objective instruments that are tailored to broad ranges of
disorders or to specific psychopathological profiles. Observation, anamnestic data and
information from informants might also be classified as examination tools.
Explanation: focus on explanatory factors, such as intelligence, cognitive abilities,
personality, and context factors.
Prediction: instruments that have predictive validity.
Indication: additional questionnaires.
o Formulation of testable predictions: criteria of examination need to be established.
 Criteria might be defined in the DSM-5.
 This has to be done before the examination, so that there is no room for misinterpretation or
bias toward confirming the hypothesis (a minimal sketch of such a pre-specified decision rule follows at the end of this section).
o Administration and scoring: provide qualitative and quantitative information.
 Test results are interpreted with norm tables.
During administration, the diagnostician collects a lot of observational data.
During analysis of each test, it is possible that new hypotheses will arise.
o Argumentation: administration and scoring results are linked back to the hypotheses and
predictions.
Psychometric quality and nature of tools and sources are taken into account.
Weights are assigned to each of them.
 If the results match the hypothesis, it is not rejected. If the results are inconclusive, the
hypothesis may be retained. If there is an obvious contradiction, the hypothesis is rejected.
Diagnostician tries to reach a conclusive outcome, into which as many results as possible
have been integrated.
o Report: contains the results of the diagnostic examination for the referrer. The five steps are used
in the structure of the report.
The report and verbal explanation form the diagnostician’s masterpiece, which is evaluated
by colleagues and requesters.
Initial aim is to substantiate the conclusion from the examination.
In the report, a distinction is made between facts, interpretation of facts, and conclusions.
Sources are mentioned and quality of these sources is weighted.
May contain specific test information, if this provides data that will need to be discussed
later. Separate section of the report.
 The requester should be able to read the information in the way the diagnostician
intended. The report needs to be written clearly, transparently and in a well-structured way.
Reporting to the client is often done verbally.
Criteria for a good psychological test:
Theoretical basis of the test: the information should enable the prospective test user to judge whether
the test is suitable for their purposes. The criterion contains 3 items that deal with the theoretical basis and logic of the
test development procedure:
1. Whether the test manual clarifies the construct that the test purports to measure, the groups for
which it is meant, and the application of the test.
2. Deals with the theoretical elaboration of the test construction process. Assessment of this item also
involves translated tests or adaptations of a foreign instrument. The manual also has to supply
definitions of the constructs to be measured.
3. Asks for information about the operationalization of the construct and deals with content validity, in
particular for educational tests.
Quality of the test materials: if both a paper-and-pencil and a computer version of the test are available,
both should be assessed. The key items deal with:
1. Standardization of test content.
2. Objectivity of the scoring system.
3. Presence of unnecessary culture-bound words or content that may be offensive to specific ethnic,
gender or other groups.
4. Design, content and form of the test materials.
5. Instructions for the test taker.
6. Quality of the items.
7. Scoring of the items.
For adaptive tests it is required that the decision rules for the selection of the next item are specified.
For tests that are computer scored, information has to be provided that enables the rater to check the
correctness of the scoring.
For computer-based tests, special attention is given to the resistance of the software to user errors, quality
of the design of the user interface, and the security of the test materials and test results.
Comprehensiveness of the manual: evaluates the comprehensiveness of the information the manual
provides to the test user, to enable the well-founded and responsible use of the test.
o The manual should, in addition to a User’s Guide, supply a summary of the construction process
and relevant research.
o Key item: whether there is a manual at all.
o Other items: completeness of instruction for successful test administration, information on
restrictions for the use of the test, availability of a summary of results of research performed with
the test, inclusion of case descriptions, availability of indications for test-score interpretation, and
statements on user qualifications.
o For computer-based tests there are extra items: sufficient information has to be supplied with
respect to the installation of the software, there has to be sufficient information regarding the
operation of the software and opportunities provided by it, and technical support has to be
available for practical software use.
Norms: scoring a test usually results in a raw score. These are partially determined by characteristics of
the test, such as number of items, time limits, item difficulty or item popularity, and test conditions.
o Raw scores are difficult to interpret and unsuited for practical use. To give meaning to a raw score, three
ways of referencing can be distinguished (a norm-referenced example is sketched at the end of this section):
 Norm-referenced interpretation: a set of scaled scores may be derived from the distribution
of raw scores of a reference group.
 Domain-referenced interpretation: standards are derived from a domain of skills or subject
matter to be mastered.
 Criterion-referenced interpretation: cut scores are derived from the results of empirical
validity research.
 In domain- and criterion-referenced interpretation, raw scores are categorized into two or
more distinct score ranges.
o The criterion is assessed with 2 key items, plus 3 separate sections for the different types of referenced
interpretation:
1. Checks whether norms, standards, or cut scores are provided.
2. Asks in which year or period the data were collected.
Norm-reference specific:
3. Deals with the size and representativeness of the norm group. Groups larger than 400 are rated “good”.
4. Asks for information on the norm scale used.
5. Means, standard deviations, and other information with respect to the score
distributions
6. Differences between various subgroups
7. Standard error of measurement, standard error of estimate, or test information
function.
Domain-reference specific:
8. Description of specific method chosen and procedures for determining the cut scores.
9. Description of training and selection procedure of the judges.
10. Great importance is assigned to inter-rater agreement with respect to the
determination of the critical score.
Criterion-reference specific:
11. Results have to show sufficient validity.
12. The sample used has to be comparable to the population in which the test is used.
13. The sample has to be sufficiently large.
Reliability: reliability is a basic requirement for a test. However, different estimation methods may
produce different reliability estimates, and in different groups the test score may have different
reliabilities.
o Reliability results should be evaluated from the perspective of the test’s application.
o Classical test theory assumes that a test score additively consists of a reliable component (true
score) and a component caused by random measurement error.
o The objective of the reliability analysis is to estimate the degree to which test-score variance is due
to true-score variance (a worked sketch follows at the end of this section).
o Although reliability estimates can differ depending on method and the characteristics of the group
studied, for the reliability criterion only one qualification is given (insufficient, sufficient or good).
The qualification is based on the range of the majority of the coefficients.
o The reliability criterion has 3 items:
1. Checks whether any reliability results are provided at all.
2. Asks for the level of the reliability.
 < 0.80 is insufficient.
 0.80-0.90 is sufficient.
 > 0.90 is good.
3. Deals with the quality of the reliability research design and the completeness of the
information supplied. The rating for reliability can be adjusted downwards when this research
shows serious weaknesses.
Construct validity: should support the claim that the test measures the intended trait or ability.
o As a consequence of the diversity of validity research, the recommendations of the former
version of the rating system gave few directions with respect to the type and
comprehensiveness of research that would be sufficient for the qualification “good” or “sufficient”.
o The rating system distinguishes 6 types of research in support of construct validity:
1. Research on the dimensionality of the item scores.
2. Psychometric quality of the items.
3. Invariance of the factor structure and possible bias.
4. Convergent and discriminant validity.