Article: The technology crisis in neuropsychology
Authors: Miller & Barr
Published: 2017
Technological development lacking in neuropsychology
Neuropsychology uses outdated and labor-intensive methods to collect data. These methods are slow,
inefficient, expensive and provide poor estimates of human behavior. The field of neuropsychology as a
whole is relatively young compared to other fields of psychology. Neuropsychology still relies heavily on paper-
and-pencil tests. Many tests are minimally revised versions of older tests, sometimes even using the same stimuli
and settings as 100 years ago. For example, some list-learning tests (e.g., the 15-word test) still use the same
words as when the test was developed (1919). Most of the tests used in neuropsychology were not developed
with the goal of assessing brain-behavior relationships.
Factors that contributed to the limited adoption of new technology in neuropsychology:
- More focus on technical challenges than on technical advantages
- Concerns about clinicians and patients lacking familiarity with new technology
- The role of trained examiners (who evaluate/interpret test results) might diminish
- Lack of innovation in the tests that were digitized (most were simply standard paper-and-pencil tests
redeveloped for the computer, with little real innovation)
- For many digitized tests, there were no psychometric studies performed, so little was known about the
concurrent validity of computer measures
- Limited quantity and quality of normative data for computer measures
- Often, more effort was put into the technology itself rather than in the psychometric properties of tests
Rabin, Barr and Burton (2005) found that neuropsychologists mostly use standard paper-pencil tasks. Rabin,
Paolillo and Barr (2016) found that 10 years later, neuropsychologists still used these tasks. Almost half of
neuropsychologists never use computer testing (Rabin et al., 2014). Reasons why neuropsychologists did not use
computer testing:
- Financial costs associated with purchasing computer tests
- Lack of good normative data
- Concerns about test utility or validity
Advantages of technological integration
The most obvious way to move forward in computerized testing is to digitize current tests. This helps automate
administration and scoring. Digitized tests also yield far more output, such as speed and consistency of
responses, accuracy, pencil pressure, the number/length/position of pauses, amplitude and frequency, and more.
Speech recognition technology could be used to automatically record patterns in responding, pauses, grammatical
errors, consistency and latency in tasks such as the verbal fluency task. Computers are far more precise than humans.
It is harder to develop computer measures for free recall memory. Speech recognition could be used, but has
limitations. For example, the reliability and accuracy of speech recognition engines are not yet great; these
engines are still prone to error. However, such errors will likely decrease as the technology continues to develop.
It is even harder to assess nonverbal learning and memory. However, it would be useful if verbal and nonverbal
memory could be integrated.
Computerized scoring
A benefit of digital tests is the built-in standardization of administration of the test. For example, sample questions
could be repeated until it is clear that the patient fully understands the task demands. This built-in standardization
increases reproducibility and reliability. Automatic scoring reduces scoring and data entry errors (data does not
have to be entered manually). Another benefit of computerized testing is the ability to apply Item Response
Theory (IRT): the patient's item-level performance is tracked and, based on that performance, the
computer calculates which items the patient should answer next (so that the patient does not have to
answer many very simple items if it is already clear that the patient's performance exceeds that basic level). If
a simple item is answered correctly, the computer automatically skips to a somewhat more difficult item, and
so on. This selection of the next item is not feasible for humans and requires the computational
speed and power of a computer. By using IRT, fewer items are needed to determine an underlying ability
level. A condition for IRT to work is that the test needs a very large item bank to draw items from. This has
the added benefit that multiple alternate forms of the test could be developed, which would also be useful for
longitudinal monitoring and re-evaluations. New items could also be integrated and studied more easily.
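The adaptive item-selection idea can be sketched as follows. This is a minimal one-parameter (Rasch) simulation with invented item difficulties and a deliberately crude ability update, not the procedure from the article:

```python
import math

def p_correct(ability, difficulty):
    """Rasch model: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def next_item(item_bank, ability, asked):
    """Pick the unasked item whose difficulty is closest to the current
    ability estimate (roughly the most informative item under Rasch)."""
    candidates = [i for i in item_bank if i not in asked]
    return min(candidates, key=lambda d: abs(d - ability))

def update_ability(ability, difficulty, correct, step=0.5):
    """Crude gradient-style update toward the observed response.
    A real CAT would use maximum likelihood or a Bayesian estimate."""
    return ability + step * ((1.0 if correct else 0.0) - p_correct(ability, difficulty))

# Simulated session: item difficulties on a logit scale (invented).
bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
ability, asked = 0.0, []
for response in [True, True, False]:  # hypothetical patient responses
    item = next_item(bank, ability, asked)
    asked.append(item)
    ability = update_ability(ability, item, response)
print(asked)  # items climb in difficulty until the first error
```

The key point the sketch shows is that each response immediately steers which item comes next, which is what makes the procedure impractical without a computer.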
Points of consideration for computerized scoring:
- Computers are more rigid and less flexible than analog measures. For example, computers might not
recognize when a patient does not understand task instructions or when the patient needs some
reorienting cues. It is also harder to revise an answer once it is recorded.
- It is important to maintain the security and confidentiality of a patient’s data. Patients may have to give
informed consent for their data to be saved electronically. However, over 60% of participants in a large
survey are already comfortable with storing their medical records online and this number is likely to grow.
- The volume of data generated by computer measures is much greater than the volume generated by analog
measures.
Remote and portable testing
New technology helps with remote testing. Ideally, remote testing methods will closely parallel the methods used
for in-lab settings to facilitate comparability of results from both remote and in-lab settings. Remote testing will
improve accessibility, especially in underserved areas. A disadvantage of remote testing is that it is limited to
computer/device-based interfaces and may thus limit the input and response of the patient. A good remote
assessment would require standardization and appropriate normative data.
Points of consideration for remote testing:
- It is hard to verify the identity of the person taking the test. Biometric identification looks promising to help
solve this issue (e.g., thumbprint readers or facial recognition). Remote testing could be made even more
secure by two-level authentication, for example facial recognition combined with a one-time access code
provided by the therapist.
- It is hard to assess the validity of obtained data. One specific problem is random responding. Some
patterns can be detected, for example when a patient shows an unpredictable pattern of errors that does
not follow a typical difficulty curve.
- Test security: test items and test content might be recorded or written down. It might be possible to make
a large item bank and to generate a different test each time, using different items from the item bank.
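The difficulty-curve check for random responding mentioned above can be illustrated with a small sketch. The data and the interpretation of the correlation are invented for demonstration; a real validity index would be normed empirically:

```python
def difficulty_accuracy_slope(difficulties, correct):
    """Pearson correlation between item difficulty and correctness (0/1).

    On a valid protocol, accuracy falls as difficulty rises, so the
    correlation should be strongly negative; a value near zero suggests
    errors scattered without regard to difficulty (possible random responding).
    """
    n = len(difficulties)
    mean_d = sum(difficulties) / n
    mean_c = sum(correct) / n
    cov = sum((d - mean_d) * (c - mean_c) for d, c in zip(difficulties, correct))
    sd_d = sum((d - mean_d) ** 2 for d in difficulties) ** 0.5
    sd_c = sum((c - mean_c) ** 2 for c in correct) ** 0.5
    return cov / (sd_d * sd_c)

difficulties = [1, 2, 3, 4, 5, 6, 7, 8]       # invented item difficulties
attentive   = [1, 1, 1, 1, 1, 0, 0, 0]        # errors cluster on hard items
random_like = [1, 0, 1, 0, 0, 1, 0, 1]        # errors scattered everywhere

print(difficulty_accuracy_slope(difficulties, attentive))    # strongly negative
print(difficulty_accuracy_slope(difficulties, random_like))  # near zero
```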
Continual or high-frequency (passive) data-capture
Portable and wearable technology (e.g., phones, tablets and (smart)watches) can be used to assess and gather
everyday behavior (e.g., an online logbook) and health measures (e.g., heart rate, step count; these are examples
of passive data-capture). The amount of data that can be gathered on a patient is massive. However, it is a
challenge to sift through ‘noise’ in the data. Robust baseline data could facilitate comparisons within the patient
over time. With the use of predictive algorithms and data from the patient, important changes in behavior and
health could be predicted and the patient alerted. Another advantage of continual data-capture is that
behavioral data are captured in real time; with additional information about the local environment, it would even
be possible to link behavior to its context. Because behavior is monitored so closely, treatment recommendations
and changes could be updated much sooner. The ecological validity of continual data-capture could be improved
by measures developed specifically to quantify behavior in the real world.
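A minimal sketch of baseline-deviation alerting on passively captured data might look like the following. The window size, 3-sigma threshold and step counts are arbitrary assumptions for illustration, not clinical recommendations:

```python
import statistics

def check_alert(history, new_value, window=30, z_threshold=3.0):
    """Flag a new observation that deviates from the patient's own
    recent baseline by more than z_threshold standard deviations."""
    baseline = history[-window:]          # most recent observations only
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    if sd == 0:                           # perfectly flat baseline: no z-score
        return False
    return abs((new_value - mean) / sd) > z_threshold

# Invented stable baseline of roughly 8000 daily steps.
steps = [8000, 8200, 7900, 8100, 8050, 7950, 8150, 8000]
print(check_alert(steps, 8100))   # within the baseline -> False
print(check_alert(steps, 2000))   # sudden marked drop  -> True
```

Comparing each new value to the patient's own rolling baseline, rather than to population norms, is what makes within-patient monitoring of this kind possible.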
Points of consideration for continual data-capture:
- Some devices (such as tablets) may be used by multiple users. Confirming the identity of the user may be
a problem.
- More knowledge is needed on how to link continuously captured data to analog measures.
- Patients will have to give informed consent.
Barriers to technology integration
Clinicians have concerns about (new) digital assessments. For example, about the quantity and quality of
validation data. Clinicians need to be convinced of the benefits of digital assessments over analog assessments.
Clinicians might worry that technology will take over their job. However, as more digital data becomes
available, there will be an even greater need for clinicians who are skilled at interpreting that data. Another
concern might be of a financial nature: buying the hardware and software and educating the clinicians will cost
money. Clinicians also might feel that technology will result in a loss of qualitative behavioral data, because in-lab
assessments allow for in vivo observations. However, digital devices often have a built-in camera that would allow
for live behavioral observation. A last concern is that some patients or patient groups might not feel comfortable
with, or might feel intimidated by, technology. Within the next decade, this is not likely to remain a concern. Proper
education prior to the assessment (e.g., video tutorials or spoken instructions) should help people who claim
not to be good with computers.
------------------------------------------------------------------------------------------------------------------
Article: Applications of technology in neuropsychological assessment
Authors: Parsey & Schmitter-Edgecombe
Published: 2013
Introduction
In the last decade, researchers and clinicians have started using technology to improve the reliability, efficiency
and cost-effectiveness of neuropsychological assessment. Society also uses more technology than it did one or
two decades ago; for example, millennials rarely spend a day without their phone and/or laptop. Familiarity with
technology will improve performance on computerized testing.
Parsey and Schmitter-Edgecombe conducted a review to assess the utility of technology in neuropsychology and
suggested future directions of research. They included 108 studies in their review.
Computer-based cognitive assessment
Computer-based assessment = any instrument that utilizes a computer, digital tablet, handheld device, or other
digital interface instead of a human examiner to administer, score, or interpret tests of brain function and related
factors relevant to questions of neurologic health and illness (Bauer et al., 2012, p. 2).
Computer-based assessment has been used in the military and sport psychology since the 1980s, to assess
cognitive functioning and to assess mild traumatic brain injury. In these fields, computer-based assessment has
been generally accepted. In the field of neuropsychology, most clinicians use at least one computer-based
assessment measure. Computerized measures of executive skills and higher-order functioning show similar (if
not improved) reliability compared to analog tests. Normative data from analog tests cannot be applied to
computerized versions of those tests.
There is a gap in the application of technology: many clinicians are hesitant to adopt new technology, even
though it offers additional cognitive and behavioral information (and other benefits). Technology offers
more accurate measurement of time sensitive parameters (e.g., reaction time or inspection time) and offers the
use of algorithms that can lead to adaptive testing (algorithm selects future test items based on prior
performance). Algorithms can also be used to detect characteristic deficits/complaints of specific disorders, thus
assisting the neuropsychologist in making a diagnosis.
Passive sensor monitoring could provide additional information about daily functioning that is not subject to self-
report bias; subtle changes might go unnoticed if not measured by a sensor.
Cognitive assessment using virtual reality
Virtual reality (VR) encompasses a wide range of technologies and devices for manipulating objects in a virtual
(usually 3D) space and time. VR was first used as a digital version of traditional paper-and-pencil tasks, although
it can offer an environment that better represents everyday life. A VR version of the Wisconsin Card Sorting Test
(WCST) led to poorer performance, but the patients enjoyed the VR version more than the traditional version. VR
may have provided more distractions; a benefit of this is that it improves the ecological validity of the VR version: it
is better at measuring cognitive abilities with real-world interruptions. Adding controlled distractions to testing
might improve ecological validity even more. Including these controlled distractions might give researchers a
better understanding of the influence of external stimuli on cognitive performance in real-world scenarios. These
simulated external stimuli could be used to learn more about distractibility and attentional lapses. For example,
this is used in attention deficit research in which children have to complete standardized neuropsychological tests
in a virtual classroom with distractions. Research has shown that this form of testing better classified children with
ADHD.
Virtual simulations of everyday tasks (e.g., driving a car or doing the groceries) provide a safe and controlled
environment to assess functional capabilities.
- VR driving simulators have been used to study driving in clinical populations (e.g., dementia, brain injury
and spinal cord injury). There is still discussion about what conclusions can and cannot be drawn from VR
driving simulators: basic knowledge about driving abilities from VR should be combined with other
assessments.
- Virtual Multiple Errands Tests (VMET) were developed to assess patients with frontal lobe lesions. The participant
navigates through a virtual supermarket to obtain items from a given shopping list. Simultaneously, the
participant has to obtain information and obey rules. These tests are useful to evaluate performance of a
complex task in a safe and controlled setting.
- A computer-simulated kitchen has also been used, in which the participant had to prepare a meal. This
assessment showed satisfactory construct validity and test-retest reliability.
Important note: one should not just assume that a VR version of an analog test measures exactly the same
cognitive constructs as the analog version, because digitizing an analog test can alter the nature of the task.
Strengths of computer-based cognitive assessment
- Current computerized assessments show comparable reliability and validity to analog measures, when
used with appropriate normative data.
- Algorithm design (e.g., adaptive item selection and automated detection of characteristic deficits).
- Increased ease of administration and standardization (e.g., fewer errors in scoring and interpretation).
- VR provides the possibility to customize the virtual environment to specific target populations and to
influence environmental stimuli (improved ecological validity).
- Additional cognitive and behavioral information can be obtained and data can be obtained more precisely,
compared to paper-and-pencil tasks.
- Computer-based assessment could improve inter-rater reliability.
- Computer-based assessment enables longitudinal monitoring of daily activity performance.
Weaknesses of computer-based cognitive assessment
- Variations in computer hardware (e.g., speed of a computer). Equipment can vary by laboratory or user.
- Limited information on psychometric and normative properties for different clinical populations.
- Internal and ecological validity, test-retest reliability and utility for various populations need to be assessed
and improved.
- Physiological concerns about VR use (e.g., motion sickness). The influence of motion sickness on
cognitive performance has yet to be studied.
- Using VR technology is new for most patients and the novelty might alter behavior.
- VR is subject to great individual variability (e.g., differences in computer experience, learning and
adaptation, enjoyability of the experience).
- VR is a relatively high-cost form of assessment.