1.7 Problem 2
Selective Selection
Three domains of job performance:
1. Task performance- proficiency with which employees perform key activities
relevant to the job
Divided into: individual performance; work-team performance
2. Contextual performance- contributions of the employee to the organisational, social, and
psychological environment that help accomplish the organisation's goals
3. Counterproductive work behaviours- intentional behaviour by an employee that the
organisation views as contrary to its legitimate interests
Divided into: interpersonal (e.g. bullying); organisational (e.g. theft, absence)
Two main principles underlying personnel selection and assessment procedures:
1. Individual differences in people's skills, abilities, and characteristics
Not all people are equally suited to all jobs, so matching people to jobs can be important
2. Future behaviour is partly predictable: the goal of selection/assessment activities is to
match people to jobs and to predict the future job performance of individuals
When choosing a selection process, look at validity, reliability, and adverse impact
Validation Process
Predictors- pieces of evidence concerning current or past performance of candidates
used to decide whether to offer the job
Criterion- what you're trying to predict using the predictors
This process establishes how effective each part of the selection process is
Criterion-Related Validity
Criterion-related validity- strength of the relationship between predictor and criterion
E.g. how well a personality test score predicts whether an individual will be outstanding
in their job
High validity: predictor and criterion scores correspond- candidates with high predictor
scores obtain high criterion scores, and candidates with low predictor scores obtain low
criterion scores
Predictive validity- how well a score on a test/measure predicts scores on a criterion
measure collected later
Concurrent validity- how well the results of a test correlate with established measures of
the same construct collected at (roughly) the same time
Performance measures
Recruiters choose between a non-compensatory approach (to selection)- the applicant must
reach the required standard on every assessment criterion- and a compensatory approach-
high scores on some criteria can make up for low scores on others
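A minimal sketch of the two decision rules, using hypothetical scores, cut-offs, and weights (none of these numbers come from the notes):

```python
# Minimal sketch (hypothetical data): comparing a non-compensatory decision rule
# (every criterion must reach its cut-off) with a compensatory rule (an overall
# weighted score must reach an overall cut-off).
scores = {"cognitive_test": 62, "interview": 55, "work_sample": 80}   # one applicant
cutoffs = {"cognitive_test": 60, "interview": 60, "work_sample": 60}  # per-criterion minimums
weights = {"cognitive_test": 0.4, "interview": 0.3, "work_sample": 0.3}
overall_cutoff = 60

# Non-compensatory: reject if any single criterion falls below its cut-off.
non_compensatory_pass = all(scores[c] >= cutoffs[c] for c in scores)

# Compensatory: a weighted average lets a strong work sample offset a weak interview.
overall = sum(weights[c] * scores[c] for c in scores)
compensatory_pass = overall >= overall_cutoff

print(non_compensatory_pass, round(overall, 1), compensatory_pass)  # False 65.3 True
```

Here the same applicant fails the non-compensatory rule (the interview score is below its cut-off) but passes the compensatory rule, because the strong work sample offsets the weak interview.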
Other Validity Types
Faith validity- belief that a selection method is valid because it is sold by a reputable
company and packaged in an expensive-looking manner
X- Cook: money spent on appearance may mean less money spent on researching/developing
the instrument: look into the instrument's supporting data before accepting its validity
Content validity- whether the test covers the relevant content of the job, judged logically
rather than through technical statistical procedures
E.g. a sailor tested on all sailing skills/boat knowledge = high content validity
Face validity- when selection tests/procedures “look right”
E.g. asking an artist candidate for a portfolio as opposed to an IQ test
- Candidates are more likely to think the selection is fair if they believe the selection
process seems relevant to the job role
Construct validity- identifying the psychological characteristics that underlie successful
performance of the specific job (e.g. intelligence, emotional stability)
Must be measured by indirect means, often by comparing a new construct measure
with a long-standing measure of the same construct. E.g. a new measure of
neuroticism should give similar results for an individual as a long-standing
measure of neuroticism
Convergent validity- how much two measures of constructs that should theoretically be
related are in fact related
Discriminant validity- whether measures of concepts that are not supposed to be related
are indeed unrelated (they should show low correlation)
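A minimal sketch of the convergent/discriminant idea, using hypothetical questionnaire scores; the "unrelated construct" is just an illustrative placeholder, not something named in the notes.

```python
# Minimal sketch (hypothetical data): checking convergent and discriminant validity
# of a new neuroticism questionnaire by correlating it with an established
# neuroticism measure (should correlate highly) and with a measure of an
# unrelated construct (should correlate weakly).
import numpy as np

new_neuroticism = np.array([12, 18, 9, 22, 15, 20, 11, 17])
established_neuroticism = np.array([14, 19, 10, 24, 16, 21, 12, 18])
unrelated_construct = np.array([21, 25, 19, 17, 28, 22, 16, 24])

convergent_r = np.corrcoef(new_neuroticism, established_neuroticism)[0, 1]
discriminant_r = np.corrcoef(new_neuroticism, unrelated_construct)[0, 1]

print(f"Convergent validity:   r = {convergent_r:.2f}  (expect high)")
print(f"Discriminant validity: r = {discriminant_r:.2f}  (expect low)")
```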
Incremental validity- how much adding another predictor increases the predictive
power of the selection process
E.g. is the process greatly improved by adding a psychometric test?
Allows for a cost-benefit analysis
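One common way to quantify incremental validity is the gain in explained variance (R²) when the extra predictor is added to a regression model. A minimal sketch with hypothetical data; the variable names are illustrative.

```python
# Minimal sketch (hypothetical data): incremental validity as the gain in R-squared
# when a second predictor (a psychometric test) is added to a model that already
# uses an interview score to predict job performance.
import numpy as np

interview = np.array([6.0, 7.5, 5.0, 8.0, 6.5, 7.0, 5.5, 8.5])
psych_test = np.array([48, 62, 40, 66, 55, 58, 44, 70])
performance = np.array([3.2, 3.9, 2.8, 4.3, 3.5, 3.8, 3.0, 4.5])

def r_squared(X, y):
    """Fit ordinary least squares with an intercept and return R-squared."""
    X = np.column_stack([np.ones(len(y)), X])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coefs
    return 1 - residuals.var() / y.var()

r2_interview_only = r_squared(interview.reshape(-1, 1), performance)
r2_both = r_squared(np.column_stack([interview, psych_test]), performance)

print(f"R^2 interview only:   {r2_interview_only:.2f}")
print(f"R^2 interview + test: {r2_both:.2f}")
print(f"Incremental validity: {r2_both - r2_interview_only:.2f}")
```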
Reliability
Selection instrument reliability- extent to which the instrument measures
consistently under varying conditions
Types of reliability:
External reliability:
Test-retest reliability- participants are administered the same test on two
separate occasions, with a significant time lag between administrations, and the
test results are similar on both
o Assumes the construct being measured is stable and the testing conditions are
the same on both occasions
Interrater reliability- different raters produce the same ratings, e.g. both
interviewers reach the same opinion about a candidate
Parallel forms- test developers design two tests of equivalent difficulty
and similar content
E.g. from 50 questions, use 25 in form A and 25 in form B
o External reliability is the consistency between the two forms/scales
Internal reliability:
o Statistical methods/formulae
o How much different parts of the same measure produce results consistent with
each other
Cronbach’s coefficient alpha- a value of 0.7 or above is generally taken as acceptable
Split-half method- examine the association between scores on the two halves of
a test
KR-20- used for questions with right or wrong answers (dichotomously scored items)
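A minimal sketch of two internal-reliability checks on a hypothetical 4-item scale; the data and the even/odd split are illustrative.

```python
# Minimal sketch (hypothetical item scores): internal reliability via Cronbach's
# alpha for a 4-item scale, plus a simple split-half correlation. Rows are
# respondents, columns are items.
import numpy as np

items = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 3],
    [1, 2, 2, 1],
])

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)       # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)   # variance of the total score
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Split-half: correlate the score on one half of the items with the other half.
half_a = items[:, ::2].sum(axis=1)   # items 1 and 3
half_b = items[:, 1::2].sum(axis=1)  # items 2 and 4
split_half_r = np.corrcoef(half_a, half_b)[0, 1]

print(f"Cronbach's alpha: {alpha:.2f}  (0.7 or above usually acceptable)")
print(f"Split-half r:     {split_half_r:.2f}")
```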
Adverse impact
Adverse impact- systematic differences in the assessment of candidates belonging to a
legally protected group, arising from hiring or employment practices that appear neutral
but have a discriminatory effect on that group
Some tests are valid yet biased against some subgroups, e.g. if the relationship
between test score and job performance isn't consistent across two subgroups
Unfair direct discrimination- selection process treats individual less favourably
because of their gender or ethnic group
Indirect discrimination is often unintended and difficult to prove
o E.g. an employer applies a requirement that is harder for one group to comply
with
Differential item functioning- a significant difference in item difficulty across different
ethnic groups
Studies have shown significant ethnic subgroup differences in cognitive ability tests,
favouring White candidates over Black and Hispanic candidates
Assessment centres that use many selection methods minimise the impact of bias in
any one component
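A common rule-of-thumb check for adverse impact (a widely used convention, not part of these notes) compares selection rates across groups and flags a problem when one group's rate falls below four-fifths (80%) of the highest group's rate. A minimal sketch with hypothetical applicant and hire counts:

```python
# Minimal sketch (hypothetical counts): compare selection rates across groups and
# flag a group whose rate is below 80% of the highest group's rate.
applicants = {"group_a": 200, "group_b": 120}
hired = {"group_a": 60, "group_b": 18}

rates = {g: hired[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio to highest {ratio:.2f} -> {flag}")
```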
Job Analysis
Produces information about jobs, which is used in the job description for a role
Many techniques/procedures:
o Job-oriented J.A. procedures- focused on the work itself, describing equipment
used, end results/purpose of job, resources, and materials used
o Worker-oriented J.A. procedures- focus on psychological/behavioural
requirements of job
o Task oriented J.A.- task, equipment and end result of the job
o Future-oriented J.A.- used in newly created job roles with no pre-existing job
descriptions: focus on knowledge, skills, abilities associated with these new roles
o Traditional J.A. in personnel- fitting person to the job: becoming outdated