How do I evaluate and select a psychological test?

  1. Make sure the reliability coefficient (r) is at least .70
    • r = .90 or above (excellent reliability);
    • r = .80 to .89 (good reliability);
    • r = .70 to .79 (adequate reliability);
    • when r < .70, we say that the test may have limited reliability, meaning that the same person could obtain noticeably different scores on different administrations of the test, even under the exact same conditions.
  2. Make sure the validity coefficient falls within the range of .20 to .40 (a minimal check of both coefficients is sketched after this list)
    • The validity coefficient measures how well the test measures what it is supposed to measure.
    • The validity coefficient, unlike reliability, rarely exceeds .40 because any one test alone is seldom sufficient to predict total job performance; several tests are usually necessary.
  3. Read through each item of the test and consider whether the test appears to measure what it is meant to measure. If yes, the test is said to have face validity. Face validity is important because those who take the test will be making a judgment on whether the test is appropriate for the job. If they judge that it is not, they may see the selection process as unfair and take their application to another organization.
  4. Check the test manual for the norm group and make sure the norm group culturally resembles the target test takers.
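
A minimal sketch of the coefficient checks in steps 1 and 2 (the scores, sample size, and variable names are hypothetical): the reliability coefficient can be estimated as the correlation between two administrations of the same test, and the validity coefficient as the correlation between test scores and a job performance criterion.

```python
from statistics import correlation  # requires Python 3.10+

# Hypothetical data: the same eight applicants take the test twice
# (test-retest), and their scores are compared with a later measure
# of job performance.
test_time_1 = [42, 35, 50, 28, 44, 39, 31, 47]
test_time_2 = [40, 37, 49, 30, 45, 36, 33, 46]
job_performance = [3.4, 3.1, 3.6, 2.9, 3.2, 3.5, 2.8, 3.3]

# Step 1: test-retest reliability coefficient (r).
reliability = correlation(test_time_1, test_time_2)
if reliability >= 0.90:
    label = "excellent reliability"
elif reliability >= 0.80:
    label = "good reliability"
elif reliability >= 0.70:
    label = "adequate reliability"
else:
    label = "limited reliability"
print(f"reliability r = {reliability:.2f} ({label})")

# Step 2: criterion-related validity coefficient.
validity = correlation(test_time_1, job_performance)
in_range = 0.20 <= validity <= 0.40
print(f"validity r = {validity:.2f} "
      f"({'within' if in_range else 'outside'} the typical .20-.40 range)")
```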

How do I organize an effective training programme?

  1. Complete a training needs assessment (TNA)
    1. This is to understand the capabilities of the current workforce and identify the areas which might require change
    2. The TNA should focus on three levels: the organization, the job, and the person.
      • These three levels include five components: organizational support, organizational analysis, job requirement analysis, task and KSAO analysis, and person analysis
  2. Define the goals and objectives of an organization's training program
  3. Select a provider (internal or external)
    1. Ensure they have a good track record in delivering the type of training you need
    2. Check for pricing and availability
  4. Evaluate the training after it has been delivered, to find out if it has had the desired effect
    1. There are five steps in carrying out a training evaluation:
      1. Define the criteria for evaluation (e.g., increased sales, improved customer service ratings, improved 360-degree feedback ratings)
      2. Design the study (how will you get the data and how accurate (valid) will the data be?)
      3. Choose a measure to assess the criteria (for example, a survey or a series of interviews)
      4. Collect data for the study
      5. Analyze and interpret the data (decide how you will analyze the data before you start the study: will it be qualitative or quantitative, and what is the best way to manage the analysis?). A simple pre/post comparison is sketched after this list.
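
As one hypothetical illustration of the five evaluation steps (the criterion, scores, and decision threshold below are made up for the example), a simple design compares the same criterion measure collected before and after the training:

```python
from statistics import mean

# Hypothetical criterion: customer service ratings (1-5) for the same
# employees before and after the training programme.
before = [3.1, 3.4, 2.8, 3.0, 3.6, 3.2, 2.9, 3.3]
after  = [3.5, 3.6, 3.1, 3.4, 3.7, 3.6, 3.0, 3.8]

# Steps 4-5: collect the data, then analyze and interpret it.
gains = [post - pre for pre, post in zip(before, after)]
print(f"mean before: {mean(before):.2f}")
print(f"mean after:  {mean(after):.2f}")
print(f"mean gain:   {mean(gains):.2f}")

# Interpretation rule chosen here for illustration only: treat an average
# gain of at least 0.3 rating points as evidence of the desired effect.
if mean(gains) >= 0.3:
    print("Training appears to have had the desired effect on this criterion.")
else:
    print("No meaningful improvement on this criterion; revisit the design.")
```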

What is an assessment center?

An Assessment Center (AC) is a procedure that uses multiple assessment tools to measure candidates' KSAOs from multiple perspectives. It can be used for selection or development purposes. The most commonly used exercises in an AC are the in-basket exercise, the leaderless group discussion (LGD), and the case analysis. Role-play exercises and interviews are sometimes included as well. Which exercises to include in an AC depends on which skills and behaviors you want to observe and evaluate.

How to operate an effective assessment center?

  1. Do a job analysis to identify important competencies. Based on these competencies, you will select the exercises to use
  2. Arrange for a number of (unbiased) raters to observe and evaluate the candidates. These could be professional psychologists or line managers, but the line managers should - ideally - not know the candidates.
  3. Train the observers on how to make observations and how to evaluate candidates' performance (this generally takes a full day). The assessor training should develop the seven assessor abilities noted below:
    1. Understanding the behavioral dimensions that will be used
    2. Observing the behavior of participants
    3. Categorizing participant behavior into the appropriate behavioral dimensions
    4. Judging the quality of participant behavior
    5. Determining the rating of participants on each behavioral dimension across the exercises
    6. Determining the overall evaluation of participants across all behavioral dimensions (see the aggregation sketch after this list)
    7. Providing performance feedback to the candidates
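
A minimal sketch of abilities 5 and 6 (the dimensions, exercises, and ratings are hypothetical): average each behavioral dimension across exercises, then average the dimension ratings into an overall assessment rating.

```python
from statistics import mean

# Hypothetical assessor ratings (1-5) for one candidate:
# {exercise: {behavioral dimension: rating}}
ratings = {
    "in-basket":  {"planning": 4, "communication": 3, "problem solving": 4},
    "LGD":        {"planning": 3, "communication": 4, "problem solving": 3},
    "case study": {"planning": 4, "communication": 3, "problem solving": 5},
}

# Ability 5: rating on each behavioral dimension across the exercises.
dimensions = {dim for scores in ratings.values() for dim in scores}
dimension_ratings = {
    dim: mean(scores[dim] for scores in ratings.values() if dim in scores)
    for dim in dimensions
}
for dim, score in sorted(dimension_ratings.items()):
    print(f"{dim}: {score:.2f}")

# Ability 6: overall evaluation across all behavioral dimensions.
overall = mean(dimension_ratings.values())
print(f"overall assessment rating: {overall:.2f}")
```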

What are the downsides of using assessment centers?

Potential problem: construct validity

Assessment Centers are sometimes criticized for their failure to demonstrate the pattern of correlations among dimension (competency) ratings that they are designed to produce. In an AC, each behavioral dimension is assessed in more than one exercise. The rating of each behavioral dimension should be similar across different exercises.

However, because each exercise is designed to assess more than one behavioral dimension, the scores for the different dimensions within one exercise should be relatively distinct from one another.

Unfortunately, research has shown that the correlations among scores on the same dimension across different exercises are very low (Sackett & Dreher, 1982), while the correlations among different dimensions measured within the same exercise turn out to be very high. For example, a candidate might be rated highly on all five competencies in a direct-report meeting simulation, but low on one of those competencies (such as strategic thinking) in another simulation, and medium in a third.
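
To make the pattern concrete, here is a small sketch with hypothetical ratings for six candidates. It contrasts the correlation the design intends to be high (the same dimension rated in two exercises) with the one it intends to be low (two dimensions rated within the same exercise); the made-up numbers mirror the finding that, in practice, the opposite pattern tends to emerge.

```python
from statistics import correlation  # requires Python 3.10+

# Hypothetical ratings (1-5) for six candidates.
planning_in_basket      = [4, 3, 5, 2, 4, 3]
communication_in_basket = [4, 3, 5, 2, 4, 4]  # same exercise, different dimension
planning_lgd            = [3, 4, 3, 3, 4, 4]  # different exercise, same dimension

# Convergent check: the same dimension across exercises *should* correlate highly.
print("planning, in-basket vs LGD:",
      round(correlation(planning_in_basket, planning_lgd), 2))

# Discriminant check: different dimensions within one exercise *should* correlate weakly.
print("in-basket, planning vs communication:",
      round(correlation(planning_in_basket, communication_in_basket), 2))

# With these illustrative numbers (as in Sackett & Dreher, 1982), the first
# correlation comes out low and the second high -- the reverse of the design intent.
```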

Despite these findings, ACs are found to be highly predictive of job performance across different occupations. As well, candidates tend to view ACs as more face valid than cognitive ability tests, and as a result they are often more satisfied with the selection process, the job, and the organization.

How to conduct a structured interview?

The golden rule of the structured interview is "past behavior is the best predictor of future behavior".

The following steps describe how to develop questions for a structured interview:

  1. Conduct a job analysis to identify the performance competencies and their importance weightings
  2. Develop interview questions together with probing questions for each competency
  3. Develop a rating scale for scoring each competency (a sample question-and-scale structure is sketched after this list)
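
As a concrete, purely hypothetical illustration of steps 2 and 3, each competency from the job analysis gets a past-behavior question, probing questions, and a behaviorally anchored rating scale:

```python
# Hypothetical interview guide: one entry per competency identified in the
# job analysis, each with a past-behavior question, probes, and rating anchors.
interview_guide = {
    "teamwork": {
        "weight": 0.4,  # importance weighting from the job analysis
        "question": "Tell me about a time you had to resolve a conflict in your team.",
        "probes": [
            "What exactly did you do?",
            "What was the outcome?",
        ],
        "rating_scale": {
            1: "Avoided the conflict or escalated it without trying to resolve it",
            3: "Addressed the conflict and reached a workable compromise",
            5: "Resolved the conflict and improved how the team worked together",
        },
    },
    "planning": {
        "weight": 0.6,
        "question": "Describe a project you planned from start to finish.",
        "probes": ["How did you set priorities?", "What would you do differently?"],
        "rating_scale": {
            1: "No clear plan; reacted to problems as they arose",
            3: "Reasonable plan, but milestones slipped",
            5: "Clear plan with milestones that were largely met",
        },
    },
}
```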

During the interview:

  1. Ask candidates about their past experience
  2. Probe their answers, using follow-up questions to ensure they provide complete information for each question.
  3. Take notes on what the candidates are saying for later rating
  4. Rate each candidate based on a systematic rating scale
  5. Make a final decision according to the ratings (see the weighted-scoring sketch after this list)
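
A minimal sketch of steps 4 and 5 (candidate names, ratings, and importance weightings are hypothetical): combine each candidate's competency ratings using the weightings from the job analysis and rank the candidates.

```python
# Importance weightings from the job analysis (hypothetical).
weights = {"teamwork": 0.4, "planning": 0.6}

# Competency ratings (1-5) given by the interviewer for each candidate.
candidates = {
    "Candidate A": {"teamwork": 4, "planning": 3},
    "Candidate B": {"teamwork": 3, "planning": 5},
}

# Steps 4-5: compute a weighted score per candidate, then rank them.
scores = {
    name: sum(weights[comp] * rating for comp, rating in ratings.items())
    for name, ratings in candidates.items()
}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")
```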