I-O psychologists’ passion projects: Can AI predict your personality in a job interview?

We continue our exploration of the field of Industrial-Organizational (I-O) Psychology, the American Psychological Association’s Division 14. I-O Psychology is the study of behavior in the workplace. I-O Psychologists frequently help businesses better hire, motivate, and retain employees. But they can apply their skills in many other ways.

We continue interviewing I-O Psychologists about their passion projects to show you how these individuals are applying their training to enact positive societal change. If you missed the earlier posts of this series, we encourage you to also read about the passion projects of Dr. Haley Woznyj, Dr. Ann Marie Ryan, and doctoral student Mike Morrison.

Meet Dr. Louis Tay, an Associate Professor in Purdue University’s Department of Psychological Sciences, and his graduate student, Louis Hickman. Louis Tay received his Ph.D. in I-O Psychology from the University of Illinois at Urbana-Champaign in 2011. At Purdue, he pursues cross-disciplinary research with the goal of improving psychological measurement techniques. He is also developing science-based well-being programs and policies for organizations and societies more broadly.

Louis Hickman received an M.S. in Computer and Information Technology with a specialization in Natural Language Processing from Purdue. He is currently an I-O Psychology doctoral candidate at Purdue, working with Louis Tay on several research projects at the intersection of psychology and Artificial Intelligence (AI).

One of their projects investigates how well computer algorithms can infer job candidates’ personality characteristics from asynchronous (or one-way) video interviews. Asynchronous interviews remove the live interviewer present in traditional interviews: the candidate records themselves answering a set of interview questions while speaking into a computer or phone camera.

According to a recent Washington Post article, in the last five years, more than 700 companies have used AI to evaluate close to 12 million video interviews automatically. Providers of this type of technology promise to greatly reduce the amount of time it takes an employer to screen candidates (by automating the interview) and to recommend the best hires—for example, by identifying the candidates whose personalities most closely resemble those of the company’s current star performers.

This trend has outpaced related psychological research. Louis Tay and Louis Hickman saw an urgent need for research that would evaluate the accuracy and fairness with which AI can judge job candidates’ personality. Together with several colleagues, they applied for and received a National Science Foundation (NSF) grant to do precisely that.

Juliya: Louis T., can you give us a brief overview of your project?

Louis Tay: Sure. We want to determine how accurately video interviews capture individuals’ personality and whether they are reliable over multiple occasions—that is, are personality scores similar if the same individual is interviewed more than once? We also need to determine whether they can predict job performance and whether they may be biased against particular demographic groups.

Juliya: How did you become involved in this research? What about it appealed to you?

Louis Tay: Our lab initially became interested in the use of automated video interviews to infer personality traits a couple of years ago when we were asked to consult for a company on this issue. We were surprised that many organizations were moving toward automated job interviews to infer interviewee characteristics. In the meantime, I-O psychological research hadn’t caught up in demonstrating the scientific basis of such an approach.

This research area really appeals to me because it combines multiple areas that I’m passionate about: technology, measurement, machine learning, and individual differences.

Louis Hickman: Early in 2018, several graduate students told me that they’d recently completed asynchronous/one-way video interviews as part of their job applications. I looked up the platforms they used and saw that these companies were marketing a tool for automatically assessing applicants in video interviews.

This was surprising because I knew of no psychological research showing such approaches to be reliable or valid. I suggested to Dr. Tay that we should research automated video interviews, and the timing was serendipitous because, as he mentioned, he had been contacted around the same time about consulting in this area. I saw this as a way to capitalize on my unique background in Computer and Information Technology while contributing to research in an emerging I-O psychology topic. Also, it is really exciting to be working on a project that intrigues nearly everyone.

Juliya: What resources does it take to undertake a project like this?

Louis Tay: A project like this has a lot of moving parts. First, we have to develop realistic job interview questions of the kind typically used to infer job applicant personality. Next, we have to collect video interview data from a large number of individuals. We then train raters and have them watch the interviews and rate the interviewees’ personality. These personality ratings serve as the “ground truth” for training and testing the machine learning algorithms.
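
To make the “ground truth” step concrete, here is a minimal, hypothetical sketch (not the team’s actual code) of how multiple trained raters’ scores might be averaged into a single label per interviewee before model training:

    import numpy as np

    # Rows are interviewees, columns are trained raters (hypothetical 1-5
    # extraversion scores given after watching each video interview).
    extraversion_ratings = np.array([
        [4, 5, 4],
        [2, 3, 2],
        [3, 3, 4],
    ])

    # Average across raters to get one "ground truth" score per interviewee;
    # these averages would serve as targets for training and testing models.
    labels = extraversion_ratings.mean(axis=1)
    print(labels)  # e.g., [4.33, 2.33, 3.33]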

Our team comprises multiple experts. We have I-O psychologists with expertise in personality and measurement (Dr. Sang Eun Woo and myself). We also have a computer scientist with expertise in the computational modeling of socio-affective-cognitive processes from video data (Dr. Sidney D’Mello). Finally, we have Louis Hickman working on the data collection, personality ratings, and building and testing of the predictive models.

Juliya: Louis H., can you tell us more about your role in this research?

Louis Hickman: I was fortunate to be able to lead some of the early consulting we did in this area, and then I designed our initial projects for collecting hundreds of mock video interviews. We used our insights from the consulting and early data collection to help write the NSF proposal.

This work has been a great opportunity to gain more experience in designing and managing data collection efforts, creating and conducting rater frame of reference training, text mining, training and testing machine learning models, and validating selection procedures.

Louis Tay: I would add that graduate students are absolutely essential in research projects, and they have excellent insights. Many projects I undertake are driven by the interests of doctoral students. If doctoral students are not passionate about the work they do, it becomes onerous drudgery. Louis Hickman is very interested in this work and has been a key member of the team. I see him quickly developing into a leading scholar in this field and I am personally learning a lot from him.

Juliya: How has your training as I-O Psychologists prepared you for this project? Did you need to pick up additional skills for this work?

Louis Tay: Our I-O Psychology training is foundational for the work that we do. I-O Psychologists have expertise in the scientific knowledge and principles of selecting individuals for jobs and the validation of selection procedures to show that they’re appropriate. We are also well aware of the legal issues surrounding the use of selection tools and procedures.

Despite the pace with which AI and machine learning are advancing, the new selection procedures this technology enables still need to be evaluated in the same manner as traditional procedures like personality tests and structured interviews.

Other technical skills that are needed for this work include understanding the types of features—like words and emotions—that can be extracted from video data and what their limitations are. We also need to learn about how machine learning algorithms are typically trained and where biases may occur.
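As an illustration of the word-based features Dr. Tay mentions, here is a minimal sketch, assuming the interview audio has already been transcribed to text; the transcripts and the scikit-learn setup are hypothetical, not the project’s actual pipeline:

    from sklearn.feature_extraction.text import TfidfVectorizer

    # Hypothetical interview transcripts (in practice, produced by speech-to-text).
    transcripts = [
        "I enjoy collaborating with teammates and taking on new challenges.",
        "I prefer to plan my work carefully and double-check every detail.",
    ]

    # Convert each transcript into a vector of TF-IDF-weighted word counts,
    # one simple example of the "words" features that can feed a model.
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(transcripts)

    print(X.shape)                                 # (number of transcripts, vocabulary size)
    print(vectorizer.get_feature_names_out()[:5])  # a few of the extracted word features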

Louis Hickman: Employee selection is a topic that interested me very much during my coursework, as it affects both organizational and societal outcomes. We use the body of knowledge I-O has developed in the last century to inform the design of our mock interviews, rater training, and validation process. Perhaps most importantly, the body of work on selection procedures’ reliability and validity guides us in examining whether automated video interviews can help organizations make better selection decisions.

Machine learning is becoming very important in many fields, including psychology. Many psychologists think they don’t have the skillset to understand or use machine learning in their research. But machine learning essentially involves describing dataset A with a regression equation and then using that regression equation to try to predict outcomes in dataset B. So we can use our basic statistical training and go one step further.
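
Hickman’s description maps directly onto a standard train/test workflow. Here is a minimal sketch with randomly generated, purely illustrative data: fit a regression on dataset A, then see how well it predicts dataset B:

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    # Illustrative synthetic data: 500 "interviewees", 20 interview-derived
    # features, and a rated personality score partly driven by the first feature.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(500, 20))
    y = 0.5 * X[:, 0] + rng.normal(size=500)

    # Split into dataset A (for fitting) and dataset B (held out for prediction).
    X_a, X_b, y_a, y_b = train_test_split(X, y, test_size=0.3, random_state=0)

    # "Describe dataset A with a regression equation" ...
    model = Ridge(alpha=1.0).fit(X_a, y_a)

    # ... "then use that regression equation to predict outcomes in dataset B."
    print(model.score(X_b, y_b))  # R-squared on the held-out data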

The extensive machine learning work in this project has expanded my statistical knowledge and R coding skills. I’ve also gained a much deeper understanding of employment interviews, personality traits, and validation approaches.

Despite how much I’ve already learned during this project, I keep discovering additional things that I feel I need to know. Because the project touches on so many topics, it represents a great opportunity to learn all these things in much greater depth than possible from coursework alone.

Juliya: What kind of practical outcomes do you hope to see from your work?

Louis Tay: A lot of companies are eager to use automated video interviews because of cost-savings, convenience, etc. However, I believe that the science of these automated video interviews has not been well-established. I hope that companies will exercise caution when considering automated video interviews and ask for scientific evidence. This includes:

  1. How accurately do the automated scores predict personality ratings obtained from people’s self-reports or interviewer judgments?
  2. When someone is interviewed on multiple occasions, are the personality scores produced similar over time?
  3. To what extent do personality scores from automated interviews truly predict how well individuals will perform on the job?
  4. What are the demographics of the interviewee samples on which the algorithms are trained? Are these samples representative of all applicants for a given position?
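
Each type of evidence in this list can be summarized with familiar statistics. Here is a rough, hypothetical sketch (simulated numbers, not project data) of how convergence with self-reports, test-retest reliability, and criterion-related validity might each be expressed as a correlation:

    import numpy as np
    from scipy.stats import pearsonr

    # Simulated scores for 200 hypothetical interviewees.
    rng = np.random.default_rng(7)
    n = 200
    self_report     = rng.normal(size=n)                      # self-reported trait scores
    algo_time1      = 0.6 * self_report + rng.normal(size=n)  # algorithm scores, first interview
    algo_time2      = 0.8 * algo_time1 + rng.normal(size=n)   # algorithm scores, repeat interview
    job_performance = 0.3 * self_report + rng.normal(size=n)  # later performance ratings

    print("Convergence with self-reports:", pearsonr(algo_time1, self_report)[0])   # question 1
    print("Test-retest reliability:", pearsonr(algo_time1, algo_time2)[0])          # question 2
    print("Criterion-related validity:", pearsonr(algo_time1, job_performance)[0])  # question 3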

I believe that there may be instances where automated video interviews can be effective, but it may depend on the type of characteristic we seek to predict. For example, video interviews might be better able to predict extraversion than conscientiousness.

Also, we need more evidence to guide their use in selection: are automated video interviews useful for predicting performance on all types of jobs or just some? Further, due to legal considerations and to promote fairness, we need to ensure that any potential biases in algorithms are proactively examined and addressed.
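
The project’s own bias analyses aren’t detailed here, but one common first screen I-O psychologists apply to any selection procedure is the four-fifths rule from the EEOC’s Uniform Guidelines, which compares selection rates across demographic groups. A minimal sketch with hypothetical counts:

    # Hypothetical numbers of applicants and of candidates the algorithm recommends.
    applied  = {"group_a": 100, "group_b": 80}
    selected = {"group_a": 40,  "group_b": 22}

    # Selection rate for each group, and the ratio of the lowest to the highest rate.
    rates = {g: selected[g] / applied[g] for g in applied}
    impact_ratio = min(rates.values()) / max(rates.values())

    print(rates)         # {'group_a': 0.4, 'group_b': 0.275}
    print(impact_ratio)  # values below 0.8 flag potential adverse impact for review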

Juliya: Is there anything else you’d like to share to help convey the importance of your work or to highlight more generally the impact I-Os can make?

Louis Tay: All organizations seek to find talented workers from a diverse group of individuals. We need to use psychological research to understand how the newest available tools can best find and select talent while not discriminating against any specific group. In doing so, we can ensure societal fairness and equal opportunity for talented individuals regardless of their demographic background.

AI and machine learning algorithms are promising approaches to improve the efficiency of selection. However, I-O Psychologists have a significant role to play in understanding their utility and making these approaches more transparent to society.

Louis Hickman: Adults spend about a third of their lives working, and whatever happens at work has a tendency to spill over into their health and personal lives as well. This makes I-O Psychology—the study of behavior at work—incredibly meaningful.

This project specifically is very important because the technologies we’re studying are already being adopted by organizations, yet little evidence exists to suggest that they are reliable, valid, or unbiased. As a result, senators have raised concerns to the Equal Employment Opportunity Commission, and a consumer advocacy group has filed a complaint with the Federal Trade Commission stating that such approaches may be discriminatory.

If these approaches do indeed discriminate against minorities, women, or any other group, then they could be causing societal harm. On the other hand, if they can reliably and validly assess job applicants, they may help organizations and society more broadly by reducing time to hire and improving selection outcomes. I hope our project will help provide answers to these questions.

Want to learn more about the field of I-O psychology? Read our earlier posts for an overview, to find out other ways I-O psychologists have given back to society, and to discover your dream job in the field.

About the Author

Juliya graduated from Michigan State University’s Organizational Psychology doctorate program. At Indeed, she develops assessments with the goal of helping people find jobs. In her free time, she volunteers with a local animal rescue.