Abstract
Whether an interviewee’s honest and deceptive responses can be detected from facial-expression signals in videos has been debated and calls for further research. We developed deep learning models, enabled by computer vision, that extract the temporal patterns of job applicants’ facial expressions and head movements to identify self-reported honest and deceptive impression management (IM) tactics from video frames in real asynchronous video interviews. A 12- to 15-min video was recorded for each of the *N* = 121 job applicants as they answered five structured behavioral interview questions. Each applicant completed a survey to self-evaluate their trustworthiness on four IM measures. Additionally, a field experiment was conducted to compare the concurrent validity associated with self-reported IMs between our models and human interviewers: *N* = 30 human interviewers each evaluated three recordings drawn from another subset of 30 videos, and their performance in predicting the IM measures was obtained from these evaluations. Our models explained 91% and 84% of the variance in honest and deceptive IMs, respectively, and showed a stronger correlation with self-reported IM scores than the human interviewers did.
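The keywords below point to FaceMesh landmark extraction and LSTM-based sequence modeling. The following is a minimal sketch of such a pipeline, not the authors’ released code: the video file name, layer sizes, and the four-score output head are assumptions, and the head-movement and 3D-CNN branches mentioned elsewhere in the keywords are omitted.

```python
# Sketch: per-frame MediaPipe FaceMesh landmarks -> LSTM regressor over the
# temporal sequence -> predicted IM scores (4 outputs assumed).
import cv2
import mediapipe as mp
import numpy as np
import torch
import torch.nn as nn


def extract_landmark_sequence(video_path: str) -> np.ndarray:
    """Return an array of shape (num_frames, 468 * 3) of FaceMesh landmarks."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=False,
                                         max_num_faces=1) as face_mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_face_landmarks:
                lm = result.multi_face_landmarks[0].landmark
                frames.append([c for p in lm for c in (p.x, p.y, p.z)])
    cap.release()
    return np.asarray(frames, dtype=np.float32)


class IMRegressor(nn.Module):
    """LSTM over the landmark sequence; the 4-dimensional output is an assumption."""

    def __init__(self, n_landmarks: int = 468, hidden: int = 128, n_scores: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(n_landmarks * 3, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_scores)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, (h_n, _) = self.lstm(x)   # final hidden state summarizes the sequence
        return self.head(h_n[-1])    # (batch, n_scores)


if __name__ == "__main__":
    seq = extract_landmark_sequence("applicant_interview.mp4")  # hypothetical file
    model = IMRegressor()
    scores = model(torch.from_numpy(seq).unsqueeze(0))          # shape (1, 4)
    print(scores)
```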
| Original language | English |
| --- | --- |
| Pages (from-to) | 1-12 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Computational Social Systems |
| DOIs | |
| Publication status | Accepted/In press - 2024 |
Keywords
- 3-D convolutional neural network (3D-CNN)
- Artificial intelligence
- Computer vision
- Detectors
- Employment
- FaceMesh
- Feature extraction
- Head
- Interviews
- affective computing
- applicant faking
- emotion sensing
- long short-term memory (LSTM)
ASJC Scopus subject areas
- Modelling and Simulation
- Social Sciences (miscellaneous)
- Human-Computer Interaction