TY - GEN
T1 - MultiModal Deception Detection
T2 - 2nd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2020
AU - Belavadi, Vibha
AU - Zhou, Yan
AU - Bakdash, Jonathan Z.
AU - Kantarcioglu, Murat
AU - Krawczyk, Daniel C.
AU - Nguyen, Linda
AU - Rakic, Jelena
AU - Thuraisingham, Bhavani
N1 - Funding Information:
The research reported herein was supported in part by NIH award 1R01HG006844, NSF awards CICI-1547324, IIS-1633331, CNS-1837627, OAC-1828467 and ARO award W911NF-17-1-0356. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army Combat Capabilities Development Command Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation.
Publisher Copyright:
© 2020 IEEE.
PY - 2020/10
Y1 - 2020/10
N2 - The increasing use of Artificial Intelligence (AI) systems for face recognition and video processing raises the stakes of their application in daily life. Increasingly, critical decisions are being made using these AI systems in domains such as employment, finance, and crime prevention. These applications rely on more abstract concepts such as emotions, trait evaluations (e.g., trustworthiness), and behavior (e.g., deception). These abstract concepts are learned by the AI system from verbal and non-verbal cues in the human subject stimuli (e.g., facial expressions, movements, audio, text) for inference. Because AI systems are often used in high-stakes scenarios, it is of utmost importance that an AI system participating in the decision-making process be highly reliable and credible. In this paper, we specifically consider the feasibility of using such an AI system for deception detection. We examine whether deception can be caught using multimodal cues such as facial expressions and movements, audio cues, and video cues. We experiment with three different datasets with varying degrees of deception to explore the problem of deception detection. We also study state-of-the-art deception detection systems and investigate whether their algorithms can be extended to new datasets. We conclude that there is a lack of reasonable evidence that AI-based deception detection generalizes over different scenarios of lying (lying deliberately, lying under duress, and lying through half-truths) and that additional factors will need to be considered in the future to make such a claim.
AB - The increasing use of Artificial Intelligence (AI) systems for face recognition and video processing raises the stakes of their application in daily life. Increasingly, critical decisions are being made using these AI systems in domains such as employment, finance, and crime prevention. These applications rely on more abstract concepts such as emotions, trait evaluations (e.g., trustworthiness), and behavior (e.g., deception). These abstract concepts are learned by the AI system from verbal and non-verbal cues in the human subject stimuli (e.g., facial expressions, movements, audio, text) for inference. Because AI systems are often used in high-stakes scenarios, it is of utmost importance that an AI system participating in the decision-making process be highly reliable and credible. In this paper, we specifically consider the feasibility of using such an AI system for deception detection. We examine whether deception can be caught using multimodal cues such as facial expressions and movements, audio cues, and video cues. We experiment with three different datasets with varying degrees of deception to explore the problem of deception detection. We also study state-of-the-art deception detection systems and investigate whether their algorithms can be extended to new datasets. We conclude that there is a lack of reasonable evidence that AI-based deception detection generalizes over different scenarios of lying (lying deliberately, lying under duress, and lying through half-truths) and that additional factors will need to be considered in the future to make such a claim.
KW - deception detection
KW - ethics
KW - facial expressions
KW - machine learning
KW - multi-modal data analysis
UR - http://www.scopus.com/inward/record.url?scp=85100426913&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85100426913&partnerID=8YFLogxK
U2 - 10.1109/TPS-ISA50397.2020.00023
DO - 10.1109/TPS-ISA50397.2020.00023
M3 - Conference contribution
AN - SCOPUS:85100426913
T3 - Proceedings - 2020 2nd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2020
SP - 99
EP - 106
BT - Proceedings - 2020 2nd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2020
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 1 December 2020 through 3 December 2020
ER -