With AI tools, human voices can be faked in seconds, instantly creating opportunities for deception, fraud, and misinformation. Scientists typically try to detect fake audio by refining AI algorithms to catch it, but adversaries can then use that same technology to generate better deepfakes. Our novel approach applies insights from sociolinguistics, the study of human language in society, to train listeners to better discern fake audio and to improve the science of deepfake detection.

Can you catch a deepfake? Look at each image in the exhibit, listen to the corresponding clip on this page, and make your best guess!
What did you think? When you are ready, click the link to read the explanation.
For more information on our AI methodology and linguistic features, see our paper:
Z. Khanjani, L. Davis, A. Tuz, K. Nwosu, C. Mallinson, and V. P. Janeja (corresponding author), "Learning to Listen and Listening to Learn: Spoofed Audio Detection Through Linguistic Data Augmentation," 2023 IEEE International Conference on Intelligence and Security Informatics (ISI), Charlotte, NC, USA, 2023, pp. 1–6, doi: 10.1109/ISI58743.2023.10297267.
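To give a rough sense of what "linguistic data augmentation" can mean in practice, here is a minimal sketch in Python using scikit-learn. It is not the code from our paper: the acoustic embeddings, the four binary linguistic flags, and the synthetic labels are all placeholder assumptions. It illustrates only the general idea that expert-defined linguistic cues can be appended to acoustic features before training a standard classifier.

    # Minimal sketch, NOT the authors' implementation. All data here is
    # synthetic and every feature name is a placeholder assumption.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n_samples = 200

    # Hypothetical acoustic embeddings (e.g., from a spectrogram encoder).
    X_acoustic = rng.normal(size=(n_samples, 16))

    # Hypothetical expert-defined linguistic flags (e.g., "unnatural pause
    # placement", "odd breath pattern"), encoded as 0/1 features.
    X_linguistic = rng.integers(0, 2, size=(n_samples, 4)).astype(float)

    # Toy labels (1 = spoofed), tied to the flags purely so this example
    # has signal to learn; real labels would come from annotated audio.
    y = (X_linguistic.sum(axis=1)
         + rng.normal(scale=0.5, size=n_samples) > 2).astype(int)

    # "Augmentation" here means concatenating the linguistic flags onto
    # the acoustic feature vector before training a classifier.
    X = np.hstack([X_acoustic, X_linguistic])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("Held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))

In our actual work, the linguistic features are perceptual cues defined by sociolinguistic expertise; see the paper above for the real feature set and methodology.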
Education Efforts: Read more about how we are using these expert-defined features in the classroom here.
Speech Sample Citations: