Audio Deep Fake Discernment


Dr. Vandana Janeja and Dr. Christine Mallinson, along with a team of students, lead this NSF-funded project on audio deepfake discernment.

The goal of this project is to improve listeners' ability to identify audio deepfakes by combining sociolinguistic and technological methods. By fostering collaborative research across sociolinguistics, human-centered analytics, and data science, the project creates an innovative approach to investigating deepfakes that accounts for human behavioral perspectives. It specifically targets the societal problem of disinformation by giving listeners, especially college students, insights that can help them assess the reliability and accuracy of online information. The project also advances knowledge of the role deepfakes play in the spread of false information and the unethical use of language technology.

The objectives of the project are to: (1) Study and evaluate listener perceptions of audio deepfakes that have been created with varying degrees of linguistic complexity; (2) Study and evaluate the efficacy of training sessions that increase listeners' sociolinguistic perceptual ability and improve their ability to discern deepfake audio content; (3) Augment audio deepfake discernment via multi-level temporal and linguistic signatures, informed by training and linguistic labeling; (4) Evaluate the impact of augmented signature information on listener perceptions of audio deepfakes; (5) Create open-access online modules and materials, with social science and data science student involvement, to improve listeners' discernment of audio cues on a wider public scale.

We gratefully acknowledge the support of the National Science Foundation, Award Number 2210011.