The human brain may be able to subconsciously detect deepfakes, suggests a new study that could lead to the creation of tools for curbing the spread of disinformation.
Deepfakes are videos, images, audio, or text that appear to be authentic, but are computer-generated clones designed to mislead and sway public opinion.
Subjects tried to detect deepfakes and were assessed using electroencephalography (EEG) brain scans, said the study, published recently in the journal Vision Research.
The brains of these individuals could successfully detect deepfakes about 54 per cent of the time, said scientists, including those from the University of Sydney in Australia.
However, when an earlier group was asked to verbally identify the same deepfakes, their success rate was only 37 per cent.
“Although the brain accuracy rate in this study is low – 54 per cent – it is statistically reliable. That tells us the brain can spot the difference between deepfakes and authentic images,” said study co-author Thomas Carlson from the University of Sydney.
There are now a growing number of deepfake videos online – from non-consensual explicit content to doctored media used in disinformation campaigns by foreign adversaries.
For instance, at the beginning of the Russian invasion of Ukraine, a deepfake video of president Volodymyr Zelensky urging his troops to surrender to Russian forces surfaced on social media.
As scientists across the world attempt to find new ways to identify deepfakes, researchers behind the new study said their findings could be a springboard in the fight against such doctored content online.
“If we can learn how the brain spots deepfakes, we could use this information to create algorithms to flag potential deepfakes on digital platforms like Facebook and Twitter,” Dr Carlson said.
In the new study, participants were shown 50 images of real and computer-generated fake faces and asked to identify which was which.
The researchers then showed a different group of participants the same images while recording their brain activity using EEG, without telling them that half the images were fakes.
Comparing the two sets of findings, scientists found people’s brains were better at detecting deepfakes than their eyes.
But scientists have cautioned that the findings are “just a starting point” and said further validation of the results is needed.
“More research must be done. What gives us hope is that deepfakes are created by computer programs, and these programs leave ‘fingerprints’ that can be detected,” Dr Carlson said.