Explained: How to spot a deepfake picture
The technique leverages methods traditionally used in astronomy to analyze light reflections. Adejumoke Owolabi, a data scientist at the University of Hull, played a pivotal role in this research. Owolabi sourced real images from the Flickr-Faces-HQ Dataset and generated fake faces with an image generator. By comparing the reflections of light sources in the eyes of each image, she could predict with about 70% accuracy whether an image was real or fake.
How does it work?
The principle behind this method is the consistency of light reflections in the eyes. When a person is illuminated by a set of light sources, the reflections in both eyes should be similar. In many AI-generated images these reflections do not match, because the generator does not model the physics of light consistently. The discrepancy can be detected with two measurements borrowed from astronomy: the CAS system and the Gini index. The CAS system quantifies the concentration, asymmetry, and smoothness of an object’s light distribution, while the Gini index, which astronomers apply to images of galaxies, measures how unevenly light is spread across the pixels of an image.
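To make the Gini comparison concrete, here is a minimal Python sketch of the idea, not the researchers' code: it computes the Gini coefficient of pixel intensities in left- and right-eye crops and flags a mismatch. The crop sizes, the `tolerance` cutoff, and the function names are illustrative assumptions.

```python
# Minimal sketch: compare how unevenly light is distributed in each eye.
# The threshold and crop handling are assumptions for illustration only.
import numpy as np

def gini(values: np.ndarray) -> float:
    """Gini coefficient of non-negative pixel intensities.

    0 means light is spread evenly across pixels; values near 1 mean it is
    concentrated in a few bright pixels (as in a sharp reflection).
    """
    v = np.sort(values.astype(float).ravel())
    n = v.size
    if n == 0 or v.sum() == 0:
        return 0.0
    # Standard formula: sum_i (2i - n - 1) * x_i / (n * sum(x)), x sorted ascending
    index = np.arange(1, n + 1)
    return float(np.sum((2 * index - n - 1) * v) / (n * v.sum()))

def eyes_look_consistent(left_eye: np.ndarray,
                         right_eye: np.ndarray,
                         tolerance: float = 0.1) -> bool:
    """Compare the Gini values of two grayscale corneal crops.

    `tolerance` is an assumed cutoff, not a published value.
    """
    return abs(gini(left_eye) - gini(right_eye)) <= tolerance

# Synthetic example: one eye has a bright, concentrated highlight, the other
# is nearly uniform -- the kind of mismatch that would flag a suspected fake.
rng = np.random.default_rng(0)
left = rng.uniform(40, 60, size=(32, 32))
left[14:18, 14:18] = 255                      # small bright reflection
right = rng.uniform(40, 60, size=(32, 32))    # no matching reflection

print("left Gini: ", round(gini(left), 3))
print("right Gini:", round(gini(right), 3))
print("consistent:", eyes_look_consistent(left, right))
```

In a real pipeline the two crops would come from a face and eye detector rather than synthetic arrays, and the Gini check would sit alongside other signals such as the CAS measurements, since no single metric is decisive on its own.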
A side-by-side comparison of deepfake eyes and the method used to spot them. Source: Image courtesy of Adejumoke Owolabi
This research is not without its challenges. While the method provides a significant step forward, it is not foolproof. There are instances of false positives and false negatives, indicating that this technique should be used in conjunction with other methods to ensure accuracy. Despite these limitations, the ability to detect deepfakes by analyzing eye reflections offers a promising tool in the fight against misinformation.
The implications of this research are far-reaching. Deepfake technology has the potential to be weaponized, spreading misinformation and causing harm. By developing reliable methods to detect these fakes, researchers are contributing to the broader effort to maintain the integrity of information in the digital age. The work of Pimbblet, Owolabi, and their colleagues represents a significant advancement in this ongoing battle.
The application of astronomy techniques to deepfake detection is a novel and exciting development. It highlights the interdisciplinary nature of modern scientific research, where methods from one field can be adapted to solve problems in another. As AI technology continues to evolve, so too must the methods used to detect and counteract its potential misuse. The research presented at the Royal Astronomical Society’s National Astronomy Meeting is a testament to the innovative thinking and collaboration that drives scientific progress.