The key to detecting deepfakes may lie within deep space (2024)

The eyes, the old saying goes, are the window to the soul — but when it comes to deepfake images, they might be a window into unreality.

That's according to new research conducted at the University of Hull in the U.K., which applied techniques typically used in observing distant galaxies to determine whether images of human faces were real or not. The idea was sparked when Kevin Pimbblet, a professor of astrophysics at the University, was studying facial imagery created by artificial intelligence (AI) art generators Midjourney and Stable Diffusion. He wondered whether he could use physics to determine which images were fake and which were real. "It dawned on me that the reflections in the eyes were the obvious thing to look at," he told Space.com.

Deepfakes are fake images or videos of people, created by training AI on mountains of data. When generating a human face, the AI uses that vast training set to build an unreal face, pixel by pixel; these faces can be invented from scratch or based on actual people, and in the latter case they're often used for malicious purposes. Because real photos contain reflections, the AI adds these in too, but there are often subtle differences between the two eyes.

Following that instinct, Pimbblet recruited Adejumoke Owolabi, a master's student at the university, to help develop software that could quickly scan the eyes of subjects in various images to see whether those reflections checked out. The pair built a program to assess the differences between the left and right eyeballs in photos of people, real and unreal. The real faces came from a diverse dataset of 70,000 faces sourced from Flickr, while the deepfakes were created by the AI underpinning This Person Does Not Exist, a website that generates realistic images of people who seem real but are not.

It's obvious once you know it's there: I refreshed This Person Does Not Exist five times and studied the reflections in the eyes. The faces were impressive; at a glance, nothing stood out to suggest they were fake.

Closer inspection revealed some near-imperceptible differences in the lighting of the two eyeballs; they didn't quite match. In one case, the AI generated a man wearing glasses, and the reflection in his lenses also seemed a little off.

What my eye couldn't quantify, however, was how different the reflections were. To make that assessment, you need a tool that can identify violations of the precise rules of optics. This is where the software Pimbblet and Owolabi developed comes in. They used two techniques from the astronomy playbook: "CAS parameters" and "the Gini index."

In astronomy, CAS parameters can determine the structure of a galaxy by examining the Concentration, Asymmetry and Smoothness (or "clumpiness") of a light profile. For instance, an elliptical galaxy will have a high C value and low A and S values — its light is concentrated within its center, but it has a more diffuse shell, which makes it both smoother and more symmetrical. However, the pair found CAS wasn't as useful for detecting deepfakes. Concentration works best with a single point of light, but reflections often appear as patches of light scattered across an eyeball. Asymmetry suffers from a similar problem — those patches make the reflection asymmetrical and Pimbblet said it was hard to get this measure "right".
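The asymmetry measure, at its core, compares an image patch with its own 180-degree rotation. Here is a minimal sketch of that idea in pure Python, assuming a patch given as a 2D list of intensities; the Hull study's actual implementation isn't published, so treat this as illustrative only:

```python
# Illustrative sketch of the CAS "asymmetry" measure: compare a patch of
# pixel intensities with its 180-degree rotation. A perfectly symmetric
# light distribution scores 0; a patchy eyeball reflection scores higher,
# which hints at why the Hull pair found this measure hard to tune.

def asymmetry(patch):
    """patch: 2D list of non-negative pixel intensities (rows x cols)."""
    rows, cols = len(patch), len(patch[0])
    # Build the 180-degree rotation of the patch
    rotated = [[patch[rows - 1 - r][cols - 1 - c] for c in range(cols)]
               for r in range(rows)]
    # Sum absolute pixel-by-pixel differences, normalized to [0, 1]
    diff = sum(abs(patch[r][c] - rotated[r][c])
               for r in range(rows) for c in range(cols))
    total = sum(patch[r][c] for r in range(rows) for c in range(cols))
    return diff / (2 * total) if total else 0.0
```

A symmetric patch like `[[1, 2], [2, 1]]` scores 0.0, while a single bright corner pixel scores 1.0, a hint of why scattered reflection patches push the measure around.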

Using the Gini coefficient worked a lot better. This is a way to measure inequality across a spectrum of values. It can be used to calculate a range of results related to inequality, such as the distribution of wealth, life expectancy or, perhaps most commonly, income. In this case, Gini was applied to pixel inequality.

"Gini takes the whole pixel distribution, is able to see if the pixel values are similarly distributed between left and right, and is a robust non-parametric approach to take here," Pimbblet said.

The work was presented at the Royal Astronomical Society meeting at the University of Hull on July 15, but has yet to be peer-reviewed and published. The pair are working to turn the study into a publication.

Pimbblet says the software is merely a proof of concept at this stage. It still flags false positives and false negatives, with an error rate of about three in 10, and it has only been tested against a single AI model so far. "We have not tested against other models, but this would be an obvious next step," he says.

Dan Miller, a psychologist at James Cook University in Australia, said the findings from the study offer useful information, but cautioned it may not be especially relevant to improving human detection of deepfakes — at least not yet, because the method requires sophisticated mathematical modeling of light. However, he noted "the findings could inform the development of deepfake detection software."

And software appears likely to be necessary, given how sophisticated the fakes are becoming. In a 2023 study, Miller assessed how well participants could spot a deepfake video, providing one group with a list of visual artifacts, like shadows or lighting, to look for. The intervention didn't work: subjects given the tips spotted the fakes no better than a control group who hadn't been (which suggests my personal mini-experiment above could be an outlier).

The entire field of AI feels like it has been moving at lightspeed since ChatGPT dropped in late 2022. Pimbblet suggests the pair's approach would work with other AI image generators, but notes it's also likely newer models will be able to "solve the physics lighting problem."

This research also raises an interesting question: If AI can generate reflections that can be assessed with astronomy-based methods… could AI also be used to generate entire galaxies?

Pimbblet says there have been forays into that realm. He points to a 2017 study that assessed how well "generative adversarial networks," or GANs (the technology behind an earlier generation of AI image generators, including the one underpinning This Person Does Not Exist), could recapitulate galaxies from degraded data. Telescopes on Earth and in space are limited by noise and background, causing blurring and loss of quality (even stunning James Webb Space Telescope images require some cleaning up).

In the 2017 study, researchers trained an AI model on images of galaxies, then used it to try to recover degraded imagery. It wasn't always perfect, but it was certainly possible to recover features of the galaxies from low-quality imagery.

A 2019 preprint study similarly used GANs to simulate entire galaxies.

The researchers suggest the work would be useful as huge amounts of data pour in from missions observing the universe. There's no way to look through all of it by hand, so we may need to turn to AI. Generating galaxies with AI could then, in turn, be used to train AI to hunt for specific kinds of real galaxies in huge datasets. It all sounds a bit dystopian, but, then again, so does detecting unreal faces by subtle differences in the reflections in their eyeballs.

Jackson Ryan

Contributing Writer

Jackson Ryan is a science journalist hailing from Adelaide, Australia, with a focus on longform and narrative non-fiction work. He currently serves as the President of the Science Journalists Association of Australia. Between 2018 and 2023, he was the science editor at CNET. In 2022, he won the Eureka Prize for Science Journalism, which Aussies dub the "Science Oscars." Before all that, he got his doctorate in molecular biology and once hosted a kids TV show on the Disney Channel, called "GameFest." (Good luck finding it.) He lives with a collection of more than 70 Christmas sweaters and zero pets, the latter of which he hopes to rectify one day.
