
2021-08-20
Annabelle Theobald

The battle for the credibility of our data

CISPA researcher Prof. Dr. Mario Fritz is researching possible ways to detect deepfakes.

How will people still be able to form an opinion in the future if they cannot trust their eyes and ears? This question will become more pressing in the coming years. Computers are already capable of imitating human speech and of generating texts on a large scale that read as if a human being had written them. Artificial intelligence (AI) can manipulate photos and videos so skillfully that they are no longer recognizable as fakes to the human eye. "We have to assume that in the near future, the technology behind these so-called deepfakes will be developed to such an extent that the fakes will also fool AI," says CISPA researcher Mario Fritz. It is the responsibility of researchers and industry to develop methods that make it possible to distinguish between real and fake content. Fritz and his team are currently working, among other things, on digital watermarks and fingerprints that are incorporated into the DNA of photos and texts and thus make their origin clear.

Almost 200 years ago, the world's first permanent photograph was developed - and some thirty years later, the first techniques for manipulating photographs were already available. Retouching is therefore not a phenomenon of the present. Since the 1980s, electronic image processing has enormously reduced the effort required for image manipulation. The real game-changers, however, are so-called GANs, which have been around since 2014 and have been under constant development ever since. The abbreviation GAN stands for generative adversarial networks and describes machine learning models that consist of two competing artificial neural networks. While the task of one is to artificially generate images or video sequences, the other has to distinguish real from fake data. The two neural networks repeat this cycle of synthesizing and classifying data over and over, learning from each other. "Coupling the two algorithms creates entirely new possibilities," Fritz says.
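To make this interplay concrete, here is a minimal sketch of an adversarial training loop in PyTorch. The toy setup - a generator that learns to imitate samples from a one-dimensional Gaussian - as well as the network sizes and hyperparameters are illustrative assumptions, not the models discussed in the article.

```python
# Minimal GAN training sketch (illustrative only): the generator learns to
# mimic samples from a 1-D Gaussian, while the discriminator learns to tell
# real samples from generated ones.
import torch
import torch.nn as nn

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # "Real" data: samples from the Gaussian the generator has to imitate.
    real = torch.randn(64, 1) * 0.5 + 3.0
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator step: classify real samples as 1, generated ones as 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator output 1 for fakes.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Repeating these two steps is the "learning from each other" described above: as the discriminator gets better at spotting fakes, the generator is forced to produce ever more realistic data.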

With the help of GANs, forgeries can unleash their manipulative power on a large scale. Moreover, the method can easily be transferred to other domains, Fritz explains. After all, fake images and videos may receive the most media attention, but they are not the only area of application for the technology. Computer-generated texts, for example, could soon produce fake news on a large scale that is indistinguishable from editorial, human-produced content. This could be used to influence our social and political discourses and steer them in a particular direction. "If, in the future, the authenticity of data is generally called into question, this will result in a substantial risk for society," says Mario Fritz.

However, the technology also holds great potential. According to Fritz, beyond graphical gimmicks, GANs can produce massive data sets that can be used to train other AI. Large quantities of images and video sequences could, for example, be produced artificially to depict scenarios that occur relatively infrequently. Autonomous vehicles could thus be prepared for the unexpected as well, and their susceptibility to errors reduced.

However, the models still have a few problems. One of them: when fed with existing data such as photos or videos, their underlying training procedure tends to pay less attention to rarely occurring data or even to ignore it entirely. As a result, underrepresented minorities often go unrepresented, and the models exacerbate pre-existing bias. With partners including the University of Maryland, the CISPA researcher has therefore developed a method to prevent this.

Fritz anticipates that the models will continue to evolve and become more common in the coming years. "Today, the cost of training the most complex and effective models is still quite high." The availability of computing power is thus a hurdle, and, he adds, a certain amount of expertise is needed. For now, this prevents just anyone from producing deepfakes that are no longer recognizable as such. Furthermore, it is still possible, at least in many cases, to recognize deepfakes as such with the help of AI. "But it is important that we make the shift from passive to active defense mechanisms," Fritz says.

Hence, one approach of his team is to inject secret messages into texts and images - a kind of digital watermark or fingerprint. These could be produced at the same time as the synthesis of artificial data or added later. They disappear into the deep structure of images and texts in such a way that they cannot be seen. With a decoder, however, the secret message can be read out; it not only reveals that a photo is a deepfake, it could also contain information about its creator. "We are still testing the extent to which the marks can be altered or removed. So far, we cannot give any security guarantees, but the methods can withstand current attacks." The first battle has thus been won. Nevertheless, the fight over the authenticity of data is still in full swing.
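As a simplified illustration of the embed-and-decode principle, the following sketch hides a short creator tag in the least significant bits of an image's pixels and reads it back out. This naive LSB scheme is only a stand-in, not the learned, attack-resistant watermarking the team is developing, and the tag "generator-id:42" is a purely hypothetical example.

```python
# Toy illustration of embedding and decoding a hidden message in an image
# using least-significant-bit steganography. Not the article's method.
import numpy as np

def embed(image: np.ndarray, message: bytes) -> np.ndarray:
    """Hide `message` in the least significant bits of a uint8 image."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = image.flatten()
    assert bits.size <= flat.size, "message too long for this image"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(image.shape)

def decode(image: np.ndarray, length: int) -> bytes:
    """Read back `length` bytes hidden by `embed`."""
    bits = image.flatten()[:length * 8] & 1
    return np.packbits(bits).tobytes()

# Usage: mark a random "photo" with a hypothetical creator tag and recover it.
photo = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed(photo, b"generator-id:42")
print(decode(marked, len(b"generator-id:42")))  # b'generator-id:42'
```

Such a naive mark is trivially destroyed by compression or cropping; the hard part, and the focus of the research described here, is making the hidden message robust against exactly those alterations and attacks.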

translated by Oliver Schedler