How the CISPA spinoff “Detesia” is taking on deepfakes with first pilot projects
Voices in a podcast that are indistinguishable from the real person. An artificially generated photo that looks like a real image of someone. Or a manipulated video on social media spreading false claims. These are all examples of so-called “deepfakes,” media content such as text, audio, photos, or videos created using artificial intelligence, designed to deceive audiences and thereby threaten our freedom, democracy, and social cohesion. When reality and fiction become indistinguishable, disinformation can undermine democratic discourse. There are already plenty of examples—and we’re only at the beginning of the AI revolution. That’s why media companies and security authorities are sounding the alarm and urgently looking for solutions. The CISPA spinoff Detesia offers a powerful tool in the fight against computer-generated counterfeits with its deepfake detection software. We spoke with Tim Walita, COO of Detesia, and asked him some of the most pressing questions about deepfakes.
CISPA: When did the idea for Detesia come about? And how did your team come together?
Tim Walita: Initially, we were working on an approach in which chatbots could independently create phishing emails and systematically build a person’s trust. We also had the idea of using deepfake simulations, for example through voice cloning or visual deepfakes of a CEO, to recreate realistic attack scenarios.
Following that, we started asking ourselves how we could provide employees with practical protective measures. That’s when we realized that many deepfakes are now so well-made that even humans can barely detect them. So, we began searching for technical solutions to identify this kind of content. About two and a half years ago, the options in this area were extremely limited, and we saw that as a clear market gap.
What different hurdles did you have to overcome on the way to becoming a CISPA spinoff?
Recruiting staff proved challenging because we operate in a highly specialized market and need top-tier experts. On top of that, we had to build a working corporate structure, efficient processes, and the right team culture from scratch, with no prior experience. Our customer base also posed a hurdle, since these clients place exceptionally high demands on security, trust, and quality.
How dangerous are deepfakes? And what types of deepfakes exist?
Deepfakes represent a serious threat. According to the World Economic Forum, misinformation is currently among the greatest dangers facing humanity. As deepfakes become easier to produce and increasingly realistic, they play a growing role in spreading disinformation. Examples like the fake video of Zelensky or the fabricated image of an alleged explosion near the Pentagon demonstrate how powerful these digital counterfeits can be in shaping public opinion and causing real-world harm. A recent report from Regula Forensics also shows that in 2024, one out of every two companies worldwide fell victim to deepfake fraud, a clear indicator of the rapid rise in AI-driven crime. What’s more, today’s deepfakes can be created with minimal technical effort (as our own LinkedIn example with Peter illustrates), while law enforcement authorities are often still unprepared for this new threat. Visual deepfakes can broadly be divided into two categories: fully synthetic images and videos generated entirely by AI, and partial manipulations, in which only parts of a real image or video, typically the face, are altered.
How does your deepfake detection work?
We train multiple AI models on both real and fake data so they can learn to distinguish between the two. Each model focuses on different types of counterfeits and detection strategies. Currently, we have one model specialized in fully synthetic images and videos, and another that zeroes in on partial manipulations around the face. A key part of our approach is explainability. Using a custom-developed method, we can pinpoint which areas of an image or video the model relied on when classifying it as real or fake. If a fake is detected, the system highlights—with a high degree of certainty—the area where the manipulation occurred. We’re also developing a multimodal detector specialized in lip sync deepfakes. This detector analyzes mismatches between visible mouth movements and the spoken audio to flag potential counterfeits.
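The loop Walita describes, learning from labeled real and fake material and then pointing at the regions that drove a “fake” verdict, can be illustrated with a deliberately simplified sketch. Everything below is a hypothetical toy: hand-made region statistics and a nearest-centroid rule stand in for Detesia’s actual deep models, purely to make the train-then-explain idea concrete.

```python
# Toy sketch of "train on real vs. fake, then explain the verdict".
# Images are plain grids of intensities; features are per-region means.
# All data and function names here are illustrative, not Detesia's API.

def region_features(image, grid=2):
    """Split a square image into grid x grid regions; return each region's mean."""
    n = len(image)
    step = n // grid
    feats = []
    for gy in range(grid):
        for gx in range(grid):
            vals = [image[y][x]
                    for y in range(gy * step, (gy + 1) * step)
                    for x in range(gx * step, (gx + 1) * step)]
            feats.append(sum(vals) / len(vals))
    return feats

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    return [sum(col) / len(col) for col in zip(*rows)]

def train(real_images, fake_images):
    """'Training' here is just one centroid per class."""
    return {"real": centroid([region_features(im) for im in real_images]),
            "fake": centroid([region_features(im) for im in fake_images])}

def classify_and_explain(model, image):
    """Return (label, index of the region deviating most from 'real')."""
    f = region_features(image)
    d_real = sum((a - b) ** 2 for a, b in zip(f, model["real"]))
    d_fake = sum((a - b) ** 2 for a, b in zip(f, model["fake"]))
    label = "fake" if d_fake < d_real else "real"
    # Crude "explanation": which region looks least like typical real data.
    # Only meaningful when the label is "fake".
    deviations = [abs(a - b) for a, b in zip(f, model["real"])]
    return label, deviations.index(max(deviations))

# Fabricated demo data: fakes carry an artifact in the top-left region.
real = [[[10] * 4 for _ in range(4)] for _ in range(3)]
fake = [[[200] * 2 + [10] * 2 for _ in range(2)] +
        [[10] * 4 for _ in range(2)] for _ in range(3)]

model = train(real, fake)
label, region = classify_and_explain(model, fake[0])  # -> ("fake", 0)
```

A production detector would of course learn its features with deep networks and localize evidence with saliency methods rather than region means, but the structure is the same: learn from both classes, then point to where the decision came from.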
What potential applications are there for this? Are industry or government institutions already interested?
The possible applications for our deepfake detection technology are extremely diverse and span multiple sectors: In the field of law enforcement, for example, it can help verify the authenticity of evidence. One conceivable scenario is a defendant claiming that an incriminating video is a deepfake. In such cases, our solution can bring clarity and ensure the integrity of the evidence.
In the media and journalism sector, the verification of information sources plays an increasingly important role. As manipulated content proliferates, the risk of misinformation in news reporting grows. Our tool can screen visual and audiovisual content for tampering before it is published, ensuring that only verified footage reaches audiences.
There are also concrete use cases in the financial sector, particularly in the area of KYC (Know Your Customer). KYC providers must confirm that the individuals they onboard are real. Deepfake videos could be used to deceive identity verification systems; our technology guards against such fraudulent attempts.
Another field of application is the insurance sector. With more customers filing digital claims, submitting photos or videos of damage online, there is a risk of AI-generated or doctored imagery being used to inflate claims.
There is already considerable interest from both government bodies and private companies. Law enforcement authorities, media organizations, and firms in the financial and insurance industries have recognized the potential of this technology and are actively seeking solutions to protect themselves against deepfake manipulation.
What are your next steps with Detesia?
Our next steps focus on two main areas: On the one hand, we are working on continuously improving our prototype. To this end, we are researching more powerful models and expanding our training data to further enhance detection accuracy. On the other hand, we are launching our first pilot projects this year. Through them, we want to gain a better understanding of concrete real-world use cases, collect valuable feedback, and tailor the prototype to our customers’ practical needs.
Thank you for the interview, Tim!
For more information on Detesia, please visit detesia.com.