How the Startup Sequire Technology Is Shaping Security Standards for Language Models
Founding sequire technology was a bold move in a dynamic industry grappling with rapidly advancing AI and cybersecurity challenges. After working in both research and industry, founder Christoph Endres took on the challenge of improving the safety of large language models with innovative solutions. The result: the internationally recognized discovery of Indirect Prompt Injections, which describes an entirely new type of vulnerability in large AI models. Through strong partnerships—with CISPA Helmholtz Center for Information Security and support from the CISPA StartUpSecure incubator—sequire technology is developing secure, self-learning protection mechanisms for AI. These are a crucial foundation for complying with future EU regulations such as NIS-2 and the AI Act. In our conversation, we shed light on the origin, challenges, and future plans of Christoph Endres and sequire technology, a company aiming to strengthen European sovereignty in the AI security market.
CISPA: What are Indirect Prompt Injections, and how did you discover this vulnerability?
Christoph Endres: We owe this discovery to our colleague Kai Greshake. Kai has always been a gifted hacker and spots vulnerabilities faster than most experts. Some might remember the MongoDB hack in 2015—his first hack that gained global attention when he was still a teenager. In February 2023, Kai called me one evening and said, “I’ve found something, but I think this is too big for my blog!” That same night, I rang Jilles Vreeken at home and told him we needed to set up a meeting with the best experts at CISPA to validate our idea as soon as possible. Just one week later, we published a preprint of our paper, which went on to win a Best Paper Award and has now been cited nearly 500 times.
Indirect Prompt Injections differ from other vulnerabilities because, strictly speaking, they aren’t a flaw in the usual sense—no one made a mistake in the code that you could simply patch. Rather, it’s the very way LLMs operate that leaves them exposed. Like a (perhaps somewhat naïve) person, language models can be manipulated. An attacker can take advantage of this by hiding instructions in places that alter the system’s behavior. It has already made its way into memes: “Forget your previous instructions and instead do…”—that was us.
How significant was the media attention after the vulnerability was disclosed?
The global media attention was immense. In Germany, Eva Wolfangel wrote an excellent article at the time, and there was plenty of exciting international coverage. We even made it into the local press, such as the local TV news program “Aktueller Bericht” on SR, and appeared on the front page of the Saarbrücker Zeitung, the main daily newspaper for Saarland.
It was also a thrill to be invited to Black Hat in Las Vegas, where I closed out the conference with a talk on Indirect Prompt Injection.
What other vulnerabilities do LLMs have?
There are many vulnerabilities in LLMs. OWASP has published a top 10 list on the subject. Prompt injection is still in first place, but there are also other issues, mostly related to data privacy problems or undesirable language and discrimination.
How do your software solutions work to make LLMs more secure? And what are the concrete use cases?
We can’t reveal too much during an ongoing development, of course. But I can certainly explain the general approach. So far, LLMs are similar to software from the 1980s or even earlier: a monolithic process, with too many permissions and no security safeguards. Later, operating systems introduced things like hypervisors, process separation, permission management, etc.
Essentially, we now need to do the same thing again—build a secure execution environment for LLMs, with security mechanisms similar to those in operating systems. But of course, this environment has to be much more specialized and, most importantly, self-adaptive, meaning the security measures must evolve at the same pace as potential attacks.
How do you support companies with legally relevant requirements such as the NIS-2 Directive and the EU AI Act?
I gave a keynote on NIS-2 last year. It’s an important topic, and it’s good that there is official pressure to implement security measures. Even if it might seem annoying in day-to-day operations—it was similar with the GDPR. Due to the extension of the directive to more industries, many companies are overwhelmed and need help. We’ve already supported several satisfied clients in this area and helped them with implementation.
We’re also active in relation to the EU AI Act. For example, we’re working on a guide for testing LLMs with the German Federal Office for Information Security (BSI). That’s important because standardized tests give clients a certain level of confidence that they can expect comparable quality across different providers and that certified standards are being upheld.
In addition, we ensure that our software solutions automatically fulfill the requirements set out in EU legislation. Our clients are not only secure, but also legally compliant.
Since June 2025, you’ve been receiving funding from the CISPA StartUpSecure incubator. How much has this collaboration helped you?
We’ve taken on a major challenge—one we couldn’t have tackled alone as a startup. StartUpSecure has been incredibly beneficial for us. On the one hand, we now have the right budget to turn our ideas into reality and conduct the fundamental research necessary before any serious product can be brought to market.
On the other hand, we really value the connection with CISPA and the opportunity to engage in high-level research. The exchange between researchers who bring in new ideas and a company that can quickly test them is a win-win situation.
What new cloud- or AI-based products or features is sequire technology planning in the next 12–24 months?
We hope to bring our work on the self-adaptive sandbox “sequiSAS” to market quickly after the pre-competitive phase funded through the program. Time is of the essence—key parts of the AI Act will come into force in August 2026, and it would be ideal to have a well-functioning solution in place that not only ensures security but also meets legal requirements.
And yes, large international corporations are likely working on this too—but in the current situation, I believe it should be the top priority to develop a European solution to preserve our sovereignty in the global market. Especially when it comes to something as critical as the security of AI systems, the stakes are incredibly high.
Thank you for the interview, Christoph!
More information about sequire technology: https://sequire.de