2022-01-14
Annabelle Theobald

CISPA start-up QuantPi explains how machines think

"We want to make artificial intelligence trustworthy," says Lukas Bieringer, Chief Operating Officer of the Saarbrücken-based startup QuantPi, formulating an ambitious goal. After all, trust in artificial intelligence requires that we have an understanding of it and that its decisions are comprehensible to us.  With its AutoXAI software, QuantPi, founded in 2020 by AI and business experts from CISPA and Saarland University, enables companies to gain comprehensive insights into the decision-making processes of the AI models they use.

We encounter artificial intelligence (AI) every day: when we shop online, AI decides which items are advertised to us; streaming services use it to tailor suggestions to our preferences; and search engines use AI to learn which search results are relevant to us. AI is considered the technology of the future. Strictly speaking, the term covers a whole range of technologies and methods based on various machine learning algorithms, all aiming to imitate human thinking and make data-based predictions. In many areas, AI systems still lag well behind human capabilities; in others, however, they surpass them by a considerable margin. Self-driving cars, the Internet of Things and, of course, medicine are all building on the growing potential of AI.

But for all the hope associated with it, AI still has one major problem, often summed up in the term "black box." "You often just don't know exactly what AI has learned and how it arrives at its predictions," explains Philipp Adamidis, chief executive officer (CEO) and one of the three founders of QuantPi. The more complicated the systems and algorithms in use, the more difficult it becomes to trace their decisions. "There would be far more opportunities to use AI, but the lack of transparency and explainability can become a legal and economic risk for companies," Adamidis says. For this reason, many companies fail to translate AI prototypes into real-world applications and products.

The software developed by QuantPi aims to change that. Building on a mathematical theory for modeling complex networks that Dr. Antoine Gautier developed in his dissertation, the three founders Adamidis, Gautier and Artur Suleymanov have devised a method for making the decision-making processes of AI systems comprehensible and transparent. The software analyzes and monitors AI models, regardless of which learning algorithm the respective customer's company uses. Tailored to the wishes and needs of each company, it clearly displays which data and criteria have been incorporated into an AI decision. This not only makes it possible to understand why the AI acts in a certain way, but also to improve data processing procedures.
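To make the idea of model-agnostic analysis concrete, the sketch below shows permutation feature importance, a textbook technique for measuring which inputs a black-box model relies on. It is emphatically not QuantPi's proprietary AutoXAI method, whose internals are not public; it is only an illustration under the assumption that the model can be queried freely, and the function names are hypothetical.

```python
# A minimal sketch of a model-agnostic explanation technique
# (permutation feature importance). This is NOT QuantPi's AutoXAI
# method; it only illustrates the general idea of probing a
# black-box model purely through its inputs and outputs.
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=10, seed=0):
    """Estimate each feature's contribution to predictive performance.

    predict   -- any black-box prediction function, X -> y_hat
    X, y      -- held-out evaluation data (numpy arrays)
    metric    -- scoring function metric(y_true, y_pred), higher is better
    n_repeats -- how often each feature is shuffled to average out noise
    """
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict(X))        # score with intact data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Shuffling column j destroys its relationship to y
            # while keeping its marginal distribution intact.
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            drops.append(baseline - metric(y, predict(X_perm)))
        importances[j] = np.mean(drops)     # bigger drop => more important
    return importances
```

Because such a method treats the model as a pure input-output function, it works no matter which learning algorithm sits behind it, which is what makes model-agnostic analysis attractive for software that must handle whatever AI models a customer already runs.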

Work rarely stops at the Halle 4 co-working space in Saarbrücken, where the founders are continuously developing the novel technology and transferring it to companies around the world. QuantPi's international team now consists of 12 employees. "We continue to grow and are always looking for skilled people," says Bieringer.

After a long phase of research and development, QuantPi's technology has reached market readiness and is now set to conquer the market. The software is already being used in several pilot projects. "If everything works out, companies won't even need our help with implementation from 2023 onwards, as the software should adapt automatically to the AI models and respective requirements," says Bieringer.

The CISPA Incubator and the "Techtransfer" team supported the founders with workshops and helped them apply for funding from the StartUpSecure program of the German Federal Ministry of Education and Research (BMBF). In addition, CISPA faculty member Prof. Dr. Jilles Vreeken, an expert in the field of trustworthy artificial intelligence, sits on the young team's advisory board. Topics for the future have also already been identified: together with physics professor Dr. Frank Wilhelm-Mauch from Saarland University, the researchers want to extend the explainability of AI into a future shaped by quantum computers. Bieringer sums up, "There is still a lot to do."

translated by Tobias Ebelshäuser