
2022-03-22
Annabelle Theobald

CISPA PhD student Xinlei He receives coveted Norton Labs Graduate Fellowship

The NortonLifeLock company funds Xinlei He's innovative research on the security and privacy of machine learning systems with $20,000.

Xinlei He, a doctoral student in the research group of CISPA faculty Dr. Yang Zhang, has been awarded the Norton Labs Graduate Fellowship. The IT security company NortonLifeLock (formerly Symantec) uses this fellowship to support outstanding graduate students and innovative research with tangible real-world applications. "I feel very honored as an early-stage PhD student, and this is a great validation for me and my work. My supervisor Yang and my colleagues have helped me a lot in developing better research visions and doing more solid work," says He. In addition to financial support, the company offers awardees a paid internship at one of Norton Labs' locations, which include Canada, the U.S., Ireland, and France, among other countries. "It's a great way to see real-world industry problems. There is often a gap between academic research and the daily reality in companies. The internship could help me build a better bridge between them."

The Chinese-born researcher at CISPA currently focuses mainly on the security of so-called self-supervised machine learning models. These are novel models that learn with the help of artificial neural networks without requiring humans to label the necessary training data manually beforehand. Labeling training data, i.e., telling a model what to see in a photo, for example, is enormously costly and time-consuming. Successful training often requires millions of labeled examples, which are not easy to obtain. Self-supervised learning methods are therefore becoming increasingly popular.

A popular variant of self-supervised learning is so-called contrastive learning. With this approach, learning algorithms identify similarities and differences by focusing on certain features of images. Without knowing what exactly is in a picture, they can represent these features abstractly as a vector and use these representations of photos to perform different tasks.
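The core idea can be sketched in a few lines: two augmented views of the same image should map to nearby vectors, while a different image should map further away. The linear `embed` encoder and the noise-based augmentation below are illustrative assumptions, not a real contrastive model, which would use a deep network and a proper InfoNCE-style loss.

```python
import numpy as np

def embed(x, W):
    """Toy encoder: a linear map followed by L2 normalization.
    In practice this would be a deep neural network."""
    z = W @ x
    return z / np.linalg.norm(z)

def contrastive_score(anchor, positive, negative, W):
    """Cosine similarity of the anchor to a positive (another view of
    the same image) vs. a negative (a different image). Training would
    push the first similarity up and the second down."""
    za, zp, zn = embed(anchor, W), embed(positive, W), embed(negative, W)
    return float(za @ zp), float(za @ zn)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))               # random, untrained encoder
img = rng.standard_normal(16)
view1 = img + 0.01 * rng.standard_normal(16)   # light "augmentation" noise
view2 = img + 0.01 * rng.standard_normal(16)
other = rng.standard_normal(16)                # an unrelated image

sim_pos, sim_neg = contrastive_score(view1, view2, other, W)
print(sim_pos, sim_neg)  # two views of the same image land much closer
```

The embeddings, not the raw pixels, are then fed to downstream classifiers; this is exactly why they must not encode more than the task needs.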

In his paper "Quantifying and Mitigating Privacy Risks of Contrastive Learning," which he presented at the prestigious CCS security conference in late 2021, Xinlei He was the first to analyze the privacy risks of this learning method. "Contrastive learning is shown to be vulnerable to so-called attribute inference attacks. This means that attackers can infer sensitive information about the data from the output of the models, information that has nothing to do with their actual task. This is because the models tend to map too many features of the training data into the representation." To solve this problem, He developed the contrastive learning mechanism Talos. It censors, so to speak, sensitive information that is unnecessary for the task, thus significantly improving privacy protection when the models are used.

Xinlei He plans to focus even more on self-supervised machine learning in the future. "I think there is still a lot to be done in this area regarding security, privacy, and robustness." The fellowship will help him push his vision further. "It also encourages me to keep pursuing this direction in the future."

translated by Oliver Schedler