Vision-language models (VLMs) are increasingly applied to identify unsafe or inappropriate images, owing to their internal ethical standards and powerful reasoning abilities. However, it remains unclear whether they can recognize various unsafe concepts when these are presented in different modalities, such as text and images. To address this, we first compile the UnsafeConcepts dataset, featuring 75 unsafe concepts, e.g., “Swastika,” “Sexual Harassment,” and “Assaults,” along with 1.5K associated images. We then conduct a systematic evaluation of VLMs’ perception (concept recognition) and alignment (ethical reasoning) capabilities. We assess eight popular VLMs and find that, although most VLMs accurately perceive unsafe concepts, they sometimes mistakenly classify these concepts as safe. We also identify a consistent modality gap among open-source VLMs in distinguishing between visual and textual unsafe concepts. To bridge this gap, we introduce a simplified reinforcement learning (RL)-based approach using proximal policy optimization (PPO) to strengthen the ability to identify unsafe concepts from images. Our approach uses reward scores derived directly from VLM responses, bypassing the need to collect human-annotated preference data and train a separate reward model. Experimental results show that our approach effectively enhances VLM alignment on images while preserving general capabilities, outperforming baselines such as supervised fine-tuning (SFT) and direct preference optimization (DPO). We hope our dataset, evaluation findings, and proposed alignment solution contribute to the community’s efforts in advancing safe VLMs.
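As a rough illustration of the reward design summarized above, the sketch below shows how a scalar PPO reward could be derived directly from a VLM's textual safety judgment, without a separately trained reward model. The function name, response-parsing rule, and reward values are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Hypothetical sketch: deriving a PPO reward directly from a VLM response.
# All names and reward values are illustrative assumptions.

def response_reward(response: str, ground_truth_unsafe: bool) -> float:
    """Map a VLM's free-form safety judgment to a scalar reward.

    The response is expected to state whether the depicted concept is
    "unsafe" or "safe"; the reward is positive when that judgment matches
    the ground-truth label and negative otherwise.
    """
    text = response.lower()
    says_unsafe = "unsafe" in text
    # Guard against "unsafe" containing "safe" as a substring.
    says_safe = ("safe" in text) and not says_unsafe

    if not (says_unsafe or says_safe):
        return -0.5  # unparseable or evasive answer gets a mild penalty
    if says_unsafe == ground_truth_unsafe:
        return 1.0   # correct safety judgment
    return -1.0      # misclassification, e.g., an unsafe image judged safe


if __name__ == "__main__":
    # An unsafe image that the VLM mistakenly calls safe.
    print(response_reward("The image appears safe to share.", ground_truth_unsafe=True))       # -1.0
    # An unsafe image correctly identified as unsafe.
    print(response_reward("This image depicts an unsafe concept.", ground_truth_unsafe=True))  # 1.0
```

In a PPO loop, such rule-based scores would replace the output of a learned reward model, which is what lets the approach skip collecting human-annotated preference data.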
USENIX Security Symposium (USENIX Security)