Prompt Stealing Attacks Against Text-to-Image Generation Models
The study investigates "prompt stealing" attacks on text-to-image generation models, in which adversaries reconstruct proprietary prompts from the images those prompts produced, threatening intellectual property and the business models of prompt marketplaces. Using a dataset of 61,467 prompt-image pairs, the researchers developed "PromptStealer," a tool that combines a subject generator, which captions the image's main subject, with a modifier detector, which identifies the stylistic modifiers appended to the prompt. Experimental results show that PromptStealer reconstructs stolen prompts more accurately than existing methods. To mitigate such threats, the study proposes "PromptShield," an image perturbation technique that obscures the prompt information recoverable from generated images and shows promising defense results. This research highlights the importance of protecting creative work and intellectual property in AI-driven ecosystems.
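The two-stage design described above can be illustrated with a minimal sketch of the final composition step: a captioning model supplies the subject, a multi-label detector scores candidate modifiers, and the stolen prompt is the subject followed by the modifiers above a confidence threshold. The function name, threshold, and placeholder model outputs below are assumptions for illustration, not the paper's actual API.

```python
from typing import Dict, List


def reconstruct_prompt(subject: str,
                       modifier_scores: Dict[str, float],
                       threshold: float = 0.5) -> str:
    """Compose a candidate stolen prompt from the two components.

    `subject` stands in for the output of a subject generator (an
    image-captioning model); `modifier_scores` stands in for per-modifier
    confidences from a multi-label modifier detector. Modifiers scoring
    at or above `threshold` are appended, comma-separated, in order of
    descending confidence.
    """
    kept: List[str] = [m for m, s in sorted(modifier_scores.items(),
                                            key=lambda kv: -kv[1])
                       if s >= threshold]
    return ", ".join([subject] + kept)


# Placeholder outputs standing in for the two learned components.
subject = "a castle on a hill at sunset"
scores = {"trending on artstation": 0.91,
          "highly detailed": 0.78,
          "oil painting": 0.32}
print(reconstruct_prompt(subject, scores))
# → a castle on a hill at sunset, trending on artstation, highly detailed
```

In the actual attack the two components are trained models; this sketch only shows how their outputs would be merged back into prompt form.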