Class-incremental learning (CIL) poses significant challenges in open-world scenarios, where models must learn new classes over time without forgetting previous ones, while also handling inputs from unknown classes that a closed-set model would misclassify. In this paper, we present an in-depth analysis of post-hoc out-of-distribution (OOD) detection methods and investigate their potential to eliminate the need for a memory buffer. We find that post-hoc OOD detection, applied at inference time, can effectively replace buffer-based strategies. We examine the performance of these methods in terms of classification accuracy on seen samples and rejection rates on unseen samples. Our approach achieves competitive performance compared with recent multi-head and single-head methods that rely on memory buffers, as well as with other buffer-free approaches: it outperforms them in the closed-world setting and detects unseen samples while being significantly more resource-efficient. Experimental results on CIFAR-10, CIFAR-100, and Tiny ImageNet support these findings and offer new insights into the design of efficient, privacy-preserving CIL systems for open-world settings.
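To make the inference-time idea concrete, the following is a minimal sketch, not the paper's exact pipeline, of how a post-hoc OOD score can be attached to an already-trained incremental classifier: the energy score is used here as one common post-hoc choice, and the function names, the rejection threshold, and the use of PyTorch are illustrative assumptions.

```python
import torch


def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Negative free energy of the logits; higher values indicate more in-distribution inputs."""
    return temperature * torch.logsumexp(logits / temperature, dim=-1)


@torch.no_grad()
def predict_open_world(model, x, threshold: float, unknown_label: int = -1):
    """Classify inputs over the seen classes and reject likely unseen-class inputs post hoc."""
    logits = model(x)              # shape: (batch, num_seen_classes)
    scores = energy_score(logits)  # one post-hoc OOD score per sample
    preds = logits.argmax(dim=-1)
    # Samples scoring below the threshold are treated as unseen (open-world rejection).
    preds[scores < threshold] = unknown_label
    return preds, scores
```

Because the score is computed only from the classifier's outputs at inference time, no stored exemplars from earlier tasks are required, which is what allows such methods to stand in for a memory buffer.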