Machine unlearning---the ability to remove designated concepts from a pre-trained model---has advanced swiftly. However, existing methods typically assume that unlearning requests arrive all at once, whereas in practice, they often occur sequentially. In this paper, we present the first systematic study of continual unlearning in text-to-image generation. We show that popular unlearning methods suffer from rapid retention failures: after only a few requests, the model drastically forgets retained knowledge and produces degraded images. Our analysis attributes this behavior to cumulative parameter drift, which causes successive unlearned models to progressively diverge from the pre-training manifold. Motivated by this insight, we investigate add-on mechanisms that (1) mitigate drift and (2) crucially, remain compatible with existing unlearning methods. Extensive experiments demonstrate that constraining model updates and merging independently unlearned models are effective solutions, suggesting promising directions for future exploration. Taken together, our study positions continual unlearning as a fundamental problem in image generation, offering insights, accessible baselines, and open challenges to advance safe and accountable generative AI.
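As a concrete (and deliberately simplified) illustration of the two add-on mechanisms explored here, the sketch below shows one plausible form each could take in PyTorch: rescaling the per-request parameter delta to bound drift from the base model, and merging independently unlearned models via averaged task vectors. The unlearn routine and both helper names are hypothetical placeholders for illustration, not the exact procedures used in the paper.

import copy
import torch

def unlearn(model, concepts):
    """Placeholder for an existing unlearning method (hypothetical interface).
    Assumed to return a model with the given concept(s) removed."""
    raise NotImplementedError

def constrain_update(base_state, unlearned_state, max_norm=1.0):
    """Rescale the parameter delta from the base model so its total L2 norm
    stays below max_norm, limiting drift from the pre-training manifold."""
    deltas = {k: unlearned_state[k] - base_state[k] for k in base_state}
    total = torch.sqrt(sum((d.float() ** 2).sum() for d in deltas.values()))
    scale = min(1.0, max_norm / (float(total) + 1e-12))
    return {k: base_state[k] + scale * deltas[k] for k in base_state}

def merge_independent_unlearning(base_model, concepts):
    """Unlearn each concept independently from the base model, then merge by
    adding the averaged task vectors (weight deltas) back onto the base weights."""
    base_state = copy.deepcopy(base_model.state_dict())
    merged = {k: v.clone() for k, v in base_state.items()}
    for c in concepts:
        state = unlearn(copy.deepcopy(base_model), [c]).state_dict()
        for k in merged:
            if torch.is_floating_point(merged[k]):  # skip integer buffers
                merged[k] += (state[k] - base_state[k]) / len(concepts)
    merged_model = copy.deepcopy(base_model)
    merged_model.load_state_dict(merged)
    return merged_model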
 
Figure (panels: Unlearning 12 Styles; Unlearning 12 Objects). Unlearning sequentially leads to faster model degradation than unlearning simultaneously. Sequential unlearning continues from the previous checkpoint, while simultaneous unlearning restarts from the base model and unlearns all requests (previous + current).
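To make the two protocols concrete, here is a minimal sketch of the setup described above, reusing the hypothetical unlearn routine from the earlier snippet: sequential unlearning always continues from the previous checkpoint, whereas simultaneous unlearning restarts from the base model and re-unlearns every request seen so far.

import copy
# `unlearn` is the hypothetical placeholder routine from the earlier sketch.

def sequential_unlearning(base_model, requests):
    """Each new request is unlearned from the PREVIOUS checkpoint,
    so parameter drift accumulates across requests."""
    model = copy.deepcopy(base_model)
    checkpoints = []
    for concept in requests:
        model = unlearn(model, [concept])
        checkpoints.append(copy.deepcopy(model))
    return checkpoints

def simultaneous_unlearning(base_model, requests):
    """When request t arrives, restart from the BASE model and unlearn all
    requests so far (previous + current), at a much higher cost per request."""
    checkpoints = []
    for t in range(1, len(requests) + 1):
        checkpoints.append(unlearn(copy.deepcopy(base_model), requests[:t]))
    return checkpoints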
 
Figure (panels: Unlearning 12 Styles; Unlearning 12 Objects). Simultaneous unlearning requires significantly more resources than sequential unlearning.
 
Figure (panels: Qualitative Results: Styles; Qualitative Results: Objects). We see strong improvements in cross-domain retention: when unlearning styles, the model retains objects well, and vice versa. Disentangling concepts within the same domain appears to be a harder challenge.
 
Figure (panels: Unlearning 12 Styles; Unlearning 12 Objects). Using our projection method, we see strong improvements in retaining within-domain concepts.
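The exact projection mechanism is specified in the paper; as a rough, generic illustration of the idea of projecting unlearning updates away from directions that matter for retained concepts, here is a standard PCGrad/GEM-style gradient projection. grad_forget and grad_retain are hypothetical per-step gradients of the unlearning and retention objectives, and this sketch is not claimed to match the paper's formulation.

import torch

def project_unlearning_grad(grad_forget, grad_retain, eps=1e-12):
    """When the unlearning gradient conflicts with the retain gradient,
    remove its component along the retain direction so the unlearning step
    does not (to first order) increase the retention loss."""
    gf, gr = grad_forget.flatten(), grad_retain.flatten()
    coef = torch.dot(gf, gr) / (torch.dot(gr, gr) + eps)
    if coef < 0:  # descending along gf would increase the retain loss
        gf = gf - coef * gr
    return gf.reshape(grad_forget.shape)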
@inproceedings{lee2025empirical,
  title={An Empirical Exploration of Continual Unlearning for Image Generation},
  author={Lee, Justin and Mai, Zheda and Fan, Chongyu and Chao, Wei-Lun},
  booktitle={ICML 2025 Workshop on Machine Unlearning for Generative AI},
  year={2025}
}