BIA: Black Hole-Driven Identity Absorbing in Diffusion Models

Identity Unlearning

CVPR 2025
Kyungpook National University, South Korea

🔥 [NEW!] Black Hole-Driven Identity Absorption (BIA) is a novel latent-space method for erasing identities in diffusion models. Instead of relying on random latent traversals, BIA forms a black hole region that absorbs identity-specific representations while preserving visual realism. It outperforms state-of-the-art methods on identity similarity (SID) and FID, achieving clean erasure while maintaining other attributes.

Abstract

Instruction-tuned diffusion models offer promising pathways for controllable image generation, yet the challenge of identity removal remains underexplored. We introduce a novel identity erasure method within the latent space of diffusion models.

  1. Identity Erasure in Latent Space. We tackle the challenge of removing specific identities from generated images by operating directly in the latent space of diffusion models, addressing privacy and personalization needs.
  2. BIA Method. We propose BIA (Black Hole-Driven Identity Absorption), a novel mechanism that uses a black hole metaphor to absorb and neutralize identity-specific representations, preventing them from reappearing during generation.
  3. Semantic Validity. Instead of relying on random or orthogonal latent traversals, BIA wraps and attracts validated neighboring identity representations to ensure high-quality, semantically consistent outputs without the target identity.
  4. Performance. BIA outperforms state-of-the-art methods on identity similarity (SID) and FID metrics, and achieves superior qualitative results and user study preferences, ensuring clean identity removal while preserving other attributes.
  5. Open-source. We plan to release the BIA framework, training code, and evaluation benchmarks to support future research in privacy-aware generative modeling.

BIA: Black Hole Region Formation

We begin by inverting the source image xr to obtain its latent code hr in the h-space. To strengthen the black hole region for identity unlearning, we sample n neighboring latent codes hj = hr + Δ, where Δ is a random latent code hran scaled by a factor in the range [0, αmax].
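
As a concrete illustration, a minimal PyTorch sketch of this neighbor-sampling step is given below. The tensor shapes, the normalization of hran, and the uniform draw of the scale are our assumptions, not details taken from the paper.

  import torch

  def sample_neighbors(h_r: torch.Tensor, n: int, alpha_max: float) -> torch.Tensor:
      # Random latent codes h_ran with the same shape as h_r, one per neighbor.
      h_ran = torch.randn(n, *h_r.shape)
      # Normalize each code so the scale alpha alone controls the step size
      # (an assumed choice; the paper only states that delta is scaled in [0, alpha_max]).
      h_ran = h_ran / h_ran.flatten(1).norm(dim=1).view(-1, *([1] * h_r.dim()))
      # Per-neighbor scale drawn uniformly from [0, alpha_max].
      alpha = torch.rand(n, *([1] * h_r.dim())) * alpha_max
      # h_j = h_r + delta for each of the n neighbors.
      return h_r.unsqueeze(0) + alpha * h_ran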

Each decoded image xj is embedded using ArcFace to extract its identity feature fj. We compute the cosine similarity simr,j between fj and the original identity feature fr. Latent codes with simr,j > thr are considered identity-similar (label = 1), while others are labeled dissimilar (label = 0).
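
In code, the labeling step might look like the following sketch, where decode and arcface are hypothetical stand-ins for the diffusion decoder and a pre-trained ArcFace embedder, and the default threshold value is purely illustrative.

  import torch
  import torch.nn.functional as F

  def label_neighbors(h_neighbors, f_r, decode, arcface, thr=0.5):
      # Assign label 1 (identity-similar) or 0 (dissimilar) to each neighbor.
      labels = []
      for h_j in h_neighbors:
          x_j = decode(h_j)                            # decode latent back to an image
          f_j = arcface(x_j)                           # ArcFace identity feature f_j
          sim = F.cosine_similarity(f_j, f_r, dim=-1)  # sim_{r,j}
          labels.append(int(sim.item() > thr))         # 1 if sim_{r,j} > thr else 0
      return torch.tensor(labels)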

These labeled latent codes are used to train a Support Vector Machine (SVM), which defines a separation direction did and constructs a boundary around hr in latent space. This boundary effectively delineates the black hole region 𝔅h, which absorbs identity-specific information during the unlearning process.
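
With a linear SVM, the separation direction can be read off as the unit normal of the decision hyperplane. The sketch below uses scikit-learn; the flattening of latent codes into vectors and the hyperparameters are our assumptions.

  import numpy as np
  from sklearn.svm import LinearSVC

  def fit_identity_boundary(h_codes: np.ndarray, labels: np.ndarray):
      # h_codes: (num_samples, dim) flattened latent codes; labels: 0/1 identity labels.
      svm = LinearSVC(C=1.0, max_iter=10000).fit(h_codes, labels)
      # The unit normal of the separating hyperplane serves as d_id.
      d_id = svm.coef_.ravel() / np.linalg.norm(svm.coef_)
      # decision_function(h) > 0 marks the identity side of the boundary,
      # i.e., latent codes inside the black hole region around h_r.
      return d_id, svm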

BIA Framework

Identity Absorption and Black Hole Wrapping

An overview of the proposed Black Hole-Driven Identity Absorption (BIA) framework is illustrated. Starting with a source image xr, we extract its latent code hr in h-space. Using the black hole region formation process, a black hole is formed around hr, attracting neighboring latent codes to absorb identity-specific features. To guide this absorption, a wrapper loss Lwrapper is introduced, encouraging identity-similar points within the black hole to align with non-identity ones while preserving image quality. Additionally, a semantic loss Lsem ensures that hr and its edited version ĥr retain non-identity attributes. The result is a generative model that effectively removes identity information while preserving high semantic and visual fidelity.
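
The paper's exact loss formulations are not reproduced on this page; the sketch below is only a hypothetical reading of how the two terms could combine, with mean-squared error as an assumed stand-in for both objectives and lam as an assumed weighting.

  import torch.nn.functional as F

  def bia_losses(h_pos, h_neg, feat_r, feat_edit, lam=1.0):
      # Wrapper loss: pull identity-similar codes (h_pos) inside the black hole
      # toward the non-identity ones (here, toward their mean; an assumption).
      target = h_neg.mean(dim=0, keepdim=True).expand_as(h_pos)
      l_wrapper = F.mse_loss(h_pos, target)
      # Semantic loss: keep non-identity attributes of x_r and its edit aligned.
      l_sem = F.mse_loss(feat_edit, feat_r)
      return l_wrapper + lam * l_sem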

Qualitative Comparison

For each source image (first row), we compare identity unlearning results from GUIDE [53], a baseline, and our approach BIA, all applied within the pre-trained model.

BibTeX

  @inproceedings{BIA2025,
    author    = {Shaheryar, Muhammad and Lee, Jong Taek and Jung, Soon Ki},
    title     = {BIA: Black Hole-Driven Identity Absorbing in Diffusion Models},
    booktitle = {CVPR},
    year      = {2025}
  }

Acknowledgement

This website is adapted from LLaVA, licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. We thank the LLaMA team for giving us access to open-source projects.