We propose DDAVS, an audio-visual segmentation framework that disentangles audio semantics and performs delayed bidirectional modality alignment to robustly localize sounding objects at the pixel level. DDAVS introduces an Audio Query Module with a prototype memory bank, a contrastive optimization module, and a multi-stage Audio-Visual Alignment Module, achieving state-of-the-art performance on AVS-Objects and VPO benchmarks, especially in challenging multi-source, subtle, distant, and off-screen scenarios.
Audio–Visual Segmentation (AVS) aims to localize sound-producing objects at the pixel level by jointly leveraging auditory and visual information. However, existing methods often suffer from multi-source entanglement and audio–visual misalignment, which bias predictions toward louder or larger objects while overlooking weaker, smaller, or co-occurring sources. To address these challenges, we propose DDAVS, a Disentangled Audio Semantics and Delayed Bidirectional Alignment framework. To mitigate multi-source entanglement, DDAVS employs learnable queries to extract audio semantics and anchor them within a structured semantic space derived from an audio prototype memory bank; the anchored semantics are further refined through contrastive learning to enhance their discriminability and robustness. To alleviate audio–visual misalignment, DDAVS introduces dual cross-attention with delayed modality interaction, improving the robustness of multimodal alignment. Extensive experiments on the AVS-Objects and VPO benchmarks demonstrate that DDAVS consistently outperforms existing approaches across single-source, multi-source, and multi-instance scenarios, validating the effectiveness and generalization ability of our framework under challenging real-world audio–visual segmentation conditions.
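As a rough illustration of the alignment idea, the following is a minimal sketch of dual cross-attention with delayed modality interaction: each modality is first refined independently (the "delay") before audio queries and visual features attend to each other in both directions. The module name, dimensions, and layer choices here are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class DelayedBidirectionalAlignment(nn.Module):
    """Sketch: dual cross-attention with delayed modality interaction.

    Each modality is refined on its own via self-attention before any
    cross-modal exchange; only then do the two directions of cross-attention
    (audio -> visual and visual -> audio) align the modalities.
    """

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        # Per-modality self-attention applied before cross-modal interaction.
        self.audio_self = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.visual_self = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Dual (bidirectional) cross-attention blocks.
        self.a2v_cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.v2a_cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_a = nn.LayerNorm(dim)
        self.norm_v = nn.LayerNorm(dim)

    def forward(self, audio_queries: torch.Tensor, visual_feats: torch.Tensor):
        # audio_queries: (B, Nq, dim) disentangled audio semantic queries
        # visual_feats:  (B, HW, dim) flattened visual feature tokens
        # 1) Delayed interaction: refine each modality independently first.
        a, _ = self.audio_self(audio_queries, audio_queries, audio_queries)
        v, _ = self.visual_self(visual_feats, visual_feats, visual_feats)
        a = self.norm_a(audio_queries + a)
        v = self.norm_v(visual_feats + v)
        # 2) Dual cross-attention: align both directions.
        a_aligned, _ = self.v2a_cross(a, v, v)  # audio queries attend to vision
        v_aligned, _ = self.a2v_cross(v, a, a)  # visual tokens attend to audio
        return a + a_aligned, v + v_aligned


if __name__ == "__main__":
    module = DelayedBidirectionalAlignment(dim=256, heads=8)
    audio_q = torch.randn(2, 4, 256)        # e.g. 4 audio queries per clip
    visual = torch.randn(2, 56 * 56, 256)   # flattened visual feature map
    a_out, v_out = module(audio_q, visual)
    print(a_out.shape, v_out.shape)         # (2, 4, 256) and (2, 3136, 256)
```

The aligned audio queries would then be used (e.g., by a mask decoder) to produce per-query segmentation masks; that downstream step is omitted here.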