Special Sessions
We are delighted to announce the opportunity to submit papers for two special sessions at the upcoming International Workshop on Acoustic Signal Enhancement (IWAENC) 2024, which will be held in the Music House (Musikkens Hus) in Aalborg, Denmark, from September 9–12, 2024. These sessions offer unique platforms to explore cutting-edge advancements in audio signal processing and deep learning:
1. Special Session on “Signal Processing and Deep Learning-Based Approaches to Audio Telepresence”
Organizers: Mingsian R. Bai and Boaz Rafaely
Short Description: This special session will explore advances in the theory, implementation, and applications of audio and speech processing for audio telepresence (AT), with both signal processing and deep learning examined in this context. The goal is to bring together the core technologies relevant to telepresence on devices such as laptops and tablets, smart speakers, VR glasses, gaming stations, and others.
The scope of the special session shall include, but not be limited to, the following topics:
- Microphone and loudspeaker array signal processing for AT: source counting, localization, beamforming, soundfield synthesis and zone control, etc.
- Binaural AT using headphones and global AT using loudspeaker arrays
- Signal processing-based systems, deep learning-based systems, and hybrid systems combining the two
- AT-specific performance metrics and evaluation
- Scalability of signal enhancement and ambience preservation
- Enhancement techniques, including denoising, dereverberation, acoustic echo cancellation, etc., for AT
- Interpolation of array Relative Transfer Functions (RTFs)
- Online, real-time, and low-complexity implementations of telepresence systems
- Application scenarios of AT
2. Special Session on “AI-Guided Signal Processing for Efficient, Controllable, and Interpretable Audio Enhancement”
Organizers: Pejman Mowlaee, Jesper Rindom Jensen and Tim Fingscheidt
Short Description: The focus of this special session is to exploit domain expertise to break down audio enhancement problems, identifying meaningful ways of using machine learning in combination with traditional audio signal processing. In addition to enabling the use of smaller and more efficient machine learning models, this approach may be key to restoring the flexibility, controllability, and interpretability of signal processing approaches while leveraging the robustness of data-driven methods.
Researchers in the field are invited to submit papers on the following non-exhaustive list of topics:
- Combination of optimal filtering and machine-learning-based statistics estimation
- Data-driven beamformer designs
- Hybrid methods involving statistical signal processing and machine-learning-based approaches
- Blind source separation and extraction guided by machine-learned models
- Enhancement/extraction methods guided by machine learning (e.g., for target selection)