Involved in the 3DTV Content Search Project, sponsored by the European FP7 Programme [URL]
Publications
Selected Papers
Kazuki Shimada, Archontis Politis, Parthasaarathy Sudarsanam, Daniel Krause, Kengo Uchida, Sharath Adavanne, Aapo Hakala, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Tuomas Virtanen, Yuki Mitsufuji, “STARSS23: An Audio-Visual Dataset of Spatial Recordings of Real Scenes with Spatiotemporal Annotations of Sound Events,” accepted at Neural Information Processing Systems (NeurIPS), 2023 [arXiv][dataset]
Silin Gao, Beatriz Borges, Soyoung Oh, Deniz Bayazit, Saya Kanno, Hiromi Wakaki, Yuki Mitsufuji, Antoine Bosselut, “PeaCoK: Persona Commonsense Knowledge for Consistent and Engaging Narratives,” in Proc. Annual Meeting of the Association for Computational Linguistics (ACL), pp. 6569–6591, 2023 [ACL][arXiv][code][bibtex] – Outstanding Paper Award [Certificate]
Naoki Murata, Koichi Saito, Chieh-Hsin Lai, Yuhta Takida, Toshimitsu Uesaka, Yuki Mitsufuji, Stefano Ermon, “GibbsDDRM: A Partially Collapsed Gibbs Sampler for Solving Blind Linear Inverse Problems with Denoising Diffusion Restoration,” in Proc. International Conference on Machine Learning (ICML), pp. 25501–25522, 2023 [PMLR][OpenReview][arXiv][code]
Chieh-Hsin Lai, Yuhta Takida, Naoki Murata, Toshimitsu Uesaka, Yuki Mitsufuji, Stefano Ermon, “FP-Diffusion: Improving Score-based Diffusion Models by Enforcing the Underlying Score Fokker-Planck Equation,” in Proc. International Conference on Machine Learning (ICML), pp. 18365–18398, 2023 [PMLR][OpenReview][arXiv][code]
Hao-Wen Dong, Naoya Takahashi, Yuki Mitsufuji, Julian McAuley, Taylor Berg-Kirkpatrick, “CLIPSep: Learning Text-queried Sound Separation with Noisy Unlabeled Videos,” in Proc. International Conference on Learning Representations (ICLR), 2023 [OpenReview][arXiv][demo][code]
Silin Gao, Jena D. Hwang, Saya Kanno, Hiromi Wakaki, Yuki Mitsufuji, Antoine Bosselut, “ComFact: A Benchmark for Linking Contextual Commonsense Knowledge,” in Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1656–1675, 2022 [ACL][arXiv][code][bibtex]
Yuhta Takida, Takashi Shibuya, Wei-Hsiang Liao, Chieh-Hsin Lai, Junki Ohmura, Toshimitsu Uesaka, Naoki Murata, Shusuke Takahashi, Toshiyuki Kumakura, Yuki Mitsufuji, “SQ-VAE: Variational Bayes on Discrete Representation with Self-annealed Stochastic Quantization,” in Proc. International Conference on Machine Learning (ICML), pp. 20987–21012, 2022 [PMLR][arXiv][code][bibtex]
Naoya Takahashi, Yuki Mitsufuji, “Densely Connected Multi-Dilated Convolutional Networks for Dense Prediction Tasks,” in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 993–1002, 2021 [CVF][IEEE][arXiv][code][bibtex]
Journal Papers
Giorgio Fabbro, Stefan Uhlich, Chieh-Hsin Lai, Woosung Choi, Marco Martínez-Ramírez, Wei-Hsiang Liao, Igor Gadelha, Geraldo Ramos, Eddie Hsu, Hugo Rodrigues, Fabian-Robert Stöter, Alexandre Défossez, Yi Luo, Jianwei Yu, Dipam Chakraborty, Sharada Mohanty, Roman Solovyev, Alexander Stempkovskiy, Tatiana Habruseva, Nabarun Goswami, Tatsuya Harada, Minseok Kim, Jun Hyung Lee, Yuanliang Dong, Xinran Zhang, Jiafeng Liu, Yuki Mitsufuji, “The Sound Demixing Challenge 2023 – Music Demixing Track,” submitted to Transactions of the International Society for Music Information Retrieval (TISMIR), 2023 [arXiv]
Stefan Uhlich, Giorgio Fabbro, Masato Hirano, Shusuke Takahashi, Gordon Wichern, Jonathan Le Roux, Dipam Chakraborty, Sharada Mohanty, Kai Li, Yi Luo, Jianwei Yu, Rongzhi Gu, Roman Solovyev, Alexander Stempkovskiy, Tatiana Habruseva, Mikhail Sukhovei, Yuki Mitsufuji, “The Sound Demixing Challenge 2023 – Cinematic Demixing Track,” submitted to Transactions of the International Society for Music Information Retrieval (TISMIR), 2023 [arXiv]
Naoya Takahashi, Mayank Kumar Singh, Yuki Mitsufuji, “Robust One-Shot Singing Voice Conversion,” to be submitted to IEEE/ACM Transactions on Audio, Speech, and Language Processing (Trans. ASLP), 2023 [arXiv][demo]
Masato Hirano, Kazuki Shimada, Yuichiro Koyama, Shusuke Takahashi, Yuki Mitsufuji, “Diffusion-based Signal Refiner for Speech Separation,” to be submitted to IEEE/ACM Transactions on Audio, Speech, and Language Processing (Trans. ASLP), 2023 [arXiv]
Ryosuke Sawata, Naoya Takahashi, Stefan Uhlich, Shusuke Takahashi, Yuki Mitsufuji, “The Whole Is Greater than the Sum of Its Parts: Improving DNN-based Music Source Separation,” under review at IEEE/ACM Transactions on Audio, Speech, and Language Processing (Trans. ASLP), 2023 [arXiv]
Yuhta Takida, Wei-Hsiang Liao, Toshimitsu Uesaka, Shusuke Takahashi, Yuki Mitsufuji, “Preventing Oversmoothing in VAE via Generalized Variance Parameterization,” Neurocomputing, vol. 509, pp. 137–156, 2022 [Elsevier][arXiv]
Yuki Mitsufuji, Giorgio Fabbro, Stefan Uhlich, Fabian-Robert Stöter, Alexandre Défossez, Minseok Kim, Woosung Choi, Chin-Yun Yu, Kin-Wai Cheuk, “Music Demixing Challenge 2021,” Frontiers in Signal Processing (Front. Signal Process.), vol. 1, 2022 [Frontiers][arXiv][challenge][bibtex]
Jihui Aimee Zhang, Naoki Murata, Yu Maeno, Prasanga N. Samarasinghe, Thushara D. Abhayapala, Yuki Mitsufuji, “Coherence-Based Performance Analysis on Noise Reduction in Multichannel Active Noise Control Systems,” Journal of the Acoustical Society of America (JASA), vol. 148, issue 3, 2020 [ASA]
Yuki Mitsufuji, Norihiro Takamune, Shoichi Koyama, Hiroshi Saruwatari, “Multichannel Blind Source Separation Based on Evanescent-Region-Aware Non-Negative Tensor Factorization in Spherical Harmonic Domain,” IEEE/ACM Transactions on Audio, Speech, and Language Processing (Trans. ASLP), vol. 29, pp. 607–617, 2020 [IEEE][bibtex]
Tetsu Magariyachi, Yuki Mitsufuji, “Analytic Error Control Methods for Efficient Rotation in Dynamic Binaural Rendering of Ambisonics,” Journal of the Acoustical Society of America (JASA), vol. 147, issue 1, 2020 [ASA]
Yu Maeno, Yuki Mitsufuji, Prasanga N. Samarasinghe, Naoki Murata, Thushara D. Abhayapala, “Spherical-Harmonic-Domain Feedforward Active Noise Control Using Sparse Decomposition of Reference Signals from Distributed Sensor Arrays,” IEEE/ACM Transactions on Audio, Speech, and Language Processing (Trans. ASLP), vol. 28, pp. 656–670, 2019 [IEEE][bibtex]
Yuki Mitsufuji, Stefan Uhlich, Norihiro Takamune, Daichi Kitamura, Shoichi Koyama, Hiroshi Saruwatari, “Multichannel Non-Negative Matrix Factorization Using Banded Spatial Covariance Matrices in Wavenumber Domain,” IEEE/ACM Transactions on Audio, Speech, and Language Processing (Trans. ASLP), vol. 28, pp. 49–60, 2019 [IEEE][bibtex]
Fabian-Robert Stöter, Stefan Uhlich, Antoine Liutkus, Yuki Mitsufuji, “Open-Unmix – A Reference Implementation for Music Source Separation,” Journal of Open Source Software (JOSS), vol. 4, no. 41, p. 1667, 2019 [OSI][code][bibtex]
Yuki Mitsufuji, Axel Röbel, “On the Use of a Spatial Cue as Prior Information for Stereo Sound Source Separation Based on Spatially Weighted Non-Negative Tensor Factorization,” EURASIP Journal on Advances in Signal Processing (EURASIP J. Adv. Signal Process.), issue 1, 2014 [Springer][bibtex]
Conference Papers
Kazuki Shimada, Archontis Politis, Parthasaarathy Sudarsanam, Daniel Krause, Kengo Uchida, Sharath Adavanne, Aapo Hakala, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Tuomas Virtanen, Yuki Mitsufuji, “STARSS23: An Audio-Visual Dataset of Spatial Recordings of Real Scenes with Spatiotemporal Annotations of Sound Events,” accepted at Neural Information Processing Systems (NeurIPS), 2023 [arXiv][dataset]
Kazuki Shimada, Kengo Uchida, Yuichiro Koyama, Takashi Shibuya, Shusuke Takahashi, Yuki Mitsufuji, Tatsuya Kawahara, “Zero- and Few-shot Sound Event Localization and Detection,” submitted to International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024 [arXiv]
Carlos Hernandez-Olivan, Koichi Saito, Naoki Murata, Chieh-Hsin Lai, Marco A. Martínez-Ramírez, Wei-Hsiang Liao, Yuki Mitsufuji, “VRDMG: Vocal Restoration via Diffusion Posterior Sampling with Multiple Guidance,” submitted to International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024 [arXiv][demo]
Hao Shi, Kazuki Shimada, Masato Hirano, Takashi Shibuya, Yuichiro Koyama, Zhi Zhong, Shusuke Takahashi, Tatsuya Kawahara, Yuki Mitsufuji, “Diffusion-Based Speech Enhancement with Joint Generative and Predictive Decoders,” submitted to International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024 [arXiv]
Takashi Shibuya, Yuhta Takida, Yuki Mitsufuji, “BigVSAN: Enhancing GAN-based Neural Vocoders with Slicing Adversarial Network,” submitted to International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024 [arXiv][demo][code]
Eleonora Grassucci, Yuki Mitsufuji, Ping Zhang, Danilo Comminiello, “Enhancing Semantic Communication with Deep Generative Models – An ICASSP Special Session Overview,” submitted to International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024 [arXiv]
Zhi Zhong, Hao Shi, Masato Hirano, Kazuki Shimada, Kazuya Tateishi, Takashi Shibuya, Shusuke Takahashi, Yuki Mitsufuji, “Extending Audio Masked Autoencoders Toward Audio Restoration,” in Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2023 [IEEE][arXiv]
Keisuke Toyama, Taketo Akama, Yukara Ikemiya, Yuhta Takida, Wei-Hsiang Liao, Yuki Mitsufuji, “Automatic Piano Transcription with Hierarchical Frequency-Time Transformer,” in Proc. International Society for Music Information Retrieval (ISMIR) Conference, 2023 [arXiv][code]
Ryosuke Sawata, Naoki Murata, Yuhta Takida, Toshimitsu Uesaka, Takashi Shibuya, Shusuke Takahashi, Yuki Mitsufuji, “Diffiner: A Versatile Diffusion-based Generative Refiner for Speech Enhancement,” in Proc. Annual Conference of the International Speech Communication Association (INTERSPEECH), pp. 3824–3828, 2023 [ISCA][arXiv][code]
Silin Gao, Beatriz Borges, Soyoung Oh, Deniz Bayazit, Saya Kanno, Hiromi Wakaki, Yuki Mitsufuji, Antoine Bosselut, “PeaCoK: Persona Commonsense Knowledge for Consistent and Engaging Narratives,” in Proc. Annual Meeting of the Association for Computational Linguistics (ACL), pp. 6569–6591, 2023 [ACL][arXiv][code][bibtex] – Outstanding Paper Award [Certificate]
Naoki Murata, Koichi Saito, Chieh-Hsin Lai, Yuhta Takida, Toshimitsu Uesaka, Yuki Mitsufuji, Stefano Ermon, “GibbsDDRM: A Partially Collapsed Gibbs Sampler for Solving Blind Linear Inverse Problems with Denoising Diffusion Restoration,” in Proc. International Conference on Machine Learning (ICML), pp. 25501–25522, 2023 [PMLR][OpenReview][arXiv][code]
Chieh-Hsin Lai, Yuhta Takida, Naoki Murata, Toshimitsu Uesaka, Yuki Mitsufuji, Stefano Ermon, “FP-Diffusion: Improving Score-based Diffusion Models by Enforcing the Underlying Score Fokker-Planck Equation,” in Proc. International Conference on Machine Learning (ICML), pp. 18365–18398, 2023 [PMLR][OpenReview][arXiv][code]
Yuhta Takida, Masaaki Imaizumi, Takashi Shibuya, Chieh-Hsin Lai, Toshimitsu Uesaka, Naoki Murata, Yuki Mitsufuji, “SAN: Inducing Metrizability of GAN with Discriminative Normalized Linear Layer,” under review, 2023 [arXiv][code]
Zhi Zhong, Masato Hirano, Kazuki Shimada, Kazuya Tateishi, Shusuke Takahashi, Yuki Mitsufuji, “An Attention-based Approach to Hierarchical Multi-label Music Instrument Classification,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 1–5, 2023 [IEEE][arXiv]
Koichi Saito, Naoki Murata, Toshimitsu Uesaka, Chieh-Hsin Lai, Yuhta Takida, Takao Fukui, Yuki Mitsufuji, “Unsupervised Vocal Dereverberation with Diffusion-based Generative Models,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023 [IEEE][arXiv][demo]
Junghyun Koo, Marco A. Martínez-Ramírez, Wei-Hsiang Liao, Stefan Uhlich, Kyogu Lee, Yuki Mitsufuji, “Music Mixing Style Transfer: A Contrastive Learning Approach to Disentangle Audio Effects,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023 [IEEE][arXiv][demo][code]
Naoya Takahashi, Mayank Kumar Singh, Yuki Mitsufuji, “Hierarchical Diffusion Models for Singing Voice Neural Vocoder,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023 [IEEE][arXiv][demo]
Kin Wai Cheuk, Ryosuke Sawata, Toshimitsu Uesaka, Naoki Murata, Naoya Takahashi, Shusuke Takahashi, Dorien Herremans, Yuki Mitsufuji, “DiffRoll: Diffusion-based Generative Music Transcription with Unsupervised Pretraining Capability,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023 [IEEE][arXiv][demo][code]
Hao-Wen Dong, Naoya Takahashi, Yuki Mitsufuji, Julian McAuley, Taylor Berg-Kirkpatrick, “CLIPSep: Learning Text-queried Sound Separation with Noisy Unlabeled Videos,” in Proc. International Conference on Learning Representations (ICLR), 2023 [OpenReview][arXiv][demo][code]
Silin Gao, Jena D. Hwang, Saya Kanno, Hiromi Wakaki, Yuki Mitsufuji, Antoine Bosselut, “ComFact: A Benchmark for Linking Contextual Commonsense Knowledge,” in Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1656–1675, 2022 [ACL][arXiv][code][bibtex]
Marco A. Martínez-Ramírez, Wei-Hsiang Liao, Giorgio Fabbro, Stefan Uhlich, Chihiro Nagashima, Yuki Mitsufuji, “Automatic Music Mixing with Deep Learning and Out-of-Domain Data,” in Proc. the 23rd International Society for Music Information Retrieval (ISMIR) Conference, pp. 411–418, 2022 [ISMIR][arXiv][demo][code]
Johannes Imort, Giorgio Fabbro, Marco A. Martínez-Ramírez, Stefan Uhlich, Yuichiro Koyama, Yuki Mitsufuji, “Distortion Audio Effects: Learning How to Recover the Clean Signal,” in Proc. the 23rd International Society for Music Information Retrieval (ISMIR) Conference, pp. 218–225, 2022 [ISMIR][arXiv][demo]
Yuhta Takida, Takashi Shibuya, Wei-Hsiang Liao, Chieh-Hsin Lai, Junki Ohmura, Toshimitsu Uesaka, Naoki Murata, Shusuke Takahashi, Toshiyuki Kumakura, Yuki Mitsufuji, “SQ-VAE: Variational Bayes on Discrete Representation with Self-annealed Stochastic Quantization,” in Proc. International Conference on Machine Learning (ICML), pp. 20987–21012, 2022 [PMLR][arXiv][code][bibtex]
Kazuki Shimada, Yuichiro Koyama, Shusuke Takahashi, Naoya Takahashi, Emiru Tsunoo, Yuki Mitsufuji, “Multi-ACCDOA: Localizing and Detecting Overlapping Sounds from the Same Class with Auxiliary Duplicating Permutation Invariant Training,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 316–320, 2022 [IEEE][arXiv][bibtex]
Bo-Yu Chen, Wei-Han Hsu, Wei-Hsiang Liao, Marco A. Martínez-Ramírez, Yuki Mitsufuji, Yi-Hsuan Yang, “Automatic DJ Transitions with Differentiable Audio Effects and Generative Adversarial Networks,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 466–470, 2022 [IEEE][arXiv][demo][code][bibtex]
Yuichiro Koyama, Kazuhide Shigemi, Masafumi Takahashi, Kazuki Shimada, Naoya Takahashi, Emiru Tsunoo, Shusuke Takahashi, Yuki Mitsufuji, “Spatial Data Augmentation with Simulated Room Impulse Responses for Sound Event Localization and Detection,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 8872–8876, 2022 [IEEE][arXiv][bibtex]
Yuichiro Koyama, Naoki Murata, Stefan Uhlich, Giorgio Fabbro, Shusuke Takahashi, Yuki Mitsufuji, “Music Source Separation with Deep Equilibrium Models,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 296–300, 2022 [IEEE][arXiv][bibtex]
Ricardo Falcon-Perez, Kazuki Shimada, Yuichiro Koyama, Shusuke Takahashi, Yuki Mitsufuji, “Spatial Mixup: Directional Loudness Modification as Data Augmentation for Sound Event Localization and Detection,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 431–435, 2022 [IEEE][arXiv][code][bibtex]
Naoya Takahashi, Yuki Mitsufuji, “Amicable Examples for Informed Source Separation,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 241–245, 2022 [IEEE][arXiv][bibtex]
Naoya Takahashi, Mayank Kumar Singh, Yuki Mitsufuji, “Source Mixing and Separation Robust Audio Steganography,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 4368–4372, 2022 [arXiv]
Yasuhide Hyodo, Chihiro Sugai, Junya Suzuki, Masafumi Takahashi, Masahiko Koizumi, Asako Tomura, Yuki Mitsufuji, Yota Komoriya, “Psychophysiological Effect of Immersive Spatial Audio Experience Enhanced Using Sound Field Synthesis,” in Proc. International Conference on Affective Computing & Intelligent Interaction (ACII), pp. 1–8, 2021 [IEEE][bibtex]
Naoya Takahashi, Mayank Kumar Singh, Yuki Mitsufuji, “Hierarchical Disentangled Representation Learning for Singing Voice Conversion,” in Proc. International Joint Conference on Neural Networks (IJCNN), pp. 1–7, 2021 [IEEE][arXiv][bibtex]
Naoya Takahashi, Yuki Mitsufuji, “Densely Connected Multi-Dilated Convolutional Networks for Dense Prediction Tasks,” in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 993–1002, 2021 [CVF][IEEE][arXiv][code][bibtex]
Kazuki Shimada, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Yuki Mitsufuji, “ACCDOA: Activity-Coupled Cartesian Direction of Arrival Representation for Sound Event Localization And Detection,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 915–919, 2021 [IEEE][arXiv][code][bibtex]
Naoya Takahashi, Shota Inoue, Yuki Mitsufuji, “Adversarial Attacks on Audio Source Separation,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 521–525, 2021 [IEEE][arXiv][bibtex]
Ryosuke Sawata, Stefan Uhlich, Shusuke Takahashi, Yuki Mitsufuji, “All for One and One for All: Improving Music Separation by Bridging Networks,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 51–55, 2021 [IEEE][arXiv][code][bibtex]
Yu Maeno, Yuhta Takida, Naoki Murata, Yuki Mitsufuji, “Array-Geometry-Aware Spatial Active Noise Control Based on Direction-of-Arrival Weighting,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 8414–8418, 2020 [IEEE][bibtex]
Naoya Takahashi, Mayank Kumar Singh, Sakya Basak, Parthasaarathy Sudarsanam, Sriram Ganapathy, Yuki Mitsufuji, “Improving Voice Separation by Incorporating End-To-End Speech Recognition,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 41–45, 2020 [IEEE][arXiv][bibtex]
Naoki Murata, Jihui Zhang, Yu Maeno, Yuki Mitsufuji, “Global and Local Mode Domain Adaptive Algorithms for Spatial Active Noise Control Using Higher-Order Sources,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 526–530, 2019 [IEEE][bibtex]
Naoya Takahashi, Parthasaarathy Sudarsanam, Nabarun Goswami, Yuki Mitsufuji, “Recursive Speech Separation for Unknown Number of Speakers,” in Proc. Annual Conference of the International Speech Communication Association (INTERSPEECH), pp. 1348–1352, 2019 [ISCA][arXiv][bibtex]
Naoya Takahashi, Purvi Agrawal, Nabarun Goswami, Yuki Mitsufuji, “PhaseNet: Discretized Phase Modeling with Deep Neural Networks for Audio Source Separation,” in Proc. Annual Conference of the International Speech Communication Association (INTERSPEECH), pp. 2713–2717, 2018 [ISCA][bibtex]
Wei-Hsiang Liao, Yuki Mitsufuji, Keiichi Osako, Kazunobu Ohkuri, “Microphone Array Geometry for Two Dimensional Broadband Sound Field Recording,” in Proc. 145th Audio Engineering Society (AES) Convention, 2018 [AES][bibtex]
Yu Maeno, Yuki Mitsufuji, Prasanga N. Samarasinghe, Thushara D. Abhayapala, “Mode-domain Spatial Active Noise Control Using Multiple Circular Arrays,” in Proc. International Workshop on Acoustic Signal Enhancement (IWAENC), pp. 441–445, 2018 [IEEE][bibtex]
Naoya Takahashi, Nabarun Goswami, Yuki Mitsufuji, “MMDenseLSTM: An Efficient Combination of Convolutional and Recurrent Neural Networks for Audio Source Separation,” in Proc. International Workshop on Acoustic Signal Enhancement (IWAENC), 2018 [IEEE][arXiv][bibtex]
Yuki Mitsufuji, Asako Tomura, Kazunobu Ohkuri, “Creating a Highly-Realistic ‘Acoustic Vessel Odyssey’ Using Sound Field Synthesis with 576 Loudspeakers,” in Proc. Audio Engineering Society (AES) Conference on Spatial Reproduction-Aesthetics and Science, 2018 [AES][bibtex]
Yu Maeno, Yuki Mitsufuji, Thushara D. Abhayapala, “Mode Domain Spatial Active Noise Control Using Sparse Signal Representation,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 211–215, 2018 [IEEE][arXiv][bibtex]
Naoya Takahashi, Yuki Mitsufuji, “Multi-Scale Multi-Band DenseNets for Audio Source Separation,” in Proc. Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), pp. 21–25, 2017 [IEEE][arXiv][bibtex]
Stefan Uhlich, Marcello Porcu, Franck Giron, Michael Enenkl, Thomas Kemp, Naoya Takahashi, Yuki Mitsufuji, “Improving Music Source Separation Based on Deep Neural Networks Through Data Augmentation and Network Blending,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 261–265, 2017 [IEEE][bibtex]
Keiichi Osako, Yuki Mitsufuji, Rita Singh, Bhiksha Raj, “Supervised Monaural Source Separation Based on Autoencoders,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 11–15, 2017 [IEEE][bibtex]
Yuki Mitsufuji, Shoichi Koyama, Hiroshi Saruwatari, “Multichannel Blind Source Separation Based on Non-Negative Tensor Factorization in Wavenumber Domain,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 56–60, 2016 [IEEE][bibtex]
Stefan Uhlich, Franck Giron, Yuki Mitsufuji, “Deep Neural Network Based Instrument Extraction from Music,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 2135–2139, 2015 [IEEE][bibtex]
Xin Guo, Stefan Uhlich, Yuki Mitsufuji, “NMF-Based Blind Source Separation Using a Linear Predictive Coding Error Clustering Criterion,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 261–265, 2015 [IEEE][bibtex]
Yuki Mitsufuji, Marco Liuni, Alex Baker, Axel Röbel, “Online Non-Negative Tensor Deconvolution for Source Detection in 3DTV Audio,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 3082–3086, 2014 [IEEE][bibtex]
Yuki Mitsufuji, Axel Röbel, “Sound Source Separation Based on Non-Negative Tensor Factorization Incorporating Spatial Cue as Prior Knowledge,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 71–75, 2013 [IEEE][bibtex]
Workshop Papers
Chieh-Hsin Lai, Yuhta Takida, Toshimitsu Uesaka, Naoki Murata, Yuki Mitsufuji, Stefano Ermon, “On the Equivalence of Consistency-Type Models: Consistency Models, Consistent Diffusion Models, and Fokker-Planck Regularization,” ICML 2023 Workshop on Structured Probabilistic Inference & Generative Modeling (ICML SPIGM), 2023 [OpenReview][arXiv]
Kazuki Shimada, Archontis Politis, Parthasaarathy Sudarsanam, Daniel Krause, Kengo Uchida, Sharath Adavanne, Aapo Hakala, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Tuomas Virtanen, Yuki Mitsufuji, “Toward an Audio-Visual Dataset of Spatial Recordings of Real Scenes with Spatiotemporal Annotations of Sound Events,” CVPR 2023 Workshop Sight and Sound (CVPR WSS), 2023 [URL][dataset]
Silin Gao, Jena D. Hwang, Saya Kanno, Hiromi Wakaki, Yuki Mitsufuji, Antoine Bosselut, “ComFact: A Benchmark for Linking Contextual Commonsense Knowledge,” in Proc. AAAI 2023 Workshop on Knowledge Augmented Methods for NLP (KnowledgeNLP-AAAI’23), 2023 [AAAI][code]
Chieh-Hsin Lai, Yuhta Takida, Naoki Murata, Toshimitsu Uesaka, Yuki Mitsufuji, Stefano Ermon, “Regularizing Score-based Models with Score Fokker-Planck Equations,” in Proc. NeurIPS 2022 Workshop on Score-Based Methods (NeurIPS SBM), 2022 [OpenReview]
Archontis Politis, Kazuki Shimada, Parthasaarathy Sudarsanam, Sharath Adavanne, Daniel Krause, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Yuki Mitsufuji, Tuomas Virtanen, “STARSS22: A Dataset of Spatial Recordings of Real Scenes with Spatiotemporal Annotations of Sound Events,” in Proc. the 7th Detection and Classification of Acoustic Scenes and Events 2022 Workshop (DCASE Workshop), 2022 [DCASE][arXiv][dataset]
Fabian-Robert Stöter, Maria Clara Machry, Delton de Andrade Vaz, Stefan Uhlich, Yuki Mitsufuji, Antoine Liutkus, “Open.Unmix.app – Towards Audio Separation on the Edge,” Web Audio Conference (WAC), 2021 [URL][demo]
Joachim Muth, Stefan Uhlich, Nathanael Perraudin, Thomas Kemp, Fabien Cardinaux, Yuki Mitsufuji, “Improving DNN-based Music Source Separation Using Phase Features,” Joint Workshop on Machine Learning for Music at ICML, IJCAI/ECAI and AAMAS, 2018 [arXiv]
Awards
Outstanding Paper Award for “PeaCoK: Persona Commonsense Knowledge for Consistent and Engaging Narratives” at the Annual Meeting of the Association for Computational Linguistics (ACL), 2023 [URL][Certificate]
Local Commendation for Invention 2022 Award [URL][Certificate]
Ranked 1st in Task 3 at DCASE2021 Challenge (IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events) [URL][arXiv]
Ranked 3rd in Task 3 at DCASE2020 Challenge (IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events) [arXiv]
Japan Media Arts Festival 2019 Jury Selections – Acoustic Vessel Odyssey [URL][AES]
Ranked 1st in Music Task at the 2018 Signal Separation Evaluation Campaign [URL]
Ranked 1st in Music Task at the 2016 Signal Separation Evaluation Campaign [URL]
Ranked 1st in Music Task at the 2015 Signal Separation Evaluation Campaign [URL]
Granted Patents
US11067661B2 “Information processing device and information processing method” [URL]
US10924849B2 “Sound source separation device and method” [URL]
US10880638B2 “Sound field forming apparatus and method” [URL]
US10757505B2 “Signal processing device, method, and program stored on a computer-readable medium, enabling a sound to be reproduced at a remote location and a different sound to be reproduced at a location neighboring the remote location” [URL]
US10674255B2 “Sound processing device, method and program” [URL]
US10650841B2 “Sound source separation apparatus and method” [URL]
US10602266B2 “Audio processing apparatus and method, and program” [URL]
US10595148B2 “Sound processing apparatus and method, and program” [URL]
US10567872B2 “Locally silenced sound field forming apparatus and method” [URL]
US10524075B2 “Sound processing apparatus, method, and program” [URL]
US10477309B2 “Sound field reproduction device, sound field reproduction method, and program” [URL]
US10412531B2 “Audio processing apparatus, method, and program” [URL]
US10380991B2 “Signal processing device, signal processing method, and program for selectable spatial correction of multichannel audio signal” [URL]
US10206034B2 “Sound field collecting apparatus and method, sound field reproducing apparatus and method” [URL]
US10015615B2 “Sound field reproduction apparatus and method, and program” [URL]
US9711161B2 “Voice processing apparatus, voice processing method, and program” [URL]
US9654872B2 “Input device, signal processing method, program, and recording medium” [URL]
US9426564B2 “Audio processing device, method and program” [URL]
US9406312B2 “Signal processing apparatus and signal processing method, encoder and encoding method, decoder and decoding method, and program” [URL]
US9380398B2 “Sound processing apparatus, method, and program” [URL]
US9208795B2 “Frequency band extending device and method, encoding device and method, decoding device and method, and program” [URL]
US8295507B2 “Frequency band extending apparatus, frequency band extending method, player apparatus, playing method, program and recording medium” [URL]
Academic Activity
Competition Organizer
Task Organizer at DCASE2023 on “Sound Event Localization and Detection Evaluated in Real Spatial Sound Scenes” [URL][report][dataset]
Task Organizer at DCASE2022 on “Sound Event Localization and Detection Evaluated in Real Spatial Sound Scenes” [URL][report][dataset]
General Chair of Music Demixing (MDX) Challenge 2021 [URL] [report]
Co-Chair of Music Demixing (MDX) Workshop 2021 [URL]
Committee Member and Session Chair
IEEE Audio and Acoustic Signal Processing Technical Committee (AASP TC) Member 2023–2026 [URL]
IEEE ICCE Japan Program Committee Chair 2021–2023
Session Chair at IEEE ICASSP 2024 on “Generative Semantic Communication: How Generative Models Enhance Semantic Communications” [URL]
Oral Session Chair at IEEE ICASSP 2023 on “Diffusion-based Generative Models for Audio and Speech” [URL]
Session Chair at IEEE ICASSP 2022 on “Signal Processing and Neural Approaches for Soundscapes (SiNApS)” [URL]
Session Chair at IEEE ICASSP 2020 on “Active Control of Acoustic Noise over Spatial Regions” [URL]
PhD Supervision
TRAMUCA: Transparency in AI-powered Music Creation Algorithms, 4-year Fully-funded PhD Studentship by Sony and MTG-UPF, Joint Supervision with Dr. Emilia Gómez and Dr. Xavier Serra [URL]
University Lectures
“AI x Creators: Pushing Creative Abilities to the Next” at The University of Tokyo on Dec. 16, 2022 [URL]
“AI & Network Communication Systems”, 7-lecture Course at Tokyo Institute of Technology, 3rd Quarter (Fall), 2022 [URL]
“AI x Creators: Pushing Creative Abilities to the Next” at The University of Tokyo on Feb. 16, 2022 [URL]
“Content Creation by Cutting Edge AI-powered Music Technology” at Tokyo Institute of Technology on Dec. 1, 2021 [URL]
“AI x Creators: Pushing Creative Abilities to the Next Level” at Keio University on Oct. 21, 2021