About me

Expertise

  • PhD in Information Science & Technology, conferred by the University of Tokyo [URL][Certificate]
  • Papers published at ML/CV/NLP/Audio conferences, e.g., ICLR, NeurIPS, ICML, CVPR, ACL, EMNLP, ICASSP (see “Selected Papers”)
  • Fluent in Japanese (native), English (Eiken Grade 1), French (Futsuken Grade Pre-1)

Experience

  • Lead Research Scientist, Sony Research Inc.
  • Head of Creative AI Lab, Sony R&D [URL][demo]
    • Music Restoration of the Canadian Pianist Glenn Gould [YouTube]
    • Soundtrack Restoration of the Classic Movie Lawrence of Arabia [YouTube]
  • Distinguished Engineer, Sony R&D [URL]
  • Specially Appointed Associate Professor at Tokyo Institute of Technology [URL][lecture]
    • Deep Generative Modeling
    • Content Restoration and Generation
  • IEEE Senior Member
  • Invited Researcher at IRCAM 2011–2012 [URL]
    • Involved in the 3DTV Content Search Project Sponsored by European Project FP7 [URL]

Publications

Selected Papers

  1. Kazuki Shimada, Archontis Politis, Parthasaarathy Sudarsanam, Daniel Krause, Kengo Uchida, Sharath Adavanne, Aapo Hakala, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Tuomas Virtanen, Yuki Mitsufuji, “STARSS23: An Audio-Visual Dataset of Spatial Recordings of Real Scenes with Spatiotemporal Annotations of Sound Events,” accepted at Neural Information Processing Systems (NeurIPS), 2023 [arXiv][dataset]
  2. Silin Gao, Beatriz Borges, Soyoung Oh, Deniz Bayazit, Saya Kanno, Hiromi Wakaki, Yuki Mitsufuji, Antoine Bosselut, “PeaCoK: Persona Commonsense Knowledge for Consistent and Engaging Narratives,” in Proc. the Annual Meeting of the Association for Computational Linguistics (ACL), pp. 6569–6591, 2023 [ACL][arXiv][code][bibtex] – Outstanding Paper Award [Certificate]
  3. Naoki Murata, Koichi Saito, Chieh-Hsin Lai, Yuhta Takida, Toshimitsu Uesaka, Yuki Mitsufuji, Stefano Ermon, “GibbsDDRM: A Partially Collapsed Gibbs Sampler for Solving Blind Linear Inverse Problems with Denoising Diffusion Restoration,” in Proc. International Conference on Machine Learning (ICML), pp. 25501–25522, 2023 [PMLR][OpenReview][arXiv][code]
  4. Chieh-Hsin Lai, Yuhta Takida, Naoki Murata, Toshimitsu Uesaka, Yuki Mitsufuji, Stefano Ermon, “FP-Diffusion: Improving Score-based Diffusion Models by Enforcing the Underlying Score Fokker-Planck Equation,” in Proc. International Conference on Machine Learning (ICML), pp. 18365–18398, 2023 [PMLR][OpenReview][arXiv][code]
  5. Hao-Wen Dong, Naoya Takahashi, Yuki Mitsufuji, Julian McAuley, Taylor Berg-Kirkpatrick, “CLIPSep: Learning Text-queried Sound Separation with Noisy Unlabeled Videos,” in Proc. International Conference on Learning Representations (ICLR), 2023 [OpenReview][arXiv][demo][code]
  6. Silin Gao, Jena D. Hwang, Saya Kanno, Hiromi Wakaki, Yuki Mitsufuji, Antoine Bosselut, “ComFact: A Benchmark for Linking Contextual Commonsense Knowledge,” in Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1656–1675, 2022 [ACL][arXiv][code][bibtex]
  7. Yuhta Takida, Takashi Shibuya, WeiHsiang Liao, Chieh-Hsin Lai, Junki Ohmura, Toshimitsu Uesaka, Naoki Murata, Shusuke Takahashi, Toshiyuki Kumakura, Yuki Mitsufuji, “SQ-VAE: Variational Bayes on Discrete Representation with Self-annealed Stochastic Quantization,” in Proc. International Conference on Machine Learning (ICML), pp. 20987–21012, 2022 [PMLR][arXiv][code][bibtex]
  8. Naoya Takahashi, Yuki Mitsufuji, “Densely Connected Multi-Dilated Convolutional Networks for Dense Prediction Tasks,” in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 993–1002, 2021 [CVF][IEEE][arXiv][code][bibtex]

Journal Papers

  1. Giorgio Fabbro, Stefan Uhlich, Chieh-Hsin Lai, Woosung Choi, Marco Martínez-Ramírez, Weihsiang Liao, Igor Gadelha, Geraldo Ramos, Eddie Hsu, Hugo Rodrigues, Fabian-Robert Stöter, Alexandre Défossez, Yi Luo, Jianwei Yu, Dipam Chakraborty, Sharada Mohanty, Roman Solovyev, Alexander Stempkovskiy, Tatiana Habruseva, Nabarun Goswami, Tatsuya Harada, Minseok Kim, Jun Hyung Lee, Yuanliang Dong, Xinran Zhang, Jiafeng Liu, Yuki Mitsufuji, “The Sound Demixing Challenge 2023 – Music Demixing Track,” submitted to Transactions of the International Society for Music Information Retrieval (TISMIR), 2023 [arXiv]
  2. Stefan Uhlich, Giorgio Fabbro, Masato Hirano, Shusuke Takahashi, Gordon Wichern, Jonathan Le Roux, Dipam Chakraborty, Sharada Mohanty, Kai Li, Yi Luo, Jianwei Yu, Rongzhi Gu, Roman Solovyev, Alexander Stempkovskiy, Tatiana Habruseva, Mikhail Sukhovei, Yuki Mitsufuji, “The Sound Demixing Challenge 2023 – Cinematic Demixing Track,” submitted to Transactions of the International Society for Music Information Retrieval (TISMIR), 2023 [arXiv]
  3. Naoya Takahashi, Mayank Kumar Singh, Yuki Mitsufuji, “Robust One-Shot Singing Voice Conversion,” to be submitted to IEEE/ACM Transactions on Audio, Speech, and Language Processing (Trans. ASLP), 2023 [arXiv][demo]
  4. Masato Hirano, Kazuki Shimada, Yuichiro Koyama, Shusuke Takahashi, Yuki Mitsufuji, “Diffusion-based Signal Refiner for Speech Separation,” to be submitted to IEEE/ACM Transactions on Audio, Speech, and Language Processing (Trans. ASLP), 2023 [arXiv]
  5. Ryosuke Sawata, Naoya Takahashi, Stefan Uhlich, Shusuke Takahashi, Yuki Mitsufuji, “The Whole Is Greater than the Sum of Its Parts: Improving DNN-based Music Source Separation,” under review at IEEE/ACM Transactions on Audio, Speech, and Language Processing (Trans. ASLP), 2023 [arXiv]
  6. Yuhta Takida, Wei-Hsiang Liao, Toshimitsu Uesaka, Shusuke Takahashi, Yuki Mitsufuji, “Preventing Oversmoothing in VAE via Generalized Variance Parameterization,” Neurocomputing, vol. 509, pp. 137–156, 2022 [Elsevier][arXiv]
  7. Yuki Mitsufuji, Giorgio Fabbro, Stefan Uhlich, Fabian-Robert Stöter, Alexandre Défossez, Minseok Kim, Woosung Choi, Chin-Yun Yu, Kin-Wai Cheuk, “Music Demixing Challenge 2021,” Frontiers in Signal Processing (Front. Signal Process.), vol. 1, 2022 [Frontiers][arXiv][challenge][bibtex]
  8. Jihui Aimee Zhang, Naoki Murata, Yu Maeno, Prasanga N. Samarasinghe, Thushara D. Abhayapala, Yuki Mitsufuji, “Coherence-Based Performance Analysis on Noise Reduction in Multichannel Active Noise Control Systems,” Journal of the Acoustical Society of America (JASA), vol. 148, issue 3, 2020 [ASA]
  9. Yuki Mitsufuji, Norihiro Takamune, Shoichi Koyama, Hiroshi Saruwatari, “Multichannel Blind Source Separation Based on Evanescent-Region-Aware Non-Negative Tensor Factorization in Spherical Harmonic Domain,” IEEE/ACM Transactions on Audio, Speech, and Language Processing (Trans. ASLP), vol. 29, pp. 607–617, 2020 [IEEE][bibtex]
  10. Tetsu Magariyachi, Yuki Mitsufuji, “Analytic Error Control Methods for Efficient Rotation in Dynamic Binaural Rendering of Ambisonics,” Journal of the Acoustical Society of America (JASA), vol. 147, issue 1, 2020 [ASA]
  11. Yu Maeno, Yuki Mitsufuji, Prasanga N. Samarasinghe, Naoki Murata, Thushara D. Abhayapala, “Spherical-Harmonic-Domain Feedforward Active Noise Control Using Sparse Decomposition of Reference Signals from Distributed Sensor Arrays,” IEEE/ACM Transactions on Audio, Speech, and Language Processing (Trans. ASLP), vol. 28, pp. 656–670, 2019 [IEEE][bibtex]
  12. Yuki Mitsufuji, Stefan Uhlich, Norihiro Takamune, Daichi Kitamura, Shoichi Koyama, Hiroshi Saruwatari, “Multichannel Non-Negative Matrix Factorization Using Banded Spatial Covariance Matrices in Wavenumber Domain,” IEEE/ACM Transactions on Audio, Speech, and Language Processing (Trans. ASLP), vol. 28, pp. 49–60, 2019 [IEEE][bibtex]
  13. Fabian-Robert Stöter, Stefan Uhlich, Antoine Liutkus, Yuki Mitsufuji, “Open-Unmix – A Reference Implementation for Music Source Separation,” Journal of Open Source Software (JOSS), vol. 4, no. 41, p. 1667, 2019 [OSI][code][bibtex]
  14. Yuki Mitsufuji, Axel Röbel, “On the Use of a Spatial Cue as Prior Information for Stereo Sound Source Separation Based on Spatially Weighted Non-Negative Tensor Factorization,” EURASIP Journal on Advances in Signal Processing (EURASIP J. Adv. Signal Process.), issue 1, 2014 [Springer][bibtex]

Conference Papers

  1. Kazuki Shimada, Archontis Politis, Parthasaarathy Sudarsanam, Daniel Krause, Kengo Uchida, Sharath Adavanne, Aapo Hakala, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Tuomas Virtanen, Yuki Mitsufuji, “STARSS23: An Audio-Visual Dataset of Spatial Recordings of Real Scenes with Spatiotemporal Annotations of Sound Events,” accepted at Neural Information Processing Systems (NeurIPS), 2023 [arXiv][dataset]
  2. Kazuki Shimada, Kengo Uchida, Yuichiro Koyama, Takashi Shibuya, Shusuke Takahashi, Yuki Mitsufuji, Tatsuya Kawahara, “Zero- and Few-shot Sound Event Localization and Detection,” submitted to International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024 [arXiv]
  3. Carlos Hernandez-Olivan, Koichi Saito, Naoki Murata, Chieh-Hsin Lai, Marco A. Martínez-Ramirez, Wei-Hsiang Liao, Yuki Mitsufuji, “VRDMG: Vocal Restoration via Diffusion Posterior Sampling with Multiple Guidance,” submitted to International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024 [arXiv][demo]
  4. Hao Shi, Kazuki Shimada, Masato Hirano, Takashi Shibuya, Yuichiro Koyama, Zhi Zhong, Shusuke Takahashi, Tatsuya Kawahara, Yuki Mitsufuji, “Diffusion-Based Speech Enhancement with Joint Generative and Predictive Decoders,” submitted to International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024 [arXiv]
  5. Takashi Shibuya, Yuhta Takida, Yuki Mitsufuji, “BigVSAN: Enhancing GAN-based Neural Vocoders with Slicing Adversarial Network,” submitted to International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024 [arXiv][demo][code]
  6. Eleonora Grassucci, Yuki Mitsufuji, Ping Zhang, Danilo Comminiello, “Enhancing Semantic Communication with Deep Generative Models – An ICASSP Special Session Overview,” submitted to International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024 [arXiv]
  7. Zhi Zhong, Hao Shi, Masato Hirano, Kazuki Shimada, Kazuya Tateishi, Takashi Shibuya, Shusuke Takahashi, Yuki Mitsufuji, “Extending Audio Masked Autoencoders Toward Audio Restoration,” in Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2023 [IEEE][arXiv]
  8. Keisuke Toyama, Taketo Akama, Yukara Ikemiya, Yuhta Takida, WeiHsiang Liao, Yuki Mitsufuji, “Automatic Piano Transcription with Hierarchical Frequency-Time Transformer,” in Proc. International Society for Music Information Retrieval (ISMIR) Conference, 2023 [arXiv][code]
  9. Ryosuke Sawata, Naoki Murata, Yuhta Takida, Toshimitsu Uesaka, Takashi Shibuya, Shusuke Takahashi, Yuki Mitsufuji, “Diffiner: A Versatile Diffusion-based Generative Refiner for Speech Enhancement,” in Proc. Annual Conference of the International Speech Communication Association (INTERSPEECH), pp. 3824–3828, 2023 [ISCA][arXiv][code]
  10. Silin Gao, Beatriz Borges, Soyoung Oh, Deniz Bayazit, Saya Kanno, Hiromi Wakaki, Yuki Mitsufuji, Antoine Bosselut, “PeaCoK: Persona Commonsense Knowledge for Consistent and Engaging Narratives,” in Proc. the Annual Meeting of the Association for Computational Linguistics (ACL), pp. 6569–6591, 2023 [ACL][arXiv][code][bibtex] – Outstanding Paper Award [Certificate]
  11. Naoki Murata, Koichi Saito, Chieh-Hsin Lai, Yuhta Takida, Toshimitsu Uesaka, Yuki Mitsufuji, Stefano Ermon, “GibbsDDRM: A Partially Collapsed Gibbs Sampler for Solving Blind Linear Inverse Problems with Denoising Diffusion Restoration,” in Proc. International Conference on Machine Learning (ICML), pp. 25501–25522, 2023 [PMLR][OpenReview][arXiv][code]
  12. Chieh-Hsin Lai, Yuhta Takida, Naoki Murata, Toshimitsu Uesaka, Yuki Mitsufuji, Stefano Ermon, “FP-Diffusion: Improving Score-based Diffusion Models by Enforcing the Underlying Score Fokker-Planck Equation,” in Proc. International Conference on Machine Learning (ICML), pp. 18365–18398, 2023 [PMLR][OpenReview][arXiv][code]
  13. Yuhta Takida, Masaaki Imaizumi, Takashi Shibuya, Chieh-Hsin Lai, Toshimitsu Uesaka, Naoki Murata, Yuki Mitsufuji, “SAN: Inducing Metrizability of GAN with Discriminative Normalized Linear Layer,” under review, 2023 [arXiv][code]
  14. Zhi Zhong, Masato Hirano, Kazuki Shimada, Kazuya Tateishi, Shusuke Takahashi, Yuki Mitsufuji, “An Attention-based Approach to Hierarchical Multi-label Music Instrument Classification,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 1–5, 2023 [IEEE][arXiv]
  15. Koichi Saito, Naoki Murata, Toshimitsu Uesaka, Chieh-Hsin Lai, Yuhta Takida, Takao Fukui, Yuki Mitsufuji, “Unsupervised Vocal Dereverberation with Diffusion-based Generative Models,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023 [IEEE][arXiv][demo]
  16. Junghyun Koo, Marco A. Martı́nez-Ramı́rez, Wei-Hsiang Liao, Stefan Uhlich, Kyogu Lee, Yuki Mitsufuji, “Music Mixing Style Transfer: A Contrastive Learning Approach to Disentangle Audio Effects,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023 [IEEE][arXiv][demo][code]
  17. Naoya Takahashi, Mayank Kumar Singh, Yuki Mitsufuji, “Hierarchical Diffusion Models for Singing Voice Neural Vocoder,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023 [IEEE][arXiv][demo]
  18. Kin Wai Cheuk, Ryosuke Sawata, Toshimitsu Uesaka, Naoki Murata, Naoya Takahashi, Shusuke Takahashi, Dorien Herremans, Yuki Mitsufuji, “DiffRoll: Diffusion-based Generative Music Transcription with Unsupervised Pretraining Capability,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023 [IEEE][arXiv][demo][code]
  19. Hao-Wen Dong, Naoya Takahashi, Yuki Mitsufuji, Julian McAuley, Taylor Berg-Kirkpatrick, “CLIPSep: Learning Text-queried Sound Separation with Noisy Unlabeled Videos,” in Proc. International Conference on Learning Representations (ICLR), 2023 [OpenReview][arXiv][demo][code]
  20. Silin Gao, Jena D. Hwang, Saya Kanno, Hiromi Wakaki, Yuki Mitsufuji, Antoine Bosselut, “ComFact: A Benchmark for Linking Contextual Commonsense Knowledge,” in Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1656–1675, 2022 [ACL][arXiv][code][bibtex]
  21. Marco A. Martínez Ramírez, WeiHsiang Liao, Giorgio Fabbro, Stefan Uhlich, Chihiro Nagashima, Yuki Mitsufuji, “Automatic Music Mixing with Deep Learning and Out-of-Domain Data,” in Proc. the 23rd International Society for Music Information Retrieval (ISMIR) Conference, pp. 411–418, 2022 [ISMIR][arXiv][demo][code]
  22. Johannes Imort, Giorgio Fabbro, Marco A. Martinez Ramirez, Stefan Uhlich, Yuichiro Koyama, Yuki Mitsufuji, “Distortion Audio Effects: Learning How to Recover the Clean Signal,” in Proc. the 23rd International Society for Music Information Retrieval (ISMIR) Conference, pp. 218–225, 2022 [ISMIR][arXiv][demo]
  23. Yuhta Takida, Takashi Shibuya, WeiHsiang Liao, Chieh-Hsin Lai, Junki Ohmura, Toshimitsu Uesaka, Naoki Murata, Shusuke Takahashi, Toshiyuki Kumakura, Yuki Mitsufuji, “SQ-VAE: Variational Bayes on Discrete Representation with Self-annealed Stochastic Quantization,” in Proc. International Conference on Machine Learning (ICML), pp. 20987–21012, 2022 [PMLR][arXiv][code][bibtex]
  24. Kazuki Shimada, Yuichiro Koyama, Shusuke Takahashi, Naoya Takahashi, Emiru Tsunoo, Yuki Mitsufuji, “Multi-ACCDOA: Localizing and Detecting Overlapping Sounds from the Same Class with Auxiliary Duplicating Permutation Invariant Training,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 316–320, 2022 [IEEE][arXiv][bibtex]
  25. Bo-Yu Chen, Wei-Han Hsu, Wei-Hsiang Liao, Marco A. Martínez Ramírez, Yuki Mitsufuji, Yi-Hsuan Yang, “Automatic DJ Transitions with Differentiable Audio Effects and Generative Adversarial Networks,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 466–470, 2022 [IEEE][arXiv][demo][code][bibtex]
  26. Yuichiro Koyama, Kazuhide Shigemi, Masafumi Takahashi, Kazuki Shimada, Naoya Takahashi, Emiru Tsunoo, Shusuke Takahashi, Yuki Mitsufuji, “Spatial Data Augmentation with Simulated Room Impulse Responses for Sound Event Localization and Detection,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 8872–8876, 2022 [IEEE][arXiv][bibtex]
  27. Yuichiro Koyama, Naoki Murata, Stefan Uhlich, Giorgio Fabbro, Shusuke Takahashi, Yuki Mitsufuji, “Music Source Separation with Deep Equilibrium Models,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 296–300, 2022 [IEEE][arXiv][bibtex]
  28. Ricardo Falcon-Perez, Kazuki Shimada, Yuichiro Koyama, Shusuke Takahashi, Yuki Mitsufuji, “Spatial Mixup: Directional Loudness Modification as Data Augmentation for Sound Event Localization and Detection,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 431–435, 2022 [IEEE][arXiv][code][bibtex]
  29. Naoya Takahashi, Yuki Mitsufuji, “Amicable Examples for Informed Source Separation,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 241–245, 2022 [IEEE][arXiv][bibtex]
  30. Naoya Takahashi, Mayank Kumar Singh, Yuki Mitsufuji, “Source Mixing and Separation Robust Audio Steganography,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 4368–4372, 2022 [arXiv]
  31. Yasuhide Hyodo, Chihiro Sugai, Junya Suzuki, Masafumi Takahashi, Masahiko Koizumi, Asako Tomura, Yuki Mitsufuji, Yota Komoriya, “Psychophysiological Effect of Immersive Spatial Audio Experience Enhanced Using Sound Field Synthesis,” in Proc. International Conference on Affective Computing & Intelligent Interaction (ACII), pp. 1–8, 2021 [IEEE][bibtex]
  32. Naoya Takahashi, Mayank Kumar Singh, Yuki Mitsufuji, “Hierarchical Disentangled Representation Learning for Singing Voice Conversion,” in Proc. International Joint Conference on Neural Networks (IJCNN), pp. 1–7, 2021 [IEEE][arXiv][bibtex]
  33. Naoya Takahashi, Yuki Mitsufuji, “Densely Connected Multi-Dilated Convolutional Networks for Dense Prediction Tasks,” in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 993–1002, 2021 [CVF][IEEE][arXiv][code][bibtex]
  34. Kazuki Shimada, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Yuki Mitsufuji, “ACCDOA: Activity-Coupled Cartesian Direction of Arrival Representation for Sound Event Localization And Detection,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 915–919, 2021 [IEEE][arXiv][code][bibtex]
  35. Naoya Takahashi, Shota Inoue, Yuki Mitsufuji, “Adversarial Attacks on Audio Source Separation,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 521–525, 2021 [IEEE][arXiv][bibtex]
  36. Ryosuke Sawata, Stefan Uhlich, Shusuke Takahashi, Yuki Mitsufuji, “All for One and One for All: Improving Music Separation by Bridging Networks,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 51–55, 2021 [IEEE][arXiv][code][bibtex]
  37. Yu Maeno, Yuhta Takida, Naoki Murata, Yuki Mitsufuji, “Array-Geometry-Aware Spatial Active Noise Control Based on Direction-of-Arrival Weighting,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 8414–8418, 2020 [IEEE][bibtex]
  38. Naoya Takahashi, Mayank Kumar Singh, Sakya Basak, Parthasaarathy Sudarsanam, Sriram Ganapathy, Yuki Mitsufuji, “Improving Voice Separation by Incorporating End-To-End Speech Recognition,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 41–45, 2020 [IEEE][arXiv][bibtex]
  39. Naoki Murata, Jihui Zhang, Yu Maeno, Yuki Mitsufuji, “Global and Local Mode Domain Adaptive Algorithms for Spatial Active Noise Control Using Higher-Order Sources,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 526–530, 2019 [IEEE][bibtex]
  40. Naoya Takahashi, Parthasaarathy Sudarsanam, Nabarun Goswami, Yuki Mitsufuji, “Recursive Speech Separation for Unknown Number of Speakers,” in Proc. Annual Conference of the International Speech Communication Association (INTERSPEECH), pp. 1348–1352, 2019 [ISCA][arXiv][bibtex]
  41. Naoya Takahashi, Purvi Agrawal, Nabarun Goswami, Yuki Mitsufuji, “PhaseNet: Discretized Phase Modeling with Deep Neural Networks for Audio Source Separation,” in Proc. Annual Conference of the International Speech Communication Association (INTERSPEECH), pp. 2713–2717, 2018 [ISCA][bibtex]
  42. Wei-Hsiang Liao, Yuki Mitsufuji, Keiichi Osako, Kazunobu Ohkuri, “Microphone Array Geometry for Two Dimensional Broadband Sound Field Recording,” in Proc. 145th Audio Engineering Society (AES) Convention, 2018 [AES][bibtex]
  43. Yu Maeno, Yuki Mitsufuji, Prasanga N. Samarasinghe, Thushara D. Abhayapala, “Mode-domain Spatial Active Noise Control Using Multiple Circular Arrays,” in Proc. International Workshop on Acoustic Signal Enhancement (IWAENC), pp. 441–445, 2018 [IEEE][bibtex]
  44. Naoya Takahashi, Nabarun Goswami, Yuki Mitsufuji, “MMDenseLSTM: An Efficient Combination of Convolutional and Recurrent Neural Networks for Audio Source Separation,” in Proc. International Workshop on Acoustic Signal Enhancement (IWAENC), 2018 [IEEE][arXiv][bibtex]
  45. Yuki Mitsufuji, Asako Tomura, Kazunobu Ohkuri, “Creating a Highly-Realistic ‘Acoustic Vessel Odyssey’ Using Sound Field Synthesis with 576 Loudspeakers,” in Proc. Audio Engineering Society (AES) Conference on Spatial Reproduction-Aesthetics and Science, 2018 [AES][bibtex]
  46. Yu Maeno, Yuki Mitsufuji, Thushara D. Abhayapala, “Mode Domain Spatial Active Noise Control Using Sparse Signal Representation,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 211–215, 2018 [IEEE][arXiv][bibtex]
  47. Naoya Takahashi, Yuki Mitsufuji, “Multi-Scale Multi-Band DenseNets for Audio Source Separation,” in Proc. Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), pp. 21–25, 2017 [IEEE][arXiv][bibtex]
  48. Stefan Uhlich, Marcello Porcu, Franck Giron, Michael Enenkl, Thomas Kemp, Naoya Takahashi, Yuki Mitsufuji, “Improving Music Source Separation Based on Deep Neural Networks Through Data Augmentation and Network Blending,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 261–265, 2017 [IEEE][bibtex]
  49. Keiichi Osako, Yuki Mitsufuji, Rita Singh, Bhiksha Raj, “Supervised Monaural Source Separation Based on Autoencoders,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 11–15, 2017 [IEEE][bibtex]
  50. Yuki Mitsufuji, Shoichi Koyama, Hiroshi Saruwatari, “Multichannel Blind Source Separation Based on Non-Negative Tensor Factorization in Wavenumber Domain,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 56–60, 2016 [IEEE][bibtex]
  51. Stefan Uhlich, Franck Giron, Yuki Mitsufuji, “Deep Neural Network Based Instrument Extraction from Music,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 2135–2139, 2015 [IEEE][bibtex]
  52. Xin Guo, Stefan Uhlich, Yuki Mitsufuji, “NMF-Based Blind Source Separation Using a Linear Predictive Coding Error Clustering Criterion,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 261–265, 2015 [IEEE][bibtex]
  53. Yuki Mitsufuji, Marco Liuni, Alex Baker, Axel Röbel, “Online Non-Negative Tensor Deconvolution for Source Detection in 3DTV Audio,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 3082–3086, 2014 [IEEE][bibtex]
  54. Yuki Mitsufuji, Axel Röbel, “Sound Source Separation Based on Non-Negative Tensor Factorization Incorporating Spatial Cue as Prior Knowledge,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 71–75, 2013 [IEEE][bibtex]

Workshop and Demo

  1. Chieh-Hsin Lai, Yuhta Takida, Toshimitsu Uesaka, Naoki Murata, Yuki Mitsufuji, Stefano Ermon, “On the Equivalence of Consistency-Type Models: Consistency Models, Consistent Diffusion Models, and Fokker-Planck Regularization,” ICML 2023 Workshop on Structured Probabilistic Inference & Generative Modeling (ICML SPIGM), 2023 [OpenReview][arXiv]
  2. Kazuki Shimada, Archontis Politis, Parthasaarathy Sudarsanam, Daniel Krause, Kengo Uchida, Sharath Adavanne, Aapo Hakala, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Tuomas Virtanen, Yuki Mitsufuji, “Toward an Audio-Visual Dataset of Spatial Recordings of Real Scenes with Spatiotemporal Annotations of Sound Events,” CVPR 2023 Workshop Sight and Sound (CVPR WSS), 2023 [URL][dataset]
  3. Silin Gao, Jena D. Hwang, Saya Kanno, Hiromi Wakaki, Yuki Mitsufuji, Antoine Bosselut, “ComFact: A Benchmark for Linking Contextual Commonsense Knowledge,” in Proc. AAAI 2023 Workshop on Knowledge Augmented Methods for NLP (KnowledgeNLP-AAAI’23), 2023 [AAAI][code]
  4. Chieh-Hsin Lai, Yuhta Takida, Naoki Murata, Toshimitsu Uesaka, Yuki Mitsufuji, Stefano Ermon, “Regularizing Score-based Models with Score Fokker-Planck Equations,” in Proc. NeurIPS 2022 Workshop on Score-Based Methods (NeurIPS SBM), 2022 [OpenReview]
  5. Archontis Politis, Kazuki Shimada, Parthasaarathy Sudarsanam, Sharath Adavanne, Daniel Krause, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Yuki Mitsufuji, Tuomas Virtanen, “STARSS22: A Dataset of Spatial Recordings of Real Scenes with Spatiotemporal Annotations of Sound Events,” in Proc. the 7th Detection and Classification of Acoustic Scenes and Events 2022 Workshop (DCASE Workshop), 2022 [DCASE][arXiv][dataset]
  6. Fabian-Robert Stöter, Maria Clara Machry, Delton de Andrade Vaz, Stefan Uhlich, Yuki Mitsufuji, Antoine Liutkus, “Open.Unmix.app – Towards Audio Separation on the Edge,” Web Audio Conference (WAC), 2021 [URL][demo]
  7. Joachim Muth, Stefan Uhlich, Nathanael Perraudin, Thomas Kemp, Fabien Cardinaux, Yuki Mitsufuji, “Improving DNN-based Music Source Separation Using Phase Features,” Joint Workshop on Machine Learning for Music at ICML, IJCAI/ECAI and AAMAS, 2018 [arXiv]

Competitions and Awards

  • Elevated to the grade of IEEE Senior Member
  • Outstanding Paper Award for “PeaCoK: Persona Commonsense Knowledge for Consistent and Engaging Narratives” at the Annual Meeting of the Association for Computational Linguistics (ACL), 2023 [URL][Certificate]
  • Local Commendation for Invention 2022 Award [URL][Certificate]
  • Ranked 1st in Task 3 at DCASE2021 Challenge (IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events) [URL][arXiv]
  • Ranked 3rd in Task 3 at DCASE2020 Challenge (IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events) [arXiv]
  • Japan Media Arts Festival 2019 Jury Selections – Acoustic Vessel Odyssey [URL][AES]
  • Ranked 1st in Music Task at the 2018 Signal Separation Evaluation Campaign [URL]
  • Ranked 1st in Music Task at the 2016 Signal Separation Evaluation Campaign [URL]
  • Ranked 1st in Music Task at the 2015 Signal Separation Evaluation Campaign [URL]

Granted Patents

  • US11067661B2 “Information processing device and information processing method” [URL]
  • US10924849B2 “Sound source separation device and method” [URL]
  • US10880638B2 “Sound field forming apparatus and method” [URL]
  • US10757505B2 “Signal processing device, method, and program stored on a computer-readable medium, enabling a sound to be reproduced at a remote location and a different sound to be reproduced at a location neighboring the remote location” [URL]
  • US10674255B2 “Sound processing device, method and program” [URL]
  • US10657973B2 “Method, apparatus and system” [URL]
  • US10650841B2 “Sound source separation apparatus and method” [URL]
  • US10602266B2 “Audio processing apparatus and method, and program” [URL]
  • US10595148B2 “Sound processing apparatus and method, and program” [URL]
  • US10567872B2 “Locally silenced sound field forming apparatus and method” [URL]
  • US10524075B2 “Sound processing apparatus, method, and program” [URL]
  • US10477309B2 “Sound field reproduction device, sound field reproduction method, and program” [URL]
  • US10412531B2 “Audio processing apparatus, method, and program” [URL]
  • US10380991B2 “Signal processing device, signal processing method, and program for selectable spatial correction of multichannel audio signal” [URL]
  • US10206034B2 “Sound field collecting apparatus and method, sound field reproducing apparatus and method” [URL]
  • US10015615B2 “Sound field reproduction apparatus and method, and program” [URL]
  • US9711161B2 “Voice processing apparatus, voice processing method, and program” [URL]
  • US9654872B2 “Input device, signal processing method, program, and recording medium” [URL]
  • US9426564B2 “Audio processing device, method and program” [URL]
  • US9406312B2 “Signal processing apparatus and signal processing method, encoder and encoding method, decoder and decoding method, and program” [URL]
  • US9380398B2 “Sound processing apparatus, method, and program” [URL]
  • US9208795B2 “Frequency band extending device and method, encoding device and method, decoding device and method, and program” [URL]
  • US8295507B2 “Frequency band extending apparatus, frequency band extending method, player apparatus, playing method, program and recording medium” [URL]

Academic Activity

Competition Organizer

  • Task Organizer at DCASE2023 on “Sound Event Localization and Detection Evaluated in Real Spatial Sound Scenes” [URL][report][dataset]
  • General Chair of Sound Demixing (SDX) Challenge 2023 [URL][report MDX track][report CDX track]
  • Chair of Sound Demixing (SDX) Workshop 2023 [URL]
  • Task Organizer at DCASE2022 on “Sound Event Localization and Detection Evaluated in Real Spatial Sound Scenes” [URL][report][dataset]
  • General Chair of Music Demixing (MDX) Challenge 2021 [URL] [report]
  • Co-Chair of Music Demixing (MDX) Workshop 2021 [URL]

Committee Member and Session Chair

  • IEEE Audio and Acoustic Signal Processing Technical Committee (AASP TC) Member, 2023–2026 [URL]
  • IEEE ICCE Japan Program Committee Chair, 2021–2023
  • Session Chair at IEEE ICASSP 2024 on “Generative Semantic Communication: How Generative Models Enhance Semantic Communications” [URL]
  • Oral Session Chair at IEEE ICASSP 2023 on “Diffusion-based Generative Models for Audio and Speech” [URL]
  • Session Chair at IEEE ICASSP 2022 on “Signal Processing and Neural Approaches for Soundscapes (SiNApS)” [URL]
  • Session Chair at IEEE ICASSP 2020 on “Active Control of Acoustic Noise over Spatial Regions” [URL]

PhD Supervision

  • TRAMUCA: Transparency in AI-powered Music Creation Algorithms, 4-year Fully-funded PhD Studentship by Sony and MTG-UPF, Joint Supervision with Dr. Emilia Gómez and Dr. Xavier Serra [URL]

Lecture at University

  • “AI x Creators: Pushing Creative Abilities to the Next Level” at The University of Tokyo on Dec. 16, 2022 [URL]
  • “AI & Network Communication Systems”, 7-lecture Course at Tokyo Institute of Technology, 3rd Quarter (Fall), 2022 [URL]
  • “AI x Creators: Pushing Creative Abilities to the Next Level” at The University of Tokyo on Feb. 16, 2022 [URL]
  • “Content Creation by Cutting Edge AI-powered Music Technology” at Tokyo Institute of Technology on Dec. 1, 2021 [URL]
  • “AI x Creators: Pushing Creative Abilities to the Next Level” at Keio University on Oct. 21, 2021

Invited Talk

  1. “Meet Asia”, MIDEM Digital 2021
  2. “How AI is Shaking up the Music Industry”, MIDEM Digital 2021 [URL]
  3. “AI & THE FUTURE OF TELEVISION Part 1: Content Production”, MIPCOM Online+ 2020

Web Article

  1. New Excitement and Fun Ways to Enjoy Video and Audio Content “AI Sound Separation x Entertainment” [URL]
  2. Reviving the Sound of Classic Movies with AI “AI Sound Separation” [URL]
  3. The freedom to extract audio gives you the freedom to create new music “Audio source separation” [URL]

Invited Talk (Japanese)

  1. DCAJ Business Seminar, “Cutting-Edge Acoustic Technology from Sony R&D” [URL]
  2. Advanced Technology Course, “Sony’s Engineering × Artists’ Expression: The Frontier of Staging Created by Sound VR”
  3. SDM Symposium, “Sonic Surf VR: Wave Field Synthesis Technology for Audio VR and Content Creation” [URL]

Web Article (Japanese)

  1. Nov. 2022, Nikkei Robotics (Dec. issue), “Sony Develops Its Own New Deep Generative Model, First Making High-Performance VAEs Easier to Use” [URL]
  2. Oct. 2022, DTM Station, “Soundmain Studio’s New Feature for Cleanly Extracting Vocals, Realized with Sony’s World-Leading Source Separation Technology” [URL]
  3. Jul. 2022, DTM Station, “Soundmain, a Music Production Service Offering Sony’s World-Leading Deep-Learning Source Separation Technology” [URL]
  4. Jan. 2022, Record Geijutsu (Feb. issue), “Five Masterpieces of 2021: My Audio,” pp. 188–189 [URL]
  5. Jun. 2021, Phile Web, “Sony Enables Artist Collaborations Across Time: What Is ‘AI Source Separation’ Technology?” [URL]
  6. Jun. 2021, Sony Group Career Forum 2022, “Changing the Music Business with AI: A Close Look at Sony’s Group Synergy” [URL]
  7. Apr. 2021, AI Start Lab, “Sony Presents: The Potential of an Entertainment World Expanded by AI Source Separation” [URL]
  8. Jan. 2021, Stereo Sound Online, “Sony’s ‘AI Source Separation’ Gives New Appeal to Past Masterpieces: How Was This World-First Technology Achieved? (Part 1)” (Reiji Asakura’s Good Things Research Lab, Report 42) [URL]
  9. Jan. 2021, Stereo Sound Online, “Sony’s ‘AI Source Separation’ Gives New Appeal to Past Masterpieces: How Was This World-First Technology Achieved? (Part 2)” (Reiji Asakura’s Good Things Research Lab, Report 43) [URL]
  10. Dec. 2020, Cocotame, “The ‘Source Separation Technology’ Behind Karaoke on LINE MUSIC Is a Dream Technology Connecting the Sounds of Past and Present (Part 1)” [URL]
  11. Dec. 2020, Cocotame, “The ‘Source Separation Technology’ Behind Karaoke on LINE MUSIC Is a Dream Technology Connecting the Sounds of Past and Present (Part 2)” [URL]
  12. Jul. 2020, Nikkei Electronics, “Even Sound Goes Hyperreal: Controlling Sound Fields to Transform the World” [URL]
  13. Sep. 2019, Soundmain Blog, “The World of Music Production Is Changing: What Future Will the World’s Most Advanced ‘Source Separation Technology’ Create?” [URL]
  14. May 2019, Sound & Recording Magazine (Jun. issue), “Touch that Sound!, an Installation Exhibition for Experiencing Sony’s Latest Technology, Sonic Surf VR” [URL]
  15. Mar. 2019, Impress Watch, “A Curious Experience of Sound Moving Freely with Sony’s ‘Sonic Surf VR’: We Asked How It Works” [URL]

Media Appearance (Japanese)

  1. Sep. 2021, Tokyo FM radio Music Bird, “Performing with Kanji Ishimaru? Glenn Gould Revived by Sony’s New Technology” [radio]
  2. Apr. 2021, Podcast, “Sony on the Potential of ‘AI × Music’: Will It Change How Artists Work, Too?” [podcast]
  3. Apr. 2021, YouTube channel Sambomaster, “Yoichi Kondo: Sony Technology Experience, Part 2” [YouTube]
  4. Jul. 2020, NHK TV broadcast Lalala♪ Classic, “Keiichiro Shibuya Talks: Technology and Music” [TV]