About me

Expertise

  • PhD in Information Science & Technology, conferred by the University of Tokyo [URL]
  • Interested in Generative Modeling, Commonsense Representation and Reasoning, Adversarial Robustness
  • Fluent in Japanese (native), English (EIKEN Grade 1), French (Futsuken Grade Pre-1)

Experience

  • Head of Tokyo Laboratory 30, Creative AI Lab, Sony
    • Music restoration of recordings by the Canadian pianist Glenn Gould [YouTube]
    • Soundtrack restoration of the classic movie Lawrence of Arabia [YouTube]
  • Distinguished Engineer, Sony R&D [URL]
  • Specially Appointed Associate Professor at Tokyo Institute of Technology [URL]
  • Invited Researcher at IRCAM 2011–2012 [URL]
    • Involved in the 3DTV content search project funded under the European FP7 programme [URL]

Publication

Journal

  1. Yuki Mitsufuji, Giorgio Fabbro, Stefan Uhlich, Fabian-Robert Stöter, Alexandre Défossez, Minseok Kim, Woosung Choi, Chin-Yun Yu, Kin-Wai Cheuk, “Music Demixing Challenge 2021,” Frontiers in Signal Processing, vol. 1, 2022. [Frontiers][arXiv][challenge][bibtex]
  2. Yuhta Takida, Wei-Hsiang Liao, Toshimitsu Uesaka, Shusuke Takahashi, Yuki Mitsufuji, “Preventing Oversmoothing in VAE via Generalized Variance Parameterization,” Neurocomputing (under review)
  3. Jihui Aimee Zhang, Naoki Murata, Yu Maeno, Prasanga N. Samarasinghe, Thushara D. Abhayapala, Yuki Mitsufuji, “Coherence-Based Performance Analysis on Noise Reduction in Multichannel Active Noise Control Systems,” Journal of the Acoustical Society of America (JASA), vol. 148, issue 3, 2020. [ASA]
  4. Yuki Mitsufuji, Norihiro Takamune, Shoichi Koyama, Hiroshi Saruwatari, “Multichannel Blind Source Separation Based on Evanescent-Region-Aware Non-Negative Tensor Factorization in Spherical Harmonic Domain,” IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), vol. 29, pp. 607–617, 2020. [IEEE]
  5. Tetsu Magariyachi, Yuki Mitsufuji, “Analytic Error Control Methods for Efficient Rotation in Dynamic Binaural Rendering of Ambisonics,” Journal of the Acoustical Society of America (JASA), vol. 147, issue 1, 2020. [ASA]
  6. Yu Maeno, Yuki Mitsufuji, Prasanga N. Samarasinghe, Naoki Murata, Thushara D. Abhayapala, “Spherical-Harmonic-Domain Feedforward Active Noise Control Using Sparse Decomposition of Reference Signals from Distributed Sensor Arrays,” IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), vol. 28, pp. 656–670, 2019. [IEEE]
  7. Yuki Mitsufuji, Stefan Uhlich, Norihiro Takamune, Daichi Kitamura, Shoichi Koyama, Hiroshi Saruwatari, “Multichannel Non-Negative Matrix Factorization Using Banded Spatial Covariance Matrices in Wavenumber Domain,” IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), vol. 28, pp. 49–60, 2019. [IEEE]
  8. Fabian-Robert Stöter, Stefan Uhlich, Antoine Liutkus, Yuki Mitsufuji, “Open-Unmix – A Reference Implementation for Music Source Separation,” Journal of Open Source Software (JOSS), vol. 4, no. 41, p. 1667, 2019. [OSI][code]
  9. Yuki Mitsufuji, Axel Roebel, “On the Use of a Spatial Cue as Prior Information for Stereo Sound Source Separation Based on Spatially Weighted Non-Negative Tensor Factorization,” EURASIP Journal on Advances in Signal Processing, issue 1, 2014. [EURASIP]

Conference Proceedings

  1. Marco A. Martínez Ramírez, Wei-Hsiang Liao, Chihiro Nagashima, Giorgio Fabbro, Stefan Uhlich, Yuki Mitsufuji, “Automatic Music Mixing with Deep Learning and Out-of-Domain Data,” accepted at ISMIR 2022.
  2. Johannes Imort, Giorgio Fabbro, Marco A. Martínez Ramírez, Stefan Uhlich, Yuichiro Koyama, Yuki Mitsufuji, “Overdrive, Distortion, and Fuzz: Learning How to Recover the Clean Signal,” accepted at ISMIR 2022. [arXiv][demo]
  3. Yuhta Takida, Takashi Shibuya, Wei-Hsiang Liao, Chieh-Hsin Lai, Junki Ohmura, Toshimitsu Uesaka, Naoki Murata, Shusuke Takahashi, Toshiyuki Kumakura, Yuki Mitsufuji, “SQ-VAE: Variational Bayes on Discrete Representation with Self-annealed Stochastic Quantization,” Proc. International Conference on Machine Learning (ICML), pp. 20987–21012, 2022. [PMLR][arXiv][code][bibtex]
  4. Kazuki Shimada, Yuichiro Koyama, Shusuke Takahashi, Naoya Takahashi, Emiru Tsunoo, Yuki Mitsufuji, “Multi-ACCDOA: Localizing and Detecting Overlapping Sounds from the Same Class with Auxiliary Duplicating Permutation Invariant Training,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 316–320, 2022. [IEEE][arXiv]
  5. Bo-Yu Chen, Wei-Han Hsu, Wei-Hsiang Liao, Marco A. Martínez Ramírez, Yuki Mitsufuji, Yi-Hsuan Yang, “Automatic DJ Transitions with Differentiable Audio Effects and Generative Adversarial Networks,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 466–470, 2022. [IEEE][arXiv][demo][code]
  6. Yuichiro Koyama, Kazuhide Shigemi, Masafumi Takahashi, Kazuki Shimada, Naoya Takahashi, Emiru Tsunoo, Shusuke Takahashi, Yuki Mitsufuji, “Spatial Data Augmentation with Simulated Room Impulse Responses for Sound Event Localization and Detection,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 8872–8876, 2022. [IEEE][arXiv]
  7. Yuichiro Koyama, Naoki Murata, Stefan Uhlich, Giorgio Fabbro, Shusuke Takahashi, Yuki Mitsufuji, “Music Source Separation with Deep Equilibrium Models,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 296–300, 2022. [IEEE][arXiv]
  8. Ricardo Falcon-Perez, Kazuki Shimada, Yuichiro Koyama, Shusuke Takahashi, Yuki Mitsufuji, “Spatial Mixup: Directional Loudness Modification as Data Augmentation for Sound Event Localization and Detection,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 431–435, 2022. [IEEE][arXiv][code]
  9. Naoya Takahashi, Yuki Mitsufuji, “Amicable Examples for Informed Source Separation,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 241–245, 2022. [IEEE][arXiv]
  10. Naoya Takahashi, Mayank Kumar Singh, Yuki Mitsufuji, “Source Mixing and Separation Robust Audio Steganography,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 4368–4372, 2022. [arXiv]
  11. Yasuhide Hyodo, Chihiro Sugai, Junya Suzuki, Masafumi Takahashi, Masahiko Koizumi, Asako Tomura, Yuki Mitsufuji, Yota Komoriya, “Psychophysiological Effect of Immersive Spatial Audio Experience Enhanced Using Sound Field Synthesis,” in Proc. International Conference on Affective Computing & Intelligent Interaction (ACII), pp. 1–8, 2021. [IEEE]
  12. Naoya Takahashi, Mayank Kumar Singh, Yuki Mitsufuji, “Hierarchical Disentangled Representation Learning for Singing Voice Conversion,” International Joint Conference on Neural Networks (IJCNN), pp. 1–7, 2021. [IEEE][arXiv]
  13. Naoya Takahashi, Yuki Mitsufuji, “Densely Connected Multi-Dilated Convolutional Networks for Dense Prediction Tasks,” in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 993–1002, 2021. [CVF][IEEE][arXiv][code]
  14. Kazuki Shimada, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Yuki Mitsufuji, “ACCDOA: Activity-Coupled Cartesian Direction of Arrival Representation for Sound Event Localization And Detection,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 915–919, 2021. [IEEE][arXiv][code]
  15. Naoya Takahashi, Shota Inoue, Yuki Mitsufuji, “Adversarial Attacks on Audio Source Separation,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 521–525, 2021. [IEEE][arXiv]
  16. Ryosuke Sawata, Stefan Uhlich, Shusuke Takahashi, Yuki Mitsufuji, “All for One and One for All: Improving Music Separation by Bridging Networks,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 51–55, 2021. [IEEE][arXiv][code]
  17. Yu Maeno, Yuhta Takida, Naoki Murata, Yuki Mitsufuji, “Array-Geometry-Aware Spatial Active Noise Control Based on Direction-of-Arrival Weighting,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 8414–8418, 2020. [IEEE]
  18. Naoya Takahashi, Mayank Kumar Singh, Sakya Basak, Parthasaarathy Sudarsanam, Sriram Ganapathy, Yuki Mitsufuji, “Improving Voice Separation by Incorporating End-To-End Speech Recognition,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 41–45, 2020. [IEEE][arXiv]
  19. Naoki Murata, Jihui Zhang, Yu Maeno, Yuki Mitsufuji, “Global and Local Mode Domain Adaptive Algorithms for Spatial Active Noise Control Using Higher-Order Sources,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 526–530, 2019. [IEEE]
  20. Naoya Takahashi, Sudarsanam Parthasaarathy, Nabarun Goswami, Yuki Mitsufuji, “Recursive Speech Separation for Unknown Number of Speakers,” in Proc. Annual Conference of the International Speech Communication Association (INTERSPEECH), pp. 1348–1352, 2019. [ISCA][arXiv]
  21. Naoya Takahashi, Purvi Agrawal, Nabarun Goswami, Yuki Mitsufuji, “PhaseNet: Discretized Phase Modeling with Deep Neural Networks for Audio Source Separation,” in Proc. Annual Conference of the International Speech Communication Association (INTERSPEECH), pp. 2713–2717, 2018. [ISCA]
  22. Wei-Hsiang Liao, Yuki Mitsufuji, Keiichi Osako, Kazunobu Ohkuri, “Microphone Array Geometry for Two Dimensional Broadband Sound Field Recording,” in Proc. 145th Audio Engineering Society (AES) Convention, 2018. [AES]
  23. Yu Maeno, Yuki Mitsufuji, Prasanga N. Samarasinghe, Thushara D. Abhayapala, “Mode-domain Spatial Active Noise Control Using Multiple Circular Arrays,” in Proc. International Workshop on Acoustic Signal Enhancement (IWAENC), pp. 441–445, 2018. [IEEE]
  24. Naoya Takahashi, Nabarun Goswami, Yuki Mitsufuji, “MMDenseLSTM: An Efficient Combination of Convolutional and Recurrent Neural Networks for Audio Source Separation,” in Proc. International Workshop on Acoustic Signal Enhancement (IWAENC), 2018. [IEEE][arXiv]
  25. Yuki Mitsufuji, Asako Tomura, Kazunobu Ohkuri, “Creating a Highly-Realistic ‘Acoustic Vessel Odyssey’ Using Sound Field Synthesis with 576 Loudspeakers,” in Proc. Audio Engineering Society (AES) Conference on Spatial Reproduction-Aesthetics and Science, 2018. [AES]
  26. Yu Maeno, Yuki Mitsufuji, Thushara D. Abhayapala, “Mode Domain Spatial Active Noise Control Using Sparse Signal Representation,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 211–215, 2018. [IEEE][arXiv]
  27. Naoya Takahashi, Yuki Mitsufuji, “Multi-Scale Multi-Band DenseNets for Audio Source Separation,” in Proc. Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), pp. 21–25, 2017. [IEEE][arXiv]
  28. Stefan Uhlich, Marcello Porcu, Franck Giron, Michael Enenkl, Thomas Kemp, Naoya Takahashi, Yuki Mitsufuji, “Improving Music Source Separation Based on Deep Neural Networks Through Data Augmentation and Network Blending,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 261–265, 2017. [IEEE]
  29. Keiichi Osako, Yuki Mitsufuji, Rita Singh, Bhiksha Raj, “Supervised Monaural Source Separation Based on Autoencoders,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 11–15, 2017. [IEEE]
  30. Yuki Mitsufuji, Shoichi Koyama, Hiroshi Saruwatari, “Multichannel Blind Source Separation Based on Non-Negative Tensor Factorization in Wavenumber Domain,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 56–60, 2016. [IEEE]
  31. Stefan Uhlich, Franck Giron, Yuki Mitsufuji, “Deep Neural Network Based Instrument Extraction from Music,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 2135–2139, 2015. [IEEE]
  32. Xin Guo, Stefan Uhlich, Yuki Mitsufuji, “NMF-Based Blind Source Separation Using a Linear Predictive Coding Error Clustering Criterion,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 261–265, 2015. [IEEE]
  33. Yuki Mitsufuji, Marco Liuni, Alex Baker, Axel Roebel, “Online Non-Negative Tensor Deconvolution for Source Detection in 3DTV Audio,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 3082–3086, 2014. [IEEE]
  34. Yuki Mitsufuji, Axel Roebel, “Sound Source Separation Based on Non-Negative Tensor Factorization Incorporating Spatial Cue as Prior Knowledge,” in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 71–75, 2013. [IEEE]

Workshop and Demo

  1. Archontis Politis, Kazuki Shimada, Parthasaarathy Sudarsanam, Sharath Adavanne, Daniel Krause, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Yuki Mitsufuji, Tuomas Virtanen, “STARSS22: A Dataset of Spatial Recordings of Real Scenes with Spatiotemporal Annotations of Sound Events,” submitted to DCASE2022 Workshop [arXiv][dataset]
  2. Fabian-Robert Stöter, Maria Clara Machry, Delton de Andrade Vaz, Stefan Uhlich, Yuki Mitsufuji, Antoine Liutkus, “Open.Unmix.app – Towards Audio Separation on the Edge,” Wave Audio Conference (WAC), 2021. [URL][demo]
  3. Joachim Muth, Stefan Uhlich, Nathanael Perraudin, Thomas Kemp, Fabien Cardinaux, Yuki Mitsufuji, “Improving DNN-based Music Source Separation Using Phase Features,” Joint Workshop on Machine Learning for Music at ICML, IJCAI/ECAI and AAMAS, 2018. [arXiv]

Competition and Award

  • Japan Media Arts Festival 2019 Jury Selections – Acoustic Vessel Odyssey [URL][AES]
  • Ranked 1st in Task 3 at DCASE2021 Challenge (IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events) [URL][arXiv]
  • Ranked 3rd in Task 3 at DCASE2020 Challenge (IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events) [arXiv]
  • Ranked 1st in Music Task at Signal Separation Evaluation Campaign 2018
  • Ranked 1st in Music Task at Signal Separation Evaluation Campaign 2016
  • Ranked 1st in Music Task at Signal Separation Evaluation Campaign 2015

Academic Activity

Competition Organizer

  • Task Organizer at DCASE2022 on “Sound Event Localization and Detection Evaluated in Real Spatial Sound Scenes” [URL]
  • General Chair of Music Demixing (MDX) Challenge 2021 [URL] [report]
  • Co-Chair of Music Demixing (MDX) Workshop 2021 [URL]

Conference Session Chair

  • IEEE ICCE Japan Program Committee Chair 2021–2023
  • Session Chair at IEEE ICASSP 2022 on “Signal Processing and Neural Approaches for Soundscapes (SiNApS)” [URL]
  • Session Chair at IEEE ICASSP 2020 on “Active Control of Acoustic Noise over Spatial Regions” [URL]

Lecture at University

  • “AI x Creators: Pushing Creative Abilities to the Next Level” at The University of Tokyo on Feb. 16, 2022 [URL]
  • “Content Creation by Cutting Edge AI-powered Music Technology” at Tokyo Institute of Technology on Dec. 1, 2021 [URL]
  • “AI x Creators: Pushing Creative Abilities to the Next Level” at Keio University on Oct. 21, 2021

Invited Talk

  1. “Meet Asia”, MIDEM Digital 2021
  2. “How AI is Shaking up the Music Industry”, MIDEM Digital 2021 [URL]
  3. “AI & THE FUTURE OF TELEVISION Part 1: Content Production”, MIPCOM Online+ 2020

Web Article

  1. New Excitement and Fun Ways to Enjoy Video and Audio Content “AI Sound Separation x Entertainment” [URL]
  2. Reviving the Sound of Classic Movies with AI “AI Sound Separation” [URL]
  3. The freedom to extract audio gives you the freedom to create new music “Audio source separation” [URL]

Invited Talk (Japanese)

  1. DCAJ Business Seminar, “Cutting-Edge Acoustic Technology from Sony R&D” [URL]
  2. Advanced Technology Course, “Sony’s Technological Strength × Artists’ Expressive Power: The Front Line of Staging Created by Sound VR”
  3. SDM Symposium, “Sonic Surf VR: Wave Field Synthesis Technology for Audio VR and Content Creation” [URL]

Web Article (Japanese)

  1. Jul. 2022, DTM Station, “Soundmain: A Music Production Service Offering Sony’s World-Leading Deep-Learning Source Separation Technology” [URL]
  2. Jan. 2022, Record Geijutsu, February issue, “Masterpiece Five 2021 / My Audio,” pp. 188–189 [URL]
  3. Jun. 2021, Phile Web, “Sony Realizes Artist Collaborations Across Time: What Is ‘AI Source Separation’ Technology?” [URL]
  4. Jun. 2021, Sony Group Career Forum 2022, “Changing the Music Business with AI: A Close Look at Sony’s Group Synergy” [URL]
  5. Apr. 2021, AI Start Lab, “Sony Presents: The Possibilities That AI Source Separation Opens Up for the World of Entertainment” [URL]
  6. Jan. 2021, Stereo Sound Online, “Sony’s ‘AI Source Separation’ Gives New Appeal to Classic Works: How a World-First Technology Was Realized (Part 1)” (Reiji Asakura’s Good Things Laboratory, Report 42) [URL]
  7. Jan. 2021, Stereo Sound Online, “Sony’s ‘AI Source Separation’ Gives New Appeal to Classic Works: How a World-First Technology Was Realized (Part 2)” (Reiji Asakura’s Good Things Laboratory, Report 43) [URL]
  8. Dec. 2020, Cocotame, “The ‘Source Separation Technology’ That Brought Karaoke to LINE MUSIC Is a Dream Technology Connecting the Sounds of Past and Present (Part 1)” [URL]
  9. Dec. 2020, Cocotame, “The ‘Source Separation Technology’ That Brought Karaoke to LINE MUSIC Is a Dream Technology Connecting the Sounds of Past and Present (Part 2)” [URL]
  10. Jul. 2020, Nikkei Electronics, “Sound, Too, Goes Hyperreal: Manipulating the Sound Field to Transform the World” [URL]
  11. Sep. 2019, Soundmain Blog, “The World of Music Production Is Changing: What Future Will the World’s Most Advanced ‘Source Separation Technology’ Create?” [URL]
  12. May 2019, Sound & Recording Magazine, June issue, “Touch that Sound!, an Installation Exhibition Showcasing Sony’s Latest Sonic Surf VR Technology” [URL]
  13. Mar. 2019, Impress Watch, “A Marvelous Experience of Freely Moving Sound with Sony’s ‘Sonic Surf VR’: We Asked How It Works” [URL]

Media Appearance (Japanese)

  1. Sep. 2021, Tokyo FM radio Musicbird, “Performing with Kanji Ishimaru? Glenn Gould Revived by Sony’s New Technology” [radio]
  2. Apr. 2021, Podcast, “Sony on the Potential of ‘AI × Music’: Will It Change How Artists Work, Too?” [podcast]
  3. Apr. 2021, YouTube channel Sambomaster, “Yoichi Kondo: Sony Technology Experience (Part 2)” [YouTube]
  4. Jul. 2020, NHK TV broadcast La La La Classic, “Keiichiro Shibuya Talks Technology and Music” [TV]