Chenda Li

620 total citations
24 papers, 390 citations indexed

About

Chenda Li is a scholar working on Signal Processing, Artificial Intelligence and Computational Mechanics. According to data from OpenAlex, Chenda Li has authored 24 papers receiving a total of 390 indexed citations (citations by other indexed papers that have themselves been cited): 23 papers in Signal Processing, 19 in Artificial Intelligence and 4 in Computational Mechanics. Recurrent topics in Chenda Li's work include Speech and Audio Processing (23 papers), Music and Audio Processing (18 papers) and Speech Recognition and Synthesis (18 papers), and Chenda Li is often cited by papers focused on these same topics. Chenda Li collaborates with scholars based in China, the United States and Japan; co-authors include Yanmin Qian, Shinji Watanabe, Wangyou Zhang, Xuankai Chang, Jing Shi, Tomoki Hayashi, Naoyuki Kamo, Hirofumi Inaguma, Pengcheng Guo and Yosuke Higuchi. Chenda Li has published in venues such as IEEE/ACM Transactions on Audio Speech and Language Processing, arXiv (Cornell University) and ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

In the Last Decade

Chenda Li: 22 papers receiving 375 citations

Peers

Peers by citation overlap. Career columns show citations by career stage (early→late); ratios are relative to Chenda Li.

Name | Country | h | Career stage citations (early→late) | Papers | Cites
Chenda Li | China | 9 | 313, 297, 33, 24, 16 | 24 | 390
Naoyuki Kamo | Japan | 10 | 255 (0.8×), 257 (0.9×), 48 (1.5×), 16 (0.7×), 24 (1.5×) | 19 | 341
Aswin Shanmugam Subramanian | United States | 11 | 264 (0.8×), 291 (1.0×), 42 (1.3×), 17 (0.7×), 15 (0.9×) | 23 | 355
Xiaojia Zhao | United States | 7 | 287 (0.9×), 358 (1.2×), 39 (1.2×), 34 (1.4×), 24 (1.5×) | 8 | 395
Matt Shannon | United Kingdom | 11 | 317 (1.0×), 250 (0.8×), 15 (0.5×), 26 (1.1×), 10 (0.6×) | 13 | 372
Hannah Muckenhirn | Switzerland | 8 | 405 (1.3×), 416 (1.4×), 24 (0.7×), 36 (1.5×), 18 (1.1×) | 11 | 487
Mirco Ravanelli | Canada | 11 | 251 (0.8×), 303 (1.0×), 48 (1.5×), 36 (1.5×), 45 (2.8×) | 31 | 390
Sri Harish Mallidi | United States | 14 | 426 (1.4×), 362 (1.2×), 15 (0.5×), 24 (1.0×), 23 (1.4×) | 32 | 494
Yanhua Long | China | 9 | 226 (0.7×), 197 (0.7×), 13 (0.4×), 23 (1.0×), 6 (0.4×) | 62 | 280
Cheng Yu | Taiwan | 6 | 232 (0.7×), 333 (1.1×), 77 (2.3×), 34 (1.4×), 49 (3.1×) | 12 | 372
Ladislav Mošner | Czechia | 9 | 186 (0.6×), 203 (0.7×), 19 (0.6×), 18 (0.8×), 16 (1.0×) | 25 | 235

Countries citing papers authored by Chenda Li


This map shows the geographic impact of Chenda Li's research. It shows the number of citations coming from papers published by authors working in each country. You can also color the map by specialization and compare the number of citations received by Chenda Li with the expected number of citations based on a country's size and research output (numbers larger than one mean the country cites Chenda Li more than expected).

Fields of papers citing papers by Chenda Li

[Legend: Physical Sciences · Health Sciences · Life Sciences · Social Sciences]

This network shows the impact of papers produced by Chenda Li. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes show fields that tend to cite the papers produced by Chenda Li. The network helps show where Chenda Li may publish in the future.

Co-authorship network of co-authors of Chenda Li

This figure shows the co-authorship network connecting the top 25 collaborators of Chenda Li. A scholar is included among the top collaborators of Chenda Li based on the total number of citations received by their joint publications. Widths of edges represent the number of papers authors have co-authored together. Node borders signify the number of papers an author published with Chenda Li. Chenda Li is excluded from the visualization to improve readability, since they are connected to all nodes in the network.

All Works

20 of 20 papers shown
1. Zhang, Wangyou, Samuele Cornell, Robin Scheibler, et al. (2025). Interspeech 2025 URGENT Speech Enhancement Challenge. 858–862. 2 indexed citations
2. Li, Chenda, et al. (2025). Efficient Multilingual ASR Finetuning via LoRA Language Experts. 1138–1142.
3. Li, Chenda, et al. (2024). Unified Cross-Modal Attention: Robust Audio-Visual Speech Recognition and Beyond. IEEE/ACM Transactions on Audio Speech and Language Processing. 32. 1941–1953. 5 indexed citations
4. Li, Chenda, et al. (2023). Light-Weight Visualvoice: Neural Network Quantization On Audio Visual Speech Separation. 34. 1–5. 3 indexed citations
5.
6. Li, Chenda, Zhuo Chen, Dongmei Wang, et al. (2023). Target Sound Extraction with Variable Cross-Modality Clues. 1–5. 10 indexed citations
7. Li, Chenda, Yao Qian, Zhuo Chen, et al. (2023). Adapting Multi-Lingual ASR Models for Handling Multiple Talkers. 1314–1318. 6 indexed citations
8. Li, Chenda, et al. (2022). Time-Domain Audio-Visual Speech Separation on Low Quality Videos. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 256–260. 7 indexed citations
9. Cornell, Samuele, Xuankai Chang, Wangyou Zhang, et al. (2022). Towards Low-Distortion Multi-Channel Speech Enhancement: The ESPNET-Se Submission to the L3DAS22 Challenge. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 9201–9205. 17 indexed citations
10. Li, Chenda, Zhuo Chen, & Yanmin Qian. (2022). Dual-Path Modeling With Memory Embedding Model for Continuous Speech Separation. IEEE/ACM Transactions on Audio Speech and Language Processing. 30. 1508–1520. 5 indexed citations
11. Wang, Wei, et al. (2022). The Sjtu System For Multimodal Information Based Speech Processing Challenge 2021. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 9261–9265. 6 indexed citations
12. Luo, Yi, Zhuo Chen, Cong Han, et al. (2021). Rethinking The Separation Layers In Speech Separation Networks. 1–5. 6 indexed citations
13. Guo, Pengcheng, Xuankai Chang, Tomoki Hayashi, et al. (2021). Recent Developments on Espnet Toolkit Boosted By Conformer. 5874–5878. 149 indexed citations
14. Watanabe, Shinji, Xuankai Chang, Pengcheng Guo, et al. (2021). The 2020 ESPnet Update: New Features, Broadened Applications, Performance Improvements, and Future Plans. 30. 1–6. 29 indexed citations
15. Li, Chenda, et al. (2021). Audio-Visual Multi-Talker Speech Recognition in a Cocktail Party. 3021–3025. 8 indexed citations
16. Han, Cong, Yi Luo, Chenda Li, et al. (2021). Continuous Speech Separation Using Speaker Inventory for Long Recording. 3036–3040. 6 indexed citations
17. Li, Chenda, Jing Shi, Wangyou Zhang, et al. (2020). ESPnet-SE: End-to-End Speech Enhancement and Separation Toolkit Designed for ASR Integration. arXiv (Cornell University). 53 indexed citations
18. Li, Chenda, & Yanmin Qian. (2020). Listen, Watch and Understand at the Cocktail Party: Audio-Visual-Contextual Speech Separation. 1426–1430. 22 indexed citations
19. Li, Chenda, & Yanmin Qian. (2020). Deep Audio-Visual Speech Separation with Attention Mechanism. 7314–7318. 20 indexed citations
20. Li, Chenda, & Yanmin Qian. (2019). Prosody Usage Optimization for Children Speech Recognition with Zero Resource Children Speech. 3446–3450. 11 indexed citations

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.


Rankless by CCL
2026