Yu Tsao

10.4k total citations · 2 hit papers
328 papers, 5.4k citations indexed

About

Yu Tsao is a scholar working on Signal Processing, Artificial Intelligence and Cognitive Neuroscience. According to data from OpenAlex, Yu Tsao has authored 328 papers receiving a total of 5.4k indexed citations (citations from other indexed papers that have themselves been cited), including 237 papers in Signal Processing, 170 in Artificial Intelligence and 55 in Cognitive Neuroscience. Recurrent topics in Yu Tsao's work include Speech and Audio Processing (228 papers), Speech Recognition and Synthesis (156 papers) and Music and Audio Processing (124 papers); papers citing Yu Tsao cluster around these same topics. Yu Tsao collaborates with scholars based in Taiwan, the United States and Japan. Co-authors include Xugang Lu, Szu‐Wei Fu, Chiori Hori, Shigeki Matsuda, Hsin‐Min Wang, Ying-Hui Lai, Syu‐Siang Wang, Shih‐Hau Fang, Kuo-Hsuan Hung and Cheng Yu, and Yu Tsao has published in journals such as PLoS ONE, NeuroImage and Scientific Reports.

In The Last Decade

Yu Tsao: 288 papers receiving 5.2k citations

Hit Papers

Speech enhancement based on deep denoising autoencoder (2013) [citation-trend chart omitted]

Peers — A (Enhanced Table)

Peers selected by citation overlap. "Career" lists citations across five career stages (early→late); each peer value is followed by its ratio to Yu Tsao's corresponding stage (the hero reference).

Name | Country | h | Career cites, early→late (× vs. Yu Tsao) | Papers | Cites
Yu Tsao | Taiwan | 34 | 3.2k · 2.4k · 828 · 814 · 632 | 328 | 5.4k
Javier Ramírez | Spain | 44 | 1.2k (0.4×) · 2.0k (0.8×) · 1.1k (1.3×) · 380 (0.5×) · 1.7k (2.8×) | 263 | 6.6k
Sridha Sridharan | Australia | 44 | 3.2k (1.0×) · 3.2k (1.3×) · 707 (0.9×) · 305 (0.4×) · 3.9k (6.2×) | 480 | 8.3k
Sridhar Krishnan | Canada | 39 | 1.6k (0.5×) · 856 (0.4×) · 1.0k (1.2×) · 168 (0.2×) · 779 (1.2×) | 342 | 5.4k
Jiuwen Cao | China | 38 | 741 (0.2×) · 2.2k (0.9×) · 883 (1.1×) · 305 (0.4×) · 926 (1.5×) | 220 | 4.9k
Yuanqing Li | China | 45 | 1.3k (0.4×) · 834 (0.3×) · 3.7k (4.5×) · 438 (0.5×) · 1.0k (1.6×) | 230 | 6.8k
Fernando De la Torre | United States | 46 | 1.3k (0.4×) · 1.4k (0.6×) · 525 (0.6×) · 464 (0.6×) · 5.3k (8.4×) | 157 | 7.8k
Kup‐Sze Choi | Hong Kong | 36 | 397 (0.1×) · 1.8k (0.7×) · 1.0k (1.2×) · 182 (0.2×) · 1.0k (1.6×) | 207 | 4.6k
Varun Bajaj | India | 42 | 1.4k (0.4×) · 971 (0.4×) · 3.0k (3.6×) · 101 (0.1×) · 728 (1.2×) | 169 | 5.6k
Stefanos Zafeiriou | United Kingdom | 52 | 3.0k (0.9×) · 1.9k (0.8×) · 683 (0.8×) · 1.2k (1.5×) · 8.5k (13.5×) | 251 | 11.8k
Chin‐Hui Lee | United States | 51 | 8.4k (2.6×) · 8.7k (3.6×) · 1.0k (1.2×) · 1.3k (1.6×) · 1.5k (2.4×) | 462 | 12.0k

Countries citing papers authored by Yu Tsao


This map shows the geographic impact of Yu Tsao's research: the number of citations coming from papers whose authors work in each country. You can also color the map by specialization, which compares the citations Yu Tsao receives from each country with the number expected given that country's size and research output (values larger than one mean the country cites Yu Tsao more than expected).
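The "more than expected" ratio described above works like a location quotient: observed citations from a country divided by the citations expected if every country cited in proportion to its share of world research output. A minimal sketch, assuming hypothetical country shares and citation counts (none of these figures are real Rankless data):

```python
# Citations of the scholar received from each country (illustrative).
cites_from = {"US": 1200, "CN": 900, "TW": 400, "JP": 300}

# Each country's assumed share of all citations in the field.
country_share = {"US": 0.30, "CN": 0.25, "TW": 0.01, "JP": 0.05}

total = sum(cites_from.values())  # 2800

def citation_ratio(country):
    """Observed / expected citations for a country.

    Expected = scholar's total citations * country's global share,
    so a ratio above 1 means the country over-cites the scholar."""
    expected = total * country_share[country]
    return cites_from[country] / expected

for c in cites_from:
    print(c, round(citation_ratio(c), 2))
```

Under these toy numbers Taiwan's ratio is far above 1, matching the intuition that a scholar's home country cites them more than its size alone would predict.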

Fields of papers citing papers by Yu Tsao

[Field network legend: Physical Sciences · Health Sciences · Life Sciences · Social Sciences]

This network shows the impact of papers produced by Yu Tsao. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes mark fields that tend to cite Yu Tsao's papers; the network thus suggests where Yu Tsao may publish in the future.

Co-authorship network of co-authors of Yu Tsao

This figure shows the co-authorship network connecting Yu Tsao's top 25 collaborators, ranked by the total citations received by their joint publications with Yu Tsao. Edge widths represent the number of papers two authors have co-authored together; node borders indicate the number of papers an author has published with Yu Tsao. Yu Tsao is excluded from the visualization for readability, being connected to every node in the network.
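The two attributes the figure encodes — pairwise co-authorship counts (edge widths) and per-collaborator paper counts with the hero (node borders) — can be tallied with plain counters. A minimal sketch; the author sets below are illustrative, not Yu Tsao's actual papers:

```python
from collections import Counter
from itertools import combinations

# Each paper is the set of co-authors appearing on it (hero excluded,
# as in the visualization). Names are placeholders for illustration.
papers = [
    {"Xugang Lu", "Hsin-Min Wang"},
    {"Xugang Lu"},
    {"Hsin-Min Wang", "Szu-Wei Fu"},
    {"Szu-Wei Fu"},
]

edge_width = Counter()   # papers a pair of collaborators share
node_border = Counter()  # papers each collaborator shares with the hero

for authors in papers:
    for a in authors:
        node_border[a] += 1
    # Sort so each unordered pair maps to one canonical edge key.
    for a, b in combinations(sorted(authors), 2):
        edge_width[(a, b)] += 1

print(edge_width[("Hsin-Min Wang", "Xugang Lu")])  # 1
```

Ranking `node_border` by joint-paper count (or by citations to those papers, as Rankless does) would then select the top collaborators to draw.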

All Works

20 of 20 papers shown
1. Hou, Jen-Cheng, Kuo-Hsuan Hung, Yi‐Ting Chen, et al. (2025). Leveraging Self-Supervised Audio-Visual Pretrained Models to Improve Vocoded Speech Intelligibility in Cochlear Implant Simulation. IEEE Transactions on Biomedical Engineering, 73(4), 1561–1572.
2. Liu, Kai-Chun, Sheng-Yu Peng, Yu Tsao, et al. (2024). A Cross-Modal Autoencoder for Contactless Electrocardiography Monitoring Using Frequency-Modulated Continuous Wave Radar. IEEE Sensors Journal, 24(24), 41462–41473. 1 indexed citation.
4. Wang, Syu‐Siang, et al. (2024). Unsupervised Face-Masked Speech Enhancement Using Generative Adversarial Networks With Human-in-the-Loop Assessment Metrics. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 32, 3826–3837.
5. Tsao, Yu, et al. (2023). ElectrodeNet—A Deep-Learning-Based Sound Coding Strategy for Cochlear Implants. IEEE Transactions on Cognitive and Developmental Systems, 16(1), 346–357. 4 indexed citations.
6. Wang, Hsin‐Min, et al. (2022). Improved Lite Audio-Visual Speech Enhancement. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30, 1345–1359. 22 indexed citations.
7. Chen, Li‐Chin, et al. (2022). EPG2S: Speech Generation and Speech Enhancement Based on Electropalatography and Audio Signals Using Multimodal Learning. IEEE Signal Processing Letters, 29, 2582–2586. 7 indexed citations.
8. Wang, Wei-Chien, Mandar Gogate, Kia Dashtipour, et al. (2022). A Novel Temporal Attentive-Pooling based Convolutional Recurrent Architecture for Acoustic Signal Enhancement. IEEE Transactions on Artificial Intelligence, 3(5), 833–842. 3 indexed citations.
9. Fu, Szu‐Wei, et al. (2022). Deep Learning-Based Non-Intrusive Multi-Objective Speech Assessment Model With Cross-Domain Features. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 31, 54–70. 49 indexed citations.
10. Hung, Kuo-Hsuan, et al. (2022). Boosting Self-Supervised Embeddings for Speech Enhancement. Interspeech 2022, 186–190. 28 indexed citations.
11. Lu, Xugang, et al. (2021). Coupling a Generative Model With a Discriminative Learning Framework for Speaker Verification. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29, 3631–3641. 3 indexed citations.
12. Wang, Syu‐Siang, et al. (2020). Enhancing Intelligibility of Dysarthric Speech Using Gated Convolutional-Based Voice Conversion System. 4686–4690. 16 indexed citations.
13. Hidayati, Shintami Chusnul, Cheng-Chun Hsu, John See, et al. (2020). Dress With Style: Learning Style From Joint Deep Embedding of Clothing Styles and Body Shapes. IEEE Transactions on Multimedia, 23, 365–377. 41 indexed citations.
14. Fu, Szu‐Wei, et al. (2020). STOI-Net: A Deep Learning based Non-Intrusive Speech Intelligibility Assessment Model. Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, 482–486. 1 indexed citation.
15. Fu, Szu‐Wei, Chien-Feng Liao, Kuo-Hsuan Hung, et al. (2020). Boosting Objective Scores of Speech Enhancement Model through MetricGAN Post-Processing. arXiv (Cornell University). 2 indexed citations.
16. Torres-Sospedra, Joaquín, Antonio R. Jiménez, Adriano Moreira, et al. (2018). Off-Line Evaluation of Mobile-Centric Indoor Positioning Systems: The Experiences from the 2017 IPIN Competition. Sensors, 18(2), 487. 60 indexed citations.
17. Tsao, Yu, et al. (2018). Robust S1 and S2 heart sound recognition based on spectral restoration and multi-style training. Biomedical Signal Processing and Control, 49, 173–180. 9 indexed citations.
18. Wang, Syu‐Siang, et al. (2018). Locally linear embedding based post-filtering for speech enhancement. Journal of Information Science and Engineering, 34(6), 1469–1491. 1 indexed citation.
19. Hung, Jeih-weih, et al. (2014). Speech enhancement using segmental nonnegative matrix factorization. 4483–4487. 30 indexed citations.
20. Jing, How, Yu Tsao, Kuan‐Yu Chen, & Hsin‐Min Wang. (2013). Semantic Naïve Bayes Classifier for Document Classification. International Joint Conference on Natural Language Processing, 1117–1123. 8 indexed citations.

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.

Explore authors with similar magnitude of impact

Rankless by CCL
2026