Szu‐Wei Fu

3.2k total citations · 1 hit paper
42 papers, 1.2k citations indexed

About

Szu‐Wei Fu is a scholar working on Signal Processing, Artificial Intelligence and Cognitive Neuroscience. According to data from OpenAlex, Szu‐Wei Fu has authored 42 papers receiving a total of 1.2k indexed citations (citations from other indexed papers that have themselves been cited), including 33 papers in Signal Processing, 22 in Artificial Intelligence and 10 in Cognitive Neuroscience. Recurrent topics in Szu‐Wei Fu's work include Speech and Audio Processing (32 papers), Speech Recognition and Synthesis (20 papers) and Music and Audio Processing (13 papers), and the papers citing this work concentrate in the same topics. Szu‐Wei Fu collaborates with scholars based in Taiwan, the United States and Japan; co-authors include Yu Tsao, Xugang Lu, Kuo-Hsuan Hung, Hisashi Kawai, Yi-Yen Hsieh, Shao‐Yi Chien, Cheng Yu, Chien-Feng Liao, Mirco Ravanelli and Hsin‐Min Wang. Szu‐Wei Fu has published in journals such as Langmuir, The Journal of the Acoustical Society of America and IEEE Access.
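The "indexed citations" criterion above (only citations from papers that have themselves been cited count) can be sketched as a filter over a citation graph. The toy `cites` mapping and the paper labels below are hypothetical illustrations, not Rankless's actual pipeline or data.

```python
# Toy citation graph: paper -> set of papers it cites.
# Hypothetical data for illustration only.
cites = {
    "A": {"T"},   # A cites the target paper T
    "B": {"T"},   # B also cites T
    "C": {"A"},   # C cites A, so A is itself cited
}

def indexed_citations(target, cites):
    """Count citations of `target` coming from papers that are
    themselves cited at least once (the 'indexed' criterion)."""
    cited_papers = {p for refs in cites.values() for p in refs}
    return sum(
        1
        for paper, refs in cites.items()
        if target in refs and paper in cited_papers
    )

print(indexed_citations("T", cites))  # 1: A counts (C cites A); B does not
```

Under this reading, raw citation counts (here 2) can exceed indexed counts (here 1), which is consistent with the gap between the 3.2k total and 1.2k indexed figures shown on the page.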

In The Last Decade

Szu‐Wei Fu

41 papers receiving 1.2k citations

Hit Papers

Noise Reduction in ECG Signals Using Fully Convolutional ... (2019). [Citation trend chart omitted: years on the x-axis, citation counts (50–200) on the y-axis.]

Peers — A (Enhanced Table)

Peers by citation overlap. The Career column lists citations in five career stages (early → late); for each peer, every stage also shows its ratio to Szu‐Wei Fu's corresponding stage (the hero reference).

Name | Country | h | Career (early → late) | Papers | Cites
Szu‐Wei Fu | Taiwan | 16 | 849 · 526 · 295 · 284 · 165 | 42 | 1.2k
Syu‐Siang Wang | Taiwan | 13 | 501 (0.6×) · 258 (0.5×) · 151 (0.5×) · 166 (0.6×) · 75 (0.5×) | 49 | 758
Gayadhar Pradhan | India | 16 | 505 (0.6×) · 439 (0.8×) · 99 (0.3×) · 49 (0.2×) · 102 (0.6×) | 75 | 942
Mohammed Bahoura | Canada | 20 | 660 (0.8×) · 122 (0.2×) · 156 (0.5×) · 164 (0.6×) · 149 (0.9×) | 56 | 1.2k
Khaled Daqrouq | Saudi Arabia | 15 | 356 (0.4×) · 335 (0.6×) · 163 (0.6×) · 27 (0.1×) · 151 (0.9×) | 62 | 1.0k
Reinhold Orglmeister | Germany | 15 | 485 (0.6×) · 184 (0.3×) · 576 (2.0×) · 130 (0.5×) · 839 (5.1×) | 90 | 1.7k
Chengshi Zheng | China | 18 | 1.1k (1.3×) · 536 (1.0×) · 314 (1.1×) · 592 (2.1×) · 116 (0.7×) | 133 | 1.3k
Hong-Goo Kang | South Korea | 17 | 734 (0.9×) · 512 (1.0×) · 96 (0.3×) · 152 (0.5×) · 45 (0.3×) | 136 | 1.1k
S. Shahnawazuddin | India | 16 | 537 (0.6×) · 541 (1.0×) · 42 (0.1×) · 29 (0.1×) · 52 (0.3×) | 77 | 813
Qisong Wu | China | 20 | 228 (0.3×) · 180 (0.3×) · 62 (0.2×) · 332 (1.2×) · 496 (3.0×) | 76 | 1.3k
Binwei Weng | United States | 9 | 250 (0.3×) · 37 (0.1×) · 191 (0.6×) · 119 (0.4×) · 204 (1.2×) | 15 | 828
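The ×-values in the Career column appear to be each peer's stage citations divided by Szu‐Wei Fu's citations at the same stage. A quick sketch, using the numbers from the first two rows of the table (the division-and-round interpretation is an assumption, not Rankless's documented method):

```python
# Career-stage citations, early -> late, taken from the table above.
hero = [849, 526, 295, 284, 165]   # Szu‐Wei Fu (hero reference)
peer = [501, 258, 151, 166, 75]    # Syu‐Siang Wang

# Ratio of the peer's citations to the hero's at each career stage.
ratios = [round(p / h, 1) for p, h in zip(peer, hero)]
print(ratios)  # [0.6, 0.5, 0.5, 0.6, 0.5] -> matches the table's × values
```

The same computation reproduces the larger ratios too, e.g. Reinhold Orglmeister's 839 late-career citations against the hero's 165 give 5.1×.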

Countries citing papers authored by Szu‐Wei Fu


This map shows the geographic impact of Szu‐Wei Fu's research: the number of citations coming from papers whose authors work in each country. The map can also be colored by specialization, comparing the citations Szu‐Wei Fu receives from each country with the number expected given that country's size and research output (values larger than one mean the country cites Szu‐Wei Fu more than expected).
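The "more than expected" comparison reduces to a simple ratio: observed citations from a country divided by the citations one would expect from that country's share of global research output. A minimal sketch with made-up numbers (the function name and the exact normalization are assumptions for illustration):

```python
def citation_ratio(observed, country_share, total_citations):
    """Ratio > 1 means the country cites the author more often
    than its share of world research output would predict."""
    expected = country_share * total_citations
    return observed / expected

# Hypothetical: a country producing 5% of world output accounts
# for 120 of 1200 total citations to the author.
print(citation_ratio(120, 0.05, 1200))  # 2.0 -> twice the expected rate
```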

Fields of papers citing papers by Szu‐Wei Fu

Legend: Physical Sciences · Health Sciences · Life Sciences · Social Sciences

This network shows the impact of papers produced by Szu‐Wei Fu. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes mark fields that tend to cite Szu‐Wei Fu's papers. The network suggests where Szu‐Wei Fu may publish in the future.

Co-authorship network of co-authors of Szu‐Wei Fu

This figure shows the co-authorship network connecting the top 25 collaborators of Szu‐Wei Fu. A scholar is included among the top collaborators based on the total number of citations received by their joint publications. Edge widths represent the number of papers two authors have co-authored together, and node borders signify the number of papers an author published with Szu‐Wei Fu. Szu‐Wei Fu is excluded from the visualization to improve readability, since they would be connected to every node in the network.
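The network described above can be sketched with a plain adjacency structure: nodes are co-authors and edge weight is the number of joint papers. The author lists below are placeholders built from names mentioned on this page, not the actual publication data.

```python
from collections import Counter

# Hypothetical per-paper author lists (profiled scholar omitted,
# as in the figure).
papers = [
    {"Yu Tsao", "Xugang Lu"},
    {"Yu Tsao", "Kuo-Hsuan Hung"},
    {"Yu Tsao", "Xugang Lu"},
]

# Edge weight = number of papers a pair co-authored together.
# `a < b` ensures each unordered pair is counted once per paper.
edges = Counter(
    frozenset({a, b})
    for authors in papers
    for a in authors
    for b in authors
    if a < b
)

for pair, weight in edges.items():
    print(sorted(pair), weight)
```

Node border weights in the figure would come from an analogous count over papers that do include the profiled scholar.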

All Works

20 of 20 papers shown
2.
Chen, Zhehuai, Szu‐Wei Fu, Chao-Han Huck Yang, et al. (2025). Developing Instruction-Following Speech Language Model Without Speech Instruction-Tuning Data. 1–5. 1 indexed citation
3.
Khan, Muhammad Salman, et al. (2024). Exploiting Consistency-Preserving Loss and Perceptual Contrast Stretching to Boost SSL-Based Speech Enhancement. Nova Science Publishers, Inc. 1–6. 2 indexed citations
4.
Chen, Zhehuai, et al. (2024). DeSTA: Enhancing Speech Language Models through Descriptive Speech-Text Alignment. 4159–4163. 2 indexed citations
5.
Chen, Yu‐Wen, et al. (2024). A Study On Incorporating Whisper For Robust Speech Assessment. 1–6. 5 indexed citations
6.
Fu, Szu‐Wei, et al. (2024). Multi-objective non-intrusive hearing-aid speech assessment model. The Journal of the Acoustical Society of America. 156(5). 3574–3587. 3 indexed citations
7.
Fu, Szu‐Wei, et al. (2022). Deep Learning-Based Non-Intrusive Multi-Objective Speech Assessment Model With Cross-Domain Features. IEEE/ACM Transactions on Audio Speech and Language Processing. 31. 54–70. 49 indexed citations
8.
Hung, Kuo-Hsuan, et al. (2022). Boosting Self-Supervised Embeddings for Speech Enhancement. Interspeech 2022. 186–190. 28 indexed citations
9.
Yu, Cheng, et al. (2022). Perceptual Contrast Stretching on Target Feature for Speech Enhancement. Interspeech 2022. 5448–5452. 13 indexed citations
10.
Fu, Szu‐Wei, Cheng Yu, Kuo-Hsuan Hung, Mirco Ravanelli, & Yu Tsao. (2022). MetricGAN-U: Unsupervised Speech Enhancement/Dereverberation Based Only on Noisy/Reverberated Speech. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 7412–7416. 28 indexed citations
11.
Fu, Szu‐Wei, et al. (2022). Improving Meeting Inclusiveness using Speech Interruption Analysis. Proceedings of the 30th ACM International Conference on Multimedia. 887–895. 4 indexed citations
12.
Yu, Cheng, et al. (2022). OSSEM: one-shot speaker adaptive speech enhancement using meta learning. Interspeech 2022. 981–985. 2 indexed citations
14.
Fu, Szu‐Wei, et al. (2020). STOI-Net: A Deep Learning based Non-Intrusive Speech Intelligibility Assessment Model. Asia-Pacific Signal and Information Processing Association Annual Summit and Conference. 482–486. 1 indexed citation
15.
Fu, Szu‐Wei, Chien-Feng Liao, Kuo-Hsuan Hung, et al. (2020). Boosting Objective Scores of Speech Enhancement Model through MetricGAN Post-Processing. arXiv (Cornell University). 2 indexed citations
16.
Liu, Changle, et al. (2020). Multichannel Speech Enhancement by Raw Waveform-Mapping Using Fully Convolutional Networks. IEEE/ACM Transactions on Audio Speech and Language Processing. 28. 1888–1900. 38 indexed citations
17.
Fu, Szu‐Wei, Yu Tsao, & Xugang Lu. (2016). SNR-Aware Convolutional Neural Network Modeling for Speech Enhancement. 3768–3772. 123 indexed citations
18.
Fu, Szu‐Wei, et al. (2016). Joint Dictionary Learning-Based Non-Negative Matrix Factorization for Voice Conversion to Improve Speech Intelligibility After Oral Surgery. IEEE Transactions on Biomedical Engineering. 64(11). 2584–2594. 31 indexed citations
20.
Ding, Jian–Jiun, et al. (2014). End-point preserved stroke extraction. 157 161. 318–323. 1 indexed citation

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.

Explore authors with similar magnitude of impact

Rankless by CCL
2026