Chengzhu Yu

1.4k total citations
36 papers, 759 citations indexed

About

Chengzhu Yu is a scholar working on Artificial Intelligence, Signal Processing, and Computer Vision and Pattern Recognition. According to data from OpenAlex, Chengzhu Yu has authored 36 papers receiving a total of 759 indexed citations (citations from other indexed papers that have themselves been cited), including 30 papers in Artificial Intelligence, 27 in Signal Processing, and 4 in Computer Vision and Pattern Recognition. Recurrent topics in Chengzhu Yu's work include Speech Recognition and Synthesis (30 papers), Speech and Audio Processing (25 papers), and Music and Audio Processing (16 papers), and papers in these same areas are also the most frequent citers of this work. Chengzhu Yu collaborates with scholars based in the United States, China, and Japan; co-authors include John H. L. Hansen, Chunlei Zhang, Dong Yu, Chao Weng, Jia Cui, Marc Delcroix, Atsunori Ogawa, Tomohiro Nakatani, Takuya Yoshioka, and Gang Liu. Chengzhu Yu has published in prestigious journals such as The Journal of the Acoustical Society of America, IEEE Journal of Selected Topics in Signal Processing, and IEEE/ACM Transactions on Audio, Speech, and Language Processing.

In The Last Decade

34 papers receiving 671 citations

Peers

Peers selected by citation overlap. Career figures show citations by career stage (early → late); × ratios are relative to Chengzhu Yu, the reference author.

Name | Country | h | Career cites, early → late (× vs. Chengzhu Yu) | Papers | Cites
Chengzhu Yu | United States | 16 | 633 · 597 · 66 · 64 · 35 | 36 | 759
Shinji Takaki | Japan | 16 | 823 (1.3×) · 817 (1.4×) · 124 (1.9×) · 111 (1.7×) · 59 (1.7×) | 50 | 1.0k
Shinnosuke Takamichi | Japan | 16 | 682 (1.1×) · 756 (1.3×) · 115 (1.7×) · 56 (0.9×) · 21 (0.6×) | 116 | 978
Jaesung Huh | South Korea | 11 | 378 (0.6×) · 425 (0.7×) · 57 (0.9×) · 32 (0.5×) · 14 (0.4×) | 24 | 556
Jeih-weih Hung | Taiwan | 12 | 464 (0.7×) · 361 (0.6×) · 76 (1.2×) · 106 (1.7×) · 35 (1.0×) | 88 | 556
Jason Pelecanos | United States | 12 | 836 (1.3×) · 833 (1.4×) · 57 (0.9×) · 36 (0.6×) · 16 (0.5×) | 43 | 929
Hannah Muckenhirn | Switzerland | 8 | 416 (0.7×) · 405 (0.7×) · 36 (0.5×) · 24 (0.4×) · 18 (0.5×) | 11 | 487
Tsubasa Ochiai | Japan | 7 | 681 (1.1×) · 897 (1.5×) · 71 (1.1×) · 51 (0.8×) · 16 (0.5×) | 11 | 1.0k
K. K. Chin | Japan | 10 | 426 (0.7×) · 423 (0.7×) · 34 (0.5×) · 71 (1.1×) · 35 (1.0×) | 26 | 569
Hossein Sameti | Iran | 14 | 663 (1.0×) · 577 (1.0×) · 73 (1.1×) · 199 (3.1×) · 48 (1.4×) | 103 | 848
Xuankai Chang | United States | 21 | 1.1k (1.7×) · 1.4k (2.3×) · 72 (1.1×) · 70 (1.1×) · 31 (0.9×) | 66 | 1.6k

Countries citing papers authored by Chengzhu Yu


This map shows the geographic impact of Chengzhu Yu's research: the number of citations coming from papers published by authors working in each country. The map can also be colored by specialization, comparing the citations Chengzhu Yu receives from each country with the number expected given that country's size and research output (values greater than one mean the country cites Chengzhu Yu more than expected).
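The specialization value described above is an observed-over-expected ratio. Rankless's exact normalization is not documented on this page, so the sketch below is only a plausible reading: expected citations are taken as the country's share of global research output times the author's total indexed citations. All numbers in the example are made up for illustration.

```python
def citation_ratio(observed, country_share, total_citations):
    """Observed citations from a country divided by the count expected
    if citations were spread in proportion to the country's share of
    global research output. Values > 1 mean above-expected citing."""
    expected = country_share * total_citations
    return observed / expected

# Hypothetical example: a country producing 5% of world output
# accounts for 60 of the 759 indexed citations.
ratio = citation_ratio(observed=60, country_share=0.05, total_citations=759)
print(round(ratio, 2))
```

Under this reading, a ratio near 1.0 simply means the country cites the author about as often as its research volume predicts.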

Fields of papers citing papers by Chengzhu Yu

Legend: Physical Sciences · Health Sciences · Life Sciences · Social Sciences

This network shows the impact of papers produced by Chengzhu Yu. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes show fields that tend to cite the papers produced by Chengzhu Yu. The network helps show where Chengzhu Yu may publish in the future.

Co-authorship network of co-authors of Chengzhu Yu

This figure shows the co-authorship network connecting the top 25 collaborators of Chengzhu Yu. A scholar is included among the top collaborators of Chengzhu Yu based on the total number of citations received by their joint publications. Widths of edges represent the number of papers authors have co-authored together. Node borders signify the number of papers an author published with Chengzhu Yu. Chengzhu Yu is excluded from the visualization to improve readability, since they are connected to all nodes in the network.

All Works

20 of 20 papers shown
1. Liu, Yufei, Chengzhu Yu, Shuai Wang, et al. (2021). Non-Parallel Any-to-Many Voice Conversion by Replacing Speaker Statistics. 1369–1373. 1 indexed citation
2. Chen, Ling-Hui, et al. (2020). The Tencent speech synthesis system for Blizzard Challenge 2020. 28–32. 1 indexed citation
3. Li, Shengchen, et al. (2020). Peking Opera Synthesis via Duration Informed Attention Network. 1226–1230. 5 indexed citations
4. Yu, Chengzhu, et al. (2020). Pitchnet: Unsupervised Singing Voice Conversion with Pitch Adversarial Network. 7749–7753. 29 indexed citations
5. Hansen, John H. L., et al. (2019). The 2019 Inaugural Fearless Steps Challenge: A Giant Leap for Naturalistic Audio. 1851–1855. 18 indexed citations
6. Hansen, John H. L., et al. (2019). Fearless steps: Taking the next step towards advanced speech technology for naturalistic audio. The Journal of the Acoustical Society of America. 146(4_Supplement). 2956–2956.
7. Yu, Chengzhu, Chunlei Zhang, Chao Weng, Jia Cui, & Dong Yu. (2018). A Multistage Training Framework for Acoustic-to-Word Model. 786–790. 15 indexed citations
8. Weng, Chao, Jia Cui, Guangsen Wang, et al. (2018). Improving Attention Based Sequence-to-Sequence Models for End-to-End English Conversational Speech Recognition. 761–765. 38 indexed citations
9. Zhang, Chunlei, Chengzhu Yu, Chao Weng, Jia Cui, & Dong Yu. (2018). An Exploration of Directly Using Word as Acoustic Modeling Unit for Speech Recognition. 64–69. 4 indexed citations
10. Hansen, John H. L., et al. (2018). Fearless Steps: Apollo-11 Corpus Advancements for Speech Technologies from Earth to the Moon. 2758–2762. 20 indexed citations
11. Wang, Dongmei, Chengzhu Yu, & John H. L. Hansen. (2017). Robust Harmonic Features for Classification-Based Pitch Estimation. IEEE/ACM Transactions on Audio Speech and Language Processing. 25(5). 952–964. 20 indexed citations
12. Zhang, Chunlei, Chengzhu Yu, & John H. L. Hansen. (2017). An Investigation of Deep-Learning Frameworks for Speaker Verification Antispoofing. IEEE Journal of Selected Topics in Signal Processing. 11(4). 684–694. 94 indexed citations
13. Yu, Chengzhu & John H. L. Hansen. (2017). A study of voice production characteristics of astronaut speech during Apollo 11 for speaker modeling in space. The Journal of the Acoustical Society of America. 141(3). 1605–1614. 5 indexed citations
14. Yu, Chengzhu, Chunlei Zhang, Finnian Kelly, Abhijeet Sangwan, & John H. L. Hansen. (2016). Text-Available Speaker Recognition System for Forensic Applications. 1844–1847. 3 indexed citations
15. Yu, Chengzhu, Chunlei Zhang, Shivesh Ranjan, et al. (2016). UTD-CRSS system for the NIST 2015 language recognition i-vector machine learning challenge. 5835–5839. 6 indexed citations
16. Delcroix, Marc, Keisuke Kinoshita, Chengzhu Yu, et al. (2016). Context adaptive deep neural networks for fast acoustic model adaptation in noisy conditions. 5270–5274. 20 indexed citations
17. Yu, Chengzhu, et al. (2014). Uncertainty propagation in front end factor analysis for noise robust speaker recognition. 4017–4021. 39 indexed citations
18. Yu, Chengzhu, Kamil Wójcicki, Philipos C. Loizou, John H. L. Hansen, & Michael T. Johnson. (2014). Evaluation of the importance of time-frequency contributions to speech intelligibility in noise. The Journal of the Acoustical Society of America. 135(5). 3007–3016. 8 indexed citations
19. Yu, Chengzhu, et al. (2013). A new mask-based objective measure for predicting the intelligibility of binary masked speech. PubMed. 7030–7033. 2 indexed citations

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.

Explore authors with similar magnitude of impact

Rankless by CCL
2026