Heiga Zen

15.7k total citations · 3 hit papers
124 papers, 5.5k citations indexed

About

Heiga Zen is a scholar working on Artificial Intelligence, Signal Processing, and Computer Vision and Pattern Recognition. According to data from OpenAlex, Heiga Zen has authored 124 papers receiving a total of 5.5k indexed citations (citations by other indexed papers that have themselves been cited), including 120 papers in Artificial Intelligence, 83 in Signal Processing and 9 in Computer Vision and Pattern Recognition. Recurrent topics in Heiga Zen's work include Speech Recognition and Synthesis (115 papers), Speech and Audio Processing (77 papers) and Natural Language Processing Techniques (45 papers), and papers citing Heiga Zen most often focus on these same topics. Heiga Zen collaborates with scholars based in Japan, the United States and the United Kingdom. Heiga Zen's co-authors include Keiichi Tokuda, Alan W. Black, Andrew Senior, Mike Schuster, Tomoki Toda, Yoshihiko Nankaku, Junichi Yamagishi, Tadashi Kitamura and Takashi Masuko, and Heiga Zen has published in prestigious journals such as Proceedings of the IEEE, The Journal of the Acoustical Society of America and IEEE Signal Processing Magazine.

In The Last Decade

118 papers receiving 4.9k citations

Hit Papers

Statistical parametric speech synthesis (2009) — citation trend chart

Peers

Peers selected by citation overlap. The career columns show citations by career stage (early→late); × values are ratios relative to Heiga Zen's corresponding figure.

Name · Country · h · Citations by career stage (early→late) · Papers · Cites
Heiga Zen Japan 34 4.9k 3.7k 570 504 154 124 5.5k
Yanmin Qian China 34 3.6k 0.7× 3.2k 0.9× 261 0.5× 444 0.9× 105 0.7× 228 4.4k
Philip C. Woodland United Kingdom 42 7.3k 1.5× 5.1k 1.4× 417 0.7× 842 1.7× 55 0.4× 230 8.0k
Pierre Dumouchel Canada 24 5.0k 1.0× 4.7k 1.3× 343 0.6× 531 1.1× 127 0.8× 86 5.5k
Frank K. Soong China 35 4.2k 0.9× 3.5k 0.9× 543 1.0× 1.1k 2.2× 66 0.4× 295 5.3k
S. R. Mahadeva Prasanna India 28 2.1k 0.4× 2.3k 0.6× 582 1.0× 476 0.9× 195 1.3× 290 3.1k
Zhen-Hua Ling China 32 2.9k 0.6× 2.0k 0.5× 343 0.6× 452 0.9× 94 0.6× 247 3.5k
Tomoki Toda Japan 43 6.4k 1.3× 5.5k 1.5× 757 1.3× 810 1.6× 551 3.6× 425 7.3k
Douglas A. Reynolds United States 33 6.7k 1.4× 6.4k 1.7× 337 0.6× 1.3k 2.5× 134 0.9× 86 8.0k
Tomi Kinnunen Finland 35 5.0k 1.0× 5.0k 1.3× 202 0.4× 930 1.8× 325 2.1× 169 6.0k

Countries citing papers authored by Heiga Zen


This map shows the geographic impact of Heiga Zen's research: the number of citations coming from papers published by authors working in each country. You can also color the map by specialization, which compares the citations Heiga Zen receives from a country with the number expected given that country's size and research output (values larger than one mean the country cites Heiga Zen more than expected).
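The over/under-citation ratio described above can be sketched as follows. This is a hypothetical illustration, not Rankless's actual formula: it assumes "expected" citations are simply proportional to a country's share of world research output, and all numbers in the example are made up.

```python
# Hypothetical sketch of the specialization ratio described above.
# Assumption (not from the source): expected citations from a country are
# proportional to that country's share of world research output.

def citation_ratio(observed_citations, country_output_share, total_citations):
    """Return observed/expected citations for one country.

    observed_citations:   the scholar's citations coming from that country
    country_output_share: the country's fraction of world research output (0..1)
    total_citations:      the scholar's total citation count
    """
    expected = country_output_share * total_citations
    return observed_citations / expected

# Made-up example: a country producing 5% of world output accounts for
# 400 of 5500 citations, so it cites the scholar more than expected.
ratio = citation_ratio(400, 0.05, 5500)
print(round(ratio, 2))  # 400 / 275 ≈ 1.45
```

A ratio above one marks over-citation relative to country size; below one, under-citation.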

Fields of papers citing papers by Heiga Zen

Legend: Physical Sciences · Health Sciences · Life Sciences · Social Sciences

This network shows the impact of papers produced by Heiga Zen. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes show fields that tend to cite the papers produced by Heiga Zen. The network helps show where Heiga Zen may publish in the future.

Co-authorship network of co-authors of Heiga Zen

This figure shows the co-authorship network connecting the top 25 collaborators of Heiga Zen. A scholar is included among the top collaborators of Heiga Zen based on the total number of citations received by their joint publications. Widths of edges represent the number of papers authors have co-authored together. Node borders signify the number of papers an author published with Heiga Zen. Heiga Zen is excluded from the visualization to improve readability, since they are connected to all nodes in the network.

All Works

20 of 20 papers shown
2. Nachmani, Eliya, et al. (2025). SimulTron: On-Device Simultaneous Speech to Speech Translation. 1–5.
3. Koizumi, Yuma, Heiga Zen, Shigeki Karita, et al. (2023). LibriTTS-R: A Restored Multi-Speaker Text-to-Speech Corpus. 5496–5500. 31 indexed citations
4. Ghosh, Prasanta Kumar, Hema A. Murthy, Heiga Zen, et al. (2023). Lightweight, Multi-Speaker, Multi-Lingual Indic Text-to-Speech. 32. 1–2.
5. Elias, Isaac, Heiga Zen, Jonathan Shen, et al. (2021). Parallel Tacotron 2: A Non-Autoregressive Neural TTS Model with Differentiable Duration Modeling. 141–145. 37 indexed citations
6. Zhang, Yu, Ron J. Weiss, Heiga Zen, et al. (2019). Learning to Speak Fluently in a Foreign Language: Multilingual Speech Synthesis and Cross-Language Voice Cloning. 2080–2084. 97 indexed citations
7. Chen, Yutian, Yannis Assael, Brendan Shillingford, et al. (2018). Sample-efficient adaptive text-to-speech. arXiv (Cornell University). 16 indexed citations
8. Maia, Ranniery, Heiga Zen, & Mark Gales. (2010). Statistical parametric speech synthesis with joint estimation of acoustic and excitation model parameters. Cambridge University Engineering Department Publications Database. 88–93. 12 indexed citations
9. Zen, Heiga, Norbert Braunschweiler, Sabine Buchholz, et al. (2010). HMM-based polyglot speech synthesis by speaker and language adaptive training. SSW. 186–191. 4 indexed citations
10. Zen, Heiga, Keiichiro Oura, Takashi Nose, et al. (2009). Recent development of the HMM-based speech synthesis system (HTS). Hokkaido University Collection of Scholarly and Academic Papers (Hokkaido University). 121–130. 24 indexed citations
11. Oura, Keiichiro, Heiga Zen, Yoshihiko Nankaku, Akinobu Lee, & Keiichi Tokuda. (2009). Tying covariance matrices to reduce the footprint of HMM-based speech synthesis systems. 1759–1762. 12 indexed citations
12. Maia, Ranniery, Tomoki Toda, Heiga Zen, Yoshihiko Nankaku, & Keiichi Tokuda. (2007). An Excitation Model for HMM-Based Speech Synthesis Based on Residual Modeling. NAIST Digital Library (Nara Institute of Science and Technology). 131–136. 52 indexed citations
13. Zen, Heiga, Takashi Nose, Junichi Yamagishi, et al. (2007). The HMM-based speech synthesis system (HTS) version 2.0. SSW. 294–299. 259 indexed citations
14. Zen, Heiga, Tomoki Toda, Masaru Nakamura, & Keiichi Tokuda. (2007). Details of the Nitech HMM-Based Speech Synthesis System for the Blizzard Challenge 2005 (Speech and Hearing). IEICE Transactions on Information and Systems. 90(1). 325–333. 6 indexed citations
15. Zen, Heiga, et al. (2007). A Hidden Semi-Markov Model-Based Speech Synthesis System (Speech and Hearing). IEICE Transactions on Information and Systems. 90(5). 825–834. 9 indexed citations
16. Zen, Heiga, et al. (2005). Deterministic Annealing EM Algorithm in Acoustic Modeling for Speaker and Speech Recognition (Feature Extraction and Acoustic Modelings, Corpus-Based Speech Technologies). IEICE Transactions on Information and Systems. 88(3). 425–431. 1 indexed citation
17. Lima, Amaro A. de, et al. (2005). Applying Sparse KPCA for Feature Extraction in Speech Recognition (Feature Extraction and Acoustic Modelings, Corpus-Based Speech Technologies). IEICE Transactions on Information and Systems. 88(3). 401–409. 2 indexed citations
18. Zen, Heiga, Keiichi Tokuda, & Tadashi Kitamura. (2004). An introduction of trajectory model into HMM-based speech synthesis. SSW. 191–196. 36 indexed citations
19. Tokuda, Keiichi, Heiga Zen, & Tadashi Kitamura. (2004). Reformulating the HMM as a Trajectory Model. Scientific Programming. 104(538). 43–48. 15 indexed citations
20. Zen, Heiga, et al. (2004). Deterministic annealing EM algorithm in parameter estimation for acoustic model. 433–436. 7 indexed citations

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.

Explore authors with similar magnitude of impact

Rankless by CCL
2026