Vincent Wan

2.1k total citations
49 papers, 1.3k citations indexed

About

Vincent Wan is a scholar working on Artificial Intelligence, Signal Processing, and Computer Vision and Pattern Recognition. According to data from OpenAlex, Vincent Wan has authored 49 papers receiving a total of 1.3k indexed citations (citations from other indexed papers that have themselves been cited), including 36 papers in Artificial Intelligence, 31 in Signal Processing and 9 in Computer Vision and Pattern Recognition. Recurrent topics in Vincent Wan's work include Speech Recognition and Synthesis (31 papers), Speech and Audio Processing (24 papers) and Music and Audio Processing (20 papers), and his work is most often cited by papers in those same areas. Vincent Wan collaborates with scholars based in the United Kingdom, the Netherlands and Czechia. His co-authors include Steve Renals, William M. Campbell, Thomas Hain, Odette Scharenborg, Stuart N. Wrigley, John Dines, Guy J. Brown, Mike Lincoln, Martin Karafiát and Lukáš Burget, and he has published in venues such as The Journal of the Acoustical Society of America, SAE technical papers on CD-ROM/SAE technical paper series and Computer Vision and Image Understanding.

In The Last Decade

Vincent Wan

47 papers receiving 1.1k citations

Peers

Peers selected by citation overlap. Career citation values run from early to late career; × gives the ratio to Vincent Wan's corresponding value.

Name | Country | h | Career citations (early→late) | Papers | Cites
Vincent Wan | United Kingdom | 22 | 908 · 771 · 227 · 111 · 52 | 49 | 1.3k
Seiichi Nakagawa | Japan | 21 | 1.3k (1.5×) · 1.0k (1.3×) · 187 (0.8×) · 159 (1.4×) | 257 | 1.6k
Eduardo Lleida | Spain | 18 | 984 (1.1×) · 850 (1.1×) · 159 (0.7×) · 142 (1.3×) | 145 | 1.3k
C. Wellekens | France | 12 | 842 (0.9×) · 742 (1.0×) · 180 (0.8×) · 96 (0.9×) | 53 | 1.1k
Petr Motlíček | Switzerland | 21 | 1.1k (1.3×) · 831 (1.1×) · 114 (0.5×) · 98 (0.9×) | 155 | 1.4k
Koichi Shinoda | Japan | 16 | 989 (1.1×) · 761 (1.0×) · 331 (1.5×) · 168 (1.5×) | 136 | 1.4k
Pietro Laface | Italy | 21 | 1.4k (1.5×) · 1.1k (1.4×) · 131 (0.6×) · 154 (1.4×) | 104 | 1.6k
Sid‐Ahmed Selouani | Canada | 16 | 616 (0.7×) · 576 (0.7×) · 172 (0.8×) · 229 (2.1×) | 161 | 984
Rohit Sinha | India | 19 | 946 (1.0×) · 900 (1.2×) · 130 (0.6×) · 112 (1.0×) · 1 (0.0×) | 123 | 1.2k
Dimitrios Dimitriadis | United States | 18 | 569 (0.6×) · 473 (0.6×) · 92 (0.4×) · 99 (0.9×) | 57 | 813
Vivek Tyagi | United States | 9 | 423 (0.5×) · 391 (0.5×) · 98 (0.4×) · 94 (0.8×) | 28 | 704

Countries citing papers authored by Vincent Wan


This map shows the geographic impact of Vincent Wan's research: the number of citations coming from papers published by authors working in each country. The map can also be colored by specialization, comparing the citations Vincent Wan receives from each country with the number expected given that country's size and research output (values larger than one mean the country cites Vincent Wan more than expected).
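The observed-versus-expected comparison described above can be sketched as follows. This is a minimal illustration, assuming the expected share of citations is proportional to a country's share of global research output; the function name and all numbers are hypothetical, not Rankless's actual formula or data.

```python
# Hypothetical sketch of the specialization metric described above.
# Assumption: a country's expected citations are proportional to its
# share of global research output. All figures are made up.

def citation_ratio(cites_from_country, total_cites, country_output_share):
    """Observed vs. expected citations from one country.

    cites_from_country   : citations received from that country
    total_cites          : the scholar's total citations
    country_output_share : country's fraction of global research output
    """
    expected = total_cites * country_output_share
    return cites_from_country / expected

# A country producing 5% of world output but supplying 10% of a
# scholar's citations cites that scholar twice as often as expected.
print(citation_ratio(cites_from_country=100, total_cites=1000,
                     country_output_share=0.05))  # 2.0
```

A value of 1.0 would mean the country cites the scholar exactly as often as its research output alone would predict.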

Fields of papers citing papers by Vincent Wan

Legend: Physical Sciences · Health Sciences · Life Sciences · Social Sciences

This network shows the impact of papers produced by Vincent Wan. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes show fields that tend to cite the papers produced by Vincent Wan. The network helps show where Vincent Wan may publish in the future.

Co-authorship network of co-authors of Vincent Wan

This figure shows the co-authorship network connecting the top 25 collaborators of Vincent Wan. A scholar is included among the top collaborators of Vincent Wan based on the total number of citations received by their joint publications. Widths of edges represent the number of papers authors have co-authored together. Node borders signify the number of papers an author published with Vincent Wan. Vincent Wan is excluded from the visualization to improve readability, since they are connected to all nodes in the network.
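The edge weights described above (number of papers a pair has co-authored) can be computed with a simple pairwise count. The sketch below is a toy illustration; the author names are drawn from the works list only for flavor and the paper lists are invented, not Wan's actual collaboration data.

```python
# Toy sketch of building a co-authorship network: each edge connects
# two authors, weighted by the number of papers they wrote together.
from collections import Counter
from itertools import combinations

# Invented author lists, one per paper (not real publication data).
papers = [
    ["Hain", "Wrigley", "Wan"],
    ["Hain", "Dines", "Wan"],
    ["Hain", "Wrigley"],
]

edges = Counter()
for authors in papers:
    # Every unordered pair on a paper gains one joint publication.
    for a, b in combinations(sorted(authors), 2):
        edges[(a, b)] += 1

print(edges[("Hain", "Wrigley")])  # 2 joint papers in this toy data
print(edges[("Hain", "Wan")])      # 2
```

Sorting the author list before pairing ensures ("Hain", "Wan") and ("Wan", "Hain") count as the same edge.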

All Works

20 of 20 papers shown
1. Kenter, Tom, et al. (2019). CHiVE: Varying Prosody in Speech Synthesis with a Linguistically Driven Dynamic Hierarchical Conditional Variational Network. International Conference on Machine Learning. 3331–3340. 15 indexed citations
2. Cassidy, Sarah, Björn Stenger, Ross Anderson, et al. (2016). Expressive visual text-to-speech as an assistive technology for individuals with autism spectrum conditions. Computer Vision and Image Understanding. 148. 193–200. 19 indexed citations
3. Kim, Sung Hoon, et al. (2015). Automotive ADAS Camera System Configuration Using Multi-Core Microcontroller. SAE technical papers on CD-ROM/SAE technical paper series. 1. 3 indexed citations
4. Wan, Vincent, et al. (2014). Building HMM-TTS Voices on Diverse Data. IEEE Journal of Selected Topics in Signal Processing. 8(2). 296–306. 6 indexed citations
5. Latorre, Javier, et al. (2014). Speech intonation for TTS: study on evaluation methodology. 2957–2961. 6 indexed citations
6. Wan, Vincent, Robert Anderson, Norbert Braunschweiler, et al. (2013). Photo-realistic expressive text to talking head synthesis. Cambridge University Engineering Department Publications Database. 2667–2669. 10 indexed citations
7. Latorre, Javier, et al. (2013). Noise Robustness in HMM-TTS Speaker Adaptation. Edinburgh Research Explorer. 119–124. 6 indexed citations
8. Anderson, Robert, Björn Stenger, Vincent Wan, & Roberto Cipolla. (2013). Expressive Visual Text-to-Speech Using Active Appearance Models. 3382–3389. 54 indexed citations
9. Latorre, Javier, Vincent Wan, Mark Gales, et al. (2012). Speech factorization for HMM-TTS based on cluster adaptive training. 971–974. 21 indexed citations
10. Gales, Mark, et al. (2012). Exploring rich expressive information from audiobook data using cluster adaptive training. 959–962. 26 indexed citations
11. Fry, D., et al. (2011). Extending Audio Notetaker to Browse WebASR Transcriptions. Conference of the International Speech Communication Association. 31(45). 3329–3330. 1 indexed citation
12. Wan, Vincent, W. John Braun, C. B. Dean, & Sarah B. Henderson. (2010). A comparison of classification algorithms for the identification of smoke plumes from satellite images. Statistical Methods in Medical Research. 20(2). 131–156. 3 indexed citations
13. Hain, Thomas, Asmaa El Hannani, Stuart N. Wrigley, & Vincent Wan. (2008). Automatic speech recognition for scientific purposes - webASR. 504–507. 15 indexed citations
14. Wan, Vincent, et al. (2007). Finding Maximum Margin Segments in Speech. MPG.PuRe (Max Planck Society). IV–937. 31 indexed citations
15. Scharenborg, Odette, Vincent Wan, & Roger K. Moore. (2007). Towards capturing fine phonetic variation in speech using articulatory features. Speech Communication. 49(10-11). 811–826. 23 indexed citations
16. Hain, Thomas, Lukáš Burget, John Dines, et al. (2007). The AMI System for the Transcription of Speech in Meetings. Edinburgh Research Explorer. IV–357. 75 indexed citations
17. Hain, Thomas, John Dines, Giulia Garau, et al. (2005). Proceedings of the 9th European Conference on Speech Communication and Technology. Conference of the International Speech Communication Association. 37 indexed citations
18. Wrigley, Stuart N., Guy J. Brown, Vincent Wan, & Steve Renals. (2003). Proceedings of the 8th European Conference on Speech Communication and Technology. Conference of the International Speech Communication Association. 45 indexed citations
19. Seow, W.K., et al. (2003). Visual-tactile examination compared with conventional radiography, digital radiography, and Diagnodent in the diagnosis of occlusal occult caries in extracted premolars. PubMed. 25(4). 341–349. 40 indexed citations
20. Wan, Vincent & William M. Campbell. (2002). Support vector machines for speaker verification and identification. 2. 775–784. 155 indexed citations

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.
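As a rough illustration of how the underlying OpenAlex data can be retrieved, the sketch below builds a query URL for an author's works against the public OpenAlex API. The author ID shown is a placeholder, not necessarily Vincent Wan's actual OpenAlex ID, and this is one plausible way to query the API rather than Rankless's actual pipeline.

```python
# Hedged sketch: building an OpenAlex API query for one author's works.
# "A0000000000" is a placeholder author ID, not a real one.

def openalex_works_url(author_id: str, per_page: int = 25) -> str:
    """Return a works query filtered to a single OpenAlex author ID."""
    return (
        "https://api.openalex.org/works"
        f"?filter=author.id:{author_id}&per-page={per_page}"
    )

url = openalex_works_url("A0000000000")
print(url)
# The JSON response could then be fetched with, e.g.:
#   requests.get(url).json()["results"]
```

Each returned work record includes the citation counts and authorship metadata that profiles like this one aggregate.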


Rankless by CCL
2026