Ben Milner

1.5k total citations
84 papers, 1.0k citations indexed

About

Ben Milner is a scholar working on Signal Processing, Artificial Intelligence and Computational Mechanics. According to data from OpenAlex, Ben Milner has authored 84 papers receiving a total of 1.0k indexed citations (citations from other indexed papers that have themselves been cited), including 80 papers in Signal Processing, 38 in Artificial Intelligence and 26 in Computational Mechanics. Recurrent topics in Ben Milner's work include Speech and Audio Processing (76 papers), Speech Recognition and Synthesis (37 papers) and Music and Audio Processing (33 papers), and these are also the topics of the papers that most often cite this work. Ben Milner collaborates with scholars based in the United Kingdom, the United States and China. Ben Milner's co-authors include Saeed V. Vaseghi, Xu Shao, Dan Smith, Ling Ma, Jonathan Darch, Sarah Taylor, S. F. J. Cox, Robert Lee, Denise Risch and Naomi Harte, and Ben Milner has published in journals such as The Journal of the Acoustical Society of America, Electronics Letters and IEEE Transactions on Multimedia.

In The Last Decade

80 papers receiving 895 citations

Peers (Enhanced Table)

Peers selected by citation overlap · career columns show citations by career stage (early→late), with each peer's value also given as a multiple of Ben Milner's

Name Country h Career-stage cites (early→late) Papers Cites
Ben Milner United Kingdom 19 875 531 289 119 74 84 1.0k
J.S. Mason United Kingdom 16 608 0.7× 377 0.7× 390 1.3× 114 1.0× 45 0.6× 53 959
Xu Shao United Kingdom 8 867 1.0× 379 0.7× 243 0.8× 154 1.3× 156 2.1× 27 952
Javier Hernando Spain 21 1.3k 1.5× 1.1k 2.2× 335 1.2× 117 1.0× 50 0.7× 118 1.7k
Ricard Marxer France 13 470 0.5× 356 0.7× 100 0.3× 101 0.8× 127 1.7× 66 719
JJ Odell 5 941 1.1× 1.1k 2.1× 279 1.0× 53 0.4× 61 0.8× 6 1.5k
Hemant A. Patil India 18 1.3k 1.4× 1.1k 2.1× 187 0.6× 24 0.2× 44 0.6× 193 1.5k
Zhiyao Duan United States 22 1.4k 1.6× 428 0.8× 1.0k 3.5× 114 1.0× 205 2.8× 114 1.8k
Xavier Rodet France 19 767 0.9× 352 0.7× 489 1.7× 61 0.5× 182 2.5× 93 1.1k
Masakiyo Fujimoto Japan 18 1.0k 1.2× 700 1.3× 129 0.4× 289 2.4× 76 1.0× 95 1.2k
Carlo Drioli Italy 15 457 0.5× 203 0.4× 174 0.6× 92 0.8× 87 1.2× 82 708

Countries citing papers authored by Ben Milner


This map shows the geographic impact of Ben Milner's research: the number of citations received from papers whose authors work in each country. The map can also be coloured by specialization, comparing the citations Ben Milner receives from a country with the number expected from that country's size and research output (values larger than one mean the country cites Ben Milner more than expected).

Fields of papers citing papers by Ben Milner

Field legend: Physical Sciences · Health Sciences · Life Sciences · Social Sciences

This network shows the impact of papers produced by Ben Milner. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes show fields that tend to cite the papers produced by Ben Milner. The network helps show where Ben Milner may publish in the future.

Co-authorship network of co-authors of Ben Milner

This figure shows the co-authorship network connecting the top 25 collaborators of Ben Milner. A scholar is included among the top collaborators of Ben Milner based on the total number of citations received by their joint publications. Widths of edges represent the number of papers authors have co-authored together. Node borders signify the number of papers an author published with Ben Milner. Ben Milner is excluded from the visualization to improve readability, since they are connected to all nodes in the network.

All Works

20 of 20 papers shown
1.
Newson, Stuart E., et al. (2024). Improving acoustic species identification using data augmentation within a deep learning framework. Ecological Informatics. 83. 102851–102851. 2 indexed citations
2.
Milner, Ben, et al. (2023). Investigating Imaginary Mask Estimation in Complex Masking for Speech Enhancement. UEA Digital Repository (University of East Anglia). 131–135.
3.
Taylor, Sarah, et al. (2021). Speaker-Independent Speech Animation Using Perceptual Loss Functions and Synthetic Data. IEEE Transactions on Multimedia. 24. 2539–2552. 9 indexed citations
4.
Milner, Ben, et al. (2019). A comparison of machine learning methods for detecting right whales from autonomous surface vehicles. UEA Digital Repository (University of East Anglia). 1–5. 4 indexed citations
5.
Milner, Ben, et al. (2018). Synthesising visual speech using dynamic visemes and deep learning architectures. Computer Speech & Language. 55. 101–119. 10 indexed citations
6.
Milner, Ben, et al. (2017). A Comparison of Perceptually Motivated Loss Functions for Binary Mask Estimation in Speech Separation. UEA Digital Repository (University of East Anglia). 2003–2007. 2 indexed citations
7.
Milner, Ben, et al. (2015). Voicing classification of visual speech using convolutional neural networks. UEA Digital Repository (University of East Anglia). 103–108. 3 indexed citations
8.
Milner, Ben, et al. (2013). Speaker separation using visually-derived binary masks. AVSP. 215–220. 7 indexed citations
9.
Milner, Ben. (2013). Enhancing speech at very low signal-to-noise ratios using non-acoustic reference signals. Speech Communication. 55(9). 879–892. 1 indexed citation
10.
Milner, Ben, et al. (2009). Effective visually-derived Wiener filtering for audio-visual speech processing. View. 134–139. 5 indexed citations
11.
Milner, Ben, et al. (2008). Comparing noise compensation methods for robust prediction of acoustic speech features from MFCC vectors in noise. UEA Digital Repository (University of East Anglia). 1–5. 2 indexed citations
12.
Milner, Ben, et al. (2008). Using Audio-Visual Features For Robust Voice Activity Detection In Clean And Noisy Speech. UEA Digital Repository (University of East Anglia). 1–5. 32 indexed citations
13.
Milner, Ben, et al. (2007). Noisy audio speech enhancement using Wiener filters derived from visual speech. AVSP. 11(2). 16–e0146855. 4 indexed citations
14.
Milner, Ben, et al. (2007). Maximising audio-visual speech correlation. Surrey Research Insight Open Access (The University of Surrey). 17. 8 indexed citations
15.
Qin, Yan, et al. (2006). Kalman filter with linear predictor and harmonic noise models for noisy speech enhancement. UEA Digital Repository (University of East Anglia). 1–4.
16.
Milner, Ben, et al. (2004). Interleaving And Estimation Of Lost Vectors For Robust Speech Recognition In Burst-Like Packet Loss. UEA Digital Repository (University of East Anglia). 1947–1950. 1 indexed citation
17.
Milner, Ben, et al.. (1997). Evaluating feature set performance using the f-ratio and j-measures. 413–416. 21 indexed citations
18.
Milner, Ben & Saeed V. Vaseghi. (1995). An analysis of cepstral-time matrices for noise and channel robust speech recognition. UEA Digital Repository (University of East Anglia). 519–522. 14 indexed citations
19.
Milner, Ben. (1994). Comparison of some noise-compensation methods for speech recognition in adverse environments. IEE Proceedings - Vision Image and Signal Processing. 141(5). 280–280. 5 indexed citations
20.
Vaseghi, Saeed V. & Ben Milner. (1993). Noisy speech recognition based on HMMs, Wiener filters and re-evaluation of most likely candidates. IEEE International Conference on Acoustics Speech and Signal Processing. 26. 103–106 vol.2. 3 indexed citations

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.


Rankless by CCL
2026