Ming-Hsiang Su

857 total citations
48 papers, 567 citations indexed

About

Ming-Hsiang Su is a scholar working on Artificial Intelligence, Experimental and Cognitive Psychology, and Computer Vision and Pattern Recognition. According to data from OpenAlex, Ming-Hsiang Su has authored 48 papers receiving a total of 567 indexed citations (citations from other indexed papers that have themselves been cited), including 26 papers in Artificial Intelligence, 19 in Experimental and Cognitive Psychology, and 12 in Computer Vision and Pattern Recognition. Recurrent topics in this work include Emotion and Mood Recognition (17 papers), Topic Modeling (15 papers), and Speech and Audio Processing (9 papers), and citing papers tend to focus on the same areas. Ming-Hsiang Su collaborates with scholars based in Taiwan, Slovakia, and China; co-authors include Chung‐Hsien Wu, Kun-Yi Huang, Yi‐Hsuan Chen, Hsin‐Min Wang, Yu‐Ting Kuo, Yuting Zheng, Pao-Ta Yu, Liangyu Chen, and Yi Chang. Ming-Hsiang Su has published in journals such as IEEE Access, Sensors, and Pattern Recognition.

In the Last Decade

46 papers receiving 539 citations

Peers

Peers are selected by citation overlap. The career columns show citations by career stage (early→late); for each peer, the multiplier after a value gives that stage's citations relative to Ming-Hsiang Su's.

Name · Country · h · Career citations (early→late) · Papers · Cites
Ming-Hsiang Su Taiwan 12 283 238 115 81 73 48 567
Kun-Yi Huang Taiwan 10 185 0.7× 210 0.9× 81 0.7× 69 0.9× 65 0.9× 25 420
Colleen Richey United States 16 511 1.8× 198 0.8× 270 2.3× 50 0.6× 90 1.2× 35 797
Lukas Stappen Germany 12 261 0.9× 249 1.0× 117 1.0× 73 0.9× 118 1.6× 25 516
Charlie K. Dagli United States 10 193 0.7× 234 1.0× 78 0.7× 233 2.9× 97 1.3× 16 536
Norhaslinda Kamaruddin Malaysia 11 95 0.3× 183 0.8× 100 0.9× 52 0.6× 33 0.5× 56 391
Shogo Okada Japan 13 307 1.1× 159 0.7× 69 0.6× 102 1.3× 130 1.8× 104 569
Kevin Leach United States 14 272 1.0× 59 0.2× 167 1.5× 36 0.4× 68 0.9× 52 668
Hugues Salamin United Kingdom 10 410 1.4× 386 1.6× 358 3.1× 151 1.9× 121 1.7× 19 823
Laura Docío-Fernández Spain 12 243 0.9× 98 0.4× 169 1.5× 103 1.3× 40 0.5× 63 418

Countries citing papers authored by Ming-Hsiang Su


This map shows the geographic impact of Ming-Hsiang Su's research: the number of citations coming from papers published by authors working in each country. You can also color the map by specialization, which compares the citations Ming-Hsiang Su receives from each country with the number expected given that country's size and research output (values larger than one mean the country cites Ming-Hsiang Su more than expected).

Fields of papers citing papers by Ming-Hsiang Su


This network shows the impact of papers produced by Ming-Hsiang Su. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes mark fields that tend to cite Ming-Hsiang Su's papers. The network can suggest fields in which Ming-Hsiang Su may publish in the future.

Co-authorship network of co-authors of Ming-Hsiang Su

This figure shows the co-authorship network connecting the top 25 collaborators of Ming-Hsiang Su. A scholar is counted among the top collaborators based on the total number of citations received by their joint publications with Ming-Hsiang Su. Edge widths represent the number of papers two authors have co-authored together, and node borders signify the number of papers an author published with Ming-Hsiang Su. Ming-Hsiang Su is excluded from the visualization to improve readability, since they would be connected to every node in the network.

All Works

20 of 20 papers shown
1. Pleva, Matúš, et al. (2025). Ensemble Learning for Wafer Defect Pattern Classification in the Semiconductor Industry. IEEE Access. 13. 155714–155728. 1 indexed citation
2. Su, Ming-Hsiang, et al. (2024). Few-Shot Image Segmentation Using Generating Mask with Meta-Learning Classifier Weight Transformer Network. Electronics. 13(13). 2634. 1 indexed citation
3. Su, Ming-Hsiang, et al. (2024). Implementation of Sound Direction Detection and Mixed Source Separation in Embedded Systems. Sensors. 24(13). 4351.
5. Su, Ming-Hsiang, et al. (2023). Named Entity Recognition for Chinese Healthcare Applications. 30. 749–750.
6. Hládek, Daniel, et al. (2023). Application of Wafer Defect Pattern Classification Model in the Semiconductor Industry. 2173–2177. 1 indexed citation
7. Su, Ming-Hsiang, et al. (2021). Speech Emotion Recognition Considering Nonverbal Vocalization in Affective Conversations. IEEE/ACM Transactions on Audio Speech and Language Processing. 29. 1675–1686. 46 indexed citations
8. Wu, Chung‐Hsien, et al. (2020). Exploring Macroscopic and Microscopic Fluctuations of Elicited Facial Expressions for Mood Disorder Classification. IEEE Transactions on Affective Computing. 12(4). 989–1001. 7 indexed citations
9. Su, Ming-Hsiang, et al. (2020). A Two-Stage Transformer-Based Approach for Variable-Length Abstractive Summarization. IEEE/ACM Transactions on Audio Speech and Language Processing. 28. 2061–2072. 36 indexed citations
10. Su, Ming-Hsiang, et al. (2019). Cell-Coupled Long Short-Term Memory With L-Skip Fusion Mechanism for Mood Disorder Detection Through Elicited Audiovisual Features. IEEE Transactions on Neural Networks and Learning Systems. 31(1). 124–135. 18 indexed citations
11. Su, Ming-Hsiang, Chung‐Hsien Wu, & Liangyu Chen. (2019). Attention-Based Response Generation Using Parallel Double Q-Learning for Dialog Policy Decision in a Conversational System. IEEE/ACM Transactions on Audio Speech and Language Processing. 28. 131–143. 11 indexed citations
12. Huang, Kun-Yi, et al. (2019). Speech Emotion Recognition Using Deep Neural Network Considering Verbal and Nonverbal Speech Sounds. 5866–5870. 77 indexed citations
13. Huang, Kun-Yi, Chung‐Hsien Wu, Ming-Hsiang Su, & Yu‐Ting Kuo. (2018). Detecting Unipolar and Bipolar Depressive Disorders from Elicited Speech Responses Using Latent Affective Structure Model. IEEE Transactions on Affective Computing. 11(3). 393–404. 45 indexed citations
14. Su, Ming-Hsiang, et al. (2018). Response Selection and Automatic Message-Response Expansion in Retrieval-Based QA Systems using Semantic Dependency Pair Model. ACM Transactions on Asian and Low-Resource Language Information Processing. 18(1). 1–24. 6 indexed citations
15. Wu, Chung‐Hsien, et al. (2016). Detection of mood disorder using speech emotion profiles and LSTM. 1–5. 11 indexed citations
16. Yu, Pao-Ta, et al. (2013). A Near-Reality Approach to Improve the e-Learning Open Courseware. Educational Technology & Society. 16(4). 242–257. 7 indexed citations
17. Yu, Pao-Ta, et al. (2012). Utilizing an Online Group Study Environment to Enhance Student Reading Ability and Learning Effectiveness. Journal of Internet Technology. 13(6). 981–988. 1 indexed citation
18. Su, Ming-Hsiang & Pao-Ta Yu. (2011). A Directable and Designable Course Recording System. Journal of Convergence Information Technology. 6(3). 234–243. 2 indexed citations
19. Yang, Jinn‐Min, Ming-Hsiang Su, & Pao-Ta Yu. (2010). A Novel K-Nearest Neighbor Classifier Based on Adaptive Metric Formed by Features Extracted by Nonparametric Feature Extraction Model. 4(2). 89–103. 2 indexed citations
20. Su, Ming-Hsiang, et al. (2005). A user-oriented framework for the design and implementation of pet robots. 3. 2936–2941. 2 indexed citations

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.


Rankless by CCL
2026