Xiaodan Liang

23.4k total citations · 5 hit papers
239 papers, 8.7k citations indexed

About

Xiaodan Liang is a scholar working on Computer Vision and Pattern Recognition, Artificial Intelligence, and Computational Mechanics. According to data from OpenAlex, Xiaodan Liang has authored 239 papers receiving a total of 8.7k indexed citations (citations from other indexed papers that have themselves been cited), including 177 papers in Computer Vision and Pattern Recognition, 137 in Artificial Intelligence, and 14 in Computational Mechanics. Recurrent topics in this work include Multimodal Machine Learning Applications (85 papers), Domain Adaptation and Few-Shot Learning (69 papers), and Advanced Neural Network Applications (66 papers); papers citing Xiaodan Liang are concentrated in these same areas. Xiaodan Liang collaborates with scholars based in China, the United States, and Sweden; co-authors include Liang Lin, Shuicheng Yan, Xiaohui Shen, Jiashi Feng, Yunchao Wei, Eric P. Xing, Xiaojun Chang, Hang Xu, Yao Zhao, and Ke Gong. Xiaodan Liang has published in prestigious journals such as Bioinformatics, IEEE Transactions on Pattern Analysis and Machine Intelligence, and The Science of The Total Environment.

In The Last Decade

218 papers receiving 8.5k citations

Hit Papers

Scale-aware Fast R-CNN for Pedestrian Detection (2016) [citations-per-year chart]

Peers (Enhanced Table)

Peers ranked by citation overlap. The career columns show citations received at five career stages (early→late); ratios in parentheses are relative to Xiaodan Liang, the reference scholar.

Name | Country | h | Career cites, early→late (× vs. Liang) | Papers | Cites
Xiaodan Liang | China | 51 | 6.6k · 3.7k · 498 · 455 · 434 | 239 | 8.7k
Abhinav Gupta | United States | 35 | 5.3k (0.8×) · 3.7k (1.0×) · 234 (0.5×) · 502 (1.1×) · 486 (1.1×) | 76 | 7.9k
Yanwei Fu | China | 38 | 4.8k (0.7×) · 2.8k (0.8×) · 376 (0.8×) · 744 (1.6×) · 693 (1.6×) | 164 | 6.7k
Mingkui Tan | China | 40 | 5.4k (0.8×) · 3.3k (0.9×) · 445 (0.9×) · 917 (2.0×) · 473 (1.1×) | 151 | 8.1k
Zhaoxiang Zhang | China | 39 | 5.1k (0.8×) · 1.9k (0.5×) · 345 (0.7×) · 428 (0.9×) · 197 (0.5×) | 230 | 6.5k
Bharath Hariharan | United States | 23 | 4.7k (0.7×) · 2.3k (0.6×) · 363 (0.7×) · 399 (0.9×) · 229 (0.5×) | 46 | 5.9k
Fumin Shen | China | 45 | 6.1k (0.9×) · 2.9k (0.8×) · 226 (0.5×) · 539 (1.2×) · 210 (0.5×) | 205 | 7.7k
Guosheng Lin | Singapore | 38 | 7.0k (1.1×) · 2.9k (0.8×) · 454 (0.9×) · 1.6k (3.5×) · 555 (1.3×) | 162 | 9.0k
Xiangyang Xue | China | 42 | 4.5k (0.7×) · 2.8k (0.7×) · 292 (0.6×) · 373 (0.8×) · 320 (0.7×) | 288 | 7.0k
Guiguang Ding | China | 51 | 7.5k (1.1×) · 4.6k (1.2×) · 238 (0.5×) · 1.0k (2.2×) · 481 (1.1×) | 187 | 11.8k
Wenguan Wang | China | 56 | 9.4k (1.4×) · 1.9k (0.5×) · 275 (0.6×) · 1.1k (2.4×) · 282 (0.6×) | 123 | 11.1k

Countries citing papers authored by Xiaodan Liang


This map shows the geographic impact of Xiaodan Liang's research: the number of citations coming from papers published by authors working in each country. You can also color the map by specialization, which compares the citations Xiaodan Liang receives from each country with the number expected given that country's size and research output (values larger than one mean the country cites Xiaodan Liang more than expected).

Fields of papers citing papers by Xiaodan Liang


This network shows the impact of papers produced by Xiaodan Liang. Nodes represent research fields, and links connect fields that are likely to share authors; colored nodes mark fields that tend to cite Xiaodan Liang's papers. The network suggests where Xiaodan Liang may publish in the future.

Co-authorship network of co-authors of Xiaodan Liang

This figure shows the co-authorship network connecting the top 25 collaborators of Xiaodan Liang. A scholar is included among the top collaborators of Xiaodan Liang based on the total number of citations received by their joint publications. Widths of edges represent the number of papers authors have co-authored together. Node borders signify the number of papers an author published with Xiaodan Liang. Xiaodan Liang is excluded from the visualization to improve readability, since they are connected to all nodes in the network.

All Works

20 of 20 papers shown
1. Chu, Ruihang, et al. (2025). DialogGen: Multi-modal Interactive Dialogue System with Multi-turn Text-Image Generation. 411–426. 2 indexed citations.
2. Li, Hanhui, et al. (2024). Monocular 3D Hand Mesh Recovery via Dual Noise Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(4), 3046–3054. 1 indexed citation.
3. Tang, Haoran, Peng Jin, Can Zhang, et al. (2024). RAP: Efficient Text-Video Retrieval with Sparse-and-Correlated Adapter. Rare & Special e-Zone (The Hong Kong University of Science and Technology), 7160–7174. 3 indexed citations.
4. Xiong, Jing, et al. (2024). AlignedCoT: Prompting Large Language Models via Native-Speaking Demonstrations. Rare & Special e-Zone (The Hong Kong University of Science and Technology), 2857–2896.
5.
6. Wang, Guangrun, Yixing Lao, Peng Chen, et al. (2024). LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields. 390–398. 10 indexed citations.
7. Han, Jianhua, et al. (2023). NLIP: Noise-Robust Language-Image Pre-training. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 926–934. 17 indexed citations.
8. Zhu, Fengda, Vincent C. S. Lee, Xiaojun Chang, & Xiaodan Liang. (2023). Vision Language Navigation with Knowledge-driven Environmental Dreamer. Monash University Research Portal (Monash University), 1840–1848.
9. Zeng, Yihan, Chenhan Jiang, Jiageng Mao, et al. (2023). CLIP2: Contrastive Language-Image-Point Pretraining from Real-World Point Cloud Data. Rare & Special e-Zone (The Hong Kong University of Science and Technology), 15244–15253. 41 indexed citations.
10. Jiang, Zutao, Guansong Lu, Xiaodan Liang, et al. (2023). 3D-TOGO: Towards Text-Guided Cross-Category 3D Object Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 1051–1059. 5 indexed citations.
11. Lin, Bingqian, Yi Zhu, Fengda Zhu, et al. (2023). Towards Deviation-Robust Agent Navigation via Perturbation-Aware Contrastive Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(10), 12535–12549. 2 indexed citations.
12. Qin, Jinghui, et al. (2023). Template-Based Contrastive Distillation Pretraining for Math Word Problem Solving. IEEE Transactions on Neural Networks and Learning Systems, 35(9), 12823–12835. 1 indexed citation.
13. Dong, Xiao, Yunchao Wei, Xiao-Yong Wei, et al. (2023). Entity-Graph Enhanced Cross-Modal Pretraining for Instance-level Product Retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(11), 1–16. 9 indexed citations.
14. Cheng, Yi, Wenge Liu, Wenjie Li, et al. (2022). Improving Multi-turn Emotional Support Dialogue Generation with Lookahead Strategy Planning. 3014–3026. 15 indexed citations.
16. Zhang, Hongming, et al. (2022). MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure. 4698–4724. 2 indexed citations.
17. Li, Changlin, Guangrun Wang, Bing Wang, et al. (2021). Dynamic Slimmable Network. 8603–8613. 84 indexed citations.
18. Cao, Qingxing, et al. (2021). Linguistically Routing Capsule Network for Out-of-distribution Visual Question Answering. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 1594–1603. 10 indexed citations.
19. Dong, Xiao, et al. (2021). PathReasoner: Explainable reasoning paths for commonsense question answering. Knowledge-Based Systems, 235, 107612. 11 indexed citations.
20. Gong, Ke, Xiaodan Liang, Dongyu Zhang, Xiaohui Shen, & Liang Lin. (2017). Look into Person: Self-Supervised Structure-Sensitive Learning and a New Benchmark for Human Parsing. 6757–6765. 293 indexed citations.

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.

Explore authors with similar magnitude of impact

Rankless by CCL
2026