Hit papers significantly outperform the citation benchmark for their cohort. A paper qualifies if it meets any one of three criteria: it has ≥500 total citations; it reaches ≥1.5× the top-1% citation threshold for papers in the same subfield and year (this threshold is the minimum needed to enter the top 1%, not the average within it); or it reaches the top citation threshold in at least one of its specific research topics.
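As a rough illustration, here is a minimal sketch of that rule. The function name, argument names, and the example threshold values are assumptions for illustration, not Rankless's actual implementation.

```python
# Hedged sketch of the hit-paper rule described above; thresholds are hypothetical.

def is_hit_paper(total_citations: int,
                 top1_threshold: float,
                 topic_thresholds: list[float]) -> bool:
    """A paper qualifies if it meets ANY one of the three criteria."""
    return (
        total_citations >= 500                                   # absolute citation count
        or total_citations >= 1.5 * top1_threshold               # 1.5x the top-1% entry threshold
        or any(total_citations >= t for t in topic_thresholds)   # top threshold in >= 1 topic
    )

# Example: 400 citations, subfield entry threshold 300, topic thresholds [450, 380].
# 400 < 500 and 400 < 1.5 * 300, but 400 >= 380, so it qualifies via the topic criterion.
print(is_hit_paper(400, 300.0, [450.0, 380.0]))  # True
```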
TDAN: Temporally-Deformable Alignment Network for Video Super-Resolution
2020 · 366 citations · Yapeng Tian, Chenliang Xu et al.
Towards Automatic Learning of Procedures From Web Instructional Videos
2018 · 313 citations · Luowei Zhou, Chenliang Xu et al.
Hierarchical Cross-Modal Talking Face Generation With Dynamic Pixel-Wise Loss
2019 · 268 citations · Lele Chen, Ross K. Maddox et al.
Video Understanding With Large Language Models: A Survey
2025 · 16 citations · Yunlong Tang, Jie An et al. · IEEE Transactions on Circuits and Systems for Video Technology
Peers (Enhanced Table)
Peers ranked by citation overlap; the career bar shows career stage (early → late).
This map shows the geographic impact of Chenliang Xu's research: the number of citations coming from papers whose authors work in each country. You can also color the map by specialization, or compare the citations Chenliang Xu receives from each country with the number expected given that country's size and research output (a ratio larger than one means the country cites Chenliang Xu more than expected).
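The comparison metric reduces to a simple observed/expected ratio. The sketch below assumes that "expected" citations are allocated in proportion to each country's share of global research output; the country names, counts, and shares are made up for illustration.

```python
# Hedged sketch of the map's over/under-citation ratio; all numbers are hypothetical.

observed = {"US": 1200, "CN": 950, "DE": 110}        # citations of Xu's papers, by citing country
output_share = {"US": 0.25, "CN": 0.27, "DE": 0.06}  # assumed share of global research output

total_observed = sum(observed.values())
ratio = {
    country: observed[country] / (output_share[country] * total_observed)
    for country in observed
}
# ratio > 1: the country cites Chenliang Xu more than its research output would predict
print(ratio)  # e.g. {'US': 2.12, 'CN': 1.56, 'DE': 0.81} (approximately)
```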
This network shows the impact of papers produced by Chenliang Xu. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes mark fields that tend to cite Chenliang Xu's papers; together they suggest where Chenliang Xu may publish in the future.
Co-authorship network of Chenliang Xu's top collaborators
This figure shows the co-authorship network connecting the top 25 collaborators of Chenliang Xu. A scholar counts as a top collaborator based on the total number of citations received by their joint publications with Chenliang Xu. Edge widths represent the number of papers two authors have co-authored together, and node borders indicate the number of papers an author published with Chenliang Xu. Chenliang Xu is excluded from the visualization to improve readability, since they are connected to all nodes in the network.
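As a rough illustration of how such a network could be assembled, here is a sketch using networkx. The paper list is made up, and the top-25 selection by joint-citation totals is omitted for brevity; this is not Rankless's actual pipeline.

```python
# Hedged sketch of building a co-authorship network like the one described above.
from itertools import combinations

import networkx as nx

# Each entry lists the authors of one joint paper with Chenliang Xu (hypothetical data).
papers = [
    ["Yapeng Tian", "Chenliang Xu"],
    ["Luowei Zhou", "Chenliang Xu", "Jason J. Corso"],
    ["Lele Chen", "Ross K. Maddox", "Zhiyao Duan", "Chenliang Xu"],
]

G = nx.Graph()
for authors in papers:
    # Mirror the figure: Chenliang Xu is excluded, since he would link to every node.
    coauthors = [a for a in authors if a != "Chenliang Xu"]
    for a in coauthors:
        # Node border in the figure: number of papers written with Chenliang Xu.
        if a in G:
            G.nodes[a]["papers_with_xu"] += 1
        else:
            G.add_node(a, papers_with_xu=1)
    for a, b in combinations(coauthors, 2):
        # Edge width in the figure: number of papers the pair co-authored together.
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

print(G.nodes(data=True))
print(G.edges(data=True))
```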
All Works (20 papers)
1. Tang, Yunlong, Jie An, Feng Zheng, et al. (2025). Video Understanding With Large Language Models: A Survey. IEEE Transactions on Circuits and Systems for Video Technology, 36(2), 1355–1376. 16 indexed citations.
Tian, Yapeng, et al. (2020). Deep Audio Prior: Learning Sound Source Separation from a Single Audio Mixture. Computer Vision and Pattern Recognition. 2 indexed citations.
14. Chen, Lele, Haitian Zheng, Ross K. Maddox, Zhiyao Duan, & Chenliang Xu. (2019). Sound to Visual: Hierarchical Cross-Modal Talking Face Generation. Computer Vision and Pattern Recognition, 1–4. 4 indexed citations.
Tian, Yapeng, et al. (2019). Audio-Visual Interpretable and Controllable Video Captioning. Computer Vision and Pattern Recognition, 9–12. 9 indexed citations.
Tian, Yapeng, Jing Shi, Bochen Li, Zhiyao Duan, & Chenliang Xu. (2019). Audio-Visual Event Localization in the Wild. Computer Vision and Pattern Recognition, 5–8. 4 indexed citations.
19. Zhou, Luowei, Chenliang Xu, & Jason J. Corso. (2017). ProcNets: Learning to Segment Procedures in Untrimmed and Unconstrained Videos. arXiv (Cornell University). 4 indexed citations.
Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive
bibliographic database. While OpenAlex provides broad and valuable coverage of the global
research landscape, it—like all bibliographic datasets—has inherent limitations. These include
incomplete records, variations in author disambiguation, differences in journal indexing, and
delays in data updates. As a result, some metrics and network relationships displayed in
Rankless may not fully capture the entirety of a scholar's output or impact.