Hit papers significantly outperform the citation benchmark for their cohort. A paper qualifies
if it meets at least one of three criteria: it has ≥500 total citations; it achieves ≥1.5× the
top-1% citation threshold for papers in the same subfield and year (the threshold is the minimum
needed to enter the top 1%, not the average within it); or it reaches the top citation threshold
in at least one of its specific research topics.
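As a sketch, the criteria above can be expressed as a simple check. The function name, parameters, and threshold inputs are illustrative assumptions, not Rankless's actual implementation:

```python
def is_hit_paper(citations, top1_threshold, topic_thresholds):
    """Return True if a paper meets any of the hit-paper criteria.

    citations        -- the paper's total citation count
    top1_threshold   -- minimum citations to enter the top 1% for the
                        paper's subfield and year
    topic_thresholds -- top citation thresholds for each of the paper's
                        specific research topics
    """
    if citations >= 500:                     # absolute citation criterion
        return True
    if citations >= 1.5 * top1_threshold:    # 1.5x the top-1% entry threshold
        return True
    # top threshold reached in at least one research topic
    return any(citations >= t for t in topic_thresholds)
```

Because the criteria are combined with "or", a paper needs to clear only one of the three bars to qualify.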
AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks (2018). 1.0k citations. Tao Xu, Pengchuan Zhang et al.
VinVL: Revisiting Visual Representations in Vision-Language Models (2021). 568 citations. Pengchuan Zhang, Xiujun Li et al.
Countries citing papers authored by Pengchuan Zhang
This map shows the geographic impact of Pengchuan Zhang's research: the number of citations coming from papers published by authors working in each country. You can also color the map by specialization, or compare the number of citations Pengchuan Zhang receives from each country with the number expected given that country's size and research output; values larger than one mean the country cites Pengchuan Zhang more than expected.
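A minimal sketch of the over/under-citation ratio described above. It assumes expected citations are modeled as a country's share of global research output times the scholar's total citations; the exact Rankless model is not specified here, and the function name is illustrative:

```python
def citation_ratio(observed_citations, country_output_share, total_citations):
    """Ratio of observed to expected citations from one country.

    observed_citations   -- citations the scholar receives from the country
    country_output_share -- the country's fraction of global research output
    total_citations      -- the scholar's total citations worldwide
    """
    expected = country_output_share * total_citations
    return observed_citations / expected
```

For example, a country producing 25% of global research output that accounts for 50 of a scholar's 100 citations would have a ratio of 2.0, i.e. it cites the scholar twice as often as expected.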
This network shows the impact of papers produced by Pengchuan Zhang. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes show fields that tend to cite the papers produced by Pengchuan Zhang. The network helps show where Pengchuan Zhang may publish in the future.
Co-authorship network of Pengchuan Zhang's collaborators
This figure shows the co-authorship network connecting the top 25 collaborators of Pengchuan Zhang. A scholar is ranked among the top collaborators by the total number of citations received by their joint publications with Pengchuan Zhang. Edge widths represent the number of papers two authors have co-authored together, and node borders signify the number of papers an author has published with Pengchuan Zhang. Pengchuan Zhang is excluded from the visualization to improve readability, since they would be connected to every node in the network.
Yang, Jianwei, Chunyuan Li, Pengchuan Zhang, et al. (2022). Unified Contrastive Learning in Image-Text-Label Space. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 19141–19151. 117 indexed citations
7. Dou, Zi-Yi, Yichong Xu, Zhe Gan, et al. (2022). An Empirical Study of Training End-to-End Vision-and-Language Transformers. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 18145–18155. 204 indexed citations
8. Li, Chunyuan, et al. (2021). Focal Attention for Long-Range Interactions in Vision Transformers. Neural Information Processing Systems. 34. 50 indexed citations
9. Zhang, Pengchuan, Xiujun Li, Xiaowei Hu, et al. (2021). VinVL: Making Visual Representations Matter in Vision-Language Models. arXiv (Cornell University). 48 indexed citations
10. Zhang, Pengchuan, Xiyang Dai, Jianwei Yang, et al. (2021). Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding. 2021 IEEE/CVF International Conference on Computer Vision (ICCV). 2978–2988. 204 indexed citations
11. Dai, Xiyang, Yinpeng Chen, Jianwei Yang, et al. (2021). Dynamic DETR: End-to-End Object Detection with Dynamic Attention. 2021 IEEE/CVF International Conference on Computer Vision (ICCV). 2968–2977. 257 indexed citations
12. Salman, Hadi, Ilya Razenshteyn, Pengchuan Zhang, et al. (2019). Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers. arXiv (Cornell University). 32. 11289–11300. 29 indexed citations
13. Salman, Hadi, Greg Yang, Huan Zhang, Cho‐Jui Hsieh, & Pengchuan Zhang. (2019). A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks. Neural Information Processing Systems. 32. 9832–9842. 16 indexed citations
Zhang, Pengchuan, Qiang Liu, Dengyong Zhou, Tao Xu, & Xiaodong He. (2018). On the Discrimination-Generalization Tradeoff in GANs. International Conference on Learning Representations. 3 indexed citations
17. Huang, Qiuyuan, Pengchuan Zhang, Dapeng Wu, & Lei Zhang. (2018). Turbo Learning for CaptionBot and DrawingBot. Neural Information Processing Systems. 31. 6455–6465. 9 indexed citations
18. Xu, Tao, Pengchuan Zhang, Qiuyuan Huang, et al. (2018). AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks. 1316–1324. 1024 indexed citations
Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive
bibliographic database. While OpenAlex provides broad and valuable coverage of the global
research landscape, it—like all bibliographic datasets—has inherent limitations. These include
incomplete records, variations in author disambiguation, differences in journal indexing, and
delays in data updates. As a result, some metrics and network relationships displayed in
Rankless may not fully capture the entirety of a scholar's output or impact.