Dingkun Long

425 total citations
11 papers, 144 citations indexed

About

Dingkun Long is a scholar working on Artificial Intelligence, Computer Vision and Pattern Recognition, and Information Systems. According to data from OpenAlex, Dingkun Long has authored 11 papers receiving a total of 144 indexed citations (citations from other indexed papers that have themselves been cited), including 11 papers in Artificial Intelligence, 5 in Computer Vision and Pattern Recognition, and 1 in Information Systems. Recurring topics in Dingkun Long's work include Topic Modeling (8 papers), Natural Language Processing Techniques (5 papers), and Multimodal Machine Learning Applications (4 papers), and the papers citing this work are concentrated in these same topics. Dingkun Long collaborates with scholars based in China, the United States, and Canada. Co-authors include Pengjun Xie, Guangwei Xu, Ning Ding, Haoyu Zhang, Gongshen Liu, Jie Zhou, Chunping Ma, Richong Zhang, Yongyi Mao, and Meishan Zhang, and Dingkun Long has published in venues such as IEEE Access, Neurocomputing, and the Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.

In The Last Decade

Dingkun Long: 11 papers receiving 140 citations

Peers

Peers are matched by citation overlap. The career columns list citations by career stage (early → late); ratios in parentheses are relative to Dingkun Long.

Name · Country · h · Career-stage cites (early → late) · Papers · Cites
Dingkun Long · China · 6 · 114, 24, 17, 8, 6 · 11 · 144
Chenglei Si · United States · 8 · 131 (1.1×), 22 (0.9×), 25 (1.5×), 7 (0.9×), 12 (2.0×) · 13 · 159
Wenyue Hua · United States · 8 · 67 (0.6×), 31 (1.3×), 9 (0.5×), 6 (0.8×), 6 (1.0×) · 21 · 104
Γεράσιμος Λάμπουρας · Greece · 6 · 142 (1.2×), 22 (0.9×), 16 (0.9×), 13 (1.6×), 3 (0.5×) · 17 · 148
Thomas Scialom · France · 6 · 158 (1.4×), 18 (0.8×), 46 (2.7×), 5 (0.6×), 5 (0.8×) · 12 · 178
Sebastian Hofstätter · Austria · 5 · 70 (0.6×), 24 (1.0×), 23 (1.4×), 3 (0.4×), 3 (0.5×) · 12 · 82
Sutanu Chakraborti · India · 5 · 110 (1.0×), 30 (1.3×), 18 (1.1×), 6 (0.8×), 4 (0.7×) · 25 · 132
Guillaume Wenzek · Israel · 5 · 193 (1.7×), 20 (0.8×), 47 (2.8×), 10 (1.3×), 3 (0.5×) · 6 · 198
C Gysel · Netherlands · 5 · 77 (0.7×), 36 (1.5×), 13 (0.8×), 6 (0.8×), 3 (0.5×) · 12 · 91
Felix Hieber · Germany · 7 · 118 (1.0×), 29 (1.2×), 31 (1.8×), 6 (0.8×), 3 (0.5×) · 9 · 124
Qingyu Tan · Singapore · 5 · 172 (1.5×), 21 (0.9×), 28 (1.6×), 9 (1.1×), 3 (0.5×) · 10 · 193

Countries citing papers authored by Dingkun Long


This map shows the geographic impact of Dingkun Long's research: the number of citations coming from papers by authors based in each country. The map can also be colored by specialization, comparing the citations Dingkun Long receives from each country with the number expected given that country's size and research output (values greater than one mean the country cites Dingkun Long more than expected).

Fields of papers citing papers by Dingkun Long

Field legend: Physical Sciences · Health Sciences · Life Sciences · Social Sciences

This network shows the impact of papers produced by Dingkun Long. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes mark fields that tend to cite Dingkun Long's papers; the network suggests where Dingkun Long may publish in the future.

Co-authorship network of co-authors of Dingkun Long

This figure shows the co-authorship network connecting the top 25 collaborators of Dingkun Long. A scholar is included among the top collaborators based on the total number of citations received by their joint publications with Dingkun Long. Edge widths represent the number of papers two authors have co-authored together; node borders indicate the number of papers an author published with Dingkun Long. Dingkun Long is excluded from the visualization to improve readability, since they are connected to all nodes in the network.

All Works

11 of 11 papers shown
1. Zhang, Xin, Yanzhao Zhang, Dingkun Long, et al. (2025). Bridging Modalities: Improving Universal Multimodal Retrieval by Multimodal Large Language Models. 9274–9285. 1 indexed citation
2. Zhang, Longhui, Yanzhao Zhang, Dingkun Long, et al. (2024). A Two-Stage Adaptation of Large Language Models for Text Ranking. 11880–11891. 1 indexed citation
3. Zhang, Xintong, Yanzhao Zhang, Dingkun Long, et al. (2024). mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval. 1393–1412. 8 indexed citations
4. Zhang, Yanzhao, et al. (2023). Text Representation Distillation via Information Bottleneck Principle. 14372–14383. 1 indexed citation
5. Long, Dingkun, et al. (2022). Unsupervised Boundary-Aware Language Model Pretraining for Chinese Sequence Labeling. 526–537. 5 indexed citations
6. Long, Dingkun, Qiong Gao, Guangwei Xu, et al. (2022). Multi-CPR. Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 3046–3056. 6 indexed citations
7. Long, Dingkun, et al. (2021). A Fine-Grained Domain Adaption Model for Joint Word Segmentation and POS Tagging. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. 3587–3598. 2 indexed citations
8. Zhou, Jie, Chunping Ma, Dingkun Long, et al. (2020). Hierarchy-Aware Global Model for Hierarchical Text Classification. 1106–1117. 92 indexed citations
9. Long, Dingkun, Guangwei Xu, Muhua Zhu, et al. (2020). Learning with Noise: Improving Distantly-Supervised Fine-grained Entity Typing via Automatic Relabeling. 3808–3815. 16 indexed citations
10. Long, Dingkun, Richong Zhang, & Yongyi Mao. (2019). Recurrent Neural Networks With Finite Memory Length. IEEE Access, 7, 12511–12520. 8 indexed citations
11. Long, Dingkun, Richong Zhang, & Yongyi Mao. (2018). Prototypical recurrent unit. Neurocomputing, 311, 146–154. 4 indexed citations

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.

Explore authors with similar magnitude of impact

Rankless by CCL
2026