Tie‐Yan Liu

7.3k total citations · 1 hit paper
62 papers, 1.4k citations indexed

About

Tie‐Yan Liu is a scholar working on Artificial Intelligence, Computer Vision and Pattern Recognition, and Signal Processing. According to data from OpenAlex, Tie‐Yan Liu has authored 62 papers receiving a total of 1.4k indexed citations (citations by other indexed papers that have themselves been cited), including 49 papers in Artificial Intelligence, 18 in Computer Vision and Pattern Recognition, and 5 in Signal Processing. Recurrent topics in Tie‐Yan Liu's work include Topic Modeling (24 papers), Natural Language Processing Techniques (20 papers), and Multimodal Machine Learning Applications (9 papers), and Tie‐Yan Liu is often cited by papers focused on these same topics. Tie‐Yan Liu collaborates with scholars based in China, the United States, and the United Kingdom. Co-authors include Tao Qin, Guolin Ke, Yingce Xia, Fei Tian, Di He, Shuxin Zheng, Chengxuan Ying, Tianle Cai, and Shengjie Luo, and Tie‐Yan Liu has published in prestigious venues such as the Journal of Artificial Intelligence Research, ACM Transactions on Intelligent Systems and Technology, and arXiv (Cornell University).

In The Last Decade

Tie‐Yan Liu

60 papers receiving 1.3k citations

Hit Papers

Do Transformers Really Perform Badly for Graph Representa... (2021) [citation trend chart]

Peers (Enhanced Table)

Peers ranked by citation overlap with Tie‐Yan Liu. Career columns list citations by career stage (early→late); × values give each peer's stage relative to Tie‐Yan Liu, the reference scholar.

Name               | Country       | h  | Career citations, early→late (× = ratio vs. Tie‐Yan Liu)       | Papers | Cites
Tie‐Yan Liu        | China         | 21 | 1.0k · 450 · 173 · 122 · 81                                    | 62     | 1.4k
Lantao Yu          | United States | 9  | 930 (0.9×) · 664 (1.5×) · 219 (1.3×) · 230 (1.9×) · 63 (0.8×)  | 27     | 1.6k
Ulf Brefeld        | Germany       | 20 | 891 (0.9×) · 381 (0.8×) · 187 (1.1×) · 151 (1.2×) · 95 (1.2×)  | 64     | 1.4k
Shikun Feng        | China         | 13 | 670 (0.7×) · 392 (0.9×) · 70 (0.4×) · 118 (1.0×) · 96 (1.2×)   | 37     | 1.2k
Lingfeng Niu       | China         | 16 | 711 (0.7×) · 447 (1.0×) · 73 (0.4×) · 75 (0.6×) · 104 (1.3×)   | 82     | 1.2k
Stephen M. Chu     | United States | 20 | 582 (0.6×) · 436 (1.0×) · 372 (2.2×) · 120 (1.0×) · 54 (0.7×)  | 56     | 1.2k
Christian W. Omlin | Norway        | 18 | 957 (0.9×) · 225 (0.5×) · 105 (0.6×) · 84 (0.7×) · 51 (0.6×)   | 75     | 1.5k
Jialei Wang        | United States | 18 | 789 (0.8×) · 251 (0.6×) · 58 (0.3×) · 152 (1.2×) · 96 (1.2×)   | 35     | 1.3k
Jilian Zhang       | China         | 17 | 794 (0.8×) · 333 (0.7×) · 114 (0.7×) · 268 (2.2×) · 56 (0.7×)  | 42     | 1.2k
Wu Xiaoyun         | United States | 12 | 889 (0.9×) · 393 (0.9×) · 131 (0.8×) · 234 (1.9×) · 132 (1.6×) | 18     | 1.4k

Countries citing papers authored by Tie‐Yan Liu


This map shows the geographic impact of Tie‐Yan Liu's research: the number of citations coming from papers published by authors working in each country. You can also color the map by specialization, which compares the citations Tie‐Yan Liu receives from each country with the number expected given that country's size and research output (values larger than one mean the country cites Tie‐Yan Liu more often than expected).
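The specialization value described above is an observed-over-expected ratio. A minimal sketch of that computation, using hypothetical per-country counts (the function name and all numbers are illustrative assumptions, not Rankless's actual code or data):

```python
def citation_ratios(observed, country_output):
    """For each country, divide the observed citations of a scholar by
    the expected count, where the expectation allocates the scholar's
    total citations in proportion to each country's share of overall
    research output. Ratios > 1 mean the country cites the scholar
    more often than its output share alone would predict."""
    total_observed = sum(observed.values())
    total_output = sum(country_output.values())
    ratios = {}
    for country, cites in observed.items():
        expected = total_observed * country_output[country] / total_output
        ratios[country] = cites / expected
    return ratios

# Hypothetical counts, for illustration only.
observed = {"China": 600, "United States": 500, "Germany": 50}
output = {"China": 3000, "United States": 4000, "Germany": 1000}
print(citation_ratios(observed, output))
```

Here China's expected share is 600 ≫ 1150 × 3000/8000 ≈ 431, so its ratio comes out above one, while Germany's falls well below one.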

Fields of papers citing papers by Tie‐Yan Liu

Legend: Physical Sciences · Health Sciences · Life Sciences · Social Sciences

This network shows the impact of papers produced by Tie‐Yan Liu. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes show fields that tend to cite the papers produced by Tie‐Yan Liu. The network helps show where Tie‐Yan Liu may publish in the future.

Co-authorship network of co-authors of Tie‐Yan Liu

This figure shows the co-authorship network connecting the top 25 collaborators of Tie‐Yan Liu. A scholar is included among the top collaborators of Tie‐Yan Liu based on the total number of citations received by their joint publications. Widths of edges represent the number of papers authors have co-authored together. Node borders signify the number of papers an author published with Tie‐Yan Liu. Tie‐Yan Liu is excluded from the visualization to improve readability, since they are connected to all nodes in the network.
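The construction described above — edge widths from joint-paper counts, node borders from papers with the central scholar, who is then dropped — can be sketched from raw author lists. This is a toy illustration under assumed data, not the pipeline Rankless actually uses:

```python
from collections import Counter
from itertools import combinations

def coauthor_network(papers, hero):
    """Build a co-authorship network around `hero`.
    `papers` is an iterable of author-name sets. Edge weights count
    papers two co-authors wrote together (edge width); `with_hero`
    counts each author's papers with the hero (node border). The hero
    is excluded from the graph, since they would connect to all nodes."""
    edges = Counter()       # (author_a, author_b) -> joint paper count
    with_hero = Counter()   # author -> papers co-authored with hero
    for authors in papers:
        others = sorted(a for a in authors if a != hero)
        if hero in authors:
            with_hero.update(others)
        for a, b in combinations(others, 2):
            edges[(a, b)] += 1
    return edges, with_hero

# Toy author lists; real data would come from the publication records.
papers = [
    {"Tie-Yan Liu", "Tao Qin", "Guolin Ke"},
    {"Tie-Yan Liu", "Tao Qin"},
    {"Tao Qin", "Guolin Ke"},
]
edges, borders = coauthor_network(papers, "Tie-Yan Liu")
```

Sorting the non-hero authors before pairing makes each undirected edge a canonical `(a, b)` key, so the same pair never appears under two orderings.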

All Works

20 of 20 papers shown
1. Liu, Chang, Xu Tan, Chongyang Tao, et al. (2022). ProphetChat: Enhancing Dialogue Generation with Simulation of Future Conversation. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 962–973. 4 indexed citations
2. He, Di, Yelong Shen, Tie‐Yan Liu, et al. (2022). Finding the Dominant Winning Ticket in Pre-Trained Language Models. Findings of the Association for Computational Linguistics: ACL 2022, 1459–1472. 4 indexed citations
3. Chen, Jiawei, Xu Tan, Yichong Leng, et al. (2021). Speech-T: Transducer for Text to Speech and Beyond. Neural Information Processing Systems, 34. 4 indexed citations
4. Ke, Guolin, Di He, & Tie‐Yan Liu. (2021). Rethinking Positional Encoding in Language Pre-training. International Conference on Learning Representations. 82 indexed citations
5. Zhang, He, et al. (2021). Co-evolution Transformer for Protein Contact Prediction. Neural Information Processing Systems, 34. 5 indexed citations
6. Song, Kaitao, Xu Tan, Tao Qin, Jianfeng Lu, & Tie‐Yan Liu. (2020). MPNet: Masked and Permuted Pre-training for Language Understanding. arXiv (Cornell University), 33, 16857–16867. 18 indexed citations
7. He, Di, et al. (2019). Efficient Training of BERT by Progressively Stacking. International Conference on Machine Learning, 2337–2346. 39 indexed citations
8. Meng, Qi, Shuxin Zheng, Huishuai Zhang, et al. (2018). G-SGD: Optimizing ReLU Neural Networks in its Positively Scale-Invariant Space. International Conference on Learning Representations. 6 indexed citations
9. Zhang, Huishuai, Wei Chen, & Tie‐Yan Liu. (2018). On the Local Hessian in Back-propagation. Neural Information Processing Systems, 31, 6520–6530. 3 indexed citations
10. Xia, Yingce, Xu Tan, Fei Tian, et al. (2018). Model-Level Dual Learning. International Conference on Machine Learning, 5383–5392. 22 indexed citations
11. Gong, Chengyue, Di He, Xu Tan, et al. (2018). FRAGE: Frequency-Agnostic Word Representation. Neural Information Processing Systems, 31, 1334–1345. 26 indexed citations
12. Li, Zhuohan, Di He, Fei Tian, et al. (2018). Towards Binary-Valued Gates for Robust LSTM Training. International Conference on Machine Learning, 2995–3004. 2 indexed citations
13. Liu, Tie‐Yan, et al. (2018). Boosting Dynamic Programming with Neural Networks for Solving NP-hard Problems. Asian Conference on Machine Learning, 726–739. 9 indexed citations
14. He, Tianyu, Xu Tan, Yingce Xia, et al. (2018). Layer-Wise Coordination between Encoder and Decoder for Neural Machine Translation. Neural Information Processing Systems, 31, 7944–7954. 52 indexed citations
15. Wu, Lijun, Yingce Xia, Fei Tian, et al. (2018). Adversarial Neural Machine Translation. Asian Conference on Machine Learning, 534–549. 35 indexed citations
16. Chen, Wei, et al. (2017). Finite Sample Analysis of the GTD Policy Evaluation Algorithms in Markov Setting. arXiv (Cornell University), 30, 5504–5513. 8 indexed citations
17. Zheng, Shuxin, Qi Meng, Taifeng Wang, et al. (2016). Asynchronous Stochastic Gradient Descent with Delay Compensation for Distributed Deep Learning. arXiv (Cornell University). 11 indexed citations
18. Cui, Qing, et al. (2014). Co-learning of Word Representations and Morpheme Representations. International Conference on Computational Linguistics, 141–150. 44 indexed citations
19. Tian, Fei, Hanjun Dai, Jiang Bian, et al. (2014). A Probabilistic Model for Learning Multi-Prototype Word Embeddings. International Conference on Computational Linguistics, 151–160. 68 indexed citations
20. Wang, Yining, Liwei Wang, Yuanzhi Li, Di He, & Tie‐Yan Liu. (2013). A Theoretical Analysis of NDCG Type Ranking Measures. Conference on Learning Theory, 25–54. 61 indexed citations

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.


Rankless by CCL
2026