This map shows the geographic impact of Chen-Yu Wei's research. It shows, for each country, the number of citations from papers published by authors based there. You can also color the map by specialization and compare the number of citations Chen-Yu Wei received with the number expected given a country's size and research output (values larger than one mean the country cites Chen-Yu Wei more than expected).
This network shows the impact of papers produced by Chen-Yu Wei. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes show fields that tend to cite the papers produced by Chen-Yu Wei. The network helps show where Chen-Yu Wei may publish in the future.
Co-authorship network of Chen-Yu Wei's collaborators
This figure shows the co-authorship network connecting the top 25 collaborators of Chen-Yu Wei. A scholar is included among the top collaborators of Chen-Yu Wei based on the total number of citations received by their joint publications. Edge widths represent the number of papers two authors have co-authored together, and node borders signify the number of papers an author published with Chen-Yu Wei. Chen-Yu Wei is excluded from the visualization to improve readability, since they are connected to all nodes in the network.
3. Wei, Chen-Yu, et al. (2021). Linear Last-iterate Convergence in Constrained Saddle-point Optimization. International Conference on Learning Representations. 3 indexed citations.
4. Lee, Chung‐Wei, et al. (2021). Achieving Near Instance-Optimality and Minimax-Optimality in Stochastic and Adversarial Linear Bandits Simultaneously. arXiv (Cornell University). 6142–6151. 1 indexed citation.
5. Wei, Chen-Yu, et al. (2020). Model-free Reinforcement Learning in Infinite-horizon Average-reward Markov Decision Processes. International Conference on Machine Learning. 1. 10170–10180. 8 indexed citations.
6. Chen, Yifang, Chung‐Wei Lee, Haipeng Luo, & Chen-Yu Wei. (2019). A New Algorithm for Non-stationary Contextual Bandits: Efficient, Optimal and Parameter-free. Conference on Learning Theory. 696–726. 2 indexed citations.
7. Luo, Haipeng, et al. (2019). Beating Stochastic and Adversarial Semi-bandits Optimally and Simultaneously. Research at the University of Copenhagen (University of Copenhagen). 7683–7692. 4 indexed citations.
8. Auer, Peter, Yifang Chen, Pratik Gajane, et al. (2019). Achieving Optimal Dynamic Regret for Non-stationary Bandits without Prior Information. 159–163. 1 indexed citation.
9. Luo, Haipeng, Chen-Yu Wei, Alekh Agarwal, & John Langford. (2018). Efficient Contextual Bandits in Non-stationary Worlds. Conference on Learning Theory. 1739–1776. 6 indexed citations.
10. Wei, Chen-Yu, et al. (2016). Tracking the Best Expert in Non-stationary Stochastic Environments. Neural Information Processing Systems. 29. 3972–3980. 1 indexed citation.
Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive
bibliographic database. While OpenAlex provides broad and valuable coverage of the global
research landscape, it—like all bibliographic datasets—has inherent limitations. These include
incomplete records, variations in author disambiguation, differences in journal indexing, and
delays in data updates. As a result, some metrics and network relationships displayed in
Rankless may not fully capture the entirety of a scholar's output or impact.