This map shows the geographic impact of Chi Jin's research: the number of citations coming from papers published by authors working in each country. You can also color the map by specialization, or compare the number of citations received by Chi Jin with the number expected given each country's size and research output (values larger than one mean the country cites Chi Jin more than expected).
This network shows the impact of papers produced by Chi Jin. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes show fields that tend to cite the papers produced by Chi Jin. The network helps show where Chi Jin may publish in the future.
Co-authorship network of Chi Jin's collaborators
This figure shows the co-authorship network connecting the top 25 collaborators of Chi Jin. A scholar is included among the top collaborators based on the total number of citations received by their joint publications with Chi Jin. Edge widths represent the number of papers two authors have co-authored together, and node borders signify the number of papers an author has published with Chi Jin. Chi Jin is excluded from the visualization to improve readability, since they would be connected to every node in the network.
Misra, Dipendra, Qinghua Liu, Chi Jin, & John Langford. (2021). Provable Rich Observation Reinforcement Learning with Combinatorial Latent States. International Conference on Learning Representations. (2 indexed citations)
6. Liu, Qinghua, et al. (2021). A Sharp Analysis of Model-based Reinforcement Learning with Self-Play. International Conference on Machine Learning. 7001–7010. (1 indexed citation)
Jin, Chi, et al. (2020). Provable Self-Play Algorithms for Competitive Reinforcement Learning. International Conference on Machine Learning. 1. 551–560. (4 indexed citations)
9. Jin, Chi, Praneeth Netrapalli, & Michael I. Jordan. (2020). What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization? International Conference on Machine Learning. 1. 4880–4889. (25 indexed citations)
10. Yang, Zhuoran, Chi Jin, Zhaoran Wang, Mengdi Wang, & Michael I. Jordan. (2020). Bridging Exploration and General Function Approximation in Reinforcement Learning: Provably Efficient Kernel and Neural Value Iterations. arXiv (Cornell University). (2 indexed citations)
11. Yang, Zhuoran, Chi Jin, Zhaoran Wang, Mengdi Wang, & Michael I. Jordan. (2020). Provably Efficient Reinforcement Learning with Kernel and Neural Function Approximations. Neural Information Processing Systems. 33. 13903–13916. (3 indexed citations)
12. Jin, Chi, Praneeth Netrapalli, & Michael I. Jordan. (2019). Minmax Optimization: Stable Limit Points of Gradient Descent Ascent are Locally Optimal. arXiv (Cornell University). (12 indexed citations)
13. Jin, Chi, Praneeth Netrapalli, Rong Ge, Sham M. Kakade, & Michael I. Jordan. (2019). Stochastic Gradient Descent Escapes Saddle Points Efficiently. arXiv (Cornell University). (19 indexed citations)
Jin, Chi, Zeyuan Allen-Zhu, Sébastien Bubeck, & Michael I. Jordan. (2018). Is Q-learning Provably Efficient? arXiv (Cornell University). 31. 4863–4873. (114 indexed citations)
16. Tripuraneni, Nilesh, Mitchell Stern, Chi Jin, Jeffrey Regier, & Michael I. Jordan. (2018). Stochastic Cubic Regularization for Fast Nonconvex Optimization. Neural Information Processing Systems. 31. 2899–2908. (20 indexed citations)
17. Jin, Chi, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, & Michael I. Jordan. (2017). How to escape saddle points efficiently. International Conference on Machine Learning. 1724–1732. (57 indexed citations)
18. Jin, Chi, Sham M. Kakade, & Praneeth Netrapalli. (2016). Provable Efficient Online Matrix Completion via Non-convex Stochastic Gradient Descent. Neural Information Processing Systems. 29. 4520–4528. (15 indexed citations)
19. Wang, Ziteng, Chi Jin, Kai Fan, et al. (2016). Differentially private data releasing for smooth queries. Journal of Machine Learning Research. 17(1). 1779–1820. (2 indexed citations)
20. Jain, Prateek, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, & Aaron Sidford. (2016). Matching Matrix Bernstein with Little Memory: Near-Optimal Finite Sample Guarantees for Oja's Algorithm. arXiv (Cornell University). (2 indexed citations)
Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive
bibliographic database. While OpenAlex provides broad and valuable coverage of the global
research landscape, it—like all bibliographic datasets—has inherent limitations. These include
incomplete records, variations in author disambiguation, differences in journal indexing, and
delays in data updates. As a result, some metrics and network relationships displayed in
Rankless may not fully capture the entirety of a scholar's output or impact.