L. A. Prashanth

1.4k total citations
36 papers, 608 citations indexed

About

L. A. Prashanth is a scholar working on Management Science and Operations Research, Artificial Intelligence, and Control and Systems Engineering. According to data from OpenAlex, L. A. Prashanth has authored 36 papers receiving a total of 608 indexed citations (citations by other indexed papers that have themselves been cited), including 21 papers in Management Science and Operations Research, 15 in Artificial Intelligence, and 5 in Control and Systems Engineering. Recurrent topics in L. A. Prashanth's work include Advanced Bandit Algorithms Research (11 papers), Reinforcement Learning in Robotics (7 papers), and Simulation Techniques and Applications (6 papers), and L. A. Prashanth is often cited by papers focused on these same topics. L. A. Prashanth collaborates with scholars based in India, the United States, and France. Co-authors include Shalabh Bhatnagar, Michael C. Fu, Jie Cheng, Csaba Szepesvári, Sanjay P. Bhat, Nathaniel Korda, Krishna Jagannathan, Steven I. Marcus, Rémi Munos, and K. Gopinath. L. A. Prashanth has published in prestigious journals such as IEEE Transactions on Automatic Control, IEEE Transactions on Vehicular Technology, and IEEE Transactions on Intelligent Transportation Systems.

In The Last Decade


35 papers receiving 584 citations

Peers (Enhanced Table)

Peers ranked by citation overlap. The career columns break each scholar's citations into five career stages (early → late); parenthesized ratios compare each value with the corresponding stage of L. A. Prashanth, the reference row.

Name | Country | h | Stage 1 | Stage 2 | Stage 3 | Stage 4 | Stage 5 | Papers | Cites
L. A. Prashanth | India | 11 | 309 | 206 | 174 | 152 | 127 | 36 | 608
D.C. Chin | United States | 12 | 282 (0.9×) | 121 (0.6×) | 109 (0.6×) | 125 (0.8×) | 157 (1.2×) | 30 | 532
Péter Kovács | Hungary | 8 | 241 (0.8×) | 111 (0.5×) | 28 (0.2×) | 142 (0.9×) | 19 (0.1×) | 19 | 552
Patrice Marcotte | Canada | 19 | 368 (1.2×) | 169 (0.8×) | 32 (0.2×) | 392 (2.6×) | 63 (0.5×) | 44 | 1.1k
Sourour Elloumi | France | 13 | 126 (0.4×) | 44 (0.2×) | 66 (0.4×) | 47 (0.3×) | 46 (0.4×) | 34 | 653
O. du Merle | Switzerland | 8 | 67 (0.2×) | 33 (0.2×) | 76 (0.4×) | 38 (0.3×) | 59 (0.5×) | 10 | 497
Lou Caccetta | Australia | 11 | 201 (0.7×) | 68 (0.3×) | 27 (0.2×) | 135 (0.9×) | 14 (0.1×) | 35 | 514
Deren Han | China | 19 | 210 (0.7×) | 81 (0.4×) | 21 (0.1×) | 358 (2.4×) | 78 (0.6×) | 64 | 1.1k
Fangwei Zhang | China | 10 | 115 (0.4×) | 19 (0.1×) | 115 (0.7×) | 35 (0.2×) | 151 (1.2×) | 39 | 435
Cihan H. Tuncbilek | United States | 7 | 162 (0.5×) | 46 (0.2×) | 37 (0.2×) | 33 (0.2×) | 32 (0.3×) | 7 | 553
Haofan Yang | China | 12 | 157 (0.5×) | 225 (1.1×) | 120 (0.7×) | 155 (1.0×) | 45 (0.4×) | 25 | 599

Countries citing papers authored by L. A. Prashanth


This map shows the geographic impact of L. A. Prashanth's research: the number of citations coming from papers by authors based in each country. You can also color the map by specialization, which compares the citations L. A. Prashanth receives from a country with the number expected given that country's size and research output (values larger than one mean the country cites L. A. Prashanth more than expected).
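The specialization value described above is an observed-over-expected ratio. The sketch below illustrates the idea under a deliberately simple assumption — that a country's expected citations are proportional to its share of world research output; the exact model Rankless uses is not documented here, and the numbers are hypothetical.

```python
# Hypothetical observed/expected citation ratio. Assumes expected
# citations scale with a country's share of world research output;
# this is an illustrative model, not Rankless's actual one.

def specialization_ratio(observed, country_output_share, total_citations):
    """Citations observed from a country, divided by the citations
    expected if citations simply followed research-output shares."""
    expected = country_output_share * total_citations
    return observed / expected

# Example: a country producing 5% of world output accounts for
# 60 of 608 total citations.
ratio = specialization_ratio(observed=60, country_output_share=0.05,
                             total_citations=608)
print(round(ratio, 2))  # 1.97 -> cites the author more than expected
```

A ratio near 1 means the country cites the scholar about as much as its research volume alone would predict.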

Fields of papers citing papers by L. A. Prashanth


This network shows the impact of L. A. Prashanth's papers. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes mark fields that tend to cite L. A. Prashanth's papers. The network suggests where L. A. Prashanth may publish in the future.

Co-authorship network of co-authors of L. A. Prashanth

This figure shows the co-authorship network connecting the top 25 collaborators of L. A. Prashanth. A scholar is included among the top collaborators of L. A. Prashanth based on the total number of citations received by their joint publications. Widths of edges represent the number of papers authors have co-authored together. Node borders signify the number of papers an author published with L. A. Prashanth. L. A. Prashanth is excluded from the visualization to improve readability, since they are connected to all nodes in the network.
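The edge weights described above — the number of papers a pair of authors has written together — can be sketched as a simple pair count over an author list per paper. The papers below are illustrative placeholders, not the actual OpenAlex records.

```python
# Sketch of computing co-authorship edge weights (edge width = number
# of jointly authored papers). Author lists are illustrative.
from collections import Counter
from itertools import combinations

papers = [
    ["Shalabh Bhatnagar", "Michael C. Fu"],
    ["Shalabh Bhatnagar", "Csaba Szepesvari", "Michael C. Fu"],
    ["Nathaniel Korda", "Remi Munos"],
]

edge_weight = Counter()
for authors in papers:
    # Every unordered author pair on a paper gains one co-authored paper.
    for a, b in combinations(sorted(authors), 2):
        edge_weight[(a, b)] += 1

print(edge_weight[("Michael C. Fu", "Shalabh Bhatnagar")])  # 2
```

Sorting each author list before pairing keeps the pair keys canonical, so the same two authors always map to one edge.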

All Works

20 of 20 papers shown
1.
Prashanth, L. A. & Shalabh Bhatnagar. (2025). Gradient-Based Algorithms for Zeroth-Order Optimization. 8(1-3). 1–332. 1 indexed citation
2.
Gupte, Sumedh, L. A. Prashanth, & Sanjay P. Bhat. (2024). Optimization of Utility-based Shortfall Risk: A Non-asymptotic Viewpoint. 1075–1080. 1 indexed citation
3.
Hegde, V. S., et al. (2024). Online Estimation and Optimization of Utility-Based Shortfall Risk. Mathematics of Operations Research. 50(4). 2470–2501. 1 indexed citation
4.
Prashanth, L. A. & Michael C. Fu. (2022). Risk-Sensitive Reinforcement Learning via Policy Gradient Search. 15(5). 537–693. 11 indexed citations
5.
Prashanth, L. A., et al. (2020). Concentration bounds for CVaR estimation: The cases of light-tailed and heavy-tailed distributions. International Conference on Machine Learning. 1. 5577–5586. 11 indexed citations
6.
Bhat, Sanjay P. & L. A. Prashanth. (2019). Improved Concentration Bounds for Conditional Value-at-Risk and Cumulative Prospect Theory using Wasserstein distance. arXiv (Cornell University). 1 indexed citation
7.
Bhat, Sanjay P. & L. A. Prashanth. (2019). Concentration of risk measures: A Wasserstein distance approach. Neural Information Processing Systems. 32. 11762–11771. 12 indexed citations
8.
Prashanth, L. A., et al. (2019). Risk-aware Multi-armed Bandits Using Conditional Value-at-Risk.
9.
Gopalan, Aditya, L. A. Prashanth, Michael C. Fu, & Steven I. Marcus. (2017). Weighted Bandits or: How Bandits Learn Distorted Values That Are Not Expected. Proceedings of the AAAI Conference on Artificial Intelligence. 31(1). 1941–1947. 1 indexed citation
10.
Prashanth, L. A., et al. (2016). Adaptive System Optimization Using Random Directions Stochastic Approximation. IEEE Transactions on Automatic Control. 62(5). 2223–2238. 18 indexed citations
11.
Prashanth, L. A., et al. (2015). Cumulative Prospect Theory Meets Reinforcement Learning: Prediction and Control. arXiv (Cornell University). 1406–1415. 25 indexed citations
12.
Prashanth, L. A., et al. (2015). Two-Timescale Algorithms for Learning Nash Equilibria in General-Sum Stochastic Games. Adaptive Agents and Multi-Agents Systems. 1371–1379. 8 indexed citations
13.
Korda, Nathaniel & L. A. Prashanth. (2015). On TD(0) with function approximation: Concentration bounds and a centered variant with exponential convergence. International Conference on Machine Learning. 626–634. 9 indexed citations
14.
Prashanth, L. A., et al. (2014). Algorithms for Nash Equilibria in General-Sum Stochastic Games. arXiv (Cornell University). 2 indexed citations
15.
Prashanth, L. A. & Mohammad Ghavamzadeh. (2014). Actor-Critic Algorithms for Risk-Sensitive Reinforcement Learning. arXiv (Cornell University). 1 indexed citation
16.
Korda, Nathaniel, L. A. Prashanth, & Rémi Munos. (2013). Online gradient descent for least squares regression: Non-asymptotic bounds and application to bandits. arXiv (Cornell University). 1 indexed citation
17.
Prashanth, L. A., et al. (2013). Adaptive Smoothed Functional Algorithms for Optimal Staffing Levels in Service Systems. Service Science. 5(1). 29–55. 4 indexed citations
18.
Prashanth, L. A. & Shalabh Bhatnagar. (2012). Threshold Tuning Using Stochastic Optimization for Graded Signal Control. IEEE Transactions on Vehicular Technology. 61(9). 3865–3880. 31 indexed citations
19.
Prashanth, L. A. & Shalabh Bhatnagar. (2011). Reinforcement learning with average cost for adaptive control of traffic lights at intersections. 1640–1645. 55 indexed citations
20.
Prashanth, L. A. & Shalabh Bhatnagar. (2010). Reinforcement Learning With Function Approximation for Traffic Signal Control. IEEE Transactions on Intelligent Transportation Systems. 12(2). 412–421. 237 indexed citations

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.


Rankless by CCL
2026