Karthik Narasimhan

4.4k total citations · 1 hit paper
45 papers, 1.1k citations indexed

About

Karthik Narasimhan is a scholar working on Artificial Intelligence; Computer Vision and Pattern Recognition; and Computer Networks and Communications. According to data from OpenAlex, Karthik Narasimhan has authored 45 papers receiving a total of 1.1k indexed citations (citations from other indexed papers that have themselves been cited), including 31 papers in Artificial Intelligence, 6 in Computer Vision and Pattern Recognition, and 3 in Computer Networks and Communications. Recurrent topics in Karthik Narasimhan's work include Topic Modeling (20 papers), Natural Language Processing Techniques (12 papers), and Reinforcement Learning in Robotics (8 papers), and he is often cited by papers focused on these same topics. Karthik Narasimhan collaborates with scholars based in the United States, India, and the United Kingdom. His co-authors include Regina Barzilay, Ardavan Saeedi, Tejas D. Kulkarni, Joshua B. Tenenbaum, Tejas Kulkarni, Adam Yala, Tommi Jaakkola, Tanmay Rajpurohit, Ashwin Kalyan, and Shunyu Yao, and he has published in prestigious journals such as Journal of Neurophysiology, ACM Computing Surveys, and Applied Sciences.

In The Last Decade

Karthik Narasimhan

41 papers receiving 1.1k citations

Hit Papers

Hierarchical deep reinforcement learning: integrating temporal abstraction and intrinsic motivation (2016) [citations-per-year chart, 2016–2026]

Peers (enhanced table)

Peers are ranked by citation overlap with Karthik Narasimhan. The career column lists citations per career stage (early→late); each × value compares a peer's stage to Karthik Narasimhan's, the reference row.

Name | Country | h | Career citations, early→late (× vs. reference) | Papers | Cites
Karthik Narasimhan | United States | 18 | 844 · 259 · 91 · 79 · 75 | 45 | 1.1k
Adrià Puigdomènech Badia | United States | 7 | 589 (0.7×) · 191 (0.7×) · 58 (0.6×) · 39 (0.5×) · 170 (2.3×) | 9 | 875
Kee-Eung Kim | South Korea | 16 | 565 (0.7×) · 144 (0.6×) · 107 (1.2×) · 33 (0.4×) · 70 (0.9×) | 63 | 941
Martijn van Otterlo | Netherlands | 11 | 499 (0.6×) · 137 (0.5×) · 164 (1.8×) · 64 (0.8×) · 93 (1.2×) | 34 | 899
Karl Moritz Hermann | United Kingdom | 11 | 1.1k (1.4×) · 272 (1.1×) · 37 (0.4×) · 169 (2.1×) · 160 (2.1×) | 16 | 1.5k
Kathryn Merrick | Australia | 16 | 408 (0.5×) · 92 (0.4×) · 98 (1.1×) · 68 (0.9×) · 45 (0.6×) | 60 | 810
Gita Sukthankar | United States | 18 | 602 (0.7×) · 264 (1.0×) · 76 (0.8×) · 114 (1.4×) · 25 (0.3×) | 113 | 1.1k
Tim Harley | United Kingdom | 6 | 510 (0.6×) · 190 (0.7×) · 39 (0.4×) · 37 (0.5×) · 159 (2.1×) | 7 | 797
Yori Zwólš | United States | 9 | 446 (0.5×) · 160 (0.6×) · 38 (0.4×) · 41 (0.5×) · 206 (2.7×) | 15 | 816
Felipe Meneguzzi | Brazil | 16 | 662 (0.8×) · 115 (0.4×) · 34 (0.4×) · 94 (1.2×) · 52 (0.7×) | 114 | 1.4k
Changqin Quan | China | 18 | 733 (0.9×) · 140 (0.5×) · 121 (1.3×) · 90 (1.1×) · 235 (3.1×) | 132 | 1.5k

Countries citing papers authored by Karthik Narasimhan


This map shows the geographic impact of Karthik Narasimhan's research: for each country, it reports the number of citations coming from papers whose authors are based there. The map can also be colored by specialization, which compares the citations Karthik Narasimhan receives from a country with the number expected given that country's size and research output; values larger than one mean the country cites Karthik Narasimhan more than expected.
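The specialization value described above is an observed-over-expected ratio. A minimal sketch, assuming citations would otherwise be spread in proportion to a country's share of global research output (the function name and all numbers are illustrative, not real Rankless data or code):

```python
def over_citation_ratio(observed_citations, country_output_share, total_citations):
    """Observed citations from a country divided by the count expected
    if citations followed the country's share of global output."""
    expected = country_output_share * total_citations
    return observed_citations / expected

# A country producing 10% of indexed output accounts for 150 of 1,000
# citations: the ratio is above one, i.e. it cites more than expected.
ratio = over_citation_ratio(150, 0.10, 1000)
```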

Fields of papers citing papers by Karthik Narasimhan

Field legend: Physical Sciences · Health Sciences · Life Sciences · Social Sciences

This network shows the field-level impact of papers produced by Karthik Narasimhan. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes mark fields that tend to cite Karthik Narasimhan's papers; the network thus hints at where Karthik Narasimhan may publish in the future.

Co-authorship network of co-authors of Karthik Narasimhan

This figure shows the co-authorship network connecting the top 25 collaborators of Karthik Narasimhan. A scholar is included among the top collaborators of Karthik Narasimhan based on the total number of citations received by their joint publications. Widths of edges represent the number of papers authors have co-authored together. Node borders signify the number of papers an author published with Karthik Narasimhan. Karthik Narasimhan is excluded from the visualization to improve readability, since they are connected to all nodes in the network.
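The quantities the figure encodes — edge widths as joint papers between any two co-authors, node borders as papers co-authored with the profiled scholar — can be sketched from a list of author lists. Everything below is a hypothetical illustration (made-up papers, not the real Rankless pipeline):

```python
from collections import Counter
from itertools import combinations

# Hypothetical author lists; surnames only, not real publication data.
papers = [
    ["Narasimhan", "Barzilay", "Jaakkola"],
    ["Narasimhan", "Kulkarni", "Barzilay"],
    ["Kulkarni", "Saeedi", "Tenenbaum", "Narasimhan"],
]
hero = "Narasimhan"

edge_weight = Counter()       # edge width: papers each author pair shares
papers_with_hero = Counter()  # node border: papers co-authored with hero

for authors in papers:
    for a, b in combinations(sorted(authors), 2):
        edge_weight[(a, b)] += 1
    for a in authors:
        if a != hero:
            papers_with_hero[a] += 1

# As in the figure, drop the hero from the drawn network for readability.
nodes = set(papers_with_hero)
edges = {pair: w for pair, w in edge_weight.items() if hero not in pair}
```

Ranking collaborators by citations on joint publications (as the figure's caption describes) would additionally need per-paper citation counts, omitted here for brevity.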

All Works

20 of 20 papers shown
1.
Kalyan, Ashwin, et al. (2025). PersonaGym: Evaluating Persona Agents and LLMs. 6999–7022. 1 indexed citation
2.
Xia, Mengzhou, et al. (2024). InstructEval: Systematic Evaluation of Instruction Selection Methods. 2 indexed citations
3.
Rajpurohit, Tanmay, et al. (2023). Toxicity in ChatGPT: Analyzing Persona-assigned Language Models. 1236–1270. 80 indexed citations
4.
Shafran, Izhak, et al. (2023). MUX-PLMs: Pre-training Language Models with Data Multiplexing. 196–211. 1 indexed citation
5.
Narasimhan, Karthik, et al. (2023). Tree of Thoughts: Deliberate Problem Solving with Large Language Models. 11809–11822. 3 indexed citations
6.
Gopinath, Ashwin, et al. (2023). Reflexion: language agents with verbal reinforcement learning. 8634–8652. 3 indexed citations
7.
Zhong, Victor W., et al. (2021). SILG: The Multi-domain Symbolic Interactive Language Grounding Benchmark. Neural Information Processing Systems. 34. 4 indexed citations
8.
Rosca, Justinian, et al. (2021). Accelerating Safe Reinforcement Learning with Constraint-mismatched Baseline Policies. International Conference on Machine Learning. 11795–11807. 8 indexed citations
9.
Narasimhan, Karthik, et al. (2020). Multimodal Graph Networks for Compositional Generalization in Visual Question Answering. Neural Information Processing Systems. 33. 3070–3081. 24 indexed citations
10.
Yao, Shunyu, et al. (2020). Keep CALM and Explore: Language Models for Action Generation in Text-based Games. 8736–8754. 32 indexed citations
11.
Lan, Andrew, et al. (2020). Robust and Interpretable Grounding of Spatial References with Relation Networks. 1908–1923. 3 indexed citations
12.
Du, Yilun & Karthik Narasimhan. (2019). Task-agnostic dynamics priors for deep reinforcement learning. International Conference on Machine Learning. 1696–1705.
13.
Yang, Runzhe, Xingyuan Sun, & Karthik Narasimhan. (2019). A Generalized Algorithm for Multi-Objective Reinforcement Learning and Policy Adaptation. Neural Information Processing Systems. 32. 14610–14621. 16 indexed citations
14.
Narasimhan, Karthik, Regina Barzilay, & Tommi Jaakkola. (2017). Deep Transfer in Reinforcement Learning by Language Grounding. arXiv (Cornell University). 2 indexed citations
15.
Heerden, Charl van, Damianos Karakos, Karthik Narasimhan, Marelie H. Davel, & Richard Schwartz. (2017). Constructing sub-word units for spoken term detection. 13 indexed citations
16.
Kulkarni, Tejas D., Karthik Narasimhan, Ardavan Saeedi, & Joshua B. Tenenbaum. (2016). Hierarchical deep reinforcement learning: integrating temporal abstraction and intrinsic motivation. DSpace@MIT (Massachusetts Institute of Technology). 29. 3682–3690. 318 indexed citations
17.
Batmanghelich, Kayhan, et al. (2016). Nonparametric Spherical Topic Modeling with Word Embeddings. PubMed. 2016. 537–542. 41 indexed citations
18.
Huggins, Jonathan H., Karthik Narasimhan, Ardavan Saeedi, & Vikash K. Mansinghka. (2015). JUMP-Means: Small-Variance Asymptotics for Markov Jump Processes. International Conference on Machine Learning. 693–701. 1 indexed citation
19.
Narasimhan, Karthik, Tejas Kulkarni, & Regina Barzilay. (2015). Language Understanding for Text-based Games using Deep Reinforcement Learning. 1–11. 134 indexed citations
20.
Narasimhan, Karthik, Regina Barzilay, & Tommi Jaakkola. (2015). An Unsupervised Method for Uncovering Morphological Chains. arXiv (Cornell University). 21 indexed citations

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.


Rankless by CCL
2026