Junhyuk Oh

6.3k total citations
16 papers, 251 citations indexed

About

Junhyuk Oh is a scholar working on Artificial Intelligence, Computational Theory and Mathematics, and Electrical and Electronic Engineering. According to data from OpenAlex, Junhyuk Oh has authored 16 papers receiving a total of 251 indexed citations (citations from other indexed papers that have themselves been cited), including 13 papers in Artificial Intelligence, 3 in Computational Theory and Mathematics and 3 in Electrical and Electronic Engineering. Recurrent topics in Junhyuk Oh's work include Reinforcement Learning in Robotics (13 papers), Domain Adaptation and Few-Shot Learning (3 papers) and Adaptive Dynamic Programming Control (3 papers). Papers citing Junhyuk Oh most often focus on the same topics: Reinforcement Learning in Robotics (13 papers), Domain Adaptation and Few-Shot Learning (3 papers) and Adaptive Dynamic Programming Control (3 papers). Junhyuk Oh collaborates with scholars based in the United States, the United Kingdom and South Korea. Co-authors include Honglak Lee, Satinder Singh, Richard L. Lewis, Xiaoxiao Guo, Pushmeet Kohli, Yijie Guo, David Silver, Hado P. van Hasselt and Matteo Hessel. Junhyuk Oh has published in journals such as Nature, Chemical Communications and Advanced Materials Technologies.

In The Last Decade

15 papers receiving 244 citations

Peers

Peers are ranked by citation overlap with Junhyuk Oh. The career-stage columns show citations received at each stage of a scholar's career (early→late); for peers, each stage value is also given as a multiple of Junhyuk Oh's (the reference).

Name | Country | h | Cites by career stage, early→late (× vs. reference) | Papers | Cites
Junhyuk Oh (reference) | United States | 8 | 173, 105, 37, 17, 16 | 16 | 251
Steven Kapturowski | United States | 3 | 144 (0.8×), 34 (0.3×), 30 (0.8×), 21 (1.2×), 22 (1.4×) | 5 | 194
Matthieu Geist | France | 10 | 176 (1.0×), 29 (0.3×), 52 (1.4×), 20 (1.2×), 25 (1.6×) | 27 | 261
Yuyang Liu | China | 9 | 122 (0.7×), 84 (0.8×), 13 (0.4×), 8 (0.5×), 17 (1.1×) | 14 | 215
José Antonio Martín H. | Spain | 8 | 72 (0.4×), 76 (0.7×), 48 (1.3×), 16 (0.9×), 27 (1.7×) | 18 | 224
Xuetao Xie | China | 8 | 113 (0.7×), 58 (0.6×), 49 (1.3×), 7 (0.4×), 25 (1.6×) | 13 | 216
Enrique Yeguas-Bolívar | Spain | 8 | 58 (0.3×), 126 (1.2×), 14 (0.4×), 25 (1.5×), 10 (0.6×) | 29 | 218
Jacob Menick | United Kingdom | 4 | 120 (0.7×), 61 (0.6×), 11 (0.3×), 13 (0.8×), 33 (2.1×) | 6 | 173
Thomas Rückstieß | Germany | 4 | 137 (0.8×), 28 (0.3×), 37 (1.0×), 49 (2.9×), 17 (1.1×) | 7 | 203
Oron Anschel | United States | 4 | 113 (0.7×), 100 (1.0×), 29 (0.8×), 18 (1.1×), 13 (0.8×) | 5 | 207

Countries citing papers authored by Junhyuk Oh


This map shows the geographic impact of Junhyuk Oh's research: the number of citations received from papers whose authors are based in each country. The map can also be colored by specialization, which compares the citations Junhyuk Oh receives from a country with the number expected given that country's size and research output (a value greater than one means the country cites Junhyuk Oh more than expected).
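The over-citation ratio described above can be sketched in a few lines. This is an illustrative computation only; the function name and the numbers below are hypothetical, not Rankless data.

```python
def over_citation_ratio(observed: int, expected: float) -> float:
    """Observed citations from a country divided by the citations
    expected from that country's share of global research output.
    Values greater than 1 mean the country cites more than expected."""
    return observed / expected

# Hypothetical example: a country producing 10% of world output is
# "expected" to contribute 10% of 251 indexed citations (~25.1).
# If it actually contributed 40, the ratio is about 1.59.
print(round(over_citation_ratio(40, 25.1), 2))
```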

Fields of papers citing papers by Junhyuk Oh

Field categories: Physical Sciences, Health Sciences, Life Sciences, Social Sciences

This network shows the reach of Junhyuk Oh's papers. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes mark fields that tend to cite Junhyuk Oh's papers; the network thus suggests where Junhyuk Oh may publish in the future.

Co-authorship network of co-authors of Junhyuk Oh

This figure shows the co-authorship network connecting the top 25 collaborators of Junhyuk Oh. A scholar is included among the top collaborators of Junhyuk Oh based on the total number of citations received by their joint publications. Widths of edges represent the number of papers authors have co-authored together. Node borders signify the number of papers an author published with Junhyuk Oh. Junhyuk Oh is excluded from the visualization to improve readability, since they are connected to all nodes in the network.

All Works

16 of 16 papers shown
1.
Oh, Junhyuk, Dan A. Calian, Matteo Hessel, et al. (2025). Discovering state-of-the-art reinforcement learning algorithms. Nature. 648(8093). 312–319. 1 indexed citation
2.
Lee, Jung‐Hyun, Junhyuk Oh, Hyung‐gun Chi, et al. (2024). Deep Learning‐Assisted Design of Bilayer Nanowire Gratings for High‐Performance MWIR Polarizers. Advanced Materials Technologies. 9(19).
3.
Oh, Junhyuk, et al. (2024). Next-generation air filtration nanotechnology for improved indoor air quality. Chemical Communications. 61(7). 1322–1341. 2 indexed citations
4.
Hasselt, Hado van, et al. (2022). Introducing Symmetries to Black Box Meta Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence. 36(7). 7202–7210. 9 indexed citations
5.
Zahavy, Tom, Zhongwen Xu, Vivek Veeriah, et al. (2020). A Self-Tuning Actor-Critic Algorithm. Neural Information Processing Systems. 33. 20913–20924. 2 indexed citations
6.
Xu, Zhongwen, Hado P. van Hasselt, Matteo Hessel, et al. (2020). Meta-Gradient Reinforcement Learning with an Objective Discovered Online. Neural Information Processing Systems. 33. 15254–15264. 2 indexed citations
7.
Zahavy, Tom, Zhongwen Xu, Vivek Veeriah, et al. (2020). Self-Tuning Deep Reinforcement Learning. arXiv (Cornell University). 3 indexed citations
8.
Oh, Junhyuk, Matteo Hessel, Wojciech Marian Czarnecki, et al. (2020). Discovering Reinforcement Learning Algorithms. Neural Information Processing Systems. 33. 1060–1070. 1 indexed citation
9.
Veeriah, Vivek, Matteo Hessel, Zhongwen Xu, et al. (2019). Discovery of Useful Questions as Auxiliary Tasks. arXiv (Cornell University). 32. 9310–9321. 7 indexed citations
10.
Choi, Jongwook, Yijie Guo, Marcin Moczulski, et al. (2018). Contingency-Aware Exploration in Reinforcement Learning. International Conference on Learning Representations. 10 indexed citations
11.
Oh, Junhyuk, Yijie Guo, Satinder Singh, & Honglak Lee. (2018). Self-Imitation Learning. International Conference on Machine Learning. 3878–3887. 24 indexed citations
12.
Oh, Junhyuk, et al. (2018). Multitask Reinforcement Learning for Zero-shot Generalization with Subtask Dependencies. 1 indexed citation
13.
Oh, Junhyuk, et al. (2018). Hierarchical Reinforcement Learning for Zero-shot Generalization with Subtask Dependencies. arXiv (Cornell University). 31. 7156–7166. 10 indexed citations
14.
Oh, Junhyuk, Satinder Singh, Honglak Lee, & Pushmeet Kohli. (2017). Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning. arXiv (Cornell University). 2661–2670. 42 indexed citations
15.
Oh, Junhyuk, et al. (2016). Control of Memory, Active Perception, and Action in Minecraft. arXiv (Cornell University). 2790–2799. 57 indexed citations
16.
Oh, Junhyuk, Xiaoxiao Guo, Honglak Lee, Richard L. Lewis, & Satinder Singh. (2015). Action-conditional video prediction using deep networks in Atari games. Neural Information Processing Systems. 28. 2863–2871. 80 indexed citations

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.


Rankless by CCL
2026