Çaǧlar Gülçehre

50.2k total citations · 1 hit paper
26 papers, 1.7k citations indexed

About

Çaǧlar Gülçehre is a scholar working on Artificial Intelligence, Computer Vision and Pattern Recognition, and Electrical and Electronic Engineering. According to data from OpenAlex, he has authored 26 papers receiving a total of 1.7k indexed citations (citations by other indexed papers that have themselves been cited), including 22 papers in Artificial Intelligence, 11 in Computer Vision and Pattern Recognition, and 3 in Electrical and Electronic Engineering. Recurrent topics in his work include Topic Modeling (7 papers), Reinforcement Learning in Robotics (7 papers), and Natural Language Processing Techniques (7 papers), and he is often cited by papers focused on those same topics. He collaborates with scholars based in Canada, the United States, and the United Kingdom. His co-authors include Yoshua Bengio, Razvan Pascanu, Kyunghyun Cho, Jun‐Young Chung, Bowen Zhou, Yann Dauphin, Ramesh Nallapati, Sungjin Ahn, and Surya Ganguli, and he has published in journals such as Neural Computation, Computer Speech & Language, and the Journal on Multimodal User Interfaces.

In The Last Decade


23 papers receiving 1.6k citations

Hit Papers

How to Construct Deep Recurrent Neural Networks (2014) · citations-per-year chart (axis labels omitted)

Peers

Peers ranked by citation overlap. The career columns show citations per career stage (early → late); for each peer, the ratio in parentheses compares that stage against Çaǧlar Gülçehre's own count.

Name | Country | h | Career citations by stage (early → late) | Papers | Cites
Çaǧlar Gülçehre | Canada | 11 | 1.0k · 526 · 188 · 169 · 142 | 26 | 1.7k
Pascal Lamblin | Canada | 7 | 831 (0.8×) · 742 (1.4×) · 282 (1.5×) · 186 (1.1×) · 154 (1.1×) | 9 | 1.9k
Christian W. Omlin | Norway | 18 | 957 (0.9×) · 225 (0.4×) · 105 (0.6×) · 102 (0.6×) · 141 (1.0×) | 75 | 1.5k
Pengjiang Qian | China | 25 | 942 (0.9×) · 722 (1.4×) · 182 (1.0×) · 173 (1.0×) · 92 (0.6×) | 101 | 2.2k
Jinye Peng | China | 28 | 1000 (1.0×) · 1.1k (2.1×) · 181 (1.0×) · 72 (0.4×) · 207 (1.5×) | 237 | 2.9k
Hong-Han Shuai | Taiwan | 21 | 494 (0.5×) · 745 (1.4×) · 125 (0.7×) · 194 (1.1×) · 57 (0.4×) | 119 | 1.5k
Stanley C. Ahalt | United States | 19 | 715 (0.7×) · 543 (1.0×) · 258 (1.4×) · 51 (0.3×) · 112 (0.8×) | 110 | 1.6k
Dongdong Li | China | 22 | 779 (0.8×) · 394 (0.7×) · 195 (1.0×) · 400 (2.4×) · 132 (0.9×) | 102 | 1.6k
Maurizio Filippone | United Kingdom | 20 | 934 (0.9×) · 416 (0.8×) · 224 (1.2×) · 49 (0.3×) · 79 (0.6×) | 62 | 1.8k
Yizhang Jiang | China | 25 | 989 (1.0×) · 740 (1.4×) · 220 (1.2×) · 228 (1.3×) · 103 (0.7×) | 103 | 2.3k
Yann Dauphin | United States | 13 | 2.4k (2.4×) · 1.2k (2.4×) · 343 (1.8×) · 177 (1.0×) · 92 (0.6×) | 24 | 3.3k

Countries citing papers authored by Çaǧlar Gülçehre


This map shows the geographic impact of Çaǧlar Gülçehre's research: the number of citations coming from papers published by authors working in each country. The map can also be colored by specialization, which compares the number of citations Çaǧlar Gülçehre receives from each country with the number expected given that country's size and research output (values larger than one mean the country cites Çaǧlar Gülçehre more than expected).
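The observed-versus-expected ratio described above can be sketched in a few lines. This is an illustrative reconstruction, not the actual Rankless computation; the country codes, counts, and output shares below are hypothetical.

```python
from typing import Dict

def citation_expectation_ratios(
    observed: Dict[str, int], output_share: Dict[str, float]
) -> Dict[str, float]:
    """For each country, divide the citations it actually gave the author
    by the count expected if citations were distributed in proportion to
    that country's share of global research output.
    Values > 1.0 mean the country cites the author more than expected."""
    total = sum(observed.values())
    return {
        country: count / (total * output_share[country])
        for country, count in observed.items()
        if output_share.get(country, 0.0) > 0.0
    }

# Hypothetical numbers for illustration only (not Rankless data).
obs = {"US": 800, "CN": 400, "CA": 200}
share = {"US": 0.30, "CN": 0.25, "CA": 0.04}
ratios = citation_expectation_ratios(obs, share)
```

Here Canada's small output share makes its 200 citations far more than "expected", so its ratio lands well above one.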

Fields of papers citing papers by Çaǧlar Gülçehre

Field groups: Physical Sciences · Health Sciences · Life Sciences · Social Sciences

This network shows the impact of papers produced by Çaǧlar Gülçehre. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes show fields that tend to cite the papers produced by Çaǧlar Gülçehre. The network helps show where Çaǧlar Gülçehre may publish in the future.

Co-authorship network of co-authors of Çaǧlar Gülçehre

This figure shows the co-authorship network connecting the top 25 collaborators of Çaǧlar Gülçehre. A scholar is included among the top collaborators of Çaǧlar Gülçehre based on the total number of citations received by their joint publications. Widths of edges represent the number of papers authors have co-authored together. Node borders signify the number of papers an author published with Çaǧlar Gülçehre. Çaǧlar Gülçehre is excluded from the visualization to improve readability, since they are connected to all nodes in the network.
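A network like the one described above can be assembled from per-paper author lists: edge weights count joint papers between two co-authors, and a separate tally records how often each co-author published with the central scholar. This is a minimal sketch with toy names, not the actual Rankless pipeline.

```python
from collections import Counter
from itertools import combinations

def coauthor_network(papers, hero):
    """Build a co-authorship network around `hero`.
    `papers` is a list of author lists. Edge weights count papers two
    co-authors share; `hero` is dropped from the graph itself (they would
    connect to every node), but a border tally records how many papers
    each co-author shares with them."""
    edges = Counter()      # (a, b) -> number of co-authored papers
    with_hero = Counter()  # co-author -> papers shared with hero
    for authors in papers:
        if hero in authors:
            for a in authors:
                if a != hero:
                    with_hero[a] += 1
        others = sorted(a for a in authors if a != hero)
        for a, b in combinations(others, 2):
            edges[(a, b)] += 1
    return edges, with_hero

# Toy author lists for illustration only.
papers = [
    ["Gulcehre", "Bengio", "Cho"],
    ["Gulcehre", "Bengio", "Pascanu", "Cho"],
    ["Bengio", "Cho"],
]
edges, border = coauthor_network(papers, "Gulcehre")
```

Sorting each author list before pairing keeps edge keys canonical, so ("Bengio", "Cho") and ("Cho", "Bengio") accumulate into one edge.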

All Works

20 of 20 papers shown
1.
Gülçehre, Çaǧlar, et al. (2025). Regret-Optimized Portfolio Enhancement through Deep Reinforcement Learning and Future Looking Rewards. Infoscience (Ecole Polytechnique Fédérale de Lausanne). 890–897.
2.
Shridhar, Kumar, et al. (2025). SIKeD: Self-guided Iterative Knowledge Distillation for Mathematical Reasoning. Research at the University of Copenhagen (University of Copenhagen). 9868–9880.
4.
Gu, Albert, et al. (2020). Improving the Gating Mechanism of Recurrent Neural Networks. International Conference on Machine Learning. 1. 3800–3809. 5 indexed citations
5.
Parisotto, Emilio, Francis Song, Jack W. Rae, et al. (2020). Stabilizing Transformers for Reinforcement Learning. International Conference on Machine Learning. 1. 7487–7498. 9 indexed citations
6.
Wang, Ziyu, Alexander Novikov, Konrad Żołna, et al. (2020). Critic Regularized Regression. arXiv (Cornell University). 33. 7768–7778. 1 indexed citation
7.
Gülçehre, Çaǧlar, Ziyu Wang, Alexander Novikov, et al. (2020). RL Unplugged: A Collection of Benchmarks for Offline Reinforcement Learning. Neural Information Processing Systems. 2 indexed citations
8.
Gülçehre, Çaǧlar, Ziyu Wang, Alexander Novikov, et al. (2020). RL Unplugged: A Suite of Benchmarks for Offline Reinforcement Learning. arXiv (Cornell University). 33. 7248–7259. 2 indexed citations
9.
Jing, Li, Çaǧlar Gülçehre, John Peurifoy, et al. (2019). Gated Orthogonal Recurrent Units: On Learning to Forget. Neural Computation. 31(4). 765–783. 51 indexed citations
10.
Chen, Yutian, Yannis Assael, Brendan Shillingford, et al. (2018). Sample-efficient adaptive text-to-speech. arXiv (Cornell University). 16 indexed citations
11.
Gülçehre, Çaǧlar, Francis Dutil, Adam Trischler, & Yoshua Bengio. (2017). Plan, Attend, Generate: Planning for Sequence-to-Sequence Models. Neural Information Processing Systems. 30. 5474–5483. 6 indexed citations
12.
Gülçehre, Çaǧlar, et al. (2017). Memory Augmented Neural Networks for Natural Language Processing. Empirical Methods in Natural Language Processing. 1 indexed citation
13.
Gülçehre, Çaǧlar, Francis Dutil, Adam Trischler, & Yoshua Bengio. (2017). Plan, Attend, Generate: Character-Level Neural Machine Translation with Planning. 228–234. 2 indexed citations
14.
Gülçehre, Çaǧlar, Jose Sotelo, Marcin Moczulski, & Yoshua Bengio. (2017). A robust adaptive stochastic gradient method for deep learning. 125–132. 11 indexed citations
15.
Kahou, Samira Ebrahimi, Xavier Bouthillier, Pascal Lamblin, et al. (2015). EmoNets: Multimodal deep learning approaches for emotion recognition in video. Journal on Multimodal User Interfaces. 10(2). 99–111. 263 indexed citations
16.
Chung, Jun‐Young, Çaǧlar Gülçehre, Kyunghyun Cho, & Yoshua Bengio. (2015). Gated Feedback Recurrent Neural Networks. arXiv (Cornell University). 2067–2075. 261 indexed citations
17.
Rusu, Andrei A., Sergio Gómez Colmenarejo, Çaǧlar Gülçehre, et al. (2015). Policy Distillation. arXiv (Cornell University). 24 indexed citations
18.
Dauphin, Yann, Razvan Pascanu, Çaǧlar Gülçehre, et al. (2014). Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. arXiv (Cornell University). 27. 2933–2941. 231 indexed citations
19.
Pascanu, Razvan, Çaǧlar Gülçehre, Kyunghyun Cho, & Yoshua Bengio. (2014). How to Construct Deep Recurrent Neural Networks. International Conference on Learning Representations. 400 indexed citations
20.
Gülçehre, Çaǧlar, Kyunghyun Cho, Razvan Pascanu, & Yoshua Bengio. (2013). Learned-norm pooling for deep neural networks. arXiv (Cornell University). 7 indexed citations

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.


Rankless by CCL
2026