Hit papers significantly outperform the citation benchmark for their cohort. A paper qualifies
if it has ≥500 total citations, achieves ≥1.5× the top-1% citation threshold for papers in the
same subfield and year (this is the minimum needed to enter the top 1%, not the average
within it), or reaches the top citation threshold in at least one of its specific research
topics.
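The three criteria above can be sketched as a small predicate. This is a hypothetical illustration, not Rankless's actual implementation; the function name, parameters, and threshold inputs are all assumptions made for clarity:

```python
def is_hit_paper(total_citations, top1_threshold, topic_citations, topic_thresholds):
    """Return True if a paper meets any of the three hit-paper criteria.

    total_citations:  the paper's total citation count
    top1_threshold:   minimum citations needed to enter the top 1% for the
                      paper's subfield and year (not the average within it)
    topic_citations:  dict mapping each of the paper's topics to the paper's
                      citations counted in that topic
    topic_thresholds: dict mapping each topic to its top citation threshold
    """
    # Criterion 1: at least 500 total citations.
    if total_citations >= 500:
        return True
    # Criterion 2: at least 1.5x the top-1% entry threshold for the cohort.
    if total_citations >= 1.5 * top1_threshold:
        return True
    # Criterion 3: top threshold reached in at least one research topic.
    return any(topic_citations.get(t, 0) >= th for t, th in topic_thresholds.items())
```

Because the criteria are joined by "or", a paper with modest total citations can still qualify through a single strong topic.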
Overcoming catastrophic forgetting in neural networks (2017). James Kirkpatrick, Razvan Pascanu et al. Proceedings of the National Academy of Sciences. 3.6k citations.
Theano: A CPU and GPU Math Compiler in Python (2010). Razvan Pascanu, Guillaume Desjardins et al. 686 citations.
Countries citing papers authored by Razvan Pascanu
This map shows the geographic impact of Razvan Pascanu's research: the number of citations his papers receive from authors working in each country. You can also color the map by specialization, or compare the citations Razvan Pascanu receives from each country with the number expected given that country's size and research output (a ratio larger than one means the country cites Razvan Pascanu more than expected).
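The over/under-citation comparison described above reduces to a simple ratio of observed to expected citations. This is an illustrative sketch; the exact expected-citation model Rankless uses is not specified here, and the function name is an assumption:

```python
def citation_ratio(observed_citations, expected_citations):
    """Ratio of observed to expected citations from a country.

    expected_citations would come from a model of the country's size and
    research output. A ratio > 1 means the country cites the scholar more
    than expected; < 1 means less than expected.
    """
    if expected_citations <= 0:
        raise ValueError("expected_citations must be positive")
    return observed_citations / expected_citations
```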
This network shows the impact of papers produced by Razvan Pascanu. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes show fields that tend to cite the papers produced by Razvan Pascanu. The network helps show where Razvan Pascanu may publish in the future.
Co-authorship network of co-authors of Razvan Pascanu
This figure shows the co-authorship network connecting the top 25 collaborators of Razvan Pascanu. A scholar is included among the top collaborators based on the total number of citations received by their joint publications with Razvan Pascanu. Edge widths represent the number of papers two authors have co-authored together, and node borders indicate the number of papers an author has published with Razvan Pascanu. Razvan Pascanu himself is excluded from the visualization to improve readability, since he would be connected to every node in the network.
All Works
1. Stan, Adriana, Traian Rebedea, Dani Yogatama, et al. (2021). LiRo: Benchmark and leaderboard for Romanian language tasks. Neural Information Processing Systems. 10 indexed citations.
2. Gu, Albert, et al. (2020). Improving the Gating Mechanism of Recurrent Neural Networks. International Conference on Machine Learning. 1. 3800–3809. 5 indexed citations.
3. Parisotto, Emilio, Francis Song, Jack W. Rae, et al. (2020). Stabilizing Transformers for Reinforcement Learning. International Conference on Machine Learning. 1. 7487–7498. 9 indexed citations.
4. Titsias, Michalis K., Jonathan Schwarz, Alexander Matthews, Razvan Pascanu, & Yee Whye Teh. (2020). Functional Regularisation for Continual Learning with Gaussian Processes. arXiv (Cornell University). 9 indexed citations.
5. Rusu, Andrei A., et al. (2020). Meta-Learning with Warped Gradient Descent. Research Explorer (The University of Manchester). 20 indexed citations.
6. Jayakumar, Siddhant M., Jacob Menick, Wojciech Marian Czarnecki, et al. (2020). Multiplicative Interactions and Where to Find Them. International Conference on Learning Representations. 20 indexed citations.
7. Mirzadeh, Seyed Iman, Mehrdad Farajtabar, Razvan Pascanu, & Hassan Ghasemzadeh. (2020). Understanding the Role of Training Regimes in Continual Learning. Neural Information Processing Systems. 33. 7308–7320. 4 indexed citations.
8. Zambaldi, Vinícius, David Raposo, Adam Santoro, et al. (2018). Deep reinforcement learning with relational inductive biases. International Conference on Learning Representations. 47 indexed citations.
9. Kirkpatrick, James, Razvan Pascanu, Neil C. Rabinowitz, et al. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences. 114(13). 3521–3526. 3594 indexed citations.
10. Czarnecki, Wojciech Marian, Simon Osindero, Max Jaderberg, Grzegorz Świrszcz, & Razvan Pascanu. (2017). Sobolev Training for Neural Networks. Neural Information Processing Systems. 30. 4278–4287. 28 indexed citations.
11. Watters, Nicholas, Daniel Zoran, Théophane Weber, et al. (2017). Visual Interaction Networks: Learning a Physics Simulator from Video. Neural Information Processing Systems. 30. 4539–4547. 74 indexed citations.
12. Racanière, Sébastien, Théophane Weber, David Reichert, et al. (2017). Imagination-Augmented Agents for Deep Reinforcement Learning. arXiv (Cornell University). 30. 5690–5701. 49 indexed citations.
Pascanu, Razvan, Guido Montúfar, & Yoshua Bengio. (2014). On the number of inference regions of deep feed forward networks with piece-wise linear activations. arXiv (Cornell University). 21 indexed citations.
15. Dauphin, Yann, Razvan Pascanu, Çaǧlar Gülçehre, et al. (2014). Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. arXiv (Cornell University). 27. 2933–2941. 231 indexed citations.
16. Pascanu, Razvan & Yoshua Bengio. (2014). Revisiting Natural Gradient for Deep Networks. arXiv (Cornell University). 43 indexed citations.
Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive
bibliographic database. While OpenAlex provides broad and valuable coverage of the global
research landscape, it—like all bibliographic datasets—has inherent limitations. These include
incomplete records, variations in author disambiguation, differences in journal indexing, and
delays in data updates. As a result, some metrics and network relationships displayed in
Rankless may not fully capture the entirety of a scholar's output or impact.