Hit papers significantly outperform the citation benchmark for their cohort. A paper qualifies
if it has ≥500 total citations, achieves ≥1.5× the top-1% citation threshold for papers in the
same subfield and year (this is the minimum needed to enter the top 1%, not the average
within it), or reaches the top citation threshold in at least one of its specific research
topics.
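The three criteria above combine as a simple "any of" rule. A minimal sketch, assuming hypothetical inputs: `citations` is the paper's total citation count, `top1_threshold` is the minimum count needed to enter the top 1% for the paper's subfield and year, and `topic_thresholds` holds the top citation threshold for each of the paper's research topics (none of these names come from Rankless itself):

```python
def is_hit_paper(citations: int, top1_threshold: float,
                 topic_thresholds: list[float]) -> bool:
    """Return True if the paper meets at least one hit-paper criterion."""
    return (
        citations >= 500                                  # absolute citation count
        or citations >= 1.5 * top1_threshold              # 1.5x the cohort's top-1% entry threshold
        or any(citations >= t for t in topic_thresholds)  # tops at least one research topic
    )

# Example: 300 citations against a top-1% entry threshold of 180
# (300 >= 1.5 * 180 = 270, so the second criterion fires)
print(is_hit_paper(300, 180, [400, 350]))  # → True
```

Note that only one criterion needs to hold, so a paper in a small subfield can qualify with far fewer than 500 citations.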
If deep learning is the answer, what is the question?
Andrew Saxe, Stephanie Nelli et al. (2020). Nature Reviews Neuroscience. 208 citations.
Peers
Peers ranked by citation overlap; the career bar shows each peer's career stage (early → late).
This map shows the geographic impact of Andrew Saxe's research: the number of citations coming from papers published by authors working in each country. The map can also be colored by specialization, or used to compare the citations Andrew Saxe receives from each country with the number expected given that country's size and research output (values greater than one mean the country cites Andrew Saxe more than expected).
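The country-level comparison just described is an observed-over-expected ratio. A minimal sketch, where `observed` and `expected` are hypothetical per-country inputs (the expected count would come from the country's share of global research output, which is not specified here):

```python
def citation_ratio(observed: int, expected: float) -> float:
    """Ratio of citations a country actually gives a scholar to the
    number expected from the country's size and research output.
    Values greater than 1 mean the country over-cites the scholar."""
    return observed / expected

# A country expected to contribute 25 citations that contributed 50
print(citation_ratio(50, 25))  # → 2.0
```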
This network shows the impact of papers produced by Andrew Saxe. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes show fields that tend to cite the papers produced by Andrew Saxe. The network helps show where Andrew Saxe may publish in the future.
Co-authorship network of Andrew Saxe's collaborators
This figure shows the co-authorship network connecting the top 25 collaborators of Andrew Saxe. A scholar is included among the top collaborators based on the total number of citations received by their joint publications with Andrew Saxe. Edge widths represent the number of papers two authors have co-authored together, and node borders indicate the number of papers an author has published with Andrew Saxe. Andrew Saxe is excluded from the visualization to improve readability, since they would be connected to every node in the network.
8. Summerfield, Christopher, et al. (2020). Characterizing emergent representations in a space of candidate learning rules for deep networks. Neural Information Processing Systems, 33, 8660–8670. (2 indexed citations)
9. Saxe, Andrew, Stephanie Nelli, & Christopher Summerfield. (2020). If deep learning is the answer, what is the question? Nature Reviews Neuroscience, 22(1), 55–67. (208 indexed citations)
10. Saxe, Andrew, et al. (2018). Hierarchical subtask discovery with non-negative matrix factorization. arXiv (Cornell University). (2 indexed citations)
11. Saxe, Andrew, et al. (2017). Hierarchy Through Composition with Multitask LMDPs. Oxford University Research Archive (ORA) (University of Oxford), 3017–3026. (8 indexed citations)
12. Musslick, Sebastian, et al. (2017). Multitasking Capability Versus Learning Efficiency in Neural Network Architectures. Cognitive Science. (15 indexed citations)
Saxe, Andrew, et al. (2014). Modeling Perceptual Learning with Deep Networks. Cognitive Science, 36(36). (4 indexed citations)
15. Saxe, Andrew, James L. McClelland, & Surya Ganguli. (2014). Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. International Conference on Learning Representations. (117 indexed citations)
16. Saxe, Andrew. (2014). Deep Learning and the Brain. Cognitive Science. (1 indexed citation)
17. Saxe, Andrew, James L. McClelland, & Surya Ganguli. (2013). Learning hierarchical categories in deep neural networks. Cognitive Science, 35(35). (8 indexed citations)
18. Suresh, Bipin, et al. (2011). Unsupervised learning models of primary cortical receptive fields and receptive field plasticity. Neural Information Processing Systems, 24, 1971–1979. (27 indexed citations)
19. Saxe, Andrew, et al. (2011). On Random Weights and Unsupervised Feature Learning. International Conference on Machine Learning, 1089–1096. (165 indexed citations)
20. Goodfellow, Ian, Honglak Lee, Quoc V. Le, Andrew Saxe, & Andrew Y. Ng. (2009). Measuring Invariances in Deep Networks. Neural Information Processing Systems, 22, 646–654. (188 indexed citations)
Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive
bibliographic database. While OpenAlex provides broad and valuable coverage of the global
research landscape, it—like all bibliographic datasets—has inherent limitations. These include
incomplete records, variations in author disambiguation, differences in journal indexing, and
delays in data updates. As a result, some metrics and network relationships displayed in
Rankless may not fully capture the entirety of a scholar's output or impact.