Hit papers significantly outperform the citation benchmark for their cohort. A paper qualifies
if it has ≥500 total citations, achieves ≥1.5× the top-1% citation threshold for papers in the
same subfield and year (this is the minimum needed to enter the top 1%, not the average
within it), or reaches the top citation threshold in at least one of its specific research
topics.
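As a rough illustration, the qualification check could be expressed as follows. This is a minimal sketch: the parameter names, data shapes, and the exact interpretation of the per-topic threshold are assumptions, not Rankless's actual implementation.

```python
def is_hit_paper(total_citations: int,
                 cohort_top1_threshold: float,
                 topic_top_thresholds: dict[str, float]) -> bool:
    """Return True if a paper meets any of the three 'hit paper' criteria.

    cohort_top1_threshold  -- minimum citations needed to enter the top 1%
                              for the paper's subfield and publication year
    topic_top_thresholds   -- top citation threshold for each of the paper's
                              research topics (assumed comparable to the
                              paper's total citation count)
    """
    # Criterion 1: at least 500 total citations
    if total_citations >= 500:
        return True
    # Criterion 2: at least 1.5x the cohort's top-1% entry threshold
    if total_citations >= 1.5 * cohort_top1_threshold:
        return True
    # Criterion 3: reaches the top threshold in at least one research topic
    return any(total_citations >= t for t in topic_top_thresholds.values())
```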
Evolutionary-scale prediction of atomic-level protein structure with a language model · 2023 · 2.0k citations · Zeming Lin, Halil Akin et al. · Science
Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences · 2021 · 1.4k citations · Alexander Rives, Joshua Meier et al. · Proceedings of the National Academy of Sciences
Peers
Peers by citation overlap · career bar shows stage (early→late)
This map shows the geographic impact of Tom Sercu's research: the number of citations coming from papers whose authors work in each country. You can also color the map by specialization, or compare the citations Tom Sercu receives from a country with the number expected given that country's size and research output (values greater than one mean the country cites Tom Sercu more than expected).
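A minimal sketch of the ratio behind that comparison, assuming we have per-country citation counts and a per-country expected share of citations; how Rankless actually derives the expected values is not specified here.

```python
def citation_ratio_by_country(observed: dict[str, int],
                              expected_share: dict[str, float]) -> dict[str, float]:
    """Compare observed and expected citations per country.

    observed        -- citations to the scholar from papers in each country
    expected_share  -- each country's expected fraction of world citations,
                       e.g. based on its size and research output (assumed input)
    A value above 1.0 means the country cites the scholar more than expected.
    """
    total = sum(observed.values())
    return {
        country: observed.get(country, 0) / (share * total)
        for country, share in expected_share.items()
        if share > 0 and total > 0
    }
```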
This network shows the impact of papers produced by Tom Sercu. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes mark fields that tend to cite Tom Sercu's papers, and the network suggests fields where Tom Sercu may publish in the future.
Co-authorship network of Tom Sercu's collaborators
This figure shows the co-authorship network connecting the top 25 collaborators of Tom Sercu. A scholar is counted among the top collaborators based on the total number of citations received by their joint publications with Tom Sercu. Edge widths represent the number of papers two authors have co-authored together, and node borders indicate the number of papers an author has published with Tom Sercu. Tom Sercu is excluded from the visualization to improve readability, since they would be connected to every node in the network.
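For illustration, here is a minimal sketch of how such a collaborator network could be assembled, assuming a simple list of (authors, citations) records; the ranking rule and attribute names are assumptions, not Rankless's actual pipeline.

```python
import networkx as nx

def build_collaborator_network(papers, focal="Tom Sercu", top_n=25):
    """Build a collaborator network like the one described above (illustrative).

    papers -- iterable of (authors, citations) pairs, where authors is a list
              of author names and citations is the paper's citation count.
    Collaborators are ranked by total citations of their joint papers with the
    focal author; the focal author is left out of the graph for readability.
    """
    joint_citations, joint_papers = {}, {}
    for authors, cites in papers:
        if focal not in authors:
            continue
        for a in authors:
            if a == focal:
                continue
            joint_citations[a] = joint_citations.get(a, 0) + cites
            joint_papers[a] = joint_papers.get(a, 0) + 1
    top = sorted(joint_citations, key=joint_citations.get, reverse=True)[:top_n]

    g = nx.Graph()
    for a in top:
        # node border width in the figure would be driven by this attribute
        g.add_node(a, papers_with_focal=joint_papers[a])
    for authors, _ in papers:
        present = [a for a in authors if a in top]
        for i, a in enumerate(present):
            for b in present[i + 1:]:
                # edge weight = number of papers the pair co-authored together
                w = g[a][b]["weight"] + 1 if g.has_edge(a, b) else 1
                g.add_edge(a, b, weight=w)
    return g
```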
All Works
16 of 16 papers shown
1. Lin, Zeming, Halil Akin, Roshan Rao, et al. (2023). Evolutionary-scale prediction of atomic-level protein structure with a language model. Science, 379(6637), 1123–1130. 1,968 indexed citations.
2. Rives, Alexander, Joshua Meier, Tom Sercu, et al. (2021). Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. Proceedings of the National Academy of Sciences, 118(15). 1,429 indexed citations.
Rao, Roshan, Jason Liu, Robert Verkuil, et al. (2021). MSA Transformer. 3 indexed citations.
5. Mroueh, Youssef, et al. (2021). Improved Mutual Information Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 9009–9017. 5 indexed citations.
6. Rao, Roshan, Joshua Meier, Tom Sercu, Sergey Ovchinnikov, & Alexander Rives. (2021). Transformer protein language models are unsupervised structure learners. 2 indexed citations.
7. Melnyk, Igor, et al. (2019). Improved Adversarial Image Captioning. International Conference on Learning Representations. 1 indexed citation.
8. Sercu, Tom, Sebastian Gehrmann, Hendrik Strobelt, et al. (2019). Interactive Visual Exploration of Latent Space (IVELS) for Peptide Auto-Encoder Model Selection. International Conference on Learning Representations. 3 indexed citations.
Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive
bibliographic database. While OpenAlex provides broad and valuable coverage of the global
research landscape, it—like all bibliographic datasets—has inherent limitations. These include
incomplete records, variations in author disambiguation, differences in journal indexing, and
delays in data updates. As a result, some metrics and network relationships displayed in
Rankless may not fully capture the entirety of a scholar's output or impact.