Noah Constant

7.5k total citations · 3 hit papers
22 papers, 2.5k citations indexed

About

Noah Constant is a scholar working on Artificial Intelligence, Language and Linguistics, and Computer Vision and Pattern Recognition. According to data from OpenAlex, Noah Constant has authored 22 papers receiving a total of 2.5k indexed citations (citations from other indexed papers that have themselves been cited), including 17 papers in Artificial Intelligence, 7 in Language and Linguistics, and 5 in Computer Vision and Pattern Recognition. Recurrent topics in Noah Constant's work include Natural Language Processing Techniques (17 papers), Topic Modeling (15 papers), and Syntax, Semantics, Linguistic Variation (6 papers), and these are also the topics most common among the papers citing this work. Noah Constant collaborates with scholars based in the United States and Switzerland. Co-authors include Rami Al‐Rfou, Daniel Cer, Aditya Barua, Mihir Kale, Linting Xue, Colin Raffel, Adam P. Roberts, Yinfei Yang, Aditya Siddhant, and Ray Kurzweil. Noah Constant has published in venues such as Linguistics and Philosophy, Transactions of the Association for Computational Linguistics, and the Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).

In The Last Decade

Noah Constant

22 papers receiving 2.3k citations

Hit Papers

mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer (citation trend chart, 2018–2026; y-axis: citations per year)

Peers (Enhanced Table)

Peers ranked by citation overlap · career columns show citations by career stage (early→late) · × values compare each entry to the reference scholar, Noah Constant

Name · Country · h · Career cites (×ref) · Citations by career stage, early→late (×ref) · Papers · Cites
Noah Constant United States 14 2.1k 451 275 122 99 22 2.5k
Anders Søgaard Denmark 27 2.6k 1.2× 402 0.9× 224 0.8× 83 0.7× 71 0.7× 203 2.9k
Barbara Plank Denmark 26 1.9k 0.9× 352 0.8× 235 0.9× 149 1.2× 74 0.7× 145 2.2k
Ellie Pavlick United States 23 2.5k 1.2× 540 1.2× 223 0.8× 89 0.7× 43 0.4× 72 2.9k
Albert Gatt Malta 22 1.6k 0.8× 341 0.8× 255 0.9× 79 0.6× 118 1.2× 99 2.2k
Kevin Gimpel United States 22 2.4k 1.2× 395 0.9× 302 1.1× 124 1.0× 41 0.4× 81 2.8k
Michael Heilman United States 22 1.8k 0.9× 217 0.5× 387 1.4× 99 0.8× 72 0.7× 40 2.3k
Jean Y. Wu United States 5 3.4k 1.7× 477 1.1× 431 1.6× 208 1.7× 45 0.5× 6 3.8k
Kenneth Heafield United Kingdom 22 2.5k 1.2× 550 1.2× 311 1.1× 67 0.5× 69 0.7× 56 2.8k
Kentaro Inui Japan 26 2.4k 1.2× 278 0.6× 430 1.6× 152 1.2× 27 0.3× 235 2.7k
Roi Reichart Israel 28 2.5k 1.2× 353 0.8× 194 0.7× 86 0.7× 44 0.4× 108 2.9k

Countries citing papers authored by Noah Constant


This map shows the geographic impact of Noah Constant's research: the number of citations coming from papers whose authors work in each country. The map can also be colored by specialization, comparing the citations Noah Constant receives from each country with the number expected given that country's size and research output (values larger than one mean the country cites Noah Constant more than expected).
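The "more than expected" comparison described above boils down to a simple ratio. The sketch below is a hypothetical illustration of that idea, not Rankless's actual formula: expected citations from a country are taken as the scholar's total citations multiplied by that country's share of global research output, and the displayed value is observed divided by expected. The function name and the toy numbers are assumptions.

```python
def citation_ratio(observed_by_country, output_share_by_country):
    """Observed vs. expected citations per country.

    expected = total observed citations * the country's share of
    global research output; a ratio > 1 means the country cites
    the scholar more than its research volume alone would predict.
    """
    total = sum(observed_by_country.values())
    ratios = {}
    for country, observed in observed_by_country.items():
        expected = total * output_share_by_country[country]
        ratios[country] = observed / expected if expected else float("inf")
    return ratios

# Toy numbers, illustrative only:
obs = {"US": 800, "CH": 50, "JP": 150}      # citations received
share = {"US": 0.30, "CH": 0.02, "JP": 0.08}  # share of world output
r = citation_ratio(obs, share)
# CH: 50 / (1000 * 0.02) = 2.5 -> cites the scholar more than expected
```

Under this toy model, Switzerland's small research output makes even 50 citations a strong over-representation, which is exactly the kind of signal the specialization coloring is meant to surface.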

Fields of papers citing papers by Noah Constant

Legend: Physical Sciences · Health Sciences · Life Sciences · Social Sciences

This network shows the fields impacted by Noah Constant's papers. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes mark fields that tend to cite Noah Constant's papers. The network suggests where Noah Constant may publish in the future.

Co-authorship network of co-authors of Noah Constant

This figure shows the co-authorship network connecting the top 25 collaborators of Noah Constant. A scholar is included among the top collaborators based on the total number of citations received by their joint publications with Noah Constant. Edge widths represent the number of papers two authors have co-authored together; node borders indicate the number of papers an author published with Noah Constant. Noah Constant is excluded from the visualization for readability, since they would be connected to every node in the network.
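The selection and edge-weighting just described can be sketched roughly as follows. This is a hypothetical illustration with made-up paper records; `coauthor_network` and its input shape are assumptions, not Rankless's actual pipeline. Collaborators are ranked by citations of their joint papers with the focal scholar, and edges between selected collaborators count their shared papers.

```python
from collections import Counter
from itertools import combinations

def coauthor_network(papers, hero, top_n=25):
    """Build the co-authorship network around `hero`.

    papers: list of {"authors": [...], "citations": int}.
    Collaborators are ranked by total citations of their joint
    papers with `hero` (node inclusion); edge weights count the
    papers any two selected collaborators share.
    """
    joint_cites = Counter()   # collaborator -> citations of joint papers
    joint_papers = Counter()  # collaborator -> papers with hero (node border)
    for p in papers:
        if hero in p["authors"]:
            for a in p["authors"]:
                if a != hero:
                    joint_cites[a] += p["citations"]
                    joint_papers[a] += 1
    top = {a for a, _ in joint_cites.most_common(top_n)}
    edges = Counter()  # (a, b) -> number of co-authored papers (edge width)
    for p in papers:
        authors = sorted(set(p["authors"]) & top)
        for a, b in combinations(authors, 2):
            edges[(a, b)] += 1
    return top, edges, joint_papers

# Toy records, illustrative only:
papers = [
    {"authors": ["Hero", "A", "B"], "citations": 100},
    {"authors": ["Hero", "A"], "citations": 50},
    {"authors": ["A", "B"], "citations": 10},
]
top, edges, joint = coauthor_network(papers, "Hero")
# A and B are selected; the A-B edge has weight 2 (papers 1 and 3)
```

Note that the third paper contributes to the A-B edge weight even though the focal scholar is not on it, matching the figure's distinction between edge widths (any co-authorship among collaborators) and node borders (papers with the focal scholar).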

All Works

20 of 20 papers shown
1. Vu, Tu, Mohit Iyyer, Xuezhi Wang, et al. (2024). FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation. 13697–13720. 25 indexed citations
2. Liu, Rosanne, Dan Garrette, Chitwan Saharia, et al. (2023). Character-Aware Models Improve Visual Text Rendering. 16270–16297. 11 indexed citations
3. Dozat, Timothy, Xavier García, Dan Garrette, et al. (2023). FRMT: A Benchmark for Few-Shot Region-Aware Machine Translation. Transactions of the Association for Computational Linguistics. 11. 671–685. 4 indexed citations
4. Ni, Jianmo, Gustavo Hernández Ábrego, Noah Constant, et al. (2022). Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models. Findings of the Association for Computational Linguistics: ACL 2022. 1864–1874. 138 indexed citations
5. Vu, Tu, Aditya Barua, Brian Lester, et al. (2022). Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation. 9279–9300. 11 indexed citations
6. Vu, Tu, Brian Lester, Noah Constant, Rami Al‐Rfou, & Daniel Cer. (2022). SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 5039–5059. 112 indexed citations
7. Xue, Linting, Aditya Barua, Noah Constant, et al. (2022). ByT5: Towards a Token-Free Future with Pre-trained Byte-to-Byte Models. Transactions of the Association for Computational Linguistics. 10. 291–306. 114 indexed citations
8. Xue, Linting, Noah Constant, Adam P. Roberts, et al. (2021). mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer. 483–498. 841 indexed citations
9. Constant, Noah, et al. (2021). TextSETTR: Few-Shot Text Style Extraction and Tunable Targeted Restyling. 23 indexed citations
10. Constant, Noah. (2021). Contrastive Topic: Meanings and Realizations. Scholarworks (University of Massachusetts Amherst). 13 indexed citations
11. Kale, Mihir, Aditya Siddhant, Rami Al‐Rfou, et al. (2021). nmT5 - Is parallel data still relevant for pre-training massively multilingual language models? 683–691. 7 indexed citations
12. Roy, Uma, et al. (2020). LAReQA: Language-Agnostic Answer Retrieval from a Multilingual Pool. 5919–5930. 21 indexed citations
13. Yang, Yinfei, Daniel Cer, Amin Ahmad, et al. (2020). Multilingual Universal Sentence Encoder for Semantic Retrieval. 87–94. 228 indexed citations
14. Cer, Daniel, Yinfei Yang, Sheng-yi Kong, et al. (2018). Universal Sentence Encoder for English. 169–174. 705 indexed citations
15. Guo, Mandy, Qinlan Shen, Yinfei Yang, et al. (2018). Effective Parallel Corpus Mining using Bilingual Sentence Embeddings. 165–176. 54 indexed citations
16. Yang, Yinfei, Steve Yuan, Daniel Cer, et al. (2018). Learning Semantic Textual Similarity from Conversations. 164–174. 86 indexed citations
17. Constant, Noah. (2015). Witnessable quantifiers license type-e meaning: Evidence from contrastive topic, equatives and supplements. Proceedings from Semantics and Linguistic Theory. 286–286. 1 indexed citation
18. Constant, Noah. (2012). Witnessable quantifiers license type-e meaning: Evidence from contrastive topic, equatives and supplements. Proceedings from Semantics and Linguistic Theory. 22. 286–286. 9 indexed citations
19. Constant, Noah. (2012). English rise-fall-rise: a study in the semantics and pragmatics of intonation. Linguistics and Philosophy. 35(5). 407–442. 37 indexed citations
20. Constant, Noah, et al. (2010). Mandarin 'even', 'all' and the Trigger of Focus Movement. Scholarly Commons (University of Pennsylvania). 16(1). 4. 6 indexed citations

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.

Explore authors with similar magnitude of impact

Rankless by CCL
2026