Hit papers significantly outperform the citation benchmark for their cohort. A paper qualifies
if it meets any one of three criteria: it has ≥500 total citations; it achieves ≥1.5× the
top-1% citation threshold for papers in the same subfield and year (the threshold is the
minimum citation count needed to enter the top 1%, not the average within it); or it reaches
that top-1% threshold in at least one of its specific research topics.
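The rule is disjunctive: meeting any one criterion qualifies a paper. Below is a minimal sketch of the rule in Python; the threshold inputs (the top-1% cutoff for the paper's subfield and year, and the cutoff for each of its topics) are assumed to be supplied from outside, since Rankless does not publish its pipeline.

```python
def is_hit_paper(
    total_citations: int,
    subfield_top1_cutoff: float,      # min citations to enter the top 1%
                                      # for the paper's subfield and year
    topic_top1_cutoffs: list[float],  # the same kind of cutoff for each
                                      # of the paper's research topics
) -> bool:
    """Sketch of the hit-paper rule: any one criterion suffices."""
    return (
        total_citations >= 500
        or total_citations >= 1.5 * subfield_top1_cutoff
        or any(total_citations >= c for c in topic_top1_cutoffs)
    )
```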
Building Watson: An Overview of the DeepQA Project
David Ferrucci, Eric W. Brown et al. (2010). 789 citations.
Peers, ranked by citation overlap (career bar shows career stage, early→late).
This map shows the geographic impact of John Prager's research: the number of citations his papers receive from authors working in each country. You can also color the map by specialization, or compare the citations John Prager receives from each country with the number expected given that country's size and research output (values larger than one mean the country cites John Prager more than expected).
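A hedged sketch of the over-citation ratio the map describes, assuming a simple baseline in which each country is expected to cite in proportion to its share of world research output (Rankless does not document its exact expectation model; the names below are illustrative):

```python
def citation_ratios(
    observed: dict[str, int],        # citations to the scholar, keyed by country
    output_share: dict[str, float],  # each country's share of world research output
) -> dict[str, float]:
    """Observed vs. expected citations per country; values > 1 mean
    the country cites the scholar more than the baseline predicts."""
    total = sum(observed.values())
    return {
        country: observed.get(country, 0) / (share * total)
        for country, share in output_share.items()
        if share > 0 and total > 0
    }
```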
This network shows the impact of John Prager's papers. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes mark fields that tend to cite John Prager's papers. The network helps suggest where John Prager may publish in the future.
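One way such a forecast could be read off the network, sketched below: fields that border many of the citing (colored) fields are plausible future venues. This is an illustrative heuristic under assumed inputs, not Rankless's documented method.

```python
import networkx as nx

def candidate_future_fields(field_graph: nx.Graph, citing_fields: set[str]) -> list[str]:
    """Rank fields that do not yet cite the scholar by how many
    citing fields they are linked to (more links = more plausible)."""
    scores = {
        field: sum(nbr in citing_fields for nbr in field_graph.neighbors(field))
        for field in field_graph.nodes
        if field not in citing_fields
    }
    return sorted(
        (f for f, s in scores.items() if s > 0),
        key=lambda f: scores[f],
        reverse=True,
    )
```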
Co-authorship network of John Prager's top collaborators
This figure shows the co-authorship network connecting the top 25 collaborators of John Prager. A scholar is included among the top collaborators based on the total number of citations received by their joint publications with John Prager. Edge widths represent the number of papers two authors have co-authored together, and node borders indicate the number of papers an author published with John Prager. John Prager himself is excluded from the visualization to improve readability, since he would be connected to every node in the network.
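A minimal sketch of how such a network could be assembled with networkx; the paper records (author lists plus citation counts) are an assumed input format, not Rankless's actual schema:

```python
from collections import Counter
from itertools import combinations

import networkx as nx

def build_coauthor_network(papers, focal="John Prager", top_n=25):
    """papers: iterable of dicts like {"authors": [...], "citations": int}.

    Collaborators are ranked by total citations of their joint papers
    with the focal scholar; the focal scholar is omitted from the graph,
    as in the figure, since they would connect to every node.
    """
    joint_citations = Counter()  # collaborator -> citations of joint papers
    joint_papers = Counter()     # collaborator -> papers shared with the focal scholar
    for p in papers:
        if focal in p["authors"]:
            for author in set(p["authors"]) - {focal}:
                joint_citations[author] += p["citations"]
                joint_papers[author] += 1

    top = {a for a, _ in joint_citations.most_common(top_n)}

    g = nx.Graph()
    for author in top:
        # Node border encodes papers co-authored with the focal scholar.
        g.add_node(author, border=joint_papers[author])
    for p in papers:
        present = sorted(set(p["authors"]) & top)
        for u, v in combinations(present, 2):
            # Edge weight (drawn as width) counts papers the pair share.
            w = g.get_edge_data(u, v, default={"weight": 0})["weight"]
            g.add_edge(u, v, weight=w + 1)
    return g
```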
All Works
20 of 20 papers shown
1. Liang, Jennifer J., et al. (2021). Reducing Physicians' Cognitive Load During Chart Review: A Problem-Oriented Summary of the Patient Electronic Record. PubMed. 763–772. 6 indexed citations.
Prager, John. (2007). Open-Domain Question Answering. Foundations and Trends® in Information Retrieval (now publishers, Inc. eBooks). 6 indexed citations.
Chu‐Carroll, Jennifer, John Prager, Christopher Welty, Krzysztof Czuba, & David Ferrucci. (2006). A Multi-Strategy and Multi-Source Approach to Question Answering. Defense Technical Information Center (DTIC). 23 indexed citations.
Chu‐Carroll, Jennifer, et al. (2004). IBM's PIQUANT II in TREC 2004. Text REtrieval Conference. 10 indexed citations.
13. Blair-Goldensohn, Sasha, Jennifer Chu‐Carroll, Krzysztof Czuba, & John Prager. (2004). IBM's PIQUANT II in TREC2004. Columbia Academic Commons (Columbia University). 8 indexed citations.
14. Prager, John, Jennifer Chu‐Carroll, & Krzysztof Czuba. (2004). A Multi-Agent Approach to Using Redundancy and Reinforcement in Question Answering. 237–252. 3 indexed citations.
15. Prager, John, et al. (2003). IBM's PIQUANT in TREC2003. Defense Technical Information Center (DTIC). 283–292. 21 indexed citations.
Prager, John, Jennifer Chu‐Carroll, & Krzysztof Czuba. (2001). Use of WordNet hypernyms for answering what-is questions. Text REtrieval Conference. 250–257. 34 indexed citations.
18. Prager, John, Eric W. Brown, Dragomir Radev, & Krzysztof Czuba. (2000). One Search Engine or Two for Question-Answering. Text REtrieval Conference. 13 indexed citations.
19. Prager, John, et al. (1999). The Use of Predictive Annotation for Question Answering in TREC8. Text REtrieval Conference. 59 indexed citations.
20. Prager, John, et al. (1977). Segmentation processes in the VISIONS system. International Joint Conference on Artificial Intelligence. 642–643. 5 indexed citations.
Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive
bibliographic database. While OpenAlex provides broad and valuable coverage of the global
research landscape, it—like all bibliographic datasets—has inherent limitations. These include
incomplete records, variations in author disambiguation, differences in journal indexing, and
delays in data updates. As a result, some metrics and network relationships displayed in
Rankless may not fully capture the entirety of a scholar's output or impact.