Hit papers significantly outperform the citation benchmark for their cohort. A paper qualifies if it meets any of the following: it has at least 500 total citations; it reaches at least 1.5× the top-1% citation threshold for papers in the same subfield and year (this threshold is the minimum needed to enter the top 1%, not the average within it); or it reaches the top citation threshold in at least one of its specific research topics.
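The qualification rule above can be written as a simple predicate. This is a minimal sketch under stated assumptions: the function and parameter names are illustrative, not Rankless's actual implementation.

```python
def is_hit_paper(total_citations, top1_threshold, topic_citations, topic_thresholds):
    """Return True if a paper meets any of the three hit-paper criteria.

    total_citations  -- the paper's total citation count
    top1_threshold   -- minimum citations needed to enter the top 1% for
                        the paper's subfield and publication year
    topic_citations  -- the paper's citation count within each of its topics
    topic_thresholds -- the top citation threshold for each corresponding topic
    """
    # Criterion 1: at least 500 citations overall.
    if total_citations >= 500:
        return True
    # Criterion 2: at least 1.5x the minimum needed to enter the top 1%.
    if total_citations >= 1.5 * top1_threshold:
        return True
    # Criterion 3: reaches the top threshold in at least one research topic.
    return any(c >= t for c, t in zip(topic_citations, topic_thresholds))
```

Note that the three criteria are alternatives: a modestly cited paper can still qualify by dominating a single narrow topic.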
Learning With Concept and Knowledge Maps: A Meta-Analysis (2006) · 583 citations · John C. Nesbit, Olusola Adesope
Intelligent tutoring systems and learning outcomes: A meta-analysis (2014) · 392 citations · Olusola Adesope, John C. Nesbit et al.
Peers (Enhanced Table)
Peers ranked by citation overlap · career bar shows stage (early → late)
Countries citing papers authored by John C. Nesbit
This map shows the geographic impact of John C. Nesbit's research: the number of citations coming from papers published by authors working in each country. The map can also be colored by specialization, and the citations John C. Nesbit receives from each country can be compared with the number expected from that country's size and research output (a ratio greater than one means the country cites John C. Nesbit more than expected).
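The "more than expected" comparison can be read as a ratio of observed to expected citations, where the expectation scales with a country's share of world research output. This is a hedged sketch of that idea; the names are illustrative and Rankless's actual normalization may differ.

```python
def citation_ratio(observed_citations, country_output, world_output, total_citations):
    """Observed vs. expected citations from one country.

    Expected citations = the scholar's total citations scaled by the
    country's share of world research output. A ratio greater than one
    means the country cites the scholar more than its size alone predicts.
    """
    expected = total_citations * (country_output / world_output)
    return observed_citations / expected
```

For example, a country producing 10% of world output but supplying 20% of a scholar's citations yields a ratio of 2.0.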
This network shows the impact of papers produced by John C. Nesbit. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes mark fields that tend to cite John C. Nesbit's papers. The network suggests where John C. Nesbit may publish in the future.
Co-authorship network of co-authors of John C. Nesbit
This figure shows the co-authorship network connecting the top 25 collaborators of John C. Nesbit. A scholar is included among the top collaborators based on the total number of citations received by their joint publications with John C. Nesbit. Edge widths represent the number of papers two authors have co-authored together, and node borders signify the number of papers an author published with John C. Nesbit. John C. Nesbit is excluded from the visualization to improve readability, since they would be connected to every node in the network.
Hajian, Shiva, et al. (2019). Inquiry Learning with an Interactive Physics Simulation: What Exploratory Strategies Lead to Success? The Journal of Interactive Learning Research. 30(4). 451–476. · 1 indexed citation
Popowich, Fred, et al. (2013). Generating Natural Language Questions to Support Learning On-Line. 105–114. · 67 indexed citations
9. Adesope, Olusola & John C. Nesbit. (2009). Learning with collaborative concept maps: A Meta-Analysis. E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education. 2009(1). 2082–2091. · 1 indexed citation
Nesbit, John C., et al. (2007). A Framework for Evaluating the Quality of Multimedia Learning Resources. Educational Technology & Society. 10(2). 44–59. · 175 indexed citations
12. Nesbit, John C., Hector Larios, & Olusola Adesope. (2007). How Students Read Concept Maps: A Study of Eye Movements. EdMedia: World Conference on Educational Media and Technology. 2007(1). 3961–3970. · 9 indexed citations
13. Adesope, Olusola & John C. Nesbit. (2005). Toward Accessible Learning Resources. E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education. 2005(1). 1802–1807. · 1 indexed citation
14. Nesbit, John C., et al. (2005). Web-Based Tools for Collaborative Evaluation of Learning Resources. SHILAP Revista de lepidopterología. · 12 indexed citations
15. Nesbit, John C. & Olusola Adesope. (2005). Dynamic Concept Maps. EdMedia: World Conference on Educational Media and Technology. 2005(1). 4323–4329. · 5 indexed citations
16. Nesbit, John C., et al. (2004). Learning object evaluation and convergent participation: Tools for professional development in e-learning. Annual Conference on Computers. 339–344. · 6 indexed citations
17. Richards, Griff, et al. (2004). Teachers need simple, effective tools to evaluate learning objects: Enter elera.net. Annual Conference on Computers. 333–338. · 7 indexed citations
18. Kumar, Vive, et al. (2004). Applying Bayesian Belief Networks in Learning Object Quality Rating. EdMedia: World Conference on Educational Media and Technology. 2004(1). 5256–5262. · 1 indexed citation
19. Kumar, Vive, et al. (2003). Rating Learning Object Quality with Bayesian Belief Networks. E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education. 2003(1). 1598–1601. · 2 indexed citations
20. Nesbit, John C. (1986). The accuracy of approximate string matching algorithms. The Journal of Computer Based Instruction. 13(3). 80–83. · 9 indexed citations
Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive
bibliographic database. While OpenAlex provides broad and valuable coverage of the global
research landscape, it—like all bibliographic datasets—has inherent limitations. These include
incomplete records, variations in author disambiguation, differences in journal indexing, and
delays in data updates. As a result, some metrics and network relationships displayed in
Rankless may not fully capture the entirety of a scholar's output or impact.