Hit papers significantly outperform the citation benchmark for their cohort. A paper qualifies if it meets any of three criteria: it has ≥500 total citations; it achieves ≥1.5× the top-1% citation threshold for papers in the same subfield and year (this threshold is the minimum needed to enter the top 1%, not the average within it); or it reaches the top citation threshold in at least one of its specific research topics.
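The three criteria above can be sketched as a simple predicate. This is an illustrative reconstruction, not Rankless's actual implementation; all names and the threshold inputs are assumptions:

```python
def is_hit_paper(total_citations: int,
                 cohort_top1_threshold: int,
                 topic_citations: dict[str, int],
                 topic_thresholds: dict[str, int]) -> bool:
    """A paper qualifies as a 'hit' if it meets ANY of the three criteria."""
    # Criterion 1: at least 500 total citations.
    if total_citations >= 500:
        return True
    # Criterion 2: at least 1.5x the minimum citation count needed to
    # enter the top 1% of the same subfield-and-year cohort.
    if total_citations >= 1.5 * cohort_top1_threshold:
        return True
    # Criterion 3: reaches the top citation threshold in at least one
    # of the paper's specific research topics.
    return any(topic_citations.get(topic, 0) >= threshold
               for topic, threshold in topic_thresholds.items())

# Example: 120 citations where the cohort's top-1% entry point is 70;
# 120 >= 1.5 * 70 = 105, so the paper qualifies via criterion 2.
print(is_hit_paper(120, 70, {}, {}))  # True
```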
Contrasting carbon cycle responses of the tropical continents to the 2015–2016 El Niño (2017). 323 citations. Junjie Liu, K. W. Bowman et al.
Author Peers
Peers are selected by citation overlap in the author's most active subfields.
This map shows the geographic impact of A. Eldering's research: the number of citations received from papers published by authors working in each country. You can also color the map by specialization, or compare the citations A. Eldering receives from each country with the number expected given that country's size and research output (a ratio larger than one means the country cites A. Eldering more than expected).
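The over-citation ratio described above is a simple observed-over-expected comparison. A minimal sketch under stated assumptions (the expected count is modeled as proportional to the country's share of global research output; function and parameter names are illustrative):

```python
def citation_ratio(observed_citations: int,
                   country_share_of_output: float,
                   total_citations_to_author: int) -> float:
    """Ratio of observed to expected citations from one country.

    Expected citations assume the country cites the author in proportion
    to its share of global research output. A ratio above 1.0 means the
    country cites the author more than its size would predict.
    """
    expected = country_share_of_output * total_citations_to_author
    return observed_citations / expected if expected else float("inf")

# A country producing 5% of world output accounts for 120 of the
# author's 1,000 citations: 120 / (0.05 * 1000) = 2.4.
print(citation_ratio(120, 0.05, 1000))  # 2.4
```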
This network shows the impact of papers produced by A. Eldering. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes show fields that tend to cite the papers produced by A. Eldering. The network helps show where A. Eldering may publish in the future.
Co-authorship network of A. Eldering's collaborators
This figure shows the co-authorship network connecting the top 25 collaborators of A. Eldering.
A scholar is included among the top collaborators of A. Eldering based on the total number of
citations received by their joint publications. Edge widths represent the number of papers two
authors have co-authored together, and node borders indicate the number of papers an author
published with A. Eldering. A. Eldering is excluded from the visualization to improve
readability, since they would be connected to every node in the network.
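The ranking and edge-width rules above could be computed from a paper list roughly as follows. This is a sketch on made-up data; Rankless's actual pipeline is not described on the page:

```python
from collections import Counter
from itertools import combinations

# Each paper: (set of author names, citation count). Illustrative data only.
papers = [
    ({"A. Eldering", "D. Crisp", "C. O'Dell"}, 300),
    ({"A. Eldering", "D. Crisp"}, 150),
    ({"D. Crisp", "C. O'Dell"}, 40),
]

focal = "A. Eldering"
edge_width = Counter()       # edge width: number of joint papers per coauthor pair
joint_citations = Counter()  # ranking key: citations on papers shared with the focal author

for authors, cites in papers:
    # Every unordered pair of coauthors on a paper contributes one joint paper.
    for pair in combinations(sorted(authors), 2):
        edge_width[pair] += 1
    # Citations on papers co-authored with the focal author accrue to each collaborator.
    if focal in authors:
        for coauthor in authors - {focal}:
            joint_citations[coauthor] += cites

# Top collaborators, ranked by citations on joint publications.
print(joint_citations.most_common())
```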
Kulawik, S. S., J. Worden, James McDuffie, et al. (2019). Reducing Regional Biases from OCO-2 Observations. AGU Fall Meeting Abstracts, 2019. 1 indexed citation.
8. Chapsky, Lars, et al. (2019). Key Differences in OCO-2 and OCO-3 Calibration. AGU Fall Meeting Abstracts, 2019. 1 indexed citation.
Eldering, A., Ralph R. Basilio, David Schimel, & Chris O'Dell. (2017). First results from Orbiting Carbon Observatory-2 (OCO-2) and prospects for OCO-3. EGU General Assembly Conference Abstracts, 10215. 1 indexed citation.
11. Osterman, G. B., B. Fisher, Debra Wunch, et al. (2015). OCO-2 Observation and Validation Overview: Observations, Data Modes, and Target Observations, Taken During the First 15 Months of Operations. AGU Fall Meeting Abstracts, 2015. 1 indexed citation.
Crisp, David, A. Eldering, & M. R. Gunson. (2014). Preliminary Results from the NASA Orbiting Carbon Observatory-2 (OCO-2). 2014 AGU Fall Meeting, 2014. 2 indexed citations.
14. Eldering, A. (2014). The Orbiting Carbon Observatory-3 (OCO-3) Mission: An Overview. 2014 AGU Fall Meeting. 2 indexed citations.
15. Schwandner, F. M., Charles E. Miller, Riley Duren, et al. (2014). Strategies for satellite-based monitoring of CO2 from distributed area and point sources. EGU General Assembly Conference Abstracts, 14477. 1 indexed citation.
16. Sander, Stanley P., Dmitriy Bekker, Jean-François L. Blavier, et al. (2012). Geostationary Fourier Transform Spectrometer (GeoFTS). AGU Fall Meeting Abstracts, 2012. 1 indexed citation.
Shephard, Mark W., Vivienne H. Payne, Karen Cady-Pereira, et al. (2008). Investigation of biases in the TES temperature retrievals. AGU Spring Meeting Abstracts, 2008. 1 indexed citation.
19. Zuffada, Cinzia, O. V. Kalashnikova, A. Eldering, et al. (2006). Characterization of the Global Angstrom Exponent and Aerosol Optical Depth from MISR Observations and IMPACT Model Predictions. AGU Fall Meeting Abstracts, 2006. 1 indexed citation.
20. Wilson, Brian, et al. (2005). GENESIS SciFlo: Scientific Knowledge Creation on the Grid Using a Semantically-Enabled Dataflow Execution Environment. Journal de Radiologie, 2005(7-8), 83–86. 7 indexed citations.
Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive
bibliographic database. While OpenAlex provides broad and valuable coverage of the global
research landscape, it—like all bibliographic datasets—has inherent limitations. These include
incomplete records, variations in author disambiguation, differences in journal indexing, and
delays in data updates. As a result, some metrics and network relationships displayed in
Rankless may not fully capture the entirety of a scholar's output or impact.