Countries citing papers authored by Connie U. Smith
This map shows the geographic impact of Connie U. Smith's research by counting the citations that come from papers whose authors work in each country. You can also color the map by specialization and compare the number of citations Connie U. Smith receives from a country with the number expected given that country's size and research output (a ratio greater than one means the country cites Connie U. Smith more than expected).
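The ratio described above is an observed-versus-expected comparison. The snippet below is a minimal, hypothetical sketch of how such a ratio could be computed; the function name, inputs, and numbers are illustrative assumptions, not Rankless's actual data or code.

```python
# Hypothetical sketch of the observed-vs-expected citation ratio described above.
# All names and figures are illustrative assumptions, not Rankless's actual method.

def citation_ratio(citations_from_country: int,
                   country_share_of_world_output: float,
                   total_citations_to_author: int) -> float:
    """Return observed / expected citations from one country.

    expected = the author's total citations multiplied by the share of world
    research output produced by that country; a ratio > 1 means the country
    cites the author more than its size alone would predict.
    """
    expected = total_citations_to_author * country_share_of_world_output
    return citations_from_country / expected if expected else float("nan")


# Example with made-up numbers: a country producing 5% of world output
# accounts for 30 of an author's 400 indexed citations.
print(round(citation_ratio(30, 0.05, 400), 2))  # 1.5 -> cites more than expected
```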
This network shows the impact of papers produced by Connie U. Smith. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes show fields that tend to cite the papers produced by Connie U. Smith. The network helps show where Connie U. Smith may publish in the future.
Co-authorship network of Connie U. Smith's collaborators
This figure shows the co-authorship network connecting the top 25 collaborators of Connie U. Smith. A scholar is included among the top collaborators based on the total number of citations received by their joint publications with Connie U. Smith. Edge widths represent the number of papers two authors have co-authored together, and node borders signify the number of papers an author published with Connie U. Smith. Connie U. Smith is excluded from the visualization to improve readability, since they are connected to all nodes in the network. A sketch of how such a network could be assembled follows below.
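The following is a minimal, hypothetical sketch of how a co-authorship network like the one described above could be built from a list of papers; the paper records are invented placeholders, not Connie U. Smith's actual publication data, and the construction is an assumption rather than Rankless's implementation.

```python
# Hypothetical co-authorship network construction; the records below are
# invented placeholders, not actual publication data.
from collections import Counter, defaultdict
from itertools import combinations

# Each paper: (list of author names, number of indexed citations)
papers = [
    (["Connie U. Smith", "Lloyd G. Williams"], 17),
    (["Connie U. Smith", "James C. Browne"], 6),
    (["Connie U. Smith", "Lloyd G. Williams", "Author X"], 6),
]

edge_weight = Counter()             # edge width: papers co-authored by a pair
joint_citations = defaultdict(int)  # ranking key: citations of joint papers

for authors, citations in papers:
    # One edge per unordered pair of co-authors on this paper.
    for a, b in combinations(sorted(set(authors)), 2):
        edge_weight[(a, b)] += 1
    # Credit every collaborator with the citations of the joint paper.
    for author in authors:
        if author != "Connie U. Smith":
            joint_citations[author] += citations

# Top collaborators ranked by total citations of their joint publications.
top = sorted(joint_citations.items(), key=lambda kv: kv[1], reverse=True)[:25]
print(top)
print(edge_weight)
```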
All Works
20 of 20 papers shown
1. Smith, Connie U. & Lloyd G. Williams. (2006). Five steps to establish software performance engineering. Int. CMG Conference. 507–516. 4 indexed citations.
2. Williams, Lloyd G. & Connie U. Smith. (2005). QSEM: Quantitative Scalability Evaluation Method. Int. CMG Conference. 341–352. 6 indexed citations.
3. Smith, Connie U., et al. (2005). A Performance Model Web Service. Int. CMG Conference. 447–456. 6 indexed citations.
Smith, Connie U. & Lloyd G. Williams. (2003). More New Software Antipatterns: Even More Ways to Shoot Yourself in the Foot. Int. CMG Conference. 717–725. 44 indexed citations.
6. Williams, Lloyd G. & Connie U. Smith. (2003). Making the Business Case for Software Performance Engineering. Int. CMG Conference. 349–358. 17 indexed citations.
7. Smith, Connie U. & Lloyd G. Williams. (2003). Best Practices for Software Performance Engineering. Int. CMG Conference. 83–92. 17 indexed citations.
8. Smith, Connie U. & Lloyd G. Williams. (2002). Introduction to Software Performance Engineering. Int. CMG Conference. 853–864. 8 indexed citations.
9. Smith, Connie U. & Lloyd G. Williams. (2002). New Software Performance AntiPatterns: More Ways to Shoot Yourself in the Foot. Int. CMG Conference. 667–674. 34 indexed citations.
Smith, Connie U. (1999). SPE Models for Multi-Tier Client/Server Interactions with MQSeries and Other Middleware. Int. CMG Conference. 312–321. 1 indexed citation.
12. Smith, Connie U. & Lloyd G. Williams. (1998). Performance Engineering Models of CORBA-based Distributed-Object Systems. Int. CMG Conference. 886–898. 16 indexed citations.
13. Smith, Connie U. & Lloyd G. Williams. (1997). A Basic Performance Model Interchange Format. Int. CMG Conference. 550–561.
14. Smith, Connie U. & Lloyd G. Williams. (1997). Performance Engineering Evaluation of Object Oriented Systems With SPE-ED. Int. CMG Conference. 694–705. 8 indexed citations.
15. Smith, Connie U. (1996). Designing High-Performance Distributed Applications Using Software Performance Engineering: A Tutorial. Int. CMG Conference. 498–507. 6 indexed citations.
Smith, Connie U. (1981). Increasing Information Systems Productivity by Software Performance Engineering. Int. CMG Conference. 5–14. 8 indexed citations.
20. Smith, Connie U. & James C. Browne. (1979). Modeling Software Systems for Performance Predictions. Int. CMG Conference. 321–341. 6 indexed citations.
Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it has, like all bibliographic datasets, inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.