Philipp Leitner

3.6k total citations · 1 hit paper
100 papers, 2.2k citations indexed

About

Philipp Leitner is a scholar working on Information Systems, Computer Networks and Communications, and Software. According to data from OpenAlex, Philipp Leitner has authored 100 papers receiving a total of 2.2k indexed citations (citations by other indexed papers that have themselves been cited), including 80 papers in Information Systems, 78 in Computer Networks and Communications, and 22 in Software. Recurrent topics in Philipp Leitner's work include Software System Performance and Reliability (57 papers), Cloud Computing and Resource Management (38 papers), and Software Engineering Research (32 papers); papers focused on these same topics also cite Philipp Leitner most often. Philipp Leitner collaborates with scholars based in Sweden, Switzerland, and Austria. Philipp Leitner's co-authors include Jürgen Cito, Stefan Schulte, Harald C. Gall, Olena Skarlat, Joel Scheuner, Michael Borkowski, Waldemar Hummer, Gerald Schermann, Schahram Dustdar, and Matteo Nardelli. Philipp Leitner has published in prestigious journals such as IEEE Transactions on Software Engineering, Computer, and IEEE Transactions on Systems, Man, and Cybernetics: Systems.
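The "indexed citations" metric defined above (citations by papers that have themselves been cited) can be sketched in a few lines. This is an illustrative interpretation, not Rankless's actual implementation; the `indexed_citations` helper and the citation graph below are hypothetical:

```python
# Sketch of the "indexed citations" metric: a citation counts only if
# the citing paper has itself received at least one citation.
# The citation graph here is hypothetical illustration data.

def indexed_citations(author_papers, cites):
    """cites maps each paper to the set of papers it cites."""
    # Papers that have received at least one citation themselves.
    cited_at_least_once = {p for refs in cites.values() for p in refs}
    count = 0
    for citing, refs in cites.items():
        if citing not in cited_at_least_once:
            continue  # the citing paper is uncited, so it does not count
        count += sum(1 for p in refs if p in author_papers)
    return count

cites = {
    "A": {"X"},       # A cites the author's paper X; A is cited by B
    "B": {"X", "A"},  # B cites X, but nothing cites B
    "C": {"X"},       # C cites X, but nothing cites C
}
print(indexed_citations({"X"}, cites))  # prints 1 (only A's citation counts)
```

Under this reading, raw citation counts (3.6k above) exceed indexed citations (2.2k) because citations from uncited papers are filtered out.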

In The Last Decade

Philipp Leitner

96 papers receiving 2.1k citations

Hit Papers

Hit paper: "Optimized IoT service placement in the fog" (2017). [Chart: citations per year, 2017–2026.]

Peers (Enhanced Table)

Peers ranked by citation overlap · career bars show citations by stage (early→late) · ratios are relative to Philipp Leitner

Name | Country | h | Career citations by stage (early→late), ratio vs. Leitner | Papers | Cites
Philipp Leitner | Sweden | 25 | 1.6k · 1.6k · 404 · 355 · 153 | 100 | 2.2k
Elisabetta Di Nitto | Italy | 27 | 1.6k (1.0×) · 1.5k (1.0×) · 781 (1.9×) · 222 (0.6×) · 387 (2.5×) | 138 | 2.5k
Wei‐Tek Tsai | China | 23 | 1.1k (0.6×) · 812 (0.5×) · 352 (0.9×) · 312 (0.9×) · 139 (0.9×) | 85 | 1.5k
Rami Bahsoon | United Kingdom | 23 | 1.1k (0.7×) · 928 (0.6×) · 558 (1.4×) · 185 (0.5×) · 95 (0.6×) | 135 | 1.7k
Marin Litoiu | Canada | 26 | 1.6k (1.0×) · 1.7k (1.0×) · 787 (1.9×) · 195 (0.5×) · 173 (1.1×) | 162 | 2.3k
Xiaoying Bai | China | 24 | 1.3k (0.8×) · 1.1k (0.7×) · 383 (0.9×) · 611 (1.7×) · 174 (1.1×) | 101 | 1.7k
Fuyuki Ishikawa | Japan | 20 | 862 (0.5×) · 646 (0.4×) · 386 (1.0×) · 225 (0.6×) · 133 (0.9×) | 113 | 1.4k
Henry Muccini | Italy | 23 | 993 (0.6×) · 716 (0.4×) · 941 (2.3×) · 754 (2.1×) · 146 (1.0×) | 144 | 1.9k
Shinichi Honiden | Japan | 22 | 914 (0.6×) · 812 (0.5×) · 683 (1.7×) · 179 (0.5×) · 109 (0.7×) | 211 | 1.7k
Jidong Ge | China | 21 | 952 (0.6×) · 824 (0.5×) · 344 (0.9×) · 156 (0.4×) · 91 (0.6×) | 98 | 1.4k
Yingnong Dang | United States | 28 | 1.4k (0.9×) · 1.9k (1.1×) · 915 (2.3×) · 719 (2.0×) · 57 (0.4×) | 58 | 2.6k

Countries citing papers authored by Philipp Leitner


This map shows the geographic impact of Philipp Leitner's research: the number of citations from papers whose authors work in each country. The map can also be colored by specialization, comparing the citations Philipp Leitner received from each country with the number expected given that country's size and research output (values larger than one mean the country cites Philipp Leitner more than expected).
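The over/under-citation comparison described above can be read as a simple ratio. A minimal sketch, assuming expected citations are proportional to a country's share of world research output (the `citation_ratio` helper and all numbers are hypothetical, not the site's actual formula):

```python
# Observed citations from a country, divided by the citations that
# country would contribute if citations were spread in proportion to
# its share of world research output. All numbers are hypothetical.

def citation_ratio(observed, country_output_share, total_citations):
    expected = country_output_share * total_citations
    return observed / expected

# A country producing 5% of world output, contributing 200 of 2200 citations:
ratio = citation_ratio(observed=200, country_output_share=0.05,
                       total_citations=2200)
print(round(ratio, 2))  # prints 1.82 — this country cites more than expected
```

A value above 1 means the country over-cites the scholar relative to its output; below 1, it under-cites.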

Fields of papers citing papers by Philipp Leitner

Field groups: Physical Sciences · Health Sciences · Life Sciences · Social Sciences

This network shows the impact of papers produced by Philipp Leitner. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes show fields that tend to cite the papers produced by Philipp Leitner. The network helps show where Philipp Leitner may publish in the future.

Co-authorship network of co-authors of Philipp Leitner

This figure shows the co-authorship network connecting the top 25 collaborators of Philipp Leitner. A scholar is included among the top collaborators based on the total number of citations received by their joint publications with Philipp Leitner. Edge widths represent the number of papers two authors have co-authored together, and node borders signify the number of papers an author has published with Philipp Leitner. Philipp Leitner is excluded from the visualization to improve readability, since he would be connected to every node in the network.
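The selection and weighting rules described above can be sketched in a few lines. This is an illustrative reconstruction, not the site's actual pipeline; `top_collaborators`, `edge_weights`, and the sample data below are hypothetical (a real pipeline would query OpenAlex):

```python
# Pick top collaborators by citations on joint papers, then weight
# edges by the number of papers each pair co-authored.
from collections import Counter
from itertools import combinations

def top_collaborators(papers, focal, k=25):
    """papers: list of (author_set, citations) tuples."""
    joint_cites = Counter()
    for authors, citations in papers:
        if focal in authors:
            for a in authors:
                if a != focal:
                    joint_cites[a] += citations
    return [a for a, _ in joint_cites.most_common(k)]

def edge_weights(papers, nodes):
    """Edge width = number of papers a pair of nodes co-authored."""
    weights = Counter()
    for authors, _ in papers:
        present = sorted(set(authors) & set(nodes))
        for pair in combinations(present, 2):
            weights[pair] += 1
    return dict(weights)

# Hypothetical illustration data:
papers = [
    ({"Leitner", "Cito", "Schulte"}, 100),
    ({"Leitner", "Cito"}, 50),
    ({"Schulte", "Gall"}, 10),
]
print(top_collaborators(papers, "Leitner", k=2))   # prints ['Cito', 'Schulte']
print(edge_weights(papers, ["Cito", "Schulte"]))   # prints {('Cito', 'Schulte'): 1}
```

Passing only the collaborators (not the focal author) to `edge_weights` mirrors the figure's choice to drop Philipp Leitner from the graph, since he would otherwise connect to every node.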

All Works

20 of 20 papers shown
1.
Neto, Francisco Gomes de Oliveira, et al. (2025). The Impact of Prompt Programming on Function-Level Code Generation. IEEE Transactions on Software Engineering. 51(8). 2381–2395. 2 indexed citations
2.
Hebig, Regina, et al. (2024). The Roles, Responsibilities, and Skills of Engineers in the Era of Microservices-Based Architectures. Chalmers Research (Chalmers University of Technology). 13–23.
3.
Neto, Francisco Gomes de Oliveira, et al. (2024). The Impact of Compiler Warnings on Code Quality in C++ Projects. Chalmers Research (Chalmers University of Technology). 270–279.
4.
Zhang, Huaifeng, et al. (2024). Machine Learning Systems are Bloated and Vulnerable. Proceedings of the ACM on Measurement and Analysis of Computing Systems. 8(1). 1–30. 1 indexed citation
5.
Neto, Francisco Gomes de Oliveira, et al. (2024). From Human-to-Human to Human-to-Bot Conversations in Software Engineering. Chalmers Research (Chalmers University of Technology). 38–44.
6.
Zhang, Huaifeng, et al. (2024). Machine Learning Systems are Bloated and Vulnerable. Chalmers Research (Chalmers University of Technology). 37–38. 2 indexed citations
7.
Leitner, Philipp, et al. (2024). Beyond Code Generation: An Observational Study of ChatGPT Usage in Software Engineering Practice. Proceedings of the ACM on Software Engineering. 1(FSE). 1819–1840. 25 indexed citations
8.
Leitner, Philipp, et al. (2022). Automated Generation and Evaluation of JMH Microbenchmark Suites From Unit Tests. IEEE Transactions on Software Engineering. 49(4). 1704–1725. 7 indexed citations
9.
Leitner, Philipp, et al. (2022). Using Microbenchmark Suites to Detect Application Performance Changes. IEEE Transactions on Cloud Computing. 11(3). 2575–2590. 6 indexed citations
10.
Leitner, Philipp, et al. (2021). Using application benchmark call graphs to quantify and improve the practical relevance of microbenchmark suites. PeerJ Computer Science. 7. e548. 5 indexed citations
11.
Gall, Harald C., et al. (2021). Applying test case prioritization to software microbenchmarks. Empirical Software Engineering. 26(6). 133. 9 indexed citations
12.
Gall, Harald C., et al. (2020). Dynamically reconfiguring software microbenchmarks: reducing execution time without sacrificing result quality. Zurich Open Repository and Archive (University of Zurich). 989–1001. 19 indexed citations
13.
Costa, Diego Elias, Cor‐Paul Bezemer, Philipp Leitner, & Artur Andrzejak. (2019). What's Wrong with My Benchmark Results? Studying Bad Practices in JMH Benchmarks. IEEE Transactions on Software Engineering. 47(7). 1452–1467. 29 indexed citations
14.
Zdun, Uwe, Erik Wittern, & Philipp Leitner. (2019). Emerging Trends, Challenges, and Experiences in DevOps and Microservice APIs. IEEE Software. 37(1). 87–91. 17 indexed citations
15.
Scheuner, Joel, et al. (2019). Software microbenchmarking in the cloud. How bad is it really? Empirical Software Engineering. 24(4). 2469–2508. 38 indexed citations
16.
Varghese, Blesson, Philipp Leitner, Suprio Ray, et al. (2019). Cloud Futurology. Computer. 52(9). 68–77. 22 indexed citations
17.
Skarlat, Olena, Matteo Nardelli, Stefan Schulte, Michael Borkowski, & Philipp Leitner. (2017). Optimized IoT service placement in the fog. Service Oriented Computing and Applications. 11(4). 427–443. 253 indexed citations
18.
Vassallo, Carmine, Gerald Schermann, Fiorella Zampetti, et al. (2017). A Tale of CI Build Failures: An Open Source and a Financial Organization Perspective. Zurich Open Repository and Archive (University of Zurich). 183–193. 60 indexed citations
19.
Cito, Jürgen, et al. (2017). Extraction of Microservices from Monolithic Software Architectures. 524–531. 153 indexed citations
20.
Marin, Ricardo, et al. (2006). A Distributed Policy Based Solution in a Fault Management Scenario. Global Communications Conference. 1 indexed citation

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.


Rankless by CCL
2026