This map shows the geographic impact of Martin Popel's research: the number of citations coming from papers published by authors working in each country. You can also color the map by specialization and compare the number of citations Martin Popel received from each country with the number expected given that country's size and research output (values larger than one mean the country cites Martin Popel more than expected).
This network shows the impact of papers produced by Martin Popel. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes show fields that tend to cite the papers produced by Martin Popel. The network helps show where Martin Popel may publish in the future.
Co-authorship network of Martin Popel's collaborators
This figure shows the co-authorship network connecting the top 25 collaborators of Martin Popel. A scholar is included among the top collaborators based on the total number of citations received by their joint publications with Martin Popel. Edge widths represent the number of papers two authors have co-authored together, and node borders signify the number of papers an author published with Martin Popel. Martin Popel is excluded from the visualization to improve readability, since their node would be connected to every other node in the network.
Zeman, Daniel, Jan Hajič, Martin Popel, et al. (2018). CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. 1–21. 96 indexed citations.
Libovický, Jindřich, Rudolf Rosa, Jindřich Helcl, & Martin Popel. (2018). Solving Three Czech NLP Tasks with End-to-end Neural Models. 138–143. 4 indexed citations.
Branco, António, Jan Hajič, Martin Popel, et al. (2016). QTLeap WSD/NED Corpora: Semantic Annotation of Parallel Corpora in Six Languages. Language Resources and Evaluation. 3023–3030. 7 indexed citations.
Avramidis, Eleftherios, et al. (2016). Tools and Guidelines for Principled Machine Translation Development. Language Resources and Evaluation. 1877–1882. 4 indexed citations.
Rosa, Rudolf, J. Mašek, David Mareček, et al. (2014). HamleDT 2.0: Thirty Dependency Treebanks Stanfordized. Language Resources and Evaluation. 2334–2341. 24 indexed citations.
Popel, Martin, et al. (2013). Coordination Structures in Dependency Treebanks. Meeting of the Association for Computational Linguistics. 517–527. 22 indexed citations.
Popel, Martin, et al. (2013). PhraseFix: Statistical Post-Editing of TectoMT. Workshop on Statistical Machine Translation. 141–147. 5 indexed citations.
Žabokrtský, Zdeněk, et al. (2012). Formemes in English-Czech Deep Syntactic MT. Workshop on Statistical Machine Translation. 267–274. 9 indexed citations.
Bojar, Ondřej, Zdeněk Žabokrtský, Ondřej Dušek, et al. (2012). The Joy of Parallelism with CzEng 1.0. Language Resources and Evaluation. 3921–3928. 33 indexed citations.
Rosa, Rudolf, et al. (2012). Using Parallel Features in Parsing of Machine-Translated Sentences for Correction of Grammatical Errors. Meeting of the Association for Computational Linguistics. 39–48. 9 indexed citations.
Zeman, Daniel, David Mareček, Martin Popel, et al. (2012). HamleDT: To Parse or Not to Parse? Language Resources and Evaluation. 2735–2741. 32 indexed citations.
Popel, Martin, et al. (2011). Influence of Parser Choice on Dependency-Based MT. Workshop on Statistical Machine Translation. 433–439. 6 indexed citations.
Žabokrtský, Zdeněk, Martin Popel, & David Mareček. (2010). Maximum Entropy Translation Model in Dependency-Based MT Framework. Workshop on Statistical Machine Translation. 201–206. 16 indexed citations.
Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive
bibliographic database. While OpenAlex provides broad and valuable coverage of the global
research landscape, it—like all bibliographic datasets—has inherent limitations. These include
incomplete records, variations in author disambiguation, differences in journal indexing, and
delays in data updates. As a result, some metrics and network relationships displayed in
Rankless may not fully capture the entirety of a scholar's output or impact.