Dima Damen

6.5k total citations · 2 hit papers
89 papers, 2.5k citations indexed

About

Dima Damen is a scholar working on Computer Vision and Pattern Recognition, Artificial Intelligence, and Biomedical Engineering. According to data from OpenAlex, Dima Damen has authored 89 papers receiving a total of 2.5k indexed citations (citations from other indexed papers that have themselves been cited), including 70 papers in Computer Vision and Pattern Recognition, 24 in Artificial Intelligence, and 11 in Biomedical Engineering. Recurrent topics in Dima Damen's work include Human Pose and Action Recognition (31 papers), Video Surveillance and Tracking Methods (17 papers), and Anomaly Detection Techniques and Applications (16 papers). Dima Damen collaborates with scholars based in the United Kingdom, Brazil, and France. Dima Damen's co-authors include David Hogg, Walterio Mayol‐Cuevas, Majid Mirmehdi, Michael Wray, Hazel Doughty, Massimo Camplani, Toby Perrett, Sion Hannuna, Adeline Paiement, and Lili Tao, and Damen has published in prestigious journals such as PLoS ONE, IEEE Transactions on Pattern Analysis and Machine Intelligence, and Expert Systems with Applications.

In The Last Decade

Dima Damen

83 papers receiving 2.4k citations

Hit Papers

[Chart: hit papers over time, 2007–2026, with citation counts up to 750. Notable hit: British Machine Vision Conference (BMVC) 2007.]

Peers — A (Enhanced Table)

Peers selected by citation overlap · career columns show citations by career stage (early→late) · × ratios are relative to the reference scholar (Dima Damen)

Name | Country | h | Citations by career stage, early→late (× vs Damen) | Papers | Cites
Dima Damen | United Kingdom | 22 | 1.9k · 692 · 323 · 214 · 185 | 89 | 2.5k
Luigi Cinque | Italy | 26 | 1.3k (0.7×) · 571 (0.8×) · 181 (0.6×) · 229 (1.1×) · 169 (0.9×) | 159 | 2.3k
Yutaka Satoh | Japan | 17 | 1.9k (1.0×) · 889 (1.3×) · 334 (1.0×) · 257 (1.2×) · 123 (0.7×) | 95 | 2.6k
Feng Zheng | China | 31 | 2.7k (1.4×) · 965 (1.4×) · 524 (1.6×) · 142 (0.7×) · 247 (1.3×) | 133 | 3.6k
Xiantong Zhen | China | 30 | 2.3k (1.2×) · 1.2k (1.8×) · 401 (1.2×) · 206 (1.0×) · 133 (0.7×) | 116 | 3.0k
Hamed Pirsiavash | United States | 21 | 2.3k (1.2×) · 1.0k (1.5×) · 175 (0.5×) · 159 (0.7×) · 269 (1.5×) | 54 | 2.8k
Manohar Paluri | United States | 14 | 2.7k (1.4×) · 1.3k (1.8×) · 537 (1.7×) · 297 (1.4×) · 164 (0.9×) | 16 | 3.4k
Syed Afaq Ali Shah | Australia | 20 | 961 (0.5×) · 480 (0.7×) · 230 (0.7×) · 225 (1.1×) · 141 (0.8×) | 74 | 1.8k
Yanghao Li | China | 17 | 2.5k (1.3×) · 1.2k (1.7×) · 508 (1.6×) · 269 (1.3×) · 188 (1.0×) | 36 | 3.2k
Du Q. Huynh | Australia | 22 | 1.3k (0.7×) · 482 (0.7×) · 271 (0.8×) · 152 (0.7×) · 281 (1.5×) | 79 | 2.0k

Countries citing papers authored by Dima Damen


This map shows the geographic impact of Dima Damen's research. It shows the number of citations coming from papers published by authors working in each country. You can also color the map by specialization and compare the number of citations received by Dima Damen with the expected number of citations based on a country's size and research output (numbers larger than one mean the country cites Dima Damen more than expected).
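The over/under-citation ratio described above can be sketched as observed citations divided by expected citations, where the expected count allocates the scholar's total citations in proportion to a country's share of world research output. A minimal sketch (the function name and the numbers in the example are hypothetical, not Rankless's actual implementation):

```python
def citation_ratio(observed_citations, country_output_share, total_citations):
    """Ratio of observed to expected citations for one country.

    expected = total citations to the scholar, scaled by the country's
    share of world research output; a ratio > 1 means the country cites
    the scholar more than its size and output alone would predict.
    """
    expected = total_citations * country_output_share
    return observed_citations / expected

# Hypothetical example: a country producing 5% of world output
# accounts for 200 of 2,500 citations to the scholar.
ratio = citation_ratio(200, 0.05, 2500)  # 200 / 125 = 1.6, i.e. above expected
```

In this toy case the country contributes 8% of the citations while producing 5% of the output, so the map would show a value above one.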

Fields of papers citing papers by Dima Damen

Legend: Physical Sciences · Health Sciences · Life Sciences · Social Sciences

This network shows the impact of papers produced by Dima Damen. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes show fields that tend to cite the papers produced by Dima Damen. The network helps show where Dima Damen may publish in the future.

Co-authorship network of co-authors of Dima Damen

This figure shows the co-authorship network connecting the top 25 collaborators of Dima Damen. A scholar is included among the top collaborators of Dima Damen based on the total number of citations received by their joint publications. Widths of edges represent the number of papers authors have co-authored together. Node borders signify the number of papers an author published with Dima Damen. Dima Damen is excluded from the visualization to improve readability, since they are connected to all nodes in the network.
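Under the description above, the figure amounts to a weighted graph: nodes are the top collaborators, edge widths count joint papers between a pair, node borders count papers with the profiled scholar, and the scholar themselves is omitted. A minimal sketch of that construction, using hypothetical toy data and function names of my own choosing:

```python
from collections import Counter
from itertools import combinations

def build_coauthor_network(papers, hero):
    """Build the co-authorship network described above.

    papers: list of author-name lists, one per paper.
    hero:   the profiled scholar, excluded from the graph since
            they would be connected to every node.
    Returns (edge_weights, border_counts): joint-paper counts per
    author pair, and per-author counts of papers with the hero.
    """
    edge_weights = Counter()
    border_counts = Counter()
    for authors in papers:
        others = sorted(a for a in authors if a != hero)
        if len(others) < len(authors):        # hero is on this paper
            border_counts.update(others)      # node border = papers with hero
        for pair in combinations(others, 2):  # edge width = joint papers
            edge_weights[pair] += 1
    return edge_weights, border_counts

# Hypothetical toy data
papers = [["Damen", "Wray", "Doughty"], ["Damen", "Wray"], ["Wray", "Doughty"]]
edges, borders = build_coauthor_network(papers, "Damen")
# edges[("Doughty", "Wray")] == 2; borders["Wray"] == 2
```

Selecting the top 25 collaborators by citations on joint publications, as the figure does, would be a ranking step over `border_counts`-style data before building the graph.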

All Works

20 of 20 papers shown
1.
Perrett, Toby, K. Liu, Siddhant Bansal, et al. (2025). HD-EPIC: A Highly-Detailed Egocentric Video Dataset. Bristol Research (University of Bristol). 23901–23913. 2 indexed citations
2.
Huh, Jaesung, et al. (2025). EPIC-SOUNDS: A Large-Scale Dataset of Actions That Sound. IEEE Transactions on Pattern Analysis and Machine Intelligence. 47(11). 9953–9965.
3.
Berg, Jacob, et al. (2024). Rank2Reward: Learning Shaped Reward Functions from Passive Video. 2806–2813.
4.
Huh, Jaesung, et al. (2024). TIM: A Time Interval Machine for Audio-Visual Action Recognition. 18153–18163. 3 indexed citations
5.
Carreira, João, Michael C. King, Viorica Pătrăucean, et al. (2024). Learning from One Continuous Video Stream. 28751–28761. 1 indexed citation
6.
Huh, Jaesung, et al. (2023). Epic-Sounds: A Large-Scale Dataset of Actions that Sound. 1–5. 17 indexed citations
7.
Stergiou, Alexandros & Dima Damen. (2023). The Wisdom of Crowds: Temporal Progressive Attention for Early Action Prediction. Bristol Research (University of Bristol). 14709–14719. 7 indexed citations
8.
Perrett, Toby, Alessandro Masullo, Dima Damen, et al. (2022). Personalized Energy Expenditure Estimation: Visual Sensing Approach With Deep Learning. JMIR Formative Research. 6(9). e33606–e33606. 3 indexed citations
9.
Masullo, Alessandro, Tilo Burghardt, Dima Damen, Toby Perrett, & Majid Mirmehdi. (2020). Person Re-ID by Fusion of Video Silhouettes and Wearable Signals for Home Monitoring Applications. Sensors. 20(9). 2576–2576. 7 indexed citations
10.
Sullivan, Brian, et al. (2019). EPIC-Tent: An Egocentric Video Dataset for Camping Tent Assembly. Bristol Research (University of Bristol). 4461–4469. 15 indexed citations
11.
Wray, Michael, Diane Larlus, Gabriela Csurka, & Dima Damen. (2019). arXiv (Cornell University). 90 indexed citations
12.
Wray, Michael & Dima Damen. (2019). Learning Visual Actions Using Multiple Verb-Only Labels. Bristol Research (University of Bristol). 176. 3 indexed citations
13.
Doughty, Hazel, Dima Damen, & Walterio Mayol‐Cuevas. (2017). Who's Better, Who's Best: Skill Determination in Video using Deep Ranking. arXiv (Cornell University). 4 indexed citations
14.
Layne, Ryan, Sion Hannuna, Massimo Camplani, et al. (2017). A Dataset for Persistent Multi-target Multi-camera Tracking in RGB-D. Edinburgh Research Explorer (University of Edinburgh). 1462–1470. 6 indexed citations
15.
Camplani, Massimo, Adeline Paiement, Majid Mirmehdi, et al. (2016). Multiple human tracking in RGB‐depth data: a survey. IET Computer Vision. 11(4). 265–285. 37 indexed citations
16.
Tao, Lili, Tilo Burghardt, Sion Hannuna, et al. (2015). A comparative home activity monitoring study using visual and inertial sensors. Bristol Research (University of Bristol). 644–647. 18 indexed citations
17.
Bleser, Gabriele, Dima Damen, Ardhendu Behera, et al. (2015). Cognitive Learning, Monitoring and Assistance of Industrial Workflows Using Egocentric Sensor Networks. PLoS ONE. 10(6). e0127769–e0127769. 33 indexed citations
18.
Damen, Dima, et al. (2014). You-Do, I-Learn: Discovering Task Relevant Objects and their Modes of Interaction from Multi-User Egocentric Video. Bristol Research (University of Bristol). 40 indexed citations
19.
Damen, Dima, Andrew P. Gee, Walterio Mayol‐Cuevas, & Andrew Calway. (2011). IEEE IROS Workshop on Active Semantic Perception and Object Search in the Real World (ASP-AVS-11). 1 indexed citation
20.
Damen, Dima & David Hogg. (2011). Detecting Carried Objects from Sequences of Walking Pedestrians. IEEE Transactions on Pattern Analysis and Machine Intelligence. 34(6). 1056–1067. 24 indexed citations

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.

Explore authors with similar magnitude of impact

Rankless by CCL
2026