Dongyoon Wee

885 total citations
10 papers, 89 citations indexed

About

Dongyoon Wee is a scholar working on Computer Vision and Pattern Recognition, Artificial Intelligence, and Biomedical Engineering. According to data from OpenAlex, Dongyoon Wee has authored 10 papers receiving a total of 89 indexed citations (citations from other indexed papers that have themselves been cited), including 9 papers in Computer Vision and Pattern Recognition, 5 in Artificial Intelligence, and 3 in Biomedical Engineering. Recurrent topics in Dongyoon Wee's work include Human Pose and Action Recognition (6 papers), Video Surveillance and Tracking Methods (5 papers), and Anomaly Detection Techniques and Applications (4 papers). Dongyoon Wee is often cited by papers focused on Human Pose and Action Recognition (6 papers), Video Surveillance and Tracking Methods (5 papers), and Anomaly Detection Techniques and Applications (4 papers). Dongyoon Wee collaborates with scholars based in South Korea, Canada, and the United States; co-authors include Dit‐Yan Yeung, Myunggu Kang, Soonmin Bae, Jin-Hyung Kim, Junmo Kim, Pilhyeon Lee, Hyeran Byun, Taeoh Kim, Inwoong Lee, and Doyoung Kim. Dongyoon Wee has published in venues such as Sensors, IEEE Robotics and Automation Letters, and the 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).

In The Last Decade

Dongyoon Wee: 10 papers receiving 89 citations

Peers (Enhanced Table)

Peers are selected by citation overlap with Dongyoon Wee, who serves as the reference row. The career column lists citations by career stage (early → late); for each peer, every stage value also shows its ratio to Dongyoon Wee's value at that stage, e.g. 100 early-stage citations against the reference value of 75 reads as 1.3×.

| Name | Country | h | Career citations, early → late (ratio vs. reference) | Papers | Cites |
| --- | --- | --- | --- | --- | --- |
| Dongyoon Wee (reference) | South Korea | 4 | 75, 30, 12, 10, 7 | 10 | 89 |
| AJ Piergiovanni | United States | 5 | 100 (1.3×), 72 (2.4×), 14 (1.2×), 3 (0.3×), 7 (1.0×) | 13 | 127 |
| Kenji Okuma | Canada | 3 | 119 (1.6×), 51 (1.7×), 13 (1.1×), 22 (2.2×), 6 (0.9×) | 5 | 133 |
| Dariusz Frejlichowski | Poland | 5 | 64 (0.9×), 22 (0.7×), 5 (0.4×), 5 (0.5×), 3 (0.4×) | 26 | 88 |
| Kangkai Zhang | China | 5 | 83 (1.1×), 41 (1.4×), 18 (1.5×), 8 (0.8×), 3 (0.4×) | 6 | 124 |
| Jan Ernst | United States | 5 | 100 (1.3×), 35 (1.2×), 9 (0.8×), 4 (0.4×), 5 (0.7×) | 6 | 138 |
| Yifu Zhang | United States | 3 | 105 (1.4×), 13 (0.4×), 18 (1.5×), 17 (1.7×), 3 (0.4×) | 5 | 113 |
| Yuan-Ting Hu | United States | 6 | 104 (1.4×), 18 (0.6×), 5 (0.4×), 31 (3.1×), 11 (1.6×) | 13 | 122 |
| Jianbing Shen | China | 3 | 108 (1.4×), 52 (1.7×), 7 (0.6×), 3 (0.3×), 6 (0.9×) | 6 | 127 |
| Shenyuan Gao | Hong Kong | 2 | 99 (1.3×), 21 (0.7×), 4 (0.3×), 32 (3.2×), 9 (1.3×) | 3 | 135 |
| Igor Barros Barbosa | Norway | 4 | 127 (1.7×), 19 (0.6×), 47 (3.9×), 6 (0.6×), 2 (0.3×) | 7 | 149 |

Countries citing papers authored by Dongyoon Wee


This map shows the geographic impact of Dongyoon Wee's research: the number of citations coming from papers whose authors work in each country. The map can also be colored by specialization, which compares the citations Dongyoon Wee receives from a country with the number expected given that country's size and research output (values larger than one mean the country cites Dongyoon Wee more than expected).
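
As a rough illustration of what such a normalization might look like (Rankless's exact formula is not given here, and the country figures below are made-up placeholders), one simple version divides a country's observed citations by the share it would contribute if citations followed global research output:

```python
# Illustrative sketch of a per-country "citations vs. expected" ratio.
# Assumption: expected citations are proportional to each country's share of
# global research output; all numbers below are made-up placeholder values.

observed = {"South Korea": 34, "United States": 21, "China": 12}            # citations to the author
output_share = {"South Korea": 0.04, "United States": 0.25, "China": 0.22}  # share of global output

total_observed = sum(observed.values())

for country, cites in observed.items():
    expected = total_observed * output_share[country]
    ratio = cites / expected if expected else float("nan")
    print(f"{country}: {cites} observed vs {expected:.1f} expected -> {ratio:.2f}x")
```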

Fields of papers citing papers by Dongyoon Wee

Field groups: Physical Sciences · Health Sciences · Life Sciences · Social Sciences

This network shows where Dongyoon Wee's papers have impact. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes mark fields that tend to cite Dongyoon Wee's papers. The network also hints at where Dongyoon Wee may publish in the future.

Co-authorship network of co-authors of Dongyoon Wee

This figure shows the co-authorship network connecting the top 25 collaborators of Dongyoon Wee. A scholar is counted among the top collaborators based on the total number of citations received by their joint publications with Dongyoon Wee. Edge widths represent the number of papers two authors have co-authored together, and node borders indicate the number of papers an author has published with Dongyoon Wee. Dongyoon Wee is excluded from the visualization to improve readability, since every node in the network would otherwise connect to them.
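
A minimal sketch of how such a network could be assembled from per-paper author lists, assuming each paper record carries its author list and citation count (the records and field names below are hypothetical placeholders, not Rankless's actual pipeline):

```python
# Illustrative sketch: build a co-authorship graph around a focal author.
# Edge weight ~ number of co-authored papers (edge width in the figure);
# joint-paper counts with the focal author ~ node border in the figure.
from itertools import combinations
import networkx as nx

FOCAL = "Dongyoon Wee"
papers = [  # hypothetical records for illustration only
    {"authors": ["Dongyoon Wee", "Myunggu Kang", "Soonmin Bae"], "citations": 35},
    {"authors": ["Dongyoon Wee", "Pilhyeon Lee", "Hyeran Byun"], "citations": 13},
]

G = nx.Graph()
joint_citations = {}   # ranks collaborators by citations of joint publications
joint_papers = {}      # papers shared with the focal author

for p in papers:
    authors = sorted(set(p["authors"]))
    for a, b in combinations(authors, 2):
        weight = G.get_edge_data(a, b, default={}).get("weight", 0)
        G.add_edge(a, b, weight=weight + 1)
    if FOCAL in authors:
        for a in authors:
            if a != FOCAL:
                joint_citations[a] = joint_citations.get(a, 0) + p["citations"]
                joint_papers[a] = joint_papers.get(a, 0) + 1

top25 = sorted(joint_citations, key=joint_citations.get, reverse=True)[:25]
G.remove_node(FOCAL)                 # focal author excluded, as in the figure
collab_view = G.subgraph(top25)      # network drawn over the top collaborators
```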

All Works

10 of 10 papers shown
1. Cho, MyeongAh, et al. (2024). Towards Multi-Domain Learning for Generalizable Video Anomaly Detection. 50256–50284. 1 indexed citation
2. Kim, Jinhyung, et al. (2023). Frequency Selective Augmentation for Video Representation Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 1124–1132. 1 indexed citation
3. Lee, Pilhyeon, et al. (2023). Decomposed Cross-Modal Distillation for RGB-based Temporal Action Detection. 2373–2383. 13 indexed citations
4. Kong, Kyeongbo, et al. (2023). SEFD: Learning to Distill Complex Pose and Occlusion. 14895–14906. 3 indexed citations
5. Kang, Myunggu, et al. (2023). Detection Recovery in Online Multi-Object Tracking with Sparse Graph Tracker. 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 4839–4848. 35 indexed citations
6. Wee, Dongyoon, et al. (2023). OCVOS: Object-Centric Representation for Video Object Segmentation. 1655–1659. 1 indexed citation
7. Wee, Dongyoon, et al. (2022). Self-Supervised Monocular Depth Estimation With Isometric-Self-Sample-Based Learning. IEEE Robotics and Automation Letters, 8(4), 2173–2180. 1 indexed citation
8. Lee, Inwoong, Doyoung Kim, Dongyoon Wee, & Sanghoon Lee. (2021). An Efficient Human Instance-Guided Framework for Video Action Recognition. Sensors, 21(24), 8309. 10 indexed citations
9. Kim, Jin-Hyung, et al. (2020). Regularization on Spatio-Temporally Smoothed Feature for Action Recognition. 12100–12109. 21 indexed citations
10. Wee, Dongyoon, et al. (2020). Learning from Dances: Pose-Invariant Re-Identification for Multi-Person Tracking. 2113–2117. 3 indexed citations

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.

