Lu Dong

2.0k total citations
75 papers, 1.5k citations indexed

About

Lu Dong is a scholar working on Artificial Intelligence, Control and Systems Engineering, and Computational Theory and Mathematics. According to data from OpenAlex, Lu Dong has authored 75 papers receiving a total of 1.5k indexed citations (citations from other indexed papers that have themselves been cited), including 31 papers in Artificial Intelligence, 28 in Control and Systems Engineering, and 24 in Computational Theory and Mathematics. Recurrent topics in Lu Dong's work include Adaptive Dynamic Programming Control (24 papers), Reinforcement Learning in Robotics (22 papers), and Adaptive Control of Nonlinear Systems (18 papers), and the papers citing Lu Dong focus on these same topics. Lu Dong collaborates with scholars based in China, the United States, and Bangladesh. Co-authors include Changyin Sun, Haibo He, X. Zhong, Yuncheng Ouyang, Zichen He, Lei Xue, Yuanda Wang, Yufei Tang, and Chunwei Song, and Lu Dong has published in journals such as IEEE Transactions on Power Systems, The Economic Journal, and Energy.

In The Last Decade

64 papers receiving 1.4k citations

Peers

Peers ranked by citation overlap with Lu Dong. The career columns show citations received by career stage (early→late); × values are ratios relative to Lu Dong, the reference author.

Name | Country | h | Career citations, early→late (× vs Lu Dong) | Papers | Cites
Lu Dong | China | 19 | 782 · 520 · 432 · 311 · 266 | 75 | 1.5k
Hao Xu | United States | 24 | 1.2k (1.5×) · 737 (1.4×) · 381 (0.9×) · 623 (2.0×) · 549 (2.1×) | 136 | 2.0k
Dapeng Li | China | 22 | 2.0k (2.6×) · 811 (1.6×) · 263 (0.6×) · 159 (0.5×) · 619 (2.3×) | 58 | 2.5k
Zhiqiang Pu | China | 20 | 1.1k (1.4×) · 192 (0.4×) · 246 (0.6×) · 134 (0.4×) · 353 (1.3×) | 100 | 1.6k
Abdesselem Boulkroune | Algeria | 31 | 1.9k (2.5×) · 498 (1.0×) · 574 (1.3×) · 283 (0.9×) · 803 (3.0×) | 88 | 2.7k
Jin‐Xi Zhang | China | 23 | 1.6k (2.0×) · 364 (0.7×) · 163 (0.4×) · 103 (0.3×) · 439 (1.7×) | 86 | 1.9k
Dengxiu Yu | China | 20 | 809 (1.0×) · 262 (0.5×) · 254 (0.6×) · 97 (0.3×) · 740 (2.8×) | 107 | 1.5k
Aydın Yeşildirek | United States | 11 | 1.7k (2.2×) · 306 (0.6×) · 632 (1.5×) · 93 (0.3×) · 181 (0.7×) | 36 | 1.9k
Howard M. Schwartz | Canada | 18 | 580 (0.7×) · 151 (0.3×) · 321 (0.7×) · 159 (0.5×) · 258 (1.0×) | 129 | 1.2k
Luis T. Aguilar | Mexico | 26 | 1.5k (2.0×) · 120 (0.2×) · 733 (1.7×) · 293 (0.9×) · 229 (0.9×) | 154 | 2.3k
Rushikesh Kamalapurkar | United States | 20 | 1.3k (1.6×) · 1.2k (2.4×) · 711 (1.6×) · 286 (0.9×) · 152 (0.6×) | 72 | 2.0k

Countries citing papers authored by Lu Dong


This map shows the geographic impact of Lu Dong's research: the number of citations coming from papers whose authors work in each country. You can also color the map by specialization and compare the citations Lu Dong receives from each country with the number expected given that country's size and research output (values greater than one mean the country cites Lu Dong more than expected).

Fields of papers citing papers by Lu Dong

Field legend: Physical Sciences · Health Sciences · Life Sciences · Social Sciences

This network shows the impact of papers produced by Lu Dong. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes mark fields that tend to cite Lu Dong's papers. The network helps suggest where Lu Dong may publish in the future.

Co-authorship network of co-authors of Lu Dong

This figure shows the co-authorship network connecting the top 25 collaborators of Lu Dong. A scholar is included among the top collaborators of Lu Dong based on the total number of citations received by their joint publications. Widths of edges represent the number of papers authors have co-authored together. Node borders signify the number of papers an author published with Lu Dong. Lu Dong is excluded from the visualization to improve readability, since they are connected to all nodes in the network.

All Works

20 of 20 papers shown
1.
Jia, Yubin, et al. (2025). Optimal DoS Attack Energy Allocation in Cyber–Physical Systems Based on Deep Reinforcement Learning. IEEE Internet of Things Journal. 12(18). 39077–39087.
2.
Dong, Lu, et al. (2024). A novel state of health estimation method for lithium-ion battery based on forward-broad learning system. Journal of Energy Storage. 99. 113376. 1 indexed citation
3.
Dong, Lu, et al. (2024). Active Robust Adversarial Reinforcement Learning Under Temporally Coupled Perturbations. IEEE Transactions on Artificial Intelligence. 6(4). 874–884. 1 indexed citation
4.
Dong, Lu, et al. (2024). Dynamic Event-Based Hierarchical Fuzzy Prescribed Performance Control for Underactuated Systems With Uncertain Dead Zone. IEEE Transactions on Fuzzy Systems. 32(12). 7118–7128. 3 indexed citations
5.
He, Zichen, et al. (2024). Functionality-Verification Attack Framework Based on Reinforcement Learning Against Static Malware Detectors. IEEE Transactions on Information Forensics and Security. 19. 8500–8514. 4 indexed citations
6.
Dong, Lu, et al. (2024). Reinforcement-Learning-Based Multi-Unmanned Aerial Vehicle Optimal Control for Communication Services With Limited Endurance. IEEE Transactions on Cognitive and Developmental Systems. 17(1). 219–231. 4 indexed citations
7.
Sun, Changyin, et al. (2024). Semi-Supervised Feature Distillation and Unsupervised Domain Adversarial Distillation for Underwater Image Enhancement. IEEE Transactions on Circuits and Systems for Video Technology. 34(8). 7671–7682. 7 indexed citations
8.
Liu, Jian, et al. (2024). Intermittent Fixed-Time Fuzzy Consensus of Nonlinear Multiagent Systems With Unknown Control Directions and Event-Based Communication. IEEE Transactions on Fuzzy Systems. 32(12). 6917–6928. 6 indexed citations
9.
Liu, Wenzhang, et al. (2024). Discovering Latent Variables for the Tasks With Confounders in Multi-Agent Reinforcement Learning. IEEE/CAA Journal of Automatica Sinica. 11(7). 1591–1604. 1 indexed citation
10.
Wang, Yuanda, et al. (2023). Multi-objective deep reinforcement learning for crowd-aware robot navigation with dynamic human preference. Neural Computing and Applications. 35(22). 16247–16265. 5 indexed citations
11.
Dong, Lu, et al. (2023). Multi-Task Reinforcement Learning With Attention-Based Mixture of Experts. IEEE Robotics and Automation Letters. 8(6). 3812–3819. 13 indexed citations
12.
13.
Jiang, Kun, Wenzhang Liu, Yuanda Wang, Lu Dong, & Changyin Sun. (2023). Credit assignment in heterogeneous multi-agent reinforcement learning for fully cooperative tasks. Applied Intelligence. 53(23). 29205–29222. 7 indexed citations
14.
He, Zichen, Lu Dong, Chunwei Song, & Changyin Sun. (2022). Multiagent Soft Actor-Critic Based Hybrid Motion Planner for Mobile Robots. IEEE Transactions on Neural Networks and Learning Systems. 34(12). 10980–10992. 31 indexed citations
15.
He, Zichen, Lu Dong, Changyin Sun, & Jiawei Wang. (2021). Asynchronous Multithreading Reinforcement-Learning-Based Path Planning and Tracking for Unmanned Underwater Vehicle. IEEE Transactions on Systems Man and Cybernetics Systems. 52(5). 2757–2769. 61 indexed citations
16.
Ouyang, Yuncheng, Lei Xue, Lu Dong, & Changyin Sun. (2021). Neural Network-Based Finite-Time Distributed Formation-Containment Control of Two-Layer Quadrotor UAVs. IEEE Transactions on Systems Man and Cybernetics Systems. 52(8). 4836–4848. 80 indexed citations
17.
Jia, Yubin, Ke Meng, Lu Dong, et al. (2020). Economic Model Predictive Control of a Point Absorber Wave Energy Converter. IEEE Transactions on Sustainable Energy. 12(1). 578–586. 22 indexed citations
18.
Dong, Lu, et al. (2020). Solver–Critic: A Reinforcement Learning Method for Discrete-Time-Constrained-Input Systems. IEEE Transactions on Cybernetics. 51(11). 5619–5630. 9 indexed citations
19.
Ouyang, Yuncheng, Lu Dong, & Changyin Sun. (2020). Critic Learning-Based Control for Robotic Manipulators With Prescribed Constraints. IEEE Transactions on Cybernetics. 52(4). 2274–2283. 45 indexed citations
20.
Dong, Lu. (2005). The automatic recognition of disused paper money. Journal of Circuits and Systems. 1 indexed citation

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.


Rankless by CCL
2026