Tim Brys

709 total citations
28 papers, 409 citations indexed

About

Tim Brys is a scholar working on Artificial Intelligence, Computational Theory and Mathematics, and Control and Systems Engineering. According to data from OpenAlex, Tim Brys has authored 28 papers receiving a total of 409 indexed citations (citations by other indexed papers that have themselves been cited), including 24 papers in Artificial Intelligence, 13 in Computational Theory and Mathematics, and 11 in Control and Systems Engineering. Recurrent topics in Tim Brys's work include Reinforcement Learning in Robotics (14 papers), Advanced Multi-Objective Optimization Algorithms (9 papers), and Robot Manipulation and Learning (6 papers), and citing papers tend to focus on these same areas. Tim Brys collaborates with scholars based in Belgium, the United States, and the Netherlands. Co-authors include Ann Nowé, Matthew E. Taylor, Anna Harutyunyan, Halit Bener Suay, Sonia Chernova, Peter Vrancx, Daniel Kudenko, Kristof Van Moffaert, Peter R. Lewis, and Kurt Driessens, and Tim Brys has published in journals such as Neurocomputing, The Knowledge Engineering Review, and Connection Science.

In The Last Decade

28 papers receiving 393 citations

Peers (by citation overlap)

The Career column lists citations by career stage (early → late); each ×-value expresses a peer's stage count as a multiple of Tim Brys's count for the same stage.

Name | Country | h | Career citations (early → late) | Papers | Cites
Tim Brys | Belgium | 11 | 300 · 127 · 99 · 47 · 43 | 28 | 409
Yali Du | United Kingdom | 12 | 275 (0.9×) · 93 (0.7×) · 33 (0.3×) · 29 (0.6×) · 77 (1.8×) | 35 | 516
Jelle R. Kok | Netherlands | 9 | 305 (1.0×) · 82 (0.6×) · 60 (0.6×) · 103 (2.2×) · 23 (0.5×) | 18 | 490
Sven Gronauer | Germany | 3 | 190 (0.6×) · 84 (0.7×) · 33 (0.3×) · 25 (0.5×) · 43 (1.0×) | 6 | 435
Kazi Shah Nawaz Ripon | Norway | 11 | 143 (0.5×) · 47 (0.4×) · 79 (0.8×) · 28 (0.6×) · 17 (0.4×) | 28 | 383
Stefan Mitsch | United States | 11 | 187 (0.6×) · 70 (0.6×) · 181 (1.8×) · 17 (0.4×) · 55 (1.3×) | 50 | 447
Sandor Markon | Japan | 12 | 222 (0.7×) · 358 (2.8×) · 99 (1.0×) · 45 (1.0×) · 8 (0.2×) | 62 | 648
Haoran Tang | China | 6 | 270 (0.9×) · 60 (0.5×) · 47 (0.5×) · 51 (1.1×) · 20 (0.5×) | 21 | 380
Hitoshi Kanoh | Japan | 9 | 79 (0.3×) · 48 (0.4×) · 35 (0.4×) · 37 (0.8×) · 41 (1.0×) | 42 | 313
Alborz Geramifard | United States | 13 | 306 (1.0×) · 71 (0.6×) · 57 (0.6×) · 43 (0.9×) · 16 (0.4×) | 28 | 494
Chih‐Hong Cheng | Germany | 11 | 124 (0.4×) · 41 (0.3×) · 38 (0.4×) · 11 (0.2×) · 45 (1.0×) | 31 | 311
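The ×-values in the peers table can be recomputed from the raw stage counts: each peer's career-stage citations are divided by Tim Brys's count for the same stage. A minimal sketch (counts copied from the table; rounding to one decimal is an assumption about how the site displays them):

```python
# Career-stage citation counts (early -> late), taken from the peers table.
hero = [300, 127, 99, 47, 43]   # Tim Brys, the reference scholar
peer = [275, 93, 33, 29, 77]    # Yali Du

# Each peer stage is expressed as a multiple of the reference scholar's stage.
ratios = [round(p / h, 1) for p, h in zip(peer, hero)]
print(ratios)  # [0.9, 0.7, 0.3, 0.6, 1.8]
```

These reproduce the ×-values shown for Yali Du, which suggests the multiples are simple per-stage ratios against the profiled scholar.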

Countries citing papers authored by Tim Brys


This map shows the geographic impact of Tim Brys's research. It shows the number of citations coming from papers published by authors working in each country. You can also color the map by specialization and compare the number of citations received by Tim Brys with the expected number of citations based on a country's size and research output (numbers larger than one mean the country cites Tim Brys more than expected).
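The comparison against expected citations described above is an observed-to-expected ratio. A minimal sketch of the idea, with illustrative (not real) counts and an assumed definition of "expected" as a country's share of world research output times the scholar's total citations:

```python
# Illustrative citation counts per citing country (not real data).
observed = {"Belgium": 40, "United States": 120, "Netherlands": 25}

# Illustrative share of world research output per country (assumption
# about how the expected baseline is derived).
output_share = {"Belgium": 0.01, "United States": 0.25, "Netherlands": 0.02}

total_citations = sum(observed.values())
ratios = {c: observed[c] / (output_share[c] * total_citations) for c in observed}

# A ratio greater than one means the country cites this scholar
# more than its size and research output would predict.
print(ratios["Belgium"] > 1)  # True
```

With these toy numbers, Belgium's ratio is far above one (home-country citations are typically overrepresented), while the United States, despite citing more in absolute terms, sits closer to its expected share.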

Fields of papers citing papers by Tim Brys

Fields are grouped by domain: Physical Sciences, Health Sciences, Life Sciences, Social Sciences.

This network shows the impact of papers produced by Tim Brys. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes mark fields that tend to cite the papers produced by Tim Brys. The network suggests fields in which Tim Brys may publish in the future.

Co-authorship network of co-authors of Tim Brys

This figure shows the co-authorship network connecting the top 25 collaborators of Tim Brys. A scholar is included among the top collaborators of Tim Brys based on the total number of citations received by their joint publications. Widths of edges represent the number of papers authors have co-authored together. Node borders signify the number of papers an author published with Tim Brys. Tim Brys is excluded from the visualization to improve readability, since they are connected to all nodes in the network.
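The selection and edge-weighting rules described above can be sketched as follows. The paper list is illustrative, and "hero" is a stand-in for the profiled scholar; this is a sketch of the stated rules, not the site's actual implementation:

```python
from collections import Counter
from itertools import combinations

# Illustrative papers: (author set, indexed citations). Not real data.
papers = [
    ({"Brys", "Nowé", "Taylor"}, 29),
    ({"Brys", "Harutyunyan", "Vrancx", "Nowé"}, 8),
    ({"Brys", "Suay", "Taylor", "Chernova"}, 34),
]
hero = "Brys"

# Rank collaborators by total citations of their joint papers with the hero.
joint_cites = Counter()
for authors, cites in papers:
    for a in authors - {hero}:
        joint_cites[a] += cites

# Edge weight = number of papers a pair co-authored; the hero is excluded,
# since edges to the hero would connect every node.
edge_width = Counter()
for authors, _ in papers:
    for pair in combinations(sorted(authors - {hero}), 2):
        edge_width[pair] += 1

print(joint_cites.most_common(1))        # [('Taylor', 63)]
print(edge_width[("Nowé", "Taylor")])    # 1: one shared paper here
```

The top collaborators would be the highest entries of `joint_cites`, and node borders would scale with each collaborator's paper count with the hero.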

All Works

20 of 20 papers shown
1.
Brys, Tim, et al. (2019). Introspective Q-learning and learning from demonstration. The Knowledge Engineering Review. 34. 1 indexed citation
2.
Brys, Tim, et al. (2018). Introspective Reinforcement Learning and Learning from Demonstration. Adaptive Agents and Multi-Agents Systems. 1992–1994. 2 indexed citations
3.
Brys, Tim, et al. (2018). Adapting to Concept Drift in Credit Card Transaction Data Streams Using Contextual Bandits and Decision Trees. Proceedings of the AAAI Conference on Artificial Intelligence. 32(1). 18 indexed citations
4.
Brys, Tim, Anna Harutyunyan, Peter Vrancx, Ann Nowé, & Matthew E. Taylor. (2017). Multi-objectivization and ensembles of shapings in reinforcement learning. Neurocomputing. 263. 48–59. 29 indexed citations
5.
Curran, William J., Tim Brys, David W. Aha, Matthew E. Taylor, & William D. Smart. (2016). Dimensionality Reduced Reinforcement Learning for Assistive Robots. National Conference on Artificial Intelligence. 6 indexed citations
6.
Suay, Halit Bener, Tim Brys, Matthew E. Taylor, & Sonia Chernova. (2016). Learning from Demonstration for Shaping through Inverse Reinforcement Learning. Adaptive Agents and Multi-Agents Systems. 429–437. 34 indexed citations
7.
Harutyunyan, Anna, Tim Brys, Peter Vrancx, & Ann Nowé. (2015). Multi-Scale Reward Shaping via an Off-Policy Ensemble. Adaptive Agents and Multi-Agents Systems. 1641–1642. 2 indexed citations
8.
Brys, Tim, Anna Harutyunyan, Matthew E. Taylor, & Ann Nowé. (2015). Policy Transfer using Reward Shaping. Adaptive Agents and Multi-Agents Systems. 181–188. 29 indexed citations
9.
Brys, Tim. (2015). Encoding and combining knowledge to speed up reinforcement learning. VUBIR (Vrije Universiteit Brussel). 4347–4348. 1 indexed citation
10.
Brys, Tim, Anna Harutyunyan, Halit Bener Suay, et al. (2015). Reinforcement learning from demonstration through shaping. VUBIR (Vrije Universiteit Brussel). 3352–3358. 83 indexed citations
11.
Harutyunyan, Anna, Tim Brys, Peter Vrancx, & Ann Nowé. (2015). Shaping Mario with Human Advice. Adaptive Agents and Multi-Agents Systems. 1913–1914. 8 indexed citations
12.
Rodrigues, S., R. Teixeira Pinto, Pavol Bauer, Tim Brys, & Ann Nowé. (2015). Online distributed voltage control of an offshore MTdc network using reinforcement learning. 1769–1775. 3 indexed citations
13.
Moffaert, Kristof Van, Tim Brys, & Ann Nowé. (2015). Risk-sensitivity through multi-objective reinforcement learning. 1746–1753. 4 indexed citations
14.
Brys, Tim, Kristof Van Moffaert, Ann Nowé, & Matthew E. Taylor. (2014). Adaptive objective selection for correlated objectives in multi-objective reinforcement learning. Adaptive Agents and Multi-Agents Systems. 1349–1350. 2 indexed citations
15.
Brys, Tim, et al. (2014). Distributed learning and multi-objectivity in traffic light control. Connection Science. 26(1). 65–83. 41 indexed citations
16.
Moffaert, Kristof Van, Tim Brys, Arjun Chandra, et al. (2014). A novel adaptive weight selection algorithm for multi-objective multi-agent reinforcement learning. 2306–2314. 21 indexed citations
17.
Brys, Tim, et al. (2013). Learning Coordinated Traffic Light Control. VUBIR (Vrije Universiteit Brussel). 8 indexed citations
18.
Brys, Tim, et al. (2013). On the Behaviour of Scalarization Methods for the Engagement of a Wet Clutch. VUBIR (Vrije Universiteit Brussel). 10 indexed citations
19.
Brys, Tim, Mădălina M. Drugan, Peter A. N. Bosman, Martine De Cock, & Ann Nowé. (2013). Local search and restart strategies for satisfiability solving in fuzzy logics. Data Archiving and Networked Services (DANS). 3242. 52–59. 2 indexed citations
20.
Brys, Tim, Mădălina M. Drugan, & Ann Nowé. (2013). Meta-Evolutionary Algorithms and recombination operators for satisfiability solving in fuzzy logics. 2 indexed citations

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.


Rankless by CCL
2026