Frédéric Elisei

1.1k total citations
29 papers, 449 citations indexed

About

Frédéric Elisei is a scholar working on Computer Vision and Pattern Recognition, Signal Processing, and Artificial Intelligence. According to data from OpenAlex, he has authored 29 papers receiving a total of 449 indexed citations (citations from other indexed papers that have themselves been cited), including 12 papers in Computer Vision and Pattern Recognition, 12 in Signal Processing, and 11 in Artificial Intelligence. Recurrent topics in his work include Speech and Audio Processing (11 papers), Social Robot Interaction and HRI (8 papers), and Face recognition and analysis (7 papers), and his work is most often cited by papers in these same areas. He collaborates with scholars based in France, Italy, and Greece. His co-authors include Gérard Bailly, Pierre Badin, Sascha Fagel, Yuliya Tarabalka, Maxime Bérar, Ugo Pattacini, Jocelyne Ventre‐Dominey, Barry-John Theobald, Peter Ford Dominey, and Alaeddine Mihoub, and he has published in prestigious journals such as The Journal of the Acoustical Society of America, Pattern Recognition Letters, and Speech Communication.

In The Last Decade

Frédéric Elisei: 27 papers receiving 403 citations

Peers by Citation Overlap

Peers are selected by citation overlap with Frédéric Elisei. The career columns show citations received at successive career stages (early→late); the ratio after each value compares the peer to Frédéric Elisei at that stage.

Name | Country | h | Career citations, early→late (ratio vs. Elisei) | Papers | Cites
Frédéric Elisei | France | 11 | 180 · 143 · 143 · 131 · 113 | 29 | 449
Sascha Fagel | Germany | 6 | 113 (0.6×) · 73 (0.5×) · 83 (0.6×) · 94 (0.7×) · 107 (0.9×) | 30 | 305
Piero Cosi | Italy | 14 | 268 (1.5×) · 155 (1.1×) · 343 (2.4×) · 113 (0.9×) · 158 (1.4×) | 75 | 615
Samer Al Moubayed | Sweden | 12 | 59 (0.3×) · 147 (1.0×) · 220 (1.5×) · 218 (1.7×) · 86 (0.8×) | 49 | 497
Barry-John Theobald | United Kingdom | 14 | 311 (1.7×) · 261 (1.8×) · 87 (0.6×) · 57 (0.4×) · 70 (0.6×) | 40 | 494
Shogo Okada | Japan | 13 | 69 (0.4×) · 102 (0.7×) · 307 (2.1×) · 130 (1.0×) · 159 (1.4×) | 104 | 569
Mikio Nakano | Japan | 17 | 133 (0.7×) · 143 (1.0×) · 636 (4.4×) · 291 (2.2×) · 73 (0.6×) | 120 | 964
Chaoran Liu | Japan | 11 | 61 (0.3×) · 81 (0.6×) · 131 (0.9×) · 142 (1.1×) · 82 (0.7×) | 39 | 320
Florian Lingenfelser | Germany | 12 | 97 (0.5×) · 128 (0.9×) · 266 (1.9×) · 222 (1.7×) · 240 (2.1×) | 36 | 599
Thurid Vogt | Germany | 15 | 307 (1.7×) · 161 (1.1×) · 367 (2.6×) · 184 (1.4×) · 464 (4.1×) | 29 | 766
Masafumi Nishida | Japan | 10 | 61 (0.3×) · 78 (0.5×) · 203 (1.4×) · 85 (0.6×) · 73 (0.6×) | 57 | 380

Countries citing papers authored by Frédéric Elisei


This map shows the geographic impact of Frédéric Elisei's research: the number of citations coming from papers whose authors work in each country. You can also color the map by specialization, which compares the number of citations received by Frédéric Elisei with the number expected given a country's size and research output (values larger than one mean the country cites Frédéric Elisei more often than expected).

Fields of papers citing papers by Frédéric Elisei

Fields are grouped into four domains: Physical Sciences, Health Sciences, Life Sciences, and Social Sciences.

This network shows the impact of papers produced by Frédéric Elisei. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes mark fields that tend to cite the papers produced by Frédéric Elisei. The network helps suggest where Frédéric Elisei may publish in the future.

Co-authorship network of co-authors of Frédéric Elisei

This figure shows the co-authorship network connecting the top 25 collaborators of Frédéric Elisei. A scholar is included among the top collaborators based on the total number of citations received by their joint publications with Frédéric Elisei. Edge widths represent the number of papers two authors have co-authored together; node borders indicate the number of papers an author has published with Frédéric Elisei. Frédéric Elisei is excluded from the visualization to improve readability, since he is connected to every node in the network.

All Works

20 of 20 papers shown
1.
Elisei, Frédéric, et al. (2024). Probing the Inductive Biases of a Gaze Model for Multi-party Interaction. SPIRE - Sciences Po Institutional REpository. 507–511.
2.
Bailly, Gérard, et al. (2024). Emotags: Computer-Assisted Verbal Labelling of Expressive Audiovisual Utterances for Expressive Multimodal TTS. SPIRE - Sciences Po Institutional REpository. 5689–5695.
3.
Tarpin-Bernard, Franck, Gérard Bailly, Frédéric Elisei, et al. (2024). The Value of a Virtual Assistant to Improve Engagement in Computerized Cognitive Training at Home: Exploratory Study. JMIR Rehabilitation and Assistive Technologies. 11. e48129–e48129. 1 indexed citation
4.
Elisei, Frédéric, et al. (2024). Impact of verbal instructions and deictic gestures of a cobot on the performance of human coworkers. 1040–1047. 1 indexed citation
5.
Mihoub, Alaeddine, et al. (2016). Graphical models for social behavior modeling in face-to-face interaction. Pattern Recognition Letters. 74. 82–89. 13 indexed citations
6.
Bailly, Gérard, Frédéric Elisei, Alexandra Juphard, & Olivier Moreaud. (2016). Quantitative Analysis of Backchannels Uttered by an Interviewer During Neuropsychological Tests. SPIRE - Sciences Po Institutional REpository. 2905–2909. 4 indexed citations
7.
Mihoub, Alaeddine, Gérard Bailly, Christian Wolf, & Frédéric Elisei. (2015). Learning multimodal behavioral models for face-to-face social interaction. Journal on Multimodal User Interfaces. 9(3). 195–210. 14 indexed citations
8.
Parmiggiani, Alberto, Marco Randazzo, Marco Maggiali, et al. (2014). An articulated talking face for the iCub. 1–6. 3 indexed citations
9.
Hueber, Thomas, Gérard Bailly, Pierre Badin, & Frédéric Elisei. (2013). Speaker adaptation of an acoustic-articulatory inversion model using cascaded Gaussian mixture regressions. 2753–2757. 13 indexed citations
10.
Pattacini, Ugo, et al. (2012). I Reach Faster When I See You Look: Gaze Effects in Human–Human and Human–Robot Face-to-Face Cooperation. Frontiers in Neurorobotics. 6. 3–3. 99 indexed citations
12.
Badin, Pierre, Yuliya Tarabalka, Frédéric Elisei, & Gérard Bailly. (2010). Can you ‘read’ tongue movements? Evaluation of the contribution of tongue display to speech understanding. Speech Communication. 52(6). 493–503. 58 indexed citations
13.
Bailly, Gérard, et al. (2010). Gaze, conversational agents and face-to-face communication. Speech Communication. 52(6). 598–612. 47 indexed citations
14.
Bailly, Gérard, et al. (2009). Lip-Synching Using Speaker-Specific Articulation, Shape and Appearance Models. EURASIP Journal on Audio Speech and Music Processing. 2009. 1–11. 10 indexed citations
15.
Theobald, Barry-John, Sascha Fagel, Gérard Bailly, & Frédéric Elisei. (2008). LIPS2008: visual speech synthesis challenge. 2310–2313. 47 indexed citations
16.
Bailly, Gérard, et al. (2005). Basic components of a face-to-face interaction with a conversational agent. 247–252. 2 indexed citations
17.
Gibert, Guillaume, et al. (2004). Evaluation of a Speech Cuer: From Motion Capture to a Concatenative Text-to-cued Speech System. Language Resources and Evaluation. 3 indexed citations
18.
Bailly, Gérard, et al. (2004). Tracking talking faces with shape and appearance models. Speech Communication. 44(1-4). 63–82. 7 indexed citations
19.
Bailly, Gérard, et al. (2003). Audiovisual Speech Synthesis. International Journal of Speech Technology. 6(4). 331–346. 54 indexed citations
20.
Elisei, Frédéric, et al. (2001). Creating and controlling video-realistic talking heads. AVSP. 90–97. 23 indexed citations

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.

Rankless by CCL
2026