Tamar Avraham

656 total citations
9 papers, 343 citations indexed

About

Tamar Avraham is a scholar working on Computer Vision and Pattern Recognition, Cognitive Neuroscience, and Media Technology. According to data from OpenAlex, Tamar Avraham has authored 9 papers receiving a total of 343 indexed citations (citations by other indexed papers that have themselves been cited), including 7 papers in Computer Vision and Pattern Recognition, 3 in Cognitive Neuroscience, and 2 in Media Technology. Recurrent topics in this work include Advanced Vision and Imaging (6 papers), Visual perception and processing mechanisms (3 papers), and Visual Attention and Saliency Detection (2 papers); the papers citing Tamar Avraham are concentrated in these same fields. Tamar Avraham collaborates with scholars based in Israel, and co-authors include Michael Lindenbaum, Anath Fischer, Yoav Y. Schechner, Yizhak Ben-Shabat, Yaffa Yeshurun, Amit Aides, and Alfred M. Bruckstein. Tamar Avraham has published in journals such as IEEE Transactions on Pattern Analysis and Machine Intelligence, Journal of the Optical Society of America A, and Computer Vision and Image Understanding.

In The Last Decade

9 papers receiving 332 citations

Peers (Enhanced Table)

Peers selected by citation overlap. The career columns show citations per career stage (early→late); the parenthesized ratios compare each peer's stage to the reference row, Tamar Avraham.

Name               Country        h   Career citations (early→late)                                 Papers  Cites
Tamar Avraham      Israel         7   217 · 119 · 112 · 85 · 73                                     9       343
Timothée Jost      Switzerland    5   241 (1.1×) · 146 (1.2×) · 78 (0.7×) · 32 (0.4×) · 32 (0.4×)  13      313
Stefan Maierhofer  Austria        10  170 (0.8×) · 57 (0.5×) · 141 (1.3×) · 68 (0.8×) · 90 (1.2×)  34      380
Xiongli Chai       China          13  393 (1.8×) · 35 (0.3×) · 24 (0.2×) · 34 (0.4×) · 20 (0.3×)   38      447
Hamed Sarbolandi   Germany        4   171 (0.8×) · 89 (0.7×) · 68 (0.6×) · 25 (0.3×) · 34 (0.5×)   4       328
Hangwei Chen       China          11  340 (1.6×) · 57 (0.5×) · 15 (0.1×) · 24 (0.3×) · 17 (0.2×)   36      400
Fangjinhua Wang    Switzerland    5   250 (1.2×) · 58 (0.5×) · 79 (0.7×) · 39 (0.5×) · 30 (0.4×)   9       348
Yongwei Miao       China          13  200 (0.9×) · 14 (0.1×) · 68 (0.6×) · 189 (2.2×) · 39 (0.5×)  53      377
Pedro Santos       Germany        10  176 (0.8×) · 33 (0.3×) · 103 (0.9×) · 25 (0.3×) · 29 (0.4×)  53      299
David Joseph Tan   Germany        11  254 (1.2×) · 92 (0.8×) · 67 (0.6×) · 101 (1.2×) · 19 (0.3×)  17      347
Hamid Izadinia     United States  7   212 (1.0×) · 25 (0.2×) · 37 (0.3×) · 34 (0.4×) · 13 (0.2×)   16      284
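The "N.N×" annotations in the table appear to be each peer's stage citations divided by Tamar Avraham's citations in the same career stage; a minimal sketch, using the Timothée Jost row as input, reproduces them:

```python
# Stage-by-stage citation ratios relative to the reference row.
# Numbers are taken directly from the peers table above.
hero = [217, 119, 112, 85, 73]   # Tamar Avraham, citations per career stage (early→late)
peer = [241, 146, 78, 32, 32]    # Timothée Jost, same stages

ratios = [round(p / h, 1) for p, h in zip(peer, hero)]
print(ratios)  # [1.1, 1.2, 0.7, 0.4, 0.4] — matches the table's annotations
```

The same calculation reproduces the multipliers for the other rows, which supports this reading of the column.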

Countries citing papers authored by Tamar Avraham


This map shows the geographic impact of Tamar Avraham's research: the number of citations coming from papers published by authors working in each country. You can also color the map by specialization, comparing the citations Tamar Avraham receives from each country with the number expected given that country's size and research output (values larger than one mean the country cites Tamar Avraham more than expected).
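The "more than expected" ratio can be sketched as observed citations divided by the citations a country's share of world research output would predict. The counts and shares below are invented for illustration; the real figures come from OpenAlex:

```python
# Hypothetical sketch of the citations-vs-expected specialization ratio.
# All numbers are made up for illustration, not taken from the profile.
observed = {"US": 90, "CN": 60, "IL": 40}                     # citations by citing country
output_share = {"US": 0.60, "CN": 0.25, "IL": 0.01}           # assumed share of world research output

total = sum(observed.values())
ratio = {c: observed[c] / (total * output_share[c]) for c in observed}
# ratio > 1 ⇒ the country cites this author more than its output share predicts
print(ratio["IL"] > 1, ratio["US"] < 1)
```

Under these made-up numbers, Israel cites the author far more than its output share predicts, while the US cites slightly less.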

Fields of papers citing papers by Tamar Avraham

Legend: Physical Sciences · Health Sciences · Life Sciences · Social Sciences

This network shows the impact of papers produced by Tamar Avraham. Nodes represent research fields, and links connect fields that are likely to share authors. Colored nodes show fields that tend to cite the papers produced by Tamar Avraham. The network helps show where Tamar Avraham may publish in the future.

Co-authorship network of co-authors of Tamar Avraham

This figure shows the co-authorship network connecting the top 25 collaborators of Tamar Avraham. A scholar is included among the top collaborators of Tamar Avraham based on the total number of citations received by their joint publications. Widths of edges represent the number of papers authors have co-authored together. Node borders signify the number of papers an author published with Tamar Avraham. Tamar Avraham is excluded from the visualization to improve readability, since they are connected to all nodes in the network.
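The two quantities the figure encodes (edge widths and node borders) can be derived from a plain paper list. A minimal sketch, using author lists taken from the "All Works" section below (only a subset of papers, surnames only, and with the focal author dropped as in the visualization):

```python
from collections import Counter
from itertools import combinations

# Author lists copied from a few entries in the "All Works" section below.
papers = [
    ["Ben-Shabat", "Avraham", "Lindenbaum", "Fischer"],  # 2018, CVIU
    ["Aides", "Avraham", "Schechner"],                   # 2011
    ["Avraham", "Schechner"],                            # 2010
    ["Avraham", "Lindenbaum"],                           # 2009
    ["Avraham", "Yeshurun", "Lindenbaum"],               # 2008
]

edge_width = Counter()   # papers a pair of co-authors share → edge width
border = Counter()       # papers an author wrote with Avraham → node border
for authors in papers:
    others = sorted(a for a in authors if a != "Avraham")
    border.update(others)
    edge_width.update(combinations(others, 2))

print(border["Lindenbaum"])                    # 3 joint papers in this subset
print(edge_width[("Lindenbaum", "Yeshurun")])  # 1 shared paper
```

The real figure additionally ranks collaborators by the citations of their joint publications; that ranking step is omitted here.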

All Works

9 of 9 papers shown
1.
Ben-Shabat, Yizhak, Tamar Avraham, Michael Lindenbaum, & Anath Fischer. (2018). Graph based over-segmentation methods for 3D point clouds. Computer Vision and Image Understanding. 174. 12–23. 31 indexed citations
2.
Avraham, Tamar, et al. (2017). 3D Point Cloud Registration for Localization Using a Deep Neural Network Auto-Encoder. 2472–2481. 163 indexed citations
3.
Avraham, Tamar, et al. (2016). Depth perception in autostereograms: 1/f noise is best. Journal of the Optical Society of America A. 33(2). 149–149. 1 indexed citation
4.
Avraham, Tamar, et al. (2013). Transitive Re-identification. 46.1–46.11. 5 indexed citations
5.
Aides, Amit, Tamar Avraham, & Yoav Y. Schechner. (2011). Multiscale ultrawide foveated video extrapolation. 11 indexed citations
6.
Avraham, Tamar & Yoav Y. Schechner. (2010). Ultrawide Foveated Video Extrapolation. IEEE Journal of Selected Topics in Signal Processing. 5(2). 321–334. 11 indexed citations
7.
Avraham, Tamar & Michael Lindenbaum. (2009). Esaliency (Extended Saliency): Meaningful Attention Using Stochastic Image Modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence. 32(4). 693–708. 92 indexed citations
8.
Avraham, Tamar, Yaffa Yeshurun, & Michael Lindenbaum. (2008). Predicting visual search performance by quantifying stimuli similarities. Journal of Vision. 8(4). 9–9. 20 indexed citations
9.
Avraham, Tamar & Michael Lindenbaum. (2006). Attention-based dynamic visual search using inner-scene similarity: algorithms and bounds. IEEE Transactions on Pattern Analysis and Machine Intelligence. 28(2). 251–264. 9 indexed citations

Rankless uses publication and citation data sourced from OpenAlex, an open and comprehensive bibliographic database. While OpenAlex provides broad and valuable coverage of the global research landscape, it—like all bibliographic datasets—has inherent limitations. These include incomplete records, variations in author disambiguation, differences in journal indexing, and delays in data updates. As a result, some metrics and network relationships displayed in Rankless may not fully capture the entirety of a scholar's output or impact.

Explore authors with similar magnitude of impact

Rankless by CCL
2026