Why is the Winner the Best?
125 authors: Matthias Eisenmann, Annika Reinke, V. Weru, M. Tizabi, Fabian Isensee, T. Adler, Sharib Ali, V. Andrearczyk, M. Aubreville, U. Baid, S. Bakas, N. Balu, Sophia Bano, Jorge Bernal, S. Bodenstedt, ... Alessandro Casella, V. Cheplygina, M. Daum, Marleen de Bruijne, A. Depeursinge, R. Dorent, J. Egger, David G. Ellis, S. Engelhardt, M. Ganz, N. Ghatwary, G. Girard, Patrick Godau, Anubha Gupta, Lasse Hansen, Kanako Harada, M. Heinrich, N. Heller, Alessa Hering, Arnaud Huaulmé, P. Jannin, A. E. Kavur, Oldřich Kodym, M. Kozubek, Jianning Li, Hongwei Li, Jun Ma, Carlos Martín-Isla, Bjoern H. Menze, A. Noble, Valentin Oreiller, N. Padoy, Sarthak Pati, K. Payette, Tim Rädsch, Jonathan Rafael-Patiño, V. Bawa, S. Speidel, C. Sudre, K. van Wijnen, M. Wagner, Dong-mei Wei, Amine Yamlahi, Moi Hoon Yap, Chun Yuan, M. Zenk, Aneeq Zia, David Zimmerer, D. Aydogan, Binod Bhattarai, Louise Bloch, Raphael Brüngel, Jihoon Cho, Chanyeol Choi, Qiongyi Dou, I. Ezhov, C. Friedrich, C. Fuller, R. Gaire, A. Galdran, Álvaro García Faura, Maria Grammatikopoulou, S. Hong, M. Jahanifar, Ikbeom Jang, A. Kadkhodamohammadi, In-Joo Kang, F. Kofler, Satoshi Kondo, H. Kuijf, Mingxing Li, Minh Huan Luu, Tomaž Martinčič, Pedro Morais, M. Naser, Bruno Oliveira, David Owen, Subeen Pang, Jinah Park, Sung-Hong Park, Szymon Płotka, É. Puybareau, N. Rajpoot, Kanghyun Ryu, Numan Saeed, A. Shephard, Pengcheng Shi, Dejan Štepec, Ronast Subedi, G. Tochon, Helena R. Torres, Hélène Urien, João L. Vilaça, K. Wahid, Haojie Wang, Jiacheng Wang, Lian Wang, Xiyue Wang, Benedikt Wiestler, Marek Wodziński, Fangfang Xia, Juanying Xie, Zhiwei Xiong, Sen Yang, Yanwu Yang, Zixuan Zhao, K. Maier-Hein, Paul F. Jäger, A. Kopp-Schneider, L. Maier-Hein
International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multicenter study covering all 80 competitions conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses based on comprehensive descriptions of the submitted algorithms, linked to their rank as well as the underlying participation strategies, revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and post-processing (66%). The "typical" lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly ranked teams: reflecting the evaluation metrics in the method design and focusing on the analysis and handling of failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art, but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work.
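One of the two highlighted development strategies, reflecting the evaluation metrics in the method design, is often realized in segmentation challenges by optimizing a differentiable surrogate of the ranking metric itself. The sketch below is a minimal, hypothetical illustration of this idea rather than a method from the paper: a soft Dice loss in PyTorch, assuming binary segmentation ranked by the Dice score; the function name and the `eps` smoothing term are our own.

```python
import torch

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """Differentiable (soft) Dice loss for binary segmentation.

    logits: raw model outputs of shape (N, H, W)
    target: binary ground-truth masks of shape (N, H, W)
    """
    probs = torch.sigmoid(logits)                    # map logits to [0, 1]
    intersection = (probs * target).sum(dim=(1, 2))  # per-sample overlap
    denom = probs.sum(dim=(1, 2)) + target.sum(dim=(1, 2))
    dice = (2.0 * intersection + eps) / (denom + eps)
    return 1.0 - dice.mean()                         # minimizing this maximizes Dice

# Illustrative usage (model, images, and masks are placeholders):
# loss = soft_dice_loss(model(images), masks.float())
# loss.backward()
```

Training against such a surrogate aligns the optimization objective with the metric used for ranking, rather than relying on a generic loss such as cross-entropy alone.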
2023