Author: Ionescu, Bogdan; Müller, Henning; Péteri, Renaud; Dang-Nguyen, Duc-Tien; Zhou, Liting; Piras, Luca; Riegler, Michael; Halvorsen, Pål; Tran, Minh-Triet; Lux, Mathias; Gurrin, Cathal; Chamberlain, Jon; Clark, Adrian; Campello, Antonio; Seco de Herrera, Alba G.; Ben Abacha, Asma; Datla, Vivek; Hasan, Sadid A.; Liu, Joey; Demner-Fushman, Dina; Pelka, Obioma; Friedrich, Christoph M.; Dicente Cid, Yashin; Kozlovski, Serge; Liauchuk, Vitali; Kovalev, Vassili; Berari, Raul; Brie, Paul; Fichou, Dimitri; Dogariu, Mihai; Stefan, Liviu Daniel; Constantin, Mihai Gabriel
Title: ImageCLEF 2020: Multimedia Retrieval in Lifelogging, Medical, Nature, and Internet Applications
Cord-id: 8go3wflu
Document date: 2020-03-24
ID: 8go3wflu
Document: This paper presents an overview of the 2020 ImageCLEF lab, which will be organized as part of the Conference and Labs of the Evaluation Forum—CLEF Labs 2020 in Thessaloniki, Greece. ImageCLEF is an ongoing evaluation initiative (run since 2003) that promotes the evaluation of technologies for the annotation, indexing, and retrieval of visual data, with the aim of providing information access to large collections of images in various usage scenarios and domains. In 2020, the 18th edition of ImageCLEF will organize four main tasks: (i) a Lifelog task (videos, images, and other sources) on daily activity understanding, retrieval, and summarization; (ii) a Medical task that groups three previous tasks (caption analysis, tuberculosis prediction, and medical visual question answering) with new data and adapted subtasks; (iii) a Coral task on segmenting and labeling collections of coral images for 3D modeling; and (iv) a new Web user interface task addressing the detection and recognition of hand-drawn website UIs (user interfaces) for automatic code generation. The strong participation in 2019, with over 235 research groups registering and 63 of them submitting over 359 runs across the tasks, shows considerable interest in this benchmarking campaign. We expect the new tasks to attract at least as many researchers in 2020.