Abstract
This paper concentrates on the user-centered search task at ImageCLEF 2004. In this work, we combine textual and visual features for cross-language image retrieval and propose two interactive retrieval systems: T_ICLEF and VCT_ICLEF. The first incorporates a relevance feedback mechanism based on textual information, while the second combines textual and image information to help users find a target image. The experimental results show that VCT_ICLEF performed better in almost all cases. Overall, it helped users find the topic image in fewer iterations, saving up to 2 iterations. Our user survey also reported that a combination of textual and visual information helps users indicate to the system what they really have in mind.
| Original language | English |
|---|---|
| Pages (from-to) | 793-804 |
| Number of pages | 12 |
| Journal | Lecture Notes in Computer Science |
| Volume | 3491 |
| Publication status | Published - 2005 |
| Externally published | Yes |
| Event | 5th Workshop of the Cross-Language Evaluation Forum, CLEF 2004: Multilingual Information Access for Text, Speech and Images - Bath, United Kingdom, 15–17 September 2004 |
ASJC Scopus subject areas
- Theoretical Computer Science
- General Computer Science