This paper focuses on the user-centered search task at ImageCLEF 2004. We combine textual and visual features for cross-language image retrieval and propose two interactive retrieval systems, T_ICLEF and VCT_ICLEF. The first incorporates a relevance feedback mechanism based on textual information, while the second combines textual and image information to help users find a target image. The experimental results show that VCT_ICLEF performed better in almost all cases: it helped users find the topic image in fewer iterations, saving up to two iterations. Our user survey also reported that combining textual and visual information helps users convey to the system what they actually have in mind.
|Pages (from - to)
|Lecture Notes in Computer Science
|Published - 2005
|5th Workshop of the Cross-Language Evaluation Forum, CLEF 2004: Multilingual Information Access for Text, Speech and Images - Bath, United Kingdom
Duration: 15 Sep 2004 → 17 Sep 2004
ASJC Scopus subject areas