Comparison and combination of textual and visual features for interactive cross-language image retrieval

Pei Cheng Cheng*, Jen Yuan Yeh, Hao Ren Ke, Been Chian Chien, Wei Pang Yang

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

3 Citations (Scopus)

Abstract

This paper concentrates on the user-centered search task at ImageCLEF 2004. In this work, we combine textual and visual features for cross-language image retrieval, and propose two interactive retrieval systems, T_ICLEF and VCT_ICLEF. The first incorporates a relevance feedback mechanism based on textual information, while the second combines textual and image information to help users find a target image. The experimental results show that VCT_ICLEF performed better in almost all cases: overall, it helped users find the topic image in fewer iterations, saving up to two iterations. Our user survey also reported that combining textual and visual information helps users indicate to the system what they really have in mind.
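The abstract describes fusing textual and visual evidence to rank candidate images, but does not give the fusion formula. A minimal illustrative sketch, assuming a simple score-level fusion of cosine similarities with a hypothetical weight `alpha` (not taken from the paper):

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def combined_score(text_query, text_doc, vis_query, vis_doc, alpha=0.5):
    # Weighted linear fusion: alpha weights the textual similarity,
    # (1 - alpha) weights the visual similarity. alpha is illustrative.
    return (alpha * cosine(text_query, text_doc)
            + (1 - alpha) * cosine(vis_query, vis_doc))
```

With `alpha = 1.0` this reduces to purely textual ranking (as in T_ICLEF); intermediate values mix in visual similarity (as in VCT_ICLEF).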

Original language: English
Pages (from-to): 793-804
Number of pages: 12
Journal: Lecture Notes in Computer Science
Volume: 3491
DOIs
Publication status: Published - 2005
Externally published: Yes
Event: 5th Workshop of the Cross-Language Evaluation Forum, CLEF 2004: Multilingual Information Access for Text, Speech and Images - Bath, United Kingdom
Duration: 2004 Sept 15 - 2004 Sept 17

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
