Automatic image annotation retrieval system

Pei Cheng Cheng, Been Chian Chien, Hao Ren Ke, Wei Pang Yang

Research output: Contribution to journal › Article

Abstract

Content-based image retrieval (CBIR) is a group of techniques that analyze the visual features, such as color, shape, and texture, of an example image or image sub-region to find similar images in an image database. However, querying by image example or sketch is sometimes inconvenient for users. In this paper, we propose an automatic annotation image retrieval system that allows users to query images by keyword. In our system, an image is segmented into regions, each of which corresponds to an object. The regions identified by region-based segmentation are more consistent with human cognition than those identified by block-based segmentation. According to an object's visual features (color and shape), new objects are mapped to similar clusters to obtain their associated semantic concepts. The semantic concepts derived from the training images may not match the real semantic concepts of the underlying images, because the former depend only on low-level visual features. To ameliorate this problem, we also propose a relevance-feedback model that learns the interests of users. Experiments show that the proposed algorithm outperforms the traditional co-occurrence model by about 14.48%; furthermore, after five rounds of relevance feedback, the mean average precision improves from 42.7% to 62.7%.
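The annotation-by-clustering idea in the abstract can be illustrated with a toy sketch (all names, feature vectors, and the Rocchio-style feedback update below are illustrative assumptions, not the paper's actual method): each cluster of training regions carries a centroid of low-level visual features and a keyword, a new region inherits the keyword of its nearest cluster, and user feedback shifts centroids toward regions marked relevant.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def annotate_region(features, clusters):
    """Label a region with the keyword of the nearest cluster centroid."""
    nearest = min(clusters, key=lambda c: euclidean(features, c["centroid"]))
    return nearest["keyword"]

def feedback_update(centroid, relevant_features, alpha=0.5):
    """Rocchio-style update: move a centroid toward the mean of the
    feature vectors of regions the user marked as relevant."""
    dim = len(centroid)
    mean = [sum(f[i] for f in relevant_features) / len(relevant_features)
            for i in range(dim)]
    return [(1 - alpha) * c + alpha * m for c, m in zip(centroid, mean)]

# Hypothetical 3-dimensional color features and semantic keywords.
clusters = [
    {"keyword": "sky",   "centroid": [0.2, 0.6, 0.9]},
    {"keyword": "grass", "centroid": [0.3, 0.8, 0.2]},
    {"keyword": "sand",  "centroid": [0.9, 0.8, 0.5]},
]

print(annotate_region([0.25, 0.7, 0.85], clusters))  # -> sky
```

This conveys why the derived concepts can diverge from the true semantics: the mapping sees only low-level features, which is exactly the gap the relevance-feedback step is meant to close.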

Original language: English
Pages (from-to): 984-991
Number of pages: 8
Journal: WSEAS Transactions on Communications
Volume: 5
Issue number: 6
Publication status: Published - June 2006
Externally published: Yes


Keywords

  • Co-occurrence model
  • Keyword-based image retrieval
  • Relevance feedback

ASJC Scopus subject areas

  • Computer Science Applications
  • Computer Networks and Communications
  • Electrical and Electronic Engineering

Cite this

Cheng, P. C., Chien, B. C., Ke, H. R., & Yang, W. P. (2006). Automatic image annotation retrieval system. WSEAS Transactions on Communications, 5(6), 984-991.
