TY - JOUR
T1 - Automatic image annotation retrieval system
AU - Cheng, Pei-Cheng
AU - Chien, Been-Chian
AU - Ke, Hao-Ren
AU - Yang, Wei-Pang
PY - 2006/6
Y1 - 2006/6
N2 - Content-based image retrieval (CBIR) is a group of techniques that analyzes the visual features, such as color, shape, and texture, of an example image or image sub-region to find similar images in an image database. Querying by image example or sketch is sometimes inconvenient for users. In this paper, we propose an automatic annotation image retrieval system that allows users to query images by keywords. In our system, an image is segmented into regions, each of which corresponds to an object. The regions identified by region-based segmentation are more consistent with human cognition than those identified by block-based segmentation. According to an object's visual features (color and shape), new objects are mapped to similar clusters to obtain their associated semantic concepts. The semantic concepts derived from the training images may not match the real semantic concepts of the underlying images, because the former depend on low-level visual features. To ameliorate this problem, we also propose a relevance-feedback model that learns the interests of users. The experiments show that the proposed algorithm outperforms the traditional co-occurrence model by about 14.48%; furthermore, after five rounds of relevance feedback, the mean average precision improves from 42.7% to 62.7%.
AB - Content-based image retrieval (CBIR) is a group of techniques that analyzes the visual features, such as color, shape, and texture, of an example image or image sub-region to find similar images in an image database. Querying by image example or sketch is sometimes inconvenient for users. In this paper, we propose an automatic annotation image retrieval system that allows users to query images by keywords. In our system, an image is segmented into regions, each of which corresponds to an object. The regions identified by region-based segmentation are more consistent with human cognition than those identified by block-based segmentation. According to an object's visual features (color and shape), new objects are mapped to similar clusters to obtain their associated semantic concepts. The semantic concepts derived from the training images may not match the real semantic concepts of the underlying images, because the former depend on low-level visual features. To ameliorate this problem, we also propose a relevance-feedback model that learns the interests of users. The experiments show that the proposed algorithm outperforms the traditional co-occurrence model by about 14.48%; furthermore, after five rounds of relevance feedback, the mean average precision improves from 42.7% to 62.7%.
KW - Co-occurrence model
KW - Keyword-based image retrieval
KW - Relevance feedback
UR - http://www.scopus.com/inward/record.url?scp=33745545461&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=33745545461&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:33745545461
SN - 1109-2742
VL - 5
SP - 984
EP - 991
JO - WSEAS Transactions on Communications
JF - WSEAS Transactions on Communications
IS - 6
ER -