Decision tree-based contrast enhancement for various color images

Chun Ming Tsai*, Zong Mu Yeh, Yuan Fang Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

19 Citations (Scopus)

Abstract

Conventional contrast enhancement methods are application-oriented: they rely on transformation functions and parameters that must be specified manually. Furthermore, most of them do not produce satisfactory results for certain types of color images: dark, low-contrast, bright, mostly dark, high-contrast, and mostly bright. This paper therefore proposes a decision tree-based contrast enhancement algorithm that handles all of these image types within a single framework. The method comprises three steps. First, statistical image features are extracted from the luminance distribution. Second, a decision tree-based classification divides input images into dark, low-contrast, bright, mostly dark, high-contrast, and mostly bright categories. Finally, each category is enhanced by a piecewise linear transformation. The method is automatic and parameter-free. Experiments covered a variety of color and gray-level images, and the results show that the proposed method outperforms other available methods in skin detection, visual perception, and image subtraction measurements.
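The sketch below illustrates the three-step pipeline described in the abstract (luminance feature extraction, decision tree classification, piecewise linear enhancement). It is a minimal illustration only: the feature set, decision thresholds, and piecewise linear knots are assumptions chosen for demonstration and do not reproduce the paper's actual decision tree or transformation curves.

```python
# Illustrative sketch of the pipeline outlined in the abstract.
# Thresholds and breakpoints below are assumptions, not the paper's values.
import numpy as np


def luminance_features(rgb):
    """Extract simple statistics from the luminance distribution."""
    # Rec. 601 luma approximation; the paper may use a different luminance model.
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return {"mean": float(y.mean()), "std": float(y.std()), "y": y}


def classify(features):
    """Toy decision tree over luminance mean/std (thresholds are assumptions)."""
    mean, std = features["mean"], features["std"]
    if std < 40:                      # narrow histogram -> low contrast
        return "low-contrast"
    if std > 80:                      # wide histogram -> high contrast
        return "high-contrast"
    if mean < 60:
        return "dark"
    if mean < 110:
        return "mostly dark"
    if mean > 190:
        return "bright"
    return "mostly bright"


def piecewise_linear(y, knots):
    """Apply a piecewise linear intensity mapping defined by (input, output) knots."""
    xs, ys = zip(*knots)
    return np.interp(y, xs, ys)


def enhance(rgb):
    """Classify the image, then stretch luminance with a category-specific mapping."""
    feats = luminance_features(rgb.astype(np.float64))
    category = classify(feats)
    # Hypothetical per-category knots, chosen only to illustrate the idea.
    knots = {
        "dark":          [(0, 0), (64, 128), (255, 255)],
        "mostly dark":   [(0, 0), (96, 144), (255, 255)],
        "low-contrast":  [(0, 0), (64, 32), (192, 224), (255, 255)],
        "high-contrast": [(0, 0), (64, 80), (192, 176), (255, 255)],
        "mostly bright": [(0, 0), (160, 112), (255, 255)],
        "bright":        [(0, 0), (192, 128), (255, 255)],
    }[category]
    y_old = feats["y"]
    y_new = piecewise_linear(y_old, knots)
    # Rescale each RGB channel by the luminance gain to roughly preserve hue.
    gain = (y_new + 1e-6) / (y_old + 1e-6)
    out = np.clip(rgb.astype(np.float64) * gain[..., None], 0, 255)
    return out.astype(np.uint8), category
```

As a usage example, `enhance(image)` on an H×W×3 uint8 array returns the enhanced image together with the category label assigned by the toy classifier.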

Original language: English
Pages (from-to): 21-37
Number of pages: 17
Journal: Machine Vision and Applications
Volume: 22
Issue number: 1
DOIs
Publication status: Published - September 2009

Keywords

  • Color images
  • Contrast enhancement
  • Decision tree-based classification
  • Parameter-free enhancement
  • Piecewise linear transformation

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
