TY - GEN
T1 - Transformer-based Inverse Halftoning with Attention Mechanism for Halftone Image Reconstruction
AU - Lee, Wang Han
AU - Huang, Pin Tzu
AU - Kang, Li Wei
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Digital halftoning, which refers to converting a continuous-tone image into a bi-level halftone image, has been applied to several bi-level output devices. However, inverse halftoning, a classic image restoration problem, remains challenging, as it must reconstruct the continuous tones and image details from halftone images. In this paper, a transformer-based deep inverse halftoning network with an attention mechanism is proposed for halftone image restoration. The key is an encoder-decoder architecture consisting of Swin Transformer, channel attention, and global/local attention modules for image feature learning and reconstruction. As a result, the proposed network effectively learns hierarchical features from the input halftone image and reconstructs the corresponding continuous-tone image well. The proposed deep model has been shown to outperform state-of-the-art (SOTA) deep halftone image restoration networks both quantitatively and qualitatively.
AB - Digital halftoning, which refers to converting a continuous-tone image into a bi-level halftone image, has been applied to several bi-level output devices. However, inverse halftoning, a classic image restoration problem, remains challenging, as it must reconstruct the continuous tones and image details from halftone images. In this paper, a transformer-based deep inverse halftoning network with an attention mechanism is proposed for halftone image restoration. The key is an encoder-decoder architecture consisting of Swin Transformer, channel attention, and global/local attention modules for image feature learning and reconstruction. As a result, the proposed network effectively learns hierarchical features from the input halftone image and reconstructs the corresponding continuous-tone image well. The proposed deep model has been shown to outperform state-of-the-art (SOTA) deep halftone image restoration networks both quantitatively and qualitatively.
KW - attention model
KW - deep learning
KW - digital halftoning
KW - inverse halftoning
KW - transformer
UR - http://www.scopus.com/inward/record.url?scp=85213320118&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85213320118&partnerID=8YFLogxK
U2 - 10.1109/GCCE62371.2024.10760529
DO - 10.1109/GCCE62371.2024.10760529
M3 - Conference contribution
AN - SCOPUS:85213320118
T3 - GCCE 2024 - 2024 IEEE 13th Global Conference on Consumer Electronics
SP - 1189
EP - 1190
BT - GCCE 2024 - 2024 IEEE 13th Global Conference on Consumer Electronics
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 13th IEEE Global Conference on Consumer Electronics, GCCE 2024
Y2 - 29 October 2024 through 1 November 2024
ER -