TY - GEN
T1 - PADU-Net
T2 - 13th IEEE Global Conference on Consumer Electronics, GCCE 2024
AU - Hu, Jing Hung
AU - Kang, Li Wei
AU - Chang, Pao Chi
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Retinal vessel segmentation is a key step for the early diagnosis of fundus diseases. Deep learning-based retinal vessel segmentation has shown the potential to achieve better performance than traditional methods. However, most deep learning-based methods still fail to sufficiently capture global and local features simultaneously from fundus images. This may degrade segmentation performance, hindering reliable diagnosis of fundus diseases. To solve this problem, this paper introduces PADU-Net, a parallel attention-based dual U-Net architecture for retinal vessel segmentation. The key is to integrate two parallel U-Net modules, i.e., encoder-decoder architectures, equipped with local and global attention modules, respectively, used for extracting local and global features. The features are then decoded and fused to generate the segmentation map for the input fundus image. Experiments conducted on the well-known DRIVE (digital retinal images for vessel extraction) dataset have verified the performance of the proposed framework, which outperforms SOTA (state-of-the-art) methods.
AB - Retinal vessel segmentation is a key step for the early diagnosis of fundus diseases. Deep learning-based retinal vessel segmentation has shown the potential to achieve better performance than traditional methods. However, most deep learning-based methods still fail to sufficiently capture global and local features simultaneously from fundus images. This may degrade segmentation performance, hindering reliable diagnosis of fundus diseases. To solve this problem, this paper introduces PADU-Net, a parallel attention-based dual U-Net architecture for retinal vessel segmentation. The key is to integrate two parallel U-Net modules, i.e., encoder-decoder architectures, equipped with local and global attention modules, respectively, used for extracting local and global features. The features are then decoded and fused to generate the segmentation map for the input fundus image. Experiments conducted on the well-known DRIVE (digital retinal images for vessel extraction) dataset have verified the performance of the proposed framework, which outperforms SOTA (state-of-the-art) methods.
KW - attention model
KW - deep learning
KW - encoder-decoder architecture
KW - retinal vessel segmentation
KW - UNet
UR - https://www.scopus.com/pages/publications/85213396537
UR - https://www.scopus.com/pages/publications/85213396537#tab=citedBy
U2 - 10.1109/GCCE62371.2024.10760672
DO - 10.1109/GCCE62371.2024.10760672
M3 - Conference contribution
AN - SCOPUS:85213396537
T3 - GCCE 2024 - 2024 IEEE 13th Global Conference on Consumer Electronics
SP - 152
EP - 153
BT - GCCE 2024 - 2024 IEEE 13th Global Conference on Consumer Electronics
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 29 October 2024 through 1 November 2024
ER -