SGENet: Spatial Guided Enhancement Network for Image Motion Deblurring

Yu Chieh Wang, Chia Hung Yeh

Research output: Contribution to conference › Paper › peer-review


Multi-stage architectures have been widely used for image motion deblurring and have achieved strong performance. Previous methods restore the blurred image by extracting spatial details from the blurred input itself. However, the blurred image cannot provide accurate high-frequency details, which degrades overall deblurring performance. To address this issue, we propose a novel dual-stage architecture that fully extracts the high-frequency information of blurred images for reconstructing detailed textures. Specifically, we introduce a supervised guidance mechanism that provides precise spatial details to recalibrate the multi-scale features. Furthermore, an attention-based feature aggregator is proposed to adaptively fuse influential features from different stages, suppressing redundant information from the earlier stage before it passes to the next stage and allowing an efficient multi-stage architecture design. Extensive experiments on the GoPro and HIDE benchmark datasets show that the proposed network achieves state-of-the-art deblurring performance with low computational complexity compared to existing methods.
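As a rough illustration of the attention-based fusion idea described in the abstract, the sketch below fuses feature maps from two stages with per-position softmax weights, so positions where one stage carries weak activations contribute less to the fused output. All function names and the particular weighting scheme are assumptions for illustration, not the paper's actual aggregator design.

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(f_early, f_late):
    """Hypothetical sketch: fuse two stages' feature maps (C, H, W)
    with per-position attention weights computed by a softmax over
    the stacked activations, down-weighting redundant early features."""
    stacked = np.stack([f_early, f_late])   # (2, C, H, W)
    weights = softmax(stacked, axis=0)      # attention over the stage axis
    return (weights * stacked).sum(axis=0)  # weighted fusion, (C, H, W)

# Toy example: an all-zero early map and an all-one later map.
f_early = np.zeros((1, 2, 2))
f_late = np.ones((1, 2, 2))
fused = attention_fuse(f_early, f_late)
```

In this toy case the later stage's stronger activations receive the larger softmax weight, so the fused map sits closer to `f_late` than to `f_early`; a learned aggregator would produce such weights from trained attention layers rather than raw magnitudes.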

Original language: English
Publication status: Published - 2022
Event: 33rd British Machine Vision Conference Proceedings, BMVC 2022 - London, United Kingdom
Duration: 2022 Nov 21 - 2022 Nov 24


Conference: 33rd British Machine Vision Conference Proceedings, BMVC 2022
Country/Territory: United Kingdom

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition


