Adverse weather conditions such as rain, haze, or snow degrade the visual quality of images and videos, which in turn can significantly degrade the performance of downstream applications. In this paper, a novel framework based on a sequential dual attention deep network, called SSDRNet (Sequential dual attention-based Single image DeRaining deep Network), is proposed for removing rain streaks (deraining) from a single image. Since the inherent correlation among rain streaks within an image should be stronger than that between the rain streaks and the background (non-rain) pixels, a two-stage learning strategy is adopted to better capture the distribution of rain streaks within a rainy image. The two-stage deep neural network comprises three main building blocks: residual dense blocks (RDBs), sequential dual attention blocks (SDABs), and multi-scale feature aggregation modules (MAMs), all specifically designed for rain removal. The two-stage strategy learns the fine details of the rain streaks in an image and then cleanly removes them. Extensive experiments show that the proposed framework outperforms state-of-the-art methods on both qualitative and quantitative metrics. The code and trained model of the proposed SSDRNet are available online at https://github.com/fityanul/SDAN-for-Rain-Removal.
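The abstract does not specify the internals of the SDAB, but the general idea of sequential dual attention — applying channel attention and then spatial attention in sequence to reweight a feature map — can be sketched generically. The NumPy sketch below is an illustrative assumption in the spirit of common dual-attention designs, not the authors' actual SDAB: the pooling choices, the small two-layer gating network (`w1`, `w2`), and the additive spatial gate are all placeholders for learned layers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze spatial dims, excite channels.

    feat: (C, H, W) feature map; w1, w2: toy stand-ins for a
    learned bottleneck MLP that produces per-channel gates in (0, 1).
    """
    pooled = feat.mean(axis=(1, 2))                       # (C,) global average pool
    gates = sigmoid(w2 @ np.maximum(w1 @ pooled, 0.0))    # (C,) ReLU bottleneck + sigmoid
    return feat * gates[:, None, None]                    # rescale each channel

def spatial_attention(feat):
    """Pool across channels, gate each spatial position."""
    avg_map = feat.mean(axis=0)                           # (H, W)
    max_map = feat.max(axis=0)                            # (H, W)
    gate = sigmoid(avg_map + max_map)                     # stand-in for a learned conv
    return feat * gate[None, :, :]                        # rescale each position

def sequential_dual_attention(feat, w1, w2):
    # "Sequential": channel attention first, spatial attention on its output.
    return spatial_attention(channel_attention(feat, w1, w2))

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // 2, C))   # bottleneck down-projection
w2 = rng.standard_normal((C, C // 2))   # up-projection back to C channels
out = sequential_dual_attention(feat, w1, w2)
print(out.shape)  # shape is preserved: (8, 4, 4)
```

Because both gates lie in (0, 1), the module only attenuates features; in a deraining network such a block would let subsequent layers focus on channels and spatial locations dominated by rain-streak responses.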
- deep learning
- dilated convolution
- dual attention network
- single image rain streak removal
ASJC Scopus subject areas
- Computer Graphics and Computer-Aided Design