Super Resolution Method Using Channel Attention for High Frequency Feature Emphasis
In this paper, we propose a method that applies channel attention to emphasize features in CNN-based single image super resolution. Existing pre-upsampling methods using deep learning take an image enlarged by bicubic interpolation as input, which increases the computational cost and introduces distortion and meaningless values during the enlargement process. Furthermore, existing CNN-based super-resolution methods do not emphasize high-frequency components such as contours and textures during feature extraction, so contours become blurred or distorted. To solve these problems, the low-resolution image itself is used as input, and a channel attention structure with residual blocks is employed. The channel attention extracts features effectively by emphasizing high-frequency components and suppressing low-frequency components. The emphasized feature map is then enlarged with sub-pixel convolution instead of deconvolution to produce the super-resolved image. As a result, unnecessary redundant operations are reduced, and diverse features are extracted through many convolutions. Experiments show that contour and texture representation is improved compared with bicubic interpolation and VDSR.
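To make the two key operations concrete, the following is a minimal NumPy sketch, not the authors' implementation: a squeeze-and-excitation-style channel attention gate (with random, untrained weights purely for illustration) and the pixel-rearrangement step of sub-pixel convolution, which maps a (C·r², H, W) feature map to a (C, H·r, W·r) image.

```python
import numpy as np

def channel_attention(fmap, reduction=4):
    """Channel attention on a (C, H, W) feature map (illustrative sketch).

    Global average pooling yields one statistic per channel; two small fully
    connected layers (random, untrained weights here) produce per-channel
    gates in (0, 1) that rescale the feature map, so channels carrying
    high-frequency detail can be emphasized and flat channels suppressed.
    """
    c, h, w = fmap.shape
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1  # squeeze weights (untrained)
    w2 = rng.standard_normal((c, c // reduction)) * 0.1  # excite weights (untrained)
    squeeze = fmap.mean(axis=(1, 2))                     # (C,) global average pool
    hidden = np.maximum(w1 @ squeeze, 0.0)               # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))         # sigmoid gate per channel
    return fmap * gates[:, None, None]                   # rescale each channel

def pixel_shuffle(fmap, r):
    """Sub-pixel rearrangement: (C*r^2, H, W) -> (C, H*r, W*r)."""
    c_r2, h, w = fmap.shape
    c = c_r2 // (r * r)
    # Each group of r*r channels is interleaved into an r x r spatial block.
    return (fmap.reshape(c, r, r, h, w)
                .transpose(0, 3, 1, 4, 2)
                .reshape(c, h * r, w * r))
```

Because pixel shuffle only rearranges values produced by ordinary convolutions in low-resolution space, it avoids the redundant computation and checkerboard-prone overlap of deconvolution on an already-enlarged input.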