Prior literature has widely used learnable upscaling modules in convolutional neural network-based super-resolution algorithms. However, depending solely on these learnable layers is undesirable, since they create unintended artifacts. Instead, it is preferable to use a classical interpolation-based method as a backbone and to fill in the missing high-frequency details with learnable parameters. Bicubic interpolation has been the most popular such method, but it is vulnerable to some downsampling kernels because it depends on only a few neighboring pixels. Some kernels cause a single pixel of the downsampled image to carry information from a large set of pixels in the high-resolution image. In this paper, we suggest that a frequency domain-based interpolation method, which utilizes the spectral information coming from all pixels of the input image to predict each pixel of the super-resolved image, offers a possible solution to the issues mentioned above.
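To illustrate the contrast with bicubic interpolation, a frequency-domain interpolator can be sketched as zero-padding the centered DFT spectrum of the low-resolution image, so that every output pixel is a weighted combination of all input pixels. This is only a minimal sketch of the general idea, not the paper's exact method; the function name and energy-scaling convention are our own assumptions:

```python
import numpy as np

def fft_upscale(img, scale):
    """Upscale a grayscale image by zero-padding its centered spectrum.

    Each output pixel is a (sinc-like) weighted sum of ALL input pixels,
    unlike bicubic interpolation, which uses only a small neighborhood.
    Illustrative sketch only; `fft_upscale` is a hypothetical helper.
    """
    h, w = img.shape
    H, W = h * scale, w * scale
    # Centered spectrum of the low-resolution image
    spec = np.fft.fftshift(np.fft.fft2(img))
    # Zero-pad the spectrum out to the high-resolution size
    pad = np.zeros((H, W), dtype=complex)
    top, left = (H - h) // 2, (W - w) // 2
    pad[top:top + h, left:left + w] = spec
    # Back to the spatial domain; scale so intensities stay comparable
    up = np.fft.ifft2(np.fft.ifftshift(pad)) * (scale ** 2)
    return np.real(up)
```

For example, upscaling a constant image by any factor returns (up to floating-point error) the same constant, since all spectral energy sits at the DC bin.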