Developing efficient hardware solutions for processing Convolutional Neural Networks (CNNs) is an active area of research in the computer architecture community. While several model-level modifications have been proposed over the years, the use of a transformed convolution scheme is the only approach that guarantees a performance improvement without any loss of accuracy. Among the transformed convolution schemes, the Winograd minimal filtering algorithm offers up to a 2.25X performance improvement by significantly reducing the overall compute intensity of the CNN. The Winograd convolution algorithm also exhibits an inherent form of parallelism, called Intra-Tile Parallelism, which presents a unique opportunity to further speed up CNN processing. Our work proposes an efficient dataflow architecture that exploits this Intra-Tile Parallelism to accelerate CNN processing on the ResNet model. The performance improvements achieved in our experiments on the ResNet model outperform the state-of-the-art results provided by NVIDIA's cuDNN library: we measured speedups of up to 2.14X in CNN layer processing time and device memory bandwidth savings of up to 2.3X on a Volta V100 Graphics Processing Unit (GPU) inside NVIDIA's DGX-1 system, relative to their cuDNN-based counterparts.
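To illustrate where the 2.25X figure comes from, the following is a minimal sketch (in Python, as an illustration only, not the architecture described above) of the 1-D Winograd minimal filtering algorithm F(2,3). It computes two outputs of a 3-tap convolution with 4 multiplications instead of the 6 required by direct convolution; nesting the transform in 2-D to obtain F(2x2, 3x3) reduces 36 multiplications to 16, i.e. the 2.25X reduction cited above.

```python
def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 1-D convolution of a
    4-element input tile d with a 3-tap filter g, using only
    4 multiplications (m1..m4); direct convolution needs 6."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return [m1 + m2 + m3, m2 - m3 - m4]

def direct_conv(d, g):
    """Direct sliding-window convolution: 6 multiplications."""
    return [d[0] * g[0] + d[1] * g[1] + d[2] * g[2],
            d[1] * g[0] + d[2] * g[1] + d[3] * g[2]]

d = [1.0, 2.0, 3.0, 4.0]   # example input tile
g = [0.5, -1.0, 2.0]       # example filter
assert winograd_f23(d, g) == direct_conv(d, g)
```

Note that the filter-side factors (g[0] + g[1] + g[2]) / 2 and (g[0] - g[1] + g[2]) / 2 depend only on the weights, so in practice they are precomputed once per filter; the per-tile cost is then just the four multiplications, which is the compute-intensity reduction the abstract refers to.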