Research in hardware and artificial intelligence algorithms is progressing rapidly, and deep learning models are at the forefront of a continuing effort to replace tasks traditionally performed by humans with machines. The parameter counts of these deep learning-based models continue to grow, driving a steady rise in the required computational workload. Hardware accelerators for AI computation are being investigated to handle these growing demands, but they still face a data bottleneck between memory and the processor. Processing-in-memory (PIM) alleviates this bottleneck, which is inherent to the von Neumann architecture, by placing computation inside the memory and thereby reducing data movement between memory and processors. In this paper, we propose a DRAM-based analog computing-in-memory accelerator that employs the structure of the conventional 1T1C (one-transistor, one-capacitor) cell, using two cells to construct a single multiply-accumulate (MAC) unit. The proposed accelerator allows all units in the processor array to participate in computation simultaneously.
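To make the array-level behavior concrete, the following is a minimal functional sketch of a MAC array in which every unit contributes to the output in the same step. The pairing of two cells per MAC unit is modeled here as a differential encoding (a signed weight stored as the difference of two non-negative cell values); this encoding is an illustrative assumption, not necessarily the circuit technique used in the paper.

```python
# Conceptual sketch only: models a PIM MAC array where each unit is built
# from two cells. The differential encoding (w = w_pos - w_neg) is an
# assumption for illustration, not the paper's specific circuit.

def encode(w):
    """Split a signed weight into two non-negative cell values."""
    return (w, 0.0) if w >= 0 else (0.0, -w)

def mac_array(weights, x):
    """All MAC units operate on the input together: y_i = sum_j w_ij * x_j."""
    out = []
    for row in weights:
        acc = 0.0
        for w, xj in zip(row, x):
            pos, neg = encode(w)          # the two-cell pair for this unit
            acc += (pos - neg) * xj       # one MAC unit's contribution
        out.append(acc)
    return out

W = [[1.0, -2.0], [0.5, 3.0]]   # weights stored in the cell array
x = [2.0, 1.0]                  # input activations
print(mac_array(W, x))          # [0.0, 4.0]
```

In hardware, the inner accumulation would occur in the analog domain (e.g., as charge on bitlines) rather than as a software loop; the sketch only captures the dataflow.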