Sparse General Matrix-Matrix Multiplication (SpGEMM) is a key computational kernel in various emerging applications, such as linear algebra, computational chemistry, graph analytics, and deep learning. These applications are memory-bound because real-world matrices from graphs and AI workloads are extremely sparse, with densities as low as 0.0001%. A prior row-wise state-of-the-art accelerator introduces a highly banked cache to maximize output reuse. However, it utilizes the cache inefficiently: it processes multiple rows concurrently with high-radix but low-throughput mergers, which limits output reuse. To address this problem, this paper proposes a bitonic-sorter-based high-radix and high-throughput merger that maximizes output reuse. We minimize the overhead of the high-throughput merger by removing redundant comparisons in the bitonic sorter with a novel one-cycle prediction scheme. We further develop a fully pipelined accumulator and aligner to mitigate the long latency penalty. We implement a cycle-accurate simulator based on gem5, which shows 2x, 6x, and 47x speedups over the prior state-of-the-art MatRaptor, a GPU, and a CPU, respectively.
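To make the merger concrete, the following is a minimal software sketch of the bitonic-merge network that such a hardware merger maps onto. It is an illustration only, not the paper's implementation: it merges two sorted runs (e.g., column indices of partial-product rows) by reversing one run to form a bitonic sequence and then applying the fixed butterfly of compare-exchange stages, each stage of which a hardware design can execute in parallel.

```python
def bitonic_merge(seq, ascending=True):
    """Sort a bitonic sequence with a fixed compare-exchange network.

    The stages (stride n/2, n/4, ..., 1) correspond to the pipeline
    stages a hardware bitonic merger would evaluate in parallel.
    """
    n = len(seq)
    assert n and (n & (n - 1)) == 0, "length must be a power of two"
    seq = list(seq)
    stride = n // 2
    while stride:
        for i in range(n):
            j = i ^ stride  # butterfly partner index
            if j > i and (seq[i] > seq[j]) == ascending:
                seq[i], seq[j] = seq[j], seq[i]  # compare-exchange
        stride //= 2
    return seq

# Two sorted runs of column indices; reversing the second run
# makes the concatenation bitonic, so the network sorts it fully.
a = [1, 4, 7, 9]
b = [2, 3, 8, 10]
merged = bitonic_merge(a + b[::-1])  # [1, 2, 3, 4, 7, 8, 9, 10]
```

Because the comparison pattern is fixed and data-independent, every stage can be fully pipelined, which is what enables a high-radix merger to also sustain high throughput.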