Sign language translation (SLT) is the task of translating between a spoken language and the sign language used in the same country; such language pairs tend to show high lexical similarity but low syntactic similarity. The recent emergence of large language models (LLMs) has brought remarkable gains across downstream tasks in natural language processing, but LLMs have yet to be applied to SLT. In this paper, we explore how to use an LLM with vocabulary sharing for two gloss-based SLT tasks, text-to-gloss (T2G) and gloss-to-text (G2T), on the NIASL2021 dataset, which consists of 180,848 preprocessed Korean and Korean Sign Language (KSL) sentence pairs. The experimental results show that Ko-GPT-Trinity-1.2B+VS, a GPT-3-based SLT model with vocabulary sharing, outperformed the other SLT models, achieving BLEU-4 scores of 22.06 and 45.89 on the T2G and G2T tasks, respectively. We expect that adopting an LLM with vocabulary sharing will significantly alleviate the resource scarcity problem of SLT.