Hypergraphs can express higher-order relations that ordinary graphs cannot, because a single hyperedge may connect any number of nodes. They naturally represent complicated real-world relations such as coauthorship, co-purchase, and chemical reactions. In these domains, predicting new hyperedges is an essential task with direct applications such as recommending sets of items or proposing probable sets of reacting chemicals. Hyperedge prediction is challenging, however, because the space of candidate hyperedges is exponential in the number of nodes; checking every possible hyperedge to find the most probable ones is practically infeasible. Existing methods therefore predefine a candidate set composed of real hyperedges (i.e., positive samples) and fake hyperedges (i.e., negative samples) drawn from the sample space, where the sampling procedure follows a heuristic rule that encodes assumptions about what negative samples look like. Nevertheless, we find that the negative samples used in training strongly affect a model's capability, and that performance varies widely depending on how negative samples are drawn for the test set. We propose an adversarial training method that requires no assumptions about negative samples during training and can be applied to any recent neural network model for hypergraphs. We additionally equip our model with a memory bank to stabilize training. We empirically show that our training method achieves the best average performance over three test sets, each generated with a different negative sampling method. Throughout this paper, we also analyze the effects of the generator and the memory bank and examine the negative samples the generator produces.