When designing a good knowledge base (KB) from a question answering perspective, it is useful to evaluate whether the KB contains the answer to a given question. A common approach is to translate the natural language question into a SPARQL query; however, the performance of this translation varies with the language, structure, and length of the question. In this paper, we propose a new evaluation method that translates a natural language question into triple form. This allows us to assess whether the knowledge base contains the triple required to answer a given question. Moreover, the resulting triples can also serve as training data for building a better knowledge base; in other words, our evaluation method lets us learn from questions how to build a good knowledge base.
To demonstrate our evaluation method, we developed a KB evaluation program called KB-Evaluator and conducted an experiment measuring the coverage of several knowledge bases.
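The core idea of the coverage check can be sketched as follows. This is a minimal illustration, not the actual KB-Evaluator implementation: the KB is modeled as a set of (subject, predicate, object) triples, and the question-to-triple translation is stubbed with a hand-written mapping, since the paper's translation model is not reproduced here. All names and example questions are hypothetical.

```python
# Minimal sketch of the triple-based coverage check (illustrative only).
# The KB is a set of (subject, predicate, object) triples.
KB = {
    ("Seoul", "capitalOf", "South Korea"),
    ("Paris", "capitalOf", "France"),
}

def question_to_triple(question):
    """Hypothetical question-to-triple translation (stubbed lookup).
    None marks the slot the answer should fill."""
    patterns = {
        "What is the capital of France?": (None, "capitalOf", "France"),
        "What is the capital of Japan?": (None, "capitalOf", "Japan"),
    }
    return patterns[question]

def kb_covers(kb, pattern):
    """Return True if some triple in the KB matches the pattern,
    treating None as a wildcard slot."""
    return any(
        all(p is None or p == t for p, t in zip(pattern, triple))
        for triple in kb
    )

def coverage(kb, questions):
    """Fraction of questions whose required triple exists in the KB."""
    hits = sum(kb_covers(kb, question_to_triple(q)) for q in questions)
    return hits / len(questions)

questions = [
    "What is the capital of France?",
    "What is the capital of Japan?",
]
print(coverage(KB, questions))  # only the France question is covered -> 0.5
```

Triples that the KB fails to cover point directly at the facts that would have to be added, which is how the method doubles as a source of training data for KB construction.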