Question Answering (QA), as a next step beyond search engines, has recently emerged as a technology that promises to deliver “answers” instead of “documents”. In spite of significant advances in QA technology, QA research has focused on “short answer” or factoid questions, for which a single word or short phrase is provided as an answer, together with the answer sentence that contains it.
Recently, researchers have begun to explore answering more complex questions, ranging from definition questions to “relationship” questions, by employing high-level inference and domain-specific knowledge. However, implementing such systems is expensive, as it requires sophisticated linguistic resources and human effort, all of which take time to develop. Meanwhile, hybrid approaches that combine a number of different QA techniques have become de rigueur in many practical applications. However, most multi-strategy systems assume a pre-defined policy for combining individual sub-components, so they are hard to adapt to new situations, such as when a new component becomes available.
This thesis proposes a strategy-driven compositional QA model that provides a framework for answering increasingly complex questions with a variety of atomic-level QA modules. The model aims not only to improve the performance of a QA system on atomic questions by having QA modules of different sorts collaborate, but also to answer non-atomic, complex questions by composing the modules with various strategies.
For a user question, answer candidates are first generated by QA modules, each of which could itself serve as a QA system in a traditional QA setting, and are then validated with an appropriate strategy. Given a question, a strategy is chosen by the learning-based classification algorithm we developed. In executing a strategy, multiple QA modules are used in a cooperative fashion. The strategy learning algorithm makes it easy to ado...
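As an illustrative sketch only (not the thesis implementation, whose modules, strategies, and classifier are not detailed in this abstract), the compositional flow described above — QA modules generating scored candidates, a strategy classifier selecting how to combine them, and the chosen strategy coordinating the modules — might look like:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# All module, strategy, and classifier names below are hypothetical
# placeholders standing in for the components the model describes.

@dataclass
class Candidate:
    answer: str
    score: float

# A QA module maps a question to scored answer candidates; each one
# could serve as a stand-alone QA system in a traditional setting.
QAModule = Callable[[str], List[Candidate]]

def factoid_module(question: str) -> List[Candidate]:
    # Toy stand-in for a factoid QA module.
    return [Candidate("Paris", 0.9), Candidate("Lyon", 0.3)]

def definition_module(question: str) -> List[Candidate]:
    # Toy stand-in for a definitional QA module.
    return [Candidate("The capital and largest city of France", 0.8)]

def vote_strategy(question: str, modules: List[QAModule]) -> Candidate:
    # Cooperative use of modules: pool all candidates and sum the
    # scores contributed to each distinct answer.
    pooled: Dict[str, float] = {}
    for module in modules:
        for cand in module(question):
            pooled[cand.answer] = pooled.get(cand.answer, 0.0) + cand.score
    best = max(pooled, key=pooled.get)
    return Candidate(best, pooled[best])

def choose_strategy(question: str):
    # Placeholder for the learned strategy classifier; a real system
    # would train this from labeled question/strategy pairs rather
    # than returning a fixed strategy.
    return vote_strategy

def answer(question: str, modules: List[QAModule]) -> Candidate:
    strategy = choose_strategy(question)
    return strategy(question, modules)

result = answer("What is the capital of France?",
                [factoid_module, definition_module])
print(result.answer)  # -> Paris
```

Because the strategy is selected per question rather than fixed in advance, adding a new module or strategy only extends the lists the classifier chooses over, rather than requiring the combination policy to be rewritten.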