In this work we address the problem of creating relational word-pair embeddings, which encode the relation between two words as a compositional function of the individual word representations. Word-pair embeddings are useful for downstream NLP tasks such as natural language inference (NLI), where knowledge about the relation between individual words is critical for inferring the relation between two pieces of text. We propose a novel method that limits word pairs to those co-occurring in close proximity, which significantly reduces computation time while maintaining or improving embedding quality. In addition, we propose to incorporate external knowledge from hierarchical sources (such as WordNet) alongside such embeddings, so that the embeddings reflect not only syntagmatic but also paradigmatic relations. We evaluate the proposed methods on the MNLI and FEVER datasets.
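As a rough illustration of the proximity restriction described above (a minimal sketch, not the paper's actual pipeline), the snippet below counts only word pairs that co-occur within a fixed token window; the window size, the tokenized-sentence input, and the function name are assumptions made for the example.

```python
from collections import Counter

def extract_proximal_pairs(tokens, window=5):
    """Count co-occurring word pairs whose distance is at most `window` tokens.

    Illustrative sketch: restricting pairs to a small window keeps the number
    of candidate pairs roughly linear in corpus length, rather than quadratic
    in sentence length, which is the source of the claimed speed-up.
    """
    pairs = Counter()
    for i, left_word in enumerate(tokens):
        # Only look ahead within the window; more distant pairs are skipped entirely.
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            pairs[(left_word, tokens[j])] += 1
    return pairs

# Toy usage: pair counts from a single tokenized sentence.
if __name__ == "__main__":
    sentence = "a dog chased the cat across the yard".split()
    print(extract_proximal_pairs(sentence, window=3).most_common(5))
```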