With the broad and rapid adoption of Deep Neural Networks (DNNs) across various domains, an urgent need to validate their behaviour has arisen, resulting in a range of test adequacy metrics for DNNs. One such metric, Surprise Adequacy (SA), aims to measure how surprising a new input is based on its similarity to the data used for training. While SA has been shown to be effective for image classifiers based on Convolutional Neural Networks (CNNs), it has not been studied in the Natural Language Processing (NLP) domain. This paper applies SA to NLP, in particular to three tasks: text classification, sequence labelling, and question answering. The aim is to investigate whether SA correlates well with the correctness of model outputs. SA also enables prioritisation of failing inputs, thereby helping to reduce the high cost of labelling. An empirical evaluation shows that SA can generally work as a test adequacy metric in NLP, especially for classification tasks.
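
For illustration only, the following is a minimal sketch of one common SA variant, Distance-based Surprise Adequacy (DSA), for a classification task; the function and variable names are assumptions for exposition and do not reflect the implementation evaluated in this paper.

```python
import numpy as np

def distance_based_sa(train_ats, train_labels, new_at, new_label):
    """Distance-based Surprise Adequacy (DSA) of a single new input.

    train_ats    : (N, D) activation traces of the training inputs
    train_labels : (N,)   classes of the training inputs
    new_at       : (D,)   activation trace of the new input
    new_label    : class predicted for the new input
    """
    same = train_ats[train_labels == new_label]
    other = train_ats[train_labels != new_label]

    # Distance to the closest same-class training trace.
    dists_same = np.linalg.norm(same - new_at, axis=1)
    nearest_same = same[np.argmin(dists_same)]
    dist_a = dists_same.min()

    # Distance from that neighbour to the closest trace of any other class.
    dist_b = np.linalg.norm(other - nearest_same, axis=1).min()

    # Larger ratio = the input lies further from its class region,
    # i.e. it is more surprising relative to the training data.
    return dist_a / dist_b
```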