Existing deep feature learning methods usually compute semantic similarity in an embedding space over the average of the extracted features, relying on carefully selected samples for fast convergence. The resulting deep features suffer from inter- and intra-class variations because they are spread across the feature space. In this paper, we present a rank-based feature learning method that exploits the structured information among features to better separate non-linear data. By exploiting the geometric properties of Riemannian manifolds, the proposed approach models natural second-order statistics such as covariance, optimizes feature dispersion using the distribution of Riemannian distances between a reference sample and its neighbors, and builds a ranked list according to these similarities. Experiments on three widely used EEG datasets for motor imagery classification demonstrate significant improvements over state-of-the-art methods. Furthermore, the proposed method jointly enlarges the inter-class distances and reduces the intra-class distances of the learned features.