Few-shot TTS is a useful but challenging task in which a new speaking style must be mimicked from a short reference utterance. A popular approach relies on an architectural bottleneck to extract a style embedding. However, this approach can suffer from robustness issues when the extracted embedding is not independent of the text input, and its relevance to speaker identity may be limited by the bottleneck. In this study, we propose adversarial contrastive learning to extract a style embedding that is independent of the text. Furthermore, we propose supervised contrastive learning to reinforce relevance to speaker identity and to exploit the rich representations learned by contrastive learning. Quantitative evaluation on a benchmark dataset shows that our method improves both robustness and relevance to speaker identity.
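As one concrete illustration, the supervised contrastive component plausibly follows the standard formulation of Khosla et al. (2020), in which embeddings sharing a speaker label are pulled together and others pushed apart; the loss below is a sketch under that assumption, not necessarily the exact objective used here. Here $z_i$ denotes the normalized style embedding of utterance $i$, $P(i)$ the set of other in-batch utterances from the same speaker, $A(i)$ all other in-batch utterances, and $\tau$ a temperature hyperparameter:
\[
\mathcal{L}_{\mathrm{sup}} = \sum_{i \in I} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp(z_i \cdot z_p / \tau)}{\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)}
\]
Minimizing this loss encourages embeddings of the same speaker to cluster regardless of spoken content, which is consistent with the stated goal of reinforcing relevance to speaker identity.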