Many applications use sensors in mobile devices together with machine learning to provide novel services. However, various factors
such as different users, devices, and environments impact the performance of such applications, thus making the domain
shift (i.e., distributional shift between the training domain and the target domain) a critical issue in mobile sensing. Despite
Although domain adaptation attempts to address this challenging problem, its performance remains unreliable due to the complex interplay
among diverse factors. In principle, this performance uncertainty could be identified and resolved by validating the model
with ground-truth labels. However, it is infeasible for every user to collect sufficient high-quality labeled data. To address
the issue, we present DAPPER (Domain AdaPtation Performance EstimatoR) that estimates the adaptation performance
in a target domain with only unlabeled target data. Our key idea is to approximate the model performance based on the
mutual information between the model inputs and corresponding outputs. Our evaluation with four real-world sensing
datasets compared against six baselines shows that on average, DAPPER outperforms the state-of-the-art baseline by 39.8% in
estimation accuracy. Moreover, our on-device experiment shows that DAPPER incurs up to 396× lower computation overhead
compared with the baselines.
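To give a concrete sense of the key idea, the following is a minimal sketch (not DAPPER's actual estimator) of one common way to approximate the mutual information between a model's inputs and its outputs using only unlabeled data: for softmax outputs, I(X; Y) can be approximated as the entropy of the marginal prediction minus the average entropy of the per-sample predictions. The function names and the toy probabilities below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy (in nats) of categorical distributions along the last axis."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def mutual_information_proxy(softmax_outputs):
    """Approximate I(X; Y) = H(E_x[p(y|x)]) - E_x[H(p(y|x))]
    from a model's softmax outputs on unlabeled target data.
    High values mean predictions are individually confident yet
    diverse across samples; low values suggest degraded adaptation."""
    marginal = softmax_outputs.mean(axis=0)   # estimate of p(y)
    return entropy(marginal) - entropy(softmax_outputs).mean()

# Toy example: confident and diverse predictions yield high mutual
# information, while uniform (uninformative) predictions yield zero.
confident = np.array([[0.95, 0.03, 0.02],
                      [0.02, 0.96, 0.02],
                      [0.03, 0.02, 0.95]])
uniform = np.full((3, 3), 1.0 / 3.0)
print(mutual_information_proxy(confident))  # close to log(3)
print(mutual_information_proxy(uniform))    # close to 0
```

Under this view, a drop in the input-output mutual information on unlabeled target data serves as a label-free signal that adaptation performance has degraded.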