We propose the Holistic Quantified-Self (HQS), a sensor-driven user model for users of AR glasses. Existing research on context-aware AR interfaces typically relies on a single context attribute, yielding an incomplete representation of user states. To address this gap, HQS integrates four dimensions of context: physical condition, cognitive-emotional status, social circumstances, and digital consumption behavior. Our system collects data from multiple devices, enabling simultaneous prediction of user states and tasks within the AR environment. We envision that this approach paves the way for future adaptive AR interfaces that dynamically adjust virtual content based on the recognized user states and tasks.