In recent years, the robotics community has witnessed a proliferation of platforms commercialized for diverse applications ranging from home service and delivery to industrial inspection, videography, and exploration of hazardous environments. As the field advances, the development of highly dynamic robotic systems that approach the capabilities of their natural counterparts becomes increasingly essential. However, such systems face a significant challenge in the complexity of their control mechanisms, particularly when navigating the dynamic uncertainties inherent in the real world. These uncertainties pose difficulties for both model-based and learning-based control approaches, and neglecting them may lead to catastrophic failures, particularly when robots are deployed on demanding, long-term missions. To tackle these challenges, we introduce a series of methods that together form an uncertainty-driven reinforcement learning framework. These methods harness the inherent uncertainty in dynamic robotic systems to enhance controller robustness and adaptability in harsh, real-world environments. The effectiveness of the proposed methods is demonstrated through comprehensive real-world experiments featuring drones and legged robots as the primary platforms.