Robust object recognition is a key capability for robots and autonomous vehicles operating in real-world environments. Range finders such as LiDAR are an increasingly common sensor choice for modern and future robotic systems, providing a rich source of 3D information. Although 3D information is a crucial cue for object recognition, the highly variable, sparse sensor measurements and the large number of points have caused many previous approaches to struggle to handle the data efficiently enough to recognize objects in real time. In this thesis, we present a fast yet accurate Two-Phase 3DNet, a compact network that embeds 2D and 3D Convolutional Neural Networks. Our network learns the distribution of complex 3D shapes across different object classes with arbitrary poses from raw 3D LiDAR data. To train our 3D deep learning model efficiently, we develop a new volumetric representation for deformed and sparse 3D LiDAR data. We evaluate our approach on publicly available LiDAR and CAD benchmarks to verify the feasibility of the proposed network model. Experiments show that our Two-Phase 3DNet yields significant performance improvements over state-of-the-art methods while estimating object classes in real time.
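
The abstract does not detail the volumetric representation itself, but a minimal occupancy-grid voxelization of a LiDAR point cloud, one common way to feed raw points into a 3D CNN, can be sketched as follows. The grid size and spatial bounds below are illustrative assumptions, not the parameters actually proposed in this thesis:

```python
import numpy as np

def voxelize(points, grid_size=32, bounds=(-1.0, 1.0)):
    """Convert an (N, 3) point cloud into a binary occupancy grid.

    Illustrative sketch only: `grid_size` and `bounds` are assumed
    parameters, not the representation developed in the thesis.
    """
    lo, hi = bounds
    # Map coordinates in [lo, hi) to voxel indices in [0, grid_size).
    idx = ((points - lo) / (hi - lo) * grid_size).astype(int)
    # Discard points falling outside the bounded volume.
    inside = np.all((idx >= 0) & (idx < grid_size), axis=1)
    idx = idx[inside]
    # Mark each occupied voxel with a 1.
    grid = np.zeros((grid_size, grid_size, grid_size), dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# Example: two synthetic points inside the bounded volume.
pts = np.array([[0.0, 0.0, 0.0], [0.5, -0.5, 0.25]])
occ = voxelize(pts)
```

A dense binary grid like this is the simplest volumetric input for 3D convolutions; handling the sparsity and deformation of real LiDAR returns is precisely what motivates the representation developed later in the thesis.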