Deep learning now influences not only technology itself but also our everyday lives. Formerly, most AI functionality was centralized in datacenters, but the primary platform for AI has recently shifted to on-device processing. With growing demand for edge, mobile, and IoT AI, conventional hardware solutions struggle due to their low energy efficiency on such power-hungry workloads. Over the past few years, dedicated DNN inference accelerators have therefore been under the spotlight. However, with the rising emphasis on privacy, personalization, and local optimization, the ability to learn on-device is becoming the next hurdle for "on-device AI." In addition, recent advances in hardware research have delivered faster DNN processing at low power consumption, enabling numerous edge and mobile applications that were previously infeasible on such devices. Applications with humanistic intelligence, which take users' emotions into account, have been demonstrated, along with GANs and deep reinforcement learning (DRL), as well as AI models that process 3-dimensional data for higher accuracy.