Neural Architecture Search (NAS) has become a powerful technique for automating the design of neural architectures. However, existing NAS approaches require substantial time to explore and evaluate a large number of architectures that are ultimately unsuitable for a given task. Furthermore, they generalize poorly across tasks and often fail to leverage useful knowledge learned from previous NAS tasks. To address these limitations, this study proposes an efficient and generalizable neural architecture search framework for real-world applications. The focus of this research is on designing performance predictors that rapidly and accurately predict target performance by transferring knowledge from previous NAS tasks while accounting for the dataset, hardware, and knowledge-distillation settings. Moreover, this study introduces neural architecture generation models that produce task-specific optimized architectures for diverse tasks through dataset embeddings or guidance from the performance predictor. The proposed generation models generate such architectures efficiently by leveraging prior knowledge learned from previous NAS tasks or the distribution of neural architectures. Extensive experiments across various domains, including computer vision and natural language datasets as well as multiple hardware devices, validate the framework: the proposed approach significantly improves architecture search performance over previous NAS methods while greatly reducing computational costs.
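To make the predictor-guided search idea concrete, the following is a minimal sketch of how a learned performance predictor can rank candidate architectures without training them. Everything here is an illustrative assumption, not the paper's actual method: architectures are toy (depth, width) pairs, `true_accuracy` is a synthetic stand-in for an expensive training run, and the predictor is a simple linear surrogate fit on a handful of observed evaluations.

```python
# Minimal sketch of performance-predictor-guided architecture search.
# All names and the synthetic accuracy function are hypothetical
# placeholders for the expensive training/evaluation loop in real NAS.
import random

def encode(arch):
    # Encode an architecture (depth, width) as a scaled feature vector with bias.
    depth, width = arch
    return [1.0, depth / 10.0, width / 128.0]

def true_accuracy(arch):
    # Stand-in for an expensive training run (synthetic proxy objective).
    depth, width = arch
    return 0.5 + 0.03 * depth + 0.001 * width - 0.002 * depth * depth

def fit_linear(X, y, lr=0.1, steps=2000):
    # Fit a linear surrogate by gradient descent on mean squared error.
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) - yi
            for j, xj in enumerate(xi):
                grad[j] += 2.0 * err * xj / n
        w = [wj - lr * gj for wj, gj in zip(w, grad)]
    return w

def predict(w, arch):
    # Cheap surrogate evaluation: a dot product instead of a training run.
    return sum(wj * xj for wj, xj in zip(w, encode(arch)))

random.seed(0)
space = [(d, wd) for d in range(1, 11) for wd in (32, 64, 128)]
observed = random.sample(space, 8)            # few expensive evaluations
X = [encode(a) for a in observed]
y = [true_accuracy(a) for a in observed]
w = fit_linear(X, y)

# Rank all remaining candidates by predicted accuracy; none are trained.
candidates = [a for a in space if a not in observed]
best = max(candidates, key=lambda a: predict(w, a))
print("predicted-best architecture:", best)
```

The design point this illustrates is the cost asymmetry: once the surrogate is fit on a few observed architectures, scoring the rest of the search space is nearly free, which is what allows predictor-based NAS to avoid training unsuitable architectures.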