Current nanometer-scale metal-oxide-semiconductor field-effect transistor (MOSFET) devices exhibit short-channel, quantum, and self-heating effects, which make modeling and analysis highly complex. A few recent works have employed machine-learning (ML) techniques and neural networks (NNs) to model these complex relationships and optimize devices, but NN-based device optimization suffers from being data-intensive. Bayesian optimization (BO) enables data-efficient, ML-based optimization of MOSFET devices, as it can find the global optimum while requiring little training data. In theory, BO terminates only when every candidate has been explored, so previous works adopted a fixed number of iterations as the stopping condition. Such an empirical stopping condition harms both the efficiency and the reliability of BO, because the global optimum may be found at an earlier stage or only after stopping. Recently, the maximum expected improvement (EImax) compared against a tiny constant has been proposed as a stopping condition for BO. However, there has been insufficient work on improving the efficiency of BO. By advancing the EImax scheme, we systematically investigated an effective stopping condition (ESC) for BO of MOSFET devices to boost the efficiency and reliability of optimization. We found that an EImax of less than 1% of the unit value is an efficient and reliable ESC, yielding reductions in the required training data of up to 87.6% compared with the fixed-iteration method and up to 47% compared with the tiny-constant method. Our study provides a novel method to boost the efficiency and reliability of BO for MOSFET design optimization in the semiconductor industry.
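The EImax-based stopping idea described above can be illustrated with a minimal sketch (an assumption for illustration, not the paper's actual code): compute the expected improvement (EI) of each candidate design from its Gaussian-process posterior mean and standard deviation, and stop the BO loop once the maximum EI falls below a threshold such as 1% of the objective's unit value. The names `expected_improvement` and `should_stop` are hypothetical.

```python
import math

def expected_improvement(mu, sigma, best, xi=0.0):
    """EI of a Gaussian posterior N(mu, sigma^2) against the current best
    observed value, for a maximization problem."""
    if sigma <= 0.0:
        return 0.0
    z = (mu - best - xi) / sigma
    # Standard normal pdf and cdf, written out with math.erf to stay
    # dependency-free.
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (mu - best - xi) * cdf + sigma * pdf

def should_stop(posterior_means, posterior_stds, best, threshold):
    """Stop BO when the maximum EI over all candidates drops below the
    threshold (e.g. 1% of the objective's unit value)."""
    ei_max = max(expected_improvement(m, s, best)
                 for m, s in zip(posterior_means, posterior_stds))
    return ei_max < threshold

# Example: posterior over three candidate designs, current best = 1.0,
# threshold = 0.01 (1% of a unit-valued objective).
mus = [0.90, 1.02, 0.95]
stds = [0.001, 0.001, 0.001]
stop = should_stop(mus, stds, best=1.0, threshold=0.01)
```

In this sketch one candidate still promises roughly 0.02 of expected improvement, so the loop would continue; once all candidates' EI shrinks below the threshold, the optimization halts without spending a fixed iteration budget.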