This letter proposes a new error function at hidden layers to speed up the training of multilayer perceptrons (MLPs). With this new hidden error function, the layer-by-layer (LBL) algorithm approximately converges to the error backpropagation algorithm with optimum learning rates. In particular, the optimum learning rate for a hidden weight vector factors approximately into the product of two optimum values, one for minimizing the new hidden error function and the other for assigning hidden targets. The effectiveness of the proposed error function was demonstrated on handwritten digit recognition and isolated-word recognition tasks. Very fast learning convergence was obtained for MLPs without the stalling problem experienced in conventional LBL algorithms.
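The following is a minimal sketch of the layer-by-layer training scheme the abstract describes, for a one-hidden-layer MLP. It is an illustrative reconstruction rather than the letter's exact method: the quadratic hidden error function and the two step-size factors below (eta_target for assigning hidden targets, eta_min for minimizing the hidden error, whose product acts as the effective hidden-layer learning rate) are assumptions chosen for demonstration, not values derived in the letter.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy data: 4 samples, 3 inputs, 2 outputs (hypothetical shapes).
X = rng.standard_normal((4, 3))
T = rng.random((4, 2))

W1 = 0.1 * rng.standard_normal((3, 5))   # input -> hidden weights
W2 = 0.1 * rng.standard_normal((5, 2))   # hidden -> output weights

for epoch in range(100):
    # Forward pass.
    H = sigmoid(X @ W1)          # hidden activations
    Y = sigmoid(H @ W2)          # outputs

    # Output layer: ordinary delta-rule update on the output error.
    delta_out = (Y - T) * Y * (1.0 - Y)
    W2 -= 0.5 * (H.T @ delta_out)

    # Hidden-target assignment: move hidden activations against the
    # backpropagated error; eta_target plays the role of the
    # target-assignment factor (assumed constant here).
    eta_target = 0.5
    H_target = H - eta_target * (delta_out @ W2.T) * H * (1.0 - H)

    # Hidden layer: minimize a hidden error ||H - H_target||^2 by gradient
    # descent; eta_min plays the role of the error-minimization factor, so
    # the effective step on W1 is the product eta_min * eta_target.
    eta_min = 0.5
    delta_hid = (H - H_target) * H * (1.0 - H)
    W1 -= eta_min * (X.T @ delta_hid)
```

Under this reading, the factored learning rate is what lets the LBL update approximate optimally tuned backpropagation: each factor can be optimized for its own subproblem (target assignment and hidden-error minimization) instead of hand-tuning a single global rate.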