It has been found that a huge overshoot in the sense of the sup-norm may be observed when typical iterative learning control (ILC) algorithms are applied to LTI systems, even though monotone convergence in the sense of the λ-norm is guaranteed. In this paper, a new ILC algorithm with adjustment of the learning interval is proposed to resolve this undesirable phenomenon, and it is shown that the output error converges monotonically to zero in the sense of the sup-norm when the proposed algorithm is applied. A numerical example is given to show the effectiveness of the proposed algorithm.
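The phenomenon described above can be reproduced with a standard P-type ILC law; the following is a minimal illustrative sketch (not the paper's proposed algorithm), with an assumed first-order stable plant, trial length, learning gain, and λ-weight chosen for demonstration. The λ-norm of the tracking error contracts at every trial, yet its sup-norm grows well above its initial value before it finally decays.

```python
# Illustrative sketch only (NOT the algorithm proposed in the paper):
# standard P-type ILC on an assumed first-order LTI plant, showing
# monotone lambda-norm convergence alongside a sup-norm overshoot.
import numpy as np

# Assumed plant x_{t+1} = a*x_t + u_t, y_t = x_t, zero initial state
a, N = 0.9, 50            # pole and trial length (assumed values)
gamma = 0.5               # learning gain; |1 - gamma*CB| = 0.5 < 1

# Lifted plant: y = G u, with G[i, j] = a^(i-j) for j <= i (CB = 1)
G = np.array([[a ** (i - j) if i >= j else 0.0 for j in range(N)]
              for i in range(N)])

r = np.ones(N)            # reference trajectory
u = np.zeros(N)           # input of trial 0

lam = 1.0                 # lambda-norm weight, large enough for contraction
w = np.exp(-lam * np.arange(N))

sup_hist, lam_hist = [], []
for k in range(300):                      # ILC trials
    e = r - G @ u                         # tracking error of trial k
    sup_hist.append(np.abs(e).max())      # sup-norm of the error
    lam_hist.append(np.abs(w * e).max())  # lambda-norm of the error
    u = u + gamma * e                     # P-type update u_{k+1} = u_k + gamma*e_k
```

Here the error recursion is e_{k+1} = (I - γG)e_k, whose diagonal entries 1 - γCB = 0.5 guarantee λ-norm contraction for λ large enough, while the lower-triangular part makes the sup-norm of early trials grow by roughly a factor of four before convergence sets in.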