The classical theory of statistical calibration assumes that the standard measurement is exact. Realistically, however, this assumption must be relaxed so that more meaningful calibration procedures can be developed. This paper presents a model that explicitly accounts for errors in both the standard and nonstandard measurements. Under the assumption that replicated observations are available in the calibration experiment, three estimation techniques (ordinary least squares, grouping least squares, and maximum likelihood estimation) combined with two prediction methods (direct and inverse prediction) are compared in terms of the asymptotic mean square error of prediction.