Most large-scale problems have special structure. A linear programming problem may have a block structure with relatively few interactions between its subunits, and some ``hard'' combinatorial problems can be viewed as ``easy'' problems complicated by a relatively small set of side constraints. Efforts to exploit such structure often lead to the formulation of nondifferentiable optimization (NDO) problems. Evaluating the objective function of such an induced NDO problem is time-consuming even when the number of variables is small. Until now the subgradient method has been the method of choice for solving induced NDO problems. The convergence of the subgradient method, however, slows down drastically in regions where the gradient of the objective function varies rapidly or is discontinuous. This difficulty becomes especially serious when one is solving an induced problem. To overcome it, Poljak and Camerini et al. suggested, in their improved subgradient method and modified gradient method respectively, taking a suitable combination of the subgradients identified during the process and using it as the search direction. Their suggestion coincides with the current trend in NDO, and it seemed clear that their algorithms would perform better. They failed, however, to provide concrete theoretical arguments that their algorithms are superior to the subgradient method. Moreover, the algorithms require the optimal value of the problem to be known in advance, which makes them inapplicable to induced NDO problems. For these reasons the algorithms have attracted little attention.

In this thesis we show theoretically that the improved subgradient method and the modified gradient method are superior to the subgradient method. That is, we show that the iterate produced by these methods is closer to the optimal solution than that produced by the ordinary subgradient method. We also show that the direction of the methods forms a smaller angle with the d...
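The combined-direction idea can be sketched as follows. The deflection rule below is a commonly stated form of the Camerini et al. modified gradient method: the new subgradient is combined with the previous direction whenever the two form an obtuse angle, and a Polyak-type step length is used, which is why the optimal value must be known in advance. The test function, parameter values, and function names here are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def f(x):
    # piecewise-linear test objective; minimum value 0 at the origin
    return abs(x[0]) + 2.0 * abs(x[1])

def subgrad(x):
    # one subgradient of f at x (sign(0) = 0 is a valid choice)
    return np.array([np.sign(x[0]), 2.0 * np.sign(x[1])])

def modified_gradient(x0, f_star=0.0, gamma=1.5, iters=100):
    """Subgradient method with a combined search direction and a
    Polyak step length (requires the optimal value f_star)."""
    x = np.array(x0, dtype=float)
    d_prev = np.zeros_like(x)
    for _ in range(iters):
        g = subgrad(x)
        if np.all(g == 0):
            break  # 0 is a subgradient, so x is optimal
        # deflect the new subgradient toward the previous direction
        # whenever the two form an obtuse angle (gamma in [0, 2])
        dot = g @ d_prev
        beta = -gamma * dot / (d_prev @ d_prev) if dot < 0 else 0.0
        d = g + beta * d_prev
        # Polyak-type step length using the known optimal value
        t = (f(x) - f_star) / (d @ d)
        x = x - t * d
        d_prev = d
    return x

x = modified_gradient([3.0, -2.0])
print(f(x))  # close to the optimal value 0
```

With `gamma=0.0` the deflection vanishes and the iteration reduces to the ordinary subgradient method with Polyak step, so the two methods can be compared on the same problem.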