Feedforward neural networks have been successfully applied to a variety of classification problems, but current-generation neural networks are reportedly inadequate for difficult problems in pattern recognition, regardless of implementation issues. In nontrivial problems, a single network trained within the conventional framework would be large and would require a long training time, owing to the requisite size of the training data set. Indeed, straightforward neural network approaches to pattern recognition seem tractable only for relatively rudimentary tasks. In this thesis, we develop a pragmatic framework of methodologies for increasing the efficiency of the training process and improving the generalization capability of neural network classifiers. Our methodologies address problems in three areas: learning, architecture, and data representation. For learning, we present a rapid learning method based on Aitken's $\Delta^2$ process and a training schedule called selective reinforcement learning; for architecture, a multistage classification scheme, a multiple-network scheme, and a weight matrix reduction scheme; and for data representation, a data generation method that injects systematic noise into training samples and a preprocessing method based on hidden Markov models. To investigate the behavior of neural network classifiers under the proposed methodologies, we designed and implemented neural networks for recognizing printed Hangul (Korean script) syllables, whose images were obtained by an optical scanner, and for recognizing on-line handwritten characters obtained from an LCD tablet. Experimental results on the 990 most frequently used printed Hangul syllables and on a large set of on-line handwritten characters demonstrate the usefulness of the proposed methodologies.
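The rapid learning method mentioned above builds on Aitken's $\Delta^2$ process, a classical technique for accelerating a linearly convergent sequence. As a general illustration of the underlying extrapolation (a minimal sketch, not the thesis's exact training procedure), each accelerated term is $y_k = x_k - (\Delta x_k)^2 / \Delta^2 x_k$, where $\Delta x_k = x_{k+1} - x_k$ and $\Delta^2 x_k = x_{k+2} - 2x_{k+1} + x_k$:

```python
def aitken_accelerate(x):
    """Apply Aitken's delta-squared process to a convergent sequence.

    Given terms x[0..n-1], returns the accelerated sequence
    y[k] = x[k] - (x[k+1] - x[k])**2 / (x[k+2] - 2*x[k+1] + x[k]),
    which typically converges faster than x for linearly
    convergent sequences.
    """
    y = []
    for k in range(len(x) - 2):
        d1 = x[k + 1] - x[k]                    # first forward difference
        d2 = x[k + 2] - 2 * x[k + 1] + x[k]     # second forward difference
        if d2 == 0:
            # Denominator vanishes (sequence locally arithmetic);
            # fall back to the original term.
            y.append(x[k])
        else:
            y.append(x[k] - d1 * d1 / d2)
    return y

# Example: a geometrically converging sequence x_n = 2 + 0.5**n (limit 2).
# Aitken extrapolation recovers the limit exactly for such sequences.
seq = [2 + 0.5 ** n for n in range(6)]
acc = aitken_accelerate(seq)
```

For a purely geometric error term, as in the example, the extrapolated values hit the limit immediately; for iterative training dynamics, the acceleration is approximate but can substantially reduce the number of iterations.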