When a neural network is applied to a problem, the complexity of the network must be matched to that of the problem, because network complexity strongly influences generalization performance. Algorithms that can automatically find appropriate network architectures are therefore highly desirable.
This thesis proposes algorithms that automatically organize feedforward neural networks, built from either fixed neurons or adaptive neurons, for given problems. Given the training data of a learning task, the proposed algorithms search for an optimal network that is compact and generalizes well, using constructive procedures combined with pruning procedures. Two types of network architecture are considered. The first is the neural network with fixed neurons; two algorithms are proposed for this type, and they organize the multi-layer perceptron (MLP), one of the most widely applied and studied neural network architectures. The second is the neural network with adaptive neurons; the algorithm proposed for this type organizes networks with a modular structure whose hidden units are themselves MLPs, and these MLP hidden units are in turn organized by the self-organizing algorithms developed for MLPs. The proposed algorithms generate compact networks with good generalization capability and are robust to random variation across repeated trials. Their performance is evaluated on four function regression problems, and the results demonstrate that the proposed algorithms outperform the other approaches considered.
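The combination of constructive and pruning procedures described above can be illustrated with a minimal sketch. This is not the thesis's actual algorithm: for simplicity, hidden units here use random tanh weights with output weights fit by least squares, units are added one at a time while validation error improves, and units are then pruned when their removal barely hurts validation error. All names, the toy regression target, and the acceptance thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden(X, W, b):
    # tanh hidden-layer activations for input X
    return np.tanh(X @ W + b)

def fit_output(H, y):
    # least-squares fit of the output weights given hidden activations
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return w

def mse(H, w, y):
    return float(np.mean((H @ w - y) ** 2))

# Toy 1-D function regression task (the thesis evaluates on function regression)
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X[:, 0])
Xtr, ytr = X[::2], y[::2]      # training split
Xva, yva = X[1::2], y[1::2]    # validation split

# Constructive phase: propose random units, keep each only if it
# reduces validation error.
W = np.empty((1, 0))
b = np.empty((0,))
best_err = np.inf
for _ in range(30):
    Wc = np.hstack([W, rng.normal(size=(1, 1)) * 3.0])
    bc = np.append(b, rng.normal() * 3.0)
    w = fit_output(hidden(Xtr, Wc, bc), ytr)
    err = mse(hidden(Xva, Wc, bc), w, yva)
    if err < best_err - 1e-6:
        W, b, best_err = Wc, bc, err

# Pruning phase: drop units whose removal degrades validation error
# by at most 5%, yielding a more compact network.
keep = list(range(W.shape[1]))
for j in sorted(range(W.shape[1]), reverse=True):
    trial = [k for k in keep if k != j]
    if not trial:
        break
    w = fit_output(hidden(Xtr, W[:, trial], b[trial]), ytr)
    err = mse(hidden(Xva, W[:, trial], b[trial]), w, yva)
    if err <= best_err * 1.05:
        keep, best_err = trial, min(best_err, err)

print(f"kept {len(keep)} hidden units, validation MSE {best_err:.4f}")
```

The sketch captures the core idea of self-organization: network size is decided by the data (growth while generalization improves, pruning while it is preserved) rather than fixed in advance.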