Neural networks are parallel computational models composed of densely interconnected adaptive processing units. These networks are parallel implementations of nonlinear static or dynamic systems. A very important feature of these networks is their adaptive nature, where learning by example replaces programming in solving problems. This feature makes such computational models very appealing in application domains where one has little or incomplete understanding of the problem to be solved but where training data is readily available. Another feature is the intrinsic parallel architecture, which allows for fast computation of solutions when these networks are implemented on parallel digital computers or, ultimately, in customized hardware.
Neural networks are viable computational models for a wide variety of problems. These include pattern classification, speech synthesis and recognition, adaptive interfaces between humans and complex physical systems, function approximation, image compression, associative memory, clustering, forecasting and prediction, combinatorial optimization, nonlinear system modeling, and control. These networks are neural in the sense that they may have been inspired by neuroscience, but not necessarily because they are faithful models of biological neural or cognitive phenomena.
There are many kinds of neural networks, such as the Hopfield network, the multilayer perceptron, the radial basis function network, and so on. In this thesis, we present the learning algorithms of feedforward neural networks and generalized diagonal recurrent neural networks, and analyze the convergence and stability properties of these algorithms.
In recent years, many researchers have studied feedforward multilayer neural networks quite extensively, and various fruitful results have been obtained. In particular, the feedforward neural network (FNN) with the backpropagation (BP) method proposed by Rumelhart and McClelland is one of the most ...
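To make the FNN/BP setting concrete, the following is a minimal sketch of a one-hidden-layer feedforward network trained by backpropagation (gradient descent on squared error). The layer sizes, learning rate, sigmoid activation, and XOR training task are illustrative assumptions, not details taken from this thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR training set (an illustrative choice)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters: 2 inputs -> 4 hidden units -> 1 output
W1 = rng.normal(scale=1.0, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))
b2 = np.zeros(1)

lr = 1.0          # learning rate (illustrative)
losses = []
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # network output
    losses.append(np.mean((out - y) ** 2))

    # Backward pass: error signals via the chain rule
    d_out = (out - y) * out * (1 - out)     # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)      # hidden-layer delta

    # BP weight updates (batch gradient descent)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print("initial loss:", losses[0], "final loss:", losses[-1])
```

The deltas `d_out` and `d_h` are the backpropagated error terms; the updates move each weight down the gradient of the mean squared error, which is the essence of the BP method discussed above.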