artificial neural network architecture for interpolation, function approximation, time series modeling and control applications

File
Publisher
Florida Atlantic University
Date Issued
1994
Description
Two new artificial neural network architectures, the Power Net (PWRNET) and the Orthogonal Power Net (OPWRNET), have been developed. Based on the Taylor series expansion of the hyperbolic tangent function, these architectures can approximate multi-input, multi-layer artificial neural networks while requiring only a single layer of hidden nodes, allowing a compact network representation with a single layer of hidden-node weights. The resulting trained network can be expressed as a polynomial function of the input nodes. Applications that are intractable for conventional artificial neural networks can therefore be developed with these architectures. The degree of nonlinearity of the network can be controlled directly by adjusting the number of hidden-layer nodes, avoiding the over-fitting problems that restrict generalization. The network is adapted with the familiar error back-propagation training algorithm; other learning algorithms may also be applied, and since only one hidden layer must be trained, the training performance of the network is expected to be comparable to or better than that of conventional multi-layer feed-forward networks. The new architecture is explored by applying OPWRNET to classification, function approximation, and interpolation problems, where it shows performance comparable to multi-layer perceptrons. OPWRNET was also applied to the prediction of noisy time series and the identification of nonlinear systems. For system identification tasks, the resulting trained networks can be expressed directly as discrete nonlinear recursive polynomials. This characteristic was exploited in the development of two new neural-network-based nonlinear control algorithms: the Linearized Self-Tuning Controller (LSTC) and a variation of a Neural Adaptive Controller (NAC). These control algorithms are compared to a linear self-tuning controller and an artificial neural network based Inverse Model Controller, and the advantages of the new controllers are discussed.
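The Taylor-series idea underlying the abstract can be illustrated with a minimal sketch. This is only an illustration of the general principle (a truncated Taylor expansion of tanh about zero yields a polynomial activation, so a single-hidden-layer network with that activation is a polynomial of its inputs); the function names and the number of retained terms are assumptions for this sketch, not the dissertation's actual PWRNET/OPWRNET definitions.

```python
import numpy as np

def tanh_taylor(x, terms=3):
    """Truncated Taylor series of tanh about 0:
    tanh(x) ~ x - x**3/3 + 2*x**5/15 - 17*x**7/315 ...
    Keeping only `terms` odd-power terms gives a polynomial activation."""
    coeffs = [1.0, -1.0 / 3.0, 2.0 / 15.0, -17.0 / 315.0][:terms]
    return sum(c * x ** (2 * k + 1) for k, c in enumerate(coeffs))

# For inputs of moderate magnitude the truncation tracks tanh closely,
# which is what lets a polynomial network stand in for a tanh network.
x = 0.3
print(abs(np.tanh(x) - tanh_taylor(x)))  # small approximation error
```

Because each hidden node then computes a polynomial of its weighted-sum input, the whole single-hidden-layer network output expands into a polynomial in the input variables, which is the compact representation the abstract describes.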
Note

College of Engineering and Computer Science

Language
Type
Extent
338 p.
Identifier
12357
Additional Information
College of Engineering and Computer Science
FAU Electronic Theses and Dissertations Collection
Thesis (Ph.D.)--Florida Atlantic University, 1994.
Date Backup
1994
Date Text
1994
Date Issued (EDTF)
1994

IID
FADT12357
Issuance
monographic
Person Preferred Name

Luebbers, Paul Glenn.
Graduate College
Physical Description

338 p.
application/pdf
Title Plain
artificial neural network architecture for interpolation, function approximation, time series modeling and control applications
Use and Reproduction
Copyright © is held by the author, with permission granted to Florida Atlantic University to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
http://rightsstatements.org/vocab/InC/1.0/
Origin Information

1994
monographic

Boca Raton, Fla.

Florida Atlantic University
Physical Location
Florida Atlantic University Libraries
Place

Boca Raton, Fla.
Sub Location
Digital Library
Title
artificial neural network architecture for interpolation, function approximation, time series modeling and control applications
Other Title Info

An artificial neural network architecture for interpolation, function approximation, time series modeling and control applications