A very high-performance neural network system architecture using grouped weight quantization

Publisher
Florida Atlantic University
Date Issued
1989
Description
Recently, Artificial Neural Network (ANN) computing systems have become one of the most active and challenging areas of information processing. The successes of experimental neural computing systems in the fields of pattern recognition, process control, robotics, signal processing, expert systems, and functional analysis are most promising. However, due to a number of serious problems, only small, fully connected neural networks have been implemented to run in real time. The primary problem is that the execution time of neural networks increases exponentially as the network's size increases, because of the exponential increase in the number of multiplications and interconnections; this makes it extremely difficult to implement medium- or large-scale ANNs in hardware. The Modular Grouped Weight Quantization (MGWQ) architecture presented in this dissertation is an ANN design that ensures the number of multiplications and interconnections increases only linearly as the network's size increases. The secondary problems are related to scale-up capability, modularity, memory requirements, flexibility, performance, fault tolerance, technological feasibility, and cost; the MGWQ architecture also resolves these problems. In this dissertation, neural network characteristics and existing implementations using different technologies are described, their shortcomings and problems are addressed, and solutions to these problems using the MGWQ approach are illustrated. The theoretical and experimental justifications for MGWQ are presented, performance calculations for the MGWQ architecture are given, and mappings of the most popular neural network models to the proposed architecture are demonstrated. System-level architecture considerations are discussed. The proposed ANN computing system is a flexible and realistic way to implement large, fully connected networks, and it offers very high performance using currently available technology. The performance of ANNs is measured in interconnections per second (IC/S); the performance of the proposed system ranges from 10^11 to 10^14 IC/S. In comparison, SAIC's DELTA II ANN system achieves 10^7 IC/S, and a Cray X-MP achieves 5*10^7 IC/S.
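The abstract does not spell out the MGWQ mechanism itself. As a loose, hypothetical illustration of the general idea behind grouped weight quantization (all names and the NumPy formulation below are assumptions, not the dissertation's actual design), sharing a small set of quantized weight values lets a neuron's weighted sum be formed with per-group additions followed by one multiplication per group, so the multiplication count scales with the number of groups rather than the number of inputs.

    import numpy as np

    def grouped_quantized_dot(inputs, group_index, group_levels):
        """Illustrative sketch only: weighted sum of `inputs` when each weight
        is one of a small set of shared quantized levels.  Inputs are first
        accumulated per group (additions only); each of the K group sums is
        then multiplied by its shared level, so K multiplications are needed
        instead of one per input."""
        group_sums = np.zeros(len(group_levels))
        for x, g in zip(inputs, group_index):
            group_sums[g] += x                       # accumulation, no multiply
        return float(np.dot(group_sums, group_levels))   # K multiplications

    # Example: 8 inputs whose weights are quantized to 3 shared levels.
    inputs = np.array([0.5, -1.0, 0.25, 2.0, 1.5, -0.5, 0.75, 1.0])
    group_index = [0, 1, 0, 2, 1, 2, 0, 1]        # which level each weight uses
    group_levels = np.array([0.1, -0.3, 0.7])     # the shared quantized weights
    print(grouped_quantized_dot(inputs, group_index, group_levels))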
Note

College of Engineering and Computer Science

Extent
300 p.
Identifier
12245
Additional Information
College of Engineering and Computer Science
FAU Electronic Theses and Dissertations Collection
Thesis (Ph.D.)--Florida Atlantic University, 1989.
IID
FADT12245
Issuance
monographic
Person Preferred Name

Karaali, Orhan.
Graduate College
Physical Description

300 p.
application/pdf
Title Plain
very high-performance neural network system architecture using grouped weight quantization
Use and Reproduction
Copyright © is held by the author, with permission granted to Florida Atlantic University to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
http://rightsstatements.org/vocab/InC/1.0/
Origin Information

1989
monographic

Boca Raton, Fla.

Florida Atlantic University
Physical Location
Florida Atlantic University Libraries
Place

Boca Raton, Fla.
Sub Location
Digital Library
Title
A very high-performance neural network system architecture using grouped weight quantization