Ruiz, Laura V.

Relationships
Member of: Graduate College
Person Preferred Name
Ruiz, Laura V.
Model
Digital Document
Publisher
Florida Atlantic University
Description
A top-down design methodology using hardware description languages (HDLs) and powerful design, analysis, synthesis, and layout software tools for electronic circuit design is described and applied to the design of a single-layer artificial neural network that incorporates on-chip learning. Using the perceptron learning algorithm, these simple neurons learn a classification problem in 10.55 microseconds in one application. The objective is to describe a methodology by following the design of a simple network. This methodology is later applied to the design of a novel architecture, a stochastic neural network. All issues related to algorithmic design for VLSI implementability are discussed, and results of layout and timing analysis are given along with software simulations. A top-down design methodology is presented, including a brief introduction to HDLs and an overview of the software tools used throughout the design process. These tools now make it possible for a designer to complete a design in a relatively short period of time. In-depth knowledge of computer architecture, VLSI fabrication, electronic circuits, and integrated circuit design is not essential to accomplish a task that a few years ago would have required a large team of specialists in many fields. This may appeal to researchers from a wide range of backgrounds, including computer scientists, mathematicians, and psychologists experimenting with learning algorithms. It is only in a hardware implementation of artificial neural network learning algorithms that the true parallel nature of these architectures can be fully tested. Most applications of neural networks are essentially software simulations of the algorithms, run on a single CPU that sequentially emulates a parallel, richly interconnected architecture. This dissertation describes a methodology whereby a researcher experimenting with a known or new learning algorithm can test it as it was intended to run: on a parallel hardware architecture.
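
To make the learning step concrete, the following is a minimal software sketch of the classic perceptron learning rule referred to in the abstract. It is not the dissertation's HDL design; the toy AND classification problem, learning rate, and epoch count are illustrative assumptions only.

# Minimal Python sketch of the perceptron learning rule; the dissertation
# implements this rule on-chip via an HDL, so everything below (dataset,
# learning rate, epoch count) is an illustrative assumption, not the
# author's design.

def perceptron_train(samples, targets, epochs=20, lr=1.0):
    """Train a single-layer perceptron on binary-labelled samples.

    samples: list of input vectors (lists of numbers)
    targets: list of desired outputs, each 0 or 1
    Returns the learned weight vector (bias stored as weights[0]).
    """
    n_inputs = len(samples[0])
    weights = [0.0] * (n_inputs + 1)  # weights[0] is the bias term

    for _ in range(epochs):
        for x, t in zip(samples, targets):
            # Hard-threshold activation: fire if the weighted sum >= 0.
            activation = weights[0] + sum(w * xi for w, xi in zip(weights[1:], x))
            y = 1 if activation >= 0 else 0
            # Perceptron update: adjust weights only on misclassification.
            error = t - y
            weights[0] += lr * error
            for i, xi in enumerate(x):
                weights[i + 1] += lr * error * xi
    return weights


if __name__ == "__main__":
    # Toy linearly separable classification problem (logical AND).
    X = [[0, 0], [0, 1], [1, 0], [1, 1]]
    T = [0, 0, 0, 1]
    print("learned weights (bias first):", perceptron_train(X, T))

In a hardware realization such as the one the abstract describes, each neuron performs this weighted sum and threshold in parallel, which is why on-chip learning can converge in microseconds rather than the milliseconds typical of a sequential CPU simulation.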