Publisher
Florida Atlantic University
Description
Researchers from a wide range of fields have discovered the benefits of applying neural networks to pattern recognition problems. Although applications for neural networks have increased, development of tools to design these networks has been slower. Few comprehensive network development methods exist; those that do are slow, inefficient, and application-specific, require predetermination of the final network structure, and/or result in large, complicated networks. Finding optimal neural networks that balance low network complexity with accuracy is a complicated task that traditional network development procedures cannot accomplish.

Although not originally designed for neural networks, the Group Method of Data Handling (GMDH) has characteristics that are ideal for neural network design. GMDH minimizes the number of required neurons by choosing and keeping only the best neurons and filtering out unneeded inputs. In addition, GMDH develops the neurons and organizes the network simultaneously, saving time and processing power. However, some qualities of the network must still be predetermined.

This dissertation introduces a new algorithm that applies some of the best characteristics of GMDH to neural network design. The new algorithm is faster, more flexible, and more accurate than traditional network development methods. It is also more dynamic than current GMDH-based methods, capable of creating a network that is optimal for a given application and its training data. Additionally, the new algorithm virtually guarantees that the number of neurons decreases progressively in each succeeding layer. To demonstrate its flexibility, speed, and ability to design optimal networks, the algorithm was used to design networks for a wide variety of real applications. The networks developed with the new algorithm were compared against other development methods and network architectures.
The new algorithm's networks were more accurate yet less complicated than the other networks. Additionally, the algorithm designs neurons that are flexible enough to meet the needs of specific applications, yet similar enough to be implemented with a standardized hardware cell. Combined with the simplified network layout that the algorithm naturally produces, this allows the resulting networks to be implemented on Field Programmable Gate Array (FPGA) devices.
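The GMDH selection process described above, in which candidate neurons are fit on pairs of inputs and only the best survive into the next layer, can be sketched in a toy form. This is not the dissertation's algorithm; it is a minimal illustration assuming simple linear two-input neurons (classic GMDH uses quadratic Ivakhnenko polynomials) and the function names (`fit_neuron`, `gmdh_layer`) are hypothetical:

```python
# Toy GMDH-style layer construction: fit a candidate neuron y = a + b*u + c*v
# on every pair of available inputs, score each by training error, and keep
# only the best few -- poor inputs are filtered out automatically.
from itertools import combinations

def solve3(A, b):
    """Solve a 3x3 linear system via Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_neuron(u, v, y):
    """Least-squares fit of y ~ a + b*u + c*v via the normal equations."""
    n = len(y)
    cols = [[1.0] * n, u, v]
    A = [[sum(ci * cj for ci, cj in zip(c1, c2)) for c2 in cols] for c1 in cols]
    rhs = [sum(ci * yi for ci, yi in zip(c, y)) for c in cols]
    return solve3(A, rhs)

def mse(pred, y):
    return sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)

def gmdh_layer(inputs, y, keep):
    """Build one layer: try all input pairs, keep the `keep` best neurons."""
    scored = []
    for i, j in combinations(range(len(inputs)), 2):
        a, b, c = fit_neuron(inputs[i], inputs[j], y)
        out = [a + b * u + c * v for u, v in zip(inputs[i], inputs[j])]
        scored.append((mse(out, y), out))
    scored.sort(key=lambda t: t[0])
    return [out for _, out in scored[:keep]], scored[0][0]

# Toy data: y = 2*x0 + 3*x1, plus a noise feature that selection should discard.
x0 = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
x1 = [1.0, 0.0, 2.0, 1.0, 3.0, 2.0]
noise = [0.3, -0.1, 0.2, 0.0, -0.2, 0.1]
y = [2 * a + 3 * b for a, b in zip(x0, x1)]

layer, err = gmdh_layer([x0, x1, noise], y, keep=2)
print(f"best layer error: {err:.6f}")  # near zero: the (x0, x1) neuron fits exactly
```

Stacking such layers, with each layer's surviving outputs becoming the next layer's inputs and construction stopping when the best error no longer improves, gives the simultaneous neuron-development and network-organization behavior the abstract attributes to GMDH.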