Deep learning (Machine learning)

Model: Digital Document
Publisher: Florida Atlantic University
Description:
In the past few years, the development of complex dynamical networks and systems has stimulated great interest in the principles and mechanisms underlying the Internet of Things (IoT). IoT is envisioned as an intelligent network infrastructure comprising a vast number of ubiquitous smart devices across diverse application domains, and it has already improved many aspects of daily life. Many futuristic IoT applications acquire data gathered via distributed sensors that can be uniquely identified, localized, and communicated with; that is, they rely on the support of sensor networks. Soft-sensing models are in demand to help IoT applications transform raw measurements into more useful knowledge, which plays an essential role in condition monitoring, quality prediction, smooth control, and many other aspects of complex dynamical systems. This in turn calls for innovative soft-sensing models that account for scalability, heterogeneity, adaptivity, and robustness to unpredictable uncertainties. The advent of big data, the advantages of ever-evolving deep learning (DL) techniques (whose models use multiple layers to extract feature representations at progressively higher levels), and ever-increasing hardware processing power have triggered a proliferation of research applying DL to soft-sensing models. However, many critical questions in deep learning-based soft sensing remain to be investigated.
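The abstract describes DL soft sensors only at a high level: multiple layers extracting progressively higher-level feature representations from sensor measurements. A minimal pure-Python sketch of that idea, a tiny feed-forward network mapping raw sensor readings to a quality estimate, with all weights, dimensions, and values hypothetical:

```python
import random

random.seed(0)

def relu(x):
    return [max(0.0, v) for v in x]

def linear(x, W, b):
    # one dense layer: y = W x + b
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b_j
            for row, b_j in zip(W, b)]

def soft_sensor(measurements, layers):
    # each layer extracts a progressively higher-level representation
    h = measurements
    for i, (W, b) in enumerate(layers):
        h = linear(h, W, b)
        if i < len(layers) - 1:  # hidden layers use ReLU
            h = relu(h)
    return h  # predicted quality variable(s)

# hypothetical soft sensor: 3 sensor inputs -> 4 hidden units -> 1 quality estimate
layers = [
    ([[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)], [0.0] * 4),
    ([[random.uniform(-1, 1) for _ in range(4)]], [0.0]),
]
print(soft_sensor([0.5, 1.2, -0.3], layers))
```

In practice the layer weights would be learned from historical process data rather than drawn at random; the sketch only shows the multi-layer feature-extraction structure the abstract refers to.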
Model: Digital Document
Publisher: Florida Atlantic University
Description:
Financial time-series data are noisy, volatile, and nonlinear, and classic linear statistical models may fail to capture their underlying structure. Rapid advances in artificial intelligence and machine learning, the availability of large-scale data, and increased computational capability open the door to sophisticated deep learning models that capture the nonlinearity and hidden information in the data. Building robust models that exploit deep neural networks and real-time data is therefore essential. This study constructs a new computational framework to uncover the information in financial time-series data and better inform interested parties. It carries out a comparative analysis of deep learning models for stock price prediction using a well-balanced set of factors, drawn from fundamental data, macroeconomic data, and technical indicators, that drive stock price movement. We further build a novel computational framework for time-series analysis by merging recurrent neural networks with random compression; its performance is tested on a benchmark time-series anomaly dataset, and the compressed paradigm improves both computational efficiency and data privacy. Finally, this study develops a custom trading simulator and an agent-based hybrid model that combines gradient and gradient-free optimization methods; in particular, we explore the use of simulated annealing with stochastic gradient descent. The model trains a population of agents to choose appropriate trading actions, such as buy, hold, or sell, by optimizing portfolio returns. Experimental results on the S&P 500 index show that the proposed model outperforms the baseline models.
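The abstract names the combination of simulated annealing with stochastic gradient descent but gives no implementation details. A toy sketch of one way such a hybrid can work, alternating a gradient step with a Metropolis-accepted random proposal on a stand-in objective (the objective, step sizes, and cooling schedule are all hypothetical, not the dissertation's):

```python
import math
import random

random.seed(42)

def loss(w):
    # toy objective standing in for a negative portfolio return
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

def sa_sgd(w, steps=200, lr=0.05, temp=1.0, cooling=0.97):
    best = w
    for _ in range(steps):
        w = w - lr * grad(w)               # gradient step (SGD part)
        cand = w + random.gauss(0, 0.5)    # gradient-free proposal (SA part)
        delta = loss(cand) - loss(w)
        # Metropolis acceptance: always take improvements, and
        # occasionally accept worse moves while the temperature is high
        if delta < 0 or random.random() < math.exp(-delta / temp):
            w = cand
        temp *= cooling                     # anneal the temperature
        if loss(w) < loss(best):
            best = w
    return best

w = sa_sgd(-5.0)
print(round(w, 2))  # should approach the optimum at 3.0
```

The gradient steps exploit local slope information while the annealed random proposals let the search escape poor regions early on, which is the usual motivation for pairing the two methods.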
Model: Digital Document
Publisher: Florida Atlantic University
Description:
A basic goal of artificial learning systems is the ability to learn continually throughout the system's lifetime. Transitioning between tasks and redeploying prior knowledge is thus a desired feature of artificial learning. In deep-learning approaches, however, the problem of catastrophic forgetting of prior knowledge persists. As a field, we want to solve catastrophic forgetting without requiring exponential computation or time, while demonstrating real-world relevance. This work proposes a novel model that uses an evolutionary algorithm, akin to a meta-learning objective, fitted with resource-constraint metrics. Four reinforcement learning environments sharing the concept of depth are considered, although the collection of environments is multi-modal. The system shows preservation of some knowledge in sequential task learning and protection against catastrophic forgetting in deep neural networks.
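The abstract does not specify the evolutionary algorithm or its resource-constraint metrics. A minimal sketch of one plausible reading, an evolutionary loop whose fitness combines a task score with a penalty on resource use, with every function and constant hypothetical:

```python
import random

random.seed(1)

def task_score(genome):
    # toy stand-in for reward on a task: prefer genomes near a target
    target = [1.0, -2.0, 0.5]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def resource_cost(genome):
    # hypothetical resource-constraint metric: penalize large weights
    return 0.1 * sum(abs(g) for g in genome)

def fitness(genome):
    # fitness trades task performance against resource use
    return task_score(genome) - resource_cost(genome)

def evolve(pop_size=20, generations=50, sigma=0.3):
    pop = [[random.uniform(-3, 3) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]       # keep the fittest quarter
        # refill the population by mutating elites (Gaussian perturbation)
        pop = elite + [
            [g + random.gauss(0, sigma) for g in random.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)

best = evolve()
print([round(g, 1) for g in best])
```

Because the penalty term sits inside the fitness itself, selection pressure favors solutions that solve the task within the resource budget, rather than enforcing the constraint after the fact.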
Model: Digital Document
Publisher: Florida Atlantic University
Description:
The recent rise of artificial intelligence (AI) built on deep learning networks has enabled automatic solutions for many tasks that were once seen as impossible for a machine to perform. However, deep learning models are getting larger, requiring significant processing power to train and powerful machines to run. As deep learning applications become ubiquitous, another trend is taking place: the growing use of edge devices. This dissertation addresses selected technical issues associated with edge AI, proposes novel solutions to them, and demonstrates the effectiveness of the proposed approaches. The technical contributions of this dissertation include: (i) architectural optimizations to deep neural networks, particularly the use of patterned stride in convolutional neural networks for image classification; (ii) weight quantization to reduce model size without hurting accuracy; (iii) a systematic evaluation of the impact of image imperfections on the performance of skin lesion classifiers in the context of teledermatology; and (iv) a new approach to code prediction using natural language processing techniques, targeted at edge devices.
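As an illustration of contribution (ii), here is a minimal sketch of symmetric uniform weight quantization, one common scheme; the abstract does not say which quantization method the dissertation actually uses, so the details below are an assumption:

```python
def quantize_weights(weights, num_bits=8):
    # symmetric uniform quantization: map floats to signed integers
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [round(w / scale) for w in weights]      # store these small ints
    return q, scale

def dequantize(q, scale):
    # recover approximate float weights at inference time
    return [qi * scale for qi in q]

weights = [0.42, -1.27, 0.03, 0.88, -0.5]
q, scale = quantize_weights(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q, round(max_err, 4))
```

Storing 8-bit integers plus one scale factor per tensor cuts model size roughly 4x versus 32-bit floats, and the reconstruction error is bounded by half a quantization step, which is why accuracy often survives the compression.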