Automated real-time machine learning for IoT in manufacturing: a cloud architecture and API
Parto Dezfouli, Mahmoud
Due to recent movements in Industry 4.0 and the Internet of Things (IoT), accessing and generating data in the Smart Manufacturing (SM) domain has become more attainable: communication protocols such as MTConnect and OPC-UA provide access to the majority of raw data generated by machine tools, while retrofit sensor packs facilitate high-frequency data acquisition from both legacy and modern equipment. These technologies have led to quantities of raw data, known as Big Data (BD), that are too complex to analyze by conventional means. Current IoT architectures and frameworks propose Cloud Computing (CC) and Centralized Training (CT) as solutions for BD and for collaborative Machine Learning (ML) models. These solutions, however, have limitations such as Internet dependency and the need for expensive, high-performance cloud resources. As more data are generated, a higher-performance framework is required for cloud computing over larger datasets, whether historical in nature or produced by the ever-increasing ubiquitous sensors and sensor arrays deployed in modern manufacturing operations. Studying IoT architectures and stream analytics is essential to the creation of successful IoT platforms. In this regard, this study proposes a novel, high-performance, data-driven IoT architecture that incorporates automated and scalable machine learning techniques, with a focus on process control and a deeper understanding of manufacturing process and system performance in the Cyber-Physical Systems (CPS) domain. In this dissertation, first, a novel generalized three-layer IoT architecture utilizing Edge Computing (EC), Fog Computing (FC), CC, and Federated Learning (FL) is presented, in which data are preprocessed in the Edge layer, ML models are incrementally trained in the Fog layer, and the resulting training elements are aggregated into centralized cloud models.
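The cloud-side aggregation step can be illustrated with a minimal sketch of sample-weighted federated averaging (in the style of FedAvg). This is an assumption for illustration only, not the dissertation's actual aggregation algorithm: the function name `federated_average` and the toy parameter vectors are hypothetical.

```python
# Hypothetical sketch of FedAvg-style aggregation across fog nodes.
# Each fog node trains incrementally on local edge data and sends only its
# model parameters and sample count to the cloud; raw data never leaves
# the fog layer.

def federated_average(node_updates):
    """Aggregate (weights, n_samples) pairs from fog nodes into a cloud model.

    Each weights entry is a list of floats; the cloud model is their
    sample-weighted average.
    """
    total = sum(n for _, n in node_updates)
    dim = len(node_updates[0][0])
    global_weights = [0.0] * dim
    for weights, n in node_updates:
        for i, w in enumerate(weights):
            # Weight each node's contribution by its share of the samples.
            global_weights[i] += w * n / total
    return global_weights

# Example: three fog nodes report locally trained weights and sample counts.
updates = [([1.0, 2.0], 100), ([3.0, 4.0], 300), ([2.0, 2.0], 100)]
print(federated_average(updates))  # sample-weighted average of node weights
```

Because only parameters and counts travel upstream, the bandwidth and privacy footprint stays small regardless of how much raw sensor data the edge layer ingests.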
Second, two novel stream analytics engines, for Outlier Detection and Bayesian Classification, capable of real-time (RT) training and prediction, are proposed and analyzed for this architecture. Results show that the training latency for both the Outlier and Bayesian engines, as well as their FL algorithms, remained constant as the number of data points increased. On a 1,000-point dataset, training on an incoming data point was on average 136 and 48 times faster for the Outlier and Bayesian engines, respectively, than retraining the models on all data points. These results suggest that the methods in the proposed architecture can lead to higher-performance, more scalable IoT frameworks that require less storage and computing power.
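The constant per-point training latency can be made concrete with a generic sketch of streaming outlier detection using Welford's online mean/variance update and a z-score threshold. This is a stand-in to show why incremental updates cost O(1) per point regardless of history length; the dissertation's actual Outlier engine may use a different method, and the class name and threshold below are illustrative assumptions.

```python
# Sketch: constant-time streaming outlier detection via Welford's algorithm.
# Each update touches only three scalars, so per-point training cost stays
# flat as the dataset grows -- no retraining over the full history.
import math

class StreamingOutlierDetector:
    def __init__(self, z_threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations (Welford)
        self.z_threshold = z_threshold

    def update(self, x):
        """Incorporate one data point in O(1) time and memory."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_outlier(self, x):
        """Flag x if it lies more than z_threshold std devs from the mean."""
        if self.n < 2:
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        return std > 0 and abs(x - self.mean) / std > self.z_threshold

det = StreamingOutlierDetector()
for v in [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 10.1]:
    det.update(v)
print(det.is_outlier(25.0))  # far from the stream mean -> True
print(det.is_outlier(10.0))  # typical value -> False
```

By contrast, retraining on all points would reread the entire history on every arrival, which is the linear-growth cost the reported 136x and 48x speedups avoid.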