Mission-critical applications such as factory automation, self-driving cars, and facial and image recognition require not only ultra-low latency but also high reliability and fast, on-the-fly decision-making. Centralized architectures cannot meet these performance requirements, mostly because of congestion, high latency, low bandwidth, and even connection availability. Furthermore, fast decision-making at the edge needs advanced computing capability right on the spot, which can be provided only by onboard computers or interconnected edge-computing nodes working together, and this makes it expensive.
Machine Learning on the edge alleviates the above issues and provides other benefits.
IoT devices tend to have limited compute resources yet need decision-making capabilities in disparate locations, often with intermittent or no connectivity to the cloud. In some cases the data never reaches the cloud at all, because of the laws of physics, the laws of economics, or the law of the land.
In such cases, data modeling and training have to happen outside of the IoT device, and the resulting model is synced to the device when it is connected.
To achieve this on AWS, we need three major components:
You need to build and train ML models before you can start making maintenance predictions. A high-level ML process for building and training models applies to most use cases and is relatively easy to implement with AWS IoT.
Start by collecting supporting data for the ML problem you are trying to solve and temporarily send it to AWS IoT Core. This data should come from the machine or system associated with each ML model. The data is then transferred from on-site to Amazon S3 buckets you designate in your account, either through a VPN, an AWS Direct Connect connection, or a Snowball appliance, depending on its size.
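As a sketch of the ingestion step, the snippet below assembles a telemetry message for a designated MQTT topic. The `build_telemetry_payload` helper, the topic name, and the reading keys are illustrative assumptions, not part of any AWS SDK; the actual publish call with the AWS IoT Device SDK is indicated only in comments.

```python
import json
import time

def build_telemetry_payload(device_id, readings):
    """Assemble one JSON telemetry message for an MQTT topic.

    The field names here are illustrative; adapt them to the schema
    of the machine or system you are actually monitoring.
    """
    return json.dumps({
        "device_id": device_id,
        "timestamp": int(time.time()),
        "readings": readings,
    })

# Hypothetical publish with the AWS IoT Device SDK (connection setup omitted):
# mqtt_connection.publish(
#     topic="factory/press-01/telemetry",
#     payload=build_telemetry_payload("press-01", {"vibration_mm_s": 4.2}),
#     qos=mqtt.QoS.AT_LEAST_ONCE,
# )
```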
AWS IoT Analytics supports the efficient storage of data and pipeline processing to enrich and filter the data for later use in ML model building.
Amazon SageMaker supports direct integration with AWS IoT Analytics as a data source. Jupyter Notebook templates are provided to get you started quickly in building and training the ML model. For predictive maintenance use cases, linear regression and classification are the two most common algorithms you can use. There are many other algorithms to consider for time-series data prediction and you can try different ones and measure the effectiveness of each in your process. Also consider that AWS Greengrass ML Inference supports Apache MXNet, TensorFlow and Chainer pre-built packages that make deployment easier.
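To illustrate the kind of model a predictive-maintenance notebook might start from, here is a minimal ordinary-least-squares linear regression in plain Python, fitting a trend to sensor readings. The data values are invented for illustration; in practice you would use the built-in SageMaker algorithms or a framework such as MXNet or TensorFlow instead.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for one feature: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Illustrative data: bearing temperature (deg C) drifting upward over hours of use.
hours = [0, 10, 20, 30, 40]
temps = [60.0, 61.5, 63.1, 64.4, 66.0]
slope, intercept = fit_linear(hours, temps)
# A positive slope indicates a warming trend worth flagging for maintenance.
```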
The recently launched SageMaker Neo, a new open-source project from Amazon, optimizes the performance of ML models for a variety of platforms.
Running predictions locally requires the real-time machine data, the ML model, and local compute resources to perform the inference. AWS Greengrass supports deploying ML models built with Amazon SageMaker to the edge, where an AWS Lambda function performs the inference. Identical machines can receive the same deployment package containing the ML model and the inference Lambda function. This creates a low-latency solution: there is no dependency on AWS IoT Core to evaluate real-time data, and, if required and the confidence is high enough, alerts or commands can be sent directly to the infrastructure to perform the desired action.
The AWS Lambda function linked to the ML model as part of the AWS Greengrass deployment configuration performs prediction in real time. The AWS Greengrass message broker routes selected data published on a designated MQTT topic to the AWS Lambda function to perform the inference. When an inference returns a high probability of a match, then multiple actions can be executed in the AWS Lambda function. For example, a shutdown command can be sent to a machine or, using either local or cloud messaging services, an alert can be sent to an operations team.
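A minimal sketch of such an inference Lambda function is shown below, assuming a logistic-regression-style model. `WEIGHTS`, `BIAS`, the threshold, and the `readings` field names are illustrative placeholders; in a real deployment the model is the resource Greengrass places alongside the function, and the local publish call is indicated only in comments.

```python
import math

# Illustrative model parameters; in a real deployment these come from the
# ML model resource that Greengrass deploys alongside this function.
WEIGHTS = [0.8, 1.2]
BIAS = -4.0
FAILURE_THRESHOLD = 0.9

def predict_failure_probability(features, weights=WEIGHTS, bias=BIAS):
    """Logistic-regression style score in [0, 1]."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def handler(event, context):
    """Invoked by the Greengrass message broker for each routed MQTT message."""
    features = [event["readings"]["vibration_mm_s"],
                event["readings"]["temperature_c"]]
    probability = predict_failure_probability(features)
    if probability >= FAILURE_THRESHOLD:
        # e.g. publish a shutdown command or alert on a local topic:
        # iot_client.publish(topic="factory/press-01/commands",
        #                    payload='{"command": "shutdown"}')
        pass
    return {"failure_probability": probability}
```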
For each ML model, you need to determine the threshold of inference confidence that equates to a predicted failure condition. For example, if an inference for a machine you are monitoring indicates a failure with high certainty (say, a confidence level of 90%), you would take appropriate action. However, if the confidence level is only 30%, you might decide not to act on that result. You can use AWS IoT Core to publish inference results on a dedicated logging and reporting topic.
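The thresholding described above can be sketched as a small decision helper. The two cut-offs and the action names are illustrative, and the middle "log for review" band is an assumption rather than something prescribed by the process.

```python
def decide_action(confidence, act_threshold=0.9, review_threshold=0.3):
    """Map an inference confidence to an operational decision.

    At or above act_threshold: treat as a predicted failure and act.
    Between the two thresholds: record the result for later review.
    Below review_threshold: ignore.
    """
    if confidence >= act_threshold:
        return "act"  # e.g. send a shutdown command or alert the operations team
    if confidence >= review_threshold:
        return "log_for_review"
    return "ignore"
```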
Here are a few examples of the many ways you can put Machine Learning at the IoT edge:
Do you have any use case with IoT or Machine Learning or both? Do you want to learn more about Swisscom’s portfolio and services on Amazon Web Services (AWS)? Then get in touch with our experts at email@example.com or visit https://www.swisscom.ch/aws.
Cloud Solution Architect