Machine Learning on the Edge


According to a Gartner study, artificial intelligence (AI) will be the No. 1 trend in IoT over the coming 5 years. The number of IoT devices is expected to grow from 14.2 billion in 2019 to 25 billion by 2021. The value of AI in this context lies in its ability to quickly derive insights from device data. Machine learning (ML), an AI technology, brings the ability to automatically identify patterns and detect anomalies in the data that smart sensors and devices generate - information such as temperature, pressure, humidity, air quality, vibration, and sound. Compared to traditional BI tools, ML approaches can make operational predictions up to 20 times faster and with greater accuracy.

Why machine learning at the edge?

Mission-critical applications such as factory automation, self-driving cars, and facial and image recognition require not only ultra-low latency but also high reliability and fast, on-the-fly decision-making. Centralized architectures cannot meet these performance requirements, mostly because of congestion, high latency, low bandwidth, and limited connection availability. Furthermore, fast decision-making at the edge needs advanced computing capabilities right on the spot, which can be provided only by onboard computers or interconnected local edge-computing nodes working together - and this makes it expensive.

Machine learning at the edge alleviates these issues and provides additional benefits.

How do you achieve AI on IoT devices by using ML technology on AWS?

IoT devices tend to have limited compute resources for decision-making in disparate locations, often with intermittent or no connectivity to the cloud. In some cases the data never reaches the cloud at all - because of the laws of physics, the laws of economics, or the laws of the land. Model building and training therefore have to happen outside the IoT device, and the trained model is synced to the device when it is connected.

To achieve this on AWS, we need three major components:

  • AWS IoT Greengrass builds on AWS IoT, giving you the ability to run Lambda functions and keep device state in sync even when the device is not connected to the Internet.
  • AWS IoT Greengrass ML Inference is a feature of AWS IoT Greengrass that makes it faster and easier to deploy and run machine learning models locally on AWS IoT Greengrass devices.
  • Amazon SageMaker gives you the ability to build, train, and test your ML models using the power of the AWS cloud, including fast, powerful GPU-equipped instances, before deploying them to small, low-powered, intermittently connected IoT devices - for example, running in factories, vehicles, mines, fields, and homes, wherever it applies.

How do you build ML models?

You need to build and train ML models before you can start making maintenance predictions. A high-level ML process for building and training models applies to most use cases and is relatively easy to implement with AWS IoT.

Start by collecting supporting data for the ML problem you are trying to solve and temporarily send it to AWS IoT Core. This data should come from the machine or system associated with each ML model. The data is then transferred from the site to Amazon S3 buckets you designate in your account, either through a VPN, an AWS Direct Connect connection, or an AWS Snowball appliance, depending on its size.
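As a minimal sketch of that first step, the snippet below packages raw sensor readings into the kind of JSON document a device might publish to AWS IoT Core over MQTT. The device ID, topic name, and field names are illustrative assumptions; the actual publish call would go through the AWS IoT Device SDK or a similar client.

```python
import json
import time

def build_telemetry_payload(device_id, readings):
    """Package raw sensor readings as a JSON document for an MQTT publish.

    `readings` is a dict such as {"temperature": 72.4, "vibration": 0.31}.
    """
    return json.dumps({
        "device_id": device_id,
        "timestamp": int(time.time()),
        "readings": readings,
    })

payload = build_telemetry_payload("press-42", {"temperature": 72.4, "vibration": 0.31})

# With the AWS IoT Device SDK, publishing would then look roughly like
# (hypothetical client setup omitted):
#   mqtt_client.publish("factory/press-42/telemetry", payload, 1)
print(payload)
```

From AWS IoT Core, a rule would forward such messages into the S3/IoT Analytics pipeline described above.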

AWS IoT Analytics supports the efficient storage of data and pipeline processing to enrich and filter the data for later use in ML model building.

Amazon SageMaker supports direct integration with AWS IoT Analytics as a data source. Jupyter Notebook templates are provided to get you started quickly in building and training the ML model. For predictive maintenance use cases, linear regression and classification are the two most common algorithms. There are many other algorithms to consider for time-series data prediction, and you can try different ones and measure the effectiveness of each for your process. Also consider that AWS IoT Greengrass ML Inference supports Apache MXNet, TensorFlow, and Chainer with pre-built packages that make deployment easier.
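To make the linear-regression idea concrete, here is a toy, purely local stand-in for the kind of model Amazon SageMaker would train at scale: predicting a machine's remaining useful life from a vibration reading via ordinary least squares. All data points and the feature/label choice are invented for illustration - a real model would be trained on your IoT Analytics dataset.

```python
from statistics import mean

# Invented example data: higher vibration correlates with shorter remaining life.
vibration   = [0.10, 0.15, 0.22, 0.30, 0.41, 0.55]   # sensor feature
useful_life = [900., 760., 610., 450., 280., 120.]   # label (hours)

# Ordinary least squares for a single feature: life ~ w * vibration + b.
mx, my = mean(vibration), mean(useful_life)
w = sum((x - mx) * (y - my) for x, y in zip(vibration, useful_life)) \
    / sum((x - mx) ** 2 for x in vibration)
b = my - w * mx

def predict_remaining_life(v):
    """Predict remaining useful life (hours) for a vibration reading v."""
    return w * v + b

print(round(predict_remaining_life(0.35), 1))
```

Classification works analogously, except the model outputs a failure/no-failure label (or a probability) instead of a continuous value.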

The recently launched Amazon SageMaker Neo, a new open-source project from Amazon, optimizes the performance of ML models for a variety of platforms.

How are trained ML models deployed to the edge?

Running predictions locally requires real-time machine data, an ML model, and local compute resources to perform the inference. AWS IoT Greengrass supports deploying ML models built with Amazon SageMaker to the edge, where an AWS Lambda function performs the inference. Identical machines can receive the same deployment package containing the ML model and the inference Lambda function. This creates a low-latency solution: there is no dependency on AWS IoT Core to evaluate real-time data and, if required and the confidence is high enough, to send alerts or commands to the infrastructure to perform the desired action.

Running local predictions

The AWS Lambda function linked to the ML model as part of the AWS Greengrass deployment configuration performs prediction in real time. The AWS Greengrass message broker routes selected data published on a designated MQTT topic to the AWS Lambda function to perform the inference. When an inference returns a high probability of a match, then multiple actions can be executed in the AWS Lambda function. For example, a shutdown command can be sent to a machine or, using either local or cloud messaging services, an alert can be sent to an operations team.
For each ML model, you need to determine the inference-confidence threshold that equates to a predicted failure condition. For example, if an inference for a machine you are monitoring indicates a failure with high certainty (say, a confidence level of 90%), then you would take appropriate action. However, if the confidence level is 30%, you might decide not to act on that result. You can use AWS IoT Core to publish inference results on a dedicated logging and reporting topic.
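The threshold logic above might be sketched like this inside the inference Lambda function. The threshold values and action labels are illustrative assumptions, not AWS APIs; the actual publishes would go through the Greengrass SDK.

```python
# Hypothetical per-model confidence thresholds (tune per machine and model).
ACT_THRESHOLD = 0.90   # high certainty: send a shutdown command / alert operations
LOG_THRESHOLD = 0.30   # below this, treat the inference result as noise

def decide_action(failure_confidence):
    """Map an inference confidence score to the action the Lambda would take."""
    if failure_confidence >= ACT_THRESHOLD:
        return "shutdown_and_alert"   # e.g. publish to a local command topic
    if failure_confidence >= LOG_THRESHOLD:
        return "log_for_review"       # publish to the logging/reporting topic
    return "ignore"

print(decide_action(0.93))  # shutdown_and_alert
print(decide_action(0.45))  # log_for_review
print(decide_action(0.05))  # ignore
```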

Here are a few examples of the many ways you can put machine learning at the IoT edge:

  • Physical Security – Smart devices (including the AWS DeepLens) can process images and scenes locally, looking for objects, watching for changes, and even detecting faces. When something of interest or concern arises, the device can pass the image or video to the cloud and use Amazon Rekognition to take a closer look.
  • Industrial Maintenance – Smart, local monitoring can increase operational efficiency and reduce unplanned downtime. The monitors can run inference on power consumption, noise levels, and vibration to flag anomalies, predict failures, and detect faulty equipment. Using the data and the model, you can predict when a device is likely to fail and pass this information to the operations staff, who can then perform maintenance before the failure occurs - far cheaper than an unplanned outage. Through this intelligence you increase the availability of the equipment and, with it, revenue.
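As a toy illustration of the kind of local anomaly flagging described in the industrial maintenance example, the sketch below marks vibration readings that deviate strongly from the recent running mean (a simple rolling z-score). The window size, threshold, and data are invented for the example; a deployed monitor would use a trained model as described earlier.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, z_threshold=3.0):
    """Return indices whose reading deviates more than z_threshold standard
    deviations from the preceding `window` readings."""
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

# Steady vibration levels with one sudden spike at index 8.
data = [0.30, 0.31, 0.29, 0.30, 0.32, 0.31, 0.30, 0.31, 0.95, 0.30]
print(flag_anomalies(data))  # [8]
```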

Do you have a use case with IoT, machine learning, or both? Do you want to learn more about Swisscom's portfolio and services on Amazon Web Services (AWS)? Then get in touch with our experts or visit our website.

Link to the Gartner study.

Abdurixit Abduxukur


Cloud Solution Architect
