Edge AI Architecture and its Applications | Akira AI

Introduction to Edge AI Architecture

In the early days of computing, applications and software were distributed physically on storage media. This mode of distribution was inconvenient: the hardware carrying the applications added cost, physical copies were inaccessible to many people, and every upgrade or maintenance release required a fresh physical purchase.

This problem was largely solved in the mid-2000s with the commercialization of cloud computing. The cloud made the management and distribution of applications cheaper and more efficient, and it opened up a marketplace for smaller developers to share their applications and collaborate. Whether for data storage or machine learning models, the cloud provides a comprehensive infrastructure for sharing resources and collaborating globally.

In the past 10 years, applications of cloud computing have grown exponentially. But a new computing model, known as Edge computing, has emerged to address problems such as high latency and data-security concerns that often come with cloud technology.




What is Edge AI?

Edge AI, as the name suggests, refers to AI models running at the edge, i.e., on-site. When models are deployed this way, there is no need to send API requests or stream real-time data to cloud servers for each inference. With the rise of edge computing and supporting devices, a new marketplace has opened up for artificial intelligence applications: ML models can now be deployed locally, without the need for internet connectivity, and are capable of collecting data and running inference on the spot.

ML models stored on edge devices can be updated and maintained through regular connectivity with cloud platforms or with neighboring edge nodes. Well-established edge device providers also offer cloud platforms to build/train, deploy, and monitor the models in case troubleshooting or an upgrade is needed.

Edge AI architecture usually involves:

  • Cloud Platform: A managing cloud platform handles software and model maintenance through regular checks. Model metrics and scores are transmitted to the cloud servers at intervals, along with data when the model needs re-training. The resulting changes are then pushed back to the edge devices.
  • Edge devices: Deployed locally at the site where they collect data and perform real-time analysis. They usually work offline but connect to cloud servers at regular intervals for software and model checks and updates.
  • Environment: The environment is fitted with edge devices that consume sensor readings, video streams, or whatever input data they are configured for. Operators on site receive the inferences from the edge models and use them in their work.
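The three-part loop above can be sketched in a few lines of Python. This is a minimal illustration, not a real edge SDK: all names (`CloudPlatform`, `EdgeDevice`, `report_metrics`, the accuracy threshold) are hypothetical, and a trivial threshold stands in for a real ML model.

```python
class CloudPlatform:
    """Collects metrics from edge nodes and decides when to bump the model."""
    def __init__(self):
        self.model_version = 1
        self.received_metrics = []

    def report_metrics(self, metrics):
        self.received_metrics.append(metrics)
        if metrics["accuracy"] < 0.90:   # drift detected: "re-train" the model
            self.model_version += 1
        return self.model_version        # edge node pulls the latest version


class EdgeDevice:
    """Runs inference locally and offline; checks in with the cloud periodically."""
    def __init__(self, cloud, sync_every=10):
        self.cloud = cloud
        self.sync_every = sync_every
        self.model_version = cloud.model_version
        self.inference_count = 0

    def infer(self, reading):
        self.inference_count += 1
        # A simple threshold stands in for a real on-device ML model.
        prediction = "anomaly" if reading > 0.8 else "normal"
        if self.inference_count % self.sync_every == 0:
            # Periodic check-in: upload metrics, pull any model update.
            self.model_version = self.cloud.report_metrics({"accuracy": 0.95})
        return prediction


cloud = CloudPlatform()
device = EdgeDevice(cloud, sync_every=10)
results = [device.infer(r) for r in [0.1, 0.9, 0.3] * 10]
# 30 local inferences trigger only 3 cloud check-ins.
print(results.count("anomaly"), len(cloud.received_metrics))  # → 10 3
```

The point of the sketch is the ratio: every inference happens on the device, and the cloud is contacted only at sync intervals for metrics and model updates.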



Why is Edge AI required?

Models are often deployed in remote locations with little to no internet connectivity. In such cases, getting inference from a cloud-hosted model becomes impossible. And when the data required for inference is large, uploading it to the cloud and waiting for inference becomes cumbersome and slow.

A nuclear plant in a remote area, for example, requires continuous sensor monitoring to check for anomalies or radiation leakage. In such a situation, an edge architecture serves better than the cloud: it provides instant inference, and there is no need to upload sensor data to the cloud every second.
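On-device anomaly detection for this scenario can be as simple as a rolling z-score check over the sensor stream, so no reading ever has to leave the site. This is an illustrative sketch under assumed numbers, not a production radiation monitor.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag readings more than `threshold` standard deviations away from
    the rolling mean of the last `window` readings."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(value)
    return anomalies

# A steady baseline with one injected spike at index 8.
stream = [5.0, 5.1, 4.9, 5.0, 5.2, 4.8, 5.1, 5.0, 9.7, 5.0, 4.9, 5.1]
print(detect_anomalies(stream))  # → [8]
```

Only the flagged indices (or an alert) would ever need to travel over the network, rather than the raw per-second stream.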

Edge AI Devices

Commonly known devices used for Edge AI and computing:

  • Raspberry Pi
  • Lenovo ThinkEdge
  • Advantech IPC-200
  • Google Coral boards
  • Jetson Series (NVIDIA)

Architecture: Most edge devices, like the Raspberry Pi, come with a 64-bit processor and RAM. Since edge devices are meant to be lightweight, memory is typically 1-4 GB of RAM, though some devices can be upgraded to accommodate models that demand more processing power, such as face detection models. There are slots for SD-card storage, HDMI ports for input and output, and ports for power supply and Ethernet connectivity.

It is also worth mentioning that some input and output peripherals, such as cameras and display screens, are designed specifically to work with edge devices. For example, the Pi Camera is designed to work with the Raspberry Pi to capture high-definition images and videos.

Edge AI Platforms

Well-known platforms for Edge AI and computing:

  • AWS Greengrass: AWS IoT Greengrass is an open-source platform for managing IoT edge devices. It provides services for building, deploying, and managing software and models on edge devices. The Greengrass runtime is deployed on the edge devices, which connect to Greengrass cloud services for support.
  • Azure IoT Edge: Azure's IoT Edge service provides a cloud platform to manage edge devices and run Azure services and packages on them.
  • Google Distributed Cloud Edge: Google Distributed Cloud Edge brings Google Cloud services to edge locations. It is fully managed by Google, which also provides the hardware, and it offers real-time data analytics with Google's AI and analytics services.



Applications of Edge AI in Industry

The Internet of Things (IoT) is a system of interconnected devices that often works on an edge architecture. Apple's Siri, for instance, can now handle some requests on-device without internet connectivity. Similarly, many computer vision applications are gradually moving toward edge architectures for model deployment.

NVIDIA Metropolis is an application framework for creating and deploying edge automation and AI applications that increase the efficiency of metropolitan infrastructure such as airports, factories, farms, and hospitals. Arizona's Maricopa County Department of Transportation (MCDOT) has used NoTraffic, an NVIDIA Metropolis partner, to reduce traffic on Arizona roadways by tracking real-time traffic flow with deep neural networks and computer vision.

Some sectors in which Edge AI is applied:

  • Computer Vision: Surveillance systems use Edge AI for object detection, face recognition, and tracking to identify anomalous behavior, unauthorized access to systems or areas, subjects with past criminal records, and so on, safeguarding the organization and its locality. Instant detection and recognition help security personnel act immediately and stop a malicious attack before it causes further damage.
  • Manufacturing: Data streams from manufacturing machines can be analyzed in real time by Edge AI models to monitor the manufacturing process, control temperature and pressure conditions, optimize raw-material usage, and more. By continuously analyzing the sensor data stream, AI models can predict faults in the machinery, enabling timely maintenance and calibration, increasing productivity, and reducing the need for damage control.
  • Self-driving cars: Self-driving cars require a constant stream of sensor data and analysis of that data at millisecond intervals. Edge AI provides the best infrastructure for instant inference on the sensor data to guide the car's controls, and because the architecture is self-reliant, low bandwidth does not cause any problem.
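The millisecond-budget argument for self-driving can be made concrete with a few illustrative numbers. The latencies below are assumptions for the sketch, not measurements.

```python
FRAME_BUDGET_MS = 33          # ~30 sensor frames per second
EDGE_INFERENCE_MS = 15        # assumed on-board accelerator inference time
CLOUD_ROUND_TRIP_MS = 120     # assumed network round trip, before any inference

def fits_frame_budget(latency_ms, budget_ms=FRAME_BUDGET_MS):
    """A control decision is only useful if it arrives within the frame budget."""
    return latency_ms <= budget_ms

print(fits_frame_budget(EDGE_INFERENCE_MS))    # → True: edge meets the deadline
print(fits_frame_budget(CLOUD_ROUND_TRIP_MS))  # → False: the network alone misses it
```

With numbers in this range, a cloud round trip blows the per-frame deadline before any inference even runs, which is why the control loop has to live on the vehicle.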



Cloud-Edge trade-off

Deciding whether to deploy AI models on the cloud or at the edge is often confusing. Both technologies have pros and cons, and the enterprise must decide which features are more critical for it. If the enterprise requires instant inference on real-time data, or if the site of operations is in a remote location with sparse or no internet connectivity, Edge AI is the better choice. Edge devices also provide additional security and privacy, since the data is kept on site rather than transmitted to cloud servers.

Sending data to cloud platforms requires bandwidth and storage; edge processing reduces that cost. If the enterprise can afford high latency and does not require frequent model inferences, cloud infrastructure is better, since it is much simpler and does not require hardware maintenance and configuration. That said, Edge versus Cloud is usually not an either/or decision: edge architectures rely on cloud platforms to maintain their edge nodes and update their models.
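A back-of-envelope calculation shows the bandwidth side of the trade-off. The per-hour figures are illustrative assumptions, not benchmarks.

```python
HOURS_PER_MONTH = 24 * 30
SENSOR_MB_PER_HOUR = 450.0   # assumed: one compressed camera stream, cloud-only design
SYNC_MB_PER_HOUR = 2.0       # assumed: edge design uploads only metrics and updates

cloud_only_mb = SENSOR_MB_PER_HOUR * HOURS_PER_MONTH
edge_mb = SYNC_MB_PER_HOUR * HOURS_PER_MONTH

print(f"cloud-only upload: {cloud_only_mb:,.0f} MB/month")
print(f"edge upload:       {edge_mb:,.0f} MB/month "
      f"({cloud_only_mb / edge_mb:.0f}x less)")
```

Under these assumptions the edge design ships a few gigabytes of sync traffic per month instead of hundreds, which is the cost reduction the paragraph above refers to.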

Conclusion

Edge AI is a fairly new computing paradigm, but it has grown exponentially, with new applications and supporting technologies emerging every year. Edge AI architecture is especially beneficial in the manufacturing, surveillance, and monitoring industries. With instant inference delivery, little to no need for internet connectivity, data security and privacy, and cost efficiency, Edge AI has the potential to revolutionize how AI technology is developed and used worldwide, helping AI implementation grow and reach more people and institutions.