Curated by Vinay Prabhakar Minj
If you want to avoid the issue of storing a large amount of data in the Cloud in order to reduce cost, then the best way to do that is to process it near the edge of your network, where the data is being generated.
One of the popular applications for Edge Computing is city surveillance.
Surveillance cameras are proliferating. Mumbai, for instance, has around 5000 CCTV cameras and plans to add over 5000 more. All these cameras capture traffic footage, which can help in identifying mobility problems and criminal activity. But it is practically impossible for security personnel to monitor them all, so an option is to use computer vision and machine learning.
Machine learning runs on data. So, in camera surveillance, the camera footage has to be collected and processed before any analysis can be performed. In terms of bandwidth, a camera needs about 3 Mbps to stream SD video and 5 Mbps for HD video. An SD stream consumes about 32 GB of data a day, which means ten cameras consume nearly 10 TB of data a month. And with a further increase in the number of cameras, the data consumption grows accordingly.
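The storage figures above follow directly from the quoted bit rate, as a little arithmetic shows. The bit rate below is the one assumed in the text; real cameras vary with codec and resolution.

```python
# Back-of-the-envelope storage estimate for continuous SD streaming.
# The 3 Mbps figure is the one quoted above for an SD stream.

SD_BITRATE_MBPS = 3
SECONDS_PER_DAY = 24 * 60 * 60

def daily_storage_gb(bitrate_mbps: float) -> float:
    """Gigabytes written per day by one continuously streaming camera."""
    megabits_per_day = bitrate_mbps * SECONDS_PER_DAY
    return megabits_per_day / 8 / 1000   # megabits -> megabytes -> GB

per_camera_day = daily_storage_gb(SD_BITRATE_MBPS)    # ~32.4 GB
ten_cameras_month = per_camera_day * 10 * 30 / 1000   # in TB

print(f"{per_camera_day:.1f} GB/day per camera")
print(f"{ten_cameras_month:.1f} TB/month for 10 cameras")
```

Ten cameras therefore generate close to 10 TB a month, which is exactly the kind of volume that makes pushing raw video to the Cloud expensive.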
Why do we need Edge computing, and where can it be used?
While the cost of CPU and storage is one part of Cloud computing, bandwidth is another significant part. The bandwidth cost is not just that of pushing the data into the Cloud; it also scales with other units such as the number of servers and the storage space they need.
So, if you want to avoid storing a large amount of data in the Cloud in order to reduce cost, the best way is to process it near the edge of your network, where the data is being generated. Hence the term Edge computing. Today, humans process the data received from the cameras. Nothing goes to the Cloud; instead, the camera feed goes to a centralised place, where someone figures out what is happening and stores it.
To automate this using computer vision and machine learning algorithms, a system rather than a human is required to process the data. For example, the feed from a camera at a traffic signal can be processed right there, at the signal itself.
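As a minimal sketch of this idea, processing at the point of capture and forwarding only events rather than raw video, the frame format and the detection function below are hypothetical stand-ins, not part of any particular product:

```python
# Edge-side filtering: analyse each frame locally and forward only
# events of interest, instead of streaming raw video to the Cloud.
# detect_vehicle_count() stands in for a real computer-vision model;
# frames are plain dicts here for illustration.

def detect_vehicle_count(frame) -> int:
    """Hypothetical local inference step."""
    return frame.get("vehicles", 0)

def process_at_edge(frames, congestion_threshold=20):
    """Yield compact event records instead of raw frames."""
    for frame in frames:
        count = detect_vehicle_count(frame)
        if count >= congestion_threshold:
            # Only a few bytes of metadata leave the edge node.
            yield {"ts": frame["ts"], "event": "congestion", "vehicles": count}

frames = [{"ts": 0, "vehicles": 5}, {"ts": 1, "vehicles": 25}]
print(list(process_at_edge(frames)))   # only the congested frame is reported
```

The point is the shape of the pipeline: heavy data is consumed locally, and only lightweight, actionable events travel upstream.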
Edge computing can be used in applications that require autonomy (i.e., completing tasks with little or no human interaction), such as self-driving cars and Industry 4.0. Another area where Edge computing finds use is applications that cannot tolerate latency, such as health care and financial transactions, where latency can cause system failure.
Latency in Cloud server
For factory automation, which involves motion control, we need an end-to-end latency of 1 millisecond (ms); for process automation monitoring, an end-to-end latency of 50 ms is required.
With a Cloud server, the end-to-end latency is around 60 ms, assuming approximately 29 ms of one-way latency. So even if your Cloud server is located in Mumbai and you want to automate your factory, the round trip still takes a minimum of about 60 ms.
Latency arises from the number of devices between you and the Cloud, but it also has a physical limit set by how many kilometres the fibre link runs: one cannot go below a certain latency. That means the 1 ms target for factory automation may not be achievable at all on Cloud infrastructure.
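The physical limit mentioned here can be estimated from the speed of light in optical fibre, roughly 200,000 km/s (about two thirds of its speed in vacuum). The distances below are illustrative, not taken from any specific deployment.

```python
# One-way propagation delay over fibre is distance / (speed of light
# in glass). 200,000 km/s works out to 200 km per millisecond.

FIBRE_SPEED_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Propagation delay there and back, ignoring switching delays."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

# A Cloud region 1000 km away costs 10 ms in propagation alone --
# already ten times the 1 ms budget for motion control.
print(round_trip_ms(1000))   # 10.0
print(round_trip_ms(10))     # 0.1 -- an on-premises edge node
```

Real links add queueing and switching delays on top, so the propagation figure is a hard floor, not the whole story.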
To solve this problem of latency, Edge and Fog computing can be introduced in IoT. There are three laws which explain why we need Edge computing:
- Law of physics – act locally
- Law of economics – pre-processing reduces cost
- Law of land – data should stay locally
As per the research by Gartner, 50 percent of enterprise data will be produced and processed outside traditional data centers and Clouds by 2022 – up from about 10 percent recently. Also, 80 percent of enterprises would have shut down their traditional data centers by 2025, versus 10 percent in 2018.
All this indicates that computation is moving towards Edge and distributed systems.
What is EDGE computing?
Edge computing is an optimisation of the Cloud that moves compute close to the data source. It is not something new that came up in the last two years; it started 30 years ago, as people gradually moved from mainframes to microcomputers (PCs) and then to Cloud computing. Computing became steadily cheaper, and people began to realise that the work done in the Cloud could also be done at the Edge, where bandwidth is comparatively cheaper. And as mission-critical applications started coming in, latency became a problem.
While Edge computing refers to delivering computing capabilities at the edge of a network in order to improve the performance, operating cost and reliability of applications and services, Fog computing is a distributed computing concept in which compute and data storage resources, as well as applications and their data, are placed at an optimal point between the user and the Cloud to improve performance and redundancy.
What these commonly used terminologies have in common is a single compute node where data processing takes place; this can be called an Edge node. A set of such nodes is referred to as Fog nodes. People in the Edge computing community often refer to Fog computing as the Cloud Edge.
A use case of smart surveillance
A smart surveillance system was deployed for mobility management and citizen safety in an industrial township spread over around 3.2 sq km.
There was a network of 200-plus IP cameras connected over a high-speed optical link. All the data went to a centralised place where it was processed, and this processing was done manually before AiKaan Labs deployed a smart solution. What the township wanted was a Fog infrastructure to support multiple video analytics applications, taking care of mobility (traffic) management and the safety of the people in the township. This is where Fog computing played a huge role: it supported the video analytics software at the central station and made it easier for each vendor to run its own instance of the applications.
In another use case, a steam boiler vendor wanted to optimise operations and save up to 15 percent of the cost. To achieve this, data was collected and then processed locally using a PLC.
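The boiler case illustrates the "law of economics" above: pre-process locally and send only summaries upstream. A minimal sketch of that pattern follows; the sensor data, sampling rate and summary window are assumptions for illustration, not details from the deployment.

```python
# Local pre-processing on an edge gateway: aggregate high-rate sensor
# readings into periodic summaries, so only a fraction of the data
# needs to travel to the Cloud.

from statistics import mean

def summarise(readings, window=60):
    """Collapse one reading per second into one summary per minute."""
    summaries = []
    for start in range(0, len(readings), window):
        chunk = readings[start:start + window]
        summaries.append({
            "start_s": start,
            "mean_temp": round(mean(chunk), 2),
            "max_temp": max(chunk),
        })
    return summaries

# Two minutes of synthetic 1 Hz boiler temperature samples.
readings = [150 + (i % 10) for i in range(120)]
summaries = summarise(readings)
print(len(readings), "->", len(summaries), "records sent upstream")
```

Sending two summary records instead of 120 raw samples is the bandwidth saving in miniature; at real sampling rates the reduction is what makes the economics work.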
Challenges in deploying EDGE and Fog computing
Some of the major challenges people face while doing Edge and Fog computing include:
- Remote connectivity and debugging, where multiple devices need to be identified and connected.
- Model, firmware and data updates, as video analytics requires machine learning model updates and some gateways need firmware upgrades.
- Lack of trained personnel to manage the devices (both Edge and Fog), owing to the complexity of the systems and the technical barrier.
About the author
This article is an extract from a speech presented by Chetan Kumar S, CEO and co-founder, AiKaan Labs (www.aikaan.io), at IEW/IOTSHOW.IN 2019.