Edge & Cloud Networks — A Robust, Scalable & Efficient Symbiosis

Ideaspring Capital
Jan 21, 2020

Over the last decade, the biggest technology advancement for enterprise software has been the adoption of the cloud. There are several reasons for this shift, mainly:

  1. Customers prefer to move away from the complexity of deploying, customizing, upgrading and managing enterprise software themselves. They prefer vendors taking complete ownership of the solution's life cycle.
    They would rather focus on how effectively they can consume the offering than invest in the IT skills required to maintain it.
  2. Cloud vendors now have extremely mature offerings. Customers understand that cloud vendors have the expertise to offer better hosting infrastructure, better management and better SLA terms than they could achieve in-house.
    SLA terms today are expansive, covering system uptime, data-loss prevention and security, and maintenance and response-time guarantees as part of the overall contract.
  3. Cloud vendors have invested heavily in guarantees around cloud security and compliance, which has helped customers migrate.
  4. Customers are now comfortable with paying regular monthly and annual subscriptions for Software-as-a-Service and understand the benefits of moving to that model.

Edge computing is the processing of data at the edge of the network, close to the source of the data. The concept is not new; it has always been a critical element of any distributed system.

With platforms moving to the cloud, which is usually remote from the source of data, the importance and complexity of the edge have grown. The edge and the cloud share a symbiotic relationship:

The edge sits close to the source of data. It processes and aggregates data before sending it to the cloud, which significantly reduces the bandwidth required and allows the system to scale.

The edge can make decisions locally, triggering alerts and notifications that allow you to respond to the data in near real time.

The cloud-based platform acts as a repository for the data streamed from the edge. The platform is well suited to broader, after-the-fact analytics of that data.
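To make this division of labour concrete, here is a minimal sketch (in Python) of an edge loop that aggregates raw readings locally, raises an alert the moment a reading crosses a threshold, and periodically ships only a compact summary upstream. The endpoint URL, threshold and window length are illustrative assumptions, not details from any specific product.

```python
import random
import statistics
import time

import requests  # assumed to be available on the edge device

CLOUD_ENDPOINT = "https://cloud.example.com/ingest"  # hypothetical ingest endpoint
ALERT_THRESHOLD = 8.0                                # hypothetical local alert limit
WINDOW_SECONDS = 60                                  # aggregation window


def read_sensor() -> float:
    """Stand-in for a real sensor read (here: a simulated vibration RMS value)."""
    return random.gauss(5.0, 1.0)


def run_edge_loop() -> None:
    window, window_start = [], time.time()
    while True:
        value = read_sensor()
        window.append(value)

        # Local decision: react in near real time without a round trip to the cloud.
        if value > ALERT_THRESHOLD:
            print(f"ALERT: reading {value:.2f} exceeded threshold")

        # Aggregate locally and push only a small summary upstream,
        # which keeps the bandwidth needed between edge and cloud low.
        if time.time() - window_start >= WINDOW_SECONDS:
            summary = {
                "count": len(window),
                "mean": statistics.mean(window),
                "max": max(window),
            }
            requests.post(CLOUD_ENDPOINT, json=summary, timeout=5)
            window, window_start = [], time.time()

        time.sleep(0.1)  # sampling interval for this sketch
```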

Edge computing is applicable to virtually any domain. However, it is extremely critical in the Internet of Things (IoT).

Machines and sensors generate a tremendous amount of information at a very fast rate, far more than can practically be pushed to the cloud.

Hence, data needs to be processed and acted upon close to the source. The rise of IoT has resulted in technology advancements in edge computing in three broad areas:

  1. Edge hardware: Manufacturers are building specialized hardware appliances that have the compute and storage power that the edge needs.
  2. Edge software frameworks: Open-source frameworks such as Eclipse Kura allow you to build a robust and extensible edge infrastructure.
  3. Use cases: Improved edge hardware and frameworks have made the edge more sophisticated. Edge systems now go beyond data processing to edge analytics and machine learning.

Predicting the failure of critical machine assets is an extremely important use of AI at the edge for IoT. Preventive maintenance of assets that are likely to fail can have a tremendous impact on the operations of a manufacturing company.

This is also an excellent example of how edge systems and the cloud platform work together.

While there can be many indicators of machine failure, vibration analysis is usually a very effective way to tell whether a machine is likely to fail. Vibration analysis is done by sampling the signal at rates in the kilohertz (kHz) range, that is, thousands of samples per second.

This is a huge amount of data. The edge processes and transforms this data into spectrum data using techniques like the fast Fourier transform (FFT). Samples of this spectral data are pushed to the cloud platform.
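As a rough sketch of this edge-side transform (the article only mentions FFT; the sampling rate and window size below are illustrative assumptions), NumPy can turn a window of raw vibration samples into a frequency spectrum:

```python
import numpy as np

SAMPLE_RATE_HZ = 10_000  # assumed: vibration sampled at 10 kHz
WINDOW_SIZE = 4_096      # assumed: samples per FFT window


def vibration_spectrum(samples: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Convert one window of time-domain vibration samples into a magnitude spectrum."""
    windowed = samples * np.hanning(len(samples))   # taper to reduce spectral leakage
    magnitudes = np.abs(np.fft.rfft(windowed))      # FFT for real-valued input
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE_HZ)
    return freqs, magnitudes


# Synthetic stand-in for sensor data: a 120 Hz vibration plus noise.
t = np.arange(WINDOW_SIZE) / SAMPLE_RATE_HZ
samples = np.sin(2 * np.pi * 120 * t) + 0.1 * np.random.randn(WINDOW_SIZE)
freqs, mags = vibration_spectrum(samples)
print("dominant frequency:", freqs[np.argmax(mags)], "Hz")  # roughly 120 Hz
```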

The cloud platform holds historical data for the machine and for similar machines. Failure is usually predicted using a machine learning technique called anomaly detection, which involves building a model from historical data collected while the machine is operating normally.

A significant deviation from the model is an anomaly and is flagged. Users are notified and can take the machine down for preventive maintenance, preventing a costly breakdown later.
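Here is a minimal sketch of this platform-side step, assuming the historical "normal operation" data has already been reduced to fixed-length spectral feature vectors. The article does not name a specific algorithm; scikit-learn's IsolationForest is used purely as one common choice, and the file names are hypothetical:

```python
import joblib
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical training set: one row per FFT window captured while the
# machine was known to be operating normally (e.g. binned spectrum magnitudes).
normal_spectra = np.load("normal_operation_spectra.npy")  # shape: (n_windows, n_bins)

# Fit the anomaly detector on normal behaviour only; anything that deviates
# strongly from this learned profile will later be scored as an anomaly.
model = IsolationForest(n_estimators=200, random_state=0)
model.fit(normal_spectra)

# Persist the model so the platform can ship it to edge devices for execution.
joblib.dump(model, "vibration_anomaly_model.joblib")
```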

The platform is responsible for building, refining and maintaining the machine learning model. Once built, the model is made available to the edge for execution.

The edge compares the incoming vibration data to the model built on the platform. Any anomalies are flagged in real time, so a costly breakdown can be averted.
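Continuing the sketch above, the edge side would load the model deployed from the platform and score each incoming spectrum as it arrives; the file name and the alerting hook are again illustrative assumptions:

```python
import joblib
import numpy as np

# Model built and refined on the cloud platform, then deployed to this edge device.
model = joblib.load("vibration_anomaly_model.joblib")


def is_anomalous(spectrum: np.ndarray) -> bool:
    """Return True if this FFT window deviates from normal behaviour (IsolationForest labels it -1)."""
    return model.predict(spectrum.reshape(1, -1))[0] == -1


def on_new_spectrum(spectrum: np.ndarray) -> None:
    if is_anomalous(spectrum):
        # Local, real-time reaction: flag the machine so it can be taken down
        # for preventive maintenance before a costly breakdown occurs.
        print("Anomaly detected in vibration spectrum; scheduling preventive maintenance")
```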

The cloud platform and the edge play complementary roles. The platform deals with data aggregation, model building and refining. The edge deals with the deployment, execution and monitoring of the model.

This approach leverages the strength of each subsystem and results in a robust, highly scalable and efficient system.

In the future, the distinction between the edge and the cloud is likely to blur. Cloud vendors already provide hybrid cloud offerings that let you combine a public and a private cloud.

This will offer the flexibility to move data-processing workflows between the two environments, which will greatly simplify and standardize the way edge and cloud solutions are built.

This article was written by Ranjit Nair, co-founder and CTO of Altizon, a Big Data and IoT product company focused on building data-driven applications that help enterprises gain new customer insights, boost operational efficiencies and build a competitive advantage.


Ideaspring Capital

An early-stage VC fund investing in technology product companies in India.