Fexco Front End Architecture – Fog Computing – Part 1

Now we have an extremely fast and powerful API running on the back-end, on cloud computing, bringing us many benefits such as computing power, maintainability and availability. That is really good for a big company, able to analyse a huge amount of data and learn from it. We have a robust API, with all the redundancy, replication, security, and so on. From a high-level perspective, it looks something like this:

Fig 1. Example: Retrieving currency rate from the cloud
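
To make that picture concrete, here is a minimal TypeScript sketch of the direct consumer-to-cloud call from Fig 1. The endpoint, query parameters and response shape are only assumptions for illustration, not our actual API:

```typescript
// Hypothetical shape of the cloud response.
interface RateResponse {
  base: string;
  target: string;
  rate: number;
}

// Every consumer talks straight to the cloud: each call travels the whole
// distance between the device and the data centre.
async function getRateFromCloud(base: string, target: string): Promise<RateResponse> {
  const res = await fetch(`https://api.example.com/rates?base=${base}&target=${target}`);
  if (!res.ok) {
    throw new Error(`Cloud API returned ${res.status}`);
  }
  return res.json() as Promise<RateResponse>;
}
```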

Is there a way to improve that? What about the distance between the consumer and the cloud? What if the consumer's network is not strong or fast enough to communicate with the cloud every time? Or what if you don't have deep pockets to deal with so many requests on the cloud?

OK, we can conclude one thing: the same way our back-end (API) has changed, our consumers (front-end) need to change. The devices should send the API only what is necessary, compacted and smaller. We need to reduce the data transferred.
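
As a rough sketch of what that could look like, the snippet below keeps the heavy raw data on the device and sends the API only a small summary. The reading shape, report shape and endpoint are made up for illustration:

```typescript
// Raw readings collected on the device; debugInfo is large and only useful locally.
interface RawReading {
  deviceId: string;
  timestamp: number;
  value: number;
  debugInfo: string;
}

// The compact payload that actually crosses the network.
interface CompactReport {
  deviceId: string;
  from: number;
  to: number;
  average: number;
  samples: number;
}

// Summarise locally (assumes at least one reading).
function compact(readings: RawReading[]): CompactReport {
  const sum = readings.reduce((acc, r) => acc + r.value, 0);
  return {
    deviceId: readings[0].deviceId,
    from: readings[0].timestamp,
    to: readings[readings.length - 1].timestamp,
    average: sum / readings.length,
    samples: readings.length,
  };
}

// One small POST replaces many large ones.
async function report(readings: RawReading[]): Promise<void> {
  await fetch("https://api.example.com/reports", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(compact(readings)),
  });
}
```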

Fog Computing

According to Wikipedia, “Fog Computing is an architecture that uses one or more collaborative end-user clients or near-user edge devices to carry out a substantial amount of storage (rather than stored primarily in cloud data centers), communication (rather than routed over the internet backbone), control, configuration, measurement and management (rather than controlled primarily by network gateways such as those in the LTE core network).”

Born from the need to process the huge amount of data generated by IoT (Internet of Things) devices, fog computing brings the idea of reducing this distance between the cloud and consumers. It means that IoT data should be processed locally, closest to where it is collected, on the devices themselves or on gateways. So, for instance, it could be something like this:

Fig 2. Example: Retrieving currency rate from Fog server
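
A minimal sketch of the flow in Fig 2 could look like the following. The fog node address, timeout and response shape are assumptions for illustration only: the consumer asks the nearby fog node first and only falls back to the cloud when the fog node cannot answer.

```typescript
const FOG_NODE = "http://fog.local:8080";      // placeholder for a near-user node
const CLOUD_API = "https://api.example.com";   // placeholder for the cloud API

async function getRate(base: string, target: string): Promise<number> {
  const path = `/rates?base=${base}&target=${target}`;
  try {
    // Short timeout: the fog node is close, so it should answer quickly.
    const res = await fetch(`${FOG_NODE}${path}`, { signal: AbortSignal.timeout(500) });
    if (res.ok) {
      const body = await res.json();
      return body.rate;
    }
  } catch {
    // Fog node unreachable or too slow; fall through to the cloud.
  }
  const res = await fetch(`${CLOUD_API}${path}`);
  const body = await res.json();
  return body.rate;
}
```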

One key aspect of this new era is that both data consumption and production are heavily distributed and sit at the edges of the network (i.e. closer to, or at, end-user devices). With data also produced at the edge, generation and consumption can occur at many different places and times. By processing data at the edge, close to the user, we obtain low latency, absorb the intensive traffic and relieve the delay in the communication between the cloud and users, making the solution more scalable, sustainable and efficient.
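
To illustrate how a fog node can absorb that traffic, here is a small hypothetical sketch of a gateway that caches currency rates locally, so many nearby consumers share a single cloud request. It assumes Node 18+ (for the global fetch) and a placeholder cloud URL:

```typescript
import * as http from "http";

const CLOUD_API = "https://api.example.com";
const TTL_MS = 60_000; // keep a rate for one minute before asking the cloud again

const cache = new Map<string, { rate: number; fetchedAt: number }>();

async function lookup(pair: string): Promise<number> {
  const hit = cache.get(pair);
  if (hit && Date.now() - hit.fetchedAt < TTL_MS) {
    return hit.rate; // served at the edge, no round trip to the cloud
  }
  const res = await fetch(`${CLOUD_API}/rates?pair=${pair}`);
  const body = await res.json();
  cache.set(pair, { rate: body.rate, fetchedAt: Date.now() });
  return body.rate;
}

// Nearby consumers hit this local server instead of the cloud directly.
http.createServer(async (req, res) => {
  const pair = new URL(req.url ?? "/", "http://fog.local").searchParams.get("pair") ?? "EURUSD";
  const rate = await lookup(pair);
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ pair, rate }));
}).listen(8080);
```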

Summary

From a development perspective, the implementation can be more complex in order to achieve this processing and data handling. From a networking perspective, one of the huge benefits of fog computing is in the user experience: reducing response time brings performance and availability. From a cloud perspective, we reduce the amount of “dirty” data received, reducing the bottleneck on the cloud and consequently saving us money. The next step is to bring this idea into our front-end architecture, choosing the frameworks and implementing it. We need to take real project examples and see how they fit this architecture.

Author: Tulio Castro

Fexco Senior Software Engineer

3 Replies to “Fexco Front End Architecture – Fog Computing – Part 1”

  1. Interesting article. I was at a conference recently where the guys from Tesla were speaking about the amount of data a typical Tesla car transfers back to base (an enormous amount of data) and how the cloud would not be in a position to handle this. Edge or fog computing was their answer for handling the large volumes of data required for their smart-car software updates and car data uploads. Edge practices for connectivity such as Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I) and Vehicle-to-Everything (V2X) were mooted; fundamentally, technology such as edge computing (fog computing) will be required to deliver this type of connectivity (IoT).

    1. Exactly, I’ve seen some projects using sensors which produce more than 1 GB of data per hour. With this idea, they could process and reduce that amount of data to something smaller and then send it to the API. The same goes for cities, robots, industrial machines, etc.
