
The Road to Open Edge Computing


The community has been focusing on requirements such as small footprint and high performance, which are both crucial for most edge use cases. However, evaluating success always depends on the use case and its specific demands.

Keeping existing workloads running and available on a disconnected site is the easier challenge to overcome; if you also need to start new instances or provide user management functionality, you will need control functions available locally. The two main options to choose from are the Centralized Control Plane model and the Distributed Control Plane model (see Figure 2, on the previous page).
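
The difference between the two models can be sketched in a few lines of Python. This is purely illustrative (the function names and the set of control functions are assumptions, not part of any real project): under the centralized model an edge site carries no control functions, so it cannot start new instances while disconnected; under the distributed model it can.

```python
from dataclasses import dataclass, field

# Illustrative set of control functions an edge site might host locally.
CONTROL_FUNCTIONS = {"identity", "scheduling", "image_store", "user_mgmt"}

@dataclass
class EdgeSite:
    name: str
    local_control: set = field(default_factory=set)

    def can_start_instances_offline(self) -> bool:
        # Starting new instances while disconnected requires, at minimum,
        # local identity and scheduling services.
        return {"identity", "scheduling"} <= self.local_control

# Centralized Control Plane: the edge site runs workloads only.
central_site = EdgeSite("edge-1")

# Distributed Control Plane: the edge site carries its own control functions.
distributed_site = EdgeSite("edge-2", local_control=set(CONTROL_FUNCTIONS))

print(central_site.can_start_instances_offline())      # False
print(distributed_site.can_start_instances_offline())  # True
```

The trade-off visible even in this toy model is the one the article describes: local autonomy costs you a larger footprint at every edge site.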

As you can see, the footprints of the two options are quite different when it comes to the edge data centers. It is also more complicated to manage and orchestrate all the control functions that provide autonomy on the edge, as you may need to synchronize user information and other metadata throughout the infrastructure to maintain a consistent view.

Several projects realize the above models. For example, the Distributed Compute Node (DCN) option of the OpenStack TripleO project gives you the centralized architecture, while StarlingX is applicable if you need a distributed setup.

A practical example

Let’s take a closer look at StarlingX, which is an open source project supported by the OpenStack Foundation.

This project is a great example of the integration work that is needed to provide a flexible and robust edge platform. It also uses building blocks that you are likely already familiar with, such as Linux, OpenStack, Kubernetes, Ceph and so forth. This points towards the aforementioned evolution path by providing the pieces of a traditional cloud computing platform while giving you the option to deploy selected services on edge sites to get the required functionality (see Figure 3, below).


Figure 3: StarlingX

The project uses containerization both for the platform services (for flexibility and easier management) and, where applicable, for workloads. As you can see in the diagram above, several services in the architecture are responsible for managing the lifecycle of the hardware and software infrastructure, including the Distributed Edge Cloud component, which is responsible for keeping your edge sites in sync.
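
To make the synchronization problem concrete, here is a minimal sketch of the kind of reconciliation such a component performs: a central registry pushes revisioned user records out to edge sites so each site keeps a consistent local copy while remaining autonomous. None of these names correspond to a real StarlingX API; this is an assumed, simplified model of the task.

```python
# Central source of truth: user records tagged with a revision number.
central_users = {
    "alice": {"role": "admin", "rev": 2},
    "bob":   {"role": "viewer", "rev": 1},
}

# Each edge site holds its own (possibly stale or empty) local copy.
edge_copies = {
    "edge-1": {"alice": {"role": "admin", "rev": 1}},  # stale revision
    "edge-2": {},                                       # freshly deployed
}

def sync_site(site: str) -> list:
    """Bring one edge site's local cache up to the central revisions.

    Returns the list of user names that were created or updated.
    """
    updated = []
    local = edge_copies[site]
    for user, record in central_users.items():
        if user not in local or local[user]["rev"] < record["rev"]:
            local[user] = dict(record)  # copy, so sites stay independent
            updated.append(user)
    return updated

for site in edge_copies:
    print(site, sync_site(site))
```

Because every site converges toward the central revisions but serves requests from its own copy, it stays operational even when the link to the central site is down, which is exactly the autonomy property discussed earlier.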


Taking it to the edge

Edge computing is breaking down barriers between industries by taking computing power to cars, factories, fields and homes. The lines between solutions are disappearing as well, and operators' new goals demand flexibility, agility and the ability to integrate, as environments grow organically and push the edge ever further out.

As the business models evolve, the cost implications will also become clearer, which will further encourage the industry players to participate in the open source efforts and work on the building blocks together.


