
Rearchitecting the Internet


Infrastructure providers must work with cloud partners and service providers to alleviate bottlenecks at the edge.

While estimates vary, some experts believe that a self-driving test vehicle can produce as much as 30 terabytes of data in a single day of driving, and that data will need to run through powerful analytics programs to produce actionable information. In such cases, edge data centers will prioritize which data needs to remain at the edge, to be processed by the vehicle's onboard computing power, and which data should be relayed back to centralized data centers or the cloud for analysis.
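The triage decision described above can be pictured as a simple routing rule. The article does not specify an actual algorithm, so the function, field names, and the 50 ms cutoff below are purely illustrative assumptions.

```python
def triage(record):
    """Decide where a vehicle data record should be handled.

    Hypothetical rule: latency-critical data (e.g. sensor frames feeding
    real-time driving decisions) stays at the edge; bulk telemetry is
    queued for upload to a centralized data center or the cloud.
    """
    # Assumed cutoff: anything needing a response faster than 50 ms
    # cannot tolerate a round trip to a distant core data center.
    if record["latency_budget_ms"] < 50:
        return "process-at-edge"
    return "upload-to-core"

records = [
    {"kind": "lidar_frame", "latency_budget_ms": 10},
    {"kind": "trip_telemetry", "latency_budget_ms": 60000},
]
print({r["kind"]: triage(r) for r in records})
# {'lidar_frame': 'process-at-edge', 'trip_telemetry': 'upload-to-core'}
```

Real deployments would weigh bandwidth cost and data value as well as latency, but the shape of the decision is the same: process close to the source when the response time demands it, backhaul the rest.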

Artificial Intelligence

According to a survey by McKinsey & Company, one in five C-level executives reports using AI as a core part of their business. Indeed, AI and an associated technology, Deep Learning, are already at work helping financial institutions detect fraudulent transactions, assisting retailers in creating personalized marketing, guiding streaming video providers toward suggestions for our next Netflix binge-watch, and even assisting healthcare providers in making medical diagnoses.

AI and Deep Learning technologies promise to save organizations billions of dollars over the next few decades. However, as companies turn to new AI applications driven by high-performance computing (HPC), they are finding that traditional computing platforms and legacy data centers are not equipped to handle these new demands.

Traditional data centers were built to deliver 5kW to 10kW of power per rack, which was sufficient for servers running standard CPUs but cannot support large numbers of processors or servers packed into a single rack or cabinet. By contrast, edge data centers that can offer 30kW to 35kW or more per rack allow more computing power to occupy a smaller physical footprint. This power density also allows customers to take advantage of new, more energy-efficient processors and servers that require less power to perform even more processing-intensive work, such as AI and Deep Learning applications.
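The footprint argument is simple arithmetic. The per-rack figures below come from the article; the 200 kW total deployment size is an assumed example, chosen only to make the comparison concrete.

```python
from math import ceil

def racks_needed(total_kw, kw_per_rack):
    """Racks required to host a given total IT load, rounding up."""
    return ceil(total_kw / kw_per_rack)

total_load_kw = 200  # assumed HPC deployment; not from the article

print(racks_needed(total_load_kw, 5))   # traditional low end:  40 racks
print(racks_needed(total_load_kw, 10))  # traditional high end: 20 racks
print(racks_needed(total_load_kw, 35))  # high-density edge:     6 racks
```

At 35 kW per rack, the same load fits in roughly a sixth of the floor space of a 5 kW design, which is the footprint advantage the paragraph above describes.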

Virtual and Augmented Reality

Tractica forecasts that worldwide enterprise VR hardware and software revenue will increase from $1 billion in 2018 to $12.6 billion annually by 2025. VR/AR applications are presently at work in such industries as automotive, energy, utilities, manufacturing, travel and transportation, and healthcare.

Creating entirely virtual worlds, or superimposing digital images and graphics on top of the real world in an immersive way, requires a lot of processing power. Both VR and AR also require high bandwidth and ultra-low latency. Here again, because the Edge is the lowest-latency point of demarcation between service delivery and consumption, processing data away from the cloud and closer to the source delivers the ultra-low-latency data transport that VR/AR applications require.

Demystifying Data Traffic Flows

The Internet just wasn’t constructed to handle today’s traffic flows; it was built for download-centric traffic. So much content is now being created at the edge that it is producing huge bottlenecks.

Hence, infrastructure providers must work with cloud partners and service providers to alleviate bottlenecks at the edge and ultimately help enterprises demystify their business-critical data traffic flows. Does traffic need to go back to the core? Does data need to remain at the edge?

As the IoT, autonomous vehicles, AI, VR/AR applications and other latency-sensitive, network-critical trends become more mainstream, computing and networking at the edge will become essential to alleviating global network congestion. Enterprise organizations and service providers will have to collaborate to solve these challenges and create multi-directional traffic flows, and that means they need edge data centers in all their various form factors, sited wherever customers demand.


