
Intelligent File Transfer as Part of Service Agility in Decentralized IoT Networks


Another way of supporting access networks, primarily on the fixed-line side, is fog computing. According to the OpenFog Consortium, fog computing is a system-level horizontal architecture that distributes resources and services of computing, storage, control, and networking anywhere along the continuum from cloud to things. Conceptually, it takes some, or all, of these resources down to the device level. And while MEC is intended for mobile networks, fog can also include wireline networks.

MEC can be combined with other key technologies, such as network slicing, to support 5G in ways that were not previously economical. Network slicing virtually carves the network into partitions that can be allocated to the delivery of specific services. A slice meant to carry video might be tuned for high throughput, while a slice meant to support self-driving cars might offer modest throughput but extremely low latency. The idea is that the network stands ready to support a range of use cases in a way today's networks cannot. Network slicing also implies different update cycles for NFV components and containers in the context of service lifecycle management. By putting content and applications at the edge, the network owner can achieve operational and cost efficiencies while introducing new services, reducing network latency and, ultimately, improving the end consumer's quality of experience.
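To make the throughput-versus-latency trade-off concrete, the following minimal sketch models two slice profiles and picks one for a service. The names and fields are hypothetical illustrations, not any vendor's or 3GPP's actual slicing API:

```python
from dataclasses import dataclass

@dataclass
class SliceProfile:
    """QoS targets a network slice is tuned for (illustrative only)."""
    name: str
    min_throughput_mbps: float   # guaranteed downlink throughput
    max_latency_ms: float        # latency budget

# A video slice optimized for throughput, an automotive slice for latency.
VIDEO_SLICE = SliceProfile("video-streaming", min_throughput_mbps=50.0, max_latency_ms=100.0)
AUTOMOTIVE_SLICE = SliceProfile("self-driving", min_throughput_mbps=1.0, max_latency_ms=5.0)

def pick_slice(required_mbps, latency_budget_ms, slices):
    """Return the first slice whose targets satisfy the service's needs."""
    for s in slices:
        if s.min_throughput_mbps >= required_mbps and s.max_latency_ms <= latency_budget_ms:
            return s
    return None

# A control message needing little bandwidth but a 10 ms budget lands on the automotive slice.
print(pick_slice(0.5, 10.0, [VIDEO_SLICE, AUTOMOTIVE_SLICE]))
```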

Early estimates suggest the number of cell sites in 5G will increase tenfold compared to previous mobile generations. This proliferation would mean that, in the US alone, more than two million sites would have to be maintained, updated, and upgraded with the latest applications and NFV software on a regular basis. To keep these MEC cells up and running, the big data collected from them has to be sent to central offices for further analysis. At the same time, the agility of new services would require near-instantaneous updates of many sites, with additional challenges for multinational service providers that operate numerous remote locations.

When data is transferred between different locations and servers via the public Internet, leased lines, or even dedicated lines, the transfer rate rarely reaches even half of the nominal bandwidth. This adds cost to the delivery of new services and is especially problematic for connections with high bandwidth or high delay. In some cases the shortfall can be attributed to cross traffic, with the bandwidth being shared by other users or applications, but data transmission over private networks shows no better results.

One of the most significant shifts is the evolution of data center-focused networks. Driven by the enormous expansion of cloud computing, the data center market is growing rapidly as business and enterprise users move intranet data services onto the Internet. Between 2017 and 2021, the volume of worldwide traffic is expected to almost double, with 95 percent of that growth coming from the cloud or MEC, and with modern high-performance networks operating at speeds of 10, 40 and even 100 Gbps. For large data transfers, however, high link bandwidth does not automatically translate into higher application throughput.
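One reason is the bandwidth-delay product: a sender can only keep a link full if it keeps that many bytes in flight per round trip. The short worked example below (a sketch with illustrative numbers, not a measurement from the article) shows why a fast link with a typical window size still delivers a fraction of its capacity:

```python
def bdp_bytes(bandwidth_gbps, rtt_ms):
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return bandwidth_gbps * 1e9 / 8 * (rtt_ms / 1e3)

def max_throughput_gbps(window_bytes, rtt_ms):
    """Throughput ceiling when a fixed window of data is sent per round trip."""
    return window_bytes * 8 / (rtt_ms / 1e3) / 1e9

# A 10 Gbps path with 100 ms round-trip time needs about 125 MB in flight...
print(f"BDP: {bdp_bytes(10, 100) / 1e6:.0f} MB")                 # -> 125 MB
# ...but with a typical 4 MB window the sender tops out far below the line rate.
print(f"Ceiling: {max_throughput_gbps(4e6, 100):.2f} Gbps")      # -> 0.32 Gbps
```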

The Transmission Control Protocol (TCP) is the de facto standard for reliable data transmission across nearly all IP networks. Even with the increasing share of multimedia traffic, about 90 percent of the bytes and packets transferred over the Internet are sent using TCP. To avoid congestion in a network, TCP uses a mechanism that reduces the so-called congestion window whenever it detects congestion, which leads to a short interruption of data transmission. Under this congestion avoidance mechanism, packet losses are treated as a sign of congestion. However, in real networks there are several other reasons for packet loss. If a loss is not caused by network congestion, reducing the transfer rate does not help and only slows the data transmission unnecessarily. While there are a variety of techniques for users to fine-tune TCP, the main reason for poor performance remains unchanged: the protocol itself.
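The effect of loss-based congestion avoidance can be quantified with the well-known Mathis approximation for Reno-style TCP, in which steady-state throughput scales with MSS/(RTT·√p). The sketch below uses that published model, not any figure from this article, to show how even a small non-congestion loss rate caps throughput regardless of link speed:

```python
import math

def mathis_throughput_mbps(mss_bytes, rtt_ms, loss_rate):
    """Approximate steady-state TCP Reno throughput (Mathis et al. model):
    rate ~ (MSS / RTT) * (C / sqrt(p)), with C = sqrt(3/2)."""
    c = math.sqrt(3.0 / 2.0)
    rate_bps = (mss_bytes * 8 / (rtt_ms / 1e3)) * (c / math.sqrt(loss_rate))
    return rate_bps / 1e6

# On a 100 ms path, a 0.1% random loss rate (not caused by congestion)
# caps standard TCP at roughly 4-5 Mbps, no matter how fast the link is.
for p in (1e-5, 1e-4, 1e-3):
    print(f"loss={p:.0e}: {mathis_throughput_mbps(1460, 100, p):.1f} Mbps")
```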

To achieve fast and efficient data transfer, a new transfer mechanism is needed, one superior to current TCP implementations. Over the last three years, Dexor has implemented a completely new patented protocol with the vision of creating a best-in-class protocol for fast and efficient data transfer. This protocol, the Reliable Multi Destination Transport protocol (RMDT), has proven superior to existing TCP implementations as well as to proprietary data transfer solutions. The keys to RMDT's performance are its algorithms for analyzing the available bandwidth and its highly efficient data management in the protocol stack. Furthermore, RMDT can operate both in the classic one-sender-to-one-recipient delivery mode and deliver data at multi-gigabit speed to multiple destinations simultaneously.
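The article does not disclose how RMDT estimates available bandwidth, so the following is only a generic illustration of the class of techniques such protocols commonly build on, a minimal packet-pair sketch with hypothetical names, not Dexor's patented algorithm:

```python
import statistics

def packet_pair_estimate(dispersions_s, packet_size_bytes):
    """Classic packet-pair idea: two back-to-back probe packets arrive separated
    by the time the bottleneck link needed to serialize the second one, so
    bottleneck bandwidth ~ packet_size / dispersion. The median damps the
    effect of gaps inflated by cross traffic."""
    typical_gap = statistics.median(dispersions_s)
    return packet_size_bytes * 8 / typical_gap  # bits per second

# Example: 1500-byte probes arriving ~12 microseconds apart (one gap widened by cross traffic)
gaps = [11.8e-6, 12.1e-6, 55.0e-6, 12.0e-6, 12.3e-6]
print(f"Estimated bottleneck: {packet_pair_estimate(gaps, 1500) / 1e9:.2f} Gbps")  # ~1 Gbps
```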


