
The Power of Video Analytics


The first challenge is deciding how to deal with compressed video data, which suffers from a number of legacy limitations.

Improving video quality

While more and more user-generated content is created, uploaded, and shared from an ever-wider variety of devices each day, the distribution network is also undergoing massive change. This presents significant challenges for video service assurance. In the past, the job was about as simple as monitoring video from point A to point B, all within the operator's domain. Today, a video stream passes through numerous points before ending up on any number of devices for viewing.

Take, for example, a mobile user who decides to stream a video through Amazon Prime while riding an Amtrak train. The content originates on an Amazon server and then passes through some form of optimization or transcoding. From there, it must traverse numerous terrestrial and mobile networks to reach the user. Since this user is in transit, the video stream will likely pass through the network of a roaming partner. Then, perhaps, the Amtrak Wi-Fi begins to work (fellow Amtrak riders, I know this is a stretch, but bear with me!), and the "last mile" shifts over to Amtrak's Wi-Fi network. This is a very complex environment in which to ensure video quality, and it requires powerful analytics tools that can operate in real time to deliver end-to-end service visibility.

This challenge is about to become even more demanding as network virtualization takes hold. Instead of monitoring physical boxes, service providers must ascertain the performance of a video service as it moves through virtual, dynamic network elements in a multi-vendor environment. This is really the “phase two” of network virtualization that no one is talking about: service assurance in a virtualized environment. While NFV and SDN will bring cost reductions in some areas, they will also create a requirement for powerful data analytics tools to maintain high quality of service. 


Pipeline recently met with IneoQuest, a vendor that offers this sort of end-to-end visibility into video services. IneoQuest enables CSPs to see the entire picture, no pun intended, from the head-end to the multi-screen customer. It also provides a number of analytical tools with which service providers can better monetize their video assets. For example, by understanding every session, frame, and packet, and correlating this data with viewership data (what was watched, by whom, where, and on what device), providers gain a far clearer view of which content actually earns its keep.

Legacy challenges

As the technology for consumption, creation, and delivery moves forward at a rapid pace, it's prudent to point out that the standards used for video data are in need of replacement. This, of course, makes sense: the major compression standards were developed many years before big data was ever a consideration, let alone pervasive mobility and network virtualization. MPEG, for example, is a very popular family of compression formats designed to cope with bandwidth constraints, and if you look on your laptop or mobile phone, chances are much of the video content there is in an MPEG format. The problem is that MPEG doesn't play nicely with popular parallel-processing platforms like Hadoop or the Message Passing Interface (MPI). Because MPEG relies on inter-frame compression, a compressed file cannot simply be split at arbitrary byte boundaries and decoded in parallel, which is precisely how those platforms want to divide up their work.
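To make the splitting problem concrete, here is a toy sketch (not a real MPEG parser; the frame sequence and indices are invented for illustration) of why a decoder dropped at an arbitrary split point must back up to the nearest keyframe:

```python
# Toy illustration: in MPEG-style inter-frame compression, P- and B-frames
# reference earlier frames, so decoding can only begin at an I-frame
# (keyframe). A splitter that cuts on byte boundaries, the way HDFS cuts
# files into blocks, will usually land mid-GOP.

def usable_split_point(frame_types, split_index):
    """Given a list of frame types ('I', 'P', 'B') and a proposed split
    index, return the index of the nearest preceding I-frame, i.e. the
    closest point where a decoder could actually start."""
    for i in range(split_index, -1, -1):
        if frame_types[i] == "I":
            return i
    raise ValueError("no keyframe before split point")

# A typical group of pictures (GOP): one I-frame followed by P/B frames.
gop_stream = ["I", "B", "B", "P", "B", "B", "P", "I", "B", "B", "P"]

# A naive splitter might cut at frame 5 or frame 9, but decoding can
# only begin at frame 0 or frame 7.
print(usable_split_point(gop_stream, 5))  # 0
print(usable_split_point(gop_stream, 9))  # 7
```

Real splittable formats solve this by recording sync points (as Hadoop's own sequence files do), which is exactly what raw MPEG files don't expose to a generic byte-oriented splitter.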

There are workarounds, luckily. One example comes from Pivotal, whose Data Science team developed a successful prototype of a distributed video transcoding system on Hadoop. According to Pivotal, it can transcode gigabytes of hour-long MPEG-2 video into a Hadoop sequence file of image frames in only minutes, and can do so in a virtualized environment.
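Pivotal hasn't published this code, but the map-style pattern it describes can be sketched in a few lines: chop the video into independently decodable chunks, transcode each chunk in parallel, and gather (chunk id, frames) pairs, much as a Hadoop job emits key-value records into a sequence file. The `transcode_chunk` stub below is a stand-in of my own; a real system would invoke ffmpeg or a codec library on each chunk.

```python
# Minimal sketch of map-style distributed transcoding. Each chunk is an
# independently decodable slice of the source video (e.g. starting on a
# keyframe); workers transcode chunks in parallel and the results are
# collected as (chunk_id, frames) pairs.

from concurrent.futures import ThreadPoolExecutor

def transcode_chunk(chunk):
    chunk_id, data = chunk
    # Stand-in for real decoding: pretend each byte yields one frame.
    frames = [f"frame-{chunk_id}-{i}" for i in range(len(data))]
    return chunk_id, frames

def parallel_transcode(chunks, workers=4):
    # In a real deployment this fan-out would be a Hadoop map phase
    # spread across machines, not threads in one process.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(transcode_chunk, chunks))

chunks = [(0, b"abc"), (1, b"de"), (2, b"fghi")]
result = parallel_transcode(chunks)
print(sum(len(v) for v in result.values()))  # 9 "frames" in total
```

The design point is that once chunks are keyed by position, the downstream frame-analysis jobs can run on the output without ever touching the original compressed file.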


