Taking the Operational Data Shortcut
with Analytics Query Acceleration

By: Nick Jewell

According to McKinsey, all companies will be using data to optimize decision-making for their employees by 2025, powering workflows and improving delivery. Economist Impact found that 90 percent of businesses believe that adopting a data-driven approach to decisions will be a strategic imperative for them in the future.

Instead of remaining the preserve of leadership teams and data scientists, however, this approach will span everyone across the organization. Whatever role someone works in, data can and should be applied to help them work more efficiently and improve the quality of decisions. The challenge here is how to turn this from a great idea on paper into practical and useful processes that matter to individuals.

To make this happen, data must be made useful where it is needed, and people have to be supported with the right processes for their work. That demands modernizing how companies view data and how they deliver analytics to the users who need those results, in a form that suits their needs. It means accelerating how users query company data and apply the results in their work, so they become more effective over time.

This will force some changes in how people work, but it will create huge opportunities for those who get it right. Rather than only looking at “what” is taking place, they can start asking, “what if?”

Understanding your organization’s needs

One of the biggest challenges is that ‘data’ is a loaded term, meaning different things to different people. While it is right to argue that data should be used more effectively across the business, the actual process for delivering on this will differ depending on someone’s role, the industry they work in, and the applications and data already in place. That data will come in multiple forms and formats, from unstructured to structured, and from simple time series to complex and interconnected.

Traditionally, companies try to address this by setting up data pipelines to handle the connectivity to business applications. These pipelines control the flow of data from applications into the environments used to store it. Each pipeline will cleanse, organize, and present that data as a final product to those who will consume it. This vision of “data in, value out” is a great one, but it is exceedingly difficult to achieve in practice.
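
To make those stages concrete, here is a minimal sketch of a “cleanse, organize, and present” step in Python. The field names, data, and rules are hypothetical illustrations, not any particular vendor’s pipeline:

from datetime import datetime

raw_orders = [
    {"order_id": "1001", "amount": "250.00", "region": "emea", "ts": "2024-03-01T09:15:00"},
    {"order_id": "1002", "amount": None, "region": "EMEA", "ts": "2024-03-01T09:20:00"},
    {"order_id": "1003", "amount": "99.50", "region": "Amer", "ts": "2024-03-01T10:05:00"},
]

def cleanse(record):
    # Drop incomplete rows and normalize types and labels.
    if record["amount"] is None:
        return None
    return {
        "order_id": record["order_id"],
        "amount": float(record["amount"]),
        "region": record["region"].upper(),
        "ts": datetime.fromisoformat(record["ts"]),
    }

def present(records):
    # Aggregate cleansed rows into the "final product" a dashboard consumes.
    totals = {}
    for r in records:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

cleansed = [c for c in (cleanse(r) for r in raw_orders) if c is not None]
print(present(cleansed))   # {'EMEA': 250.0, 'AMER': 99.5}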

Just ask anyone who works with pipelines for operational data—for instance, the data covering financial performance, supply chain, human resources, and more. They will be the first to tell you that organizations routinely fail to deliver value from their analytics projects, or take months to produce even simple dashboards. While the vision is that data will improve performance and profitability and help everyone across the business, the reality can be disappointing.

The reason for this is that data is not simple. Just as John Donne wrote “No man is an island” in 1624, no data exists by itself. That’s particularly true of operational data. For example, financial data lives across multiple enterprise resource planning (ERP) applications, business applications, finance tools, and spreadsheets. Normally, it must be manually stitched together for a complete view of the business. Similarly, looking only at overall business metrics and performance indicators can limit how operational staff explore transaction-level details. Once data has been transformed and aggregated to provide one answer, it is hard to separate it back out again to identify a critical trend, verify the accuracy of metrics, or perform root cause analysis.
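
A tiny, hypothetical example makes the point. Once transactions from an ERP system and a spreadsheet have been stitched together and rolled up into a single metric, the detail needed for root cause analysis can no longer be recovered from the metric alone (the rows and figures below are invented for illustration):

erp_rows = [("2024-03", "INV-17", 1200.0), ("2024-03", "INV-18", -300.0)]
spreadsheet_rows = [("2024-03", "ADJ-02", 450.0)]

stitched = erp_rows + spreadsheet_rows              # the manual "stitching" step
monthly_revenue = sum(amount for _, _, amount in stitched)
print(monthly_revenue)                              # 1350.0

# The aggregate answers "what happened?", but explaining why (for example,
# the negative INV-18 line) means going back to the transaction-level rows;
# they cannot be reconstructed from the 1350.0 figure itself.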

Let’s look at an example: telecommunications companies deal with huge amounts of data on customer activity—who called whom, when they called, how much data they used, for how long, and what charge should be applied. Each of these transactions then has to be reconciled against the customer’s account to show how much credit they have used and what they have left. These companies also look at these customer records in aggregate to see network performance and capacity, any quality-of-service issues, and where maintenance might be needed.
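
As a rough sketch of those two views—per-customer reconciliation and network-level aggregation built from the same records—consider the following, where the call detail record fields, charges, and cell names are all hypothetical:

from collections import defaultdict

cdrs = [
    {"caller": "A", "callee": "B", "cell": "cell-7", "minutes": 12, "charge": 0.36},
    {"caller": "A", "callee": "C", "cell": "cell-7", "minutes": 3, "charge": 0.09},
    {"caller": "B", "callee": "A", "cell": "cell-2", "minutes": 45, "charge": 1.35},
]
opening_credit = {"A": 5.00, "B": 10.00}

# 1) Reconcile each transaction against the customer's account.
remaining_credit = dict(opening_credit)
for cdr in cdrs:
    remaining_credit[cdr["caller"]] -= cdr["charge"]

# 2) Aggregate the same records for a network capacity view.
minutes_per_cell = defaultdict(int)
for cdr in cdrs:
    minutes_per_cell[cdr["cell"]] += cdr["minutes"]

print(remaining_credit)         # roughly {'A': 4.55, 'B': 8.65}
print(dict(minutes_per_cell))   # {'cell-7': 15, 'cell-2': 45}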

From all this, there are dozens of actions that might be taken. Every month, bills will be sent to customers, while further analysis might be applied to reduce churn or upsell more services. Internally, network maintenance and management can be automated to prevent issues, while orders for fixes can be scheduled. Companies will have reporting that tells them what took place, but these efforts don’t provide operational insight on a day-to-day, hour-to-hour, or minute-to-minute basis, which is what operational analytics requires.

Building the right approach to data

Data pipelines were designed to take the data coming from applications and turn that information into business value. Modern applications can create highly ordered data on their activities. Imagine how a streaming service tracks a user watching a TV show—one data set tracks their viewing activity, while another records each ‘like’ or addition to a viewing list. All this data is then processed through data pipelines using cloud technology. Because the data is extremely well-ordered and managed to begin with, millions of actions taking place at the same time pose no problem. Billions upon billions of records can be analyzed in no time.
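
For a sense of what that well-ordered event data might look like, here is a hypothetical sketch—every record already structured, keyed, and timestamped, so the downstream pipeline can partition the streams and aggregate them independently at any scale (the event names and fields are invented for illustration):

playback_events = [
    {"user": "u1", "show": "s9", "event": "play", "ts": 1},
    {"user": "u1", "show": "s9", "event": "pause", "ts": 2},
]
engagement_events = [
    {"user": "u1", "show": "s9", "event": "like", "ts": 3},
    {"user": "u2", "show": "s9", "event": "add_to_list", "ts": 1},
]

# Because every record arrives structured and keyed, the pipeline can
# partition by user or show and roll up each stream independently,
# e.g. counting 'like' events per show:
likes_per_show = {}
for e in engagement_events:
    if e["event"] == "like":
        likes_per_show[e["show"]] = likes_per_show.get(e["show"], 0) + 1
print(likes_per_show)   # {'s9': 1}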


