
Tackling the Risks of Data Sprawl


Data tiering works much like organizing a closet: the clothes you reach for every day belong at the top of your drawer where you can quickly access them. On the other end of the spectrum, your more nostalgic favorites might not qualify for the drawer anymore, since you’re not reaching for them on a regular basis. That ugly Christmas sweater you only wear once a year is well placed in a box in the attic.
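To make the analogy concrete, here is a minimal Python sketch of access-based tiering. The 30- and 180-day thresholds are illustrative assumptions, not recommendations from any particular vendor:

import os
import time

# Thresholds are illustrative assumptions, not figures from this article.
HOT_DAYS = 30      # touched within a month -> fast primary storage
WARM_DAYS = 180    # touched within six months -> cheaper nearline storage

def tier_for(path, now=None):
    """Suggest a storage tier based on a file's last access time."""
    now = now or time.time()
    age_days = (now - os.stat(path).st_atime) / 86400
    if age_days <= HOT_DAYS:
        return "hot"   # the top of the drawer
    if age_days <= WARM_DAYS:
        return "warm"
    return "cold"      # the box in the attic

for name in os.listdir("."):
    if os.path.isfile(name):
        print(name, "->", tier_for(name))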

From a networking perspective, centralized data is the starting point; from there, companies should build layered security: endpoint protection backed by the latest generation of firewalls that can detect anomalies and enforce policies. Security monitoring is also key, as is leveraging encryption and setting policies for who can access which files and data. (Most executives are guilty of forgetting to VPN in at the coffee shop, so it’s best to remove the risk altogether!) The most strategic approach combines user access policies with firewall rules, so teams can containerize their endpoints and control which addresses can reach this data.
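As a rough illustration, the sketch below models an access policy that denies anything arriving from outside the trusted (VPN) address space; the roles, share names, and network ranges are hypothetical:

import ipaddress

# A hypothetical role-to-share policy; real deployments enforce this at the
# firewall or identity provider, but the decision logic looks much the same.
ACCESS_POLICY = {
    "finance":     {"finance-share"},
    "engineering": {"eng-share", "build-artifacts"},
    "executive":   {"finance-share", "board-docs"},
}
TRUSTED_NETWORKS = ["10.0.0.0/8"]  # assumed corporate VPN address space

def may_access(role, share, source_ip):
    """Allow access only from trusted (VPN) networks and permitted roles."""
    addr = ipaddress.ip_address(source_ip)
    on_vpn = any(addr in ipaddress.ip_network(net) for net in TRUSTED_NETWORKS)
    return on_vpn and share in ACCESS_POLICY.get(role, set())

# The laptop that forgot to VPN in at the coffee shop is denied outright:
print(may_access("executive", "finance-share", "203.0.113.7"))  # False
print(may_access("executive", "finance-share", "10.0.12.34"))   # True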

Network monitoring is also a key component of a secure network. Automating device inventories is the starting point, ensuring you know which devices are really on your network. Adding a HIDS (host-based intrusion detection system) helps with file integrity monitoring and rootkit and malware detection. Include a LIDS (log-based intrusion detection system) to automatically sift through the log files created by network devices and servers. These systems should also include self-healing capabilities that take action when unwanted behaviors are identified.
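Here is a bare-bones example of the log-sifting, self-healing idea. The log path, pattern, threshold, and blocking action are assumptions for a typical Linux host, not a production IDS:

import re
import subprocess
from collections import Counter

AUTH_LOG = "/var/log/auth.log"   # assumed location of the SSH auth log
FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5                    # illustrative cutoff, not a best practice

def scan_and_block():
    """Count failed SSH logins per source IP and block repeat offenders."""
    failures = Counter()
    with open(AUTH_LOG) as log:
        for line in log:
            match = FAILED.search(line)
            if match:
                failures[match.group(1)] += 1

    for ip, count in failures.items():
        if count >= THRESHOLD:
            # Self-healing action: drop further traffic from the offender.
            subprocess.run(
                ["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"],
                check=True,
            )
            print(f"blocked {ip} after {count} failed logins")

scan_and_block()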

Network microsegmentation, arguably the most granular approach to segmenting a network, is also extremely effective. Unlike network segmentation and application segmentation, microsegmentation divides the environment down to individual servers and applications so each can be protected separately. It applies more barriers and safeguards throughout the environment, allowing for easier damage isolation in the event of an attack and a smoother recovery. Instead of centering on north-south external traffic (attackers moving in and out of a network), microsegmentation focuses on internal east-west traffic (attackers moving within a network). This approach, most important for organizations with sensitive workloads subject to strict regulatory requirements, adds visibility into traffic even within the same subnet. Instead of deploying firewall rules only to a particular IP or network, security policies apply to the virtual machine itself and enable intra-subnet traffic filtering. As a workload migrates, that security follows it throughout the entire application lifecycle.
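The sketch below illustrates the core idea: policy attached to workloads rather than addresses, so the allow/deny decision survives a migration. The workload names, tiers, and ports are invented for illustration:

from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    name: str
    tier: str  # e.g. "web", "app", "db"

# Policy keyed by (source tier, destination tier, port). Everything else is
# denied by default, even between machines on the same subnet.
ALLOWED = {
    ("web", "app", 8443),
    ("app", "db", 5432),
}

def allow(src, dst, port):
    return (src.tier, dst.tier, port) in ALLOWED

web = Workload("web-01", "web")
app = Workload("app-03", "app")
db = Workload("db-01", "db")

print(allow(web, db, 5432))  # False: web may not reach the database directly
print(allow(app, db, 5432))  # True: the app tier may

# Because the policy is attached to the workload object rather than its IP,
# migrating app-03 to another host or subnet does not change the answer.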

When it comes to preventing breaches, the companies that are doing it right are consolidating data into the cloud and investing in edge-based security devices to protect remote sites. They’re also using single sign-on and enforcing password policies. The companies that are doing it even better are moving to controlled devices, such as remote desktops, so one bout of foul play won’t infect the whole network. And the companies that are on top of their game have set up network monitoring systems and budgeted for consistent network upgrades. With technology changing so rapidly, companies need to keep investing to stay current.

We’ve worked with several companies that have unfortunately faced ransomware attacks during the pandemic, and the common denominator risks include a lack of good endpoint protection, solid firewalls, network monitoring, or strong password policies. A critical note is that no one piece of this security puzzle will keep breaches at bay; only the multilayered approach of a thoughtful data protection strategy will do the trick. And even then, cyberattacks are not entirely preventable. Ensuring the recoverability of data, applications, and critical systems is the best way to hedge against a crippling breach.

The 3-2-1-1-0 rule serves as a good guide: 3 copies of data across 2 different media, 1 of which is offsite and 1 of which is immutable or air-gapped, with 0 errors after backup recoverability verification. By applying the 3-2-1-1-0 principle and looping in proper data tiering and microsegmentation best practices, businesses can stay out of the headlines and smooth out their processes.
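As a quick self-check, here is a small sketch that tests a backup plan against each of the five conditions; the plan structure and field names are assumptions for illustration:

def satisfies_32110(copies):
    """Check a list of backup copies against the 3-2-1-1-0 rule."""
    enough_copies = len(copies) >= 3
    two_media = len({c["media"] for c in copies}) >= 2
    one_offsite = any(c["offsite"] for c in copies)
    one_immutable = any(c["immutable"] or c["air_gapped"] for c in copies)
    zero_errors = all(c["verified_ok"] for c in copies)
    return all([enough_copies, two_media, one_offsite,
                one_immutable, zero_errors])

plan = [
    {"media": "disk", "offsite": False, "immutable": False,
     "air_gapped": False, "verified_ok": True},
    {"media": "object-storage", "offsite": True, "immutable": True,
     "air_gapped": False, "verified_ok": True},
    {"media": "tape", "offsite": True, "immutable": False,
     "air_gapped": True, "verified_ok": True},
]
print(satisfies_32110(plan))  # True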


