Mapping Network Security Resilience To The NIST Cybersecurity Framework
On May 11, 2017, President Trump issued Executive Order 13800. Under this order, all government agency heads are accountable for implementing solutions and managing the risks associated with threats to the nation's cybersecurity, and must take immediate action to review cybersecurity protocols and upgrade each department's IT infrastructure. The order also mandates the use of the NIST Framework for Improving Critical Infrastructure Cybersecurity within government agencies.
The NIST Framework for Improving Critical Infrastructure Cybersecurity provides a common language for understanding, managing, and expressing cybersecurity risk. This framework is built upon concepts to organize information, enable risk management decisions, address threats, and improve through lessons learned.
These concepts are organized into five core functions: Identify, Protect, Detect, Respond, and Recover.
A new whitepaper from Ixia (a Keysight business) called Deploying a Layered Visibility and Cybersecurity Architecture provides an overview of how to combine a visibility architecture with a security architecture to address the NIST framework. The following excerpt gives a short overview of how to accomplish this; the full discussion is contained within the whitepaper.
Let's review these foundational concepts.
The Identify function focuses on who and what could be affected by various scenarios, governance mandates, and agency risk. The purpose is to gain a better understanding of agency cybersecurity risks associated with specific systems, assets, data, and capabilities. A typical security architecture would focus only on security processes and continuous analysis of relevant security patches and their priority. However, there are two additional fundamental activities that need to be addressed:
Continuous monitoring of critical applications, operating systems, and the network
Comprehensive monitoring of public and private cloud resources
As the NIST cybersecurity framework demonstrates, continuous monitoring is important to network security, due both to the visibility (i.e., insight) it provides into the network and to the resulting increase in productivity. This includes planning for where and how to collect critical sets of data. While the where (edge, core, cloud, etc.) is network dependent, the how is often universal and consists of using taps, network packet brokers (NPBs), and various security and monitoring tools.
The Protect function outlines how safeguards for the critical infrastructure need to be developed and implemented, supporting the ability to not only limit cybersecurity events, but also activities to contain them. For instance, a typical security architecture would focus on access controls, protective technology (firewalls), safeguarding data, processes, and procedures. Besides the common wisdom just mentioned, there are additional activities that government agencies should consider to increase their defensive capabilities.
These activities include:
Deploy threat intelligence gateways to eliminate up to 30 percent of threats immediately
Incorporate the use of security device testing appliances to validate the effectiveness and performance of your security devices before you deploy them in the live network
Implement cyber range training activities to strengthen the skills of your threat response teams
What is not often considered in typical architectures are threat intelligence gateways that drop traffic to and from known bad IP addresses. These devices reduce the analysis load on security tools by eliminating a substantial portion of the threat upfront. By adding a threat intelligence gateway, which uses automated updates of known bad IP addresses, you can significantly reduce both incoming and outgoing traffic to bad actors. A threat intelligence gateway has been shown to reduce the amount of data that needs to be screened by an IPS by up to 30 percent.
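The core idea behind a threat intelligence gateway can be sketched in a few lines: drop any flow whose source or destination appears on a known-bad IP list, and forward the rest. This is a minimal illustration only; the IP addresses are placeholders, and a real gateway refreshes its blocklist automatically from a threat intelligence feed.

```python
# Minimal sketch of the threat-intelligence-gateway concept: drop flows
# touching known-bad IPs before they ever reach the IPS or other tools.
# The addresses below are documentation-range placeholders, not a real feed.

BAD_IPS = {"203.0.113.7", "198.51.100.23"}  # refreshed from a feed in practice

def should_drop(src_ip: str, dst_ip: str) -> bool:
    """Return True when either endpoint is on the blocklist."""
    return src_ip in BAD_IPS or dst_ip in BAD_IPS

flows = [
    ("10.0.0.5", "203.0.113.7"),    # outbound to a bad actor -> dropped
    ("10.0.0.5", "93.184.216.34"),  # ordinary flow -> forwarded to tools
]
forwarded = [f for f in flows if not should_drop(*f)]
```

Because the check happens upfront, only the second flow is passed on for deeper inspection, which is how the gateway offloads screening work from the IPS.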
Another overlooked security defense is the use of security device testers. These devices provide proactive testing of security tools (firewalls, IPS, web application firewalls, etc.) in a lab environment to ensure that the tools protect and defend the network as advertised. Another activity is cyber threat training, which gives security personnel the ability to experience real attacks (in a controlled environment) so that they can better recognize the symptoms and precursors on a live network. With this knowledge, security personnel not only respond better to attacks, but they can help the systems recover faster to reduce liabilities.
The Detect function focuses on continuous end-to-end monitoring to detect anomalies. This component is concerned with identification of actual issues, intrusions, and breaches. This is where a typical security architecture would focus on deploying some set of inline tools (IPS, WAF, etc.) or out-of-band tools (SIEM, DLP, IDS, etc.). In this context, inline refers to being directly in the path of live traffic, i.e., traffic must pass through the device before it continues on. Out-of-band means data has been copied and sent to devices that are not in the path of the live network traffic, and thus do not delay or impede the flow of data across the network.
There are several activities that can be implemented to strengthen the Detect functional area:
Deploy inline security tools with fail-over technology (bypass & heartbeats)
Deploy inline packet brokers for improved fail-over scenarios (hot standby, load sharing)
Deploy SSL decryption for improved threat visibility
Focus on data loss prevention by filtering monitoring data to out-of-band security tools (SIEM, DLP, IDS, forensic analysis) for faster analysis
Proactively look for indicators of compromise using application intelligence
Capture suspicious data with packet captures for either immediate or delayed analysis
Inline security tools are often deployed near the network perimeter, where they can create a single point of failure, even if the tools have a built-in bypass capability. For instance, what happens when you want to remove a device altogether? An external bypass with heartbeat capability increases network availability and reliability with microsecond automatic failover and fail-back technology.
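The heartbeat mechanism behind an external bypass can be modeled simply: the bypass switch sends periodic heartbeat packets through the inline tool, and if several in a row go unanswered, traffic is switched around the tool (fail-open) until heartbeats return. The class below is a toy model of that logic, assuming a miss threshold of three; real bypass switches operate at microsecond timescales in hardware.

```python
class BypassSwitch:
    """Toy model of an external bypass with heartbeats: after enough
    consecutive missed heartbeats, traffic bypasses the inline tool;
    when heartbeats resume, traffic fails back through the tool."""

    def __init__(self, miss_threshold: int = 3):
        self.miss_threshold = miss_threshold
        self.missed = 0
        self.bypassing = False

    def heartbeat(self, tool_responded: bool) -> None:
        if tool_responded:
            self.missed = 0
            self.bypassing = False       # fail-back: tool is healthy again
        else:
            self.missed += 1
            if self.missed >= self.miss_threshold:
                self.bypassing = True    # fail-open: keep the link up

switch = BypassSwitch()
for _ in range(3):
    switch.heartbeat(tool_responded=False)  # tool goes silent -> bypass engages
```

Note that the link itself never goes down in either state; only the path through the security tool changes, which is the availability benefit the paragraph above describes.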
A new threat is the inclusion of malware within SSL/TLS encrypted data payloads. This data can be unencrypted by a network packet broker (NPB), or a purpose-built device, and then passed on to security tools (like an IPS or WAF) for further examination. In addition, the NPB can be used for serial data chaining to pass suspect data to multiple security tools for analysis. Data that is safe is re-encrypted and sent back to the bypass switch to traverse downstream.
This monitoring data can also be filtered by application type. For instance, you may want to analyze only Facebook traffic, or exclude Netflix traffic from analysis; the filtering process forwards only the traffic you care about. In addition, the application intelligence gateway can generate NetFlow records plus supplemental metadata (like geolocation, browser type, device type, etc.) that can be used in further analysis. Data masking, regex searching, and packet capture (PCAP) capabilities are also provided to help analyze specific data natively within the solution or with third-party tools.
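Application-level filtering amounts to classifying each monitored flow and forwarding only the applications of interest to the out-of-band tools. The sketch below is a hypothetical illustration of that step; the record fields and the "exported" tag stand in for NetFlow-style metadata generation and are not any vendor's actual schema.

```python
# Hypothetical application-aware filter for monitoring data: drop
# high-volume, low-risk applications (e.g., Netflix) before forwarding
# records to out-of-band tools, and tag what gets exported.

records = [
    {"app": "facebook", "bytes": 1200,    "src": "10.0.0.8"},
    {"app": "netflix",  "bytes": 900_000, "src": "10.0.0.9"},
    {"app": "dns",      "bytes": 80,      "src": "10.0.0.8"},
]

EXCLUDE = {"netflix"}  # not worth the analysis load on the SIEM/DLP/IDS

def filter_and_tag(recs):
    for rec in recs:
        if rec["app"] in EXCLUDE:
            continue                      # filtered out of the monitoring feed
        yield {**rec, "exported": True}   # stand-in for NetFlow-style export

forwarded = list(filter_and_tag(records))
```

Excluding even one bulk-streaming application this way can remove the bulk of the byte volume the downstream tools would otherwise have to process.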
The Respond function is about taking action to mitigate a threat and keep it from spreading. Typical activities include taking equipment and/or the network offline, turning off features temporarily, actively debugging the problem or attack, and coordinating and implementing responses and next actions (informing senior management within the agency, law enforcement, etc.). Some IT departments also use specific tools like a SIEM or DLP to speed up threat analysis.
There are additional activities that can be conducted such as:
Deploy automation with representational state transfer (REST) interfaces to packet brokers to adjust packet capture filters
Deploy automation between SIEM, management systems, and NGFW to execute automated scripts that add rules to block attacking source IP addresses
Continually update threat intelligence gateways to reduce/prevent the exfiltration of data
One of the most powerful, but often overlooked, features for data center automation is automating the network monitoring switch. In this case, automation means packet brokers can initiate functions (e.g., apply filters, add connections to more tools, etc.) in response to external commands. This data center automation is akin to software defined network (SDN) capabilities, which allow a switch/controller to make real-time adjustments in response to events or problems within the data network. However, the source of the command doesn’t have to be an SDN controller. It could be a network management system (NMS), provisioning system, SIEM, or some other management tool in your network.
The final function, Recover, centers on the remediation and repair of the data network and its components. This phase is the main area for repairing and restoring any network capabilities that were damaged, with additional emphasis on preventing the issue in the future. Typical architecture activities include reprogramming component software, changing passwords, applying patches, and turning off features permanently.
The following additional activities should be considered as well:
Use security device testers to run “what if” scenarios to validate the efficacy of new security device settings
Use proactive monitoring to test performance of new features
Once an intrusion or breach has occurred and a solution has been defined, that solution can be tested with a security device testing solution. The test solution can stress the equipment and network to its breaking point to see whether the fix is a long-term solution. This data is critical for network and device dimensioning. As part of the test effort, you can see the real performance impact of decisions with various "what if" simulations (like SSL key and cipher impacts, latency due to SSL, how application intelligence would look with different geographies and traffic mixes, how distributed denial of service (DDoS) mitigation would affect your network performance, etc.). Running these simulations is important because you cannot simply cut services to your customers to correct a problem in the long term; you need to restore services while ensuring the integrity of your new network configuration.
If you want more information on this topic or network visibility solutions, check out the Ixia whitepaper Deploying a Layered Visibility and Cybersecurity Architecture and the ebook The Definitive Guide to Network Visibility Use Cases.
Author: Keith Bromley is a senior product marketing manager for Keysight Technologies with more than 20 years of industry experience in marketing and engineering. Keith is responsible for marketing activities for Keysight's network monitoring switch solutions. As a spokesperson for the industry, Keith is a subject matter expert on network monitoring, management systems, unified communications, IP telephony, SIP, wireless and wireline infrastructure. Keith joined Ixia in 2013 and has written many industry whitepapers covering topics on network monitoring, network visibility, IP telephony drivers, SIP, unified communications, as well as discussions around ROI and TCO for IP solutions. Prior to Keysight, Keith worked for several national and international Hi-Tech companies including NEC, ShoreTel, DSC, Metro-Optix, Cisco Systems and Ericsson, for whom he was industry liaison to several technical standards bodies. He holds a Bachelor of Science in Electrical Engineering.