Metadata - we all need it now!

Not so long ago, flow analysis was one of the tools of choice for troubleshooting security or operational problems on networks. Many vendors developed tools that could take these flow records and store them in a database, so that you could get real-time and historical reports.

However, metadata analysis is now seen as the must-have technology for keeping modern networks running both securely and efficiently. Metadata analysis systems typically use network traffic or packets as a data source, captured via SPAN ports, mirror ports or TAPs. The clever part of metadata analysis is data reduction: you take raw network traffic and keep only the interesting pieces of data, like IP addresses, website names or filenames. In some instances, you end up with a 4000:1 compression ratio. For example, if I transfer a 4MB file across the network, I may capture just 1KB of metadata.
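As a rough sketch of what that reduction looks like (the field names here are illustrative, not LANGuardian's actual schema), a capture engine might keep only the descriptive fields of a transaction and discard the raw payload:

```python
def to_metadata(transaction):
    """Reduce a captured file-transfer transaction to a small metadata
    record: keep the human-readable fields, drop the raw payload.
    Field names are illustrative, not an actual product schema."""
    keep = ("timestamp", "src_ip", "dst_ip", "protocol",
            "username", "filename", "bytes")
    return {k: v for k, v in transaction.items() if k in keep}

# A made-up SMB file transfer: 4 MB of payload, a handful of facts.
transfer = {
    "timestamp": "2019-03-04T08:15:00", "src_ip": "10.0.0.5",
    "dst_ip": "10.0.0.9", "protocol": "SMB", "username": "alice",
    "filename": "budget.xlsx",
    "bytes": 4 * 1024 * 1024,                    # 4 MB on the wire
    "payload": b"\x00" * (4 * 1024 * 1024),      # raw content, discarded
}
record = to_metadata(transfer)
ratio = transfer["bytes"] // len(repr(record).encode())  # rough reduction ratio
```

The stored record is a few hundred bytes against megabytes on the wire, which is where ratios on the order of thousands-to-one come from.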

The screenshot below from our own LANGuardian system is a good example of this data reduction. What you see here is the metadata associated with a user accessing files on a network file share. We don’t store all of the data associated with the files, just the human-readable information, which can be used to generate an audit trail of who is accessing what on the network.

Capturing metadata from SSL certificates

Cloud application usage has increased significantly, and as you might expect, most cloud-based services use encryption. This presents challenges when it comes to finding out what users are doing on a network. You can buy firewalls with SSL decryption features, but decryption puts a massive resource load on the firewall: a 10Gb/s firewall may only be able to process 1Gb/s with SSL decryption enabled. You can also use dedicated SSL decryption appliances, but the costs involved may rule them out.

Traditional flow-based tools struggle with SSL traffic; they just see IP addresses and traffic volumes. A metadata analysis engine like LANGuardian can extract some information from encrypted traffic such as HTTPS, and also from encrypted mail, both sent (SMTP) and received (POP3/IMAPS). LANGuardian can dissect the server's SSL certificate (which must always be presented to the client) and extract the server name, reporting access to storage.live.com rather than just to 1.2.3.4, for instance. An example of this is shown below.
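A closely related trick, sketched below, recovers the hostname from the SNI extension of the TLS ClientHello rather than from the server certificate itself; the offsets follow the TLS handshake layout, and a real capture engine would also need TCP reassembly and far more defensive parsing than this minimal example:

```python
import struct

def extract_sni(record):
    """Pull the SNI hostname out of a raw TLS ClientHello record,
    or return None if the record is not a ClientHello with SNI."""
    if len(record) < 5 or record[0] != 0x16:        # 0x16 = handshake record
        return None
    pos = 5
    if record[pos] != 0x01:                         # 0x01 = ClientHello
        return None
    pos += 4                                        # handshake type + length
    pos += 2 + 32                                   # client_version + random
    pos += 1 + record[pos]                          # session_id
    pos += 2 + struct.unpack_from("!H", record, pos)[0]  # cipher_suites
    pos += 1 + record[pos]                          # compression methods
    ext_total = struct.unpack_from("!H", record, pos)[0]
    pos += 2
    end = pos + ext_total
    while pos + 4 <= end:
        ext_type, ext_len = struct.unpack_from("!HH", record, pos)
        pos += 4
        if ext_type == 0x0000:                      # server_name extension
            # list length (2), name type (1), name length (2), hostname
            name_len = struct.unpack_from("!H", record, pos + 3)[0]
            return record[pos + 5 : pos + 5 + name_len].decode("ascii")
        pos += ext_len
    return None

def make_client_hello(host):
    """Build a minimal synthetic ClientHello carrying one SNI entry,
    purely so the parser above has something to chew on."""
    name = host.encode("ascii")
    sni = struct.pack("!HBH", len(name) + 3, 0, len(name)) + name
    ext = struct.pack("!HH", 0x0000, len(sni)) + sni
    body = (b"\x03\x03" + b"\x00" * 32              # version + random
            + b"\x00"                               # empty session_id
            + struct.pack("!H", 2) + b"\x00\x2f"    # one cipher suite
            + b"\x01\x00"                           # null compression only
            + struct.pack("!H", len(ext)) + ext)
    handshake = b"\x01" + len(body).to_bytes(3, "big") + body
    return b"\x16\x03\x01" + struct.pack("!H", len(handshake)) + handshake
```

The point is the same as with the certificate: the hostname travels in the clear during the handshake, so a passive sensor can report the site name without decrypting any payload.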

I’ll find the budget for this somewhere

You know a technology is useful when, after a demo, someone says to you, “I need this. I may not have budgeted for this solution, but I am going to find the funds from somewhere.” I hear this more and more after I show people how metadata can be passively captured from network traffic. Network managers don’t want to reach for Wireshark to troubleshoot every operational and security issue. Wireshark is a fantastic tool, but it can use up a lot of time, and interpreting its results may require specialist network skills.

Continuous metadata capture gives you the equivalent of a CCTV system for your network. It is always on, capturing user and device data, so you can get a real-time view or look back at historical data for more detail about an incident.

Eating your own "dogfood"

Eating your own dogfood is a slang term for a company using its own product. I had an interesting use case for metadata analysis within our corporate headquarters last week. I live about 40 miles from the office, which may not sound like much, but I must navigate a couple of big traffic bottlenecks. If I travel at the wrong time, the journey to work can take over two hours, so I come in early to avoid the worst of the traffic.

One morning last week, I got to work at my usual early time and started on a couple of customer requests. All was fine until a few colleagues arrived and I suddenly heard the cry, “the Internet is slow”. As we all know, the Internet is rarely slow; it is normally a local issue. I logged onto our local LANGuardian and within seconds could see that one of the developers had kicked off a data replication process which was hogging bandwidth on our Internet connection. Seconds to diagnose and seconds to fix. Our next step will be to set up an alert for when traffic goes above a certain level, so that we don’t have to wait for user complaints.
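That threshold alert could be as simple as the sketch below (the 100 Mb/s link size and 80% ceiling are illustrative assumptions, not product defaults):

```python
def bandwidth_alerts(samples_mbps, link_mbps=100.0, threshold=0.8):
    """Return the indices of sample intervals whose utilisation
    exceeds the alert threshold (e.g. 80% of a 100 Mb/s link)."""
    return [i for i, mbps in enumerate(samples_mbps)
            if mbps / link_mbps > threshold]

# e.g. per-minute throughput samples as the replication job kicks in
alerts = bandwidth_alerts([12.0, 45.0, 96.5, 99.0])
```

In practice you would feed this from the same continuously captured metadata, and raise a notification the moment an interval crosses the line instead of waiting for a user to complain.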

It was a great example of how we all need metadata, even on small networks.

Author - Darragh Delaney - Director of Technical Services at NetFort Technologies, now part of Rapid7. Darragh is Cisco CCNA certified and has extensive experience in the IT industry, having previously worked for O2 and Tyco before joining NetFort Technologies in 2005. As Director of Technical Services and Customer Support, he interacts on a daily basis with NetFort/Rapid7 customers and is responsible for the delivery of a high-quality technical and customer support service. http://ie.linkedin.com/in/darraghdelaney
