December 4, 2012 at 8:49 am #8057
As the InfoSec admin responsible here, I have to scan 30,000+ systems (laptops, desktops, servers, devices). I am using the Nessus scanner.
The challenge I'm finding is exporting the CSV and then weeding out the false positives. Then I need to drive remediation by forwarding the vulnerabilities to the technical teams. It is also very difficult to keep track of vulnerabilities in Excel.
I'm looking for best practices and an automated process for vulnerability tracking, so I can work on proactive solutions and identify the count of vulnerabilities reported on each system through every scan cycle.
December 4, 2012 at 2:13 pm #51078 cd1zz (Participant)
One approach might be to consider using NeXpose instead. The reason I say this is that I’ve found, if you don’t have a “trained eye”, it’s hard to spot which vulns have public exploits. With that many systems, your first step needs to be knocking off the low-hanging fruit. NeXpose does a decent job of telling you at least when a vuln has a corresponding exploit in the framework. It won’t tell you when there are other exploits that are not in the framework, but it’s going to help you prioritize with a large number of systems.
Another recommendation is to look for the largest category of issues. For example, if you have 20,000 HTTPS websites, you’re likely going to see 20,000 “invalid certificate” issues, which you can simply filter out. (Scanning the IP instead of the hostname will cause this, for example.)
As for automating the process, you could just script it once you have your org’s typical false positives filtered out, and send the results to a ticketing system.
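As a rough sketch of that hand-off, assuming a hypothetical REST ticketing endpoint (the URL, authentication, and payload schema here are made up; swap in whatever your ticketing system actually expects):

```python
import json
import urllib.request

# Hypothetical ticketing endpoint -- substitute your own system's URL,
# auth headers, and payload schema.
TICKET_API = "https://tickets.example.com/api/issues"

def build_ticket(vuln_name, host, cvss, team):
    """Assemble a remediation ticket payload for one confirmed finding."""
    return {
        "title": "[Vuln] %s on %s" % (vuln_name, host),
        "body": "CVSS %s finding from the latest scan cycle; please remediate." % cvss,
        "assignee_group": team,
    }

def open_ticket(ticket):
    """POST the ticket to the (hypothetical) ticketing API and return its ID."""
    req = urllib.request.Request(
        TICKET_API,
        data=json.dumps(ticket).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]
```

The point is just that once the false positives are scripted away, the remaining findings are structured data, and opening one ticket per (vuln, host, team) is a loop, not a manual job.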
December 4, 2012 at 2:32 pm #51079 MaXe (Participant)
This is not a technical response, but in case you need full CVSS scores, some scanners such as Qualys include those as well. (Qualys is similar to Nessus, except it’s in the cloud, but they have network appliances you can deploy too, both physical and virtual.)
There really isn’t a good way of removing false positives in Excel, except with some regex when you see a pattern, or perhaps by using another tool that presents the results in a better way, as cd1zz suggested. I haven’t tried NeXpose on enterprise networks like you describe, so I can’t really compare yet ;D
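The regex approach can be as simple as a list of patterns for findings your org has already confirmed as noise (the patterns below are illustrative assumptions, not real plugin names):

```python
import re

# Hypothetical patterns for finding names your org has confirmed
# as false positives -- build this list up over scan cycles.
FALSE_POSITIVE_PATTERNS = [
    re.compile(r"SSL Certificate .* Hostname Mismatch"),
    re.compile(r"Apache .* backport"),
]

def is_known_false_positive(finding_name):
    """True if the finding name matches any known-noise pattern."""
    return any(p.search(finding_name) for p in FALSE_POSITIVE_PATTERNS)
```

Run each exported row's finding name through this before it ever reaches the spreadsheet, and the Excel side gets a lot smaller.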
December 4, 2012 at 4:01 pm #51080 dynamik (Participant)
I was in a similar situation a couple of years ago, and budget constraints forced me to roll my own (crude) tool to handle this. I had a CSV of false positives, and another that contained ‘acceptable’ vulnerabilities (i.e. those that could not be remediated for whatever reason). I wrote a Python script that took a list of exported CSVs, filtered out vulnerabilities (matching on vulnerability name and IP) that appeared in the FP and acceptable CSVs, and then created a new CSV with the remaining vulnerabilities sorted by CVSS. It wasn’t ideal, but the few hours spent creating the script paid for themselves immediately.
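A minimal sketch of that kind of filter, assuming the exported CSVs share typical Nessus-style columns (the “Name”, “IP Address”, and “CVSS” headers, and the file layout generally, are assumptions here):

```python
import csv

def load_ignore_set(path):
    """Build a set of (vuln name, IP) pairs to filter out of scan results."""
    with open(path, newline="") as f:
        return {(row["Name"], row["IP Address"]) for row in csv.DictReader(f)}

def filter_scan(scan_csv, fp_csv, accepted_csv, out_csv):
    """Drop known false positives and accepted risks, sort the rest by CVSS."""
    ignore = load_ignore_set(fp_csv) | load_ignore_set(accepted_csv)
    with open(scan_csv, newline="") as f:
        reader = csv.DictReader(f)
        rows = [r for r in reader if (r["Name"], r["IP Address"]) not in ignore]
        fieldnames = reader.fieldnames
    # Highest CVSS first, so the worst findings land at the top of the report.
    rows.sort(key=lambda r: float(r["CVSS"] or 0), reverse=True)
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
```

Matching on (name, IP) pairs rather than name alone means a finding suppressed on one host still shows up if it appears on a new one.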
If you have the money, you should probably see if the Nessus SecurityCenter (I’m not familiar with the functionality personally) or another product has better vulnerability management capabilities.
I’m not sure whether this has always been in Nessus, but I know that it now also reports whether a public exploit exists, as well as which framework it is in (Core, CANVAS, and/or Metasploit).
Your remediation recommendations will often be based on common sense, but you’ll get better at it as you gain experience. Especially since you’re working in a single environment and not consulting with many clients. You’ll quickly develop a feel for what is “normal” on your network.
Try not to overwhelm your IT team with recommendations. They’re likely understaffed and overloaded as well, and a massive list of vulnerabilities to remediate will likely get ignored. Do what cd1zz mentioned and focus on the low-hanging fruit at the outset. If you’re just getting started, you’ll likely find areas where fixing one core issue will have a significant impact (i.e. fully patching a system that fell through the cracks and is missing years of updates). If you’re using PHP, Java, Adobe, etc., you’ll also likely find that a single update will remove a lot of high/critical vulnerabilities from your list.
Other recommendations may strongly depend on your infrastructure. For example, if there are critical SMB vulnerabilities in a DMZ that only has HTTPS publicly available and administrators access that environment via an RDP jump box, those systems are probably at a lower risk of exploitation than if they were on a user network where anyone could download malware that tears through the other systems on the network. Obviously, an attacker may be able to do something through a vulnerable web app, but the DMZ systems would typically be at less risk of exploitation under normal circumstances.
Note, I’m not saying you should disregard those types of vulnerabilities entirely, just that there may be mitigating circumstances that lower their priority, and you can in turn focus on others that are a higher priority. This will often be a judgment call on your part; I don’t know of any guide or framework that assists with more detailed prioritization like this.
December 4, 2012 at 6:18 pm #51081 tturner (Participant)
I touched on some of these issues in a recent blog post (mostly focused on the vuln mgmt lifecycle and how current products don’t really meet our needs): http://sentinel24.com/blog/vuln-mgmt-lie/
It’s really a shame that the vuln scan vendors missed the boat here. If you purchase additional expensive tools like RedSeal, you can start painting a picture of which vulns actually matter and follow attack paths, but you’re looking at about a $50,000 entry point there. According to Ron Gula, Nessus + PVS + SecurityCenter can do this as well, but nobody helps with integration into work order systems in any meaningful way, which is so critical to remediation workflows.
False positive reduction is hell in and of itself and is often a manual process. One thing that can help here is a documented change management process. If you know you patched XYZ on servers ABC and the report is telling you otherwise, chances are it’s a false positive. One thing that really makes all this hard is the backported patching inherent in many Linux distros, where the Apache issue has been fixed but the banner still reports a vulnerable version. You may have better luck tracking positive change than trying to track reported vulns.
The other thing is understanding the context of reported vulns. There have been instances where critical vulns did not concern me because the systems held no critical information, had no trust relationships to other systems, and were fairly invisible to the public. Little impact there. Starting with an understanding of your assets, what matters, etc. is hugely beneficial.
As ajohnson implies, rolling your own is often the only way to accomplish this. I’ve been debating creating my own tool for a while now to work with common scanner products, work order systems, and other open-source vulnerability and application repositories, but the problem is that what I’ve defined for my own needs is hugely intimidating to take on as a project. I’d be happy to work with someone if they’re feeling up to the task, though.
December 4, 2012 at 6:27 pm #51082 Triban (Participant)
Another idea to add to this already decent list is to separate your scans by priority and criticality of systems. For example, run your server/network scans less frequently. Since those are static devices, there’s no reason to scan them more than once a month (depending on your patch schedule). After systems have been patched/reconfigured, rinse and repeat until the only findings you continue to see are patch-related. Also, if you see the same vulns continually reported even after patching, you may want to check manually whether they are false positives. I’ve seen SQL Server installs cause that type of behavior due to a bad install of an older patch.
As for the desktops/laptops, that’s a different beast. See what your most common vulnerabilities are and try to fix/patch them on a global level. Non-Windows patch vulnerabilities will most likely be issues with 3rd-party software (Java/Adobe) and/or system configurations. You may not see those with Nessus unless you are doing authenticated scans. In my last gig I used multiple applications to monitor the environment: LANguard (only supports 3K IPs), WSUS reporting (patching), Microsoft Baseline Security Analyzer (MBSA; security configurations/patching), and NeXpose. For deploying both MS and 3rd-party patches, we used a KACE appliance, which worked well out of the box and had a good amount of support, both from the vendor and the community, for creating custom scripts to deploy things like Java and Adobe patches.
I guess where I am going with this is that no single tool will solve all your problems. You may need to use multiple tools and also manually test some of the questionable findings. Using authenticated scans will give you a better picture and eliminate a number of false positives as well. Organize the findings by risk and address them accordingly. If defense were easy, everyone would be doing it 😀
December 7, 2012 at 12:48 pm #51083 RoleReversal (Participant)
I’ve not taken a good look myself, as I only ran across them a couple of days ago, but Risk.io may do what you need.
It is a commercial service, but there is a free/limited option, and all new accounts start with a 30-day Pro trial.
Hopefully it solves your issues; either way, I’d be interested in your thoughts and experiences if you do give them a go.
December 7, 2012 at 2:06 pm #51084 tturner (Participant)
Thanks Andrew, it looks interesting. I’m just not sure how crazy I am about shooting all my vulnerabilities up to a cloud service. I’d welcome something like this internally, though.
January 6, 2013 at 3:04 am #51085
Thanks to all… your advice helped me reduce my time and effort…
January 6, 2013 at 2:55 pm #51086 Triban (Participant)
Great man! Glad we could help!