
Integration of VM Data with Splunk...any positive success stories?

Question asked by void on Sep 30, 2015
Latest reply on Oct 6, 2015 by void

My organization has a pretty significant investment in Splunk, and I am wondering what other users are doing to help visualize the massive amounts of information that can come from scanning thousands of assets in an environment. I realize there is currently an app available in the Splunk add-ons, but I was looking to see whether that really is the best way to do it or whether folks have had more success through other means.

 

Things like...

 

1) Did you create a custom set of dashboards, or did you leverage the 'beta' Qualys app for Splunk? Side note: my TAM provided me a version quite a while ago, and while I didn't look too deeply into it, our Splunk consultant said it wasn't doing the best job of de-duping some of the results. I'm not trying to call it junk or anything; it looked pretty solid at last year's QSC, but maybe someone here can provide better information about it.

 

2) How do you handle de-duping results? Do you use the Qualys host ID? Aggregate by DNS/FQDN? Or just limit the search scope to 7 days, or whatever time frame guarantees that any single asset in your environment has been scanned at least once?
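To clarify what I mean by de-duping on the host ID: the idea I've been toying with is to key on the Qualys host ID plus QID and keep only the most recently observed detection. A rough Python sketch of that logic (the field names `host_id`, `qid`, and `last_found` are just my illustration, not the exact API schema):

```python
# Rough sketch: de-dupe detection records by (Qualys host ID, QID),
# keeping only the most recently observed record per pair.
# Field names are illustrative, not the literal Qualys API schema.
from datetime import datetime

def dedupe_detections(records):
    """Keep the latest record for each (host_id, qid) pair."""
    latest = {}
    for rec in records:
        key = (rec["host_id"], rec["qid"])
        seen = datetime.fromisoformat(rec["last_found"])
        if (key not in latest
                or seen > datetime.fromisoformat(latest[key]["last_found"])):
            latest[key] = rec
    return list(latest.values())
```

Not saying this is the right approach, just the kind of aggregation I'm asking about versus doing it in SPL at search time.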

 

3) Do you import only vulnerabilities, or ALL information (Potential Vulnerabilities and Information Gathered)? I can see some benefit to including Information Gathered, simply because it captures quite a bit of data that may be useful to some of our incident response folks who want a better understanding of what is immediately on a host, what ports are open, etc. They already live in Splunk, and being able to quickly pull up that type of information in the same UI via a simple search would be pretty efficient/useful.
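For concreteness, the decision I'm describing is really just a filter on detection type at ingest time. A toy sketch in Python (the `type` field and the label strings are my shorthand for however your feed tags Confirmed / Potential / Info Gathered records):

```python
# Toy sketch: choose which detection types to forward to Splunk.
# The "type" field and its label strings are illustrative shorthand,
# not the literal Qualys output format.
def filter_detections(records, include_types=("Confirmed",)):
    """Keep only records whose type is in include_types.

    Pass ("Confirmed", "Potential", "Info Gathered") to keep
    everything for the incident-response use case: open ports,
    installed software, and similar host context.
    """
    return [r for r in records if r["type"] in include_types]
```

The trade-off I'm weighing is index volume versus having that host context searchable in the same UI.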

 

4) How do you pull the data from Qualys? By that I mean, do you periodically (say, daily) query the API for new updates to IP/NetBIOS/DNS-tracked assets and download that differential? As a point of reference, we currently poll the API for completed scans every hour, and when a new scan report is available it is downloaded and processed.
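In case it helps, the core of our hourly poller is essentially: fetch the scan list, diff it against the refs we've already processed, and download anything newly finished. Sketched in Python (this is not a real API client; the `ref` and `status` field names and the "Finished" label are illustrative of what the scan-list call returns):

```python
# Sketch of our hourly polling logic: given the current scan list and
# the set of scan refs already processed, return the scans that still
# need to be downloaded. Field names and status labels are illustrative.
def new_finished_scans(scan_list, processed_refs):
    """Return scans that are finished and not yet downloaded.

    scan_list: list of dicts with "ref" and "status" keys,
               as parsed from the scan-list API response.
    processed_refs: set of scan refs we have already handled.
    """
    return [s for s in scan_list
            if s["status"] == "Finished" and s["ref"] not in processed_refs]
```

The actual download/parse step then runs per new scan, and the ref is added to the processed set afterward. I'm curious whether a daily differential pull against the asset data would be cleaner than this per-scan approach.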

 

Appreciate any feedback people can provide, as this would be a big value add to our program.
