There is no one-size-fits-all answer, but here is what I do in my environment of ~1,000 machines.
A nightly scan of all my machines that covers the standard TCP ports and no UDP ports (this is configurable in the option profile).
A weekly scan of all my machines that covers all TCP and UDP ports.
The weekly scan runs on weekends and takes significantly longer than the nightly one.
I enable authentication in both profiles, which is always important if you want results to be as accurate as possible.
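To make that cadence concrete, here is a minimal sketch of the scheduling logic: weeknights get the lighter standard-TCP profile, weekends get the full TCP+UDP profile. The profile names and the helper are illustrative placeholders, not actual Qualys option profiles or tooling.

```python
from datetime import date

# Hypothetical helper mirroring the cadence described above:
# standard TCP-only scans on weeknights, a full TCP+UDP scan on weekends.
# Profile names are placeholders, not real Qualys option profiles.
def profile_for(day: date) -> str:
    """Return which scan profile should run on the given date."""
    if day.weekday() >= 5:          # Saturday (5) or Sunday (6)
        return "Full-TCP-UDP"       # weekly deep scan in the weekend window
    return "Standard-TCP-NoUDP"     # lighter nightly scan on weekdays

print(profile_for(date(2024, 1, 6)))  # 2024-01-06 is a Saturday
```

In practice you would feed this choice into whatever launches your scans (a scheduler, or scheduled scans configured in the console).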
To determine the right frequency for yourself, I would ask these questions:
How many machines do you have?
How long does it take to scan them all?
How long is your scanning window?
How often do you run and review a Qualysguard report?
Also, how 'fresh' do your results need to be?
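The questions above can be turned into a rough feasibility check: given a fleet size, an average scan time per host, and a scanning window, does the whole estate fit? The per-host minutes and scanner-concurrency figures below are assumptions you would measure in your own environment; this is a back-of-the-envelope sketch, not a sizing formula from Qualys.

```python
# Rough feasibility check for a scan window, under assumed numbers.
# minutes_per_host and parallel_scanners are placeholders you would
# measure and configure in your own environment.
def fits_in_window(hosts: int, minutes_per_host: float,
                   parallel_scanners: int, window_hours: float) -> bool:
    """True if the whole estate can be scanned inside one window."""
    total_minutes = hosts * minutes_per_host / parallel_scanners
    return total_minutes <= window_hours * 60

# e.g. 1000 hosts, ~3 min each, 10 concurrent scans, 8-hour nightly window
print(fits_in_window(1000, 3, 10, 8))  # 300 min needed vs 480 available
```

If the check fails, you either widen the window, add scanner capacity, or split the estate across several nights.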
One thing people forget is that you can scan production networks midday by leveraging the performance settings in the option profiles. This ensures you can get results from those devices that might only be available from 8 to 5.
It really does depend on the size of the network, the times you can scan, network bandwidth, etc.
I would suggest you determine how often you want to report. Once you know, for example, that you want to report monthly, you should structure your scanning so that everything is scanned at least once a month.
You can always set up scans/profiles to react to incidents, for example if you need to scan all end-user machines in response to an emerging threat.
I think best practice would be to scan every 3 months for a small network.
I think everyone who has contributed to the discussion has provided great ideas. You will also need to factor in the TYPE of scan - vulnerability, compliance, web app, etc. - if you are using any of these. I think Qualys is working to combine vulnerability and compliance into one scan, but until that happens you will have separate scans. From a compliance perspective (I'm assuming you are mainly thinking about vulnerability), your compliance scans will determine how long it will take to accomplish these three tasks before audits are due: reporting, assessment, and remediation.
1 - When verification of remediation efforts is necessary.
2 - When you need to detect new vulnerabilities released that affect the platforms relevant to your environment.
Another influencing factor you should consider is the time it takes to remediate issues. If you scan too frequently you'll simply drown your support team in lists of (repetitive) issues, and then they will either switch off or simply start risk-accepting everything.
Do you have a patching policy? Although patches are not the answer to everything, they do address many vulnerabilities, especially in the Windows environment. Consider overlaying or integrating your patching policy with your scanning frequency. It's questionable whether there is much point scanning at a higher rate than you are obliged to fix things.
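For example, if you patch on Microsoft's usual cadence, a verification scan a few days after the second Tuesday of the month keeps scanning aligned with patching. The 4-day lag below is an assumption to tune to your own deployment SLA; this is a sketch of the date arithmetic, not a Qualys scheduling feature.

```python
from datetime import date, timedelta

def second_tuesday(year: int, month: int) -> date:
    """Second Tuesday of the month (Microsoft's usual patch day)."""
    first = date(year, month, 1)
    # days until the first Tuesday (weekday 1), then one more week
    offset = (1 - first.weekday()) % 7
    return first + timedelta(days=offset + 7)

# Scan a few days after patching so results reflect the new patch level;
# the 4-day lag is an assumption, tune it to your deployment SLA.
def verification_scan_date(year: int, month: int, lag_days: int = 4) -> date:
    return second_tuesday(year, month) + timedelta(days=lag_days)

print(second_tuesday(2024, 1))  # 2024-01-09
```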
I think a lot of organisations take the blanket approach of scanning a whole estate, or whole environments depending on OS. If you can, consider scanning by Business Risk categorisation of your platforms: the higher the risk, the more frequent the scan.
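A risk-tiered schedule could be as simple as a lookup table. The tiers and intervals here are illustrative assumptions (the 30-day tier echoes the monthly blanket cadence mentioned in this thread, the 90-day tier the quarterly suggestion for small networks); they are not a Qualys feature.

```python
# Hypothetical mapping from business-risk tier to scan interval (days);
# the tiers and intervals are illustrative, not a Qualys feature.
SCAN_INTERVAL_DAYS = {
    "critical": 7,    # scan weekly
    "high": 14,
    "medium": 30,     # roughly the monthly blanket cadence
    "low": 90,        # the quarterly floor suggested for small networks
}

def scan_interval(risk_tier: str) -> int:
    """Days between scans for a platform's risk tier."""
    return SCAN_INTERVAL_DAYS[risk_tier.lower()]

print(scan_interval("Critical"))  # 7
```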
We scan monthly across an estate of 2,200 Unix and Windows servers using the blanket approach, and we're seeing the remediation rate start to plateau.