We're in the process of rolling out OSD to production, and too many content location requests can peg the MP.
We're aware of CPU spikes tied to large numbers of content lookup requests, primarily from software update deployments. The main concern is when we make updates available to 20,000 computers.
What we see is that even when we deploy the content to the collection off-hours (say 1 AM), the desktops download the content off-hours. Then, beginning around 8 AM, we see a steady climb in the number of clients checking in. What we're working towards is being able to predict how many content lookups to anticipate at any given point in time.
We know that in any minute where we process more than 3,000 content lookups, the SQL processor gets pegged and the MP log reports a timeout, which could potentially impact OSD.
Has anyone else attempted to track when clients enter the environment, how many content location requests are placed per minute/hour, etc.? <cough cough> GARTH JONES? </cough cough> We could attempt to archive the SQL table with the policy timestamp and check for differences, but that's not necessarily a reflection of whether a client actually needed content. I'm not aware of any way to track content location requests short of trying to log-scrape mp_Location.log (rough sketch of that below).
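Here's roughly what I had in mind for the log-scrape, as a Python sketch. It assumes the standard ConfigMgr log line shape, and the `ContentLocationRequest` marker, the function name, and the log path are all my guesses, so eyeball a few real lines from your own mp_Location.log before trusting the counts:

```python
import re
from collections import Counter
from datetime import datetime

# Standard ConfigMgr log line shape (an assumption worth checking):
#   <![LOG[message]LOG]!><time="HH:MM:SS.mmm+TZ" date="MM-DD-YYYY" ...>
LOG_LINE = re.compile(
    r'<!\[LOG\[(?P<msg>.*?)\]LOG\]!>'
    r'<time="(?P<time>\d{2}:\d{2}:\d{2})\.\d+[^"]*"\s*'
    r'date="(?P<date>\d{2}-\d{2}-\d{4})"'
)

def lookups_per_minute(path, marker="ContentLocationRequest"):
    """Bucket apparent content location requests per minute.

    `marker` is an assumption about how requests show up in the log
    body; adjust it after inspecting real lines. Assumes one log
    entry per physical line."""
    buckets = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LOG_LINE.search(line)
            if not m or marker not in m.group("msg"):
                continue
            stamp = datetime.strptime(
                f'{m.group("date")} {m.group("time")}', "%m-%d-%Y %H:%M:%S")
            buckets[stamp.replace(second=0)] += 1  # drop seconds
    return buckets

if __name__ == "__main__":
    # Hypothetical path; MP logs usually live under SMS_CCM\Logs.
    counts = lookups_per_minute(r"C:\SMS_CCM\Logs\mp_Location.log")
    for minute, n in sorted(counts.items()):
        flag = "  <-- over 3,000/min" if n > 3_000 else ""
        print(f"{minute:%m-%d %H:%M}  {n:>6}{flag}")
```

Run against a few days of archived logs, that would at least give us the historical per-minute distribution to feed the prediction step.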
If this were possible, the next step would be to try and say: 'Collection X has 12,000 machines, Y has 3,000, and Z has 200, so we expect 15,200 machines to request content on Monday morning. Based on historical averages, we expect x lookups per minute, with the 3,000/minute threshold being exceeded between 8:54 AM and 9:09 AM.'
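If the historical counts were in hand, the prediction step could look something like the sketch below. Everything in it is illustrative: the 4-lookups-per-client multiplier (one per deployed package, say) and the bell-curve check-in profile are placeholders you'd replace with parameters fitted from the actual per-minute data:

```python
import math
from datetime import datetime, timedelta

THRESHOLD = 3_000          # observed SQL-pegging point (lookups/minute)
LOOKUPS_PER_CLIENT = 4     # hypothetical: one lookup per deployed package

# Collection sizes from the example above.
collections = {"X": 12_000, "Y": 3_000, "Z": 200}
total_lookups = sum(collections.values()) * LOOKUPS_PER_CLIENT

# Hypothetical morning profile: check-ins follow a bell curve peaking
# at 9:00 AM with a ~6-minute spread. Real parameters would come from
# the historical per-minute counts in the log-scrape sketch above.
PEAK_MIN, SPREAD = 60, 6.0

def expected_per_minute(i):
    """Expected lookups in minute i (minutes after 8:00 AM)."""
    density = math.exp(-((i - PEAK_MIN) ** 2) / (2 * SPREAD ** 2)) / (
        SPREAD * math.sqrt(2 * math.pi))
    return total_lookups * density

start = datetime(2015, 3, 2, 8, 0)  # a Monday, 8:00 AM
hot = [i for i in range(120) if expected_per_minute(i) > THRESHOLD]
if hot:
    first = start + timedelta(minutes=hot[0])
    last = start + timedelta(minutes=hot[-1])
    print(f"Expect >{THRESHOLD:,} lookups/min between "
          f"{first:%I:%M %p} and {last:%I:%M %p}")
else:
    print(f"Projected load stays under {THRESHOLD:,}/min")
```

With those toy numbers it prints a window around 8:56-9:04 AM, which is the kind of answer we're after: a predicted danger window we could schedule OSD around.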
Thoughts?
Will