Although you might be able to conduct a meaningful simulation of an operating-system scheduler, frankly, I doubt it. There are simply too many variations in what might happen "next." Or the data is simply too abstract: you can gather it, but what does it mean, what does it tell you, what can you do with it, and how can you usefully compare it to anything else?
Often, what is done instead is to instrument certain applications so that they collect statistics over a span of at least several seconds, from which you can infer what must be happening. You then apply classical statistical-analysis techniques to the collected data set.
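As a minimal sketch of that idea (the workload, the 5 ms nominal latency, and the jitter are all made up for illustration): instrument an operation, collect many timing samples, then summarize them with classical statistics.

```python
import random
import statistics

random.seed(0)  # deterministic for the example

# Hypothetical instrumentation: sample the latency of some operation many
# times, then summarize. A real harness would time actual work; here the
# "operation" is simulated as Gaussian jitter around a 5 ms nominal latency.
def sample_latencies(n=1000):
    return [random.gauss(5.0, 1.2) for _ in range(n)]  # milliseconds, simulated

samples = sample_latencies()
print(f"n     = {len(samples)}")
print(f"mean  = {statistics.mean(samples):.2f} ms")
print(f"stdev = {statistics.stdev(samples):.2f} ms")
# quantiles(n=20) yields 19 cut points; the last one is the 95th percentile.
print(f"p95   = {statistics.quantiles(samples, n=20)[-1]:.2f} ms")
```

The point is that individual samples are nearly meaningless; it is the aggregate distribution over several seconds that tells you something.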
A very strong approach is to define an objective about which you can say it was either "met" or "not met," and from this set a goal concerning that objective. You have now engineered a binomial test situation: an experiment. Alternatively, you can decide on a "closeness" factor (how close were we to meeting the goal?) and analyze the distribution of those values.
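A sketch of that binomial test, using only the standard library (the numbers, 180 successes out of 200 runs against a 95% target rate, are hypothetical):

```python
from math import comb

def binom_pvalue_le(k, n, p):
    """Exact one-sided p-value: P(X <= k) for X ~ Binomial(n, p).

    Asks: if the goal were truly met with probability p on each trial,
    how surprising is it to see only k successes out of n?
    """
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Hypothetical experiment: the objective was "met" in 180 of 200 runs,
# against a target of meeting it 95% of the time.
pval = binom_pvalue_le(180, 200, 0.95)
print(f"P(X <= 180 | n=200, p=0.95) = {pval:.4f}")
```

A small p-value here is evidence that the system is falling short of the 95% goal, which is a much crisper statement than staring at raw trace data.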
I guess it's a bit like quantum physics: you can never directly observe the thing itself, but you can analyze its effects.