I’ve been benchmarking systems long enough to know that no matter how many questions I think my results answer, what I’m really doing is creating about three new questions for each one I solve. That’s what happened with my earlier run of Vista benchmarks - I’d run some tests, and you’d then come back with different scenarios you’d like to see covered and different platforms for those scenarios.
Benchmarking is an artificial activity. The goal is to eliminate as many variables as possible and achieve some consistent metric. The problem is, by removing those variables you’re actually shifting the process out of reality and into a make-believe land that exists only on the PCs being benchmarked. I bet you don’t take elaborate steps to ensure consistency before carrying out your daily PC tasks. Hence my enthusiastic use of the phrase “your mileage WILL vary.”
Another fundamental problem with benchmarking is that neither the tests nor the results are exactly what people want. Ultimately, what everyone wants to see is a benchmark of their own daily tasks carried out on their own PC. That, I’m afraid, is something I cannot provide.