How do you benchmark real-world work?

Adrian Kingsley-Hughes and I have been focusing lately on a tiny aspect of PC performance. He ran two sets of file management benchmarks on a test PC in his lab, and I performed similar tests on a machine in my lab. Results? Inconclusive.

But are both of us missing the real point of owning and using a PC? Can any stopwatch-based measurement of isolated tasks as performed by individual hardware and software components really measure the worth of a technology investment? I don’t think so.

This is not a new question for me. Back in the early 1990s, when I was editor of the late, lamented PC Computing, we differentiated our product reviews from those of our sister publication PC Magazine by focusing on usability. The highly regarded PC Magazine Labs was the quintessential “speeds and feeds” shop. We went to the opposite extreme, spending a small fortune (I still remember the budget battles) to build a state-of-the-art usability lab and hiring usability professionals to run it.

I liked our reviews better than the ones at PC Mag because we didn’t force a one-size-fits-all conclusion. Instead, using the usability data, we tried to determine which product was a better fit for readers and prospective buyers with different needs. I think that approach still works today.
