This article continues the series on resource allocation for virtual machines by examining CPU utilization and discussing some of the challenges of obtaining accurate Performance Monitor readings in a virtual server environment.
The first thing that I want to talk about is performance monitoring. It seems a little odd to me, but performance monitoring has become the hot virtualization topic almost overnight. I think part of the reason for this is that people are starting to realize that Performance Monitor cannot be completely trusted in a Hyper-V environment. It isn't just Performance Monitor that becomes unreliable, though; many of the other available resource monitoring mechanisms can no longer be trusted either. A guest operating system can only see the virtual processors that are exposed to it, so tools running inside the guest have no visibility into how the hypervisor actually schedules those virtual processors onto the physical cores. For example, it is very common for the Hyper-V Manager console to report a completely different level of CPU usage than the Windows Task Manager shows. In fact, if you look at Figure A, you can see that Hyper-V Manager is reporting 5% CPU utilization, while the Windows Task Manager is reporting that the virtual machine is using 0% of the CPU resources.
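One way to sidestep the unreliable in-guest numbers is to query the hypervisor's own performance counters from the parent partition. As a sketch (this assumes you are running the commands on the Hyper-V host itself, with the Hyper-V role installed), the built-in `typeperf` utility can sample the Hyper-V hypervisor counter sets, which account for CPU time that in-guest tools such as Task Manager never see:

```shell
:: Run on the Hyper-V host (assumption), not inside a guest VM.

:: Total load the hypervisor is placing on the physical (logical) processors,
:: sampled five times at the default one-second interval:
typeperf "\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time" -sc 5

:: CPU time consumed by guest virtual processors specifically:
typeperf "\Hyper-V Hypervisor Virtual Processor(_Total)\% Guest Run Time" -sc 5
```

Because these counters are maintained by the hypervisor rather than by any one guest, they tend to line up with what Hyper-V Manager reports rather than with what Task Manager shows inside a virtual machine.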