Matt Baker
Guest
The data comes from the Windows wmic command-line tool. It is calculated as (KernelModeTime + UserModeTime) * 100 / (elapsed time of process). This calculation is in place for every process, including classic AppServer agents. The elapsed time of the process is "now" - CreationDate. This is what I could gather in a few minutes of reading code. I think this was done because other operating systems like Linux generate timing data in "jiffies" (scheduler ticks) rather than raw milliseconds. OSes don't really give a "current % used"; rather, the data is exposed so you can calculate it yourself. I don't know why this choice was made many years ago.

If I understand this correctly, KernelModeTime and UserModeTime are cumulative over the life of the process; the result is not a percentage of "now". The WMIC query is more or less:

>wmic Process where Processid=#### get CommandLine,CreationDate,ExecutablePath,KernelModeTime,MinimumWorkingSetSize,ParentProcessId,Processid,UserModeTime,WorkingSetsize

The "CPU Utilization" in the second screenshot is a finer-grained value. It also comes from wmic, using the query below, and is the Windows-reported average for the last 1 second. It comes from Windows perf counter data. It is "cooked" data, which basically means wmic takes two samples of the processor and user time one second apart, subtracts them, and divides by the time difference: essentially the same as the per-PID metric, but calculated over 1 second rather than over the elapsed lifetime of the process.

>wmic path Win32_PerfFormattedData_PerfOS_Processor get PercentInterruptTime,PercentPrivilegedTime,PercentProcessorTime,PercentUserTime

I agree the data isn't in the best format and is too coarse. The per-process CPU usage should use the same double-sample, short-interval approach rather than a single sample since the start of the process, which would give a much clearer "now" value.
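To make the difference between the two numbers concrete, here is a small illustrative sketch of the two calculations described above. It is plain Python with made-up sample values, not the actual agent or wmic code; the function names and the specific numbers are hypothetical.

```python
def cpu_percent_since_start(kernel_time, user_time, elapsed):
    """Lifetime average: (KernelModeTime + UserModeTime) * 100 / elapsed.

    All three arguments just need to be in the same time unit.
    This is the coarse per-PID number: a single sample averaged over
    the entire life of the process.
    """
    return (kernel_time + user_time) * 100 / elapsed


def cpu_percent_interval(total_t1, total_t2, dt):
    """Two-sample "cooked" style: take the delta of cumulative CPU time
    over a short interval and divide by that interval, like the
    1-second perf-counter value.
    """
    return (total_t2 - total_t1) * 100 / dt


# Hypothetical process: 30 s kernel + 90 s user time over a 10-minute life
# (values in milliseconds).
lifetime = cpu_percent_since_start(30_000, 90_000, 600_000)
print(lifetime)  # 20.0 -- the process looks mostly idle

# But in the most recent second it burned 600 ms of CPU:
now = cpu_percent_interval(120_000, 120_600, 1_000)
print(now)  # 60.0 -- the much clearer "now" value argued for above
```

A busy burst late in a long-lived process barely moves the lifetime average, while the two-sample interval value reflects it immediately; that is why the single-sample-since-creation number reads as too coarse.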