Quote:
Originally Posted by Ammad
I am using OEL 5.3 64-bit on an HP DL580 (4 CPUs). I am decompressing the Oracle backup with the tar command.
You have 30GB of files in cache, but apparently the tar file you are reading was not in that cache. That seems reasonable to me; if it doesn't match your expectation, explain what you expected.
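If you want to check whether the tar file itself is cached, here is a minimal sketch, assuming you can install the vmtouch utility (it is not part of OEL 5.3) and with a made-up filename:
Code:
    # Show how much of the file is resident in the page cache.
    # Near 0% before the first read would explain the disk wait.
    vmtouch -v oracle_backup.tar.gz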
Quote:
only one core is being utilized 65%
Probably that means the limit on throughput is reading that tar file from disk. So one core at 65% is enough to decompress at the rate the input file can be read.
Maybe the operation you are doing is so big that the write-behind of its output is aging out of cache, so the actual limit on throughput is writing the output to disk. More likely, the output is filling your 30GB of cache while the disk is busy reading the input.
Either way, the limit is the disk.
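You can confirm that while the job runs; a minimal sketch using iostat from the sysstat package (sda is an assumption, substitute whichever device holds the input and output):
Code:
    # Report extended statistics for the device every 5 seconds.
    # %util near 100 on the busy device, while the CPUs are mostly
    # idle, confirms the disk rather than the CPU is the limit.
    iostat -dx sda 5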
Quote:
and the rest of the cores are free and load average is 4.4.
I want to know why the CPU is not being equally utilized?
Why should the work be balanced among cores?
Even if there were more work than one core could handle (say, if you had a faster RAID array instead of your current disk system), I don't think the decompression algorithm has been coded to make good use of multiple cores.
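For what it is worth, there are parallel replacements for gzip; a sketch assuming GNU tar, a gzip-compressed backup, and that you install pigz (not part of OEL 5.3; the filename is made up):
Code:
    # Hand decompression to pigz instead of gzip. Note that gzip
    # decompression is inherently serial, so pigz only adds helper
    # threads for reading, writing and checksums; the gain is small,
    # and none of it helps while the disk is the limit.
    tar --use-compress-program=pigz -xf oracle_backup.tar.gz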
My experience with Windows running single-threaded applications on multi-core machines is that on almost every I/O stall the OS switches the thread to a different core. That reduces throughput because of L1 (and maybe L2) cache effects, but it balances the temperature across the cores, which might be beneficial (I don't really know).
My experience with similar situations on Linux is that threads do not move among cores as often. So compared to Windows, you get slightly better throughput and worse thermal balance (you can test the effect yourself; see the sketch below).
I don't know enough to say which is better.
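Linux lets you pin a process to one core, so the comparison is easy to make; a minimal sketch using taskset from util-linux (core 0 and the filename are arbitrary):
Code:
    # Pin the extract to core 0 so the scheduler cannot migrate it,
    # then compare the wall-clock time against an unpinned run.
    time taskset -c 0 tar -xzf oracle_backup.tar.gz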
Were you worried about thermal balance? Or did you assume that spreading the work evenly across cores would give better throughput? It would not, because disk speed is what limits this operation.