Linux - Kernel: This forum is for all discussion relating to the Linux kernel.
I need to do some speed tests and benchmarks on my hard disk drive. Unfortunately the files being written/read are being cached (normally that's a good thing, but not when I need to accurately measure the performance of the hard disk). I looked at the 'mount' options, 'hdparm', even the kernel configuration, but couldn't find anything. I feel it should be a simple switch somewhere and I'm just missing it.
Please, if anyone knows how to disable the cache, I'd appreciate the help. Even if it requires hacking the kernel, I have no problem with that.
Thanks in advance,
PS. I hope I put the question in the correct forum!
Does passing the sync,dirsync flags to mount do what you need? For testing a new disk, I'd use dd between it and a file on tmpfs initialized from /dev/urandom (of course this would overwrite the partition table and any data on the disk, so write speed should be tested this way only on empty disks).
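The dd-from-tmpfs test suggested above might be sketched like this (a sketch only: GNU dd is assumed, /dev/sdX is a placeholder for the disk under test, and the write step destroys everything on that disk, which is why it is shown only as commented usage):

```shell
#!/bin/sh
# Sketch of the suggested write test: stage random data in tmpfs so the
# source can't be the bottleneck, then time a raw write to the empty disk.
stage_testdata() {
    # $1 = size in MiB; prints the path of the staged tmpfs file
    dir=/dev/shm
    [ -d "$dir" ] || dir=/tmp            # fall back if /dev/shm is absent
    f=$(mktemp "$dir/testdata.XXXXXX")
    dd if=/dev/urandom of="$f" bs=1M count="$1" status=none
    printf '%s\n' "$f"
}

write_test() {
    # $1 = staged file, $2 = target block device (e.g. /dev/sdX)
    # oflag=direct bypasses the page cache; conv=fsync waits for the media.
    sync
    dd if="$1" of="$2" bs=1M oflag=direct conv=fsync
}

# Usage (as root, on a disk whose contents you can afford to lose):
#   f=$(stage_testdata 256)
#   write_test "$f" /dev/sdX
```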
Thanks raskin for the reply.
Although the sync option will let me bypass the cache when writing, the cache is still used for reading, so unfortunately it is not enough. I need read commands to actually hit the disk.
And no, it's not a new disk; I do need the file system for my tests!
Well, preventing read cache usage is pretty simple. You can do some raw-read tests with dd. You can also do some tests (one-time reads only, unfortunately) using drop_caches:
Writing to this will cause the kernel to drop clean caches, dentries and
inodes from memory, causing that memory to become free.
To free pagecache:
echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
echo 3 > /proc/sys/vm/drop_caches
As this is a non-destructive operation and dirty objects are not freeable, the
user should run `sync' first.
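The commands quoted above can be wrapped into a small helper for repeated runs (a sketch: the knob path is parameterized only so the helper can be exercised without root, and timed_read just relies on dd's own transfer statistics):

```shell
#!/bin/sh
# Sketch of an uncached read test built on the drop_caches knob quoted above.
drop_caches() {
    # $1 (optional) = path of the knob, defaulting to the real /proc file
    sync                                   # dirty pages are not freeable
    echo 3 > "${1:-/proc/sys/vm/drop_caches}"
}

timed_read() {
    # $1 = file to read once; dd prints the elapsed time and throughput
    dd if="$1" of=/dev/null bs=1M
}
```

Run drop_caches (as root) immediately before each timed_read so every read really comes from the disk.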
Thanks again for your help. This is really interesting; it's a technique I didn't know about.
I also know another trick: unmounting and remounting the device. However, both ways only flush the cache once, and in my test I may read the same block twice! Anyway, I think I can work around this point and make sure no block is read twice (although the test may lose some value for certain reasons!).
But still, if you, or anyone, knows a way to completely disable the cache, it would save me a lot of hassle.
If there is no such thing as disabling the cache in Linux, then I think I'll have to limit my test a bit.
What amount of data are we speaking about? Keep a low-CPU memory-eater running, and the cache will die off on its own... See the Documentation directory in the kernel sources to find out how to make programs always take over the cache. I don't know for sure, but finding the cache lookups in the VFS code and making them always fail is probably not hard.
I'm speaking about 100~400MB, but it is worth mentioning that the access pattern is somewhat random, not sequential (in some cases I may access certain blocks frequently). It is actually a complicated test; that's really the problem.
I guess a memory-eater process may be what I need; it should solve the problem. I think I'll have to disable swap, though, since its I/O may affect the results of my test.
And yes, I'll take a look at the VFS code first.
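The low-CPU memory-eater idea can be sketched in plain shell (illustrative only: the size and the trick of holding the allocation in a shell variable are my assumptions, and as noted above swap should be disabled first so its I/O doesn't pollute the results):

```shell
#!/bin/sh
# Crude low-CPU memory eater: hold a large block of anonymous memory so the
# kernel evicts page cache to make room, then sleep so it stays resident.
alloc_block() {
    # $1 = size in MiB; emits that many bytes of non-zero data
    head -c "$(( $1 * 1024 * 1024 ))" /dev/zero | tr '\0' x
}

eat_memory() {
    # $1 = MiB to hold until the process is killed
    hog=$(alloc_block "$1")
    while :; do sleep 60; done       # low CPU; keeps the memory resident
}

# Usage:
#   swapoff -a
#   eat_memory 512 &
#   ... run the benchmark ...
#   kill %1; swapon -a
```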
Following up on the previous posts, I wrote a little script to continuously flush the cache:
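The script itself did not survive in the thread; assuming it simply looped over sync and the drop_caches knob, a minimal reconstruction might look like this (the knob path is parameterized only so the helper can be tried without root):

```shell
#!/bin/sh
# Hypothetical reconstruction of a continuous cache flusher.
flush_once() {
    # $1 (optional) = drop_caches path, defaulting to the real /proc knob
    sync
    echo 3 > "${1:-/proc/sys/vm/drop_caches}"
}

# Run as root for the duration of the benchmark:
#   while :; do flush_once; sleep 1; done
```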
And why would you want to do that?
The OP had a specific reason for bypassing cache - in general use, file caching is a (significant) benefit.
Running your little script exposes you to significant risk of data loss.