I am using SQLite to create a database file. The database file needs to be non-persistent, so I am creating it
in tmpfs. The size of my tmpfs is 10 MB. My concern is that I am not seeing any performance gain: writes to the database file in tmpfs take almost the same time as writes to a
database file on the hard disk.
I am using the following command to create the tmpfs:
mount -t tmpfs -o size=10m tmpfs /mnt/WCFRamDisk
After creating the folder /mnt/WCFRamDisk, I create the database file at /mnt/WCFRamDisk/MyDB
Please let me know how I can get the advantage of using tmpfs.
When you write to files on disk, the OS uses disk caching in RAM, so the task will usually finish before the data is actually written to disk. And if you read it again right after, the file will be in the cache so the OS will not have to read the file from disk again. So using tmpfs is only slightly faster than a regular filesystem.
But there is a big difference if you do other tasks in between. For example:
- Write some data to tmpfs.
- Do some other tasks that use the disk a lot. (So the disk cache fills up with other data.)
- Read the data from tmpfs.
So tmpfs can help a lot if there is information that is read regularly, with other things happening in between. Otherwise there is not much difference.
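A quick way to see this cache effect for yourself (nothing here is specific to the setup above; the drop_caches step needs root, so it is left commented out):

```shell
#!/bin/sh
# Write a 50 MB file, then read it twice: the second read is served
# from the page cache and finishes almost instantly.
dd if=/dev/zero of=/tmp/cachetest bs=1M count=50 2>/dev/null
time cat /tmp/cachetest > /dev/null    # first read: may touch the disk
time cat /tmp/cachetest > /dev/null    # second read: page cache, near-instant
# As root, evict clean caches to force a cold read again:
#   echo 3 > /proc/sys/vm/drop_caches
rm -f /tmp/cachetest
```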
Thanks for the reply. Can you tell me how the behavior will differ if I don't have a hard disk in my system but use flash instead? Generally one does not assign swap space on systems with flash. In that case, will there be any performance gain if I use tmpfs? Do you think ramfs is a better choice?
If you want performance, try a ramdisk backed by a partition on your hard disk, using md-raid with write-behind and write-mostly (soft RAID1).
Create a ramdisk, and a partition on the HDD (preferably an MLC SSD). Use mdadm to create a RAID1 set using the "write-behind" and "write-mostly" features. Done right, you should see some pretty amazing performance. The only drawback to this approach is that it takes a minute or so to rebuild after a reboot (depending on size), but it's pretty fast regardless - especially when backed by an MLC SSD (the best approach by far).
The script below should be self-explanatory. The "SETUP STUFF" only needs to be done once; the "REBUILD" portion can be added to /etc/rc.local for persistence across reboots.
#### REBUILD THE DEVICE ON REBOOT #######
mke2fs -E stride=16 /dev/ram1                      # recreate the fs for the bitmap ramdisk (-E replaces the deprecated -R)
mount -o noatime /dev/ram1 /var/cache/ramdisk/bitmap.md0
mdadm -A --run /dev/md0 /dev/VolGroup00/LogVol01   # assemble degraded, disk member only
mdadm -a /dev/md0 /dev/ram0                        # re-add the ramdisk; the mirror rebuilds
mount -o noatime /dev/md0 /ramdisk
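The one-time "SETUP STUFF" isn't shown above; a hedged sketch of what it might look like, reusing the device and path names from the rebuild script (your device names, bitmap path, and write-behind depth will differ), could be:

```shell
#!/bin/sh
# One-time setup (run as root). Device names and paths are assumptions
# taken from the rebuild script, not a tested recipe.
modprobe brd                              # provide /dev/ram* block devices
mkdir -p /var/cache/ramdisk/bitmap.md0 /ramdisk
mke2fs -q /dev/ram1                       # small fs to hold the write-intent bitmap
mount -o noatime /dev/ram1 /var/cache/ramdisk/bitmap.md0
# RAID1 of ramdisk + LVM volume; the disk member is write-mostly, and
# write-behind lets writes complete as soon as the ramdisk has them.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --bitmap=/var/cache/ramdisk/bitmap.md0/md0.bitmap --write-behind=256 \
      /dev/ram0 --write-mostly /dev/VolGroup00/LogVol01
mke2fs -q -E stride=16 /dev/md0           # the filesystem goes on the mirror
mount -o noatime /dev/md0 /ramdisk
```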
Of course, that is if you want performance and to survive a reboot... If you want pure performance, just work straight off a ramdisk. And you don't want to use expensive flash for swap space, as you will drastically shorten the life of the device.
Actually my final target hardware will not have a hard disk but flash. Pardon my ignorance, but could you be more descriptive about how I can get this "pure performance" using a ramdisk? I have tried using a ramdisk in my PC-based environment, but I am not getting any performance gain. I am using the following simple script to create the ramdisk:
mount -t ramfs -o size=10m ramfs /mnt/WCFRamDisk   # note: ramfs ignores size=; use tmpfs if you need an enforced limit
dbName=/mnt/WCFRamDisk/CFWMediaContent.db
sqlCommands=SQLCmds
if [ -f "$dbName" ]; then
    echo "$dbName already exists"
    echo "$dbName Deleting ..."
    rm -f "$dbName"
fi
sqlite3 "$dbName" < "$sqlCommands"
echo "$dbName - newly Created"
exit 0
What he was talking about was using RAID 1 where one of the disks is a ramdisk and the other a regular disk. This is only needed when you need persistent data.
I guess the slow part of your script is the sqlite3 command. Depending on what is in the SQLCmds file, it could be quite slow, but I think the bottleneck will be CPU speed, not disk I/O. If this happens on every boot and SQLCmds is static, why not drop the sqlite3 command from the boot path altogether and just copy the file to the ramdisk? That is, run the sqlite3 command only when SQLCmds changes, and on every boot just copy the resulting .db file to the ramdisk.
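A sketch of that split, with an assumed template location (the path and the init hook are illustrative, not from the original setup):

```shell
#!/bin/sh
# Build the schema into a template database once, and again only
# whenever SQLCmds changes:
sqlite3 /var/lib/CFWMediaContent.template.db < SQLCmds

# Then at every boot (e.g. from an init script), after the ramfs is mounted:
#   mount -t ramfs ramfs /mnt/WCFRamDisk
cp /var/lib/CFWMediaContent.template.db /mnt/WCFRamDisk/CFWMediaContent.db
```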
As far as the "pure performance" of the ramdisk goes, I was assuming you were after IOPS, in which case ramfs is an excellent choice (virtually no seek time). I suppose a better understanding of what your goal is would be helpful here.
This script is run during system startup. The job of SQLCmds is to create the database schema in the blank db file (created in ramfs). The actual writing of records to the db is done by another application. The insertion of records into the db is what is taking almost the same time as it does for a file on disk.
My PC environment has 1 GB of RAM and I am creating a tmpfs/ramfs of size 10 MB. My CPU is a dual-core Intel and I am running Ubuntu 9.04.
My goal is to have a non-persistent database file, so I decided to create it in tmpfs/ramfs. My concern is that writing to the tmpfs/ramfs database file takes the same time as writing to a file on disk.
Again, where ramdisk/ramfs really shines is in the sheer number of I/O operations per second compared to traditional secondary storage - this is particularly evident both on large transactions (where vm.dirty_ratio is exceeded) and on small random/seek-type IOPs (e.g., in the case of a database). That said, it's important to realize that a write of ~10 MB on a system with ~1 GB of RAM will in nearly all cases exhibit similar performance characteristics on a traditional HDD and on ramfs - this (as Guttorm explained previously) is because you are actually writing to the VFS/buffer cache first in both instances (remember, ramfs is a partition in VFS). Only after "pdflush" wakes up does actual I/O to the physical disk take place (after merging, sorting I/O, etc., depending on the scheduler in use). I believe this fact has already revealed itself in your testing.
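You can inspect the writeback thresholds involved on your own system; these procfs paths are standard on Linux, though the values vary per machine:

```shell
#!/bin/sh
# Percentage of RAM that may be dirty before writers are forced to flush:
cat /proc/sys/vm/dirty_ratio
# Percentage at which background writeback (pdflush) kicks in:
cat /proc/sys/vm/dirty_background_ratio
# Amount of memory currently dirty or under writeback:
grep -E '^(Dirty|Writeback):' /proc/meminfo
```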
Perhaps running the following test would help illustrate this further. It compares I/O performance with and without the buffer cache, bypassing VFS on the hard disk via the "oflag=direct" option of dd:
#Create a ramfs and set up a hard-disk directory for our test
mkdir -p /testdir/ramfs
mount -t ramfs my_ramfs /testdir/ramfs -o size=10m,maxsize=10m
#Test I/O on a HDD partition both with the buffer cache and without (bypassing VFS with dd's "oflag=direct")
#First, test writes through the buffer cache:
for i in `seq 1 10`; do dd if=/dev/zero of=/testdir/test_file.$i bs=1k count=100; done
#Next, test writes bypassing the buffer cache/VFS:
for i in `seq 1 10`; do dd if=/dev/zero of=/testdir/test_file.$i bs=1k count=100 oflag=direct; done
#Now test reads with VFS:
for i in `seq 1 10`; do dd if=/testdir/test_file.$i of=/dev/null bs=1k count=100; done
#And without VFS:
for i in `seq 1 10`; do dd if=/testdir/test_file.$i of=/dev/null bs=1k count=100 iflag=direct; done
#Now test I/O on the ramfs partition (no option to bypass VFS here - again, ramfs is a partition in VFS)
#First, writes:
for i in `seq 1 10`; do dd if=/dev/zero of=/testdir/ramfs/test_file.$i bs=1k count=100; done
#And reads:
for i in `seq 1 10`; do dd if=/testdir/ramfs/test_file.$i of=/dev/null bs=1k; done
You should see that the ramdisk is roughly 150x faster.
Another good test would be to run something like IOZone or Postmark to simulate real I/O.
You're welcome. This is a question/topic that comes up a lot, so I posted this for the benefit of many. I realize you were probably already aware of some of the above. Good luck with your project!