LinuxQuestions.org
Old 10-14-2009, 07:58 AM   #1
souvikdatta
LQ Newbie
 
Registered: Jun 2004
Posts: 11

Rep: Reputation: 2
Using tmpfs is not improving performance


Hello,

I am using SQLite to create a database file. The database file needs to be non-persistent, so I am creating it in tmpfs. The size of my tmpfs is 10 MB. My concern is that I am not seeing any performance gain: the time taken to write to the database file in tmpfs is almost the same as when the file is on the hard disk.

I am using the following command to create the tmpfs:

mount -t tmpfs -o size=10m tmpfs /mnt/WCFRamDisk

After creating folder /mnt/WCFRamDisk, I am creating the database file in the path /mnt/WCFRamDisk/MyDB

Please let me know how I can get the benefit of using tmpfs.

Thanks and Regards,
Souvik
 
Old 10-14-2009, 08:46 AM   #2
Guttorm
Senior Member
 
Registered: Dec 2003
Location: Trondheim, Norway
Distribution: Debian and Ubuntu
Posts: 1,293

Rep: Reputation: 335
Hi

When you write to files on disk, the OS uses disk caching in RAM, so the task will usually finish before the data is actually written to disk. And if you read it again right after, the file will be in the cache so the OS will not have to read the file from disk again. So using tmpfs is only slightly faster than a regular filesystem.

But there is a big difference if you do other tasks in between. For example:
- Write some data to tmpfs.
- Do some other tasks that use the disk a lot. (So the disk cache fills up with other data.)
- Read the data from tmpfs.

So tmpfs can help a lot if there is information that is read regularly, with other things happening in between. Otherwise there is not much difference.
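To see this caching effect directly (a generic sketch, assuming GNU dd; not specific to SQLite), compare a plain write, which returns as soon as the data is in the page cache, with one that waits for the data to actually reach the device:

```shell
#!/bin/sh
# Illustration of the effect described above: the same write looks much
# faster when it only lands in the page cache than when it is forced out
# to the device. dd prints the achieved throughput on stderr; compare them.
f=$(mktemp)

echo "write into the page cache only:"
dd if=/dev/zero of="$f" bs=1M count=50

echo "write and wait for the flush to disk (conv=fdatasync):"
dd if=/dev/zero of="$f" bs=1M count=50 conv=fdatasync

rm -f "$f"
```

On a machine with plenty of free RAM the first run typically reports several times the throughput of the second, even though both write the same 50 MB.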
 
Old 10-15-2009, 12:17 AM   #3
souvikdatta
LQ Newbie
 
Registered: Jun 2004
Posts: 11

Original Poster
Rep: Reputation: 2
Thanks for the reply. Can you tell me how the behavior will differ if my system has no hard disk but uses flash storage instead? Generally one does not assign swap space on systems with flash. In that case, will there be any performance gain from using tmpfs? Do you think ramfs is a better choice?

Regards,
Souvik
 
Old 10-15-2009, 01:22 AM   #4
Forrest Coredump
Member
 
Registered: Oct 2009
Location: Southwestern United States
Distribution: Redhat Enterprise Linux 4-5 (Current RHCE), Fedora Core 11 (FC11), Arch Linux, BT3 (Current GCIH)
Posts: 42

Rep: Reputation: 16
A suggestion:

If you want performance, try a ramdisk backed by a partition on your hard disk, using md-raid with write-behind and write-mostly (soft RAID 1).

Create a ramdisk and a partition on the HDD (preferably an MLC SSD). Use mdadm to create a RAID 1 set with the "write-behind" and "write-mostly" features. Done right, you should see some pretty amazing performance. The only drawback to this approach is that it takes a minute or so to rebuild after a reboot (depending on size), but it's pretty fast regardless - especially when backed by an MLC SSD (the best approach by far).

The script below should be self-explanatory. The "SETUP STUFF" only needs to be done once; the "REBUILD" portion can be added to /etc/rc.local to enable persistence across reboots.


##### SETUP STUFF ########
#mke2fs /dev/ram1                             # the ram device needs a filesystem before it can be mounted
#mount -o noatime /dev/ram1 /var/cache/ramdisk/
#touch /var/cache/ramdisk/bitmap.md0
#mdadm -C /dev/md0 -n 2 -l 1 -b /var/cache/ramdisk/bitmap.md0 --write-behind=64 /dev/ram0 --write-mostly /dev/sdb1 --force
#mdadm --detail --scan >> /etc/mdadm.conf
## END SETUP STUFF ######

#### REBUILD THE DEVICE ON REBOOT #######
mke2fs -E stride=16 /dev/ram1
mount -o noatime /dev/ram1 /var/cache/ramdisk
mdadm -A --run /dev/md0 /dev/sdb1
mdadm -a /dev/md0 /dev/ram0
mount -o noatime /dev/md0 /ramdisk
 
Old 10-15-2009, 01:31 AM   #5
Forrest Coredump
Member
 
Registered: Oct 2009
Location: Southwestern United States
Distribution: Redhat Enterprise Linux 4-5 (Current RHCE), Fedora Core 11 (FC11), Arch Linux, BT3 (Current GCIH)
Posts: 42

Rep: Reputation: 16
Of course, that is if you want performance and data that survives a reboot... If you want pure performance, just work straight off a ramdisk. And you don't want to use expensive flash for swap space, as you will drastically shorten the life of the device.
 
Old 10-15-2009, 02:40 AM   #6
souvikdatta
LQ Newbie
 
Registered: Jun 2004
Posts: 11

Original Poster
Rep: Reputation: 2
Actually my final target hardware will not have a hard disk, only flash. Pardon my ignorance, but could you please be more specific about how I can get this "pure performance" using a ramdisk? I have tried using a ramdisk in my PC-based environment, but I am not getting any performance gain. I am using the following simple script to create the ramdisk:

mount -t ramfs -o size=10m ramfs /mnt/WCFRamDisk
dbName=/mnt/WCFRamDisk/CFWMediaContent.db
sqlCommands=SQLCmds
if [ -f "$dbName" ]; then
    echo "$dbName already exists"
    echo "$dbName Deleting ..."
    rm -f "$dbName"
fi
sqlite3 "$dbName" < "$sqlCommands"
echo "$dbName - newly created"
exit 0

Am I missing something here?

Regards,
Souvik
 
Old 10-15-2009, 03:08 AM   #7
Guttorm
Senior Member
 
Registered: Dec 2003
Location: Trondheim, Norway
Distribution: Debian and Ubuntu
Posts: 1,293

Rep: Reputation: 335
Hi

What he was talking about was using RAID 1 where one of the disks is a ramdisk and the other a regular disk. This is only needed when you need persistent data.

I guess the slow part of your script is the sqlite3 command. Depending on what is in the SQLCmds file, it could be quite slow, but I think the bottleneck will be CPU speed, not disk I/O. If this happens on every boot and SQLCmds is static, why not drop the sqlite3 command from the boot path altogether? What I mean is: every time SQLCmds changes, run the sqlite3 command on it once; then on every boot, just copy the resulting .db file over to the ramdisk.
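That idea can be sketched as a small script (paths and the three-argument calling convention are hypothetical examples, not from the thread):

```shell
#!/bin/sh
# Sketch of the suggestion above: build the .db once, offline, whenever
# SQLCmds changes; at boot, only copy the ready-made file into the ramdisk.

db_is_stale() {
    # stale if the prebuilt db ($2) is missing or older than SQLCmds ($1)
    [ ! -f "$2" ] || [ "$1" -nt "$2" ]
}

# Usage (runs only when arguments are given):
#   prebuild.sh SQLCmds /var/lib/myapp/prebuilt.db /mnt/WCFRamDisk/MyDB.db
if [ $# -eq 3 ]; then
    if db_is_stale "$1" "$2"; then
        rm -f "$2"
        sqlite3 "$2" < "$1"   # the slow step, now off the boot path
    fi
    cp "$2" "$3"              # the only work at every boot: a plain copy
fi
```

With this split, the sqlite3 invocation runs only when the schema actually changes, and boot time is dominated by a single file copy.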
 
Old 10-15-2009, 03:17 AM   #8
Forrest Coredump
Member
 
Registered: Oct 2009
Location: Southwestern United States
Distribution: Redhat Enterprise Linux 4-5 (Current RHCE), Fedora Core 11 (FC11), Arch Linux, BT3 (Current GCIH)
Posts: 42

Rep: Reputation: 16
As far as the "pure performance" of the ramdisk goes, I was assuming you were after IOPS, in which case ramfs is an excellent choice (effectively no seek time). I suppose a better understanding of what your goal is would be helpful in this case.
 
Old 10-15-2009, 05:25 AM   #9
souvikdatta
LQ Newbie
 
Registered: Jun 2004
Posts: 11

Original Poster
Rep: Reputation: 2
Hello Guttorm,

This script is run during system startup. The job of SQLCmds is to create the database schema in the blank db file (created in ramfs). The actual writing of records to the db is done by another application. The insertion of records into the db is what is taking almost the same time as it does for a file on disk.
My PC has 1 GB of RAM, and I am creating a tmpfs/ramfs of 10 MB. My CPU is a dual-core Intel, and I am running Ubuntu 9.04.
 
Old 10-15-2009, 05:29 AM   #10
souvikdatta
LQ Newbie
 
Registered: Jun 2004
Posts: 11

Original Poster
Rep: Reputation: 2
Hello Forrest Coredump,

My goal is to have a non-persistent database file, so I decided to create it in tmpfs/ramfs. My concern is that writing to the tmpfs/ramfs database file is taking the same time as writing to a file on disk.
 
Old 10-15-2009, 08:06 PM   #11
Forrest Coredump
Member
 
Registered: Oct 2009
Location: Southwestern United States
Distribution: Redhat Enterprise Linux 4-5 (Current RHCE), Fedora Core 11 (FC11), Arch Linux, BT3 (Current GCIH)
Posts: 42

Rep: Reputation: 16
Souvik,

Again, where a ramdisk/ramfs really shines is in the sheer number of I/O operations per second compared to traditional secondary storage - this is particularly evident both on large transactions (where vm.dirty_ratio is exceeded) and on small random/seek-type IOPS (e.g., in the case of a database). That said, it's important to realize that a write of ~10 MB on a system with ~1 GB of RAM will in nearly all cases exhibit similar performance characteristics on a traditional HDD and on ramfs - this (as Guttorm explained previously) is because you are actually writing to the VFS buffer cache first in both instances (remember, ramfs is effectively a filesystem living in the VFS page cache). Only after "pdflush" wakes up does actual I/O to the physical disk take place (after merging and sorting the I/O, etc., depending on the scheduler in use). I believe this fact has already revealed itself in your testing.
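The writeback thresholds mentioned above are ordinary sysctls; on a Linux box you can read the current values straight out of /proc (standard procfs paths, values are percentages of system RAM):

```shell
#!/bin/sh
# Background writeback kicks in once this fraction of RAM is dirty:
cat /proc/sys/vm/dirty_background_ratio
# Beyond this fraction, writing processes are throttled and must flush:
cat /proc/sys/vm/dirty_ratio
```

Writes smaller than these thresholds (like a 10 MB database on a 1 GB machine) complete entirely in the cache, which is why tmpfs shows no advantage there.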


Perhaps running the following test would help illustrate this further. In this test we compare I/O performance by bypassing the VFS cache on the hard disk (via the "oflag=direct" option of dd):

#Create a ramfs & setup a hdd directory for our test
mkdir -p /testdir/ramfs
mount -t ramfs my_ramfs /testdir/ramfs -o size=10m,maxsize=10m

#Test I/O on a HDD partition both with the buffer cache, and without (bypassing VFS with the "oflag=direct" option in dd)

#First test writes with the buffer cache:
for i in `seq 1 10`;do dd if=/dev/zero of=/testdir/test_file.$i bs=1k count=100;done
#Next test writes bypassing the buffer cache/VFS.
for i in `seq 1 10`;do dd if=/dev/zero of=/testdir/test_file.$i bs=1k count=100 oflag=direct ;done

#Now test reads w/VFS:
for i in `seq 1 10`;do dd if=/testdir/test_file.$i of=/dev/null bs=1k count=100;done
#And without VFS:
for i in `seq 1 10`;do dd if=/testdir/test_file.$i of=/dev/null bs=1k count=100 iflag=direct;done



#Now we test I/O on our ramfs partition (obviously no option to bypass VFS, again because ramfs is a partition in VFS)

#First test writes:
for i in `seq 1 10`;do dd if=/dev/zero of=/testdir/ramfs/test_file.$i bs=1k count=100;done

#And reads:
for i in `seq 1 10`;do dd if=/testdir/ramfs/test_file.$i bs=1k of=/dev/null;done

You should see that the ramdisk is roughly 150x faster.

Another good test would be to run something like IOZone or Postmark to simulate real I/O.

Hope this helps.
 
Old 10-16-2009, 12:05 AM   #12
souvikdatta
LQ Newbie
 
Registered: Jun 2004
Posts: 11

Original Poster
Rep: Reputation: 2
Forrest,
Thanks a lot for this wonderful explanation. I got your point.

Regards,
Souvik
 
Old 10-16-2009, 12:52 AM   #13
Forrest Coredump
Member
 
Registered: Oct 2009
Location: Southwestern United States
Distribution: Redhat Enterprise Linux 4-5 (Current RHCE), Fedora Core 11 (FC11), Arch Linux, BT3 (Current GCIH)
Posts: 42

Rep: Reputation: 16
You're welcome. This is a question/topic that comes up a lot, so I posted this for the benefit of many. I realize you were probably already aware of some of the above. Good luck with your project!
 
  


