Old 08-31-2016, 01:29 AM   #1
tsenthilnath
LQ Newbie
 
Registered: Aug 2016
Posts: 12

Rep: Reputation: Disabled
ZRAM compression ratio


Hello all,

I am trying to evaluate ZRAM for my project, and it is not giving the compression ratio we expected, even though the zram documentation claims roughly a 2:1 ratio. Can someone shed some light on this?

You can see in the steps below that I am copying a 35 MB file into the /newroot mount point, which is mounted from the /dev/zram0 device. The memory used in ZRAM for this file also appears to be 35 MB. Since it is a compressed RAM device, I expected the memory usage for this file to be reduced.

Please let me know if I am missing something here.

BTW, I am using kernel version 3.4.

Thanks,
Senthil

Below are the steps I followed to mount zram and copy the file.

root@sthangar-VirtualBox:~# modprobe zram
root@sthangar-VirtualBox:~# lsmod | grep zram
zram 28672 0
lz4_compress 16384 1 zram
root@sthangar-VirtualBox:~# echo 1 > /sys/block/zram0/reset
root@sthangar-VirtualBox:~# echo $((10240*1024*1024)) > /sys/block/zram0/disksize
root@sthangar-VirtualBox:~# /sbin/mkfs.ext4 /dev/zram0
mke2fs 1.42.13 (17-May-2015)
Discarding device blocks: 4096/2621440 done
Creating filesystem with 2621440 4k blocks and 655360 inodes
Filesystem UUID: 1e7dcec3-262d-49dc-9acd-eeb1da71d66c
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: 0/80 done
Writing inode tables: 0/80 done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: 0/80 done
root@sthangar-VirtualBox:~# mount /dev/zram0 /newroot/
root@sthangar-VirtualBox:~# cd /newroot/
root@sthangar-VirtualBox:~# cat /proc/meminfo;free;df .; du -sh .

MemTotal: 2048464 kB
MemFree: 569352 kB
MemAvailable: 1144284 kB
Buffers: 36936 kB
Cached: 662456 kB
SwapCached: 0 kB
Active: 1001864 kB
Inactive: 329944 kB
Active(anon): 633332 kB
Inactive(anon): 12168 kB
Active(file): 368532 kB
Inactive(file): 317776 kB
Unevictable: 32 kB
Mlocked: 32 kB
SwapTotal: 2096124 kB
SwapFree: 2096124 kB
Dirty: 16 kB
Writeback: 0 kB
AnonPages: 632464 kB
Mapped: 174536 kB
Shmem: 13088 kB
Slab: 60300 kB
SReclaimable: 40600 kB
SUnreclaim: 19700 kB
KernelStack: 5760 kB
PageTables: 25068 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 3120356 kB
Committed_AS: 3121172 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
HardwareCorrupted: 0 kB
AnonHugePages: 395264 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 75712 kB
DirectMap2M: 2021376 kB
total used free shared buff/cache available
Mem: 2048464 719576 569196 13088 759692 1144128
Swap: 2096124 0 2096124
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/zram0 10190136 23028 9626436 1% /newroot
20K .


root@sthangar-VirtualBox:/newroot# ls -al /boot/initrd.img-4.4.0-31-generic
-rw-r--r-- 1 root root 35859385 Aug 26 13:17 /boot/initrd.img-4.4.0-31-generic



cat /proc/meminfo;free;df .; du -sh
MemTotal: 2048464 kB
MemFree: 533300 kB
MemAvailable: 1143672 kB
Buffers: 36940 kB
Cached: 697476 kB
SwapCached: 0 kB
Active: 1031992 kB
Inactive: 334808 kB
Active(anon): 633300 kB
Inactive(anon): 12168 kB
Active(file): 398692 kB
Inactive(file): 322640 kB
Unevictable: 32 kB
Mlocked: 32 kB
SwapTotal: 2096124 kB
SwapFree: 2096124 kB
Dirty: 35036 kB
Writeback: 0 kB
AnonPages: 632460 kB
Mapped: 174536 kB
Shmem: 13088 kB
Slab: 61128 kB
SReclaimable: 41428 kB
SUnreclaim: 19700 kB
KernelStack: 5744 kB
PageTables: 25132 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 3120356 kB
Committed_AS: 3121172 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
HardwareCorrupted: 0 kB
AnonHugePages: 395264 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 75712 kB
DirectMap2M: 2021376 kB
total used free shared buff/cache available
Mem: 2048464 719668 533252 13088 795544 1143624
Swap: 2096124 0 2096124
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/zram0 10190136 58048 9591416 1% /newroot
35M .
 
Old 08-31-2016, 01:56 AM   #2
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,120

Rep: Reputation: 4120
Why are you not using the metrics provided in sysfs - notably "orig_data_size" and "compr_data_size"?
The device/filesystem is not being compressed; the data is.
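Something along these lines, for example (a rough sketch; on older kernels such as 3.4 the counters are separate sysfs files, newer kernels consolidate them into mm_stat):

# read the zram counters and compute the effective compression ratio
orig=$(cat /sys/block/zram0/orig_data_size)
compr=$(cat /sys/block/zram0/compr_data_size)
echo "original:   $orig bytes"
echo "compressed: $compr bytes"
awk -v o="$orig" -v c="$compr" 'BEGIN { printf "ratio: %.2f:1\n", o/c }'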
 
Old 09-07-2016, 08:55 AM   #3
tsenthilnath
LQ Newbie
 
Registered: Aug 2016
Posts: 12

Original Poster
Rep: Reputation: Disabled
Thanks for the response.

I tried dumping the metrics provided by sysfs, and I do see a difference:
cat compr_data_size
35917181
cat orig_data_size
38658048

But shouldn't this also be reflected in the usage of the mounted partition? Could you please explain why it is not reflected in df, /proc/meminfo, or du?

The reason I am looking at these numbers: we are running a system that keeps 1.5 GB of data in a tmpfs (RAM-backed filesystem), and we are running out of RAM. So I am looking at ZRAM as an alternative that stores the data in compressed form, which in turn should give a good saving in overall RAM utilization.

Please let me know whether my understanding is correct.

Thanks,
Senthil
 
Old 09-07-2016, 01:17 PM   #4
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,804

Rep: Reputation: 7306
The real compression ratio depends heavily on the content of the file (the data) itself, so 2:1 is probably a typical average, but it is not valid for each and every file.
Try it with a text file as well (for example something from /var/log, or concatenate several logs into a bigger file), and also with a video or music file for comparison.
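For example (just a sketch; the paths are only examples, any reasonably large real-world text will do):

# build a big text file from real log data, copy it onto the zram mount, then compare the counters
cat /var/log/syslog* > /tmp/logs.txt
ls -l /tmp/logs.txt
cp /tmp/logs.txt /newroot/
sync
cat /sys/block/zram0/orig_data_size /sys/block/zram0/compr_data_size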
 
Old 09-07-2016, 09:32 PM   #5
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,120

Rep: Reputation: 4120
Your test file is about the worst possible example of compressible data - it is a binary image that is already compressed. You achieved almost no compression (about 7% going by the sysfs counters) - not unexpected. If this is a representative example of your expected data, there is no point in using zram.
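A quick way to check whether your real data is compressible at all (a rough sketch, using gzip purely as a probe - if re-compressing barely shrinks the file, zram will not help with it either):

ls -l /boot/initrd.img-4.4.0-31-generic
gzip -c /boot/initrd.img-4.4.0-31-generic | wc -c    # compare this byte count with the original size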
 
Old 09-07-2016, 11:03 PM   #6
tsenthilnath
LQ Newbie
 
Registered: Aug 2016
Posts: 12

Original Poster
Rep: Reputation: Disabled
Thanks to both of you for your responses. Please find below my observations with a file containing randomly generated ASCII characters.

root@sthangar-VirtualBox:~# base64 /dev/urandom | head -c 100000000 > file.txt

root@sthangar-VirtualBox:/# ls -al file.txt
-rw-r--r-- 1 root root 100000000 Sep 8 08:36 file.txt

root@sthangar-VirtualBox:~# file file.txt
file.txt: ASCII text


Free memory before copying the file:

root@sthangar-VirtualBox:~# cat /proc/meminfo;free;df .; du -sh .
MemTotal: 2048464 kB
MemFree: 803676 kB
MemAvailable: 1156440 kB
.....
total used free shared buff/cache available
Mem: 2048464 670988 803676 13692 573800 1156440
Swap: 2096124 0 2096124

Filesystem 1K-blocks Used Available Use% Mounted on
/dev/zram0 10190136 23028 9626436 1% /newroot

20K .



Confirming that ZRAM is mounted.
root@sthangar-VirtualBox:/newroot# mount | grep zram
/dev/zram on /newroot type ext4 (rw,relatime,data=ordered)

Copying the 100 MB file.
root@sthangar-VirtualBox:/newroot# cp /file.txt .

Free memory after copying the file:
root@sthangar-VirtualBox:~# cat /proc/meminfo;free;df .; du -sh .
MemTotal: 2048464 kB
MemFree: 702904 kB
MemAvailable: 1155132 kB
....
total used free shared buff/cache available
Mem: 2048464 670948 702888 13692 674628 1155116
Swap: 2096124 0 2096124

Filesystem 1K-blocks Used Available Use% Mounted on
/dev/zram0 10190136 120688 9528776 2% /newroot

96M .


If you look at the memory difference before and after copying the file, the difference is roughly 100000000 bytes (no compression took place).

Thanks,
Senthil
 
Old 09-08-2016, 02:22 AM   #7
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,804

Rep: Reputation: 7306
Using random characters you will hardly get any real compression; that is quite similar to binary data. That is why I suggested using a logfile, which compresses very well.
What kind of files do you actually want to store in zram?
 
Old 09-08-2016, 03:04 AM   #8
tsenthilnath
LQ Newbie
 
Registered: Aug 2016
Posts: 12

Original Poster
Rep: Reputation: Disabled
My understanding was that ASCII files would give a good compression ratio. BTW, I am planning to store mostly executables (ELF files) in the ZRAM device.
 
Old 09-08-2016, 04:54 AM   #9
IsaacKuo
Senior Member
 
Registered: Apr 2004
Location: Baton Rouge, Louisiana, USA
Distribution: Debian Stable
Posts: 2,546
Blog Entries: 8

Rep: Reputation: 465
Real-life ASCII files will give a good compression ratio - things like log files, as suggested. You created a randomly generated ASCII file, which will NOT compress well.

Basically, compression works by finding repeating or more-common patterns in a file. But if you randomly generate an ASCII file, there are no repeating or more-common patterns to take advantage of.
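You can see the difference with a quick comparison like this (a sketch; lz4 is the same compressor your zram device is using, per your lsmod output):

# repetitive text vs. random text of the same size
yes "hello zram compression" | head -c 100000000 > /tmp/repetitive.txt
base64 /dev/urandom | head -c 100000000 > /tmp/random.txt
lz4 -c /tmp/repetitive.txt | wc -c    # a tiny fraction of the original size
lz4 -c /tmp/random.txt | wc -c        # barely smaller than the original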
 
Old 09-20-2016, 01:03 AM   #10
tsenthilnath
LQ Newbie
 
Registered: Aug 2016
Posts: 12

Original Poster
Rep: Reputation: Disabled
Apologies for the late response.

I tried copying a file from /var/log, which is a real-life ASCII file.

root@ubuntu:/home/sthangar# file monitor-make.2
monitor-make.2: ASCII text

I checked the content of this file; it has many repeated patterns, like the following:

#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:32:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:33:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:34:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:35:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:36:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:37:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:38:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:39:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:40:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:41:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:42:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:43:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:44:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:45:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:46:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:47:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:48:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:49:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:50:01 2016,0,0,0
#summary(time-totalj-compiles-cc1s),Sun Mar 27 03:51:01 2016,0,0,0
.
.
.
.

And it is approximately a 12 MB file.
root@ubuntu:/home/sthangar# ls -al monitor-make.2
-rw-r--r-- 1 root root 12101586 Sep 19 22:30 monitor-make.2


I compressed it with the lz4 utility and checked that it shrinks to ~1 MB:
root@ubuntu:/home/sthangar# lz4 monitor-make.2 > monitor-make.2.lz4
Compressed 12101586 bytes into 989531 bytes ==> 8.18%
root@ubuntu:/home/sthangar# ls -al monitor-make.2.lz4
-rw-r--r-- 1 root root 989531 Sep 19 22:40 monitor-make.2.lz4
root@ubuntu:/home/sthangar# file monitor-make.2.lz4
monitor-make.2.lz4: LZ4 compressed data (v1.4+)

Copying the same file to the zram mount point and checking the free space
root@ubuntu:/newroot# cat /proc/meminfo;free;df .; du -sh .
MemTotal: 998408 kB
MemFree: 186664 kB
MemAvailable: 279976 kB
Buffers: 13460 kB
Cached: 205120 kB
SwapCached: 14944 kB
Active: 300700 kB
Inactive: 216400 kB
Active(anon): 145416 kB
Inactive(anon): 171404 kB
Active(file): 155284 kB
Inactive(file): 44996 kB
Unevictable: 32 kB
Mlocked: 32 kB
SwapTotal: 1046524 kB
SwapFree: 808748 kB
Dirty: 48 kB
Writeback: 0 kB
AnonPages: 293388 kB
Mapped: 60372 kB
Shmem: 18300 kB
Slab: 83004 kB
SReclaimable: 41208 kB
SUnreclaim: 41796 kB
KernelStack: 9120 kB
PageTables: 25540 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 1545728 kB
Committed_AS: 2983324 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
HardwareCorrupted: 0 kB
AnonHugePages: 165888 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 141184 kB
DirectMap2M: 907264 kB
DirectMap1G: 0 kB
total used free shared buff/cache available
Mem: 998408 510308 186412 18300 301688 279832
Swap: 1046524 237776 808748
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/zram0 10190136 23028 9626436 1% /newroot
20K .

root@ubuntu:/newroot# cp /home/sthangar/monitor-make.2 .

root@ubuntu:/newroot# cat /proc/meminfo;free;df .; du -sh .
MemTotal: 998408 kB
MemFree: 173200 kB
MemAvailable: 279224 kB
Buffers: 13660 kB
Cached: 217632 kB
SwapCached: 14944 kB
Active: 301628 kB
Inactive: 228192 kB
Active(anon): 145420 kB
Inactive(anon): 171408 kB
Active(file): 156208 kB
Inactive(file): 56784 kB
Unevictable: 32 kB
Mlocked: 32 kB
SwapTotal: 1046524 kB
SwapFree: 808752 kB
Dirty: 11840 kB
Writeback: 0 kB
AnonPages: 293392 kB
Mapped: 60540 kB
Shmem: 18300 kB
Slab: 83036 kB
SReclaimable: 41208 kB
SUnreclaim: 41828 kB
KernelStack: 9120 kB
PageTables: 25540 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 1545728 kB
Committed_AS: 2983328 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
HardwareCorrupted: 0 kB
AnonHugePages: 163840 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 141184 kB
DirectMap2M: 907264 kB
DirectMap1G: 0 kB
total used free shared buff/cache available
Mem: 998408 510880 173200 18300 314328 279224
Swap: 1046524 237772 808752
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/zram0 10190136 34848 9614616 1% /newroot
12M .


Now look at the difference between the free/used space before and after copying the file:
it is still ~12 MB, not the ~1 MB that LZ4 compression would suggest.

Looking at the metrics given by sysfs for zram, it claims the data is compressed to about 1 MB:
root@ubuntu:/newroot# cat /sys/block/zram0/compr_data_size
1390771
root@ubuntu:/newroot# cat /sys/block/zram0/orig_data_size
14921728
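Dividing those two numbers gives the ratio zram actually achieved on this data, roughly 10.7:1, for example:

awk 'BEGIN { printf "%.1f:1\n", 14921728/1390771 }'    # prints 10.7:1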

Essentially, what I see is that although zram claims to have compressed the data (per the sysfs metrics), the actual usage/free numbers for the mounted ZRAM partition do not show it. Can someone look into this and let me know why, and how I can get around it?

Thanks,
Senthil
 
Old 09-20-2016, 01:43 AM   #11
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,804

Rep: Reputation: 7306
You will not see it on the mounted filesystem; the filesystem will always report the real (uncompressed) size of the data. You need to check the memory used by the whole /dev/zram0 device to see how well the content was compressed.
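For example (a sketch; on your kernel the counter should be a separate sysfs file, newer kernels report it inside mm_stat):

cat /sys/block/zram0/mem_used_total    # physical RAM actually consumed by the zram device, in bytes
cat /sys/block/zram0/orig_data_size    # uncompressed size of the data stored on it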
 
Old 09-20-2016, 02:05 AM   #12
tsenthilnath
LQ Newbie
 
Registered: Aug 2016
Posts: 12

Original Poster
Rep: Reputation: Disabled
Okay, if it will not be reflected in the mounted FS, why is it not reflected in /proc/meminfo or free either? Could you please explain?

Could you please also explain how to check the space used by /dev/zram0?

Thanks,
Senthil
 
Old 09-20-2016, 08:51 AM   #14
IsaacKuo
Senior Member
 
Registered: Apr 2004
Location: Baton Rouge, Louisiana, USA
Distribution: Debian Stable
Posts: 2,546
Blog Entries: 8

Rep: Reputation: 465
Quote:
Originally Posted by tsenthilnath
Okay, if it will not be reflected in the mounted FS, why is it not reflected in /proc/meminfo or free either? Could you please explain?
I haven't used zram yet, so I don't know precisely what effect it will have, but I have used the somewhat similar tmpfs extensively. My experience is that "free" reports numbers that aren't intuitively accurate, because of the way it assumes buffers/cache are backed by a backing filesystem. If that confuses you with tmpfs, it will confuse you doubly with zram. Zram actually has a backing store of sorts, but it lives in RAM, and there is no way to predict beforehand how much that backing store will consume when it is needed.

I notice that the numbers you give show only about 50% of RAM used - including cache and buffers. This means there is no pressure at all to drop any cache or buffers. As such, there is absolutely no reason for the kernel to drop the uncompressed copy sitting in the page cache (which is immediately available and accessible) and leave only the compressed copy in zram.

Now, you're used to looking at tmpfs, so you presumably already know you can't just add buff/cache to the "free" figure to get the amount of RAM that is really available. On a system without tmpfs you can, because the OS can drop that memory at any time as long as the backing filesystem is up to date. But tmpfs is different - there is no backing filesystem, so pages consumed by tmpfs can never simply be dropped (at best they can be swapped out).

Zram does have a backing store, even if it is also in RAM. I don't know how to properly calculate how much RAM is truly available, though; I'd have to experiment with it myself to see exactly what it does. For example, it should ideally be pretty lazy about keeping only the compressed pages until that starts looking necessary. Until RAM gets low, it is better to leave everything in its uncompressed state in the page cache, since the compressed copies cost CPU and RAM that would otherwise be available as buff/cache.
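One experiment that should make the effect visible (a rough sketch; drop_caches only evicts clean page cache, but try it on a test box first):

# after copying the file into /newroot, flush dirty pages and drop the page cache,
# then check what remains once only the compressed copy is left inside zram
sync
echo 3 > /proc/sys/vm/drop_caches
free
cat /sys/block/zram0/mem_used_total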
 
Old 09-21-2016, 03:27 AM   #15
tsenthilnath
LQ Newbie
 
Registered: Aug 2016
Posts: 12

Original Poster
Rep: Reputation: Disabled
Thanks for your reply.

My actual requirement is on an embedded box that keeps all of its binaries in tmpfs and has already hit the limit of its RAM, so I am exploring zram as an alternative to tmpfs that can save memory. But when I prototyped zram on the embedded box I couldn't see much of a saving, so I tried the same thing on a host machine and made a similar observation. The results shown in this thread are from the host machine, which is why it looks so rich in memory. If zram keeps a backing store that consumes the same amount of memory as tmpfs (or more), I don't understand the point of the compression.

Anyway, please try this on your side and let me know a solution/workaround to save memory.

Thanks,
Senthil
 
  

