Linux - Kernel
This forum is for all discussion relating to the Linux kernel.
I am trying to evaluate ZRAM for my project and it is not giving the compression ratio we expected, even though the zram documentation claims a 2:1 ratio. Can someone shed some light on this?
You can see in the steps below that I am copying a 35MB file into the /newroot mount point, which is mounted from the /dev/zram0 device. The memory usage in ZRAM for this file also appears to be 35MB. Since it is a compressed RAM device, I expected the memory usage of this file to be reduced.
Please let me know if I am missing something here.
BTW, I am using kernel version 3.4, FYI.
Thanks,
Senthil
Below are the steps I followed to mount zram and copy the file.
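For reference, since the exact commands did not make it into the post, a typical zram setup on a 3.x-era kernel looks roughly like this. The 100 MB size, ext4 filesystem, and /newroot mount point are my assumptions, not the OP's actual values, and the privileged steps need root plus the zram module:

```shell
# Hypothetical reconstruction of a typical zram setup on a 3.x kernel;
# size, filesystem, and mount point are assumptions, not the OP's commands.
ZRAM_SIZE=$((100 * 1024 * 1024))   # uncompressed device size in bytes
echo "will set disksize to $ZRAM_SIZE bytes"

# Only attempt the privileged steps when running as root with the module available.
if [ "$(id -u)" -eq 0 ] && modprobe zram num_devices=1 2>/dev/null; then
    echo "$ZRAM_SIZE" > /sys/block/zram0/disksize   # must be set before first use
    mkfs.ext4 /dev/zram0
    mkdir -p /newroot
    mount /dev/zram0 /newroot
fi
```

On these older kernels the disksize must be written before the device is touched; resetting it later requires the reset attribute.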
Why are you not using the metrics provided in the sysfs mount, notably "orig_data_size" and "compr_data_size"?
The device/filesystem is not being compressed; the data is.
I tried dumping the metrics provided by sysfs and I can see the difference.
cat compr_data_size
35917181
cat orig_data_size
38658048
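Those two counters already give the effective ratio. A quick awk one-liner using the values posted above (on a live system you would read the numbers from /sys/block/zram0/ rather than hard-coding them):

```shell
# Compression ratio = orig_data_size / compr_data_size.
# Values are the ones posted above, hard-coded for illustration.
orig=38658048
compr=35917181
awk -v o="$orig" -v c="$compr" 'BEGIN { printf "ratio = %.2f:1\n", o / c }'
# prints: ratio = 1.08:1
```

A ratio of 1.08:1 means the data barely compressed at all, which is consistent with copying an already-compressed file.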
But ultimately it should also be reflected in the usage of the mounted partition, correct? Could you please explain why it is not reflected in df, /proc/meminfo, or du?
The reason I am looking at these numbers is this: we are running a system that keeps 1.5 GB of data in a tmpfs (RAM-backed filesystem), and we are running out of RAM. So I am looking at ZRAM as an alternative that would let me store the data in compressed form, which in turn could give a good saving in overall RAM utilization.
The real compression ratio depends heavily on the content of the file (data) itself, so 2:1 is probably a typical average, but it is not valid for each and every file.
Try it also with a text file (you could take something from /var/log, or concatenate several logs into a bigger file). You might also try it with a video or music file.
Your test file is the absolute worst example of compressible data: it is binary and already compressed. You achieved less than 1% compression, which is not unexpected. If this is a representative example of your expected data, there is no point in using zram.
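A cheap way to check whether a file is compressible at all, before blaming zram, is to run it through a general-purpose compressor such as gzip. gzip is not the same algorithm as zram's lzo/lz4, so the numbers are only a rough guide, but already-compressed binary data shows up immediately. The /tmp path and 1 MB size here are arbitrary choices for the demo:

```shell
# Quick compressibility probe: if gzip cannot shrink the data,
# zram's lzo/lz4 will not either.
f=/tmp/compress_probe.bin
head -c 1048576 /dev/urandom > "$f"   # stand-in for already-compressed binary data

orig=$(wc -c < "$f")
comp=$(gzip -c "$f" | wc -c)
echo "original: $orig bytes, gzipped: $comp bytes"
rm -f "$f"
# Random/already-compressed input stays roughly the same size (gzip may
# even add a little header overhead), i.e. effectively no compression.
```

Run the same probe on a sample of the real data you intend to keep in zram; if the gzipped size is close to the original, zram will not help.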
root@sthangar-VirtualBox:~# file file.txt
file.txt: ASCII text
Free memory before copying the file:
root@sthangar-VirtualBox:~# cat /proc/meminfo; free; df .; du -sh .
MemTotal: 2048464 kB
MemFree: 803676 kB
MemAvailable: 1156440 kB
.....
total used free shared buff/cache available
Mem: 2048464 670988 803676 13692 573800 1156440
Swap: 2096124 0 2096124
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/zram0 10190136 23028 9626436 1% /newroot
20K .
Confirming that ZRAM is mounted.
root@sthangar-VirtualBox:/newroot# mount | grep zram
/dev/zram on /newroot type ext4 (rw,relatime,data=ordered)
Copying the 100 MB file.
root@sthangar-VirtualBox:/newroot# cp /file.txt .
Free memory after copying the file:
root@sthangar-VirtualBox:~# cat /proc/meminfo; free; df .; du -sh .
MemTotal: 2048464 kB
MemFree: 702904 kB
MemAvailable: 1155132 kB
....
total used free shared buff/cache available
Mem: 2048464 670948 702888 13692 674628 1155116
Swap: 2096124 0 2096124
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/zram0 10190136 120688 9528776 2% /newroot
96M .
If you look at the memory difference before and after copying the file, the difference is almost 100000000 bytes (no compression took place).
Using random characters you will hardly get any real compression; that is quite similar to binary data. That's why I suggest you use a log file, which compresses very well.
What kind of files do you want to store in zram at all?
Real-life ASCII files will give a good compression ratio, things like log files, as suggested. You have created randomly generated ASCII files, which will NOT give a good compression ratio.
Basically, compression works by finding repeating patterns in a file, or finding more common patterns in a file. But if you randomly generate an ASCII file, there won't be repeating patterns or more common patterns to take advantage of.
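This is easy to demonstrate with gzip standing in for zram's compressor (file names and the 1 MB size are arbitrary choices for the demo):

```shell
# Two 1 MB ASCII files: one random, one highly repetitive.
tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 1048576 > /tmp/random.txt
yes "the quick brown fox jumps over the lazy dog" | head -c 1048576 > /tmp/repeat.txt

for f in /tmp/random.txt /tmp/repeat.txt; do
    printf '%s: %s -> %s bytes gzipped\n' \
        "$f" "$(wc -c < "$f")" "$(gzip -c "$f" | wc -c)"
done
rm -f /tmp/random.txt /tmp/repeat.txt
# The repetitive file shrinks to a few KB; the random ASCII file only
# drops modestly (Huffman coding of the limited alphabet), since there
# are no repeated patterns for the dictionary stage to exploit.
```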
Now look at the difference between the free/used space before and after copying the file.
It is ~12M, not the ~1M that would be expected with LZ4 compression.
Looking at the metrics provided by the zram sysfs mount, it claims the data is compressed to about 1MB:
root@ubuntu:/newroot# cat /sys/block/zram0/compr_data_size
1390771
root@ubuntu:/newroot# cat /sys/block/zram0/orig_data_size
14921728
Essentially what I see is that although zram claims it has compressed the data (per the sysfs metrics), the actual usage/free numbers for the mounted ZRAM partition do not show it. Can someone look into this issue and let me know why, and how I can get around it?
You will not see it on the mounted filesystem; that will always report the uncompressed size. You need to check the space used by the whole /dev/zram0 device to see how well the content was compressed.
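Concretely, the per-device accounting lives in sysfs, not in df/du of the mounted filesystem. A sketch for reading it, assuming a 3.x kernel with the separate orig_data_size/compr_data_size/mem_used_total attributes (newer kernels fold these into a single mm_stat file):

```shell
# Read zram's own accounting from sysfs; attribute names are the 3.x-era
# ones used elsewhere in this thread.
z=/sys/block/zram0
if [ -d "$z" ]; then
    orig=$(cat "$z/orig_data_size")     # bytes stored, before compression
    compr=$(cat "$z/compr_data_size")   # bytes after compression
    used=$(cat "$z/mem_used_total")     # total RAM actually held by the device
    awk -v o="$orig" -v c="$compr" -v u="$used" \
        'BEGIN { printf "orig=%d compr=%d mem_used=%d ratio=%.2f:1\n", o, c, u, o / c }'
else
    echo "no zram0 device on this machine"
fi
```

With the numbers posted above (orig=14921728, compr=1390771) this reports a ratio of about 10.73:1, so the compressor itself is clearly working.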
Okay, if it will not be reflected in the mounted FS, why is it not reflected in cat /proc/meminfo or free? Could you please explain?
I haven't used zram yet, so I don't know precisely what effect it will have, but I have used the somewhat similar tmpfs extensively. My experience is that "free" reports numbers that aren't intuitively accurate, because of the way it assumes buffers/cache are backed by a filesystem. If that gets you confused with tmpfs, it will get you doubly confused with zram. Zram actually has a backing store of sorts, but it lives in RAM, and there is no way to predict beforehand how much that backing store will consume when it is needed.
I notice that the numbers you give show only about 50% of RAM used - including cache and buffers. This means there is no pressure at all to unload any cache or buffers. As such, there is absolutely no reason for zram to unload the uncompressed copy (which is immediately available and accessible) leaving only the compressed copy in memory.
Now, you're used to looking at tmpfs so you presumably already know you can't just add cache/buff to the "free" entry to get the amount of RAM that is really available. On a system without tmpfs, you can add cache/buff to "free" to get how much RAM is truly available. That's because the OS can dump it at any time so long as the backing file system is up to date. But tmpfs is different - there is no backing file system. Thus, any pages consumed by tmpfs will never be dumpable.
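As an aside for anyone on a newer kernel: since 3.14 the kernel exports its own estimate, MemAvailable, which already discounts cache that cannot be dropped (tmpfs pages show up under Shmem). This is not available on the 3.4 kernel in this thread, so treat it as a forward-looking note:

```shell
# MemAvailable (kernels >= 3.14) is the kernel's own estimate of memory
# available for new workloads, accounting for non-reclaimable cache such
# as tmpfs pages (tracked under Shmem).
if grep -q '^MemAvailable:' /proc/meminfo 2>/dev/null; then
    awk '/^MemTotal:|^MemFree:|^MemAvailable:|^Shmem:/ { print $1, $2, $3 }' /proc/meminfo
else
    echo "MemAvailable not exported by this kernel; estimate from free + reclaimable cache"
fi
```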
Zram does have a backing store, even if it's also in RAM. I don't know how to properly calculate how much RAM is truly available, though; I'd have to experiment with it myself to see exactly what it does. For example, it should ideally be pretty lazy about compressing pages until that starts to look necessary. Until RAM gets low, it's better to leave everything in its uncompressed state and NOT create a compressed backing store, since that backing store would consume CPU and RAM that would otherwise be available as buff/cache.
My actual requirement is on an embedded box that keeps all its binaries in tmpfs and has already reached the limits of its RAM, so I am exploring zram as an alternative to tmpfs that can save memory. But when I prototyped zram on the embedded box I couldn't see much of a saving. So I tried the same thing on a host machine and made a similar observation; the results given in this thread are from the host machine, which is why it looks so rich in memory. If zram keeps a backing store and consumes the same memory as tmpfs, or more, I don't understand the need for compression.
Anyway, please try this on your side and let us know a solution/workaround to save memory.