LinuxQuestions.org
Old 10-01-2018, 02:33 PM   #1
slacker13
LQ Newbie
 
Registered: Oct 2018
Distribution: Slackware 13.37
Posts: 4

Rep: Reputation: Disabled
32-bit PAE with SSD: large-file lockup


I have an HP Z420 with 32 GB of RAM and a Samsung 840 EVO 1 TB SSD, running 32-bit Slackware 13.37 (2.6.37.6 kernel) on an ext4 file system with no swap. When I try to pull a large file (~600 GB) from tape to the SSD, I get either an OOM kill or a system lockup. I have no problems pulling the file to a spinning disk.
I can make it work with the SSD if I turn off the PAE kernel config option, but then I can't use my 32 GB of RAM. I have tried upgrading the firmware on the SSDs. I have tried the 2.6.39.4, 3.19.5, and 4.18 kernels, all with basically the same results; I do tend to get more of the file off the tape with the newer kernels, but it still eventually dumps or runs out of memory before it finishes. I have tried the different I/O schedulers (currently noop and deadline). I should also note that this is not isolated to one system: I have more than a dozen systems configured the same way and can reproduce the error on any of them. I have spent weeks reading different posts and trying different options. Any help would be much appreciated.
 
Old 10-02-2018, 10:16 AM   #2
business_kid
LQ Guru
 
Registered: Jan 2006
Location: Ireland
Distribution: Slackware, Slarm64 & Android
Posts: 16,292

Rep: Reputation: 2322
How are you 'pulling' the file? It seems to want to read it all into memory, then write it all to disk. That shouldn't be necessary. I have 6 Gigs of ram here and rsync copied a file of over 40 Gigs today. Of course you're on 32 bit stuff there. PAE wasn't a panacea - you were still limited to 4G for any process; it's just that different processes could access different 4G pages - wasn't that the story?

Have you any way of limiting the amount read to chunks of 3G? That won't overload your 4G memory address space.
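A chunked copy along those lines could be sketched with dd: read one fixed-size piece at a time, and sync between pieces so dirty pages are flushed before the next read begins. This is only an illustration of the idea, not the real restore job; the paths are placeholders, a small scratch file stands in for the actual data, and the chunk size is scaled down (it would be ~3G on the real system):

```shell
# Demo setup: a small scratch file stands in for the real source data.
SRC=/tmp/chunk_src.bin
DST=/tmp/chunk_dst.bin
dd if=/dev/urandom of="$SRC" bs=1024 count=2500 status=none

CHUNK=$((1024 * 1024))            # 1 MiB for the demo; ~3G in practice
size=$(stat -c %s "$SRC")
chunks=$(( (size + CHUNK - 1) / CHUNK ))

: > "$DST"                        # truncate the destination
i=0
while [ "$i" -lt "$chunks" ]; do
  # Copy one chunk at the matching offset, then flush it to disk
  # before reading the next chunk.
  dd if="$SRC" of="$DST" bs="$CHUNK" count=1 skip="$i" seek="$i" \
     conv=notrunc status=none
  sync
  i=$((i + 1))
done
```

The sync between chunks is the point of the exercise: it keeps the amount of dirty page cache bounded at roughly one chunk, instead of letting the whole transfer pile up in RAM.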
 
Old 10-02-2018, 12:48 PM   #3
slacker13
LQ Newbie
 
Registered: Oct 2018
Distribution: Slackware 13.37
Posts: 4

Original Poster
Rep: Reputation: Disabled
Thank you for the help! I am using tar to pull a tarball from the tape: tar -xvf /dev/st0. The way it is currently used is that data files are backed up incrementally each day to a NAS; at the end of the week they go into a single file_date.tar (~600 GB), which is then written to LTO3 tape. I am not that familiar with how tar handles block sizes, but I can look into it. I guess I was more focused on the fact that it works fine going to a spinning disk but not an SSD unless I turn off PAE in the kernel config. I guess I went down a rabbit trail and got too focused. Again, I really appreciate you taking the time to help.
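For what it's worth, one way to decouple the read size at the device from tar itself is to stream the archive through a pipe, letting dd do fixed-size reads while tar just consumes stdin. A rough sketch of the shape of it; a regular file stands in for /dev/st0 here, and all paths are illustrative:

```shell
# Demo setup: build a small archive; a regular file stands in for /dev/st0.
mkdir -p /tmp/tar_demo/src /tmp/tar_demo/out
echo "payload" > /tmp/tar_demo/src/file.txt
tar -cf /tmp/tar_demo/backup.tar -C /tmp/tar_demo/src .

# On the real system this would be something like:
#   dd if=/dev/st0 bs=256k | tar -xf - -C /restore/dir
dd if=/tmp/tar_demo/backup.tar bs=256k status=none \
  | tar -xf - -C /tmp/tar_demo/out
```

With the pipe in place, the dd block size (and, on a real tape, tar's own blocking factor) can be tuned independently of how tar writes the extracted files out.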
 
Old 10-02-2018, 06:47 PM   #4
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,126

Rep: Reputation: 4120
Quote:
Originally Posted by slacker13 View Post
I guess I was more focused on the fact that it works fine going to a spinning mass drive but not an SSD unless I turn off PAE in the kernel config.
This is what I found interesting when I saw your post. Not all disks are treated equally, especially those known to have features the kernel can take advantage of. The multiqueue block device layer was introduced basically because of solid-state drives (an over-simplification).
I'd say the 32-bit kernel has maybe been left behind, especially with a ridiculous amount of RAM like that. The bounce buffers must be getting hammered. Maybe you've exposed a bug, but I suspect nobody will care enough to fix it.
If you have a test system you can try this, which I use when systems clog up due to slow USB devices. Different circumstances, but it will stop RAM being consumed for write buffers.
Code:
sudo su -c "echo 100000000 > /proc/sys/vm/dirty_bytes"
That's 100M (I use 10M), so might go slow-ish, but not so you'd notice.
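For completeness, the background-writeback threshold can be capped the same way, and the ratio-based defaults restored afterwards by writing to the ratio knobs instead (the values below are illustrative, this needs root, and everything reverts on reboot anyway):

```shell
# Cap both the foreground and background dirty thresholds (illustrative values).
sudo su -c "echo 100000000 > /proc/sys/vm/dirty_bytes"
sudo su -c "echo 50000000 > /proc/sys/vm/dirty_background_bytes"

# Writing the *_ratio knobs switches back to ratio-based behaviour
# (20 and 10 are common defaults, but they vary by kernel).
sudo su -c "echo 20 > /proc/sys/vm/dirty_ratio"
sudo su -c "echo 10 > /proc/sys/vm/dirty_background_ratio"
```

The *_bytes and *_ratio knobs are mutually exclusive: whichever pair was written last is the one in effect, and the other reads back as 0.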
 
Old 10-03-2018, 08:28 AM   #5
business_kid
LQ Guru
 
Registered: Jan 2006
Location: Ireland
Distribution: Slackware, Slarm64 & Android
Posts: 16,292

Rep: Reputation: 2322
In addition to syg00's suggestion: I used rsync. I feel you are making a rod for your own back by using a 600 GB tar file in any case, on a 32-bit box that can't see more than 4 GB of RAM at a time. I can understand tar's appeal on tapes, but if you were starting from scratch, would you do it that way? Tapes in the 21st century?
 
Old 10-03-2018, 09:12 AM   #6
slacker13
LQ Newbie
 
Registered: Oct 2018
Distribution: Slackware 13.37
Posts: 4

Original Poster
Rep: Reputation: Disabled
business_kid: You got me thinking about the read and write sizes, so I tried to pull the files off the tape with dd using different block sizes, without much difference in the results. I tried everything from 512 B to 1 G. It does change the speed at which memory is consumed, but it eventually ends with the same results.

syg00: I had read/tried some other posts talking about changing dirty_bytes and the ratios. I tried your suggestion of 100M, and it did change the amount I was able to get off the tape before it died. I also tried different values: 10M, 100M, 1G... pretty much the same results.

I also tried a combination of dirty_bytes at 100M and dd ibs=10K obs=10K, but still the same results. Based on what little I thought I knew about dirty_bytes, I would have expected to see a difference in how the memory was consumed, but I really couldn't see much difference with top... maybe a slight difference in the rate at which it was consumed. I guess I don't really understand the dirty-page settings.
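Since the symptom is dirty pages piling up, one more knob that might be worth a try is having dd open the output with O_DIRECT (oflag=direct), so writes bypass the page cache entirely instead of dirtying it. A sketch with a regular file standing in for the tape; the paths are illustrative, O_DIRECT needs block-aligned transfer sizes, and some filesystems (tmpfs, for one) reject it, hence the buffered fallback:

```shell
# Demo setup: a small scratch file stands in for the tape stream.
SRC=/tmp/direct_src.bin
DST=/tmp/direct_dst.bin
dd if=/dev/urandom of="$SRC" bs=4096 count=256 status=none   # 1 MiB demo input

# Write with O_DIRECT so the output never accumulates in the page cache;
# fall back to a plain buffered copy where the filesystem lacks O_DIRECT.
dd if="$SRC" of="$DST" bs=1M oflag=direct status=none 2>/dev/null \
  || dd if="$SRC" of="$DST" bs=1M status=none
```

On the real restore this would sit at the writing end of the pipeline (the tape read side stays buffered), trading some throughput for a bounded memory footprint.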

Eventually we will move to 64-bit, but I have a huge 32-bit real-time app that has to be ported first... probably a year or more out. I have not gotten far enough yet to see whether a 64-bit system gives me the same problems.
I have different requirements that make tape the simplest solution.

I really appreciate you both taking the time to help me.
 
Old 10-04-2018, 04:21 AM   #7
business_kid
LQ Guru
 
Registered: Jan 2006
Location: Ireland
Distribution: Slackware, Slarm64 & Android
Posts: 16,292

Rep: Reputation: 2322
Once you fix in your head that, for the purposes of reading the tape, you haven't got 32 GB but 4 GB of RAM, you'll get there. I feel the solution is not to store 600 GB tars at all, as you can only stream them.

Could you put in a VM with no PAE in the kernel, allow it 4 GB of RAM, and use that to restore the tars to a shared folder? :-D Given tape speeds, one CPU should do it.
 
Old 11-20-2019, 12:28 PM   #8
slacker13
LQ Newbie
 
Registered: Oct 2018
Distribution: Slackware 13.37
Posts: 4

Original Poster
Rep: Reputation: Disabled
Adding an update just to complete the thread:
32-bit, SSDs, and ~600 GB files just seem to be a bad combo. After migrating to a 64-bit kernel, the problem appears to be resolved.
 
  


