Hello, I am attempting to recover a deleted text file. I used dd to make an image of the sectors on the hard drive which contained the data. Since I am not getting good results with foremost, and I know all of the lines I'm looking for contain "Style", I want to grep the .img, but when I do it runs out of memory. I have tried the grep option -D set to skip, and I tried adding a 3GB swap to account for the 2.7G image. It still "exhausts" its memory, and it now seems to happen very quickly.
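For reference, the commands I'm using look roughly like this (the device name, offsets, and sizes below are made-up placeholders, not my exact values):
Code:
# image the region of the disk that held the file (device and offsets here are examples only)
dd if=/dev/sda2 of=diskcut.img bs=512 skip=1000000 count=5600000

# then search the image; -a treats the binary data as text, -D skip is the option I mentioned
grep -a -D skip -B 6 -A 1 -e "Style" diskcut.img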
This is the output of ulimit -a:
Code:
root: $ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
pending signals (-i) 7167
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 3124
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 7167
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
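In case it matters, this is roughly how I added the swap mentioned above (the file name and location are just an example of the approach, not my exact setup):
Code:
# create a 3GB swap file, format it, and enable it
dd if=/dev/zero of=/extraswap bs=1M count=3072
mkswap /extraswap
swapon /extraswap
swapon -s   # confirm the new swap shows up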
I get the same error with or without using cat. However, I have found the file by grepping the whole partition with a script that dumps it in chunks (every 20000 at a time) and greps each chunk. So my problem is solved, but I'm still curious as to how I can defeat "exhausted memory".
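The script was along these lines (a rough sketch of the chunk-and-grep approach; the device, block size, and chunk size are assumptions, not my exact values):
Code:
#!/bin/sh
# dump the partition 20000 blocks at a time and grep each chunk for "Style"
DEV=/dev/sda2
BS=512
CHUNK=20000
i=0
while dd if="$DEV" of=/tmp/chunk.img bs="$BS" skip=$((i * CHUNK)) count="$CHUNK" 2>/dev/null \
      && [ -s /tmp/chunk.img ]; do
    grep -a -B 6 -A 1 -e "Style" /tmp/chunk.img && echo ">>> match in chunk $i"
    i=$((i + 1))
done

One caveat: a match that straddles a chunk boundary can be missed, so overlapping the chunks slightly would be safer.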
If you are concerned with only text characters, then something like this might work:
Code:
tr '[:cntrl:]' '\n' < diskcut.img | grep -a -B 6 -A 1 -e "Style"
But it's odd. I dd'ed one partition of my hdd to a file (nearly 3.5 GB in size) and ran exactly the same command as yours on that file, but 'memory exhausted' didn't come up. I'm using Debian Lenny, grep 2.5.3.
As for why you don't get the error: are you on a 64-bit machine? I read that 64-bit machines handle large files. 32-bit should too now, but I may be running a kernel which does not have the support built in.
I found out that by booting into Slackware's "huge" kernel, I could grep a ~3G file. So it's my kernel. I've posted to the Software Forum to find out what options I need in my .config. Thank you.
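If anyone wants to chase the same thing: Slackware installs kernel config files under /boot, and a kernel built with CONFIG_IKCONFIG_PROC exposes its config at /proc/config.gz, so comparing the working (huge) config with the failing one should help narrow down the option. This is just a sketch of that comparison; the exact file names in /boot will differ on your system:
Code:
# config of the currently running kernel, if CONFIG_IKCONFIG_PROC is enabled
zcat /proc/config.gz | grep -i highmem

# compare the huge and generic configs shipped in /boot (file names are examples)
diff /boot/config-huge-* /boot/config-generic-* | less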
If anyone's interested, it seems to be a fault with grep. I am grepping a 1.4G file now. If I grep for something that's known to be in the file, it works fine and grep prints all matches to stdout. However, if I grep for some random phrase that is not in the file, grep has trouble when it finds nothing and begins consuming memory: swap starts filling up at about 3M a second until there are hundreds of MB in there, and then grep fails with "memory exhausted". I am reading a Wikipedia page on memory leaks, and it seems to describe what is happening.
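For anyone who wants to watch it happen, something along these lines should reproduce what I'm seeing (the file name is just an example, and the exact behaviour presumably depends on the grep version and kernel):
Code:
# grep a large file for a phrase that is definitely not in it
grep -a "some_phrase_not_in_the_file" diskcut.img

# in a second terminal, watch swap fill up until grep dies with "memory exhausted"
watch -n 1 free -m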