Linux - Newbie
This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the how-to's, this is the place!
I'm still relatively new to all but the more superficial aspects of Linux, so I chose this section for my post.
My problem (hydra is a Fortran 77 program):
[sgvalcke@tarf rundir]$ ./hydra_zmet
Killed
It is not simply a matter of having a huge array somewhere in the program, because I am always able to run hydra a number of times. Then (after I've run it, say, 20 times) I start getting the "Killed" message (it appears immediately and the program is terminated), and I have to reboot ( :/ ) in order to be able to run hydra again.
I'm not sure if the following information is helpful, but I'll post it anyway:
I'm not sure what to think of the large amount of used memory, as the only app I'm running at the moment is Firefox. I've been reading about RAM, swap, etc. on this forum, where the general message is: Linux manages your RAM intelligently; when memory is needed for some app, that app gets all the memory it needs. But why then can't I run my program anymore?
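As an aside (an assumption on my part about what that "used memory" figure means here): much of what the system reports as "used" is typically buffer/cache memory that the kernel can reclaim on demand, so the buffers/cache figures matter more than the raw "used" number. A quick way to look:

```shell
# Show memory usage in megabytes; the buffers/cache portion of "used"
# memory is reclaimed automatically when applications need RAM.
free -m
```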
Some further information:
[sgvalcke@tarf rundir]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 4087
virtual memory (kbytes, -v) unlimited
I would really appreciate any help on this (I must be doing something wrong somewhere); rebooting my PC every now and then just isn't working for me.
Hmmm, I always thought my problem was a Linux "feature", but it may well be the program itself that is causing the problems.
As far as I know, allocation in Fortran was introduced in Fortran 90, so there should not be any explicit allocation done anywhere in the code (it is written entirely in Fortran 77, except for a small piece of C used to swap bytes between little- and big-endian). And even with explicit allocation in Fortran 90, it seems that allocated memory is automatically freed when the program terminates.
I'll google around a bit; if this is indeed a programming issue, I guess I've come to the wrong place with my question (any help solving it or tracing the problem would still be appreciated).
I don't have very much experience with C/C++, and even less with Fortran, but I thought I'd offer an idea as a test:
Load all sorts of stuff into memory first, so that plenty of memory is occupied, then try your program. If it runs FAR fewer times before being 'killed' than it otherwise would, THEN try it starting fresh with absolutely NOTHING else running. If it now runs MANY more times, I'd say it's a fair assumption that excessive memory consumption/lack of free memory is preventing it from running any further.
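One crude way to do the first half of that test, sketched here with a hypothetical file name and size, is to pull a large file through the page cache so that buffers fill up:

```shell
# Create a filler file and read it back, so the page cache/buffers fill up.
# /tmp/filler and the 64 MB size are arbitrary; scale the count up toward
# your RAM size. Remove /tmp/filler afterwards with: rm /tmp/filler
dd if=/dev/zero of=/tmp/filler bs=1M count=64
cat /tmp/filler > /dev/null
```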
If this was helpful even in the very least, I'm happy.
Best of luck!
The "Killed" message implies that the program received a signal (SIGTERM, SIGHUP, SIGKILL, etc.), didn't handle or ignore it, and exited (which is the default reaction). To find out which signal is causing the problem, start by echoing the command's return code:
Code:
./hydra_zmet    # => yields the "Killed" message
echo $?         # run this directly after the "Killed" message
Unfortunately, this signal can come from many places, including you (e.g. if you close the terminal window the program is associated with, or the shell it runs in), the kernel (e.g. if the program tries to access another process's memory), etc. But at least the return code should tell us which signal was sent (it should be > 128 if the program was killed by a signal).
To see if it's really the amount of memory in use that's giving you trouble, try running "top" (or "top -n1" if you want to post its output here on the forum) and check whether swapping is going on all the time. When memory becomes scarce, the system should start swapping (= putting parts of memory on the hard disk), which will significantly decrease overall system performance.
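To see that 128-plus-signal convention in action, here's a minimal sketch with `sleep` standing in for hydra_zmet (the real program isn't needed for the demonstration):

```shell
# Start a throwaway background process, SIGKILL it, and reap it;
# the shell then reports exit status 128 + 9 = 137.
sleep 30 &
pid=$!
kill -9 "$pid"
wait "$pid" 2>/dev/null
echo $?    # prints 137
```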
But frankly, I doubt that memory usage is the problem.
Sorry that it took so long to reply; I had to run a lot of short instances of the program yesterday to get the "Killed" message again. It's finally here now:
137 = 128 + 9, so signal 9 was sent. Signal 9 is SIGKILL, the "hard kill" signal, which cannot be caught or ignored by the program.
However, I have never seen the system actually issue a SIGKILL on its own, so it must either come from your program or from a call to the "kill" utility.
Assuming you're not actually killing the processes yourself, the question becomes: does the program kill itself, e.g. when too many instances of it are already running?
If you were issuing "kill" commands, then the "Killed" messages may come from those (note that they may be shown with a slight delay after you issue the kill command). Or maybe some other user with root access is killing off your processes?
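As a side note, the shell can translate that number back to a name: subtract 128 from the exit status and pass the result to bash's builtin `kill -l` (standalone `kill` binaries may format their output differently):

```shell
# Map an exit status back to a signal name: status - 128 = signal number.
status=137
kill -l $((status - 128))    # bash's builtin prints: KILL
```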
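That said, the kernel does send SIGKILL on its own in one situation: running out of memory (the "OOM killer"), which logs a message to the kernel ring buffer. The exact wording varies by kernel version, so the grep pattern below is only a guess, demonstrated against a made-up sample line rather than a live log:

```shell
# On a real system you would search the kernel log, e.g.:
#   dmesg | grep -i -E 'out of memory|killed process'
# Demonstration against a hypothetical sample line:
sample='Out of Memory: Killed process 1234 (hydra_zmet).'
echo "$sample" | grep -i -E 'out of memory|killed process'
```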
I think that is a safe assumption; my schizophrenia has not been that bad lately.
I just tried out what was suggested earlier in this thread: filling my memory and then trying to run the program. I immediately received "Killed"; apparently, from the moment I have more than 290M of memory in buffers, the program won't run anymore.
That means my problem, which I attributed to Linux, was probably caused by me: I start an instance of the program at, say, 270M in buffers, then I open up two instances of Emacs; when the running hydra_zmet terminates and I want to start another one, I get a "Killed" message because my two newly opened Emacs sessions push my memory usage over the threshold.
I should've known that Linux wasn't to blame. Silly me. And thanks for taking the trouble to look into my problem.
So, instead of swapping memory when it becomes scarce, it just kills off your new processes? I don't think this is normal/expected behaviour. Is your swapper process running? And could you look for some settings on virtual memory?
Quote:
So, instead of swapping memory when it becomes scarce, it just kills off your new processes? I don't think this is normal/expected behaviour.
If you put it that way, it does seem a little drastic.
Quote:
Is your swapper process running?
This is where my Linux newbieness kicks in, I'm afraid. The only reference to swap I found with ps aux is:
Code:
root 49 0.0 0.0 0 0 ? SW Mar20 0:04 [kswapd0]
Quote:
And could you look for some settings on virtual memory?
I could if I knew where to look. I don't know if this is helpful:
Code:
[sgvalcke@tarf sgvalcke]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 4087
virtual memory (kbytes, -v) unlimited
Or this? (I've started about 18 instances of Emacs; ./hydra_zmet gives "Killed" now.):
Code:
[sgvalcke@tarf sgvalcke]$ vmstat -a -S M
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free inact active si so bi bo in cs us sy id wa
2 0 7 8 47 402 0 0 15 12 124 100 51 2 47 0
That seems to suggest that swapping is in use. Do you have a separate swap partition? And if so, how big is it? If you don't, then how much free space is left on your hard disk?
7 MB has been swapped out, but no swapping activity (si & so) is going on. I'm no expert in this, but 7 MB of swap in use does seem very little to me.
There seems to be 1G of swap available. Starting more apps causes the swap used to go up, so there must be something preventing that from happening when hydra_zmet tries to run.
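To pin down the swap question directly, the active swap areas and their sizes can be read straight from /proc (this assumes a standard Linux /proc filesystem):

```shell
# List active swap partitions/files with total and used sizes (in KB).
cat /proc/swaps
```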