Old 03-20-2007, 06:15 AM   #1
Adversus
LQ Newbie
 
Registered: Mar 2007
Location: Belgium
Distribution: Fedora Core 2
Posts: 13

Rep: Reputation: 0
"Killed" message


Hi,

I'm still relatively new to all but the more superficial aspects of Linux, so I chose this section for my post.

My problem (hydra is a Fortran 77 program):

[sgvalcke@tarf rundir]$ ./hydra_zmet
Killed

It is not simply a matter of having a huge array somewhere in the program, because I am always able to run hydra a number of times. Then (after I've run it, say, 20 times) I start getting the "Killed" message (it appears immediately and the program is terminated), and I have to reboot ( :/ ) in order to be able to run hydra again.

I'm not sure if the following information is helpful here, but I'll post it anyway:

Code:
[sgvalcke@tarf rundir]$ free -m
             total       used       free     shared    buffers     cached
Mem:           497        437         60          0          6        185
-/+ buffers/cache:        245        252
Swap:         1023        167        856

I'm not sure what to think of the large amount of used memory, as the only app I'm running at the moment is Firefox. I've been reading about RAM, swap, ... on this forum, where the general message is: Linux fills your RAM in an intelligent manner, and when some app needs that memory, the app gets all the memory it needs. But why then can't I run my program anymore?

Some further information:

Code:
[sgvalcke@tarf rundir]$ ulimit -a
core file size        (blocks, -c) 0
data seg size         (kbytes, -d) unlimited
file size             (blocks, -f) unlimited
max locked memory     (kbytes, -l) 32
max memory size       (kbytes, -m) unlimited
open files                    (-n) 1024
pipe size          (512 bytes, -p) 8
stack size            (kbytes, -s) 10240
cpu time             (seconds, -t) unlimited
max user processes            (-u) 4087
virtual memory        (kbytes, -v) unlimited


I would really appreciate any help on this (I must be doing something wrong somewhere); rebooting my PC every now and then just isn't working for me.

Cheers,

Adversus
 
Old 03-20-2007, 06:24 AM   #2
hacker supreme
Member
 
Registered: Oct 2006
Location: As far away from my username as possible
Distribution: Gentoo
Posts: 259
Blog Entries: 1

Rep: Reputation: 31
Maybe it has a memory leak?
Does it allocate memory anywhere?
If it does, you may want to check that it frees that memory before exiting.

I'm not sure if this is the case with Fortran because I only have C and C++ experience.
 
Old 03-20-2007, 08:08 AM   #3
Adversus
LQ Newbie
 
Registered: Mar 2007
Location: Belgium
Distribution: Fedora Core 2
Posts: 13

Original Poster
Rep: Reputation: 0
Hmmm, I always thought my problem was a Linux "feature", but it could be that the program itself is causing the problems.

As far as I know, allocation in Fortran was introduced in Fortran 90, so there should not be any explicit allocation done anywhere in the code (which is entirely written in Fortran 77, except for a small piece of C used to swap bytes between little and big endian). And even for explicit allocation in Fortran 90, it seems that allocated memory is automatically freed when the program terminates.

I'll google around a bit; if this is indeed a programming issue, I guess I've come to the wrong place with my question (any help solving it or tracing the problem would still be appreciated).

Last edited by Adversus; 03-20-2007 at 08:12 AM.
 
Old 03-20-2007, 08:28 AM   #4
GrapefruiTgirl
LQ Guru
 
Registered: Dec 2006
Location: underground
Distribution: Slackware64
Posts: 7,594

Rep: Reputation: 556
I don't have much experience with C/C++, and even less with Fortran, but I thought I'd offer an idea as a test:
Load up all sorts of stuff into memory first, so that plenty of memory is already occupied, then try your program. If it runs FAR fewer times before being 'killed' than it otherwise would, THEN try it starting fresh with absolutely NOTHING else running. If it now runs MANY more times, it would be a fair assumption that excessive memory consumption/lack of free memory is preventing it from running any further.
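One crude way to tie up a chunk of memory for such a test, assuming /dev/shm is mounted as tmpfs (it is by default on Fedora; the file name below is arbitrary), would be a sketch like this:
Code:
dd if=/dev/zero of=/dev/shm/fill bs=1M count=200   # ties up roughly 200 MB of RAM in tmpfs
./hydra_zmet                                       # try the program under memory pressure
rm /dev/shm/fill                                   # give the memory back afterwards
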
If this was helpful even in the slightest, I'm happy.
Best of luck!
 
Old 03-20-2007, 09:11 AM   #5
timmeke
Senior Member
 
Registered: Nov 2005
Location: Belgium
Distribution: Red Hat, Fedora
Posts: 1,515

Rep: Reputation: 61
"Killed" message implies that the program gets a signal (SIGTERM, SIGHUP, SIGKILL, etc), doesn't handle/ignore it, and exits (which is the default reaction). To find out which signal is causing the problem, start off by echo'ing the command's return code:
Code:
./hydra_zmet #=> yields "killed" message
echo $?      #run this directly after the "killed" message
Unfortunately, this signal can come from many places, including you (e.g. if you close the terminal window the program is associated with, or the shell it runs in), the kernel (e.g. if the program tries to access another process's memory), etc. But at least the return code should tell us which signal was sent (the return code should be > 128, i.e. 128 plus the signal number).
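If you'd rather see the signal name than the raw number, bash's built-in kill can translate it; something like this (just capture the return code before running anything else):
Code:
./hydra_zmet              # => "Killed"
rc=$?                     # save the return code immediately
kill -l $(( rc - 128 ))   # bash builtin: prints the signal name, e.g. "KILL" for 137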

To see if it's really the amount of memory in use that's giving you trouble, try running "top" (or "top -n1" if you want to post its output here on the forum) and check whether a swapping process is running all the time. When memory becomes scarce, the system should start swapping (i.e. putting parts of memory on the hard disk), which will significantly decrease overall system performance.
But frankly, I doubt that memory usage is the problem.
 
Old 03-22-2007, 03:57 AM   #6
Adversus
LQ Newbie
 
Registered: Mar 2007
Location: Belgium
Distribution: Fedora Core 2
Posts: 13

Original Poster
Rep: Reputation: 0
Sorry that it took so long to reply; I had to run a lot of short instances of the program yesterday to get the "Killed" message again, but it's finally here now.

Doing the echo thing gives 137 as a result.
 
Old 03-22-2007, 04:08 AM   #7
timmeke
Senior Member
 
Registered: Nov 2005
Location: Belgium
Distribution: Red Hat, Fedora
Posts: 1,515

Rep: Reputation: 61
137 = 128 + 9, so signal 9 was sent. Signal 9 is SIGKILL, the hard kill signal, which cannot be caught or ignored by the program.

However, I have never seen the system actually issue a SIGKILL on its own, so it must either come from your program or from a call to the "kill" utility.

Assuming you're not actually killing them, the question becomes: does the program kill itself, e.g. when too many instances of the program are already running?

If you were issuing "kill" commands, then the "Killed" messages may come from that (please note that they may be shown with a slight delay after you issue the kill command). Or maybe some other user with root access is killing off your processes?
 
Old 03-22-2007, 04:56 AM   #8
Adversus
LQ Newbie
 
Registered: Mar 2007
Location: Belgium
Distribution: Fedora Core 2
Posts: 13

Original Poster
Rep: Reputation: 0
Quote:
Assuming you're not actually killing them
I think that is a safe assumption; my schizophrenia has not been that bad lately.

I just tried out what was suggested earlier in this thread: filling my memory and then trying to run. I immediately received "Killed". Apparently, from the moment I have more than 290M of memory in buffers, the program won't run anymore.

That means my problem, which I attributed to Linux, was probably caused by me: I start an instance of the program at, say, 270M in buffers, then open up two instances of Emacs; when the running hydra_zmet terminates and I want to start up another one, I get a "Killed" message because the two newly opened instances of Emacs push my memory usage over the threshold.

I should've known that Linux wasn't to blame, silly me. And thanks for taking the trouble to look into my problem.

Last edited by Adversus; 03-22-2007 at 04:57 AM.
 
Old 03-22-2007, 05:22 AM   #9
timmeke
Senior Member
 
Registered: Nov 2005
Location: Belgium
Distribution: Red Hat, Fedora
Posts: 1,515

Rep: Reputation: 61
So, instead of swapping memory when it becomes scarce, it just kills off your new processes?
I don't think this is normal/expected behaviour. Is your swapper process running? And could you look for some settings on virtual memory?
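A rough sketch of what I'd look at, assuming a 2.6 kernel (which Fedora Core 2 ships); the overcommit settings live under /proc/sys/vm:
Code:
cat /proc/sys/vm/overcommit_memory   # 0 = heuristic overcommit (default), 1 = always overcommit, 2 = strict
cat /proc/sys/vm/overcommit_ratio    # only relevant in mode 2
cat /proc/meminfo                    # overall memory/swap picture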
 
Old 03-22-2007, 05:43 AM   #10
Adversus
LQ Newbie
 
Registered: Mar 2007
Location: Belgium
Distribution: Fedora Core 2
Posts: 13

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by timmeke
So, instead of swapping memory when it becomes scarce, it just kills off your new processes?
I don't think this is normal/expected behaviour.
If you put it that way, it does seem a little drastic.

Quote:
Is your swapper process running?
This is where my Linux-newbieness kicks in, I'm afraid. The only reference to swap I found with ps aux is:

Code:
root        49  0.0  0.0     0    0 ?        SW   Mar20   0:04 [kswapd0]
Quote:
And could you look for some settings on virtual memory?
I could if I knew where to look. I don't know if this is helpful:

Code:
[sgvalcke@tarf sgvalcke]$ ulimit -a
core file size        (blocks, -c) 0
data seg size         (kbytes, -d) unlimited
file size             (blocks, -f) unlimited
max locked memory     (kbytes, -l) 32
max memory size       (kbytes, -m) unlimited
open files                    (-n) 1024
pipe size          (512 bytes, -p) 8
stack size            (kbytes, -s) 10240
cpu time             (seconds, -t) unlimited
max user processes            (-u) 4087
virtual memory        (kbytes, -v) unlimited
Or this? (I've started about 18 instances of Emacs; ./hydra_zmet gives "Killed" now.):

Code:
[sgvalcke@tarf sgvalcke]$ vmstat -a -S M
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free  inact active   si   so    bi    bo   in    cs us sy id wa
 2  0      7      8     47    402    0    0    15    12  124   100 51  2 47  0
 
Old 03-22-2007, 05:58 AM   #11
timmeke
Senior Member
 
Registered: Nov 2005
Location: Belgium
Distribution: Red Hat, Fedora
Posts: 1,515

Rep: Reputation: 61
Quote:
max memory size (kbytes, -m) unlimited
seems to suggest that swapping is used. Do you have a separate swap partition? And if so, how big is it? If you don't, then how much free space is left on your hard disk?

7 MB has been swapped out, but no swapping (si & so) is going on at the moment. I'm no expert in this, but only 7 MB of swap in use does seem too little to me.
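A quick way to check both of those, in case it helps:
Code:
swapon -s   # lists the swap partitions/files the kernel is actually using (same info as /proc/swaps)
df -h       # free space on the mounted filesystems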
 
Old 03-22-2007, 07:27 AM   #12
Adversus
LQ Newbie
 
Registered: Mar 2007
Location: Belgium
Distribution: Fedora Core 2
Posts: 13

Original Poster
Rep: Reputation: 0
Code:
[sgvalcke@tarf sgvalcke]$ df -a -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda2             112G   54G   52G  51% /
none                     0     0     0   -  /proc
none                     0     0     0   -  /sys
none                     0     0     0   -  /dev/pts
usbfs                    0     0     0   -  /proc/bus/usb
/dev/hda1              97M   35M   58M  38% /boot
none                  249M     0  249M   0% /dev/shm
none                     0     0     0   -  /proc/sys/fs/binfmt_misc
sunrpc                   0     0     0   -  /var/lib/nfs/rpc_pipefs
It doesn't seem like I have a separate swap partition...

Code:
[sgvalcke@tarf sgvalcke]$ free -m
             total       used       free     shared    buffers     cached
Mem:           497        492          5          0          8         71
-/+ buffers/cache:        412         85
Swap:         1023         27        996
There seems to be 1G of swap memory available. Starting more apps causes the used swap to go up, so there must be something preventing that from happening when I try to run hydra_zmet.

Last edited by Adversus; 03-22-2007 at 07:40 AM.
 
Old 03-22-2007, 10:08 AM   #13
Emerson
LQ Sage
 
Registered: Nov 2004
Location: Saint Amant, Acadiana
Distribution: Gentoo ~amd64
Posts: 7,661

Rep: Reputation: Disabled
The Linux kernel kills processes on its own when resources become low. And not all types of memory usage are swappable.
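This is the kernel's out-of-memory (OOM) killer at work. It usually leaves a message in the kernel log, so a rough way to confirm it (the exact wording varies between kernel versions):
Code:
dmesg | grep -i 'out of memory'
grep -i 'out of memory' /var/log/messages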
 
Old 03-22-2007, 11:17 AM   #14
Adversus
LQ Newbie
 
Registered: Mar 2007
Location: Belgium
Distribution: Fedora Core 2
Posts: 13

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by Emerson
The Linux kernel kills processes on its own when resources become low. And not all types of memory usage are swappable.
That settles it then, I guess.

When I get "Killed" again, I just have to close a few instances of Emacs and I should be fine.
 
Old 03-23-2007, 11:58 AM   #15
wpn146
Member
 
Registered: Jan 2005
Distribution: Solaris, Linux Fedora Core 6
Posts: 170

Rep: Reputation: 30
Is "hydra" an old compiled binary using 16 bit opcodes? If so, you may need to recompile it using a 32 bit compiler.
 
  

