Linux - Software
This forum is for Software issues. Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
A process from some software I am running keeps crashing with seemingly no real pattern. I have tried using ddd/gdb to run the process in question, but every time it crashes no useful information is returned. I also tried getting a core file, with the same result. As far as Linux is concerned, the program appears to have exited normally.
This obviously points towards the process itself having a bug, but there are other instances of the same program running on other machines in the network with no problems at all.
I have compared the hardware and drivers (lspci etc.) on the various machines, and they are all exactly the same as the machine in question. So my question is (at long last): what else should I be looking for?
Try issuing the command `ulimit -c`. Does it report 0? That means processes won't dump core. If so, try `ulimit -c unlimited`.
But to make sure the process actually picks up the new ulimit value:
Edit your $HOME/.bash_profile (or the equivalent shell script that runs on login) and add the line `ulimit -c unlimited`. Then log out, log back in, and start the process. By default you should then see a file called 'core' in the working directory of the binary if the process happens to core dump.
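A minimal sketch of those steps, assuming a bash login shell and the kernel's default core_pattern (a plain file named `core` in the working directory):

```shell
# Check the current core-file size limit; 0 means no core file is written.
ulimit -c

# Lift the limit for this shell and every process started from it.
ulimit -c unlimited

# To make it permanent, add the same line to your login script, e.g.:
#   echo 'ulimit -c unlimited' >> "$HOME/.bash_profile"

# Quick sanity check: crash a throwaway process and look for the core file.
sh -c 'ulimit -c unlimited; kill -SEGV $$'
ls -l core* 2>/dev/null
```

Note that child processes inherit the limit from the shell that starts them, which is why setting it in .bash_profile before launching the program works.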
Thanks for your response. Unfortunately I am not in a position to test your suggestion at the moment, but I do have another question on the subject in the meantime.
When I was trying to get a core dump I ran another instance of the problem process from the command line (rather than the GUI), but before I did this I ran:
'limit coredumpsize unlimited' and then ran the program in the same terminal. My question is: what is the difference between limit and ulimit? I take it that by adding `ulimit -c unlimited` to .bash_profile it will apply to all processes I run? Would this not put a strain on my memory?
I have tried to find out myself but can't really find any decent answers.
Regards
Jim
Last edited by hotspur919; 05-17-2011 at 10:51 AM.
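To answer the `limit` vs `ulimit` question: `limit` is the csh/tcsh builtin and `ulimit` is the sh/bash builtin; both set the same per-process kernel resource limits (see setrlimit(2)), so `limit coredumpsize unlimited` and `ulimit -c unlimited` do the same thing in their respective shells. The core-size limit only caps the file the kernel writes when a process crashes; it reserves no memory, so raising it costs nothing until a core dump actually happens. A quick comparison (the tcsh line assumes tcsh is installed):

```shell
# Bourne/bash family: ulimit is the builtin.
bash -c 'ulimit -c unlimited; ulimit -c'

# C-shell family: limit is the equivalent builtin.
#   tcsh -c 'limit coredumpsize unlimited; limit coredumpsize'
```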