07-08-2008, 12:01 PM   #1
jose_y, LQ Newbie (Registered: Jul 2008, Posts: 1)

core file size problem


Hi, I am new to this forum, so I don't know exactly where to file my problem. If this is not the right place, I would be glad if you could redirect me to the right one.


****************************************
Problem Description
****************************************
I am programming on an embedded Linux system. My problem is that the core file created by a crashing process comes out corrupted, because the process's memory size is bigger than the space available in the directory where the core file is written, as can be seen below:

Swap: 0k av, 0k used, 0k free 152420k cached

PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND
540 root 9 0 233M 99M 2936 S 0.1 9.5 0:00 0 syslogd
(crashed process info)


and the filesystem allocation of the core-file directory is:
admin_0 12 $ df
Filesystem 1k-blocks Used Available Use% Mounted on
none 215040 46804 168236 22% /core

During the crash the filesystem usage is:
Filesystem 1k-blocks Used Available Use% Mounted on
none 215040 215040 0 100% /core
(the directory is full)


So from the above it can be concluded that the core file must be bigger than the free space in the directory, and a corrupted (truncated) file is created.
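One partial workaround I considered: capping the core size with ulimit, so the kernel stops writing at a known limit instead of filling the filesystem. I understand a truncated core may lose the stack segments and gdb's backtrace could still fail, so this only bounds the damage. A sketch (in bash the -c value is in 1024-byte blocks):

# cap core files at ~100 MB for processes started from this shell
ulimit -c $((100 * 1024))
# or disable cores entirely while the space problem is unsolved
ulimit -c 0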

I have no more disk memory so increasing the directory size is not an option.I was thinking in dividing the core file in some files and zipped via " tar -cvzf" in order to prevent the above problem. The zipped files are much smaller ~ 0.5M . I think there is a lot of junk information in those files. Anyone knows a way to create a compacted core file in which I will be able to run at least the "backtrace" in gdb?
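I also read that kernels from 2.6.19 on can pipe the dump straight to a user-space program through /proc/sys/kernel/core_pattern (a leading '|' selects pipe mode, and the program receives the core image on stdin), so the full uncompressed core never has to fit on disk. A sketch of what I have in mind, assuming our embedded kernel is new enough and has gzip; the helper path and name are made up for the example:

#!/bin/sh
# /usr/local/bin/core-gzip.sh -- hypothetical helper; the kernel
# pipes the core image to this script on stdin.
# $1 = executable name (%e), $2 = PID (%p)
exec gzip -c > "/core/core.$1.$2.gz"

and then, as root:

# route core dumps through the helper
echo '|/usr/local/bin/core-gzip.sh %e %p' > /proc/sys/kernel/core_pattern

Later, on a machine with more space, I could unpack the core and load it as usual (binary name and PID here are placeholders):

gunzip core.myprog.540.gz
gdb /path/to/myprog core.myprog.540
(gdb) bt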

Is anyone familiar with this situation? What do you think of my solution? Would it be hard to implement? Has anyone worked with the code that creates core files?

I would be glad for any suggestions to solve my problem.

Thanks a lot,
Jose
 
07-08-2008, 05:21 PM   #2
Mr. C., Senior Member (Registered: Jun 2008, Posts: 2,529)
No core compression that I am aware of. The core is created in the kernel, so you'd have to modify the dump core routine.

There was some discussion on the kernel mailing list about modifying the core dumper routine; search for "coredump: core dump".
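If you want to look at that code, in a 2.6 tree the entry point is do_coredump() in fs/exec.c (if I remember right), which hands off to the binary format's writer, e.g. elf_core_dump() in fs/binfmt_elf.c. A quick way to locate it in your kernel source:

# from the top of the kernel source tree (2.6-era layout assumed)
grep -n do_coredump fs/exec.c
grep -n elf_core_dump fs/binfmt_elf.c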
 
  

