Restricting sensitive information in the core dumps
Hi,
I am trying to find a way to remove sensitive information from core dumps.
I am aware that the best way to avoid this problem is to disable core dumps on the system.
Other than that, are there any options that are system-wide (not specific to an individual process) for removing sensitive information from core dumps?
Any help or suggestions in this direction are appreciated.
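For completeness, the "disable core dumps" option mentioned above can be sketched like this (the limits.conf and sysctl lines are the usual distro-default mechanisms; adjust for your system):

```shell
# Disable core dumps for the current shell session and its children:
ulimit -c 0
ulimit -c          # prints 0 - no core file will be written on a crash

# Assumed system-wide equivalents (standard on most distros):
#   /etc/security/limits.conf:   * hard core 0
#   sysctl fs.suid_dumpable=0    # never dump setuid/privileged processes
```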
I think that to use such a solution we would have to know all the sensitive fields, and it would have to be a continuous process to update it for any newly added field.
Is there something we can do based on the type of the data, e.g. so that all private variables are left out of the core dump?
Thanks & regards,
Azeem

Quote: "I think that to use such a solution we would have to know all the sensitive fields..."
I'm not really sure I understand that. How do you know about "all sensitive fields"? How do you know about a "newly added field"? What do you mean by "variable" at all? Do you know how the data handled by a (any) process is stored in memory at all? And what about shared memory, caches, and other things (databases, remote drives, ...)?
I was thinking of users, passwords, email addresses - externally accessible information. If what you want to restrict is private variables that are only visible, I presume, in your source code, my advice is to restrict circulation of the core dump itself. My answer related purely to GPL or soon-to-be-GPL software where the source will be available. Do I gather that's not your use case?
If you're talking about a GCC core file, it is a binary file.
Perhaps there is a structure that is explained in the GCC utilities or source. Perhaps even more globally for debuggers in general, because I'm betting that the GCC utilities would have settled on a standard output format for compiled-code debugging, if anything like that exists.
I suspect what you're talking about is sensitive data that a program can access. For instance: say a database program crashed and generated a core file; there may then be copies of record data in the core file which will be viewable when someone debugs it. Whether the person debugging is entitled to view this data would be one question. The other would be whether any data of that nature could be extracted by someone possessing the core file.
I feel you may be able to vet this type of data out of a core file, but you would have to re-write the process which creates the core file, and either (A) take an existing core file, decode it, remove the data, and re-write it, or (B) alter the generation of core files by modifying the libraries on the system producing them.
Seems like option A would be the way to go here, but it is clearly a lot of work.
The obvious point is that any core file containing potentially sensitive information of any type should be afforded the same level of protection as the sensitive information itself. Why? Because with the correct support files it is very easy to analyze a core file, and thus view any copies of data, sensitive or not, contained within it.
And, No. I'm not aware of any existing capability to do this.
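As a hedged sketch of that advice, the dump location itself can be locked down via kernel.core_pattern (/var/crash-restricted is an illustrative name, not a standard path; both commands need root):

```shell
# Collect every core dump in a directory only root can enter, so the dumps
# inherit at least the same protection as the sensitive data inside them.
install -d -m 0700 -o root -g root /var/crash-restricted
sysctl -w kernel.core_pattern='/var/crash-restricted/core.%e.%p.%t'
```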
Quote: "I'm not really sure I understand that. How do you know about 'all sensitive fields', how do you know about a 'newly added field'? [...]"
By sensitive fields I mean things like email addresses, phone numbers, etc. - fields I know are sensitive in my application and have declared as private.
If my understanding is correct, the suggestion is that after the core dump is produced I use commands like sed to purge the values of these sensitive variables.
Whenever a new sensitive field is introduced in the application, I would need to update the sed command again.
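A minimal sketch of that sed approach, assuming a made-up secret value and core file name. Note the replacement must be exactly the same length as the original, because a core file is a binary whose ELF headers record absolute offsets, so the file size must not change:

```shell
secret='alice@example.com'                  # hypothetical sensitive value
mask=$(printf 'X%.0s' $(seq ${#secret}))    # same-length run of X characters
sed -i "s/${secret}/${mask}/g" core.1234    # core.1234 is a made-up name
strings core.1234 | grep -c "$secret"       # 0 if the purge worked
```

As noted above, this only scales if every sensitive value is known in advance, and the pattern list has to be maintained as fields are added.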
*************************************************************************************
Controlling which mappings are written to the core dump
Since kernel 2.6.23, the Linux-specific /proc/[pid]/coredump_filter
file can be used to control which memory segments are written to the
core dump file in the event that a core dump is performed for the
process with the corresponding process ID.
The value in the file is a bit mask of memory mapping types (see
mmap(2)). If a bit is set in the mask, then memory mappings of the
corresponding type are dumped; otherwise they are not dumped. The
bits in this file have the following meanings:
bit 0 Dump anonymous private mappings.
bit 1 Dump anonymous shared mappings.
bit 2 Dump file-backed private mappings.
bit 3 Dump file-backed shared mappings.
bit 4 (since Linux 2.6.24) Dump ELF headers.
bit 5 (since Linux 2.6.28) Dump private huge pages.
bit 6 (since Linux 2.6.28) Dump shared huge pages.
bit 7 (since Linux 4.4) Dump private DAX pages.
bit 8 (since Linux 4.4) Dump shared DAX pages.
*************************************************************************************
I am not able to understand these mappings correctly, so I wanted to ask for an opinion: is this something I can use for the need described in the subject?
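Applied to the question in this thread, a hedged sketch: the heap and stack live in anonymous private mappings (bit 0), so that bit is where most in-memory secrets sit. The default mask is 0x33 (bits 0, 1, 4, 5); clearing bits 0 and 1 while keeping file-backed mappings and ELF headers (bits 2, 3, 4) gives 0x1c. Be aware this also drops the stack, so backtraces from such dumps are much less useful:

```shell
# The mask is per-process but inherited by children, so setting it in a
# service's start-up shell covers the whole process tree:
echo 0x1c > /proc/self/coredump_filter
cat /proc/self/coredump_filter    # reads the mask back in hex
```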
Quote: "If I know in my application these are sensitive fields and I choose type as private."
Is the scope of this entirely for your own applications?
Just fix the crash bugs, and you won't have to worry about the core dumps.
If the use case here is what I think it is, on the other hand (you have a third-party program that's crashing, and you want to scrub sensitive information from the core dumps before sending them to the vendor), then honestly, I'd ask the vendor.