Programming
This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
Hi All,
Hopefully this is the right forum for my question. In my application I have an mmap'd virtual address to a reserved portion of memory that was carved out using the mem= kernel boot parameter in /etc/grub.conf. Everything works fine at run time; however, when I step through the code with gdb and try to examine the address, the system hangs.
Running Red Hat:
"Linux version 2.6.9-prep (root@mymachine) (gcc version 3.4.3) #1 SMP"
The system hangs as far as user-mode activity goes: I cannot telnet in, and the console window does not respond. I can, however, ping the machine, so there is some activity.
In the application I reserve memory that is not managed by the OS and will not be swapped out. These are DMA buffers that must be easily accessible and contiguous. The memory space is managed by a driver and a set of application routines. This has worked fine for quite a while; I only noticed this irritating phenomenon last week while stepping through the code and trying to examine the buffer address returned from mmap() and its contents.
It's not a show stopper, however it's irritating to have a user app (gdb) be able to bring the machine to its knees.
That's weird. As far as I understand it that reserved memory and the space returned by the mmap call should have nothing to do with each other.
The driver may copy stuff from the DMA buffers into the mmap area and keep it synced, but it is not the same physical memory. Hmm. Unless the driver is the "other" process with which it is "MAP_SHARED"-ing the memory.
Either way, you are right, a user-level process should never be able to kill a machine. If it can, it's a kernel-level bug, probably in this weird driver you are using.
I see in linux 2.6... there is a dma_ API which is possibly the way to go. Look in /usr/src/linux/Documentation/DMA*
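For reference, the 2.6 dma_ API mentioned above looks roughly like this inside a driver. A hedged sketch only, assuming a struct device *dev and a made-up BUF_SIZE; it only builds as part of a kernel module, and /usr/src/linux/Documentation/DMA-API.txt has the real details:

```c
/* Sketch of the 2.6 dma_ API: allocate a coherent DMA buffer the
 * supported way instead of carving memory out with mem=. */
#include <linux/dma-mapping.h>

dma_addr_t bus_addr;   /* address the device DMAs to */
void *cpu_addr;        /* kernel virtual address     */

cpu_addr = dma_alloc_coherent(dev, BUF_SIZE, &bus_addr, GFP_KERNEL);
if (!cpu_addr)
        return -ENOMEM;
/* ... program the device with bus_addr, touch the buffer via cpu_addr ... */
dma_free_coherent(dev, BUF_SIZE, cpu_addr, bus_addr);
```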
The driver is there mainly for the mmap call. It is shared memory, but my app is the only one using it at this stage. The buffers are used by another driver that handles the DMA, but I'm not using that either. Since it works fine outside the debugger, I'll have to chase down what gdb is doing that's funky.
Just wondering if anyone else had seen this anomaly.
I still doubt that it is any problem with the kernel / gdb / mmap. That stuff all just works, and works fine together.
Therefore it is either your hardware or the "interesting" driver you have. Do you have the source for it? It sounds like something is causing it to lock up in kernel mode. Probably a race condition, and the presence of gdb in the mix is just triggering the race.
Here's the mmap routine. Pretty boilerplate. I agree, the mmap works and has been working. I recall having this problem a long time ago with memory-mapped I/O, but then the system would panic. Maybe I'll try just mapping to a memory region within the OS and see if I get the same results.
static int mmap(struct file *fp, struct vm_area_struct *vma)
{
    unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
    unsigned long length = vma->vm_end - vma->vm_start;
    int minor, exten;
    /* ... rest of the routine truncated in the original post ... */
Grepping around in the kernel source on my box, I couldn't find remap_page_range at all.
So, googling about on the subject, I find it seems to be a deprecated API which was replaced by io_remap_page_range. Digging in the source, I find this is just a #define around remap_pfn_range.
I.e., I'm wondering if the boilerplate you are using comes from an older version of the kernel, and the kernel has perhaps moved on to other things, possibly for the very reason you are encountering.
The kernel source I'm using is a bit non-standard, 2.6.12-rc3-mm2, so your mileage may vary.
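To make the API change concrete, a current-style mmap handler built on remap_pfn_range looks roughly like this. A sketch only, not the poster's actual driver code: RESERVED_PHYS is a made-up placeholder for the physical base of the region carved out with mem=, and my_mmap is an invented name:

```c
/* Sketch: mmap file operation using remap_pfn_range, the replacement
 * for the older remap_page_range. */
static int my_mmap(struct file *fp, struct vm_area_struct *vma)
{
    unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
    unsigned long length = vma->vm_end - vma->vm_start;

    /* Map the reserved physical pages into the caller's address space. */
    if (remap_pfn_range(vma, vma->vm_start,
                        (RESERVED_PHYS + offset) >> PAGE_SHIFT,
                        length, vma->vm_page_prot))
        return -EAGAIN;
    return 0;
}
```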
There are some differences, but they appear to be pre-2.6.9 and it looks like they're already in there. Maybe I'll step back and look at what exactly gdb is doing.