10-17-2010, 07:52 PM   #1
dkwan
Unable to correctly mmap DMA buffer in user space.


I have written a driver for a PCI Express card that we also developed. The driver allocates a 16 kB DMA buffer using pci_alloc_consistent(), which the user-space application will mmap().

The problem is that the virtual address returned by mmap() in user space does not seem to access the memory buffer. I test it by writing a known value to the user virtual address, then making an ioctl() call to the driver which reads the same location from the DMA buffer. This ioctl() call always reads 0, so the write never shows up in the DMA buffer.

Is this possibly a memory caching or memory alignment problem?

The user-space application also mmap()s device registers via PCI Base Address 0, and that works. It is only the DMA buffer mmap() that has problems.

I am seeing this problem on two different x86_64 RHEL 4.7 hosts running kernel 2.6.9-89 (I haven't tried other versions of RHEL4). In the driver mmap() function, my VMA flags and protections look like this:

vma->vm_flags |= VM_RESERVED;
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

/* Check if in I/O memory. */
if ((paddr >= virt_to_bus(high_memory)) || (filp->f_flags & O_SYNC)) {
        vma->vm_flags |= VM_IO;
}

io_remap_page_range(vma, vma->vm_start, paddr,
                    vma->vm_end - vma->vm_start, vma->vm_page_prot);

In RHEL 5.x (kernel 2.6.18), everything works correctly with the same flags and protections. The difference is that the driver mmap() function on RHEL 5.x uses io_remap_pfn_range() instead of io_remap_page_range(), which is now obsolete:

io_remap_pfn_range(vma, vma->vm_start, paddr >> PAGE_SHIFT, vma->vm_end - vma->vm_start, vma->vm_page_prot);
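
For reference, here is a rough sketch of how those pieces fit together in the mmap() entry point that works on RHEL 5.x; mydrv_mmap, dma_paddr, and dma_size are placeholder names, and dma_paddr is assumed to be the dma_addr_t returned by pci_alloc_consistent():

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pci.h>

/* Placeholders: in the real driver these live in the device's private data
 * and are filled in by pci_alloc_consistent() at probe time. */
static dma_addr_t dma_paddr;
static size_t dma_size = 16 * 1024;

static int mydrv_mmap(struct file *filp, struct vm_area_struct *vma)
{
        unsigned long len = vma->vm_end - vma->vm_start;

        if (len > dma_size)             /* don't let user space map past the buffer */
                return -EINVAL;

        vma->vm_flags |= VM_RESERVED | VM_IO;
        vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

        /* io_remap_pfn_range() takes a page frame number, not an address. */
        if (io_remap_pfn_range(vma, vma->vm_start, dma_paddr >> PAGE_SHIFT,
                               len, vma->vm_page_prot))
                return -EAGAIN;

        return 0;
}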

Thanks!

 
10-18-2010, 02:22 PM   #2
nini09
How much memory is in your system? More than 4 GB?
 
10-19-2010, 12:38 PM   #3
dkwan
Yes, it has 32 GB of memory. The full kernel version is 2.6.9-89.0.19.ELlargesmp.

I was thinking maybe it was mapping in the zero page, so I implemented the "nopage" VMA operation for mmap(), and it seems I can access the memory buffer that way. However, the host ends up hanging after several runs of the application.
 
10-19-2010, 02:13 PM   #4
nini09
io_remap_page_range() can't remap correctly if the memory address is above 4 GB.
 
10-19-2010, 03:50 PM   #5
dkwan
Hi nini09, thanks for the information.

Do you know of any solution, workaround, or kernel patch for 2.6.9? Is there any official bug report I can check out for more information? I did a quick search but haven't turned up anything new.

I did try setting my DMA mask to 32 bits, but my driver allocates a relatively large number of buffers. I observed the kernel first allocating from ZONE_DMA memory (addresses below 16 MB), but that memory quickly becomes exhausted, and the remaining allocations are attempted from memory above 4 GB, since more than 4 GB is installed. The 32-bit DMA mask rejects allocations at addresses above 4 GB, so pci_alloc_consistent() fails. (This is not a problem with the newer ZONE_DMA32 zone, which was added in kernel 2.6.15, IIRC.)
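
For reference, the mask experiment looked roughly like this; setup_dma_mask and pdev are placeholder names, and the literal mask is used because the DMA_32BIT_MASK macro may not be present on a 2.6.9 kernel:

#include <linux/pci.h>

/* Restrict both streaming and consistent DMA to the low 4 GB. */
static int setup_dma_mask(struct pci_dev *pdev)
{
        if (pci_set_dma_mask(pdev, 0xffffffffULL) ||
            pci_set_consistent_dma_mask(pdev, 0xffffffffULL))
                return -EIO;

        return 0;
}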

I would just stick with the newer kernel, but this solution is for a customer who insists on using RHEL4 due to their IT requirements.
 
10-20-2010, 02:22 PM   #6
nini09
If you control your driver code, you can replace io_remap_page_range() with io_remap_pfn_range(). io_remap_pfn_range() remaps correctly even if the address is above 4 GB.
 
10-20-2010, 06:42 PM   #7
dkwan
I wish I could, but io_remap_pfn_range() is not defined in the 2.6.9 kernel.
 
06-28-2011, 04:19 AM   #8
2raghu
Hello dkwan,
I'm trying to write a PCIe FPGA driver and am planning to use the mmap method to DMA the data. Can you please help with this? If you could provide reference code, it would be helpful.

I'm working on the Linux 2.6.27 kernel.

Thanks a lot.
 
06-30-2011, 06:25 PM   #9
dkwan
Hi 2raghu,

If you haven't already, first read "Linux Device Drivers, Third Edition", available at http://lwn.net/Kernel/LDD3, in particular Chapter 15: Memory Mapping and DMA. This is how I got started writing drivers, and it has useful code examples.

For a basic mmap of the DMA buffer in your driver's mmap() entry point, simply pass the physical address (paddr) of the DMA buffer, shifted right by PAGE_SHIFT, to io_remap_pfn_range(). The remaining arguments for io_remap_pfn_range() can be taken from the vm_area_struct *vma:

io_remap_pfn_range(vma, vma->vm_start, paddr >> PAGE_SHIFT, vma->vm_end - vma->vm_start, vma->vm_page_prot);
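
On the user-space side, the application then maps the buffer through the device file descriptor. A minimal sketch, assuming a /dev/mydev node and a BUF_SIZE that matches the driver's DMA buffer size (both placeholder names):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define BUF_SIZE (16 * 1024)    /* must match the driver's DMA buffer size */

int main(void)
{
        unsigned char *buf;
        int fd;

        fd = open("/dev/mydev", O_RDWR);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED) {
                perror("mmap");
                close(fd);
                return 1;
        }

        buf[0] = 0xAB;          /* write a known value into the mapped DMA buffer */

        munmap(buf, BUF_SIZE);
        close(fd);
        return 0;
}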

Hope that helps!
 
07-01-2011, 12:46 AM   #10
2raghu
Thanks for the reply, dkwan.
From the application I get the size of the write DMA buffer using an ioctl, and then I do the mmap() operation. Can you please review the code below?

*************************************************
/* driver mmap routine */
static int pcie_mmap(struct file *filp, struct vm_area_struct *vma)
{
        unsigned long pfn;
        unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
        unsigned long len = vma->vm_end - vma->vm_start;

        printk(KERN_ALERT "\nRAGHU: pcie-axi: Inside pcie_MMAP");

        if (offset >= PAGE_SIZE)
                return -EINVAL;

        if (len > (PAGE_SIZE - offset))
                return -EINVAL;

        vma->vm_flags |= VM_RESERVED;
        vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

        printk(KERN_ALERT "\nRAGHU: pcie-axi: %s: mapping %ld bytes of mem at offset %ld\n",
               __stringify(KBUILD_BASENAME), len, offset);

        /* need to get the pfn for remap_pfn_range --
           ds->wr_buf_ptr is the virtual pointer returned from pci_alloc_consistent */
        pfn = virt_to_phys(ds->wr_buf_ptr + offset) >> PAGE_SHIFT;

        if (remap_pfn_range(vma, vma->vm_start, pfn, len, vma->vm_page_prot))
                return -EAGAIN;

        printk(KERN_ALERT "\nRAGHU: pcie-axi: Exit pcie_MMAP");
        return 0;
}

/* application routine */
int DmaWrite()
{
        unsigned char *mmap_buf;

        /* deviceHandle is the fd got from open */
        if (ioctl(deviceHandle, PCIE_AXI_REGDMA_WR, &regdma))
                return -pio.status;

        /* got the size using ioctl */
        mmap_buf = mmap(NULL, regdma.dmabuffsize, PROT_READ | PROT_WRITE,
                        MAP_SHARED, deviceHandle, 0);
        if (mmap_buf == (unsigned char *)MAP_FAILED) {
                printf("MMAP FAILED\n");
                return 0;
        }

        /* call the write function */
        write(deviceHandle, mmap_buf, regdma.dmabuffsize);

        return 0;
}
*************************************************
In the driver write routine I set up the descriptor chain and initiate the DMA.

Questions:
1) Is my code correct? Am I missing anything?
2) For the write-direction mmap(), what should the protection flags be: PROT_WRITE, or PROT_WRITE | PROT_READ?
3) Do I need to follow the same approach for the read path as well, i.e. get the size, mmap(), and pass the read buffer pointer (obtained from pci_alloc_consistent) as the pfn argument to remap_pfn_range()?

Please ignore the sanity checks for now.

Appreciate your help.

Thanks a ton.
 
07-11-2011, 02:17 PM   #11
dkwan
Hi 2raghu,

Sorry for the late reply. I was on vacation last week.

1. The first thing I noticed is that you probably shouldn't work with the virtual address of the DMA buffer (unless that was what you intended). DMA uses "bus" addresses rather than "physical" addresses, and they are not the same on all architectures, so there might be some unexpected address translation occurring. If you do intend to work with the virtual address, try virt_to_bus() instead. Otherwise, pass the bus address returned by pci_alloc_consistent() to io_remap_pfn_range():

/* DMA buffer allocation */
vaddr = pci_alloc_consistent(dev, size, &paddr);

/* In mmap() */
io_remap_pfn_range(vma, vma->vm_start, (paddr + offset) >> PAGE_SHIFT,
                   vma->vm_end - vma->vm_start, vma->vm_page_prot);

2. The needs of your application determine whether you need PROT_WRITE, PROT_READ, or both. Set PROT_WRITE if your application needs write access to the mmap'ed memory, and PROT_READ if it needs read access.

3. Read should work the same as write.
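
To make that concrete, one way to expose both buffers is to let the mmap offset select which one gets mapped. A rough sketch, assuming wr_paddr and rd_paddr are the dma_addr_t values returned by pci_alloc_consistent() for the two buffers and BUF_SIZE is their common size (all placeholder names):

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pci.h>

/* Placeholders: filled in at probe time by pci_alloc_consistent(). */
static dma_addr_t wr_paddr, rd_paddr;
#define BUF_SIZE (16 * 1024)

/* Map the write buffer at offset 0 and the read buffer at offset BUF_SIZE. */
static int pcie_dma_mmap(struct file *filp, struct vm_area_struct *vma)
{
        unsigned long len = vma->vm_end - vma->vm_start;
        unsigned long off = vma->vm_pgoff << PAGE_SHIFT;
        dma_addr_t paddr = (off == 0) ? wr_paddr : rd_paddr;

        if (len > BUF_SIZE)
                return -EINVAL;

        vma->vm_flags |= VM_RESERVED | VM_IO;
        vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

        if (io_remap_pfn_range(vma, vma->vm_start, paddr >> PAGE_SHIFT,
                               len, vma->vm_page_prot))
                return -EAGAIN;

        return 0;
}

The application would then mmap() the write buffer with offset 0 and the read buffer with offset BUF_SIZE.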
 
  

