
jbreaka4lyfe 06-02-2008 09:02 PM

How does Linux create a Transaction Layer Packet for PCI Express?
When a driver attempts to communicate with a PCIe card, a Transaction Layer Packet (TLP) is created and sent to the Root Complex, which then routes it to the intended device. I'm aware that functions such as memcpy_toio move data from a device driver to the device, but where exactly is the TLP created? A TLP has many fields (such as length, payload, type, and traffic class), and I haven't seen where in the code this is managed. The bus driver is responsible for setting up a PCIe card, so I imagine it also oversees the communication between a device and its driver. Can anyone point me in the direction of how I can see the creation of the TLP, or at least where it is managed? Thanks in advance.

jbreaka4lyfe 06-03-2008 05:56 PM

It was recently suggested to me that the BIOS and the firmware on the PCIe controller split transfers up into TLPs, and that the operating system has no control over this at all, as far as he could tell.

jbreaka4lyfe 06-04-2008 10:27 AM

It turns out the PCIe Bus Controller is responsible for turning user accesses into TLPs. That's the answer. :-)

schekall 07-15-2008 04:24 PM

I too have been running into this problem.

I'm not able to get my pcie_driver to send more than a single word (32 bits).

PCIe is capable of handling TLP payloads up to 4 KB, but I can't get the kernel to generate them.

My pcie_driver is a character device using mmap.

My question is: does the PCIe driver need to be a block device?


jbreaka4lyfe 07-16-2008 12:04 PM

From the driver's perspective, you have no control over how many bytes are sent across the bus to the PCIe card in a TLP; you cannot guarantee the payload size. If you need to transfer data to the card, you really have two options: you can push the data out by having your processor write to the I/O memory space (typically 4 bytes per TLP payload), or you can have the card pull the data from a DMA buffer that you designate. Those were the options I came across while still keeping my driver a character device. I made a post to the kernel mailing lists about this, and the developers responded; their suggestions are the detailed version of what I've posted here. Good luck.
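A minimal sketch of the "push" option described above, assuming `bar0` is the `void __iomem *` returned by ioremap for the card's memory BAR and the buffer length is a multiple of 4 (the function name and parameters are illustrative, not from the thread):

```c
/* Push a buffer to the card 32 bits at a time through BAR0.
 * Each iowrite32() typically becomes one memory-write TLP with
 * a 4-byte payload, which is why this path tops out at 4 bytes/TLP.
 */
static void push_to_card(void __iomem *bar0, const u32 *buf, size_t words)
{
	size_t i;

	for (i = 0; i < words; i++)
		iowrite32(buf[i], bar0 + i * 4);

	/* read back once to flush posted writes before signalling the card */
	(void)ioread32(bar0);
}
```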

schekall 07-17-2008 07:29 PM

4 bytes per TLP is what I'm getting now, so I guess it's DMA or the highway, since my board only gets TLPs from the Root Complex.

When you say "grab the data from some DMA buffer", I'm hoping that means some sort of change to my device driver code. Could you suggest a book or HOWTO that shows how to do this?

BTW, jbreaka4lyfe, did you ever get yours to work? If so, how?

Any idea how I can set up a DMA buffer in my device driver?

Here is the __init for my module.

static int __init pcie_driver_init_module(void)
{
	pcie_board_t *myd;
	pci_dev_bar_t *p;
	int err_code;
	int i;
	char mem_name[80] = "";

	printk(KERN_WARNING "pcie_driver init started\n");
	myd = the_device = kmalloc(sizeof(pcie_board_t), GFP_KERNEL);
	if (the_device == NULL) {
		printk(KERN_WARNING "pcie_driver failed to get init dev struct.\n");
		return -ENOMEM;
	}

	/* allocate a major device named "pcie_driver" with minors 0..PCIE_DRIVER_NUM_MINOR_DEVS-1 */
	err_code = alloc_chrdev_region(&myd->ID, 0, PCIE_DRIVER_NUM_MINOR_DEVS, "pcie_driver");
	if (err_code != 0) {
		printk(KERN_WARNING "pcie_driver alloc_chrdev_region failed\n");
		return err_code;
	}

	err_code = pci_register_driver(&pcie_driver_driver);
	if (err_code != 0) {
		printk(KERN_WARNING "pcie_driver pci_register failed\n");
		return err_code;
	}

	/* -------------------- initialize the BAR structures --------------------
	 * Note: this assumes myd->pci_dev already points at the card; normally
	 * the BAR work belongs in the pci_driver probe() callback, which is
	 * where the PCI core hands you the struct pci_dev.
	 */
	for (i = 0; i < NUM_BARS; i++) {
		p = &myd->Dev_BARs[i];
		p->pci_start = pci_resource_start(myd->pci_dev, i);
		p->pci_end   = pci_resource_end(myd->pci_dev, i);
		p->len       = pci_resource_len(myd->pci_dev, i);
		p->pci_flags = pci_resource_flags(myd->pci_dev, i);

		if ((p->pci_start > 0) && (p->pci_end > 0))
			printk(KERN_WARNING "pcie_driver BAR %d: 0x%lx --> 0x%lx\n",
			       i, (unsigned long)p->pci_start, (unsigned long)p->pci_end);

		p->bar = i;
		p->pci_addr = (void *)p->pci_start;
		p->memType = p->pci_flags;	/* IORESOURCE definitions (see ioport.h):
						 * 0x0100 = IO
						 * 0x0200 = memory
						 * 0x0400 = IRQ
						 * 0x0800 = DMA
						 * 0x1000 = PREFETCHable
						 * 0x2000 = READONLY
						 * 0x4000 = cacheable
						 * 0x8000 = rangelength
						 */

		/* caution: request_mem_region() keeps the name pointer, so the
		 * name should really live in persistent storage, not the stack */
		sprintf(mem_name, "pcie_driver_mem%d", i);
		if (!request_mem_region(p->pci_start, p->len, mem_name)) {
			printk(KERN_WARNING "pcie_driver: request_mem_region failed to get %s\n", mem_name);
			return -ENOMEM;
		}

		p->kvm_addr = (void *)ioremap_nocache(p->pci_start, p->len);
	}

	return 0;
}
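Incidentally, pci_resource_start() and friends need a valid struct pci_dev, which the PCI core only hands you in the driver's probe() callback, so the usual structure is pci_register_driver() in init and the per-device BAR work in probe(). A hedged sketch of that split, reusing the names from the post above (pcie_board_t, the_device):

```c
/* Sketch: conventional probe() callback claiming and mapping BAR0.
 * The struct pci_dev arrives as a parameter, so no guessing is needed.
 */
static int pcie_driver_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	int err;

	err = pci_enable_device(pdev);
	if (err)
		return err;

	err = pci_request_regions(pdev, "pcie_driver");	/* claims every BAR at once */
	if (err) {
		pci_disable_device(pdev);
		return err;
	}

	the_device->pci_dev = pdev;
	the_device->Dev_BARs[0].kvm_addr = pci_iomap(pdev, 0, 0); /* map all of BAR0 */
	if (!the_device->Dev_BARs[0].kvm_addr) {
		pci_release_regions(pdev);
		pci_disable_device(pdev);
		return -ENOMEM;
	}
	return 0;
}
```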

jbreaka4lyfe 07-18-2008 12:24 PM

I apologize, when I was suggesting the DMA buffer, I should have been more detailed. That change would require a hardware change. You would allocate a DMA buffer (most likely in the 32bit addressable memory space). Then you would get the physical address of the buffer, and pass it to your PCIe card. There would have to be some communication to the card such that it becomes aware that data is present there, and how much. The card then does the work of grabbing the data, and there you go.
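A minimal sketch of that allocation, using the consistent-DMA API that DMA-mapping.txt describes. CARD_DMA_ADDR_REG is a hypothetical register offset where this imaginary card expects the buffer address; your card's register layout will differ:

```c
#define CARD_DMA_ADDR_REG 0x10	/* hypothetical card register in BAR0 */

/* Allocate a coherent DMA buffer and hand its bus address to the card,
 * which then pulls data from that address on its own.
 */
static void *setup_dma_buffer(struct pci_dev *pdev, void __iomem *bar0, size_t len)
{
	dma_addr_t bus_addr;
	void *cpu_addr;

	/* keep the buffer 32-bit addressable if the card only drives 32 address bits */
	if (pci_set_dma_mask(pdev, DMA_32BIT_MASK))
		return NULL;

	cpu_addr = pci_alloc_consistent(pdev, len, &bus_addr);
	if (!cpu_addr)
		return NULL;

	/* tell the card where the buffer lives */
	iowrite32((u32)bus_addr, bar0 + CARD_DMA_ADDR_REG);
	return cpu_addr;
}
```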
As for references, everybody and their grandma will always refer to Linux Device Drivers (the latest version is the 3rd edition). :) I can't tell you how many times people give a vague answer and tell you to just look it up in that book. I used it religiously during my PCIe driver development. It's from the O'Reilly series, and they do a spectacular job: it not only says what you should do, but also what you should be aware of. I also highly recommend reading through the kernel documentation; for DMA specifically, read DMA-mapping.txt and DMA-API.txt. You'll find all sorts of helpful stuff in the kernel documentation. I've found that you really need both the documentation and LDD3.
As for getting my stuff to work: my hardware team is a really great bunch of guys, and we decided to support both ideas. We do both the CPU pushes and the DMA pulls for data transfer to the card. We started with the pushes as a security blanket to meet our deadlines, and then we're switching to the DMA pulls for performance. By the way, there are other limiting factors you will need to be aware of for data transfer: in the PCIe configuration space, the Device Control register holds two 3-bit fields that cap read and write transfer sizes, named Max_Payload_Size and Max_Read_Request_Size. These values are written there by the BIOS, so they will differ depending on your computer.
Those kernel documents will help walk you through the steps necessary to do the DMA, and it will show you good programming practices while doing it. Best of luck.

slkjas 11-05-2008 02:47 AM

Pcie Help

My master's project (through the University of Cape Town) was to design and build a low-cost PCIe digitiser card under the GPL (x4-lane PCIe, with a dual-channel 500 MSPS National ADC). There has been much interest in the card, and funds have now been allocated for the development of a Linux driver for it. The ultimate goal is to integrate the card into the GNU Radio project.

I am a newbie to Linux drivers. Does anyone know of any resources that I could tap into or a piece of code that I can use as a template?


milobaik 01-22-2009 09:20 AM

I'm working on a PCIe driver and we are looking at using DMA to transfer data from processor memory to a PCIe device. The processor is the MPC8548, which has DMA controllers on the chip. If I use those DMA controllers to push the data to the PCIe device, will the bus controller package the data in TLPs larger than 4 bytes? The discussion above only mentions DMA in the context of the PCIe device pulling the data over the PCIe bus.


jbreaka4lyfe 01-23-2009 11:33 AM

I don't know the answer to that. That is probably a question for the Linux Kernel mailing list. Good luck.
