Linux - General
This Linux forum is for general Linux questions and discussion. If it is Linux related and doesn't seem to fit in any other forum, then this is the place.
I have been performing timing measurements on an application that communicates with a device over an I2C interface. I have discovered that single I2C reads and writes are taking around 450 µs to execute. I have verified that the I2C interface is running at 400 kHz, so I would expect I2C operations on a single byte to take around 170 µs. It therefore appears that there is an overhead of more than 250 µs in using the I2C drivers within the kernel.
Is this typical of I2C operations using the Linux kernel I2C devices? I am confident that the hardware platform I am running on is powerful enough.
Thanks in advance.
Last edited by marcpolo; 10-31-2012 at 06:05 PM.
Reason: Fix typo.
This depends on the specifications for the device that you're communicating with. If you know the device, then check out the specification. For example, this is a device which uses I2C to perform temperature measurements:
If you look at the section titled "2-Wire Serial Data Bus" you'll see that the device supports clock rates of either 100 kHz or 400 kHz. This is typical for I2C communications and has nothing to do with the capabilities of your computer.
Thanks for the response. The actual I2C message complies with the expected timing for a 400 kHz bus, as shown on a scope. The problem is the additional >250 µs, which appears to be software overhead in actually using the I2C device driver.
I see that your question was different, sorry for missing that.
You're verifying with a scope that the device is operating at 400 kHz; by that I assume you mean that you've observed SCL and captured it to verify the rate.
How are you determining that the write of a byte takes 450 µs?
If SCL is clocking at 400 kHz, then the write of a single byte should take about 20 µs, because that is 8 bits at that data rate.
Have you captured SCL along with SDA and verified a whole command sequence? For instance, the device requires that you send a start condition, then the device address, then your command byte(s), and then a stop condition. That can take 19 or 20 clock cycles, making a whole command sequence at 400 kHz take about 50 µs. If you send a one-byte command to read a one-byte register, that requires close to 40 cycles, because you have two address byte sections, a command byte, and a response byte, along with the start/stop/ACK parts of the whole cycle. You can see all of this on a scope trace. Similarly, the example device I offered the datasheet for has a section, as they all do, describing the SDA/SCL sequences you can expect for commands, command/replies, etc.
If you're determining that the read/write takes this long via program debug, one other thing to test is writing either continuously or in chunks of bytes. See whether a write of 10 bytes takes exactly 10 times the single-byte time you're experiencing, or less. It could be that the Linux driver doesn't deal with things as you'd like: it sends single bytes over the interface at the specified rate, but doesn't complete a single-byte operation as quickly as you expect. That's why I'd try writing batches of bytes, and also writing continuously, to see whether the driver is capable of sending the information at the fastest rate and what you're experiencing is a per-transaction delay on the part of the software managing it. If you have any control over discrete digital lines, you could also toggle one or more lines high/low, place that logic into the driver, and rebuild it, to determine where the delay is.