[SOLVED] Is it normal for python to be as fast or faster than C?
I was tinkering with a random number generator in C.
Code:
#include <stdio.h>
#include <stdlib.h>    /* srand, rand */
#include <time.h>      /* clock_gettime, struct timespec */
#include <jmgeneral.h>

int main()
{
    int a = 0;
    for (int i = 0; i < 1000000; i++) {
        a = random_gen(100);
    }
}

// these are from the shared library
int
ret_nano_secs()
{
    int retval = 0;
#ifdef _WIN32
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    retval = (unsigned int) now.QuadPart;
#else
    struct timespec now;
    clock_gettime(CLOCK_REALTIME, &now);
    retval = (unsigned int) now.tv_nsec;
#endif
    return(retval);
}

/*
 * seeds srand with nanoseconds, then returns a random value
 * within the specified range
 */
int
random_gen(int high)
{
    // seed rand with the current nanosecond count
    srand(ret_nano_secs());
    return((unsigned int)rand() % high);
}
Running this for 1M iterations:
Code:
time ./a.out
real 0m1.237s
user 0m1.237s
sys 0m0.000s
Python, however, with the same number of iterations using the random module:
Code:
#!/usr/bin/python3
import random
for i in range(1000000):
    random.randrange(1, 100)
Code:
time ./test.py
real 0m0.603s
user 0m0.603s
sys 0m0.000s
While I make no claims about the efficiency of my C code, I'm quite confused; I wouldn't remotely have expected this result. The pure C version takes twice as long, while random is a library written in about 1k lines of interpreted, uncompiled Python. Is this type of result common in all but the most extreme cases?
Last edited by jmgibson1981; 02-05-2024 at 04:58 PM.
Your test case is not really relevant to your question...
Quote:
Is it normal for python to be as fast or faster than C?
... it only tells you that the specific Python example with presumably well defined "randomness" is faster than the specific C example with poorly specified "randomness" in the unspecified runtime environment. Not a useful answer on which to base a general comparison between Python and C.
Your C implementation has multiple nested function calls per loop iteration, each with its own overhead, while the Python random implementation is likely to have been highly optimized and the runtime parsing may provide additional loop optimizations.
So, apples and oranges.
UPDATE: Just because I had a little spare time...
Code:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int a = 0;
    unsigned seed = 1;  /* rand_r requires an initialized seed */
    for (int i = 0; i < 1000000; i++) {
        a = ((unsigned int)rand_r(&seed) % 100);
    }
}
Your Python example performs no seed initialization other than the default (I don't know what that may be) and makes no use of the result, it only generates random values, so the above is a more realistic "functional equivalent" in C.
Code:
$ time ./a.out
real 0m0.005s
user 0m0.005s
sys 0m0.000s
$ time ./test.py
real 0m0.630s
user 0m0.616s
sys 0m0.011s
A little bit of improvement.
This is not intended to show how fast C is, but just to show how important it is to define the conditions of the test.
Additionally, you measured not only the execution of the code but also the startup of the application itself, which depends on a lot of external circumstances. You should start the app and measure the elapsed time inside it, at least 10 times or more, and calculate an average. That would be meaningful; a single run/measurement is pointless.
@EdGr: “Maybe you’re being just a little bit too harsh.” Today, we live with an embarrassment of hardware riches. Nearly all of the time, the “expensive time” that we’re trying to optimize isn’t the time of silicon chips. It’s the time spent by “computer programmers” who these days cost(!) upwards of $100,000 a pop! “Moore’s Law” will take care of the rest.
To me, it’s quite interesting where the Python language, in particular, has been successful. I know of several applications (and robots …) which adopted it as their internal scripting language. Statisticians use it to do a great many things. It has “list processing” capabilities which smack of LISP.
If you are seriously in a “speed contest,” it’s a given that you won’t be using any interpreter.
Astrogeek already answered, but given the various follow-up remarks, something to think about...
Quote:
Originally Posted by jmgibson1981
Is it normal for python to be as fast or faster than C?
First one must define "normal".
Next one must define "faster".
Third, one must ask, "does it matter?"
Walking in a circle a million times is not normal, and few will care how quickly one might do it.
Most people want to get from A to B, and comparing how long that takes, and at different times of day, is a more practical measure of speed.
There are times when B is somewhere that must be reached as fast as possible, but often one just needs to reach a shop before it closes, or a station before the last train, or a toilet whilst one can still hold it in, etc - for each of those there is "fast enough".
And on the C always being faster nonsense...
A Formula One car can get from 0 to 100km/h in 2.4 seconds, and has top speeds in excess of 360 km/h.
An average street car is significantly slower, taking 3-4 times longer to reach 100km/h, with top speeds around 200km/h.
So yes, on a race track with a professional driver in it, the F1 car will go faster and will easily win, but if I need to get around a city I'm jumping in the taxi, no matter how much "slower" the car might be.
And if I need to get to a distant airport before a plane leaves, my decision will be based less on the car, and more on the driver.
It's also worth noting that most of the things that computers commonly do are what we call "I/O-bound," not "Compute-bound." The completion speed of the activity depends more on how fast (and, how efficiently) it can "perform I/O," and not on the raw speed of the processor.
There are certainly exceptions: some forms of computer graphics rendering, video compression, simulations. But, they are "exceptions."
Most of the time, the "time" that you are most interested in "saving" – and, that is most profitable to save – is yours.
Last edited by sundialsvcs; 02-07-2024 at 01:07 PM.
Today's microprocessors are not designed to be programmed in "assembler." Their designers also work on optimizing compilers which will generate the anticipated instruction sequences. They provide their own compilers and assist other projects such as Microsoft's and gcc. Assembler today is used only for asm{...} blocks which encapsulate specific instruction sequences that have no equivalent in the language, or that are CPU-specific. (For instance, look at the Linux kernel source-code.)
Quote:
Today's microprocessors are not designed to be programmed in "assembler." ... Assembler today is used only for asm{...} blocks
Interesting comment. I've written a few programs in ARM64 assembly just recently. It is actually fun to do so... Every processor can be easily programmed in assembler, from the Z80 and 6502 to x86, x86_64, ARM, and ARM64. So I'm not sure where you are coming from.
Modern CPUs make very heavy use of "pipelining" to achieve their intended performance. Compilers automatically produce some sometimes-strange-looking instruction sequences to help achieve that. And, the CPU architecture is designed to expect it. Microprocessor designers work with compiler designers to achieve these results. This is why compilers sometimes have very CPU-model specific options. Because, "if you are writing a program where every nanosecond matters" ... it matters.
Last edited by sundialsvcs; 02-08-2024 at 10:58 AM.
Quote:
Modern CPUs make very heavy use of "pipelining" to achieve their intended performance. Compilers automatically produce some sometimes-strange-looking instruction sequences to help achieve that. And, the CPU architecture is designed to expect it. Microprocessor designers work with compiler designers to achieve these results. This is why compilers sometimes have very CPU-model specific options. Because, "if you are writing a program where every nanosecond matters" ... it matters.
That is an interesting topic, but it does not belong in the current thread.
If every nanosecond matters you ought to use some compiled [and optimized] language, not an interpreted one.
I agree, using assembler is not a simple case, these days compilers know CPUs better than human beings.
Also, speed depends on the algorithm implemented, so there may be cases where something written in Python turns out faster than the equivalent C code.
As an example: there are many different implementations of the same string class available (in C and C++); you can compare them, even against Perl, Python, and Java, and you will get interesting results.