Greetings!
First, my apologies if this is the wrong forum for this, but since I'm referring to a thread that originated here, I thought I'd better ask here.
And beware: multiple questions follow.
After reading
this thread, I started wondering whether CPU-specific optimization would make a significant (read: noticeable) difference when building applications. Would it make sense to rebuild almost every program (including basic system programs) with optimizations enabled to get more performance out of one's box? Or is it sufficient to compile commonly used apps this way?
If it is, I'd like to know whether there is a convenient way to enable these optimizations when compiling source code I got off SourceForge, for example. The kernel configuration offers the option to select a certain CPU architecture as the target platform, but for ordinary programs it doesn't seem to be that simple. The thread I mentioned above names a possible way, but I'm not comfortable editing Makefiles, as I'm not a programmer. Can I achieve the same thing by passing parameters to ./configure? Or is there another way?
My last question is about a 'standard' compilation. Say I downloaded and untarred an application's source, then did a ./configure followed by a make. Is there a standard level of optimization? For example, would that application be optimized for a 586 architecture by default? Or do developers usually set the optimization level and/or target platform themselves?
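In case it helps to check for yourself: autoconf-generated configure scripts default to `-g -O2` when the compiler is GCC and you haven't set `CFLAGS`, with no `-march` at all, so the binary runs on any CPU of that family. You can see what was chosen by grepping the generated Makefile. A sketch (using a stand-in Makefile here, since there's no real source tree in this example):

```shell
# After a real "./configure" you would run:  grep '^CFLAGS' Makefile
# Simulate that with the line autoconf typically emits for GCC:
printf 'CFLAGS = -g -O2\n' > /tmp/Makefile.demo
grep '^CFLAGS' /tmp/Makefile.demo
```

So unless the developers hard-code something else, the default is a generic `-O2` build, not one tuned for your particular CPU.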
As you might have realized, these are mostly yes/no questions; however, I'd be grateful for any deeper insight any of you could give me!