Old 12-24-2008, 04:52 PM   #31
Sergei Steshenko
Senior Member
 
Registered: May 2005
Posts: 4,481

Rep: Reputation: 454

Quote:
Originally Posted by jiml8 View Post
...
Actually, at the present time I am implementing a system where I have actually - gasp! - gone to lookup tables to handle both square roots and base 10 logarithms, because I don't want to incur the processor cost of computing them on the fly, and I can accept the slight loss of precision that the lookup tables cost me in exchange.
...
Have you benchmarked the straightforward implementation?

The point is that modern x86 processors implement, say, sin and cos in hardware (I don't know about sqrt, though), and a table lookup may be slower.

With a table lookup you first have to convert the floating-point argument into an integer table index, with range checking and with reduction to, say, the 0..2*pi interval, and this by itself takes time.


For, say, FFT-related things, where in the end you are dealing with frequency bin numbers, a table lookup makes sense, because it's just a lookup, with no conversion/adjustment of a floating-point argument into an integer index.
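
To make that overhead concrete, here is a minimal sketch (in C) of a sine lookup table with the range reduction and float-to-integer index conversion described above. The table size and names are invented for illustration; link with -lm:

Code:
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define TABLE_SIZE 4096                 /* hypothetical table resolution */
static float sin_table[TABLE_SIZE];

/* Fill the table once at startup. */
static void init_sin_table(void)
{
    int i;
    for (i = 0; i < TABLE_SIZE; i++)
        sin_table[i] = (float)sin(2.0 * M_PI * i / TABLE_SIZE);
}

/* Table-based sine: every call pays for range reduction plus a
 * float-to-integer conversion before the actual lookup. */
static float table_sin(float x)
{
    const float two_pi = (float)(2.0 * M_PI);
    x -= two_pi * floorf(x / two_pi);   /* reduce to [0, 2*pi) */
    int idx = (int)(x * (TABLE_SIZE / two_pi));
    if (idx >= TABLE_SIZE)              /* guard against rounding */
        idx = TABLE_SIZE - 1;
    return sin_table[idx];
}

int main(void)
{
    init_sin_table();
    printf("table: %f  libm: %f\n", table_sin(1.0f), sin(1.0));
    return 0;
}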
 
Old 12-24-2008, 05:54 PM   #32
jiml8
Senior Member
 
Registered: Sep 2003
Posts: 3,171

Rep: Reputation: 116
Quote:
Originally Posted by Sergei Steshenko View Post
Have you benchmarked the straightforward implementation?

The point is that modern x86 processors implement, say, sin and cos in hardware (I don't know about sqrt, though), and a table lookup may be slower.

With a table lookup you first have to convert the floating-point argument into an integer table index, with range checking and with reduction to, say, the 0..2*pi interval, and this by itself takes time.


For, say, FFT-related things, where in the end you are dealing with frequency bin numbers, a table lookup makes sense, because it's just a lookup, with no conversion/adjustment of a floating-point argument into an integer index.
This is FFT stuff, as it happens.

The latest generation of x86 processors is getting very fast, in terms of clock cycles, at square roots. Logs remain CPU-intensive.

The conversion from float to int is quite fast when done using the lrint and lrintf functions. Typically this is more than 10x faster than doing a cast in C, because casting a float to an int in C forces a change of the FPU rounding mode, which flushes the processor pipeline and thus loses a lot of lookahead efficiency.
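
A minimal sketch of that difference (note that the semantics differ as well: the cast truncates toward zero, while lrintf rounds according to the current rounding mode, round-to-nearest by default; compile with -std=c99 -lm):

Code:
#include <math.h>    /* lrintf(), C99 */
#include <stdio.h>

int main(void)
{
    float x = 2.6f;

    /* C cast: truncates toward zero. On x87 hardware this forces a
     * change of the FPU rounding mode, which is what stalls the
     * pipeline. */
    long a = (long)x;

    /* lrintf: converts using the current rounding mode (usually
     * round-to-nearest), typically a single instruction. */
    long b = lrintf(x);

    printf("cast = %ld, lrintf = %ld\n", a, b);   /* prints 2 and 3 */
    return 0;
}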
 
Old 12-24-2008, 05:55 PM   #33
ErV
Senior Member
 
Registered: Mar 2007
Location: Russia
Distribution: Slackware 12.2
Posts: 1,202
Blog Entries: 3

Rep: Reputation: 62
Quote:
Originally Posted by Sergei Steshenko View Post
The point is that modern x86 processors implement, say, sin and cos in hardware (I don't know about sqrt, though), and a table lookup may be slower.
The OPCODE collision detection library has some interesting stuff about fast square roots:
Code:
///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
 *	Contains FPU related code.
 *	\file		IceFPU.h
 *	\author		Pierre Terdiman
 *	\date		April, 4, 2000
 */
///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Include Guard
#ifndef __ICEFPU_H__
#define __ICEFPU_H__

	#define	SIGN_BITMASK			0x80000000

	//! Integer representation of a floating-point value.
	#define IR(x)					((udword&)(x))

	//! Signed integer representation of a floating-point value.
	#define SIR(x)					((sdword&)(x))

	//! Absolute integer representation of a floating-point value
	#define AIR(x)					(IR(x)&0x7fffffff)

	//! Floating-point representation of an integer value.
	#define FR(x)					((float&)(x))

	//! Integer-based comparison of a floating point value.
	//! Don't use it blindly, it can be faster or slower than the FPU comparison, depends on the context.
	#define IS_NEGATIVE_FLOAT(x)	(IR(x)&0x80000000)

	//! Fast fabs for floating-point values. It just clears the sign bit.
	//! Don't use it blindly, it can be faster or slower than the FPU comparison, depends on the context.
	inline_ float FastFabs(float x)
	{
		udword FloatBits = IR(x)&0x7fffffff;
		return FR(FloatBits);
	}

	//! Fast square root for floating-point values.
	inline_ float FastSqrt(float square)
	{
#ifdef WIN32
			float retval;

			__asm {
					mov             eax, square
					sub             eax, 0x3F800000
					sar             eax, 1
					add             eax, 0x3F800000
					mov             [retval], eax
			}
			return retval;
#else
			return sqrt(square);
#endif
	}

	//! Saturates positive to zero.
	inline_ float fsat(float f)
	{
		udword y = (udword&)f & ~((sdword&)f >>31);
		return (float&)y;
	}

	//! Computes 1.0f / sqrtf(x).
	inline_ float frsqrt(float f)
	{
		float x = f * 0.5f;
		udword y = 0x5f3759df - ((udword&)f >> 1);
		// Iteration...
		(float&)y  = (float&)y * ( 1.5f - ( x * (float&)y * (float&)y ) );
		// Result
		return (float&)y;
	}

	//! Computes 1.0f / sqrtf(x). Comes from NVIDIA.
	inline_ float InvSqrt(const float& x)
	{
		udword tmp = (udword(IEEE_1_0 << 1) + IEEE_1_0 - *(udword*)&x) >> 1;   
		float y = *(float*)&tmp;                                             
		return y * (1.47f - 0.47f * x * y * y);
	}

	//! Computes 1.0f / sqrtf(x). Comes from Quake3. Looks like the first one I had above.
	//! See http://www.magic-software.com/3DGEDInvSqrt.html
	inline_ float RSqrt(float number)
	{
		long i;
		float x2, y;
		const float threehalfs = 1.5f;

		x2 = number * 0.5f;
		y  = number;
		i  = * (long *) &y;
		i  = 0x5f3759df - (i >> 1);
		y  = * (float *) &i;
		y  = y * (threehalfs - (x2 * y * y));

		return y;
	}

	//! TO BE DOCUMENTED
	inline_ float fsqrt(float f)
	{
		udword y = ( ( (sdword&)f - 0x3f800000 ) >> 1 ) + 0x3f800000;
		// Iteration...?
		// (float&)y = (3.0f - ((float&)y * (float&)y) / f) * (float&)y * 0.5f;
		// Result
		return (float&)y;
	}
Check out FastSqrt and the related functions. They still need benchmarking, because in some cases they might be slower than the default implementation.
As the file itself says: "don't use it blindly".
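
A rough benchmark sketch along those lines, using a standalone rewrite of the Quake-style RSqrt above against libm's sqrtf (the iteration count and names are arbitrary, and results depend heavily on compiler flags and CPU):

Code:
#include <math.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

typedef unsigned int u32;   /* assumes a 32-bit unsigned int */

/* Standalone version of the Quake-style reciprocal square root. */
static float rsqrt_approx(float number)
{
    float x2 = number * 0.5f, y = number;
    u32 i;
    memcpy(&i, &y, sizeof i);         /* safe type pun */
    i = 0x5f3759df - (i >> 1);
    memcpy(&y, &i, sizeof y);
    return y * (1.5f - x2 * y * y);   /* one Newton iteration */
}

int main(void)
{
    const int N = 10000000;
    volatile float sink = 0.0f;       /* keeps the loops from being optimized away */
    clock_t t0;
    int i;

    t0 = clock();
    for (i = 1; i <= N; i++) sink += 1.0f / sqrtf((float)i);
    printf("libm:   %.3f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (i = 1; i <= N; i++) sink += rsqrt_approx((float)i);
    printf("approx: %.3f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

    return 0;
}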
 
Old 12-24-2008, 08:35 PM   #34
Roflcopter
LQ Newbie
 
Registered: Dec 2008
Distribution: Windows XP / Ubuntu 8.10 / Fedora 10
Posts: 22

Original Poster
Rep: Reputation: 16
DLLs are just an example; I'm aware Linux doesn't use them. But do you think anything Windows-related will require VC++? I'd really like to avoid using VC++, not so much because it costs money but because I'm afraid I'll come to depend on MFC and ATL and such, and then if I need to program for *nix I'll be stuck.

BTW, I'm finding this speed comparison interesting.

Quote:
Small wonder you found only references to VC++ when you looked into DLLs; they are a windows feature (NIX uses shared objects instead). Which should be enough to stress the real drawback of compiled languages: you cannot trust that your code will run on any CPU or operating system - in fact, as a rule it will not.
It was my understanding that if I wrote C++ code, I could compile it on Windows or *nix as long as I wrote it in a platform-independent way. It makes sense to me: if I make a C program using MinGW (just GCC repackaged) and, say, only stdio.h (which I assume exists on whatever distro you're using), like this:

Code:
#include <stdio.h>

int main( void ) {
    printf("Hello World");
    return 0;
}
Shouldn't I be able to compile that code with MinGW on Windows and GCC on Linux and get a compiled executable that does the same thing on either system?

Last edited by Roflcopter; 12-24-2008 at 08:51 PM.
 
Old 12-24-2008, 10:25 PM   #35
jiml8
Senior Member
 
Registered: Sep 2003
Posts: 3,171

Rep: Reputation: 116
To use MinGW on Windows, you'll also usually have to include windows.h.

There are a lot of issues with compiling for Windows with MinGW. I myself have been stuck for a while trying to get a MySQL interface to compile properly. It compiles fine on Linux, but there are all kinds of library issues in getting it to work on Windows, and so far it doesn't. This actually has a project of mine hung up to a certain extent.

But, yes, you can certainly do it. I have a GTK application (part of the same project that needs the MySQL interface) working the same on both Linux and Windows, using a cross-compiler.
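
For the hello-world example above, the two builds look roughly like this (assuming a Debian-style MinGW cross-compiler prefix of i586-mingw32msvc-; the prefix varies by distribution):

Code:
# native Linux build
gcc hello.c -o hello

# Windows build, cross-compiled from Linux
i586-mingw32msvc-gcc hello.c -o hello.exe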
 
Old 12-25-2008, 02:15 AM   #36
ErV
Senior Member
 
Registered: Mar 2007
Location: Russia
Distribution: Slackware 12.2
Posts: 1,202
Blog Entries: 3

Rep: Reputation: 62
Quote:
Originally Posted by jiml8 View Post
Speed is ALWAYS important - always!
It isn't always important. It depends entirely on the task, and not all tasks produce a full CPU load, even unoptimized. If your program runs for one second within a day, optimizing it will be a waste of time.

Quote:
Originally Posted by jiml8 View Post
You should always program as if your software was to run acceptably on a 1 MHz 8088 processor.
You will pay for that with development speed. The main reason people use interpreted languages is faster application development compared to compiled languages. Sure, people can develop programs that run lightning fast. But normally they prefer to spend a month making a somewhat slower program rather than a year writing a lightning-fast one in assembler.

Quote:
Originally Posted by Roflcopter View Post
DLLs are just an example, I'm aware Linux doesn't use them.
Linux has shared libraries that do the same thing.

Quote:
Originally Posted by Roflcopter View Post
But do you think anything Windows-related will require VC++?
Theoretically, you can use winelib to compile an application that uses the WinAPI on Linux. It will require Wine to run, though. See the Wine documentation.

Quote:
Originally Posted by Roflcopter View Post
I'd really like to avoid using VC++, not because it costs money
There is a free version of Microsoft Visual Studio, called Microsoft Visual Studio Express. And there are free compiler toolkits.

Quote:
Originally Posted by Roflcopter View Post
but more because I'm afraid I'll come to depend on MFC and ATL and such and then if I need to program for *nix I'll be stuck.
Then don't use MFC/ATL, and stick to cross-platform libraries from the beginning.
 
Old 12-25-2008, 03:24 AM   #37
Bassy
Member
 
Registered: Oct 2004
Location: Venezuela
Distribution: Open SuSE v.-11.1.
Posts: 36

Rep: Reputation: 15
Wow! There's certainly not an easy answer for that one!

Listen (or read XD): in my opinion it strongly depends on what you wanna do... there is no reason not to learn something... except, sometimes, time.

If you want to do theoretical applications like simulations or HPC, for example, you should consider C; but for larger applications, including user interfaces, C++ could sometimes make it easier for you. You said

Quote:
I read that it's absolutely not necessary to learn C before C++
but that's only partially true, because in my experience... learning and using C helps you get a better understanding of how the computer architecture behaves and interacts with your code (ASM is an extreme example too, but I won't go down that branch), and you also said:

Quote:
I do not want to learn Java. I've been down that road... shudder...
But hey! It's a nice technology to use when you want to build a big app where HPC is not needed (a lot of user I/O), or for mobile programming...

That's it... IF it's a matter of time, THEN guide yourself by your app's requirements and choose the language that suits you better... but if it's just for learning purposes... you could do something like I did... C -> (ASM... optional) -> C++ and then Java...

That way you'll learn better what programming is about, and you'll gain some pretty nice insights into techniques!


Last edited by Bassy; 12-25-2008 at 03:26 AM.
 
Old 12-25-2008, 09:41 AM   #38
Sergei Steshenko
Senior Member
 
Registered: May 2005
Posts: 4,481

Rep: Reputation: 454
Quote:
Originally Posted by ErV View Post
...
Linux has shared libraries that do the same thing.
...

Well, as I recently learned, not quite.

Yes, Linux .so files are dynamically linked at runtime, and in this sense they are the same as DLLs.

However, on Windows, once you've loaded a DLL into memory you can't load another one with the same name, and this is the ultimate DLL hell.

In Linux you can have different versions of the same .so file coexisting in memory simultaneously.
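
A minimal sketch of this with dlopen() (the library names here are hypothetical; link with -ldl):

Code:
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Two versions of "the same" library, mapped side by side. */
    void *v1 = dlopen("libfoo.so.1", RTLD_NOW | RTLD_LOCAL);
    void *v2 = dlopen("libfoo.so.2", RTLD_NOW | RTLD_LOCAL);

    if (v1 && v2)
        printf("both versions loaded: %p and %p\n", v1, v2);
    else
        fprintf(stderr, "dlopen failed: %s\n", dlerror());

    if (v1) dlclose(v1);
    if (v2) dlclose(v2);
    return 0;
}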
 
Old 12-25-2008, 09:46 AM   #39
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by Roflcopter View Post
DLLs are just an example, I'm aware Linux doesn't use them.
Linux uses .so files instead. There are some fundamental differences (not just the file format and extension), so the concepts don't translate 100%. But enough of the concepts are shared that you can easily write software using .dll's on Windows and .so's on Linux, with most of the source code portable and the differences isolated in small sections (that are rewritten for each platform).

Quote:
But do you think anything Windows-related will require VC++? I'd really like to avoid using VC++, not because it costs money but more because I'm afraid I'll come to depend on MFC and ATL
MFC, ATL, .NET, and all the other ways MS pretends to be helpful to developers were designed, and function, as traps to get you to write non-portable applications. They were obviously designed to lead to tangled, non-modular applications that would maximize the difficulty of porting later to a different platform. But even if you never wanted to port, the ways they push you toward non-modular design also push you toward less maintainable code.

It is definitely worth the extra effort up front to start with some open source alternative.

I write very little GUI code, and none recently, so I can't give you any specific advice on which portable GUI toolkit to choose. There are a few that are either completely free or free for non-commercial use. That was probably the question you should have asked instead of C vs. C++: which GUI toolkit should you start with?

The MS tools also provide various classes for purposes other than GUI work (such as containers and strings). For those, you don't even need a competing toolkit: the STL containers, which are compatible across all C++ platforms, are at least as good as the MS ones. Using the MS ones is just stupid.
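
For example, a snippet like this, using standard containers and strings instead of MFC's CArray/CString, compiles unchanged with VC++, MinGW, or GCC on Linux:

Code:
#include <iostream>
#include <string>
#include <vector>

int main()
{
    // Standard containers instead of Microsoft-specific ones
    std::vector<std::string> names;
    names.push_back("portable");
    names.push_back("everywhere");

    for (std::vector<std::string>::size_type i = 0; i < names.size(); ++i)
        std::cout << names[i] << '\n';

    return 0;
}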
 
Old 12-25-2008, 09:51 AM   #40
ErV
Senior Member
 
Registered: Mar 2007
Location: Russia
Distribution: Slackware 12.2
Posts: 1,202
Blog Entries: 3

Rep: Reputation: 62
Quote:
Originally Posted by Sergei Steshenko View Post
Well, as I recently learned, not quite.

Yes, Linux .so files are dynamically linked at runtime, and in this sense they are the same as DLLs.
I was talking about basic usage. I didn't mean they (*.so and *.dll) are completely identical in all functionality.
 
Old 12-25-2008, 10:32 AM   #41
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by johnsfine View Post
In the majority of programming speed isn't important.
Quote:
Originally Posted by jiml8 View Post
Speed is ALWAYS important - always!
Quote:
Originally Posted by jiml8 View Post
Speed should NEVER be neglected in a design. At any point, even if that segment of code only runs once.
Quote:
Originally Posted by johnsfine View Post
If you put that focus into speed for every line of code, you would get only a fraction as much work done and you would produce an unmaintainable mess of code.
Quote:
Originally Posted by jiml8 View Post
More strawmen.
Unless we're using a very different meaning for the word "important", it wasn't a "strawman" argument at all.

If some user or future code maintainer would ever notice some result of the difference between two different ways I could code something, then it might be important. If they wouldn't ever notice it, then it isn't important.

The giant program I work on professionally has a LOT of code in it to catch rare unrecoverable errors and select the right messages to help the user diagnose the problem. Any given section of that code runs approximately never (not even once per thousand executions of the program) and then takes less than a microsecond to complete. In an expensive piece of software doing a complicated and important task, it is important that those sections of code exist and be correct. There is no reasonable definition of "important" for which the speed of those sections is "important". If they ran a thousand times slower (a millisecond instead of a microsecond) then even on the rare occasions that they run at all, no one could notice the difference.

When I design such code, I put extra attention into reliability and testability (the less likely code is to be used, the more challenging it is to make sure it is correct) and into keeping it small. I put no attention into making it fast.

That is of course all an anecdotal (rather than statistical) argument against your "always" claim. But it shouldn't take much to refute "always". My own long and varied experience in software engineering refutes far more than "always" in your claim.

My own statement about the "majority of programming" is harder to prove. I don't think anyone can have better than anecdotal evidence for that. I have a lot of anecdotal evidence to support the claim that the majority of lines of code in professional programs run so rarely per execution of the program that no reasonable definition of "important" could make their speed important. What I have read online contributes lots of other programmers' anecdotal evidence supporting the same claim.

Run some profiles on your own code. Set some reasonable cutoff for the fraction of the time that matters (such as 59.9 seconds of a one-minute execution time). Now what fraction of the source code isn't included in a profile that identifies "hot spots" totaling 59.9 out of 60 seconds? I expect the majority (as in more than 50%) of the source code will be left out of that collection of hot spots. I work on performance-critical programs where more than 99% of the source code would be left out of such a hot spot list. But all that code still has to get written, which means a lot of programmers, spending a lot of programming time, writing a lot of code for which execution speed isn't important.
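
With gprof, for example, the workflow is roughly this (any profiler will do; the point is just to see where the time actually goes):

Code:
gcc -O2 -pg myprog.c -o myprog    # build with profiling instrumentation
./myprog                          # run normally; writes gmon.out
gprof ./myprog gmon.out | less    # flat profile: hot spots listed first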
 
Old 12-25-2008, 11:14 AM   #42
jiml8
Senior Member
 
Registered: Sep 2003
Posts: 3,171

Rep: Reputation: 116
Quote:
But all that code still has to get written, which means a lot of programmers, spending a lot of programming time, writing a lot of code for which execution speed isn't important.
And again I will disagree. They spend a lot of time writing code for which speed is less important than other considerations.

I have in the past had to track down obscure bugs that occurred intermittently and seemed to have timing issues within them (I have NOT encountered this in Linux, though I have encountered it in a number of embedded systems, in JOVIAL J73 code, and in Ada code...). Commonly, what I would find is that an uncommonly used error routine or an obscure branch was taking too much time to run, with the result that the mainline code did not have enough time to complete ITS job (typically a register won't get loaded in time). In the vast majority of cases, this came about because the author of the obscure branch said "speed isn't important" and spent no time on it. Thus, he used inefficient type transformations, or set up a loop without considering how much time that loop would take to run, and otherwise took more than his fair share of the processor.

So he saved time coding, and we wasted time debugging a case that occurred only intermittently and was hard to set up to make repeatable so we could find it. Such things never show up under debuggers; the debugger slows things down enough that the load of that register does finish before the mainline code tries to use its contents.

You talk about time-critical code; that is pretty much all I do these days: embedded, signal processing, telecommunications type things. And again, I say unto you that speed IS important... always. It might be the least important of the criteria in front of you (schedule, budget, frequency that this code will run, space), but it is NEVER unimportant. And if you start with the assumption that speed is ALWAYS important, then when you code this branch you will say: "this loop might take a while to run... I wonder if there is a more efficient way... how long would it take me to recode it that way..." or "if this path is taken during a time-critical operation, what can I do to make it take less time to run..." At an absolute minimum, you could put a FIXME in the code and send a memo to the boss: "I did it this way because of schedule pressure, but there is this potential problem down the line, and if we encounter these symptoms we might want to look here."

I have seen loops in obscure paths in user code that would suck up 99% of the processor for milliseconds at a time, performing tests that didn't need to be performed. The person writing the code decided to handle "all cases" and "speed wasn't important".

When you say that speed is ALWAYS important, you are less likely to do such things, and far more likely to at least flag it as a potential problem when you DO do it.

Speed is always, always, always important.
 
Old 12-25-2008, 01:22 PM   #43
Roflcopter
LQ Newbie
 
Registered: Dec 2008
Distribution: Windows XP / Ubuntu 8.10 / Fedora 10
Posts: 22

Original Poster
Rep: Reputation: 16
Quote:
That was probably the question you should have asked instead of C vs. C++. Which GUI tool kit should you start with?
Why? I'm not interested in only programming GUIs, though I'd like to be able to make them if I want to. I know you can do GUIs in both languages, and in many others as well. I was only using those libraries as examples of reasons I feared VC++.

Last edited by Roflcopter; 12-25-2008 at 01:28 PM.
 
Old 12-26-2008, 12:44 PM   #44
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by Roflcopter View Post
Why? I'm not interested in only programming GUIs, though I'd like to be able to make them if I want too.
Because GUIs are the area where you have the greatest need for more than C++ and its standard libraries.

If you want to program GUIs at all, you should select a good GUI toolkit, find some tutorials associated with that toolkit, and use those as a major part of your process of learning C++.

Quote:
I was only using those libraries as examples of reasons I feared VC++.
For anything other than GUI work, everything you need beyond the C++ language itself should be in the STL (Standard Template Library). When you read the documentation for whatever container classes you want to use, just make sure they aren't Microsoft-only.

If you use the free version of VC++, you'll obviously need to be much more careful about any associated documentation or tutorials; they likely push you away from the STL in favor of Microsoft versions. Even with MinGW, you may find some tutorials and examples pushing you toward doing things the MS way.

That's another advantage of starting with a portable open-source GUI toolkit: even the non-GUI portions of the examples and tutorials will use portable methods.

If you don't want a GUI at all, maybe the Cygwin port of GCC to Windows will be easier for you to use portably than the MinGW one. Both are free and they are pretty similar, but MinGW gives you more support for writing Windows-specific GUIs, while Cygwin GCC gives you more support for writing portable console applications. Either would need a GUI toolkit added in order to let you write portable GUIs. I think most of the available portable GUI packages expect MinGW rather than Cygwin. So if you want to write GUIs, select the GUI package first.

Last edited by johnsfine; 12-26-2008 at 12:49 PM.
 
Old 12-27-2008, 12:36 AM   #45
Roflcopter
LQ Newbie
 
Registered: Dec 2008
Distribution: Windows XP / Ubuntu 8.10 / Fedora 10
Posts: 22

Original Poster
Rep: Reputation: 16
Well, it sounds like C++ would be the best choice for what I'm after. Not ruling out C, but C++ sounds like a useful one to learn.

As far as GUIs go, I'd been looking at wxWidgets. Do you have any suggestions?
 
  

