help: smartest way to create multiplatform apps in C/C++/asm
I have a bit of a quandary. For over 2 years I've been building applications that run on linux and windoze, and compile to both 32-bit and 64-bit executables. The largest of these applications is a 3D game/simulation engine. All my code is C/C++, plus a few small-to-modest-size assembly-language files (4 separate files for each, to cover 32-bit and 64-bit on both linux and windoze).
A while back my winxp64 drive went up in flames and I had no choice but to switch to 64-bit win7. Admittedly this did provide a way to solve one annoyance --- I was not able to write and assemble a 64-bit version of my SIMD/SSE2/SSE3/SSE4/AVX/FMA code (with 16 * 256-bit registers versus only 8 * 128-bit registers in 32-bit mode). That's because winxp64 only saved 8 * 128-bit SIMD registers on context switches. Sheesh!
MASM syntax is fairly different from ATT/gas syntax, but at least the two 32-bit versions of each assembly-language file are pretty much just an instruction-for-instruction syntax translation.
No such luck in 64-bit mode. The registers chosen to pass arguments, and which registers are preserved versus volatile, are very different on windoze and linux (independent of MASM versus gas syntax). So all the 64-bit assembly-language files necessarily differ substantially (especially in register usage).
Another huge annoyance in my attempt to figure out how to continue development on both platforms is that VS2005pro doesn't work on windoze7. Oh, some people claim VS2005 works on windoze7, but it clearly doesn't for me (after several attempts). Plus it may not have been updated to support AVX/FMA and other newer SIMD instructions (if they have the same attitude toward development tools as they do toward the OS).
So, to summarize, my situation is this:
- my app builds and runs fine as both 32-bit and 64-bit apps on my 64-bit ubuntu 12.04 linux with gnu tools (and codeblocks as my IDE).
- when I was developing on winxp64, my app would build and run fine as a 32-bit app, but not as a 64-bit app (because the 16 * 256-bit SIMD registers were not saved across context switches). So I never wrote MASM versions of my 64-bit gas-syntax assembly-language files.
- VS2005pro does not work on win7, and probably if somehow I managed to get past that, I suspect VS2005pro won't handle AVX/FMA level instructions [and will have other problems].
So now I need to decide the best way forward, to get both 32-bit and 64-bit versions of my apps working on both linux and windoze.
Option #1: Switch to codeblocks on win7, throw away my 32-bit MASM-syntax files, and build with gas-syntax files on both OS. I would still need to create separate 64-bit gas-syntax files, because the function protocol on windoze is very different (and quite badly designed).
Option #2: Buy VS2010 and write 64-bit MASM-syntax files to mirror my 64-bit gas-syntax files (except for function entry, function exit, and the need to adopt different registers throughout the routines, because the 64-bit windoze specification preserves different registers and passes arguments in different registers).
I am rather torn. It would be convenient to perform all my development work in a single IDE, and codeblocks is available for both OS. However, the codeblocks IDE is not as spiffy or powerful as the VS IDE, though that difference is not a big deal to me.
OTOH, the fact that UNICODE characters are 16 bits on windoze (UTF16) and either 8 bits or 32 bits on linux (UTF8 or UTF32) has been one enormous pain in my butt since the beginning. At this point I'd love to switch to UTF8 for everything, but windoze appears to have little or no support for UTF8. Apparently I can't just set a locale to UTF8, then printf() or sprintf() UTF8 character strings and expect correct output. And windoze does not support UTF32 at all, as far as I can tell. Though I'm not 100% sure, it appears that working with the GNU (mingw) toolset on windoze gets me support for UTF8... I think. I just hope the cairo/pango/freetype trio draws UTF8 strings into memory images. Like I said, UNICODE characters are one huge pain in the cross-platform butt!
I hate to spend another pile of money for VS2010 to continue development. I'm tired of buying more swimming pools for Bill Gates and friends every time I turn around, when I much prefer linux and linux tools anyway (overall).
Any thoughts? And if you know exactly what is the situation with UTF8 characters and strings on windoze, please explain. I won't like it, but I would be okay if my windoze implementations only run on windoze7 and newer OS (in case they added support for UTF8, which somehow I doubt, even though their browser supports UTF8).
I'm only an amateur asm coder; I started with tasm on DOS back in the '90s and eventually made my way to nasm these days, due to the intel syntax. Nasm is available on both linux and win, so maybe you could convert your MASM-syntax files? I haven't written asm from scratch on windows recently, but I would have thought you could write your code to preserve any affected registers on both platforms? ...happy to be enlightened though.
As far as buying a new version of VS, wouldn't Eclipse do the job and also be cross platform?
A few macros could help cover up a decent fraction of the remaining differences.
I use VS2005pro on W7 for editing and debugging a 64-bit project (built with a compiler and boost-build system outside VS). There are a few inconveniences: during debug I am unable to see some things I ought to have access to. But generally it is usable.
On the main points you asked about, sorry I can't help. You seem to be more experienced yourself in the topics on which to base those decisions than those who are likely to see and reply to your post.
Edit: One extra possibility you didn't mention: On Windows you might compile with 64-bit MINGW, including gas syntax, but edit and debug with VS2005. I'm frustrated by a lot of flaws in VS2005, but I still find it a better editor than either CodeBlocks or VS2010 and I find it a better debugger than CodeBlocks.
the problem of reliable and up-to-date assemblers
The problem was, my assembly mostly uses the latest-and-greatest 16 * 256-bit register AVX/FMA3/FMA4/other instruction sets. When I last checked, the various assemblers weren't completely up to date. Some were pretty close, others were far behind, and for some the latest instruction sets reportedly "have bugs".
So it seemed back then at least that only gas and masm were rigorously kept up to date. In fact, the latest versions of those two assemblers seemed to gain the new instructions slightly before the CPUs that contained them became widely available.
I guess the bottom line is this. I have learned through difficult and very painful experience to NOT adopt anything that isn't solid, stable, and kept that way by active developers (open-source or otherwise). So even if one of the assemblers is up to date today, I'd hesitate to adopt it until it was diligently kept solid and up to date for a few years.
I know the ATT/gas syntax is rather lame. However, it is much easier than MASM syntax for a program to process, which matters when converting from ATT/gas to MASM. And that's what I did for the 32-bit code: a line-for-line translation (except for a few extra directives that MASM requires, which I put in comments in the ATT/gas version, then extract during processing).
Thanks for the tip though. I should remember to go back and check out alternate assemblers now and then. There's no way to know when someone will step up to the plate and produce a first-rate, up-to-date assembler.
As for eclipse, I adopted eclipse before I switched to codeblocks. I simply could not stand eclipse, mainly for two reasons. First, it was slow, slower, slowest... just incredibly, insanely slow. Second, it had a very "authoritarian attitude" and kept forcing bogus practices and decisions upon me... until I couldn't stand it any more. Example: I want to enter text, period. I want to enter all tabs myself (no auto-tabbing), and I want to make all formatting decisions for myself. It drove me absolutely NUTS to have some pretend-sentient program jerk my text-insert cursor around, insert characters, move characters around, etc. Yes, I tried my best to find a profile I could live with, but gave up. I have learned better than to deal with anything or anyone with an authoritarian mindset that decides for me how I must do things.
Of course codeblocks is far from perfect, but at least it doesn't often actively work against me.
win64 versus linux64
And yikes no! The win64 passing scheme is most certainly less efficient!
The biggest single absurdity is the way win64 skips over [CPU and SIMD] registers when win64 allocates function arguments. Plus, win64 only passes a maximum of 4 arguments in registers, no matter what.
Consider the following example: what happens when your program passes 6 integers (8-to-64-bit) and then 8 floating point vectors (1*f32, 2*f32, 4*f32, 8*f32 or 1*f64, 2*f64, 4*f64)?
win64 passes the first 4 arguments in rcx, rdx, r08, r09... then passes all 8 floating-point vectors on the memory stack! Say what?
Say yes, that is the completely crazy way win64 works:
- The 1st argument is passed in rcx or xmm0.
- The 2nd argument is passed in rdx or xmm1.
- The 3rd argument is passed in r08 or xmm2.
- The 4th argument is passed in r09 or xmm3.
- The 5th argument and beyond are passed on the memory stack.
So, is linux64 any better? You betcha!
The first six integer arguments (no matter what positions they occupy in the argument list) are passed in rdi, rsi, rdx, rcx, r08, r09. The first eight floating-point scalars or vectors are passed in xmm0/ymm0 through xmm7/ymm7.
In other words, in this example, win64 passes exactly 4 integers in registers, and throws all 8 floating point vectors out onto the memory stack... while linux passes all 14 arguments in registers!
Holy smokes! win64 has 8 locations reserved for argument passing (4 int, 4 float/vector), but due to their insane, completely artificial allocation scheme, even the 4 floating-point vectors they could have stored in xmm0~xmm3 get shoved out into memory! Crazy!
But wait, the reality is even worse than that! What if you're passing 4-element double-precision floating-point vectors? On win64, they're shoved out into memory no matter what argument position they occupy! So if the first 4 arguments were f64vec4 vectors, all 14 arguments would be put on the stack, while on linux all 14 arguments would be stored in registers, no matter what order the arguments are passed in!
You think I'm kidding, don't you? I'm not. How does linux64 do better? Well, the answer is simple. In 64-bit mode each SIMD register holds up to four 64-bit floating point values or eight 32-bit floating point values (one f64vec4 or one f32vec8). So linux will store 256-bit vectors (f64vec4 or f32vec8) into the ymm SIMD registers. win64 doesn't acknowledge the existence of 256-bit wide ymm SIMD registers, and thus stores all such values on the stack no matter what order arguments are passed.
Furthermore, what if we change the order of arguments to assembly-language functions? In linux, a large variety of function argument orders put every argument into exactly the same register as before. For example, all the following (and several more) argument orders put the arguments (a, b, c, d, e, f, g) into the exact same register... and thus called assembly language functions need not be re-written:
int = func (int a, int b, int c, int d, f64 e, f64 f, f64 g, f64 h);
int = func (int a, f64 e, int b, f64 f, int c, f64 g, int d, f64 h);
int = func (int a, int b, f64 e, f64 f, int c, int d, f64 g, f64 h);
int = func (f64 e, f64 f, f64 g, f64 h, int a, int b, int c, int d);
Pretty nifty, huh? On win64, any and every change of argument order changes where arguments are passed to called functions. Not a very nice situation to assembly-language programmers, huh?
The fact is, win64 argument passing is massively worse than linux64's. If I were assigned the job of inventing an argument-passing scheme worse than win64's, I'd have a difficult time (short of simply decreeing that all function arguments get passed on the stack).
So, in trying to be "different", the morons at macroshaft screwed themselves. That's what happens sometimes when you have nefarious, diabolical commercial goals designed to harm others.
Your point about the behavior of MINGW assembler output in a VisualStudio work flow is interesting. You might possibly be correct that the VS2010 linker will accept object files generated by the MINGW compiler and assembler. That would be sweet, because at least I could write all assembler code in ATT/gas syntax, which is a lot easier than converting to MASM.
BTW, I'd have to try VS2010 because VS2005pro simply doesn't work, and I'm not about to spend yet another week trying to make it work. But that's still an interesting possibility.
Does anyone know whether the VS2010 linker will accept object files generated by the MINGW compiler and assembler?
We haven't actually tried the VS2010 linker, but I can't imagine that ability went away. We used VS2005 linker sometimes and VS2008 linker sometimes, with 64-bit mixed .obj files (VC++ and MINGW).
I'm not sure why you want to use MS linker, rather than MINGW linker (to create Windows 64-bit .exe and/or .dll). But if you want to, I expect there is no problem.
So I guess you're saying you get VS2010 to invoke the mingw tools, huh? And perform intellisense and debugging on the output produced by the mingw tools?
An object file generated by one tool is just the same as an object file generated by a different tool on the same platform, as far as I'm aware - the linker will just combine the equivalent sections and make the resulting binary relocatable.
Distinguish the following parts of the tool chain:
1) Editor
2) Compiler
3) Linker
4) Make system
5) Debugger
One of the projects I work on has two basic Windows 64-bit paths:
1) Editor : VS2005
2) Compiler : Intel
3) Linker : VS2005
4) Make system : Boost-build
5) Debugger : VS2005
and the second path:
1) Editor : none (program generated C++ source)
2) Compiler : MINGW
3) Linker : MINGW (mixing in Intel .obj files)
4) Make system : Custom
5) Debugger : VS2005
The other project also has two basic 64-bit paths:
1) Editor : VS2010
2) Compiler : VS2008
3) Linker : VS2008
4) Make system : VS2010
5) Debugger : VS2010
and the second path:
1) Editor : none (program generated C++ source)
2) Compiler : MINGW
3) Linker : MINGW (mixing in VS2008 .obj files)
4) Make system : Custom
5) Debugger : VS2010
We have mixed a few other ways. But we have not used a non VS compiler with a VS make system. I know you can, but we haven't.
Debug is always imperfect. The debugger has a fairly good idea of which source line is associated with which asm instruction, despite mixing compilers. The debugger tends to have a poor idea of which local variables are where, in registers or the stack frame, at any given spot in the code. Using a compiler other than the one the debugger was paired with seems to make that a little worse.
I often need to look at disassembly to figure out for myself where local variables are, then use register or memory windows to find them. The last time I used the VS2005 debugger on Windows7, I couldn't get the register window to show me the basic floating-point registers (the bottom 64 bits of each of the 16 XMM registers). I don't recall whether that had ever worked; I debug in too many different environments. I tend to look at doubles in memory windows more often (based on pointers from the register window).
I've seen the VS messages about "intellisense" but never paid much attention to which feature that supports. Is that the "go to definition" etc. feature when you click on symbol? That works badly with the mix of VS2010 editor and VS2008 compiler, and pretty much doesn't work at all for most of our uses.
MINGW, Intel and MS all use the same basic .obj file format on Windows. Cygwin uses a different format, even though it is the same platform.
IMO, multi-platform / portable and asm do NOT belong in the same sentence.
I would only use asm if there were no other option, because it is obviously not portable / multi-platform.
There were a number of articles here and on Linux news sites that discussed how to use SIMD and how to make gcc auto-vectorize ... I think these qualify as the smartest things to do.
You and many other people may want to pretend that assembly-language and cross-platform don't belong together, but I disagree.
I will agree, however, that it is wise to keep the quantity of assembly-language to a minimum, but that's true for non-portable applications too.
BTW, I was very impressed at how good gcc and g++ have gotten at generating and optimizing SIMD code. Nonetheless, my hand-coded SIMD/AVX/FMA+ vertex-transformation routine is still over 3x faster.
To be sure, other routines don't speed up so much. For example, my 4x4 matrix multiply is only about 1.6x faster than compiler generated code.
Nonetheless, assembly can be great and very appropriate for many trivial routines. For example "return the 64-bit CPU clock register", which is one machine instruction plus a return instruction.
But assembly-language is also very appropriate for complex SIMD-heavy routines --- when those routines are executed frequently by an application.
I keep assembly-language to a minimum, but assembly-language benefits some of my applications very much. For me, ASM is a keeper!