Old 04-21-2012, 11:25 AM   #1
jarubyh
Member
 
Registered: Aug 2011
Location: $HOME
Distribution: Slackware, FreeBSD, Debian
Posts: 50

Rep: Reputation: 1
Which assembler should I learn to use first?


Hallo.

I've got a bit of C++ and various scripting language experience under my belt, so I've decided to pick up a bit of assembly language know-how. I've been using GAS so far, but I've heard quite a lot about NASM and various other assemblers, and I'm wondering if there is one assembler that is fairly standard/most relevant. Ease of use isn't really an issue. What do you recommend and why?
 
Old 04-21-2012, 11:37 AM   #2
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,830

Rep: Reputation: 7308
I do not think there can be a standard. I suggest you pick a hardware platform and a goal, and learn to use that system.
 
Old 04-21-2012, 11:40 AM   #3
jarubyh
Member
 
Registered: Aug 2011
Location: $HOME
Distribution: Slackware, FreeBSD, Debian
Posts: 50

Original Poster
Rep: Reputation: 1
I'm on an x86 machine, by the way.
 
Old 04-21-2012, 01:31 PM   #4
dugan
LQ Guru
 
Registered: Nov 2003
Location: Canada
Distribution: distro hopper
Posts: 11,220

Rep: Reputation: 5319
Why not use yasm, which supports both syntaxes?
 
Old 04-21-2012, 02:29 PM   #5
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by jarubyh View Post
I'm on an x86 machine, by the way.
32 bit or 64 bit?

x86_64 asm is a much more sane assembler language than x86 32 bit. Learning it is more likely to have real use than learning 32 bit. So if you happen to be running 32 bit Linux on 64 bit hardware, I suggest switching to 64 bit Linux so you can learn x86_64 asm.

Very few tutorials (I don't know ANY good one) teach x86_64 asm. Most teach 16 bit DOS x86 asm, which is beyond obsolete and worthless as well as strange and unique, so learning it is a rotten introduction to asm.
Others teach 32 bit x86 asm, but not well enough to be worth learning that instead of 64 bit.

If you are running Linux, it is best to use gas syntax. That lets you have the same view of asm across the widest range of tools.
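For anyone who has not seen the two syntaxes side by side, here is the same pair of instructions in each (a minimal sketch; the register choices are arbitrary):
Code:
# AT&T (gas) syntax: source first, destination second; registers carry a % prefix
movl    $42, %eax           # eax = 42
addl    %ebx, %eax          # eax += ebx

; Intel (NASM) syntax: destination first, source second; no sigils on registers
mov     eax, 42
add     eax, ebx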

If you are running Windows, there is one very good reason for learning masm syntax. The best GUI debugger is Visual Studio (not that VS is particularly good, just that GUI debuggers based on gdb tend to be worse than VS).

One very important tool in learning asm is the ability to switch to disassembly view in a decent GUI debugger and step through the code your C or C++ compiler generated for some interesting function. Doing that for a function compiled without optimization is easier but less informative. Doing that for a function compiled with optimization, but with the (often inaccurate) debug info that you can force back on in an optimized compile, is much harder and much more informative. Do both, and do them often.
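Even without a GUI debugger, the same disassembly view is available from the standard binutils and gdb tools. A minimal sketch (the file and function names are placeholders):
Code:
gcc -O2 -g example.c -o example   # optimized build with debug info turned back on
objdump -d -S example             # disassembly interleaved with the original C source

gdb ./example
(gdb) break some_function         # stop at the function of interest
(gdb) run
(gdb) disassemble                 # dump the machine code for the current function
(gdb) stepi                       # step one machine instruction at a time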

When last I compared gdb based GUI debuggers, my favorite was insight
http://sources.redhat.com/insight/
I'm not sure of its current status since it has gotten rather little developer attention in the last few years. But it may still be best.
The gdb based GUI debugger in an IDE such as Code::Blocks or KDevelop is probably the next best bet. But there are a bunch of choices for gdb based GUI debuggers, and I expect all have some ability to step through a disassembly view.

Most asm tutorials focus on writing either whole programs or stand alone code (direct boot from BIOS without an OS) in asm. Those are both lame ideas. I would be typing all day if I tried to list the reasons it makes far more sense to learn how to write asm functions designed to be called by C calling standard from programs written in C or C++ (or whatever, because many languages can call using the C calling standard). I hope you trust me on that. Your experience learning asm will be far better if you do.
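To make that concrete, here is a minimal sketch of an asm function written to be called from C on x86_64 Linux (the name add2 and the whole example are illustrative; the System V AMD64 ABI passes the first two integer arguments in %rdi and %rsi, and returns the result in %rax):
Code:
# add2.s -- assemble and link with: gcc main.c add2.s
        .text
        .globl  add2
add2:
        movq    %rdi, %rax      # rax = first argument
        addq    %rsi, %rax      # rax += second argument
        ret                     # result is returned in %rax
On the C side, only a declaration is needed: long add2(long a, long b);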

In learning x86_64 asm, you will want to download the PDF for the x86_64 ABI (application binary interface, which includes that C calling standard) as well as the several PDFs for the asm instruction set. I have found all of that on AMD's web site many times, but the URLs tend to change.

Last edited by johnsfine; 04-21-2012 at 02:36 PM.
 
Old 04-21-2012, 02:59 PM   #6
Sergei Steshenko
Senior Member
 
Registered: May 2005
Posts: 4,481

Rep: Reputation: 454
Quote:
Originally Posted by dugan View Post
Why not use yasm, which supports both syntaxes?
'as --help' produces, among other things:

Code:
  -mmnemonic=[att|intel]  use AT&T/Intel mnemonic
  -msyntax=[att|intel]    use AT&T/Intel syntax
Is there any other syntax?
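For reference, gas can even switch between the two syntaxes mid-file with directives, so the choice is per block of code rather than per assembler (a minimal sketch):
Code:
        .intel_syntax noprefix      # gas now accepts Intel syntax, bare register names
        mov     eax, 42
        .att_syntax prefix          # back to AT&T syntax
        movl    $42, %eax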
 
Old 04-21-2012, 03:36 PM   #7
sycamorex
LQ Veteran
 
Registered: Nov 2005
Location: London
Distribution: Slackware64-current
Posts: 5,836
Blog Entries: 1

Rep: Reputation: 1251
Quote:
Originally Posted by johnsfine View Post
32 bit or 64 bit?

x86_64 asm is a much more sane assembler language than x86 32 bit.
I'm ignorant when it comes to assemblers, but this statement caught my attention. Could you elaborate on it (or provide a link)? Thank you.
 
Old 04-21-2012, 04:11 PM   #8
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by sycamorex View Post
this statement caught my attention. Could you elaborate on it
16 bit x86 is a very quirky assembler language. There is a very limited number of general registers, and very quirky rules for which registers can do what. There are very quirky addressing modes and very quirky rules for which addressing modes can be used with which instructions with which registers. There is also segmentation, because 16 bits can only address 64KB directly, yet the original 16 bit x86 asm language addresses 1MB, and later 16 bit models extend the per process address range with segmenting well beyond 1MB.

32 bit x86 is much less quirky than 16 bit. It still has the same very limited number of general registers (though each is 32 bit instead of 16), is still fairly quirky about which registers can do what (but much less so than 16 bit), and is still pretty quirky about which combinations of instructions, registers and addressing modes you can use. It has a very ugly legacy floating point instruction set. It also has a whole second set of instructions and registers, called SSE, that includes a much better floating point instruction set. But most compilers don't use SSE, because different models of the chip have different subsets (from zero up) of the SSE instruction set. Since compilers need to be able to emit code for old chips, using SSE for floating point conditional on the target chip supporting it is a complicated mess, so generally legacy floating point is used even if the target chip supports full SSE. 32 bit x86 still supports the extended version of segmenting inherited from 16 bit, even though in 32 bit it accomplishes nothing: without segmenting you have a 4GB address space per process, and with segmenting you have no more than that. PAE paging allows the physical RAM per machine to be higher than the maximum virtual addressing per process (which is a very practical but quirky idea itself).

64 bit is another step toward less quirky. There are twice as many general purpose registers (and each is 64 bit instead of 32 bit) and slightly fewer special exceptions on which instructions or addressing modes combine with which registers. There are twice as many SSE registers (each still 128 bits, the same width as in 32 bit x86). But every model of x86_64 has the same SSE instruction set. So compilers for x86_64 typically use SSE for floating point and don't use legacy floating point (which is still in there, but redundant and easily ignored). A tiny stub of segmentation is still in there, but with virtually zero effect on anything and safe for you to completely ignore.
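The compiler behaviour described above is visible directly in gcc's flags: on 32-bit targets SSE scalar floating point must be requested explicitly, while on x86_64 it is the default (a sketch; foo.c is a placeholder):
Code:
gcc -m32 -O2 foo.c                       # 32 bit: legacy x87 floating point by default
gcc -m32 -msse2 -mfpmath=sse -O2 foo.c   # 32 bit: opt in to SSE scalar floating point
gcc -m64 -O2 foo.c                       # x86_64: SSE scalar floating point by default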

I very rarely write asm code. When writing asm code, the typical reason is that you can do a significantly better job than the best optimizing compiler at a performance critical function. Typically performance critical functions that justify that level of effort tend to show up in projects that for other reasons justify using 64 bit. So the skill of writing performance critical code better than a compiler can in 32 bit x86 is pretty much obsolete (though I still have that skill). In 32 bit x86 asm, a skilled human typically outperforms the compiler because a human is better at sorting through the interaction of the sequence in which steps are done against the constraints of having too few registers. In 64 bit x86 asm, a skilled human typically outperforms the compiler in those situations where the SIMD (single instruction multiple data) features of SSE can be put to good use.

I take great advantage of my asm knowledge every day, even though I almost never write asm code. The really big value in asm knowledge comes in diagnosing the trickiest defects that arise in C++ programs. I diagnose the hardest defects for a whole team of C++ programmers because asm knowledge is a very powerful tool in diagnosing C++ defects. Asm knowledge also lets you write more efficient C++ code once you have looked at the optimized asm output of C++ compilers enough times.

Last edited by johnsfine; 04-21-2012 at 04:14 PM.
 
1 member found this post helpful.
Old 04-21-2012, 09:32 PM   #9
Nominal Animal
Senior Member
 
Registered: Dec 2010
Location: Finland
Distribution: Xubuntu, CentOS, LFS
Posts: 1,723
Blog Entries: 3

Rep: Reputation: 948
Quote:
Originally Posted by johnsfine View Post
I very rarely write asm code. [ ... snip ... ] In 64 bit x86 asm, a skilled human typically outperforms the compiler in those situations where the SIMD (single instruction multiple data) features of SSE can be put to good use.
I agree completely. Being able to read assembly is extremely useful to me too, although I use GCC builtins even when writing SIMD SSE2/SSE3 code. I can write x86-64 assembly (and I daresay I'm pretty good at it, too), but I can get roughly the same performance using the GCC builtins.

It is not a skill most programmers will ever need, but it certainly is a useful one to have.

Consider a practical example: Lennard-Jones potential, where you evaluate
Code:
potential[i] = constant1 * ( (constant2 / distance[i])^12 - (constant2 / distance[i])^6 )
for a largeish number of distance[i] values, with ^ being the exponentiation operator (and not bitwise XOR).

On x86_64, a straightforward implementation such as
Code:
void lennard_jones_x86(double const vmin, double const rmax, size_t const n,
                       const double *const r, double *const v)
{
    double const    sigma = rmax * 0.8908987181403393047402262055905125079870;
    size_t          i = n;

    while (i-->0) {
        double const t1 = sigma / r[i];
        double const t3 = t1 * t1 * t1;
        double const t6 = t3 * t3;
        v[i] = 4.0 * vmin * (t6*t6 - t6);
    }
}
executes at about 17 clock cycles per element. GCC does unroll the loop and use SSE2; otherwise (-mno-sse2) it'd take about 23 clock cycles per element.

Let us introduce GCC built-ins for SSE2 on x86-64. For clarity, in this context, I refer to the use of SIMD instructions as vectorization.
Code:
/* v2df is GCC's vector type for two doubles in one 128-bit SSE register */
typedef double v2df __attribute__ ((vector_size (16)));

void lennard_jones_sse2(double const vmin, double const rmax, size_t const n,
                        const double *const r, double *const v)
{
    double             *vnext = v;
    const double       *rnext = r;
    const double *const rlast = r + (n & ~(size_t)1);  /* r + even n */
    const double *const rends = r + n;

    v2df const          vscale = { 4.0 * vmin, 4.0 * vmin };
    v2df const          sigmas = { rmax * 0.8908987181403393047402262055905125079870,
                                   rmax * 0.8908987181403393047402262055905125079870 };

    while (rnext < rlast) {
        v2df    a, b;

        a = sigmas;                         /* a = sigma */
        b = __builtin_ia32_loadupd(rnext);  /* b = r */
        a = __builtin_ia32_divpd(a, b);     /* a = sigma/r */

        b = a;
        a = __builtin_ia32_mulpd(a, b);
        a = __builtin_ia32_mulpd(a, b);     /* a = (sigma/r)^3 */

        b = a;
        a = __builtin_ia32_mulpd(a, b);     /* a = (sigma/r)^6 */

        b = a;                              /* b = (sigma/r)^6 */
        a = __builtin_ia32_mulpd(a, b);     /* a = (sigma/r)^12 */

        a = __builtin_ia32_subpd(a, b);     /* a = (sigma/r)^12 - (sigma/r)^6 */

        a = __builtin_ia32_mulpd(a, vscale); /* a = result */

        __builtin_ia32_storeupd(vnext, a);

        rnext += 2;
        vnext += 2;
    }
            
    while (rnext < rends) {
        double const t1 = rmax * 0.8908987181403393047402262055905125079870 / *(rnext++);
        double const t3 = t1 * t1 * t1;
        double const t6 = t3 * t3;
        *(vnext++) = 4.0 * vmin * (t6*t6 - t6);
    }
}
Granted, this version is nowhere near as easy to read as the plain C version, but it executes at about 9 clock cycles per element -- nearly twice as fast as the plain C version. And the results are identical. (If you use floats, the speedup is almost fourfold.)

You may be able to shave a couple of clocks off by unrolling and interleaving the operations, but not much more than that on current processors. Unrolling the loop once or twice and interleaving the operations may still be worthwhile, though, as future processors can then execute more of the work in parallel. However, that makes the code much more difficult to follow, so I do that last, and only if it really needs optimizing.

The Lennard-Jones potential is trivial, and a sufficiently clever compiler should be able to vectorize it by itself. In that sense it is a deliberately bad example.

Many algorithms are different. They need to be implemented in a way which allows efficient vectorization. Because I know how SSE2 works at the assembly level, I can apply that knowledge when implementing the algorithms.

For example, I know that if I have a large number of points, it is easier to calculate e.g. dot products if each coordinate axis is a separate array, rather than each point being consecutive in memory. (To sum the elements within one vector register, you need to shuffle them around, even if you use SSE3 horizontal additions. Summing two vectors elementwise, on the other hand, is trivial.)
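As an illustration of that layout decision (a sketch; N is an arbitrary size):
Code:
#define N 1024

/* Array of structures: x, y, z interleaved in memory. Operating on a batch
   of points needs shuffles to gather matching components into SSE registers. */
struct point { double x, y, z; };
struct point pts_aos[N];

/* Structure of arrays: each axis contiguous, so matching components of
   consecutive points sit side by side, ready for 128-bit SIMD loads. */
struct points { double x[N], y[N], z[N]; };
struct points pts_soa;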

Indeed, if I were to write an MD simulator using the L-J potential, I would have to decide whether the data structures are vectorizable or not. In particular, coordinate and force vector components should be in separate arrays for each axis, with neighbor lists aligned on vector boundaries (16 bytes). With non-SIMD instructions, this layout carries a significant penalty due to data nonlocality: it will be slower than the "normal" implementation if SIMD instructions are not available. And the Lennard-Jones potential is about the simplest one there is; for typical potentials, the situation is much more complex.

The compiler cannot make such decisions for you; they are for the programmer to decide. To decide such things in a way that yields the most efficient possible implementation, you need to know the relevant hardware -- in my case, SSE2/SSE3 instruction set, registers, and behaviour on Intel and AMD x86-64.

Currently, floating-point vectorization yields about a twofold speedup when using doubles, and fourfold when using floats, if the entire calculation can be vectorized. (I have not measured the integer side, but I believe it is similar, if not the same.) This factor will only increase in the future. Both AMD (Bulldozer) and Intel (Sandy Bridge) support AVX, which adds 256-bit vectors, potentially doubling the gains yet again.

If you add threads into the mix, you will soon find that you really need lockless data structures. For those, you'll need atomic operations. To apply them correctly, you need a pretty complete picture of the architecture at the instruction level, of caches, and of the compiler/language options that enforce memory barriers. If you understand assembly first, these are all much easier to grasp, since they fit together more or less like pieces of a puzzle.
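As a taste of what those atomic operations look like from C, here is a minimal compare-and-swap loop using GCC's __sync builtins (the counter example is illustrative; for a plain counter, __sync_fetch_and_add(&counter, 1) would do, but the CAS loop is the pattern that generalizes to lockless data structures):
Code:
static volatile long counter = 0;

void increment(void)
{
    long old;
    do {
        old = counter;     /* snapshot the current value */
        /* retry if another thread changed counter since the snapshot */
    } while (!__sync_bool_compare_and_swap(&counter, old, old + 1));
}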

I know that most people expect these details to be taken care of by the compiler or the programming environment. That will never happen, not completely. There will certainly be excellent high-level libraries that handle the nitty-gritty details... but somebody has to write those libraries first.
 
Old 04-22-2012, 04:00 AM   #10
sycamorex
LQ Veteran
 
Registered: Nov 2005
Location: London
Distribution: Slackware64-current
Posts: 5,836
Blog Entries: 1

Rep: Reputation: 1251
Quote:
Originally Posted by johnsfine View Post
[ ... snip ... ]
Thanks a lot. I think I get the idea.
 
Old 04-22-2012, 08:55 PM   #11
amboxer21
Member
 
Registered: Mar 2012
Location: New Jersey
Distribution: Gentoo
Posts: 291

Rep: Reputation: Disabled
I started off with NASM on Ubuntu, 32-bit. I DO NOT like AT&T syntax. NASM uses Intel syntax.
 
Old 04-22-2012, 09:20 PM   #12
tuxdev
Senior Member
 
Registered: Jul 2005
Distribution: Slackware
Posts: 2,012

Rep: Reputation: 115
I actually have the opposite feeling. I dislike Intel syntax and prefer AT&T, because Intel syntax hides a lot of important details that a good ASM programmer *should* be aware of.
 
1 member found this post helpful.
Old 04-22-2012, 09:32 PM   #13
amboxer21
Member
 
Registered: Mar 2012
Location: New Jersey
Distribution: Gentoo
Posts: 291

Rep: Reputation: Disabled
Quote:
Originally Posted by tuxdev View Post
I actually have the opposite feeling. I dislike Intel syntax and prefer AT&T, because Intel syntax hides a lot of important details that a good ASM programmer *should* be aware of.
Never said I was good, lol. I am a complete novice! I just started 6 months ago. I tried both AT&T and Intel, and I loved Intel! I didn't like AT&T at all. I guess it's because Intel is easier for me to use: easier to understand, and easier on the eye as well. More beginner friendly, I guess.

Quote:
Originally Posted by tuxdev View Post
Intel syntax hides a lot of important details that a good ASM programmer *should* be aware of.
Thanks, good to know! Maybe I will try to move to AT&T syntax in the future.

Last edited by amboxer21; 04-22-2012 at 09:34 PM.
 
Old 04-23-2012, 07:03 AM   #14
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by tuxdev View Post
I actually have the opposite feeling. I dislike Intel syntax and prefer AT&T, because Intel syntax hides a lot of important details that a good ASM programmer *should* be aware of.
I think I know what you mean, but if so, I think that refers primarily to MASM (the main version of Intel syntax). MASM tries too hard to pretend to be a high level language, making it very awkward as an assembler language.

The central issue there is how to resolve the many ambiguous cases of data type in asm instructions. Traditional assembler has opcodes that unambiguously determine the data type. Operands typically don't imply a data type, and in many cases where the operand would imply the data type, one of the powers of asm is that it makes it easy to operate on an operand in a different data type than that operand implies (the data type of the opcode overrides the data type of the operand any time such an instruction is physically possible).

X86 has many situations in which the operation size is inferred from a register size.
1) If that contradicts the size of the other operand, MASM requires a cast like a high level language, AT&T doesn't.
2) If registers don't determine the size, AT&T allows an optional concise suffix on the opcode to resolve the ambiguity. MASM requires a verbose cast on an operand (see the sketch below).
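A minimal sketch of that difference, storing an immediate through a pointer, where no register pins down the operand size:
Code:
# AT&T: the operand size lives in a one-letter opcode suffix
movw    $1, (%rbx)          # 16 bit store
movl    $1, (%rbx)          # 32 bit store

; NASM/MASM: the operand size lives in a cast on the memory operand
mov     word  [rbx], 1      ; 16 bit store
mov     dword [rbx], 1      ; 32 bit store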

NASM tries to pretend to be MASM syntax, but only manages that for simple beginner examples (at least as of years ago, when I used NASM a lot and also worked on it). The ugliest aspects of MASM syntax are not matched in NASM, which is a problem if you care about compatibility, but isn't a flaw in NASM as a language. So NASM does tie data type more to operand than to opcode, but not in such obscure ways as MASM does.

I greatly prefer the AT&T design of tying operand size issues to the opcode vs tying it to any operand other than direct use of a register.

Where AT&T syntax bothers me is in the places where it works too hard to be a universal syntax across many architectures, so that details specific to x86 are done in a less obvious way in order to fit a more general universe that x86 asm programmers rarely care about. Those minor flaws add up to far less grief than the stupid casts and data declaration nonsense in MASM.

The big difference you see first is source,destination vs. destination,source. But I don't see any right or wrong to either of those. It is easy to get used to whichever you work with.
 
Old 04-24-2012, 08:53 AM   #15
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,657
Blog Entries: 4

Rep: Reputation: 3938
Notice, repeatedly, that most "assembly" code is being written, insofar as possible, within a tool such as gcc, through inline assembly statements. In the Linux kernel, for example, there is a paucity of pure-asm files, but quite a few embedded assembly-level directives within the /arch directories.
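A minimal sketch of gcc's extended inline assembly, the mechanism behind those embedded directives (rdtsc reads the x86 timestamp counter; the wrapper name is illustrative):
Code:
static inline unsigned long long rdtsc(void)
{
    unsigned int lo, hi;
    /* rdtsc leaves the low 32 bits of the counter in eax and the high 32
       bits in edx; the "=a" and "=d" constraints bind those registers */
    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return ((unsigned long long)hi << 32) | lo;
}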

There is no better example of "best practices," and none so easily available to you, as the Linux kernel source code for your (and, for that matter, other) architecture(s). These are the definitive examples to be followed.
 
  

