Old 02-03-2008, 09:53 AM   #16
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Original Poster
Rep: Reputation: 1197

Quote:
Originally Posted by MyHeartPumpsFreon View Post
Linux has FAR better memory management than Windows. I've had an uptime upwards of 3 days (currently) and my swap has not been touched ONCE.
There are many aspects to memory management. That is just one. I'm well aware Linux overwhelmingly beats Windows on that one aspect of memory management (see below) but I was wondering about other aspects of memory management, including the ability (that XP32 lacks) to be limited only by available memory (not precompiled data structures) when caching a large number of small disk files. XP64 still has stupid limits. They're just so much higher they don't matter yet (as XP32's limits didn't matter when it was new). If 64-bit Linux has stupid design limits there, but above the limits imposed by currently practical ram size, I guess I won't notice.

I know from painful experience not to run a single application larger than the swapfile in Windows, no matter how much physical RAM you have. Run a 1.5G application alone on a Windows system with 2G of RAM and 1G of swap space: the commit level never gets close to 2G, but the application fails, apparently due to lack of memory. With 2G of swap space, it not only works, but has enough memory to run two 1.5G applications at once (even with nothing shared between them).

Run the same (recompiled of course) 1.5G application on a Linux system with 2G of RAM. The swap partition size doesn't matter. It doesn't even get used.
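
Here's a minimal sketch of that difference, assuming a 64-bit build and Linux's default overcommit policy (sizes are illustrative):

Code:
/* Reserve a big block, touch only part of it.  Linux backs pages only
   when they are written; Windows charges the full 3G against RAM+swap
   at malloc time, which is why the swap file size matters there. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t reserve = (size_t)3 << 30;    /* 3G of address space */
    size_t touched = (size_t)256 << 20;  /* 256M actually written */
    char *p = malloc(reserve);
    if (p == NULL) { perror("malloc"); return 1; }
    memset(p, 1, touched);               /* only these pages need backing */
    printf("reserved 3G, touched 256M, still alive\n");
    free(p);
    return 0;
}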

However, as you get into issues of an application deallocating and reallocating lots of varying-size chunks of memory, the results are much more mixed. Very likely Windows is a little better. (I know major parts of that are linked in via the run-time library and only some parts are in the OS, but what matters is taking it all together, GNU/Linux vs. Microsoft.) How much CPU time is spent in all the memory management routines for all that deallocating and reallocating? Usually Microsoft wins. How long before you fragment your 3GB of virtual address space so badly that, even with less than 2G in use, you can't allocate any moderately large chunk? Often Microsoft wins that one as well.
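
For the fragmentation question, here's a toy version of the churn I mean (sizes and counts are arbitrary, and whether the last request succeeds depends on the allocator and on whether the process is 32- or 64-bit):

Code:
#include <stdio.h>
#include <stdlib.h>

#define N 1024

int main(void) {
    void *pin[N], *hole[N];
    for (int i = 0; i < N; i++) {
        pin[i]  = malloc(64);        /* small long-lived survivors */
        hole[i] = malloc(1 << 20);   /* 1M blocks between them */
    }
    for (int i = 0; i < N; i++)
        free(hole[i]);               /* 1G is now free, but in 1M holes */
    void *big = malloc((size_t)1 << 30);  /* one contiguous 1G request */
    printf("1G block after churn: %s\n", big ? "got it" : "failed");
    return 0;
}

In a 32-bit process the free space can easily exceed the request while no contiguous run is big enough; a 64-bit address space makes that much harder to hit.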

Last edited by johnsfine; 02-03-2008 at 09:55 AM.
 
Old 02-03-2008, 01:23 PM   #17
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,363

Rep: Reputation: 172
John

As to the allocating/deallocating issue: if the application is built properly, there should be no issue. If the application has a bug (say, forgetting to free memory), that is an application bug and not an OS bug. One cannot hold an application bug against the OS, regardless of the OS.

I think you are also a little unclear on how Linux handles memory. It tends to shuffle things around so that memory can be allocated efficiently. I usually picture the memory as a pile of sand. When you pull a bucket of sand out of the pile (deallocate memory), the sand in the rest of the pile "fills in" the space. However, the kernel directs this in an orderly fashion. I am not sure about the CPU cost of doing the memory shuffling, but considering that more and more complex computing problems (movie effects, military flight simulators, decryption software, face recognition, etc.) are moving to Linux, I do not think it can be a real issue.
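
If you want to peek at how the kernel is keeping the pile tidy, a 2.6 kernel will show you its pools of free physical memory directly (purely informational, nothing to configure):

Code:
cat /proc/buddyinfo   # free physical blocks, grouped by power-of-two size
cat /proc/meminfo     # overall totals, cache and swap use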
 
Old 02-03-2008, 02:01 PM   #18
jay73
LQ Guru
 
Registered: Nov 2006
Location: Belgium
Distribution: Ubuntu 11.04, Debian testing
Posts: 5,019

Rep: Reputation: 133
I tend to agree that Windows memory management is more flexible - considered purely from an out-of-the-box perspective. If you give Linux some really intensive task, it will perform it a lot faster (I have on occasion seen it run up to five times as fast on the very same kind of task), but at the expense of taking over so many resources that anything else hardly gets a chance to start up. Depending on your perspective, that may be good or bad. As far as I know, none of the world's supercomputers run Windows, which is surely not a coincidence. Given the same hardware, Linux/Unix on the server will take you further than Windows.

In short, if you need something dedicated, avoid Windows; if you need something flexible and you don't want to set things up yourself, Windows may be the better option. But that proviso is fundamental. Windows may be more flexible in one specific way, but as it is essentially a black box, you really don't have any control over it. Say you want to move over a large pile of files really fast and you are willing to allocate as many resources to that task as possible - how are you going to do that? You can tweak a little left, you can tweak a little right, but that's about it.
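
On the Linux side, those knobs do exist, by the way. A sketch using the CFQ disk scheduler (the 2.6 default); this assumes ionice from util-linux-ng is installed, and the paths are made up:

Code:
# give a big copy the highest "best effort" disk priority
ionice -c2 -n0 cp -a /big/pile /elsewhere
# or the opposite: only let it touch the disk when nothing else wants it
ionice -c3 cp -a /big/pile /elsewhere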

Last edited by jay73; 02-04-2008 at 01:08 AM.
 
Old 02-04-2008, 12:53 AM   #19
Electro
LQ Guru
 
Registered: Jan 2002
Posts: 6,042

Rep: Reputation: Disabled
Quote:
Thanks, but: I've seen statements like that a lot, but never any convincing explanation. I think ordinary computer hardware is very reliable and stable already. Most malfunctions are software. I'm assembling a system for software experimentation, not for transaction processing. I don't believe the supposed connection between the amount of RAM and the need for ECC. The connection should be to system use (especially transaction processing).
Hard drives use ECC, and they use it all the time. That is why they are reliable at retrieving and saving data. If you do not think so, run SpinRite on a new hard drive and compare the results as it ages.

If you are doing a software experiment, you want controls to get what you want out of the experiment. By using ECC memory, you add additional controls that aid your experiment. With ECC it takes several million low-level errors to produce one uncorrected error; without ECC memory, a few thousand low-level errors are enough to produce one.

All data is handled in memory.

If I were creating programs, I would use ECC memory to make sure the final product does not have any errors.
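
To show the idea behind ECC, here is a toy single-error-correcting Hamming(7,4) code. Real ECC DIMMs run a 72/64-bit SEC-DED variant of the same scheme, so this is an illustration, not the actual DIMM logic:

Code:
/* Encode 4 data bits into 7, flip one bit "in transit", and watch the
   decoder locate and repair it. */
#include <stdio.h>

/* bit i of codeword c, 1-indexed as in the Hamming literature */
static int bit(unsigned c, int i) { return (c >> (i - 1)) & 1; }

static unsigned encode(unsigned d) {  /* d = 4 data bits */
    unsigned d1 = d & 1, d2 = (d >> 1) & 1, d3 = (d >> 2) & 1, d4 = (d >> 3) & 1;
    unsigned p1 = d1 ^ d2 ^ d4;       /* parity over bits 1,3,5,7 */
    unsigned p2 = d1 ^ d3 ^ d4;       /* parity over bits 2,3,6,7 */
    unsigned p3 = d2 ^ d3 ^ d4;       /* parity over bits 4,5,6,7 */
    return p1 | p2 << 1 | d1 << 2 | p3 << 3 | d2 << 4 | d3 << 5 | d4 << 6;
}

static unsigned correct(unsigned c) {
    int s1 = bit(c,1) ^ bit(c,3) ^ bit(c,5) ^ bit(c,7);
    int s2 = bit(c,2) ^ bit(c,3) ^ bit(c,6) ^ bit(c,7);
    int s3 = bit(c,4) ^ bit(c,5) ^ bit(c,6) ^ bit(c,7);
    int pos = s1 + 2 * s2 + 4 * s3;   /* syndrome: 0 means no error */
    return pos ? c ^ (1u << (pos - 1)) : c;
}

int main(void) {
    unsigned sent = encode(0xB);          /* data nibble 1011 */
    unsigned received = sent ^ (1u << 4); /* single-bit flip at position 5 */
    printf("sent %02x, received %02x, corrected %02x\n",
           sent, received, correct(received));
    return 0;
}

The syndrome points straight at the flipped bit, which is exactly how ECC memory repairs single-bit upsets on the fly.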

Quote:
I'm sure setting it lower than its official specs sacrifices reliability. But what about paying extra for memory with faster official specs (as I asked in the first post)? Are you saying it is less reliable to run that memory at its official specs than to run cheaper slower memory at its official specs? Do you have evidence/links to support that? I'd definitely like to know before making that decision.

As for worrying about timing: In my regular job I develop software for some large data problems that have heavy miss rates on a 1Mb L2 cache. If the ram timing were 25% faster, the whole run would be 23% faster. I may want to do some things at home with similar timing characteristics and/or if I get faster ram at home than I have at work, bring some of the tests I would run at work home.
I am explaining the difference between 5-5-5-15 and 4-4-4-12. The 5-5-5-15 will provide better reliability and stability. Sure, 4-4-4-12 will give you faster performance, but at a cost in reliability and stability. More efficient coding can make up the difference, or another piece of hardware could be used.
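
If you want to measure what those CAS numbers are worth on a cache-missing workload like johnsfine describes, the classic sketch is a pointer chase: every load depends on the previous one, so each step pays the full memory latency. Array size and loop count here are arbitrary:

Code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64 * 1024 * 1024 / sizeof(size_t))  /* 64 MB, far beyond L2 */

int main(void) {
    size_t *next = malloc(N * sizeof *next);
    if (next == NULL) return 1;
    for (size_t i = 0; i < N; i++) next[i] = i;
    /* Sattolo shuffle: one random cycle through all slots */
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = rand() % i, t = next[i];
        next[i] = next[j]; next[j] = t;
    }
    clock_t t0 = clock();
    size_t p = 0;
    for (long k = 0; k < 20000000L; k++) p = next[p];  /* dependent loads */
    double ns = (double)(clock() - t0) / CLOCKS_PER_SEC * 1e9 / 20000000.0;
    printf("~%.0f ns per dependent load (ended at slot %zu)\n", ns, p);
    free(next);
    return 0;
}

Run it with each kind of DIMM (or each BIOS timing setting) and the latency difference shows up directly in the ns-per-load figure.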

A CPU gives you about 10 gigaflops, probably less. Graphics cards have a lot higher ratings - they range from tens of gigaflops to hundreds of gigaflops. Compared to CPUs, that means more computing power. You can save money by selecting a slower CPU and spending it on a GPU. The nVidia GeForce 8 series and up can process several kinds of data much faster than any CPU.

A GPU will make a 2% improvement look like a kitten. A GPU can provide 400% or more, compared to your thinking that faster memory will finish your experiment sooner. The GeForce 8 has IEEE 754 precision. Look up CUDA at nVidia.

Quote:
I'll select some WD drive because I trust them. But I'll select mainly on price/capacity, not speed. Most of my hard drive performance issues at work are caused by brain dead limits in Windows XP32's caching rules when hit with thousands of tiny files and directories (with lots of ram available, the number of small files cached is limited to far fewer than available cache memory would suggest).
Selecting hard drives based on price vs. capacity when doing a job that accesses thousands of files is not smart. Selecting a hard drive with the lowest access time will help retrieve thousands of files faster. The faster you get the files to the processor, the less time it takes the computer to process the data and the less time you wait for the results.

Linux has different file systems that handle caching and buffering differently, although they all use the same file I/O routines from the kernel. Also, Linux is a virtual-memory, multi-tasking OS. Every program gets its own environment. This gives the reliability and stability that Linux is known for. Since Linux uses virtual memory, you can keep on giving it more memory, far exceeding Windows. Windows stops at 1.5 GB of memory per program. Mac OS X is the same. Linux can handle on the order of petabytes, probably more, for each program. Sure, you can mess around with write-behind tasks, but you may hurt the stability and reliability of Linux.

lazlow, the movie industry uses Linux. They have been using it for years. The military probably uses it too. The government uses Solaris with SPARC processors (I think).
 
Old 02-04-2008, 03:55 AM   #20
gr8scot
Member
 
Registered: Jun 2007
Distribution: Debian, kubuntu
Posts: 73

Rep: Reputation: 16
Cool

Quote:
Originally Posted by johnsfine View Post
I'm planning to assemble a computer to run large (data size) programs under a 64-bit kernel.

I'd like some advice, especially warnings about any pitfalls in my tentative plan.

I'm pretty sure I want the Athlon X2 6400+ Windsor 125W Dual Core.
No problems with the processor. In 4 yrs. w/a Pentium 4 3.2GHz, single core, the amd64 port works fine.

*Although the P4 is made by Intel, not AMD, its 64-bit architecture is essentially the same as AMD64 and runs the same software.

Quote:
I may decide either 4Gb or 8Gb ram.

I'm considering the ASUS M2A-VM HDMI motherboard. (AMD 690G, ATI SB600, ATI Radeon Xpress 1250 on board).
Pick the mobo that *guarantees* the most RAM in the future. Do not depend on LQ or any other forum to guess correctly that mobo #? will "probably" support #GB with BIOS update #?.

I would recommend worrying less about sound than video, but even ATI video is not as bad as the worst whiners make it sound; all I have to say about that is nVidia must be frigging easy, after making the ATI 9000 & X800 both work [pretty easily] with Debian; the 9000 also worked with various RHs, Mandrakes, & Mandrivas without trouble.

For sound, I recently got the "notorious" AC97 soundcard/chip to output on digital coax by switching the output to "off" in Debian's "volume control"; alsamixer was totally useless, and I don't know yet how to make "ON" = frigging "on," but with some futzing it does work, if you really care.

Quote:
I need high resolution graphics (many pixels on screen) but NOT high performance graphics (fast screen updates, gaming, etc.) If I choose a different motherboard without integrated graphics, I'm considering a very low cost card such as Sapphire 100184L (Radeon x1300 PCIx16 with 128Mb ram). I never understood what display cards need with all that memory. The highest resolution graphics at 4 bytes per pixel is still just a fraction of the 128Mb.
No need for all that mem is right, and ATI cards are not *that* bad. Go to ati.com, in the worst-case scenario, for display drivers. No big deal. I'm posting at 1600 x 1200 with ATI 9000.

Quote:
Or am I missing some reason a non gaming high resolution display needs more display card memory?
Video gamers make more noise than anybody else on all forums. Not playing v-games? Then, missing NOTHING!!

Quote:
I want IEEE 1394 (which is on-board the M2A-VM HDMI). I want to be able to load movies from my camcorder through its firewire connection (but that won't be the primary use of the system). I haven't even started yet to investigate what software is required to do that in Linux. Warn me if that is hard.
I do not know; it might be.

Quote:
I am willing to pay a moderate amount extra for faster ram. But I don't understand what is compatible with what. Within DDR2-800, the memory with timing 4-4-4-12 costs a moderate amount more than 5-5-5-15. The M2A-VM motherboard I recently bought doesn't report nor give you access to adjust that timing. It wasn't clear that it understands anything other than 5-5-5-15 (which is what I bought for the lower performance system I just built). (I haven't tried any BIOS upgrades yet). I don't want to pay extra for performance I won't get. Also the voltages are higher for the 4-4-4-12. I don't know what ram voltages the motherboard supports, so the faster ram might not work at all.
Having chosen a mobo, you're committed now to CPU & RAM & video compatible with it; forget about RAM "timing" - it may be somewhat adjustable, but it is next to nothing compared to the performance parameters you've already "selected" with your "choice" of mobo.

Quote:
Even more so for memory faster than DDR2-800. The motherboard manuals really don't make clear what they support (the Asus forum seems to say it only works with 2x2G, not 4x2G, so I couldn't get 8GB).
Trial/Error from here on in.

DDR & DDR2 are totally incompatible with one another, and each must match the mobo & CPU; they do not necessarily have anything to do with overall system speed, although higher numbers tend to perform better.

Quote:
If I choose integrated graphics, does the load on main memory for graphics refresh slow down memory noticeably for the CPU? In which case, it doesn't make sense to save $20 by avoiding that cheap display card then spend much more than $20 incremental for faster ram.
I dunno much about that, but I don't think the tradeoff is that bad. What you save in video RAM costs relatively little in system RAM, if I understand correctly.

Quote:
I want a low cost DVD burner. I have no clue about software compatibility for Linux. I don't care about speed. I won't be burning many DVDs. I just want to be able to.
NEC kicks the sh*t out of any other brand, last time I checked. It is manufactured mainly by Sony, and recognized automatically by Debian 4.0. No muss, no fuss, 16x DVD-RW & 40x CD-RW for < $40, just look for sales on OEM burners, any brand.

Quote:
I think I can decide on the SATA hard drive and the power supply, case, keyboard, mouse, etc. without any advice.
I think so, too. Other than the PSU, those are personal decisions, and the PSU is not difficult to match to the drain of your components.

Quote:
I'd prefer to use a Debian distribution of Linux. I'm still a Linux newbie and Debian is the one I'm using on another computer and the one I'm starting to understand. But if there are any Debian specific pitfalls in the above plan, or any other strong reason to choose a different distribution, please warn me.
With 18000-ish packages, Debian is easy, and doesn't add new packages before they're thoroughly tested. That's the right distro for n00bz, in this n00b's opinion.

Quote:
The motherboard I mentioned includes Realtek RTL8111b network port on board. Windows XP does not support that on first boot. You need to install the driver from the included CD-ROM after installing XP and before using the network. Does Debian need something similar? How do I insert that into the process? The only way I've installed Debian is booting a standard minimal image from a CD-ROM and letting it find what it needs over the network.
That should be fine; if the network install successfully finds your network adapter, that driver will certainly not be removed in subsequent steps. If it doesn't work easily, Intel NICs are a sure thing with Linux.
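
You can double-check what the installer found from a shell (the RTL8111B is normally claimed by the r8169 driver; your interface name may differ):

Code:
lspci | grep -i ethernet           # is the chip on the bus?
dmesg | grep -i -e r8169 -e eth0   # did the driver claim it?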
 
Old 02-04-2008, 07:32 AM   #21
MyHeartPumpsFreon
Member
 
Registered: Oct 2007
Location: The States, Florida
Distribution: Lonely Werewolf
Posts: 251

Rep: Reputation: 30
Electro, I have a friend of a friend that's in the military. They use Linux as well from what I understand.

Brandon
 
Old 02-04-2008, 08:30 AM   #22
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Original Poster
Rep: Reputation: 1197
Quote:
Originally Posted by Electro View Post
I am explaining about 5-5-5-15 VS 4-4-4-12.
Claims and opinions aren't explanations.

I'll place the order for the parts pretty soon. I'll just have to guess on the 5-5-5-15 vs. 4-4-4-12, because nothing anyone has said about that has contained any actual information. But thank you (and everyone else responding) for the info on other aspects.

I hate to argue with experts, but ...

Quote:
Originally Posted by Electro
Hard drives use ECC, and they use it all the time.
Because hard drive designers have determined that the cost of making hard drives reliable without ECC is greater than the cost of including ECC in every hard drive. So the customer doesn't even get that choice.

Most RAM doesn't have ECC, and most people use it that way without problems. RAM designers have determined that making RAM reliable without ECC is cost-effective.

Quote:
With ECC it takes several million low-level errors to produce one uncorrected error; without ECC memory, a few thousand low-level errors are enough to produce one.
I don't know why you think it takes more than one low-level error in non-ECC RAM to produce one error in the final results. Sometimes a low-level error won't matter, but usually it will (I'm not using the computer for games). So "a few thousand" is nonsense.

I actually wrote disk drivers long ago, when the ECC code was in the driver, not the firmware. So I know all the theory behind it. I know the basis for the estimates of what fraction of low-level errors it will correct, and I know those theories are always based on narrow (often incorrect) assumptions about the raw error characteristics of the underlying media. As the technology has matured, there are now a lot of good reasons to trust ECC reliability claims in hard drives. Those reasons don't translate into reasons to trust ECC in RAM. I don't think I need more reliability than non-ECC RAM provides. If I did need it, I'd be very worried that ECC RAM was giving me only an illusion of extra reliability.

Quote:
A CPU gives you about 10 gigaflops, probably less. Graphics cards have a lot higher ratings - they range from tens of gigaflops to hundreds of gigaflops. Compared to CPUs, that means more computing power. You can save money by selecting a slower CPU and spending it on a GPU.
I'm NOT playing games. So it does not matter how powerful the GPU is. None of that power can be used for the real work.

Quote:
Selecting hard drives based on price vs. capacity when doing a job that accesses thousands of files is not smart. Selecting a hard drive with the lowest access time will help retrieve thousands of files faster.
But the same set of thousands of files is accessed over and over. And those files average only a few thousand bytes each, so all of them combined are way under a gigabyte. So an OS using over a GB for disk caching ought to read them just once and then find them in cache all the subsequent times. Windows XP32 can't do that because of rotten software design. I hope/expect 64-bit Linux gets that right. If it doesn't, maybe I can fix it (open source).

Even a disk that seeks ten times faster than typical couldn't make the same performance difference as changing a 50% hit rate on the file cache to a 98% hit rate. (On a task I run a lot at work, that is the difference between the 2GB XP32 on my desk and the 8GB XP64 I only got to try once. Hopefully I'll get a decent XP64 system at work soon and a better Linux system at home.)
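
The back-of-envelope arithmetic, with assumed per-file costs (0.05 ms for a cache hit, 10 ms for a disk read; both numbers are just plausible stand-ins):

Code:
#include <stdio.h>

int main(void) {
    double cached = 0.05, disk = 10.0;  /* ms per file, assumed */
    double rate[] = { 0.50, 0.98 };     /* cache hit rates to compare */
    for (int i = 0; i < 2; i++)
        printf("%2.0f%% hits: %.2f ms/file average\n", rate[i] * 100,
               rate[i] * cached + (1.0 - rate[i]) * disk);
    return 0;
}

That prints about 5.03 vs. 0.25 ms per file, a 20x difference. A disk that seeks ten times faster only shrinks the 10 ms term; at a 50% hit rate that still leaves about 0.53 ms per file.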

Quote:
Windows stops at 1.5 GB of memory per program.
2GB per program normally. 3GB per program if you set a simple option in boot.ini. 4GB per 32-bit program in XP 64. Far more for 64-bit programs in XP-64.
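
(For anyone following along: the boot.ini option is the /3GB switch, and a 32-bit program must also be linked large-address-aware to use the extra space. A typical entry looks like this; the ARC path is whatever your installation already uses:)

Code:
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB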

Quote:
Linux can handle on the order of petabytes, probably more, for each program.
I think all the practical limits are about the same. I think the only interesting Linux/Windows difference in maximum memory per program is:
Microsoft charges more money for XP 64 than for XP 32.
 
Old 02-04-2008, 10:43 AM   #23
MyHeartPumpsFreon
Member
 
Registered: Oct 2007
Location: The States, Florida
Distribution: Lonely Werewolf
Posts: 251

Rep: Reputation: 30
In my limited experience in programming, Linux, and Windows... this is by far one of the most informative discussions I've seen.

Regards,

Brandon
 
Old 02-04-2008, 06:13 PM   #24
Electro
LQ Guru
 
Registered: Jan 2002
Posts: 6,042

Rep: Reputation: Disabled
Quote:
Because hard drive designers have determined that the cost of making hard drives reliable without ECC is greater than the cost of including ECC in every hard drive. So the customer doesn't even get that choice.

Most RAM doesn't have ECC, and most people use it that way without problems. RAM designers have determined that making RAM reliable without ECC is cost-effective.
As you said in a previous post, you want proof. I gave you proof and you did not like it.

Hard drives use ECC, period. Hard drive manufacturers include ECC because it provides the reliability of retrieving the data that their customers depend on. Without ECC, the data would be corrupted most of the time. Do you really want your images or data to be corrupted most of the time, or do you want to retrieve the files intact with no problems? The cost of implementing ECC in hard drives is a lot cheaper than replacing the hard drive when it is not reliable at retrieving data. I always recommend people select ECC whenever possible. People are moving towards SSDs. SSDs come with either non-ECC or ECC; the SSDs with ECC will be more reliable.

I have computers with non-ECC memory and computers with ECC. The computers with ECC rarely come up with errors or corrupted data. The computers with non-ECC memory always have errors. I upgraded the memory in my computers to ECC. It helps a lot.

From the link http://www.crucial.com/kb/answer.aspx?qid=3692: they state that computer errors are rare. This is incorrect. Since computers are designed by us humans, errors come up a lot. Also, ECC and parity are not the same; Crucial is wrong on this. ECC uses a special algorithm to check for and fix errors, while parity provides only a bit mask. ECC and parity cannot be used interchangeably.

There are many kinds of algorithms for ECC. With a certain ECC algorithm, you get the same performance as non-ECC but with resistance to data corruption.

Quote:
I'm NOT playing games. So it does not matter how powerful the GPU is. None of that power can be used for the real work
You did not read my post. You think that high-end graphics cards are still only used for games. You are wrong. Today's and tomorrow's graphics cards are being designed to process other kinds of data. For example, the Folding@home project has a program that uses ATI X1600 cards to process work units. With an nVidia GeForce 8800, you get a lot more computing power than any CPU can give you.


If you are dealing with tasks that resemble database management, you should look into MySQL or PostgreSQL. Both are open source, but PostgreSQL is designed to handle very large databases.

If you are designing something for a client and not for the new 2.7 kernel release, I suggest you stop wasting time here. Since what you want is already set in stone, begin your work on your own.


FYI, 80x86 systems are not reliable. The competition between Intel and AMD has made 80x86-based systems become crap. Reliable processors that do not use any magic to handle memory or other data processing are PowerPC and SPARC. NASA uses the Motorola 68000 series in satellites and robots for its robustness and reliability. Zilog and Parallax also make reliable and robust processors.
 
Old 02-12-2008, 08:42 AM   #25
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Original Poster
Rep: Reputation: 1197
Quote:
Originally Posted by Electro View Post
I would go for the GIGABYTE GA-M68SM-S2. It comes with nVidia graphics, which is a lot easier to work with in Linux. You do not need to find a motherboard with an HDMI connector, because you just need a DVI-to-HDMI adapter. IMHO, ASUS boards are becoming Windows-dependent and are not that good any more.
That's exactly the board I chose. I weighed all the partially understood pros and cons myself. I didn't just take your advice.

It was at least a big mistake. It will take longer to know whether it was a very big mistake. The BIOS is really terrible.

The BIOS has no support at all for configuring, or even telling you, the memory timing. According to Memtest86+, the BIOS has configured the DDR2-800 memory at 200MHz (half its correct clock) and at 7-4-7 instead of the correct 5-5-5. The utility Gigabyte claims to have for adjusting memory timing seems to be Windows-only. Contrary to what you said, my son's ASUS board doesn't seem to be Windows-dependent in any way.

Can anyone tell me what Linux programs exist for tweaking or at least viewing DDR-II memory timing in an AM2 socket motherboard?
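
The closest thing I've found so far: the lm_sensors package ships a script that reads the SPD EEPROM on each DIMM over the i2c bus. A sketch, assuming the i2c modules cooperate with this chipset (the script name varies between versions):

Code:
modprobe i2c-dev
modprobe eeprom
decode-dimms.pl   # plain "decode-dimms" in newer packages

That only reports what the modules advertise, not what the BIOS actually programmed, so memtest86+ is still the cross-check.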

I also got a SATA DVD burner instead of an IDE one. That might be the cause of another problem, but I'm pretty sure that problem is also the motherboard: if I restart and/or power off and on with media in the DVD drive, I can't open the DVD drive until after the OS has started from the hard disk.

On the ASUS motherboard with the IDE DVD drive, I left media in when I powered off several times; just power on, press the drive button, and it opens. On the Gigabyte, with media in the SATA DVD drive and the BIOS running (booting, stopped in the boot menu, or stopped in BIOS setup), the button on the DVD drive does nothing. As soon as the OS is running from the hard disk, the DVD drive button works.

If I corrupt the OS on hard drive while I have wrong media in the DVD drive, I'm in serious trouble. I think I'd need to take the DVD drive to a different computer to get the media out so I could put a Linux liveCD in to fix the corrupted OS.

The integrated video fails entirely with the "nv" driver. I pretty much expected that after reading other threads. It works fine with the "vesa" driver. I switched to the closed-source "nvidia" driver (the Mepis install for that is very smooth once you figure out where in the menu they hide the X assistant program that does it). I have no idea what the "nvidia" driver does better than the "vesa" driver, since I only have one monitor at the moment. But it seemed like a good idea.
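
(To confirm which driver X actually loaded, the standard X.org log plus glxinfo from the Mesa utilities are enough:)

Code:
grep LoadModule /var/log/Xorg.0.log   # which drivers X pulled in
glxinfo | grep "direct rendering"     # prints "Yes" once "nvidia" is active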

Edit: Now the BIOS's behavior is even worse, and I have no clue what changed. It has no settings that should affect this. On power-up, it instantly decides there is no hard drive and (assuming no media in the DVD drive) goes immediately to the disk boot failure message before there is a chance to see anything else (all in the fraction of a second it takes the monitor to realize a video signal has started). A BIOS is supposed to wait some decent fraction of a second for the hard drive to come up. It doesn't. I need to press Ctrl-Alt-Delete after power-up to get it to see the hard drive. Adding insult to injury, I found this in their FAQ: "Why does new BIOS sometimes fail to detect IDE? ... To solve your problem, please go to BIOS to reset 'Boot delay time' or 'IDE delay time' to a longer one." I'm not an idiot. If there were any BIOS option "Boot delay time" or "IDE delay time" or anything like that, I would have tried it. This BIOS lets you configure basically nothing.

Last edited by johnsfine; 02-13-2008 at 01:27 PM.
 