64-bit AMD purchase advice
I'm planning to assemble a computer to run large (data size) programs under a 64-bit kernel.
I'd like some advice, especially warnings about any pitfalls in my tentative plan. I'm pretty sure I want the Athlon X2 6400+ Windsor 125W dual core. I may go with either 4GB or 8GB of RAM. I'm considering the ASUS M2A-VM HDMI motherboard (AMD 690G, ATI SB600, ATI Radeon Xpress 1250 on board).

I need high-resolution graphics (many pixels on screen) but NOT high-performance graphics (fast screen updates, gaming, etc.). If I choose a different motherboard without integrated graphics, I'm considering a very low-cost card such as the Sapphire 100184L (Radeon X1300, PCIe x16, 128MB RAM). I never understood what display cards need with all that memory. The highest-resolution framebuffer at 4 bytes per pixel is still just a fraction of the 128MB. Or am I missing some reason a non-gaming, high-resolution display needs more display-card memory?

I want IEEE 1394 (which is on board the M2A-VM HDMI). I want to be able to load movies from my camcorder through its FireWire connection (but that won't be the primary use of the system). I haven't even started to investigate what software is required to do that in Linux. Warn me if that is hard.

I am willing to pay a moderate amount extra for faster RAM, but I don't understand what is compatible with what. Within DDR2-800, memory with 4-4-4-12 timings costs moderately more than 5-5-5-15. The M2A-VM motherboard I recently bought neither reports those timings nor gives you access to adjust them. It wasn't clear that it understands anything other than 5-5-5-15, which is what I bought for the lower-performance system I just built (I haven't tried any BIOS upgrades yet). I don't want to pay extra for performance I won't get. Also, the voltages are higher for 4-4-4-12. I don't know what RAM voltages the motherboard supports, so the faster RAM might not work at all. Even more so for memory faster than DDR2-800.
The motherboard manuals really don't make clear what they support (the ASUS forum seems to say only 2x2GB works, not 4x2GB, so I couldn't get 8GB). If I choose integrated graphics, does the load on main memory for graphics refresh slow down memory noticeably for the CPU? In that case, it doesn't make sense to save $20 by avoiding that cheap display card and then spend much more than $20 extra for faster RAM.

I want a low-cost DVD burner. I have no clue about software compatibility for Linux. I don't care about speed; I won't be burning many DVDs. I just want to be able to.

I think I can decide on the SATA hard drive, power supply, case, keyboard, mouse, etc. without any advice.

I'd prefer to use a Debian distribution of Linux. I'm still a Linux newbie; Debian is the one I'm using on another computer and the one I'm starting to understand. But if there are any Debian-specific pitfalls in the above plan, or any other strong reason to choose a different distribution, please warn me.

The motherboard I mentioned includes a Realtek RTL8111B network port on board. Windows XP does not support it on first boot; you need to install the driver from the included CD-ROM after installing XP and before using the network. Does Debian need something similar? How do I insert that into the process? The only way I've installed Debian is booting a standard minimal image from a CD-ROM and letting it find what it needs over the network. |
Well, for the Radeon cards you mentioned, you will need the proprietary fglrx (Catalyst) drivers. They are available from the ATI/AMD website, but still not very good, you might say. Still, they work.
As to the amount of memory on video cards: think of double and even triple buffering, various other buffers, and all the texturing and vertex work a GPU has to do. That would explain the amount of memory it needs. |
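To put numbers on the framebuffer question: even with triple buffering, a plain 2D desktop uses only a small slice of 128 MB. A quick sketch (the 1600x1200 resolution is my own illustrative assumption, not a figure from the thread):

```python
# Framebuffer sizing sketch; resolution and depth are assumptions for illustration.
width, height, bytes_per_pixel = 1600, 1200, 4

single_frame = width * height * bytes_per_pixel   # one full frame
triple_buffered = 3 * single_frame                # worst-case plain-2D usage

print(single_frame)        # bytes for one frame (~7.3 MiB)
print(triple_buffered)     # triple buffered (~22 MiB), still well under 128 MB
```

The rest of the card's memory is there for textures, vertex buffers, and off-screen surfaces, which a non-gaming desktop mostly never touches.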
Yes, but for most folks, it makes more sense to have a faster GPU chip than a load of RAM on the card.
As for the Realtek, I have one of those. It has been supported by Linux for about a year now, and CentOS is the only big distribution I can think of that still requires you to compile the driver yourself. I'm not sure about Etch, though; it depends on the kernel. Anything 2.6.18 or up should have it.

Also, I would recommend a SATA DVD burner. They have never let me down, while I do remember a few occasions where a kernel update caused IDE drives to misbehave. They aren't any more expensive, and you can finally get rid of those ugly IDE cables.

RAM: yes, it's true, ASUS isn't very clear about what works and what doesn't. I upgraded to 4GB some months ago, but only after doing plenty of research to find out whether it worked for others with the same motherboard. Anyway, the motherboard you mention has a 400MHz max memory bus, so it should be PC2-6400 RAM (i.e. DDR2-800).

Oh yes, and another thing: if you will be running Windows on that board, you'll almost certainly have to flash your BIOS to get it to work properly. |
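The DDR2 naming in that reply decodes mechanically; here is the arithmetic behind "PC2-6400 = DDR2-800" (a standard 64-bit/8-byte DIMM data bus is assumed):

```python
# DDR2-800 means 800 megatransfers/s: a 400 MHz clock, double data rate.
transfers_per_second = 800 * 10**6
bus_width_bytes = 8                      # standard 64-bit DIMM data bus

bandwidth_mb_s = transfers_per_second * bus_width_bytes // 10**6
print(bandwidth_mb_s)                    # 6400, hence the "PC2-6400" label
```

So the "PC2" number is peak bandwidth in MB/s, while the "DDR2" number is transfers per second; they describe the same module.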
Just stay away from ATI in Linux. It is far, far, far, easier to use Nvidia in Linux.
Just look and see what speed your motherboard manual calls for. When you buy memory, think ahead: if there is any chance you may need to upgrade, make sure you leave open slots. If you fill all your slots to get to 4GB and then decide you need 8GB, you will have to get rid of all the memory you bought the first time (I see this almost every week). |
My son is using the M2A-VM motherboard with a 2.5GHz Athlon X2 Brisbane and 2GB of RAM (5-5-5-15), running Windows XP. No obvious problems. I haven't tried any BIOS upgrade. What did you expect to fail without flashing the BIOS? What BIOS would I flash to? Latest does not seem to mean greatest in BIOSes for that board.

I'm considering the similar M2A-VM-HDMI motherboard for myself, for Linux. If I were less of a newbie at Linux, I would do some testing with a bootable DVD: create some .iso image while running on my really lame (hardware-wise) Debian Linux box, copy the .iso over the home network to my son's XP system, burn a DVD, boot the DVD, and test something. If you can point me at directions a newbie could follow for creating that .iso file, I would love to test, before purchase, the Linux 64-bit kernel support for all the parts that might be in common between my son's XP box and my planned Linux box. |
OK, here are the specs that are relevant to RAM:
|
I was assuming I wouldn't find what I needed that way and would need to build something. Minimally, I would want a liveCD or liveDVD with a 64-bit kernel, with KDE, with a display driver that understands the built-in Radeon 1250 well enough to get to a 1280x1024 desktop, with a driver that understands that network port, etc., along with the other basic things I would assume are on any liveCD. That should let me test enough to estimate whether I'll be OK with the similar motherboard. |
I would say the 64-bit live/installer CDs of Mepis or Ubuntu; those come closest to Debian Lenny.
|
But I'm typing this on that M2A-VM system now, running (32-bit, one-year-old version) Knoppix from DVD (I finally got my son to go to bed). I thought Knoppix had failed when the screen flashed on and off for a long time while it was starting X. On other computers that has meant the xorg.conf file wasn't good enough and needed tweaking before X could start. But on this system it got past that and started X. So I guess good-enough support for this Radeon 1250 and this network port has been in Linux for over a year.

BTW, here are some of the more significant details as detected by Knoppix:

(--) PCI:*(1:5:0) ATI Technologies Inc unknown chipset (0x791e) rev 0, Mem @ 0xf0000000/27, 0xfdbf0000/16, 0xfda00000/20, I/O @ 0xcc00/8
(II) VESA(0): Total Memory: 256 64KB banks (16384kB)
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01)
Subsystem: ASUSTeK Computer Inc. Unknown device 81aa
00:14.2 Audio device: ATI Technologies Inc SB600 Azalia
Subsystem: ASUSTeK Computer Inc. Unknown device 8249

Edit: Now I have tested Mepis-CD_7.0-rel_64 as well. Everything looks basically OK. The above hardware-detection info is all duplicated by Mepis, even the "unknown chipset" part, despite Mepis being much newer. But none of those unknowns seem to stop the devices from working. There are a lot more minor glitches than I expected (seg faults listed in dmesg, things break if you try to do anything before KDE is all the way up, etc.), but nothing that would be a show-stopper. To the extent that Mepis has more of those than Knoppix, I think it is more likely an issue of 64-bit vs. 32-bit than something wrong with the Mepis distribution. So my major worries about compatibility with this motherboard are all gone. But I'd still like a little more advice and/or warnings on the basic plan and questions from my first post of this thread. |
I have more questions about this topic (in addition to those items from my first post that still haven't gotten responses).
I think I will want to be able to run both 64-bit and 32-bit (PAE?) kernels. How should I partition for that? Or are the issues already covered by directory structures and paths within one partition?

I expect I will want some 64-bit builds of standard programs when using the 64-bit kernel. I assume those won't run with a 32-bit kernel. I assume (correct me if I'm wrong) that most programs aren't available in 64-bit unless I compile them myself. So I'd want the same (32-bit) version for use with both kernels. I don't mind wasting the disk space to duplicate all that, but if there is a cleaner way, please suggest it. I expect I'll want /home separate from root, with a single /home partition used from either kernel. But what else should be in common between the two roots?

What do you think of using aufs instead? Assuming I mainly use 64-bit, I could have a normal directory structure with the mix of 64-bit and 32-bit content one would normally use with a 64-bit kernel, then a second structure placed in front of it by aufs with 32-bit versions of all the things that are 64-bit in the normal version. But I don't know how the package manager keeps track of what is installed. Would it get badly confused by having aufs turned on and off as implied above? |
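When juggling 32-bit and 64-bit userlands, one low-effort way to see which world a given process is actually in is to ask the system directly. A small sketch (standard library only):

```python
import platform
import struct

kernel_arch = platform.machine()          # e.g. 'x86_64' under a 64-bit kernel
word_bits = struct.calcsize("P") * 8      # pointer size of THIS binary: 32 or 64

print(kernel_arch, word_bits)
```

A 64-bit kernel can run 32-bit binaries, so `word_bits` can report 32 while `kernel_arch` is x86_64; that mix is exactly what a multilib setup exploits, without needing two root partitions.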
At least for Fedora, all the packages that are available in 32-bit are available in 64-bit via yum (the package manager). You can run 32-bit apps on a 64-bit OS if you have to (but not the other way around).
Just to be clear, a number of apps are reported not to like PAE. Unless you have a lot of Linux experience already, I would just stick with the ext3 file system. Later, if you want, you can do a full backup and then switch. |
I think you just need to install a multilib distribution. Actually, almost all distros are multilib now; only a few are pure 64-bit.
And no, most programs _are_ available in 64-bit; only a very small number of packages don't have a 64-bit version (like Adobe Flash). I'm using a pure 64-bit distro, and I don't experience any shortage of software. :)
|
I suggest using ECC whenever possible. The benefit of using ECC memory is a more reliable and stable computer, and it becomes essential if you are going to very high capacity. Yes, AMD processors support ECC memory even when the motherboard manufacturer does not state it.
Do not worry too much about timings. IMHO, keep them high for increased stability and reliability; setting them low sacrifices stability and reliability.

By selecting a higher-end graphics card such as a GeForce 8800 Ultra, you can process more data quickly. These new graphics cards are now able to process other kinds of data besides graphics, though one problem is that you have to create the software to process the data on the graphics card. Just select models with a lot of video memory to do this: at least 768 MB or more. The number of gigaflops ranges from around 10 for lower-end models to a few hundred for the very high-end models. That means near-supercomputer speeds.

Integrated graphics uses main memory as video memory, and it also takes up memory bandwidth. I would go for the GIGABYTE GA-M68SM-S2. It comes with nVidia graphics, which are a lot easier to work with in Linux. You do not need to find a motherboard with an HDMI connector, because you just need a DVI-to-HDMI adapter. IMHO, ASUS boards are becoming Windows-dependent and are not that good any more. The Gigabyte board is better and cheaper, and it can handle 16 GB of memory while the ASUS board can only handle 8 GB.

For the processor, make sure it does not have an odd multiplier. The following link explains why: http://www.anandtech.com/cpuchipsets...px?i=2762&p=10

Not all programs benefit from being compiled for the 64-bit instruction architecture. Mostly multimedia programs and some games will run faster; daily programs will not, and some programs may perform worse.

Be careful selecting the power supply. Not all power supplies are created equal; the saying "you get what you pay for" does hold when buying power supplies. Pay more and you get a quality power supply; pay less and you get crap. Spend some time at xbitlabs.com and other review sites that have tested power supplies thoroughly. I suggest either Seasonic or Enermax.

I suggest a Western Digital 'Raptor' series hard drive. The reason is that they are fast: they have an access time of about 5 ms, and they perform well in file serving, which makes them great for general daily tasks like loading programs.

I prefer IDE/ATAPI optical drives because they are more reliable, and a lot easier to deal with than SATA optical drives. In the past, SATA optical drives were not reliable.

I suggest Gentoo Linux instead of Debian. Sure, Debian is OK, but they use pre-compiled programs. People say Gentoo goes for speed. This is wrong: Gentoo goes for reliability and stability. |
I don't know what "multilib distribution" even means; maybe I'll do some searches. But since I installed the 32-bit Mepis on my junk-hardware Linux system and tried the 64-bit Mepis liveCD on my son's new Windows XP computer, I'm convinced that is the distribution to use when I assemble the next computer. I like to do as much sysadmin in GUI mode as possible and go to the command line only when necessary. Mepis seems to support that better than other distributions. Also, its documentation is much better, even for the command-line sysadmin tasks.
As for worrying about timing: in my regular job I develop software for some large-data problems that have heavy miss rates on a 1MB L2 cache. If the RAM timing were 25% faster, the whole run would be 23% faster. I may want to do some things at home with similar timing characteristics, and/or, if I get faster RAM at home than I have at work, bring home some of the tests I would run at work.
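That kind of sensitivity follows from a simple Amdahl-style split of runtime into a compute part and a memory-stall part. The numbers below are illustrative assumptions of mine, not the poster's measurements:

```python
# Amdahl-style sketch: runtime = compute part + memory-stall part.
stall_fraction = 0.9          # assumed: 90% of runtime is spent waiting on RAM
ram_speedup = 1.25            # RAM timing 25% faster

new_runtime = (1 - stall_fraction) + stall_fraction / ram_speedup
improvement = 1 - new_runtime
print(round(improvement, 2))  # a 25% RAM speedup buys ~18% overall here
```

The closer the stall fraction is to 1 (i.e. the worse the cache hit rate), the closer the overall speedup tracks the raw RAM speedup.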
I expect/hope that Linux (especially 64-bit) won't have any similarly stupid limits, so I expect a large-RAM system to barely touch the disk in an edit/compile/test/repeat cycle with small or moderate-size test cases. I may want some advice on how to tune (maximize) write-behind behavior for data (such as object files) where the cost to recreate them after a crash is low. |
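For the write-behind question: on Linux the relevant knobs live under /proc/sys/vm (vm.dirty_ratio and friends, adjustable via sysctl). A sketch that just reads the current values; the knob names are standard, but whether each exists depends on the kernel version:

```python
from pathlib import Path

# Kernel write-behind tunables; larger dirty_* values let more unwritten
# data sit in RAM before the kernel forces it out to disk.
for knob in ("dirty_ratio", "dirty_background_ratio", "dirty_expire_centisecs"):
    path = Path("/proc/sys/vm") / knob
    if path.exists():                       # procfs is Linux-only
        print(knob, "=", path.read_text().strip())
```

Raising dirty_ratio and dirty_expire_centisecs is the usual way to keep cheap-to-recreate files (like object files) in RAM longer, at the cost of losing more on a crash.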
Linux has FAR better memory management than Windows. I've had an uptime upwards of 3 days (currently) and my swap has not been touched ONCE. I'm using a 32-bit SMP kernel with only 1GB of RAM, running Fedora 8. If you're using 4GB of RAM, I imagine you'll have the same experience as me: a good one. (I'm sure you know, but remember, Linux has the same ~3GB per-process limit on a 32-bit kernel as Windows. Obviously, 64-bit supports more.)
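The ~3 GB figure mentioned here is just 32-bit address-space arithmetic: the kernel reserves a slice of the 4 GB virtual space for itself (1 GB in the common 3G/1G split):

```python
# 32-bit per-process address space, minus the kernel's reserved slice.
total_va = 2**32                 # 4 GiB addressable with a 32-bit pointer
kernel_slice = 1 * 2**30         # 1 GiB reserved in the usual 3G/1G split

user_space_gib = (total_va - kernel_slice) // 2**30
print(user_space_gib)            # GiB left for the process itself
```

The split is a kernel build option, so the exact user-space ceiling varies a little between systems, but it can never exceed 4 GiB on a 32-bit kernel without tricks like PAE (which raises physical, not per-process virtual, memory).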
I agree with the others: go with an Nvidia card. It's just much more Linux-friendly than ATI is. ATI has promised better support, but I'm not holding my breath.

As for the Asus board that only supports 8GB: I didn't catch the model, but that sounds like a hardware limitation. I'd refer to the site; normally what they list there is what the BOARD can take, not an OS limit or otherwise. If you have the capital, go for the future-proof board. We're not that far out from OEM desktops supporting 64GB of RAM, so it's only a matter of time before all of that stuff is reasonably priced.

Regards, Brandon |
I know from painful experience not to run a single application larger than the swap file in Windows, no matter how much physical RAM you have. Run a 1.5GB application alone on a Windows system with 2GB of RAM and 1GB of swap space: the commit level never gets close to 2GB, but the application fails, apparently due to lack of memory. With 2GB of swap space, it not only works, but there is enough memory to run two 1.5GB applications at once (even with nothing shared between them). Run the same (recompiled, of course) 1.5GB application on a Linux system with 2GB of RAM, and the swap partition size doesn't matter; it doesn't even get used.

However, when you get into an application deallocating and reallocating lots of varying-size chunks of memory, the results are much more mixed. Very likely Windows is a little better. (I know major parts of that are linked in via the runtime library and only some parts are in the OS, but what matters is taking it all together: GNU/Linux vs. Microsoft.) How much CPU time is spent in all the memory-management routines for all that deallocating and reallocating? Usually Microsoft wins. How long before you fragment your 3GB of virtual address space so badly that, even with less than 2GB in use, you can't allocate any moderately large chunk? Often Microsoft wins that one as well. |
John
As to the allocating/deallocating issue: if the application is built properly, there should be no issue. If the application has a bug (say, forgetting to free memory), that is an application bug and not an OS bug. One cannot hold an application bug against the OS, regardless of the OS.

I think you are also a little unclear on how Linux handles memory. It tends to shuffle stuff around to allow memory to be best allocated. I usually picture the memory as a pile of sand: when you pull a bucket of sand out of the pile (deallocate memory), the sand in the rest of the pile "fills in" the space, with the kernel directing this in an orderly fashion. I am not sure about the CPU cost of doing the memory shuffling, but considering that more and more complex computing problems (movie effects, military flight simulators, decryption software, face recognition, etc.) are moving to Linux, I do not think it can be a real issue. |
I tend to agree that Windows memory management is more flexible, considered purely from an out-of-the-box perspective. If you give Linux some really intensive task, it will perform it a lot faster (I have on occasion seen it run up to five times as fast on the very same kind of task), but at the expense of taking over so many resources that anything else hardly gets a chance to start up. Depending on your perspective, that may be good or bad. As far as I know, none of the world's supercomputers run Windows, which is surely not a coincidence. Given the same hardware, Linux/Unix on the server will take you further than Windows.

In short, if you need something dedicated, avoid Windows; if you need something flexible and you don't want to set things up yourself, Windows may be the better option. But that proviso is fundamental: Windows may be more flexible in one specific way, but as it is essentially a black box, you really don't have any control over it. Say you want to move over a large pile of files really fast and you are willing to allocate as many resources to that task as possible: how are you going to do that? You can tweak a little left, you can tweak a little right, but that's about it.
|
If you are doing a software experiment, you want controls to get what you want out of the experiment. Using ECC memory adds such a control. With ECC it takes several million raw errors before one gets through; without ECC memory, only a few thousand. All data is handled in memory. If I were creating programs, I would use ECC memory to make sure the final product does not have any errors.
A CPU manages about 10 gigaflops, and probably less. Graphics cards have a lot more: they range from tens of gigaflops to hundreds. Compared to CPUs, that means more computing power. You can save money by selecting a slower CPU and spending it on a GPU instead. The nVidia GeForce 8 series and up can process several kinds of data much faster than any CPU. A GPU will turn a 2% improvement into a kitten: a GPU can provide 400% or more, compared to your hope that faster memory will finish your experiment sooner. The GeForce 8 has IEEE-754 precision. Look up CUDA at nVidia.
Linux has different file systems that handle caching and buffering differently, although they use the same file I/O routines from the kernel. Also, Linux is a virtual-memory, multitasking OS: every program gets its own environment. This gives the reliability and stability that Linux is known for. Since Linux uses virtual memory, you can keep giving it more memory, far exceeding Windows. Windows stops at 1.5 GB of memory per program; Mac OS X is the same. Linux can go toward petabytes, or probably more, for each program. Sure, you can mess around with write-behind tuning, but you may hurt the stability and reliability of Linux.

lazlow, the movie industry uses Linux; they have been using it for years. The military probably does too. The government uses Solaris with SPARC processors (I think). |
*Although the P4 is made by Intel, not AMD, its architecture is most similar to AMD64, and it runs the same software.
I would recommend worrying less about sound than video, but even ATI video is not as bad as the worst whiners make it sound; nVidia must be frigging easy is all I have to say about that, after making both an ATI 9000 and an X800 work [pretty easily] with Debian, and the 9000 also worked with various RHs, Mandrakes, and Mandrivas without trouble. For sound, I recently got the "notorious" AC97 sound card/chip to output on digital coax by switching the output to "off" in the Debian "volume control"; alsamixer was totally useless, and I don't know yet how to make "ON" mean frigging "on," but with some futzing it does work, if you really care.
DDR and DDR2 are totally incompatible with one another, and the memory must match the mobo and CPU; the numbers do not necessarily say anything about overall system speed, although higher numbers tend toward higher performance.
|
Electro, I have a friend of a friend that's in the military. They use Linux as well from what I understand.
Brandon |
I'll place the order for the parts pretty soon. I'll just have to guess on 5-5-5-15 vs. 4-4-4-12, because nothing anyone has said about that has contained any actual information. But thank you (and everyone else responding) for info on the other aspects. I hate to argue with experts, but ...
Most RAM doesn't have ECC, and most people use such RAM without problems. RAM designers have determined that making RAM reliable without ECC is cost-effective.
I actually wrote disk drivers long ago, when the ECC code was in the driver, not the firmware, so I know the theory behind it. I know the basis for the estimates of what fraction of low-level errors it will correct, and I know those theories are always based on narrow (often incorrect) assumptions about the raw error characteristics of the underlying media. As the technology has matured, there are now a lot of good reasons to trust ECC reliability claims in hard drives. Those reasons don't translate into reasons to trust ECC in RAM. I don't think I need more reliability than non-ECC RAM provides; if I did need it, I'd be very worried that ECC RAM was giving me only an illusion of extra reliability.
Even a disk that seeks ten times faster than typical couldn't make the same performance difference as changing a 50% hit rate on the file cache to a 98% hit rate. (On a task I run a lot at work, that is the difference between the 2GB XP32 on my desk and the 8GB XP64 I only got to try once. Hopefully, I'll get a decent XP64 system at work soon and a better Linux system at home.)
Microsoft charges more money for XP 64 than for XP 32. |
In my limited experience in programming, Linux, and Windows... this is by far one of the most informative discussions I've seen.
Regards, Brandon |
Hard drives use ECC, period. Hard-drive manufacturers include ECC because it provides the reliability of retrieving data that their customers depend on. Without ECC, the data would be corrupted much of the time. Do you really want your images or data corrupted most of the time, or do you want to retrieve the files intact with no problems? The cost of implementing ECC in hard drives is a lot cheaper than replacing a hard drive that is not reliable at retrieving data. I always recommend people select ECC whenever possible. People are moving toward SSDs, which come with either non-ECC or ECC; the SSDs with ECC will be more reliable.

I have computers with non-ECC memory and computers with ECC. The computers with ECC rarely come up with errors or corrupted data; the computers with non-ECC memory always have errors. I upgraded the memory in my computers to ECC and it helps a lot.

In the link http://www.crucial.com/kb/answer.aspx?qid=3692 they state that computer errors are rare. This is incorrect: since computers are designed by us humans, errors come up a lot. Also, ECC and parity are not the same; Crucial is wrong on this. ECC uses a special algorithm to double-check that there are no errors, while parity provides a bit mask, and the two cannot be used interchangeably. There are many kinds of ECC algorithms; with a certain ECC algorithm you get the same performance as non-ECC but with resistance to data corruption.
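The parity-vs-ECC distinction can be made concrete in a few lines: a single parity bit only detects an odd number of flipped bits, while an ECC (e.g. a SECDED Hamming code) carries enough extra bits to locate and fix a single flip. A toy sketch of the detection half only:

```python
# Even parity over one byte: detects (but cannot locate) a single bit flip.
def parity_bit(byte: int) -> int:
    return bin(byte & 0xFF).count("1") % 2

word = 0b10110100
stored_parity = parity_bit(word)

corrupted = word ^ 0b00000100          # flip one bit in "memory"
print(parity_bit(corrupted) != stored_parity)    # single flip is detected

double_flip = word ^ 0b00000110        # two flips cancel out in parity
print(parity_bit(double_flip) != stored_parity)  # double flip goes unnoticed
```

An ECC DIMM does the correcting variant of this in hardware, typically across a 64-bit word with 8 extra check bits.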
If you are dealing with tasks that resemble database management, you should look into MySQL or PostgreSQL. Both are open source, but PostgreSQL is designed to handle very huge databases.

If you are designing something for a client and not for the new 2.7 kernel release, I suggest you stop wasting time here. Since you are already set in stone on what you want, begin your work on your own.

FYI, 80x86 systems are not reliable. The competition between Intel and AMD has made 80x86-based systems become crap. Reliable processors that do not use any magic to handle memory or other forms of data processing are PowerPC and SPARC. NASA uses the Motorola 68000 series in satellites and robots for its robustness and reliability. Zilog and Parallax also have reliable and robust processors. |
It was at least a big mistake. It will take longer to know whether it was a very big mistake. The BIOS is really terrible: it has no support at all for configuring, or even telling you, the memory timings. According to memtest86+, the BIOS has configured the DDR2-800 memory at 200MHz (half its correct clock) and at 7-4-7 instead of the correct 5-5-5. The utility Gigabyte claims to have available for adjusting memory timing seems to be Windows-only. Contrary to what you said, my son's ASUS board doesn't seem to be Windows-dependent in any way. Can anyone tell me what Linux programs exist for tweaking, or at least viewing, DDR2 memory timings on an AM2-socket motherboard?

I also got a SATA DVD burner instead of an IDE one. That might be the cause of another problem, but I'm pretty sure that problem is also the motherboard. If I restart and/or power off and on with media in the DVD drive, I can't open the DVD drive until after the OS has started from the hard disk. On the ASUS motherboard with an IDE DVD drive, I left media in when I powered off several times: just power on, press the drive button, and it opens. On the Gigabyte, with media in the SATA DVD drive and the BIOS running (booting, stopped in the boot menu, or stopped in BIOS setup), the button on the DVD drive does nothing. As soon as the OS is running from the disk drive, the DVD drive button works. If I corrupt the OS on the hard drive while I have the wrong media in the DVD drive, I'm in serious trouble: I think I'd need to take the DVD drive to a different computer to get the media out so I could put a Linux liveCD in to fix the corrupted OS.

The integrated video fails entirely with the "nv" driver. I pretty much expected that after reading other threads. It works fine with the "vesa" driver. I switched to the closed-source "nvidia" driver (the Mepis install for that is very smooth once you figure out where in the menu they hide the X assistant program that does it). I have no idea what the "nvidia" driver does better than the "vesa" driver, since I only have one monitor at the moment, but it seemed like a good idea.

Edit: Now the BIOS's behavior is even worse, and I have no clue what changed; it has no settings that should affect this. On power-up, it instantly decides there is no hard drive and (assuming no media in the DVD drive) goes immediately to the disk-boot-failure message before there is a chance to see anything else (all in the fraction of a second it takes the monitor to realize a video signal has started). A BIOS is supposed to wait some decent fraction of a second for the hard drive to spin up. It doesn't. I need to press Ctrl-Alt-Delete after power-up to get it to see the hard drive. Adding insult to injury, I found in their FAQ: "Why does new BIOS sometimes fail to detect IDE? ... To solve your problem, please go to BIOS to reset 'Boot delay time' or 'IDE delay time' to a longer one." I'm not an idiot. If there were any BIOS option like "Boot delay time" or "IDE delay time", I would have tried it. This BIOS lets you configure basically nothing. |