Quote:
I know from painful experience not to run a single application larger than the swapfile in Windows, no matter how much physical RAM you have. Run a 1.5G application alone on a Windows system with 2G of RAM and 1G of swap space. The commit level never comes close to 2G, but the application fails, apparently due to lack of memory. With 2G of swap space, it not only works, but has enough memory to run two 1.5G applications at once (even with nothing shared between them). Run the same (recompiled, of course) 1.5G application on a Linux system with 2G of RAM. The swap partition size doesn't matter. It doesn't even get used.

However, as you get into issues of an application deallocating and reallocating lots of varying-size chunks of memory, the results are much more mixed. Very likely Windows is a little better. (I know major parts of that are linked in via the run-time library and only some parts are in the OS, but what matters is taking it all together: GNU/Linux vs. Microsoft.) How much CPU time is spent in all the memory management routines for all that deallocating and reallocating? Usually Microsoft wins. How long before you fragment your 3GB of virtual address space so badly that, even with less than 2G in use, you can't allocate any moderately large chunk? Often Microsoft wins that one as well.
John
As to the allocating/deallocating issue: if the application is built properly, there should be no issue. If the application has a bug (say, forgetting to free memory), that is an application bug and not an OS bug. One cannot hold an application bug against the OS, regardless of OS. I think you are also a little unclear on how Linux handles memory. It tends to shuffle stuff around to allow the memory to be best allocated. I usually picture the memory as a pile of sand: when you pull a bucket of sand out of the pile (deallocate memory), the sand in the rest of the pile "fills in" the space, but the kernel directs this in an orderly fashion. I am not sure about the CPU cost of doing the memory shuffling, but considering that more and more complex computing problems (movie effects, military flight simulators, decryption software, face recognition, etc.) are moving to Linux, I do not think it can be a real issue.
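To make the fragmentation question both posts raise concrete, here is a minimal C sketch of the pattern described: long-lived small allocations interleaved with short-lived large ones. All sizes and counts are made up for illustration, and whether the final big allocation actually fails depends on the allocator and address-space size; glibc, for example, serves large requests through mmap(), which sidesteps heap fragmentation entirely.

```c
#include <stdio.h>
#include <stdlib.h>

/*
 * Fragmentation sketch: interleave small and large allocations, then
 * free only the large ones. The freed holes are separated by small
 * blocks still in use, so a later request bigger than any single hole
 * can fail even though plenty of memory is nominally free. On a modern
 * 64-bit glibc system the last allocation will usually succeed anyway
 * (glibc mmaps big requests); the point is the allocation pattern.
 */
int main(void)
{
    enum { N = 1024 };
    void *small[N], *large[N];

    for (int i = 0; i < N; i++) {
        small[i] = malloc(64);          /* long-lived small block */
        large[i] = malloc(96 * 1024);   /* short-lived large block */
        if (!small[i] || !large[i]) {
            fprintf(stderr, "allocation failed at %d\n", i);
            return 1;
        }
    }

    for (int i = 0; i < N; i++)         /* free only the large blocks */
        free(large[i]);

    /* ~96 MB is free, but chopped into <=96 KB holes; an allocator
     * carving everything from one heap (or a 32-bit address space
     * under pressure) may find no contiguous 64 MB range. */
    void *big = malloc(64 * 1024 * 1024);
    printf("64 MB allocation %s\n", big ? "succeeded" : "failed");

    free(big);
    for (int i = 0; i < N; i++)
        free(small[i]);
    return 0;
}
```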
I tend to agree that Windows memory management is more flexible, considered purely from an out-of-the-box perspective. If you give Linux some really intensive task, it will perform it a lot faster (I have on occasion seen it run up to five times as fast on the very same kind of task), but at the expense of taking over so many resources that anything else hardly gets a chance to start up. Depending on your perspective, that may be good or bad.

As far as I know, none of the world's supercomputers run Windows, which is surely not a coincidence. Given the same hardware, Linux/Unix on the server will take you further than Windows. In short, if you need something dedicated, avoid Windows; if you need something flexible and you don't want to set things up yourself, Windows may be the better option. But that proviso is fundamental. Windows may be more flexible in one specific way, but as it is essentially a black box, you really don't have any control over it. Say you want to move over a large pile of files really fast and you are willing to allocate as many resources to that task as possible: how are you going to do that? You can tweak a little left, you can tweak a little right, but that's about it.
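On the "move a large pile of files fast" point, Linux does expose some control here. A minimal sketch using sendfile(2), which copies data inside the kernel with no user-space buffer; it assumes a kernel recent enough (2.6.33 or later) that the destination of sendfile() may be a regular file rather than only a socket. File names are whatever you pass on the command line.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>

/* Copy one file inside the kernel with sendfile(2): the data never
 * passes through a user-space buffer. On kernels older than 2.6.33
 * the destination must be a socket. */
int copy_file(const char *src, const char *dst)
{
    int in = open(src, O_RDONLY);
    if (in < 0) { perror(src); return -1; }

    struct stat st;
    if (fstat(in, &st) < 0) { perror("fstat"); close(in); return -1; }

    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, st.st_mode);
    if (out < 0) { perror(dst); close(in); return -1; }

    off_t offset = 0;
    while (offset < st.st_size) {
        ssize_t n = sendfile(out, in, &offset, st.st_size - offset);
        if (n <= 0) { perror("sendfile"); break; }
    }

    close(in);
    close(out);
    return offset == st.st_size ? 0 : -1;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s SRC DST\n", argv[0]);
        return 1;
    }
    return copy_file(argv[1], argv[2]) ? 1 : 0;
}
```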
Quote:
If you are doing a software experiment, you want controls to get what you want out of the experiment. By using ECC memory, you add additional controls to aid your experiment. With ECC it takes several million raw errors to produce one actual error; without ECC memory, it takes only a few thousand. All data is handled in memory. If I were creating programs, I would use ECC memory to make sure the final product does not have any errors. Quote:
A CPU delivers about 10 gigaflops, probably less. Graphics cards have much higher throughput, ranging from tens of gigaflops to hundreds of gigaflops. Again, what this means compared to CPUs is more computing power. You can save money by selecting a slower CPU and spending the savings on a GPU. The nVidia GeForce 8 and up can process several kinds of data much faster than any CPU. A GPU makes a 2% improvement look like nothing: it can provide a 400% or greater gain, compared to your hope that faster memory will finish your experiment sooner. The GeForce 8 has IEEE-754 precision. Look up CUDA at nVidia. Quote:
Linux has different file systems that handle caching and buffering differently, although they use the same file I/O routines from the kernel. Also, Linux is a virtual-memory, multi-tasking OS. Every program gets its own environment. This gives the reliability and stability that Linux is known for. Since Linux uses virtual memory, you can keep giving a program more memory, far beyond what Windows allows. Windows stops at 1.5 GB of memory per program. Mac OS X is the same. Linux can scale toward petabytes or probably more for each program. Sure, you can mess around with write-behind tasks, but you may hurt the stability and reliability of Linux. lazlow, the movie industry uses Linux. They have been using it for years. The military probably uses it too. The government uses Solaris with SPARC processors (I think).
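For scale on the gigaflops figures in the post above, here is a crude single-core C benchmark sketch. The multiply-add loop is an illustrative choice, not a standard benchmark, and because each iteration depends on the previous one it measures latency-bound scalar throughput; treat the printed number as a floor, not a peak figure.

```c
#include <stdio.h>
#include <time.h>

/* Crude single-core FLOPS estimate: time a long multiply-add loop.
 * Two floating-point operations per iteration. Compile with -O2; the
 * running sum is printed so the loop cannot be optimized away. */
int main(void)
{
    const long iters = 200 * 1000 * 1000L;
    double sum = 0.0, x = 1.0000001;

    clock_t t0 = clock();
    for (long i = 0; i < iters; i++)
        sum = sum * x + 1.0;            /* 1 multiply + 1 add */
    clock_t t1 = clock();

    double secs = (double)(t1 - t0) / CLOCKS_PER_SEC;
    printf("sum=%g  %.2f gigaflops (single core, scalar)\n",
           sum, 2.0 * iters / secs / 1e9);
    return 0;
}
```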
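And on the per-program virtual memory point, a minimal sketch of how a Linux process can reserve address space far beyond physical RAM. The 64 GB figure is an assumption for illustration, and the behavior relies on a 64-bit build plus Linux's default overcommit policy.

```c
#include <stdio.h>
#include <sys/mman.h>

/* Reserve 64 GB of anonymous virtual memory. Under Linux's default
 * overcommit policy this succeeds on a machine with far less RAM,
 * because a page only gets physical backing when first written.
 * Touching all of it would eventually invoke the OOM killer, so we
 * fault in just one 4 KB page per GB (~256 KB total). */
int main(void)
{
    size_t size = 64UL << 30;   /* 64 GB of address space */
    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    for (size_t off = 0; off < size; off += 1UL << 30)
        p[off] = 1;             /* touch one page per GB */

    printf("reserved %zu GB of virtual memory\n", size >> 30);
    munmap(p, size);
    return 0;
}
```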
Quote:
*Although the P4 is made by Intel not AMD, its architecture is most similar to AMD64, and runs with the same software. Quote:
I would recommend worrying less about sound than video, but even ATI video is not as bad as the worst whiners make it sound; nVidia must be frigging easy is all I have to say about that, after making the ATI 9000 & X800 both work [pretty easily] with Debian, and the 9000 also worked with various RHs, Mandrakes, & Mandrivas without trouble. For sound, I recently got the "notorious" AC97 soundcard/chip to output on digital coax by switching the output to "off" in the Debian "volume control"; alsamixer was totally useless, and I don't know yet how to make "ON" frigging mean "on," but with some futzing, it does work, if you really care.
DDR & DDR2 are totally incompatible with one another, and the memory must match the mobo & CPU; they do not necessarily have anything to do with overall system speed, although higher numbers tend to perform better.
Electro, I have a friend of a friend that's in the military. They use Linux as well from what I understand.
Brandon
Quote:
I'll place the order for the parts pretty soon. I'll just have to guess on the 5-5-5-15 vs. 4-4-4-12 question, because nothing anyone has said about that has contained any actual information. But thank you (and everyone else responding) for info on other aspects. I hate to argue with experts, but ... Quote:
Most RAM doesn't have ECC, and most people use it raw without problems. RAM designers have determined that making RAM reliable without ECC is cost effective. Quote:
I actually wrote disk drivers long ago, when the ECC code was in the driver, not the firmware. So I know all the theory behind it. I know the basis for the estimates of what fraction of low-level errors it will correct, and I know those theories are always based on narrow (often incorrect) assumptions about the raw error characteristics of the underlying media. As the technology has matured, there are now a lot of good reasons to trust ECC reliability claims in hard drives. Those reasons don't translate into reasons to trust ECC in RAM. I don't think I need more reliability than non-ECC RAM. If I did need it, I'd be very worried that ECC RAM was giving me only an illusion of extra reliability. Quote:
Quote:
Even a disk that seeks ten times faster than typical couldn't make the same performance difference as changing a 50% hit rate on the file cache to a 98% hit rate (see the back-of-the-envelope arithmetic after this post). On a task I run a lot at work, that is the difference between the 2GB XP32 on my desk and the 8GB XP64 I only got to try once. (Hopefully I'll get a decent XP64 system at work soon and a better Linux system at home.) Quote:
Quote:
Microsoft charges more money for XP 64 than for XP 32.
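The hit-rate claim above is easy to sanity-check. A tiny C sketch with assumed latencies (0.1 ms for a cache hit, 10 ms for a disk access, both invented for illustration):

```c
#include <stdio.h>

/* Effective access time = hit_rate * t_cache + (1 - hit_rate) * t_disk.
 * Latencies are illustrative assumptions, not measurements. */
static double eff_ms(double hit_rate, double t_cache_ms, double t_disk_ms)
{
    return hit_rate * t_cache_ms + (1.0 - hit_rate) * t_disk_ms;
}

int main(void)
{
    const double t_cache = 0.1, t_disk = 10.0;   /* ms, assumed */

    printf("50%% hits, normal disk:  %.3f ms\n", eff_ms(0.50, t_cache, t_disk));
    printf("50%% hits, 10x faster:   %.3f ms\n", eff_ms(0.50, t_cache, t_disk / 10));
    printf("98%% hits, normal disk:  %.3f ms\n", eff_ms(0.98, t_cache, t_disk));
    return 0;
}
```

With these made-up numbers, raising the hit rate from 50% to 98% is roughly a 17x improvement, while the ten-times-faster disk manages only about 9x, which is the poster's point.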
In my limited experience in programming, Linux, and Windows... this is by far one of the most informative discussions I've seen.
Regards, Brandon
Quote:
Hard drives use ECC, period. Hard drive manufacturers include ECC because it provides the reliability of retrieving the data that their customers depend on. Without ECC, the data would be corrupted most of the time. Do you really want your images or data corrupted most of the time, or do you want to retrieve your files intact with no problems? The cost of implementing ECC in hard drives is a lot cheaper than replacing a hard drive that cannot retrieve data reliably. I always recommend people select ECC whenever possible. People are moving towards SSDs, which come either with or without ECC; the SSD with ECC will be more reliable. I have computers with non-ECC and computers with ECC. Computers with ECC rarely show errors or corrupted data. Computers with non-ECC memory always have errors. I upgraded the memory in my computers to ECC. It helps a lot.

From the link http://www.crucial.com/kb/answer.aspx?qid=3692, they state computer errors are rare. This is incorrect. Since computers are designed by us humans, errors come up a lot. Also, ECC and parity are not the same; Crucial is wrong on this. ECC uses a special algorithm to double-check that there are no errors, while parity provides only a simple check bit, so the two cannot be used interchangeably (a small sketch after this post illustrates the difference). There are many kinds of algorithms for ECC. With a certain ECC algorithm, you get the same performance as non-ECC but with added resistance to data corruption. Quote:
If you are dealing with tasks that resemble database management, you should look into MySQL or PostgreSQL. Both are open source, but PostgreSQL is designed to handle very large databases. If you are designing something for a client and not for the new 2.7 kernel release, I suggest you stop wasting time here. Since what you want is already set in stone, begin your work on your own. FYI, 80x86 systems are not reliable. The competition between Intel and AMD has made 80x86-based systems become crap. Reliable processors that do not use any magic to handle memory or other forms of processing data are PowerPC and SPARC. NASA uses the Motorola 68000 series in satellites and robots for its robustness and reliability. Zilog and Parallax also have reliable and robust processors.
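To illustrate the parity-versus-ECC distinction drawn above: parity can only report that some bit flipped, while an error-correcting code carries enough redundancy to locate and repair the flip. Here is a minimal Hamming(7,4) sketch, a textbook teaching code rather than what real ECC DIMMs use (those run wider SECDED codes over 64-bit words):

```c
#include <stdio.h>

/* Hamming(7,4): encode 4 data bits into 7, correct any single-bit
 * error. Bit positions are 1-based; parity bits sit at positions
 * 1, 2 and 4. */

static unsigned encode(unsigned d) /* d: 4 data bits */
{
    unsigned d1 = d & 1, d2 = (d >> 1) & 1, d3 = (d >> 2) & 1, d4 = (d >> 3) & 1;
    unsigned p1 = d1 ^ d2 ^ d4;   /* covers positions 1,3,5,7 */
    unsigned p2 = d1 ^ d3 ^ d4;   /* covers positions 2,3,6,7 */
    unsigned p3 = d2 ^ d3 ^ d4;   /* covers positions 4,5,6,7 */
    /* positions:    1         2         3         4         5         6         7 */
    return p1 | (p2 << 1) | (d1 << 2) | (p3 << 3) | (d2 << 4) | (d3 << 5) | (d4 << 6);
}

static unsigned bit(unsigned w, int pos) { return (w >> (pos - 1)) & 1; }

static unsigned decode(unsigned w) /* returns the 4 corrected data bits */
{
    unsigned s1 = bit(w,1) ^ bit(w,3) ^ bit(w,5) ^ bit(w,7);
    unsigned s2 = bit(w,2) ^ bit(w,3) ^ bit(w,6) ^ bit(w,7);
    unsigned s4 = bit(w,4) ^ bit(w,5) ^ bit(w,6) ^ bit(w,7);
    unsigned syndrome = s1 | (s2 << 1) | (s4 << 2); /* = error position, 0 if none */
    if (syndrome)
        w ^= 1u << (syndrome - 1);                  /* flip the bad bit back */
    return bit(w,3) | (bit(w,5) << 1) | (bit(w,6) << 2) | (bit(w,7) << 3);
}

int main(void)
{
    unsigned data = 0xB;                 /* 4-bit example value */
    unsigned code = encode(data);
    for (int pos = 1; pos <= 7; pos++) { /* flip each bit in turn */
        unsigned bad = code ^ (1u << (pos - 1));
        printf("flip bit %d: decoded 0x%X (%s)\n", pos, decode(bad),
               decode(bad) == data ? "corrected" : "FAILED");
    }
    return 0;
}
```

Flip any two bits, though, and plain Hamming(7,4) will silently "correct" to the wrong value, which is why memory ECC adds one more parity bit to detect double errors.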
Quote:
It was at least a big mistake. It will take longer to know whether it was a very big mistake. The BIOS is really terrible. It has no support at all to configure, or even tell you, the memory timing. According to memtest86+, the BIOS has configured the DDR2-800 memory at 200MHz (half its correct clock) and at 7-4-7 instead of the correct 5-5-5. The utility Gigabyte claims to have available for adjusting memory timing seems to be Windows only. Contrary to what you said, my son's ASUS board doesn't seem to be Windows dependent in any way. Can anyone tell me what Linux programs exist for tweaking, or at least viewing, DDR2 memory timing on an AM2 socket motherboard? (A sketch of reading the SPD data follows this post.)

I also got a SATA DVD burner instead of an IDE one. That might be the cause of another problem, but I'm pretty sure that problem is also the motherboard. If I restart and/or power off and on with media in the DVD drive, I can't open the DVD drive until after the OS has started from the hard disk. On the ASUS motherboard with an IDE DVD drive, I left media in when I powered off several times; just power on, press the drive button, and it opens. On the Gigabyte, with media in the SATA DVD drive and the BIOS running (booting, or stopped in the boot menu, or stopped in BIOS setup), the button on the DVD drive does nothing. As soon as the OS is running from the disk drive, the DVD drive button works. If I corrupt the OS on the hard drive while I have the wrong media in the DVD drive, I'm in serious trouble: I think I'd need to take the DVD drive to a different computer to get the media out so I could put a Linux live CD in to fix the corrupted OS.

The integrated video fails entirely with the "nv" driver. I pretty much expected that after reading other threads. It works fine with the "vesa" driver. I switched to the closed-source "nvidia" driver (the Mepis install for that is very smooth once you figure out where in the menu they hide the X assistant program that does it). I have no idea what the "nvidia" driver does better than the "vesa" driver, since I only have one monitor at the moment, but it seemed like a good idea.

Edit: Now the BIOS's behavior is even worse, and I have no clue what changed. It has no settings that should affect this. On power-up, it instantly decides there is no hard drive and (assuming no media in the DVD drive) goes immediately to the disk boot failure message before there is a chance to see anything else, all in the fraction of a second it takes the monitor to realize that a video signal has started. A BIOS is supposed to wait some decent fraction of a second for the hard drive to spin up. It doesn't. I need to press Ctrl-Alt-Delete after power-up to get it to see the hard drive. Adding insult to injury, I found this in their FAQ: "Why does new BIOS sometimes fail to detect IDE? ... To solve your problem, please go to BIOS to reset 'Boot delay time' or 'IDE delay time' to a longer one." I'm not an idiot. If there were any BIOS option "Boot delay time" or "IDE delay time" or anything like that, I would have tried it. This BIOS lets you configure basically nothing.
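On the question of viewing DDR2 timings from Linux: the SPD EEPROM on each DIMM can be read once the `eeprom` kernel module is loaded; the `decode-dimms` script shipped with lm-sensors/i2c-tools does exactly this. Below is a minimal C sketch along the same lines. The sysfs path is an example (it varies by board), and the two SPD byte offsets follow the JEDEC DDR2 SPD layout as I recall it, so verify the output against decode-dimms before trusting it.

```c
#include <stdio.h>

/*
 * Read a DDR2 DIMM's SPD EEPROM via sysfs (requires the "eeprom"
 * kernel module; the i2c bus/address below is an example -- list
 * /sys/bus/i2c/devices to find yours).
 *
 * Byte offsets per the JEDEC DDR2 SPD layout:
 *   byte 2  = memory type (0x08 means DDR2)
 *   byte 9  = SDRAM cycle time at the highest CAS latency
 *             (upper nibble = ns, lower nibble = tenths of ns)
 *   byte 18 = supported CAS latencies, one bit per CL value
 */
int main(void)
{
    const char *path = "/sys/bus/i2c/devices/0-0050/eeprom"; /* example */
    unsigned char spd[128];

    FILE *f = fopen(path, "rb");
    if (!f || fread(spd, 1, sizeof spd, f) != sizeof spd) {
        perror(path);
        return 1;
    }
    fclose(f);

    printf("memory type: 0x%02x (0x08 = DDR2)\n", spd[2]);

    double tck = (spd[9] >> 4) + (spd[9] & 0x0f) / 10.0;
    printf("cycle time: %.1f ns (about DDR2-%.0f)\n", tck, 2000.0 / tck);

    printf("supported CAS latencies:");
    for (int cl = 2; cl <= 7; cl++)
        if (spd[18] & (1 << cl))
            printf(" %d", cl);
    printf("\n");
    return 0;
}
```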