Forums > Linux Forums > Linux - Hardware
Old 08-01-2003, 01:01 AM   #1
Registered: Feb 2003
Location: Phoenix, AZ - USA
Distribution: RedHat 8, Micro$haft
Posts: 33

Rep: Reputation: 15
Max possible IDE devices, max IDE interfaces?

Is there any hard maximum on how many IDE devices I can cram into a server? I don't want to do RAID for other reasons. I have access to a biiiiig pile of the new Western Digital 250GB drives. The larger boards I'm finding have two onboard IDE (ATA/133) controllers, but it would be nice to find a board with four.

Main question - I see those expansion cards costing about $40 that you plug into a PCI slot, and they supposedly provide two more controllers, each of which can take an additional master/slave pair.

So if I have a board with 5 open PCI slots... the way I figure it, if I use JUST masters (no slaves, one drive per controller), each expansion card gives me 2 drives, so 5 cards is 10 extra drives right there, plus the 2 on the motherboard's own IDE controllers.

So this makes room for TWELVE IDE ATA/133 drives. Using a 250GB ATA/133 drive on each channel, I'd slam this box out with about 3 terabytes of storage.
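The arithmetic above can be sanity-checked quickly (using the poster's own numbers; "TB" here is decimal, 1 TB = 1000 GB):

```python
# Sanity-check the drive count and total capacity from the plan above.
onboard_channels = 2           # two onboard IDE channels, masters only
pci_cards = 5                  # one IDE controller card per open PCI slot
channels_per_card = 2          # each card adds two IDE channels

total_drives = onboard_channels + pci_cards * channels_per_card
total_gb = total_drives * 250  # 250 GB per drive

print(total_drives)            # 12 drives
print(total_gb / 1000)         # 3.0 TB (decimal)
```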

Will this really work? Is there any serious drawback, especially at that price? I could build this into a standard (large) home-built server from off-the-shelf parts. I'd of course put a 3GHz processor and about a gig of RAM in it, and it would serve its function.

The function: to store some huge databases and a cubic sh*tload of image files and media (mp3) files.

So am I really losing any serious performance by building a system like this? I figure I'm better off going this way than building 10 low-end systems and tying them all together to get the same effect.

Thoughts? Thanks!
Old 08-01-2003, 05:07 AM   #2
Registered: Jul 2003
Location: Brighton, Michigan, USA
Distribution: Lots of distros in the past, now Linux Mint
Posts: 748

Rep: Reputation: 31
First, you should probably judge your actual need. Yes, we'd all love that amount of drive space, but I'd probably just add drives as I need them. With 250GB drives, a normal 4-drive system (1 terabyte, theoretically) might be a decent stopping point. Unless you're storing loads of full-length movies, doing professional movie editing, or something similar, that's a lot of space. The number of music files alone would probably overwhelm any usefulness of it. (Just think of how long it will take to move them one day when you discard the old drives.)

So, I'd probably set them up normally, then add expansion cards as needed. One, you may not want them in the long run. Two, they'll probably get cheaper the longer you put them off.

A bigger problem is bus throughput. Depending on how you set up the drives (RAID?) and how much data you plan to pull off each drive at a time, you might find your system slowing to a crawl as it tries to juggle all those drives and filesystems.
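To put a rough number on that bottleneck: all five controller cards would share a single conventional PCI bus, which tops out around 133 MB/s (32-bit at 33 MHz) in theory, and well under that in practice. A back-of-the-envelope sketch, with assumed figures rather than measurements:

```python
# Conventional PCI: 32 bits wide at 33 MHz, shared by every card on the bus.
bus_peak_mb_s = 32 / 8 * 33    # ~132 MB/s theoretical peak for the whole bus

drives_on_cards = 5 * 2        # 5 cards x 2 masters, per the plan above
per_drive_mb_s = bus_peak_mb_s / drives_on_cards

print(round(bus_peak_mb_s))    # 132
print(per_drive_mb_s)          # 13.2 MB/s each if all 10 stream at once
```

In other words, even before protocol overhead, ten drives streaming simultaneously would each see a small fraction of what a single ATA/133 drive can deliver.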

The biggest problem I see, however, is power and space. Because hard drives have moving parts, they are going to draw a lot of power. Plan on not only installing the largest power supply you can, but taking the cover off and stacking drives and extra power supplies (and UPSes) all around your box. Also, if you decide to go ahead with this project, remind me to invest in your local power company.
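For a rough sense of the power side (illustrative figures for 7200 rpm ATA drives of that era, not datasheet values): each drive might draw on the order of 10 W at steady state and surge to roughly 25-30 W on the 12 V rail during spin-up, which is why having all twelve start at once is the worst moment for the power supply:

```python
# Ballpark power budget for 12 drives (assumed per-drive figures).
drives = 12
idle_w_each = 10       # rough steady-state draw per drive
spinup_w_each = 28     # rough surge per drive while the platters spin up

print(drives * idle_w_each)    # 120 W steady, just for the drives
print(drives * spinup_w_each)  # 336 W surge if all spin up together
```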

Of course, setting up a system like that assumes there aren't any issues with your motherboard's acceptance of that many cards/drives, that you can set your system up to access them, and that you're running a kernel new enough to address that much drive space (drives over 137GB need 48-bit LBA support). There are a lot of factors to consider, depending on how you want it set up. Personally, I'd look into some kind of RAID setup, because after you finish loading enough information to fill all those drives, it would be a real bummer to lose a drive or two for whatever reason. Then again, if it's just stuff you have readily available, no big deal, right?

Oh, and it probably wouldn't hurt to work on a good database so you can find stuff later, no?
Old 08-01-2003, 06:58 PM   #3
Registered: Feb 2003
Location: Phoenix, AZ - USA
Distribution: RedHat 8, Micro$haft
Posts: 33

Original Poster
Rep: Reputation: 15
Originally posted by scott_R

A bigger problem is bus throughput. Depending on how you set up the drives (raid?) , and how much data you plan to pull off of each drive at a time, you might find yourself actually slowing your system to a crawl as the system tries to juggle all those drives and filesystems.

That's really what I'm asking about: "Would this work?"

I'll clarify the use. We currently put out a Linux-based software system for archiving huge catalogs of images/media/video. We build boxes (currently 1.7GHz, 128MB RAM, 80-250GB drives depending) and then put our software on them.

The software uses PHP as a web search/retrieval front end, and uses MySQL to store huge databases with all the image data, EXIF data, and lots of other parameters. The data is put on the drives via direct download from image media, via FireWire straight from DV video, or of course over a network.

The software is done and works very well. We have shipped quite a few units over the last 6 months, and the system has been very usable, stable, and cost-effective compared to other solutions on the market.

We then provide these systems to pro photographers and videographers, who use them as pre-built "plug and play" file servers. These artists can just dump files on their Windows PCs, but there are major limits with FAT filesystems, and even NTFS. Our current system has (to date) held a maximum of 5 million files. Five million image files take up a lot of space. You can burn to DVD or CD-R for backup, but for day-to-day use these people can't shuffle a pile of 2000 CD-Rs in and out of a drive; it's just too slow.

So that brings me back to my question: there are people who would (conceivably) need to archive up to 3TB of data. We can build a rack full of our current units, or we could build it all into a single, mammoth unit. That's what I'm asking: will this work in practice, and are there any actual limits in Linux or otherwise?

Case size isn't a problem; there are plenty of large server cases that can easily hold up to 15 drives as designed. We would probably use two 600-watt power supplies, liquid cooling, and about 15 case fans, so I'm not too worried about that. As for power draw, this single unit would draw much less power than 10 individual boxes linked together.

Thanks for the feedback though. Any other thoughts? Anyone?


