LinuxQuestions.org
LinuxQuestions.org > Forums > Linux Forums > Linux - Networking
Linux - Networking This forum is for any issue related to networks or networking.
Routing, network cards, OSI, etc. Anything is fair game.

Old 02-26-2005, 09:39 AM   #1
slope
LQ Newbie
 
Registered: Feb 2005
Posts: 11

Rep: Reputation: 0
What HW do my users need?


I have been asked to set up a network, the whole thing. It will be used by close to 500 users and needs to be able to scale to about 200 more users within a year. This is a non-profit project for me; I got involved to help out the local school and the children there. I have until summer to figure things out, because the project will start in summer and be completed at the end of August. There are actually two schools locally, plus one further out in our district, that will be using this network. The school board estimates there will be occasions with close to 500 simultaneous users, and they will not tolerate lag on the servers.
OK, I have let myself into deep water, not having any real-life experience with managing networks. My knowledge is limited to databases, web design, VB.NET/ASP.NET programming, e-commerce and intranets. I did study some networking at university, but that was some years ago and I am sure things have changed. (Everyone used 10Base-T networks then.)

These are their base requirements:
*Centralized management
*Centralized storage
*Connect to the network from thin clients
*Role-based permissions (for the intranet, but I have sorted that one)
*Data redundancy
*Hot-swap RAID 5 disks
*Gigabit or 100 Mbit LAN
*Easily managed backups (automated?)
*Must handle 500 users, and scale up by about 200 more
*Each user gets their own logon + desktop via remote desktop or equivalent
*Print server
*SQL server
*Mail server
*Linux based

Broken down into servers, it will look like this:
*Print Server
*Firewall Server
*Database Server
*Mail Server
*File Server
*Backup Server
*User-Management Server
*Application Server

Say the number of users logged in at the same time is roughly 350, and they all run one OpenOffice.org app, plus Mozilla and some chat application; maybe 20% (70 people) are using GIMP and another 35 people are using some statistical application at any given time of day. There are also 10 teachers logged in, working with their groupware, at any given time. Now, what kind of hardware would one need to run this without lag? And remember that the system needs to scale to about 200 extra users!

Anyone brave enough to start a debate?


For software they will be using open source, and I would guess OpenOffice.org, GIMP, some statistics tool, and so on. As you can see, they will use pretty much off-the-shelf applications. And of course they will be using the Internet as well.
OK, so everyone gets a remote desktop on the server and can access the needed applications from their thin client. All storage is centralized.

How does one determine what kind of hardware they need? How many servers, what routers/switches and all that?
I got a tip to look into openMosix, so I have read dozens of articles about clusters on Linux and openMosix. But will a network of 500 users who aren't doing extreme calculations get any benefit from a cluster, or are there better ways to go, such as load balancing? What I mean is, we could set up a cluster of 16 or so machines, but will openMosix be able to handle this number of users and this much user input? They would run all applications on the main node, and the node would ship tasks off to the rest of the nodes. But how does that work for a fairly high volume of users running standard applications? Or is it better to set up the required number of servers and then get some tool to manage the load balancing?

*How does one calculate how much CPU/RAM is needed for this?
*What kind of server(s) will do? (size-wise, number of CPUs, etc.)
*What is the best-practice route to choose? (multiple servers, load balancing, etc.)
*Will openMosix work for this type of network? (not too many CPU-intensive tasks and few heavy calculations/simulations)

I guess there must be a rule of thumb in the industry, a sort of standard guideline for how much CPU power and RAM to add for, say, every 10 or 20 users. Any tips will do here; I don't know much about this subject today but intend to learn before summer. Any links or thoughts on the matter are welcome.
Remember that I will be happy for any tips: what books to read, what links to look into, and real-life magic. I will do what it takes to make sure the system will be OK, even if it takes all the late nights of studying.
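There is no single industry rule of thumb that survives contact with real workloads, but the kind of estimate being asked for can be sketched in a few lines. A minimal Python sketch; every per-user figure here is an invented assumption for illustration, not a measured value, and should be replaced with numbers observed on a pilot server:

```python
# Back-of-envelope RAM sizing for concurrent thin-client desktop sessions.
# Every per-user figure is an assumption for illustration only.

CONCURRENT_USERS = 350   # the school board's simultaneous-user estimate
GROWTH_HEADROOM = 200    # extra users the system must scale to

RAM_PER_SESSION_MB = 96  # assumed RSS: OpenOffice.org + Mozilla + chat, shared libs
RAM_GIMP_EXTRA_MB = 64   # assumed additional memory for each GIMP user
OS_OVERHEAD_MB = 512     # kernel, daemons, filesystem-cache floor

def total_ram_mb(users, gimp_users):
    """Total server RAM needed if every session runs server-side."""
    return OS_OVERHEAD_MB + users * RAM_PER_SESSION_MB + gimp_users * RAM_GIMP_EXTRA_MB

now = total_ram_mb(CONCURRENT_USERS, 70)
later = total_ram_mb(CONCURRENT_USERS + GROWTH_HEADROOM, 110)

print(f"RAM now:   {now / 1024:.1f} GB")
print(f"RAM later: {later / 1024:.1f} GB")
```

The point of the exercise is less the exact total than what it reveals: even with modest per-session footprints, 350 concurrent server-side desktops land far beyond a single 2005-era commodity server, which argues for several application servers rather than one big box.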
 
Old 02-27-2005, 02:19 AM   #2
dwight1
Member
 
Registered: Feb 2005
Posts: 42

Rep: Reputation: 15
Personally, I think you are headed for a "learning experience".

What I would strongly suggest is not trying to throw it all together at once, but rather, bring it up in stages. I have seen companies trying to do everything at once, and it's usually disastrous.

What does work well is bringing something up which works and is scalable. This also allows you to judge from your clients' needs what their resource requirements will be.

Say, target 10-50 users for the first phase. This will allow you to get the basic foundation in place first. And it had better be rock solid if you want to expand upon it.

I wouldn't put too much trust in a rule of thumb. Rather, develop your own for your own situation.

Also, there will be problems, whatever you do. That's why people have IT staffs.

There's really a whole lot of stuff here which needs to go on. Aside from installing the servers, you need to be able to reinstall them (including all the configuration settings); or at least recover them if something goes wrong.

Basically, if you have experience in this sort of stuff already, you can implement a system rather smoothly, one which not only handles your immediate needs but also those which always come up later.

E.g. People often start off with a Class A network, and then as they grow they cut over to a Class C network. Wise planning beforehand would've saved them the trouble; but the Class A was so much easier when they were small.
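For anyone who doesn't remember the classful sizes being contrasted here, a quick check with Python's standard ipaddress module (the specific private prefixes are only illustrative examples):

```python
# Address capacity of the classful network sizes mentioned above.
import ipaddress

class_a = ipaddress.ip_network("10.0.0.0/8")      # a classic Class A private range
class_c = ipaddress.ip_network("192.168.1.0/24")  # a classic Class C private range

print(class_a.num_addresses)  # addresses in a /8
print(class_c.num_addresses)  # addresses in a /24
```

The jump from 16,777,216 addresses down to 256 is exactly the kind of renumbering pain that planning the addressing scheme up front avoids.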

This is just one thing; there are lots more. Be wary of the automounter under Linux; everyone has trouble with that, especially as the load increases.

I'm also of the opinion that you'd be in far bigger trouble trying to do this with Windows. Your costs would be substantially greater.
 
Old 02-27-2005, 03:28 AM   #3
jschiwal
LQ Guru
 
Registered: Aug 2001
Location: Fargo, ND
Distribution: SuSE AMD64
Posts: 15,733

Rep: Reputation: 682
I'm not a network expert, but I'd suggest getting an idea of how the network would be structured before thinking about what hardware would be required. For example, you may have the administrators on their own subnet. The number of administrators would determine what you would need in a server for that subnet. You may have a computer, either the server or another machine, act as a gateway.

Using thin clients would put an extra strain on the network; 10BaseT just won't cut it. Are the clients all the same, or diverse? What type of clients are we talking about? For example, if these amount to graphics terminals, and the applications are running on the server, then there is a lot of X-forwarding traffic.

There may be a potential problem in the requirements. Printing may occupy a lot of bandwidth on the network and affect everyone else. Also, if streaming media will be used in the classroom, this could occupy a lot of bandwidth. Having a group of clients mount most of the directories, such as /bin, /usr, /sbin, et al., from a server may allow for central management without having the applications run on the server. The Linux Filesystem Hierarchy Standard on the TLDP.org website may provide guidance on which directories could be shared from the server and which shouldn't be. There is also a type of filesystem that acts in layers: for example, you would have /etc mounted both locally and shared. If a configuration file exists on the local drive, that is what is accessed; otherwise the same pathname refers to a shared resource.
This would allow for different library or setup files for diverse clients. My main point here is that with graphics terminals displaying X windows, the application runs on the server, so a lot of network traffic is needed just to handle the graphics. This also puts an extra load on the server, because it is both handling a lot of network traffic, like a file server, and running the applications for a number of users.
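The X-forwarding load described above can be bounded with a back-of-envelope calculation. A Python sketch; the per-session figure is an assumption for illustration only, since real X11 traffic is bursty and application-dependent:

```python
# Rough aggregate of X-forwarding traffic on a shared segment.
# The per-session bandwidth figure is an invented assumption.

SESSIONS = 350           # concurrent thin-client users
KBITS_PER_SESSION = 400  # assumed average X11 traffic per session
SEGMENT_MBITS = 100      # one shared 100 Mbit/s segment

demand_mbits = SESSIONS * KBITS_PER_SESSION / 1000
segments_needed = -(-int(demand_mbits) // SEGMENT_MBITS)  # ceiling division

print(f"Aggregate demand: {demand_mbits:.0f} Mbit/s "
      f"on a {SEGMENT_MBITS} Mbit/s segment")
print(f"Segments needed just for X traffic: {segments_needed}")
```

Even with a modest per-session assumption, the aggregate overruns a single 100 Mbit/s segment, which is the quantitative version of the argument for segmenting the network.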

While your question deals with hardware only, I think it would be a good idea to look on the Samba website for example large configurations.
Here is a link from their documentation on a 500-user network: http://ca.samba.org/samba/docs/man/S...g500users.html .
However, if all of the clients are Linux, then you might not use Samba. I would also recommend this book: http://www.net-security.org/review.php?id=138 . It includes examples for up to a 2000-user network.

With the large number of thin clients, I believe you need to structure the network into subnets to control the number of clients, and thus the amount of network traffic, on each segment. The network traffic will be the bottleneck, so I don't feel a cluster is the solution, because that would concentrate the traffic in one area. Also, some subnets need to be protected from others, especially with students online. The administrators may have access to information about students and teachers that must remain confidential, even if it is just social security numbers. Plus, breaking the network into parts also means breaking the design problem into smaller pieces, which may be easier to configure and install.
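As a concrete illustration of that segmentation, here is a sketch with Python's ipaddress module; the /16 base range and the /26 subnet size are assumptions, chosen because a /26 yields about 60 usable hosts per segment:

```python
# Carving an assumed private /16 into /26 subnets, each with 62 usable hosts.
import ipaddress

campus = ipaddress.ip_network("10.1.0.0/16")
subnets = list(campus.subnets(new_prefix=26))

print(len(subnets))                  # number of /26 segments available
print(subnets[0])                    # first segment
print(subnets[0].num_addresses - 2)  # usable hosts (minus network and broadcast)
```

With 1024 segments of 62 usable addresses each, there is ample room to give administrators, teachers, and each classroom block their own protected subnet.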

One last thing that I thought I should mention. If you have students using computers with access to the web, you need to be able to restrict what they can access. I mention this because it may affect parts of the network design. You will need to set up a web proxy which will block access to over 100,000 pornographic and hate sites, as well as actively filter web pages. Another thread on this site dealt with that:

http://www.linuxquestions.org/questi...hreadid=261217

Good Luck!

Last edited by jschiwal; 02-27-2005 at 03:34 AM.
 
Old 02-27-2005, 06:39 AM   #4
slope
LQ Newbie
 
Registered: Feb 2005
Posts: 11

Original Poster
Rep: Reputation: 0
Thx 4 the reply!
Will comment more later, just checking in before dinner :-))

@jschiwal
Great links, thx. Will read up right away.

"You will need to set up a web proxy which will block access to over 100,000 pornographic and hate sites, as well as actively filter web pages."

I guess we all agree that keeping the youngsters away from all the bad Internet sites is not possible. The kids are clever and will just find a new search string as soon as we block one, so the filtering part is hard, but blocking access to the most common porn sites etc. is a start. Will look more into the link later.

Anyway, a proxy server is needed regardless, to act as a middle server. That way we ease the bandwidth use a bit; loading from the proxy rather than from the actual site is faster.
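The bandwidth argument for a caching proxy can be put in rough numbers. A Python sketch; the hit ratio and per-user traffic are invented for illustration, and should be replaced with figures from the proxy's own access logs once it is running:

```python
# Rough estimate of WAN bandwidth saved by a caching proxy such as Squid.
# Hit ratio and per-user traffic are assumptions for illustration.

USERS = 350
KBITS_PER_USER = 20  # assumed average web traffic per user
HIT_RATIO = 0.30     # assumed fraction of requests served from the cache

wan_without_cache = USERS * KBITS_PER_USER / 1000  # Mbit/s
wan_with_cache = wan_without_cache * (1 - HIT_RATIO)

print(f"WAN demand without cache: {wan_without_cache:.1f} Mbit/s")
print(f"WAN demand with cache:    {wan_with_cache:.1f} Mbit/s")
```

Even a moderate hit ratio trims the WAN link noticeably, and cached objects are served at LAN speed, which is the "loading from the proxy is faster" effect described above.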

Last edited by slope; 02-27-2005 at 09:27 AM.
 
Old 02-27-2005, 02:56 PM   #5
win32sux
LQ Guru
 
Registered: Jul 2003
Location: Los Angeles
Distribution: Ubuntu
Posts: 9,870

Rep: Reputation: 380
Quote:
Originally posted by slope
I guess we all agree that keeping the youngsters away from all the bad Internet sites is not possible. The kids are clever and will just find a new search string as soon as we block one, so the filtering part is hard, but blocking access to the most common porn sites etc. is a start. Will look more into the link later.

Anyway, a proxy server is needed regardless, to act as a middle server. That way we ease the bandwidth use a bit; loading from the proxy rather than from the actual site is faster.
i've had dansguardian running on a few LANs with very good results... it's a really nice content filter... what i like most about it is that it actually analyzes the CONTENT of the pages, instead of relying solely on a list of URLs to block (although it can do that too)...

on my LANs, dansguardian works hand-in-hand with squid, which is the most popular caching proxy solution for *NIX... i believe it works with other proxy solutions too...

http://www.dansguardian.org/

http://www.squid-cache.org/

just my two cents...
 
Old 02-27-2005, 06:52 PM   #6
slope
LQ Newbie
 
Registered: Feb 2005
Posts: 11

Original Poster
Rep: Reputation: 0
Why subnet?

I have been given advice to make many subnets, with only about 50 or so users per subnet. I could use some more info about subnets, and about why one should limit a subnet to around 50 users.
Also, I have been told to drop remote desktop. I was planning thin clients without an OS, booting from their NICs. But if this will put significant strain on the network, maybe it is better to use a stripped-down FreeBSD or similar on a CF card? Does anyone know how much strain OS-less thin clients will put on the system? Will it be worth giving every thin client its own OS?
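One way to see why OS-less, network-booting clients strain the network is to estimate the morning "boot storm". A Python sketch; every figure is an assumption for illustration:

```python
# Rough estimate of the "boot storm" for thin clients that pull
# their boot image over the network. All figures are assumptions.

CLIENTS = 100        # clients powering on in the same period, e.g. first lesson
BOOT_MB = 30         # assumed data each client reads to reach a login screen
SEGMENT_MBITS = 100  # shared 100 Mbit/s segment

total_mbits = CLIENTS * BOOT_MB * 8          # megabits to move in total
seconds = total_mbits / SEGMENT_MBITS        # serialized transfer time

print(f"Serialized boot traffic: {seconds / 60:.0f} minutes on one segment")
```

A local OS on a CF card shifts exactly this traffic off the wire, at the cost of maintaining an image per client; network boot keeps management central but concentrates the load at the start of every school day.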

Still open for any tips

Last edited by slope; 02-27-2005 at 07:02 PM.
 
Old 02-28-2005, 01:36 AM   #7
dwight1
Member
 
Registered: Feb 2005
Posts: 42

Rep: Reputation: 15
I don't think you understand.

The advice I gave was to start small, and see how your configuration goes. Once you have a scalable solution, then you can expand based upon what you've learned from your environment.

What I would do is to first set up your firewall, and then set up your email, web, and dns servers. Then set up a single subnet, with a server for your users' home directories, and see how well that works for your users.

Again, I'd plan on starting small with the user base, and work your way up. What you'll find from this is how much memory, disk space and bandwidth people are using. If you can run all 700 people off of that, well, your job is over. Most likely, you won't be able to.

Again, the idea is to get an understanding of the resource requirements as your user base grows. That will give you an understanding of what kind of hardware and network topology you'll need.
 
Old 02-28-2005, 10:13 AM   #8
slope
LQ Newbie
 
Registered: Feb 2005
Posts: 11

Original Poster
Rep: Reputation: 0
@dwight1
We will not be able to do this in stages; we have to do the whole thing at once. This is because the hardware will not be available before summer, and school will be out by then. So we must build as stable and fast a system as possible during the summer, with a less-than-perfect test situation. When we test the system we can get maybe 20 or so users at the same time, unless we find some good network load-testing software.

Ok, so far I have come up with this:

Thin clients with an internal HD and OS, all with the same hardware, so we can make disk images to put on the HD; that makes for easy install/setup of the thin clients. Preferably the thin clients will not hold all the programs. If we mount the root filesystem over NFS, I would prefer to have all programs stored and maintained on a single machine but have them run on the clients. That way maintenance is easier: whenever new programs/applications are installed, they are installed on one machine and not on all the clients. This could be one way to set it up:

*clients with NFS-root to login server
*login server with /home mounted via NFS to the file server
*file server with nightly backups to one or more backup server
*all logging going to write-only log server


I will post more when I have ideas for the rest of the servers. But as always, ideas, tips and links are well received.

Btw, thanks to *_never_* from Germany for helping me out, and of course to you here at this great forum.
 
Old 02-28-2005, 07:34 PM   #9
jschiwal
LQ Guru
 
Registered: Aug 2001
Location: Fargo, ND
Distribution: SuSE AMD64
Posts: 15,733

Rep: Reputation: 682
I think that desktop sharing would use more bandwidth than thin-client X terminals logging into a session on a server, and even that would use up a lot of network bandwidth. Plus, since the applications themselves would be running on the server, you would need a very expensive server array to run them.

You could instead follow the guidelines of the Linux Filesystem Hierarchy Standard and mount most of the partitions from a central server (or servers). This would give you the ability to centrally administer and back up the system, because everyone would be loading programs and libraries from the same directory. There are a few directories that can't be shared if you are using different hardware (probably a few /etc files), but software would be installed and upgraded in one place. Also, some directories such as /tmp wouldn't make sense on a central server, although even a diskless terminal may have its /tmp mounted on a RAM disk.

This way, the programs and libraries will be loaded off the server but will run on the user's machine.

With around 500 users, you are at around half the limit of the number of possible users on an Ethernet network. But bandwidth concerns will come into play long before you reach that limit. Besides, there are different classes of users: administrators, teachers and students. Also, you need to look at the physical setup. If the administrators are on their own subnet, you can set up the gateway to keep out student traffic. This will improve security and reduce network traffic. Also, there are fewer administrators than teachers or students, so setting up and maintaining this part would be easier. Plus, for security reasons you may want only one person to be able to perform backups, to maintain confidentiality. You probably also want to block all Internet traffic in this segment.
Perhaps for this segment, if you only have a few people working in administration, you could use thin-client X terminals running off a dedicated server. You might consider a mirrored or backup second running server, because administration is a critical function, and if the primary server goes down you want to be up and running as quickly as possible.

I hope this has been of some help. You do need to find working examples, which should provide a starting point for you.
 
Old 02-28-2005, 09:19 PM   #10
dwight1
Member
 
Registered: Feb 2005
Posts: 42

Rep: Reputation: 15
"With around 500 users, you are around 1/2 the limit of the number of possible users on an Ethernet network."

With all due respect, Ethernet places no limit whatsoever on the number of users on the network. The bandwidth available, and how the users use that bandwidth, is what places a practical limit. But as far as the RFCs go, there is no limit on the number of users.
 
Old 03-02-2005, 10:06 AM   #11
slope
LQ Newbie
 
Registered: Feb 2005
Posts: 11

Original Poster
Rep: Reputation: 0
Possible Path

Hmm, just a thought.
Will the remote users (the school connected via broadband) be able to run applications from our "mainframe"?
Or does that require a guaranteed or minimum bandwidth?

I plan to have all applications installed on one server and then have the thin clients run and execute the programs; this will make for easier management. But will the users that log in from a remote school need to run applications locally on the (fat) thin client, or will they be able to run them from our app server?
 
Old 03-05-2005, 12:39 PM   #12
slope
LQ Newbie
 
Registered: Feb 2005
Posts: 11

Original Poster
Rep: Reputation: 0
Possible to Run all apps from one server with NX?

I have been given a tip about a wonderful remote desktop app. It is supposed to be faster than X11 and Citrix, and to work fine even on a 10Base-T network. Even large numbers of users are supposed to be able to connect to their desktop via thin client and run all their applications from the server, and all this without putting much strain on the server. I believe this is due to better compression and a much smaller number of calls back and forth.

OK, I would like to have all applications stored on one server for easy maintenance. But I have been told that running the apps from one server on a network with 500+ users will drown the server. NX, however, is supposed to fix this.
Is this possible? Has anyone used it?

NoMachine NX

Does anyone think this will solve my problems? Or have any experience with it?

Last edited by slope; 03-05-2005 at 12:41 PM.
 
  


