Realistically, how many concurrent users do you think this server could handle?
I'm hoping I can set this up as a webserver and handle around 300 concurrent connections. I'm flexible on the web server; I'm familiar with the default LAMP stack that Ubuntu provides, but was thinking nginx seems pretty good also. Apparently Apache2 leaks memory?
The internet connection is 15mbit. The website will be a custom scripted forum that makes every attempt to be lean on resources.
I know it's difficult to give an absolute yes or no, but figured it might be worth it to know ahead of time if there is no chance at all it could do this.
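To put rough numbers on the 15 Mbit connection (assuming all 300 users pull data at the same moment, which is the worst case):

```shell
# Back-of-envelope: divide a 15 Mbit/s uplink across 300 simultaneous downloads.
awk 'BEGIN { printf "%.0f kbit/s (~%.2f KB/s) per user\n", 15000/300, 15000/300/8 }'
```

Lean HTML pages fit in that budget; image-heavy pages won't.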
Quote:
Originally Posted by wh33t
The website will be a custom scripted forum that makes every attempt to be lean on resources.
Any reason to not use an actively developed, well-maintained existing product? And did you create it? If so: are you a prolific coder with a keen eye for security as well as performance? If not: 0) what does the developer say wrt performance? And 1) will you have enough support to get at least security problems fixed stat?
Quote:
Originally Posted by wh33t
I know it's difficult to give an absolute yes or no, but figured it might be worth it to know ahead of time if there is no chance at all it could do this.
300 concurrent users "doesn't seem much". But unless you actually test a real-life setup you won't know, and you won't know what to tune. I'd say install VirtualBox on your current workstation (if you don't mind the extra software layer) to run the headless server setup as you designed it. (Or use a cloud VM somewhere?) Ensure the OS provides performance information and each component provides enough (debug?) logging. Then use any network stress-testing tool from your laptop and pound the heck out of it in a realistic way (meaning use URIs an actual user might use) while keeping the various components' /server-status-like URIs and top-like terminal windows open to see if bottlenecks turn up during the test. Analyse data, tweak, test the result, document changes. Rinse. Repeat.
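The "keep status-like URIs open" step can be scripted. A minimal sketch, assuming Apache's mod_status is enabled at /server-status (the URL and the sample numbers below are assumptions, not from a real box):

```shell
#!/bin/sh
# Poll Apache mod_status's machine-readable endpoint during a load test and
# pull out the numbers that reveal a worker-pool bottleneck.
status_url="http://localhost/server-status?auto"   # placeholder host

# During a real test you would loop:
#   while :; do curl -s "$status_url" | awk ...; sleep 5; done
# Sample ?auto output is inlined here so the parsing step is visible:
curl_output='Total Accesses: 131
ReqPerSec: 4.37
BusyWorkers: 12
IdleWorkers: 38'

echo "$curl_output" | awk -F': ' '
  /^BusyWorkers/ { busy = $2 }
  /^IdleWorkers/ { idle = $2 }
  END { printf "busy=%d idle=%d (saturated if idle stays at 0)\n", busy, idle }'
```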
The question is a fairly common one and there is no perfect answer until you find one yourself with your setup.
However, from my experience with that hardware, with nginx you could have 150-200 connections for sure. So do stress testing as suggested above; I would advise doing that from different IPs.
I am the coder, I've written content management systems in PHP for about 10 years. I dunno if that instills a hope of faith in me but there it is. When I script I always try to think about security and have had issues in the past where hackers got into my sites so I have put some effort and focus into security. But security is a journey right? Not a destination. It's something that you have to continually keep an eye out for by checking logs and do regular audits etc. Feel free to suggest to me any other measures you think I should take. I know that I will do as much server hardening as I can before I even take it live.
As for WRT, not sure what that is. I've considered trying to put a stress test on a server and a connection, but how would I do that without having a bot farm? I considered perhaps writing a script and launching it from a VPS with a 100mbit connection that would open X number of connections every few seconds while I watch my htop output from a shell. Do you think that's a good idea?
Any tips for stress testing? Are you familiar with Apache? Do you think it would be wise to switch to nginx right away, or deal with that later if the machine starts getting bogged down?
Quote:
Originally Posted by wh33t
I'm hoping I can set this up as a webserver and handle around 300 concurrent connections.
Do you really mean concurrent connections or concurrent users?
There's a big difference.
300 "concurrent users" will not necessarily be holding TCP connections open during their sessions (also depends on what you class as a concurrent user).
Quote:
Originally Posted by wh33t
I dunno if that instills a hope of faith in me but there it is.
It does. (Especially if you made mistakes and learned from them but then again the rest of your reply kind of implies that already...)
Quote:
Originally Posted by wh33t
Feel free to suggest to me any other measures you think I should take.
Can anyone download the code for free and audit it?
Quote:
Originally Posted by wh33t
As for WRT, not sure what that is.
Lower case as in "with respect to".
Quote:
Originally Posted by wh33t
I've considered trying to put a stress test on a server and a connection, but how would I do that without having a bot farm? I considered perhaps writing a script and launching it from a VPS with a 100mbit connection that would open X number of connections every few seconds while I watch my htop output from a shell. Do you think that's a good idea?
Assuming you know your own product inside out, you know what you have to optimize database-, PHP-, and caching-wise, and what the potential bottlenecks of the product are, right? So maybe a bit of an explanation of what you're looking for could help? Personally I like siege, because I can take just any web server access log, awk '{print $7}' the request field and use that. Of course there's tools and more tools...
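The siege-plus-access-log trick looks like this in practice; the host name and log lines below are made-up sample data, and `$7` assumes the common/combined log format:

```shell
#!/bin/sh
# Build a siege URL file from an access log: awk out the request path
# (field 7 in common/combined log format), prefix the host, feed it to siege.
# Sample log lines are inlined here as stand-in data.
cat > /tmp/access.log <<'EOF'
10.0.0.1 - - [01/Jan/2024:00:00:01 +0000] "GET /index.php HTTP/1.1" 200 5120
10.0.0.2 - - [01/Jan/2024:00:00:02 +0000] "GET /thread.php?id=42 HTTP/1.1" 200 8192
EOF

awk '{ print "http://localhost" $7 }' /tmp/access.log > /tmp/urls.txt
cat /tmp/urls.txt
# Then:  siege -c 50 -t 60S -f /tmp/urls.txt   (50 concurrent clients for 60 s)
```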
300 users hitting PHP hard can use a lot of resources. The best thing to do will be to do some testing of the box on your LAN first to see how it scales.
But the question's very broad, so it's hard to give specific answers with details.
Quote:
Originally Posted by wh33t
Any tips for stress testing? Are you familiar with Apache? Do you think it would be wise to switch to nginx right away, or deal with that later if the machine starts getting bogged down?
Oh dear, one google search will give you what you want.
Members ask questions relying on theoretical and practical knowledge of fellow LQ members. So please don't do that: either answer the question if you can or feel free to skip it if you can't.
Quote:
Originally Posted by 24x7servermanagement
I prefer kali linux tools
Look at stress testing tools
And which of those tools have you successfully used before? Which ones would you recommend?
Hmm.
I've successfully used t50, inviteflood, iaxflood, and slowhttptest for stress testing server load, firewalls, WAF setups, and so on.
"300 concurrent users" is not particularly much, especially not for HTTP, which is a "stateless" protocol anyway.
It is relatively unlikely that 300 requests would arrive at the same instant, and, even if they did, Apache would process the requests "as fast as it was able," using the pool of worker-processes that it had allotted to the task. This pool is of variable size and has an upper limit.
If you use "ordinary CGI" (which actually works very well on modern hardware ...), the workers are constantly recycling themselves. If you use FastCGI (which I also very much like), you can arrange for the workers to "commit hari-kiri" after a certain number of requests to avoid problems with memory leaks.
Of most concern to you will be precisely what the various requests are doing ... in particular, what shared resources they use, and how many milliseconds a request takes to complete under unobstructed conditions. Then, what sort of obstructions might slow the requests down. (Anyone can sail through the streets of a city at 2 in the morning ... much faster than they can at rush hour.)
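On the worker-recycling point: with PHP the same effect is commonly had via PHP-FPM's `pm.max_requests`. A sketch of a pool config, with illustrative values only (the path varies by distro):

```ini
; /etc/php/*/fpm/pool.d/www.conf (path varies by distro)
pm = dynamic
pm.max_children = 50      ; hard upper limit on worker processes
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
pm.max_requests = 500     ; recycle each worker after 500 requests -> caps leak growth
```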
Apache HTTP Server used to come with apachebench, or was it called abench? Can't remember now and don't know if it is still a part of the Apache httpd package. Check your distro. It is a blunt instrument but it can be useful for generating some known quantity of traffic. It is not feature-full.
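For what it's worth, the tool is `ab` (ApacheBench); on Debian/Ubuntu it now ships in the apache2-utils package. A typical invocation, printed here rather than run since it needs a live target (the URL is a placeholder):

```shell
#!/bin/sh
# ab (ApacheBench) smoke test: 1000 requests total, 100 concurrent, keep-alive on.
# Point the URL at your own test box before running the printed command.
url="http://localhost/index.php"
n=1000   # total requests
c=100    # concurrency
echo "ab -k -n $n -c $c $url"
```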
I have the server in my possession now. I'm currently in the process of getting it configured. So far it's been challenging. I feel very weak when it comes to general Linux system administration and would love any links that can point me to a "every Linux admin needs to know these essential facts/techniques" resource, because I just feel lost, but I am absolutely determined to get this working.
And for clarification, I live in a very small city. It only has a population of ~5k max. I do think it likely that at most 300 people may be actively logged in on the system at the same time, with persistent MySQL connections. This would qualify as 300 concurrent connections, correct?
I will look into adjusting its CGI mode as well, thank you.
And of course once I do get it up and running I will try to break it. I look forward to it. Thank you all for the links and suggestions.
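One caution on the 300-persistent-connections plan: MySQL's default max_connections is 151, so 300 persistent database connections would be refused without raising it, and the web tier needs sizing too. A back-of-envelope sketch, with per-worker memory figures that are guesses rather than measurements:

```shell
# Rough RAM needed if every user holds a worker: prefork Apache + mod_php
# (~30 MB/worker, one per connection) vs an nginx + PHP-FPM pool capped at
# 50 workers with excess connections queued. Both sizes are assumptions.
awk 'BEGIN {
  printf "prefork: 300 workers x 30 MB = %d MB\n", 300 * 30
  printf "fpm pool: 50 workers x 40 MB = %d MB\n", 50 * 40
}'
```

The first figure is why 300 held-open connections and 300 "concurrent users" are such different sizing problems.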
The first thing to know is when to pay someone else to host your hardware or give you disk space and bandwidth.
Don't mothball your box, but if you're really just starting out, the learning curve to put a box on the internet and keep it secure is steep.
It might be better to pay for hosting, learn from what they do (fail2ban, etc.), and all the while work on your box, keeping the goal of administering it yourself some day.