Programming
This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
I have a large subscriber e-newsletter. I want to mail this from my own server, using my own software. I have enough experience with PHP for this to be doable (in fact, I've already done it...but now I want to do it efficiently).
Based on my research, the best method I've come across is to use exec() and have each individual email address mailed from its own process.
Thinking about it, I'd imagine each process would consume system resources, and the script could exhaust them all if left to its own devices.
How can I query the system (Fedora 4) so I can build some intelligence into the controlling script? What I'd like is for the script to know it needs to wait until resources are freed up.
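One way to do that kind of check (a sketch, not something from this thread) is PHP's built-in sys_getloadavg(), which returns the 1-, 5-, and 15-minute load averages; the controlling script can pause before forking whenever the one-minute figure is above a chosen ceiling. The 8.0 threshold below is an arbitrary example, not a recommendation:

```php
<?php
// Sketch: block until the one-minute load average drops below $max_load,
// so the controlling script doesn't fork mailers onto an overloaded box.
function wait_for_load(float $max_load, int $poll_seconds = 5): void
{
    while (sys_getloadavg()[0] > $max_load) {
        sleep($poll_seconds); // re-check until the system has freed up
    }
}

wait_for_load(8.0);
// by this heuristic, it is now safe to exec() the next mailer process
```

This only measures CPU load, not memory or the mail queue, so it is a rough heuristic at best.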
The simpler solution would be to just limit how many emails you send in one go, then sleep for a while.
Of course this would require a bit of testing and guesswork on your part.
Alternatively, I think you are looking at calling e.g. the 'top' command (assuming Unix/Linux) via the shell and analysing the results (e.g. every 10 emails).
See http://au3.php.net/manual/en/function.shell-exec.php, specifically the example by rustleb in 'User Contributed Notes' after the function definition.
It may not be a good idea to fork a new process for every mail: the parent may keep forking new processes faster than the system can handle them. I generally let sendmail do all the work and have the PHP process send the mails deferred, i.e. all the mails go to the sendmail queue and sendmail delivers them later. This is how the sendmail command line will look:
Code:
sendmail -O DeliveryMode=d
Thank you for your reply.
I'm using PHP's mail() function to send each individual email, so I'm not invoking sendmail myself. As a result, I'm not sure I can take advantage of your suggestion.
I will look into this further, because I like this approach.
If I can find a way to do this, it seems like it would get around the issue of the script waiting on a slow mail server before continuing with the next person (meaning the script wouldn't take 'forever' to finish).
Even better, I wouldn't have to write management code to stop the script from starting thousands of sub-processes.
Last edited by 60s TV Batman; 03-30-2007 at 02:36 AM.
The simpler solution would be to just limit how many emails you send in one go
Thank you for your reply.
Sure, but I need something that is able to determine how many sub-processes are still running. For example, if I decide that 100 is the maximum number I'll allow, I need the script to know that one just completed so there's a free spot to issue another.
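One way to get that count (a sketch, not something from the thread; 'mailer_worker.php' is a made-up name for whatever the exec() call launches) is to ask pgrep how many matching processes are running and block until the number drops below the cap:

```php
<?php
// Count running processes whose command line matches $pattern.
// pgrep -c -f prints the number of full-command-line matches (0 if none).
function running_children(string $pattern): int
{
    $out = shell_exec('pgrep -c -f ' . escapeshellarg($pattern));
    return (int) trim((string) $out);
}

$max_children = 100;
// Hypothetical worker name; substitute whatever your exec() call starts.
while (running_children('mailer_worker.php') >= $max_children) {
    usleep(200000); // 0.2 s: wait for a sub-process to finish
}
// a slot is free; exec() the next worker here
```

Polling pgrep like this is cruder than having the parent track its children's PIDs, but it fits a script that fires workers off with exec() and forgets them.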
Some mail servers are just plain slow. For example, 2 days ago I ran my first mailing to the entire list announcing the server change.
There are still 66 undelivered mailings sitting in /var/spool/mqueue. There were literally thousands only one hour after initiating the mailing. Guesswork isn't enough.
Thanks for the reference to the PHP manual. I have everything working already, including nohup and nice to ensure the sub-processes don't completely take over.
I've been experimenting with "top -u username," and suspect this may give me the answer I need.
I think passing the string "-O DeliveryMode=d" in the $additional_parameters argument would do the job.
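A minimal sketch of what that call might look like: mail()'s fifth argument is passed through to the sendmail binary, so the deferred-delivery flag can go there. The addresses and headers below are placeholders:

```php
<?php
// Placeholder recipient/headers. The fifth argument is handed to sendmail,
// so "-O DeliveryMode=d" makes it queue the message for later delivery
// instead of contacting the remote server immediately.
$ok = mail(
    'subscriber@example.com',
    'Newsletter',
    'Hello from the list.',
    'From: list@example.com',
    '-O DeliveryMode=d'
);
```

Whether $ok is true only tells you the message was accepted for queuing, not that it was delivered.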
UPDATE: Yes, you're right. I've just tried it on the entire list, and it works perfectly.
I inserted a half-second delay in the while loop to ensure the deferred queue didn't fill up too quickly. Everything went well while I kept an eye on top during this first test, so next time I'm going to try a tenth of a second.
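That loop might look something like this (a sketch; $send stands in for the actual mail() call with the deferred flag, and the recipient list is a placeholder):

```php
<?php
// Pace submissions so sendmail's deferred queue fills gradually.
// $send is whatever hands one message to the local queue, e.g. a small
// wrapper around mail(..., '-O DeliveryMode=d').
function send_all(array $recipients, callable $send, int $delay_us = 500000): int
{
    $sent = 0;
    foreach ($recipients as $to) {
        $send($to);
        $sent++;
        usleep($delay_us); // half a second by default; tune downward carefully
    }
    return $sent;
}
```

Dropping $delay_us to 100000 gives the tenth-of-a-second pacing mentioned above.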
As a result of the delay, the script completed in 2 hours 10 minutes (would have been much faster otherwise).
For my first (direct, without deferred option) attempt, I aborted the mailing script manually after 7 hours. So this is definitely a better way to go. And as a bonus, I haven't had to write and test code to manage subprocesses.
Thank you for the suggestion.