Linux - Newbie
This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the how-tos, this is the place!
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
Hi, I have a server running Red Hat Enterprise Linux, and on this box we need to print more than 4000 .ps files. The command we have been using is lpr *.ps (and we also tried lp *.ps).
The problem is we get the error "lpr: error - too many files -filename" repeated over and over.
If we break the job into batches of 500 or fewer they print, but we want to do them all in one go.
Now for some explaining... xargs reads a list of arguments from standard input, splits them into groups and then executes the specified command (in this case lpr) for each group.
find simply prints a list of files/directories on standard output (which we pipe into the standard input of xargs).
The arguments to find are:
. means start finding files in the current directory.
-name '*.ps' tells find to only print files whose names match the pattern '*.ps'. The reason this doesn't trigger the long-argument-list problem the original command hits is that the pattern is quoted and sent to find as a literal string. In the original command, the shell tries to expand the pattern before passing the result to lpr.
-maxdepth 1 means don't descend into sub-directories - just find things in the current directory. (GNU find expects -maxdepth to appear before tests like -name and will print a warning otherwise, though it still works.)
-print0 instead of printing one result per line of output, separate the results with the ASCII NUL character. This can prevent problems when there are weird characters in the file names, and is generally a good idea, even if it isn't strictly necessary in most cases.
The -0 argument to xargs tells it to expect NUL delimited input (normally it expects arguments to be separated by whitespace or new lines).
I have to confess that this "argument list too long" thing is an irritation, and the workaround I propose here is rather convoluted. The limit comes from the kernel's ARG_MAX cap on the combined size of the arguments and environment passed to exec(), but I wish it were more generous.
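Putting those pieces together, here is a sketch of the whole pipeline that is safe to try anywhere - `echo` stands in for `lpr`, and the scratch directory and sample file names are just for the demonstration; on the real server you would drop the setup lines and swap `echo lpr` back to plain `lpr`:

```shell
# Throwaway directory with a few sample files, including one with a
# space in its name, to show the NUL-delimited handling.
demo=$(mktemp -d)
cd "$demo"
touch a.ps b.ps 'with space.ps' notes.txt

# Same pipeline as described above, with `echo` standing in for `lpr`:
# find emits only the *.ps names in this directory, NUL-separated;
# xargs -0 splits them safely, even the name containing a space.
find . -maxdepth 1 -name '*.ps' -print0 | xargs -0 echo lpr
```

Note that notes.txt never appears in the output, and the file name with a space survives intact instead of being split into two bogus arguments.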
Thanks for the reply. I ran the command find . -name '*.ps' -maxdepth 1 -print0 |xargs -0 lpr
but still got the same error lpr: error - too many files - filename
Have tried changing the MaxJobs setting in cupsd.conf but it didn't help either.
Ah, so cups has some limit of number of jobs? Didn't know that.
If you cannot raise the limit enough to cope with all of them, you might need to submit them in smaller groups, waiting for one group to finish before starting the next group.
One way to do this would be to create a list of files, split that into chunks, and then run lpr for each group of files, one group at a time.
e.g.
Code:
find . -maxdepth 1 -name '*.ps' > full_list
split --lines=100 full_list toprint_
This will create a bunch of files with the prefix "toprint_", each containing the names of up to 100 files to print.
This command will send one of the files to the print system, and prompt you to press return when that group is done:
Code:
for f in toprint_*; do xargs lpr < "$f" ; echo "press RETURN to continue" ; read r ; done
How about a 30 second sleep to make it less tedious ;}
It would probably be better not to assume how long it is going to take - what if the paper runs out? Then it will be a massive pain in the backside to work out which ones have printed and which have not.
If it must be totally automatic, then it would be better to replace the manual interaction with something like:
Code:
for f in toprint_*; do
    xargs lpr < "$f"
    echo "waiting for print queue to clear"
    waiting=yes
    while [ "$waiting" = "yes" ]; do
        sleep 10
        lpq | grep -q "your_printer_id is ready" && waiting=no
    done
done
...where your_printer_id is set according to what your printer is called. Just do lpq in a terminal to find out the ID.
The reason I didn't suggest it right away is that I'm not sure when this "your_printer_id is ready" message is appearing... if it does this as soon as there is even 1 space in the print queue, then you should set the number of files per group to half the maximum to prevent overflows...
It's all rather fiddly. For a one-time task, a little manual labour is probably less hassle.
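If you would rather not depend on the exact "is ready" wording, an alternative is to poll until the queue actually drains. This is only a sketch: it assumes a CUPS-style lpq that prints "no entries" when the queue is empty, and your_printer_id is a placeholder for your real queue name:

```shell
# Returns success when the named queue has nothing pending.
# ASSUMPTION: lpq prints "no entries" for an empty queue, as CUPS lpq does.
queue_is_empty() {
    lpq -P "$1" | grep -q 'no entries'
}

# Usage sketch - uncomment once the queue name is filled in:
# until queue_is_empty your_printer_id; do sleep 10; done
```

The same caveat applies: check what your lpq actually prints before relying on the pattern.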
Look at the man page for xargs. There are options such as -L/-l (maximum input lines per invocation) and -n (maximum arguments per invocation) to limit how much is handled at a time. There is also -P to control how many processes xargs launches in parallel.
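For example, -n caps how many file names each lpr invocation receives. A sketch, again with `echo` standing in for `lpr` and dummy files created in a scratch directory so it can be tried anywhere:

```shell
# Create 250 dummy .ps names in a scratch directory.
demo=$(mktemp -d)
cd "$demo"
for i in $(seq 1 250); do touch "f$i.ps"; done

# -n 100 makes xargs run the command once per 100 arguments, so
# 250 files produce three invocations (100 + 100 + 50); each echo
# prints one line, hence wc -l counts the invocations.
find . -maxdepth 1 -name '*.ps' -print0 | xargs -0 -n 100 echo lpr | wc -l
# prints 3
```

On the real server you would replace `echo lpr` with `lpr` - though note this only shortens each command line; it does not help if the queue itself has a job limit like the CUPS MaxJobs one discussed above.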
Most printers/print queues just block when the printer runs out of paper, so no jobs are lost - not even the current one.
I've certainly had times (at work) when my job didn't print, checked printer, put new paper in, had to wait for the other 20 jobs piled up in front of mine...