I am using screen to start a LONG-running script on my server over an ssh connection. This works fine. I can see that the script is continuing to run after I disconnect from the screen session using Ctrl-A d. When the script is complete I can reconnect to the screen session and manually terminate it.
I would like to be able to terminate the screen session at the end of the script. I tried issuing an exit command at the end of the script. That did NOT work. Any ideas?
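One idiom that usually works here: give the script to screen as its own command, so the session has nothing left to run once the script finishes (the session name and script path below are made up):
Code:
# run the script as screen's command in a detached, named session;
# when the script exits, its window closes and the session terminates
screen -dmS wipe /path/to/long_script.sh

# or, from the shell inside an existing screen window, replace the
# shell with the script so the window dies when the script finishes
exec /path/to/long_script.sh
The reason a plain exit at the end of the script doesn't work is that the script runs as a child of the shell inside screen; when it exits, you just land back at that shell's prompt.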
I think that would be rather easy for one script running. My plan is to initiate 4 parallel scripts, so I will have to do some investigation. If I KNEW which script would end last, I could have it iterate and kill all "screen" processes. Or I could run the scripts serially; I am not sure which would be faster. Basically I am using dd if=/dev/urandom of=(partition I want to clear)/bigfile to do a quick and dirty wipe of free space. Well, at least quick compared to using sfill -l -l, which I computed would take 12 DAYS! I guess I need to fill up more of the 5 TB of storage on my server with crap so I have less free space to wipe.
You could conceivably fire off all the processes, check them in a while loop with a 1-second sleep, and kill screen once they're all gone ...
something like
Code:
/path/to/script1 &
/path/to/script2 &
...
# after the last of the four has started, poll once a
# second until none of them is still running
while pgrep "script1|script2|..." >/dev/null 2>&1
do
    sleep 1
done
# then kill the screen server itself
pkill -9 "SCREEN"
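If the sessions were started with names (screen -S wipe1 and so on; the names here are an assumption), a gentler option than pkill -9 is to tell each named session to quit:
Code:
# politely shut down just the one named session
screen -S wipe1 -X quit
That way you don't take down any other screen sessions the user happens to have open.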
Thanks folks! I let two processes run overnight (14 hours) and, based on the progress they made, I think I will have to run them sequentially. That will make things easier.
In answer to your curiosity... The system consists of:
A Dell Poweredge 400 SC server (Pentium 4; 2.33 GHz; 3 GB RAM) with the following hard drives (all SATA):
Western Digital Caviar Black 1 TB
Western Digital Caviar Green 1 TB
Western Digital Caviar Green 1 TB
Western Digital Caviar Green 2 TB
The Green drives are low power consumption and a little slow, but not that slow. I think the limiting factor is the CPU, which dd keeps pegged at 100%.
I have had better results creating a file of, say, 1 GB with dd if=/dev/urandom and then copying that 1 GB file to the drive to be wiped in a loop (cp to an incremented file name each time) until it runs out of space.
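In loop form, something like this (the paths and sizes are illustrative, not from the thread):
Code:
# generate one 1 GiB random file up front
dd if=/dev/urandom of=/tmp/random.dat bs=1M count=1024

# copy it under incremented names until the filesystem is full;
# cp fails once no space is left, which ends the loop
i=0
while cp /tmp/random.dat /mnt/wipe/fill.$i
do
    i=$((i + 1))
done
sync
This pays the urandom cost once and then fills the rest of the disk at plain copy speed.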
You could (to help the CPU) use a decent blocksize with dd ...
Code:
dd if=/dev/urandom of=/... bs=4096
Of course the problem may be the urandom usage.
[edit]
Scrap that - slow as. From urandom's man page:
Quote:
The kernel random-number generator is designed to produce a small
amount of high-quality seed material to seed a cryptographic pseudo-
random number generator (CPRNG). It is designed for security, not
speed, and is poorly suited to generating large amounts of random data.
Users should be very economical in the amount of seed material that
they read from /dev/urandom (and /dev/random); unnecessarily reading
large quantities of data from this device will have a negative impact
on other users of the device.
So really, you probably just want to generate a 4096- or 2048-byte file, and slap that over the devices in the loop ;}
[/edit]
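As a sketch of that (made-up paths):
Code:
# read urandom just once, for a single 4 KiB block
dd if=/dev/urandom of=/tmp/block bs=4096 count=1

# append copies of the block until the filesystem is full;
# cat exits non-zero on the failed write, which ends the loop
while cat /tmp/block >> /mnt/wipe/bigfile
do
    :
done
In practice a somewhat bigger block (say 1 MiB) keeps the per-iteration overhead down while still touching urandom only once.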
Thanks! Good point. urandom is really slow. I had at one time played with copying random-data files of various sizes to fill free space. Larger files seemed to be faster.