09-05-2003, 05:41 PM | #1
LQ Newbie | Registered: Feb 2003 | Posts: 14
Weird question... Fastest way to create large arbitrary files?
Often I find myself running tests (such as transfer-speed tests) where I need large files as test data. It doesn't matter what's in them; I just use them for the data transfer or whatever.
Normally I do a:
head -c SIZE /dev/urandom > filename
where SIZE is the size of the file I need in bytes.
The problem is that for files over a few megs this method is slow, because you have to wait for the machine to fill the file with random data.
So my question: is there a way to create files of arbitrary sizes instantly? Like some way to manipulate the file descriptor so that a file is allocated a large amount of space?
Someone told me there is a way, but I have no idea how.
Thanks 
09-05-2003, 05:50 PM | #2
Member | Registered: Apr 2002 | Location: Los Gatos, CA | Distribution: boring redhat 9 | Posts: 163
[root@jebus root]# time head -c 100000000 /dev/zero >filename
real 0m2.174s
user 0m0.090s
sys 0m0.830s
[root@jebus root]#
P3 1 GHz machine with 256 MB RAM.
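For comparison, dd can write the same sort of file from /dev/zero with a large block size, which is the more conventional idiom. A minimal sketch (the size and filename are just placeholders; bs=1024k count=100 gives 100 MiB, slightly more than the 100000000 bytes above):
dd if=/dev/zero of=filename bs=1024k count=100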
09-05-2003, 06:10 PM | #3
LQ Newbie (Original Poster) | Registered: Feb 2003 | Posts: 14
Awesome! That's fast enough.
What is /dev/zero?
09-05-2003, 06:12 PM | #4
Member | Registered: Apr 2002 | Location: Los Gatos, CA | Distribution: boring redhat 9 | Posts: 163
It's one of those "black hole" special files: read from it and you get an endless stream of zero (null) bytes. If you use it to make a ten meg file and then try to cat that file, you won't see a thing, because null bytes don't print.
Think of it as a filesize BS machine. 
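To see what /dev/zero actually hands out when read, a quick illustrative check (the exact od output formatting may vary slightly by platform):
head -c 8 /dev/zero | od -c
0000000  \0  \0  \0  \0  \0  \0  \0  \0
0000010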
09-05-2003, 07:01 PM | #5
LQ Newbie (Original Poster) | Registered: Feb 2003 | Posts: 14
Takes about 30 seconds for a gig file on my dual-processor machine.
I wonder if there is a way to just manipulate a file so this process can be instant?
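The instant trick the poster seems to be after is a sparse file: seek past the end of a brand-new file without writing any data, and the filesystem records the requested size without allocating the data blocks. A minimal sketch with dd (the 1 GiB size and filename are placeholders, and on filesystems without sparse-file support the blocks are still consumed):
dd if=/dev/zero of=bigfile bs=1024k count=0 seek=1024
ls -l bigfile    # reports a 1 GiB file
du -h bigfile    # shows almost no blocks actually allocated
On newer systems, truncate -s 1G bigfile does the same thing in one step, and fallocate -l 1G bigfile reserves real blocks instead of leaving a hole. One caveat for transfer tests: reads from the hole return zeros without touching the disk, so whether a sparse file is a fair test file depends on what the test is measuring.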
09-06-2003, 01:32 AM | #6
LQ Veteran | Registered: Mar 2003 | Location: Boise, ID | Distribution: Mint | Posts: 6,642
If you want the process to be instant, why don't you just create the huge file(s) once and not delete them? Sure, you'll waste a chunk of space, but since it sounds like this is something you do frequently, why not save yourself the trouble of having to create these bogus files each time? -- J.W.
09-06-2003, 02:27 AM | #7
LQ Newbie (Original Poster) | Registered: Feb 2003 | Posts: 14
Because that's not an interesting solution.
Also, a sysadmin told me it was possible, just not obvious (and he couldn't remember how either).
09-06-2003, 02:38 AM | #8
LQ Veteran | Registered: Mar 2003 | Location: Boise, ID | Distribution: Mint | Posts: 6,642
A fair enough assessment. At the same time though, which is better: A non-interesting solution that actually works exactly the way you want it to, or spending a lot of time trying to track down rumors about a potentially faster solution, which few people, if any, seem to be able to recall? -- J.W.
09-06-2003, 03:05 AM | #9
LQ Newbie (Original Poster) | Registered: Feb 2003 | Posts: 14
They have actually done it for some project. I'll get him to look it up and I'll post the solution here.
For now, doing a head on /dev/zero is pretty darn good.
09-06-2003, 03:37 AM | #10
LQ Guru | Registered: Mar 2002 | Location: Salt Lake City, UT - USA | Distribution: Gentoo ; LFS ; Kubuntu ; CentOS ; Raspbian | Posts: 12,613
FWIW, /dev/zero isn't itself the writing of zeros to the HD; it's a character device that returns zero bytes whenever it's read, and it's the redirection into a file that actually writes those zeros to disk.
Cool