Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
I'm quite new to Linux, but I've managed to grasp some basics.
My intention is to create a virtual directory with dedicated storage, so I resorted to creating an image file that I can mount as a loop device.
It's not much of a problem, but I would like to know whether this approach is suitable.
Is this recommended? I'm using a block size of 1G, which is really huge, so I was wondering if that is actually advisable. From what I read, some said it's only advisable to use 4096k or lower, but those suggestions are very dated (from 2003), and it is now 2010, so I would like to know if it makes any big difference.
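For anyone following along, the workflow being described can be sketched like this (the filenames, sizes, and mount point below are illustrative, not from the thread):

```shell
# Create a 100 MiB image file filled with zeros (the thread itself
# discusses a much larger image; sizes here are kept small).
dd if=/dev/zero of=/tmp/disk.img bs=1M count=100

# Put a filesystem on the image, then mount it through a loop device.
# These steps need e2fsprogs and root privileges respectively, so they
# are shown commented out:
# mkfs.ext3 -F /tmp/disk.img
# sudo mkdir -p /mnt/virtual
# sudo mount -o loop /tmp/disk.img /mnt/virtual
```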
I think you misunderstood the question. The OP was talking about the dd command to make the file, nothing about the file system mentioned.
That dd command will fill zeros, 1G at a time, 25 times. I can't see a problem with that, though it might not be the most efficient in terms of speed. You'll only be doing it once anyway.
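The command itself is not quoted in this excerpt, but from that description it was presumably of this shape (the output filename is a guess):

```shell
# Presumed form of the command under discussion: it writes
# bs * count = 1 GiB * 25 = 25 GiB of zeros, one 1 GiB buffer at a time.
# dd if=/dev/zero of=disk.img bs=1G count=25

# A scaled-down run with the same structure (25 MiB instead of 25 GiB):
dd if=/dev/zero of=/tmp/small.img bs=1M count=25
stat -c %s /tmp/small.img    # prints 26214400 (25 * 1048576 bytes)
```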
I read what he said, not what he put in the code section.
His command will work (if it's written correctly), but the assumption that he can use 1GB blocks is still incorrect.
Unless he doesn't know what block size is ...
I'm really sorry, but I guess I don't understand block size well. All I know is that a block is always 512 bytes, and I can't really find any article on the web that explains clearly what a block size is.
I'm still searching at the moment for a clear explanation of what it should mean...
I stumbled upon a 4096k bs somewhere, which is why I came here to ask and get some clarification.
Yes, there is obviously some confusion there over the difference between file system block size and the block size parameter in the dd command. The title was 'Disk Dump' though so it seemed a reasonable assumption that that was the question.
I suspect that a large bs would have to be small enough to fit in RAM to be of any benefit.
My test just took 6 mins to complete that dd.
Zepx, a block is just a group of things. Unless you put it into context, that is all it is.
(It's no trouble)
Also check man dd or info dd for more info, you'll see there that block size just means 'read BYTES bytes at a time'
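To illustrate that point (filenames here are arbitrary): bs only controls how many bytes dd transfers per read/write call, so different bs values with the same total produce byte-identical output:

```shell
# Same total size (8 MiB) via two different I/O block sizes.
dd if=/dev/zero of=/tmp/a.img bs=1M  count=8   2>/dev/null
dd if=/dev/zero of=/tmp/b.img bs=64K count=128 2>/dev/null

# cmp is silent and exits 0 when the files are identical.
cmp /tmp/a.img /tmp/b.img && echo "identical"
```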
Alright, so smoker, does that mean the dd I'm using just sets the size of the image? But for a real ext3 filesystem to work, the block size should be a valid value such as 1024, 2048, or 4096 bytes?
the bs in your dd command is not setting the block size.
I guess we'll just have to agree to disagree, smoker!
The info page for dd starts with:
`dd' copies a file (from standard input to standard output, by default)
with a changeable I/O block size, while optionally performing
conversions on it.
So the 'bs' does change the block size, in dd.
(but obviously has nothing to do with the file system block size, the disk format block size, or any other block size)
Maybe the OP does not realize that he won't be able to do anything creative with the mounted image until it has a file system?
I do realise that I cannot mount the image if it does not have a file system... Now I understand things a little...
So the ext3 filesystem requires a valid block size like 1024, 2048, or 4096 bytes, or a multiple of those? Is that set automatically, or optionally via mkfs?
The dd I'm performing just creates an image that is blank/empty, since I'm using /dev/zero. So it really has nothing to do with the filesystem, right?
If you read the original message, the OP talks about reading somewhere that the block size should be 4k or lower. That is filesystem block size.
I/O block size in dd is a separate thing, which is where everything gets confused. It has nothing to do with filesystem block size which is what the OP thought he was affecting, and what I was responding to.
@Zepx
Yes, that's correct. The maximum block size for ext3 is 4096 bytes by default. dd does not create a filesystem.
You can make bigger blocks, but the kernel has to support it, and larger blocks waste space with smaller files.
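For completeness: the filesystem block size is chosen when the filesystem is created, for example with mkfs.ext3's -b option. A sketch, assuming e2fsprogs is installed (the -F flag lets mke2fs operate on a plain file rather than a block device, so no root is needed):

```shell
# Make a small empty image file.
dd if=/dev/zero of=/tmp/fs.img bs=1M count=64 2>/dev/null

# Format it with an explicit 4096-byte filesystem block size, then
# confirm the block size mkfs actually used. Guarded so the sketch
# degrades gracefully where e2fsprogs is not installed.
if command -v mkfs.ext3 >/dev/null 2>&1; then
    mkfs.ext3 -q -F -b 4096 /tmp/fs.img
    dumpe2fs -h /tmp/fs.img 2>/dev/null | grep 'Block size'
fi
```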