LinuxQuestions.org (/questions/)
-   Linux - Server (https://www.linuxquestions.org/questions/linux-server-73/)
-   -   Disk Dump (https://www.linuxquestions.org/questions/linux-server-73/disk-dump-802090/)

Zepx 04-14-2010 10:25 PM

Disk Dump
 
Hello,

I'm quite new to linux, but I've managed to grasp some basics.

My intention here is to create a virtual directory. To do that, I've resorted to creating an image file that I can mount, so that my folder has dedicated storage. I will mount this image as a loop device.

Well, it's not much of a problem, but I would like to know whether this approach is suitable.

Say I want to create a 25GB Image.
Code:

dd if=/dev/zero of=/home/disk-img/25GB.ext3 bs=1G count=25
dd if=/dev/zero of=/home/disk-img/25GB.ext3 bs=1G count=25
Is this recommended? I'm using a block size of 1G, which is really huge, so I was wondering if this is actually recommended. From what I read, some said it's only advisable to use 4096k or lower, but what I found was that these suggestions are very dated (from 2003), and it is now 2010, so I would like to know if it makes any big difference.
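For what it's worth, here is a scaled-down sketch (25MB instead of 25GB, and temporary paths, so it runs in a second; the paths are made up for illustration) showing that different bs/count pairs produce the same file, since bs * count is what determines the final size:

```shell
# Scaled-down demo: 25MB instead of 25GB.
# bs * count determines the size; bs alone only sets the I/O chunk.
dd if=/dev/zero of=/tmp/img-1M.img bs=1M count=25 2>/dev/null
dd if=/dev/zero of=/tmp/img-4K.img bs=4K count=6400 2>/dev/null

# Both files are 26214400 bytes (25 * 1024 * 1024).
stat -c %s /tmp/img-1M.img
stat -c %s /tmp/img-4K.img
```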

Regards,
Zepx

smoker 04-14-2010 11:35 PM

So if you want to store a file that's 25MB, you are happy to waste the other 975MB of disk space on it, are you?

http://linux.about.com/od/lsa_guide/a/gdelsa35t04.htm

In other words, storing 25 tiny 1k files will fill that whole filesystem.

Zepx 04-14-2010 11:37 PM

What do you mean that I would waste 975MB when I store 25MB? The disk image would definitely be utilized, and all 25GB used up.

In fact after mkfs.ext3 on the image, I got about 24.6GB of Free space.

smoker 04-15-2010 12:17 AM

Each file uses at least one block. No matter how small the file is.
It's academic anyway, because ext3 doesn't support blocks bigger than 4k.

bakdong 04-15-2010 12:36 AM

I think you misunderstood the question. The OP was talking about the dd command used to make the file; the file system wasn't mentioned.

That dd command will fill zeros, 1G at a time, 25 times. I can't see a problem with that, though it might not be the most efficient in terms of speed. You'll only be doing it once anyway.
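If write time matters, one alternative (not something anyone in the thread suggested, just a sketch) is to create the file sparse: seek past the end and write nothing, and the filesystem allocates blocks lazily as data is written. The path below is hypothetical.

```shell
# Sparse 25GB image: seek 25G into the file, write zero blocks.
# Apparent size is 25GB, but almost no disk space is used yet.
dd if=/dev/zero of=/tmp/sparse.img bs=1 count=0 seek=25G 2>/dev/null

stat -c %s /tmp/sparse.img   # apparent size: 26843545600 bytes
du -k /tmp/sparse.img        # blocks actually allocated: ~0
```

The trade-off is that blocks get allocated on demand later, so writes into the mounted image can fragment more than with a fully pre-written file.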

smoker 04-15-2010 12:50 AM

I read what he said, not what he put in the code section.
His command will work (if it's written correctly), but the assumption that he can use 1GB blocks is still incorrect.
Unless he doesn't know what block size is ...

Zepx 04-15-2010 12:55 AM

I'm really sorry, but I guess I do not understand block size well. All I know is that a block is always 512 bytes, and I can't really find any article on the web that explains clearly what a block size is.

I'm still searching at the moment for a clear explanation and what it should mean....

I stumbled upon a 4096k bs, which is why I came here to ask and get myself clarified.

Sorry for the trouble.

bakdong 04-15-2010 01:00 AM

Yes, there is obviously some confusion there over the difference between file system block size and the block size parameter in the dd command. The title was 'Disk Dump' though so it seemed a reasonable assumption that that was the question.

I suspect that a large bs would have to be small enough to be accommodated in RAM to be of any benefit.

My test just took 6 mins to complete that dd.

Zepx, a block is just a group of things. Unless you put it into context, that is all it is.

(It's no trouble)

Also check man dd or info dd for more info, you'll see there that block size just means 'read BYTES bytes at a time'
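A quick way to convince yourself of that (a small self-contained sketch, with made-up paths): dd's bs only changes how the copy is chunked, not the bytes that come out.

```shell
# Make a 1MB file of random data, copy it with two very
# different block sizes, and compare the results.
dd if=/dev/urandom of=/tmp/src.bin bs=64K count=16 2>/dev/null
dd if=/tmp/src.bin of=/tmp/copy-a.bin bs=512 2>/dev/null
dd if=/tmp/src.bin of=/tmp/copy-b.bin bs=1M 2>/dev/null

# cmp is silent when the files are identical.
cmp /tmp/copy-a.bin /tmp/copy-b.bin && echo "identical"
```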

smoker 04-15-2010 01:03 AM

the bs in your dd command is not setting the block size.
mkfs.ext3 will use default values (4k)
http://linux.die.net/man/8/mkfs.ext3

Zepx 04-15-2010 01:07 AM

Alright, so smoker, does that mean the dd I'm using just sets the size of the image? But for a real ext3 filesystem to work, the block size has to be a valid value: 1024, 2048, or 4096 bytes?

bakdong 04-15-2010 01:12 AM

Quote:

Originally Posted by smoker (Post 3936022)
the bs in your dd command is not setting the block size.


I guess we'll just have to agree to disagree, smoker! :)

The info page for dd starts with:

`dd' copies a file (from standard input to standard output, by default)
with a changeable I/O block size, while optionally performing
conversions on it.

So the 'bs' does change the block size, in dd.

(but obviously has nothing to do with the file system block size, the disk format block size, or any other block size)

Maybe the OP does not realize that he won't be able to do anything creative with the mounted image until it has a file system?

Zepx 04-15-2010 01:17 AM

@bakdong,

I do realise that I cannot mount the image if it does not have a file system... Now I understand things a little...

So the ext3 filesystem requires a valid block size like 1024, 2048, or 4096? And this is set automatically, or optionally, via mkfs?

The dd I'm performing just creates a blank/empty image, since I'm using /dev/zero. So it really has nothing to do with the filesystem, right?

Please correct me if I'm wrong.

smoker 04-15-2010 01:21 AM

@bakdong

If you read the original message, the OP talks about reading somewhere that the block size should be 4k or lower. That is filesystem block size.

I/O block size in dd is a separate thing, which is where everything gets confused. It has nothing to do with filesystem block size which is what the OP thought he was affecting, and what I was responding to.

@Zepx

Yes, that's correct. The maximum (and default) block size for ext3 is 4096 bytes. dd does not create a filesystem.

You can make bigger blocks but the kernel has to support it, and it does waste space with smaller files.
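Putting the thread together, the whole workflow would look something like this (a sketch, not a tested recipe: the -b 4096 is optional since mkfs.ext3 picks a block size automatically, /mnt/disk-img is a made-up mount point, and the mount step needs root):

```shell
# 1. Create the 25GB image; bs here is just dd's I/O chunk size.
dd if=/dev/zero of=/home/disk-img/25GB.ext3 bs=4M count=6400

# 2. Put an ext3 filesystem on it. -b sets the *filesystem*
#    block size (1024, 2048, or 4096); -F allows a plain file.
mkfs.ext3 -F -b 4096 /home/disk-img/25GB.ext3

# 3. Mount it as a loop device (requires root).
mkdir -p /mnt/disk-img
mount -o loop /home/disk-img/25GB.ext3 /mnt/disk-img
```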

Zepx 04-15-2010 01:22 AM

Thank you smoker and bakdong. I'm all cleared up.
