Old 08-15-2016, 08:44 AM   #1
hack3rcon
Senior Member
 
Registered: Jan 2015
Posts: 1,432

Rep: Reputation: 11
How can I determine the "bs" parameter for the "dd" command?


Hello.
How can I determine the "bs" parameter for the "dd" command?

Thanks.
 
Old 08-15-2016, 08:55 AM   #2
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
You don't have to.

the "bs" option specifies a size for block I/O - basically, just a buffer. If you are copying disks you can use a "bs=1M" and things go fairly fast. If you have lots of memory, use a "bs=10M".

Getting it "wrong" only causes a short record message at the end (either for reading or writing).

The only time the bs option is really of any use is if you have to do odd things like convert data. Sometimes you have to specify the "ibs" (input block/buffer size) as different from the "obs" (output block/buffer size). But these are rarely used these days (I used to use them to convert EBCDIC to ASCII now and then; the block sizes determined the size of the records to be processed, and that adds a "cbs" option for the conversion).
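
As a rough illustration of that older usage (the file names and the 80-byte record length are invented for the example; with GNU dd, conv=ascii does the EBCDIC-to-ASCII translation and uses cbs as the fixed input record size):
Code:
# convert fixed 80-byte EBCDIC records into ASCII text lines
dd if=input.ebcdic of=output.txt cbs=80 conv=ascii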

By default, specifying the "bs" option causes dd to use the same value for "ibs" and "obs".

If you ignore the block size, dd will use 512 bytes, which tends to make copies rather slow.

Last edited by jpollard; 08-15-2016 at 08:57 AM.
 
1 member found this post helpful.
Old 08-15-2016, 09:28 AM   #3
tronayne
Senior Member
 
Registered: Oct 2003
Location: Northeastern Michigan, where Carhartt is a Designer Label
Distribution: Slackware 32- & 64-bit Stable
Posts: 3,541

Rep: Reputation: 1065
The default block size is 512 (bytes). That will be pretty slow.

You can use the system BUFSIZ, 8192 bytes, which is the default size the C library uses for buffering stdio data. That will be pretty fast... well, faster than 512.

You can check what your BUFSIZ is -- it most likely is 8192 -- by
Code:
grep  BUFSIZ /usr/include/*.h
which, on a typical 64-bit system, will show you something like
Code:
grep BUFSIZ /usr/include/*.h
/usr/include/_G_config.h:#define _G_BUFSIZ 8192
/usr/include/expect.h:#ifndef BUFSIZ
/usr/include/ldap.h:#define LDAP_OPT_X_SASL_MAXBUFSIZE		0x6109
/usr/include/libdevmapper.h:#define DM_FORMAT_DEV_BUFSIZE	13	/* Minimum bufsize to handle worst case. */
/usr/include/libio.h:#define _IO_BUFSIZ _G_BUFSIZ
/usr/include/pi-error.h:	PI_ERR_DLP_BUFSIZE		= -300,	/**< provided buffer is not big enough to store data */
/usr/include/pi-palmpix.h:   /* This callback should read record #RECNO into BUFFER and BUFSIZE, and
/usr/include/stdio.h:#ifndef BUFSIZ
/usr/include/stdio.h:# define BUFSIZ _IO_BUFSIZ
/usr/include/stdio.h:   Else make it use buffer BUF, of size BUFSIZ.  */
(the /usr/include/_G_config.h line that defines _G_BUFSIZ as 8192 is the one that matters).

You can experiment and double or triple the BUFSIZ value and use that in dd, but you may hit diminishing returns - it's best to try it and see what happens. You can set it to some megabytes, but you can overdo it.
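
If you want to see the effect for yourself, a quick, rough comparison (the input file name here is just a placeholder) is to time the same read at two buffer sizes:
Code:
# time reading the same file at two buffer sizes and compare;
# the second run may be served from the page cache, so repeat the runs for a fair test
time dd if=testfile of=/dev/null bs=8192
time dd if=testfile of=/dev/null bs=1M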

Hope this helps some.

Last edited by tronayne; 08-15-2016 at 10:03 AM. Reason: Fumble finger 512 into 5120. Arrgghh!
 
1 member found this post helpful.
Old 08-15-2016, 09:40 AM   #4
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: Rocky Linux
Posts: 4,779

Rep: Reputation: 2212
Quote:
Originally Posted by tronayne
The default block size is 5120 (bytes). That will be pretty slow.
It is 512 bytes. The system call overhead with that is a killer. A long time ago, when drives and machines were a lot slower, I found that there was little improvement in increasing the blocksize above 64K. These days, with Gigabytes of memory available, I typically use 256K, 512K, or 1M, as suits my mood at the time (i.e., for no particular reason).

Unless you are accessing a tape drive or other device where the blocksize directly affects the I/O operation, it really doesn't matter except for the performance penalty at small block sizes.
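
For what it's worth, tape is the classic case where it does matter: on a tape drive in variable-block mode, each write becomes one physical block on the medium, so the same bs has to be used to read the data back. An illustrative invocation, assuming the usual /dev/st0 SCSI tape device and a made-up archive name:
Code:
# write a tar archive to tape in 64 KiB blocks; read it back later with the same bs
dd if=backup.tar of=/dev/st0 bs=64k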
 
Old 08-15-2016, 10:06 AM   #5
tronayne
Senior Member
 
Registered: Oct 2003
Location: Northeastern Michigan, where Carhartt is a Designer Label
Distribution: Slackware 32- & 64-bit Stable
Posts: 3,541

Rep: Reputation: 1065
Quote:
Originally Posted by rknichols
It is 512 bytes.
Yes, indeedy-do, 'tis 512, not 5120.

I hate being old and half blind and fumble-fingered.

Thanks for pointing that out.
 
Old 08-15-2016, 02:58 PM   #6
jefro
Moderator
 
Registered: Mar 2008
Posts: 21,980

Rep: Reputation: 3624
There are a few reasons one might want to know or care about the block size.
One is a type of media where you want an exact, block-for-block copy as it is used on the device; some devices won't copy properly if you don't do a block-for-block copy.
Another is to speed up a dd operation.
Another is to use dd to extract a small portion of the media as opposed to the entire drive (see the sketch below).
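
A rough sketch of that last case (the device name, output file, and offsets are invented for illustration) - skip= jumps over that many input blocks and count= limits how many are copied, both counted in units of bs:
Code:
# copy 1 MiB starting 1 MiB into the device, leaving the rest alone
dd if=/dev/sdX of=chunk.img bs=512 skip=2048 count=2048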
 
Old 08-16-2016, 04:02 AM   #7
hack3rcon
Senior Member
 
Registered: Jan 2015
Posts: 1,432

Original Poster
Rep: Reputation: 11
OK, so it is kind of optional.
For an external HDD with 1 TB capacity, which "bs" size is good?
 
Old 08-16-2016, 05:57 AM   #8
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
It depends on how much memory you have and on the I/O controllers you have.

1 MB works fairly well; 8 MB can work better, because reading larger blocks means less controller contention. But for your specific platform, experimentation is the best way to find out.
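
For a 1 TB external drive, something like the following is a reasonable starting point (the device name and image path are placeholders; time a short run and adjust bs from there):
Code:
# image an external drive with an 8 MiB buffer, showing progress as it goes
dd if=/dev/sdX of=/path/to/backup.img bs=8M status=progress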
 
Old 08-16-2016, 06:28 AM   #9
hazel
LQ Guru
 
Registered: Mar 2016
Location: Harrow, UK
Distribution: LFS, AntiX, Slackware
Posts: 7,572
Blog Entries: 19

Rep: Reputation: 4451
According to the dd info page:

Quote:
Any block size you specify via ‘bs=’, ‘ibs=’, ‘obs=’, ‘cbs=’ should
not be too large—values larger than a few megabytes are generally
wasteful or (as in the gigabyte..exabyte case) downright
counterproductive or error-inducing.
 
1 member found this post helpful.
Old 08-16-2016, 07:45 AM   #10
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
Again, "too large" is determined by the local hardware. If I'm copying a 32GB SD card (for a Raspberry Pi) I just might use 1GB, though using 512MB works just fine.. It isn't "too large", after all, I have 8. But more than that? nope - that WOULD be "too large".

It also helps to have only one device on that USB controller...
 
Old 08-16-2016, 09:02 AM   #11
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: Rocky Linux
Posts: 4,779

Rep: Reputation: 2212
Quote:
Originally Posted by jpollard
Again, "too large" is determined by the local hardware. If I'm copying a 32GB SD card (for a Raspberry Pi) I just might use 1GB,
That's getting into "counterproductive" territory. Why? Because writing to the destination cannot begin until that first 1GB block has been completely read from the source. You are losing some of the benefit of overlapped input and output operations, though that's not going to matter if both devices are on the same controller (no overlap possible) or if there is a vast difference in speed between the two devices (the slower device will dominate). For the case you mentioned, the difference isn't going to be huge (at most, 1/32 of the total time), but about the only benefit of going that large is that "Because I can" good feeling.
 
Old 08-16-2016, 09:03 AM   #12
hack3rcon
Senior Member
 
Registered: Jan 2015
Posts: 1,432

Original Poster
Rep: Reputation: 11
Thank you very much. I understand.
 
Old 08-16-2016, 10:44 AM   #13
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
Quote:
Originally Posted by rknichols
That's getting into "counterproductive" territory. Why? Because writing to the destination cannot begin until that first 1GB block has been completely read from the source. You are losing some of the benefit of overlapped input and output operations, though that's not going to matter if both devices are on the same controller (no overlap possible) or if there is a vast difference in speed between the two devices (the slower device will dominate). For the case you mentioned, the difference isn't going to be huge (at most, 1/32 of the total time), but about the only benefit of going that large is that "Because I can" good feeling.
Actually, no. The first GB does have to be read... but after that the reads and writes are overlapped - and with less I/O interaction. Since the SATA controller is faster than the USB, you now get to keep the USB device going.

But again, it depends on the controllers. If you think it is going slowly with a big block size, use a smaller one.

The elapsed time remains about the same - but the system overhead gets bigger with the smaller block sizes.

For most controllers, 512 MB is sufficient - and the overhead doesn't matter when you are the only user.
 
Old 08-16-2016, 12:49 PM   #14
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: Rocky Linux
Posts: 4,779

Rep: Reputation: 2212
Quote:
Originally Posted by jpollard
Actually, no. The first GB does have to be read... but after than the reads/writes are overlapped
I think that's pretty much what I said.

Going from SATA to a much slower USB device qualifies as a vast difference in speed. Delaying the start of output by the 5 to 10 seconds it takes to read that first 1GB block is insignificant. And BTW, unless you use "iflag=direct" and "oflag=direct" the kernel is going to be buffering all your data anyway.
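
For what it's worth, bypassing that kernel buffering so the block size given to dd is closer to what actually hits the devices looks something like this (the device names are placeholders, and with direct I/O the bs has to be a multiple of the device's sector size):
Code:
# read and write with O_DIRECT so the page cache stays out of the way
dd if=/dev/sdX of=/dev/sdY bs=1M iflag=direct oflag=direct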

Last edited by rknichols; 08-16-2016 at 12:52 PM.
 
Old 08-16-2016, 02:50 PM   #15
jefro
Moderator
 
Registered: Mar 2008
Posts: 21,980

Rep: Reputation: 3624
There are a few web pages out there with quick ways to test bs sizes that will give you a fairly good number to use. You could also wing it with some quick tests of your own: run the command with a small bs, wait 60 seconds, stop it, and see how much it moved.
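
A rough version of that kind of test (the device name is a placeholder, and each bs/count pair is chosen so every run copies the same 256 MiB; iflag=direct keeps the page cache from serving repeat reads and skewing the numbers):
Code:
# copy the same 256 MiB at several block sizes and compare the speed dd reports
for args in "bs=64K count=4096" "bs=1M count=256" "bs=4M count=64"; do
    echo "== $args"
    dd if=/dev/sdX of=/dev/null $args iflag=direct
done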
 
  

