These two programs are in fact pretty useful when you need to analyse raw data. Don't get me wrong: if the corresponding man pages didn't make sense to you, these programs are most likely not for you, as they are quite self-explanatory. Anyway, here is the gist:
A byte is an 8-bit binary number. That means it can take 256 different values. Two-digit hexadecimal numbers can take 256 values too. And because the common programmer strongly prefers to work with values like "0F:11:A3" instead of "00001111 00010001 10100011" (which is, by the way, exactly the same information, just once in hex and once in binary), we tend to convert raw data into "readable" hexadecimal notation. That is what these two tools are made for.
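You can check that claim directly on the shell, by the way. This sketch assumes bash, whose arithmetic expansion accepts an explicit base prefix (the 2# syntax); the values are just the ones from the example above:

```shell
# Convert each binary group from above to its two-digit hex form.
# $((2#...)) is bash arithmetic with an explicit base of 2.
printf '%02X\n' "$((2#00001111))"   # 0F
printf '%02X\n' "$((2#00010001))"   # 11
printf '%02X\n' "$((2#10100011))"   # A3
```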
xxd inputfile outputfile
This will read the file "inputfile" byte by byte, convert the binary data to its hexadecimal representation and then print it as ASCII text into the file "outputfile". Thus you can use xxd to turn a raw data file into "readable hex"...
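For example (the file names here are made up, and xxd is assumed to be installed; it usually ships with vim):

```shell
# Write three raw bytes (0x48 0x69 0x21), then hex-dump them to a file.
printf 'Hi!' > raw.bin
xxd raw.bin dump.txt
cat dump.txt    # prints something like:
# 00000000: 4869 21                                  Hi!
```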
xxd -r inputfile outputfile
This will do the reverse. It reads the file "inputfile", which is supposed to contain hex data in plain ASCII, and writes a file that contains the corresponding raw data.
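A quick round trip shows that nothing gets lost along the way; again, the file names are just placeholders:

```shell
# Dump a few raw bytes to readable hex, reverse it, and compare.
printf 'some raw data' > orig.bin
xxd orig.bin hex.txt            # raw -> readable hex
xxd -r hex.txt restored.bin     # readable hex -> raw again
cmp orig.bin restored.bin && echo "round trip OK"
```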
xxd inputfile
This will print the hex representation of the data in "inputfile" to the console. Quite similar to
hexdump inputfile
which does almost the same...
Note that some "help" is generated when transforming from raw data to hex. Both xxd and hexdump print a first column with a hex value indicating the byte offset of the following row. The second column is the hex data. Both xxd and hexdump group two bytes each, separating the groups by spaces. That is for readability only. The third column is only generated by xxd and shows which 7-bit ASCII characters would match that raw data. This is very useful if you try to analyse a program or protocol and you are looking for string constants.
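To see all three columns at once, here is a sketch with a made-up 16-byte protocol line, so exactly one dump row gets filled (the column layout is xxd's usual one):

```shell
# 16 bytes of a made-up protocol line: offset, hex groups, ascii column.
printf 'GET / HTTP/1.0\r\n' > req.bin
xxd req.bin
# prints something like:
# 00000000: 4745 5420 2f20 4854 5450 2f31 2e30 0d0a  GET / HTTP/1.0..
# the trailing \r\n bytes are not printable, so the ascii column shows dots
```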
Whew, hope that made some sense to you.