[SOLVED] Recover files from hfsplus partition, Mac file recovery
Hi:
A coworker 'lost' his Mac laptop drive and had no backup of his media files. He gave me the drive, and I would like to recover the media he wants.
I connected the drive and gparted describes it as having 3 partitions:
a fat32 EFI, and two hfs+ partitions called Macintosh HD & Recovery HD.
I can mount the hfs+ partition 'Macintosh HD'
with
mount -t hfsplus /dev/sdb2 /mac
but I get a 'permission denied' message when trying to open or copy the folders that look like media.
I do not want to write to the drive by changing the file permissions.
Is there a way for me to copy folders from this drive without changing the permissions of the folders?
Personally I would never play around mounting the original but create a disk or partition copy first. That way you have a backup should the hardware fail, and you can work on the copy without altering it or the original hardware. (And if you do mount it, use "-o ro".) As with NTFS, the native OS tools understand their own file system best. In the absence of that opportunity, and depending on the distribution you use, you may have an "hfsutils" package for further analysis, or see Gentoo's diskdev_cmds, or try TestDisk and PhotoRec, as they have HFS+ support.
Thanks. I have saved and restored partitions with partimage & fsarchiver but never copied a partition. What do you suggest? Thanks for the insight into how to get started without disturbing the original any more than I may have by mounting it.
Regards
If you don't need to anticipate hardware trouble then 'dd' (see 'man dd' for more) should do, or else use whatever tool you're comfortable with that produces raw disk images (basically the proprietary ones). Otherwise see dd_rescue and ddrescue (they're not the same), dcfldd, linen, etc, etc...

Check 'dmesg', 'sfdisk -l', 'blkid' or 'cat /proc/partitions' output to confirm which partition you should image. Then, to copy the source partition (say "/dev/sdc2") to a file on the mounted "/mnt/bkup" filesystem, the simplest invocation is to run 'dd if=/dev/sdc2 of=/mnt/bkup/sdc2.dd'. If you lack disk space and a trade-off wrt time is not a problem, you could compress it on the fly: 'dd if=/dev/sdc2 | bzip2 > /mnt/bkup/sdc2.dd.bz2'. (As you can see, 'dd' writes to stdout, so network transfers with nc or SSH are easy too.)
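A minimal sketch of the imaging-plus-verification step described above. The device and backup names (/dev/sdc2, /mnt/bkup) are the hypothetical ones from the post; substitute your own after checking 'blkid'. It is demonstrated here on a scratch file so the commands are safe to run verbatim:

```shell
# Scratch file stands in for the source partition (/dev/sdc2 in the post).
src=$(mktemp)
dd if=/dev/urandom of="$src" bs=1024 count=64 2>/dev/null

# Raw image, equivalent to of=/mnt/bkup/sdc2.dd; conv=noerror keeps dd
# going past read errors on a flaky disk.
img="$src.img"
dd if="$src" of="$img" conv=noerror 2>/dev/null

# Verify the copy is byte-identical before touching the original again.
cmp -s "$src" "$img" && echo "image verified"

# Compressed variant from the post, same idea:
#   dd if=/dev/sdc2 | bzip2 > /mnt/bkup/sdc2.dd.bz2
```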
Thanks.
Haven't used dd before and tried:
sudo dd if=/dev/sdb2 of=/dev/sda1 conv=notrunc,noerror
Then I made a directory called /recov and tried:
sudo mount -t hfsplus /dev/sda1 /recov
Got error:
mount: wrong fs type, bad option, bad superblock on /dev/sda1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
Have to do more reading on dd I guess
Thanks in advance for any help
I suggested you dump the output to a file. The hfsutils tools may require a mounted file system, but tools like TestDisk, PhotoRec, Foremost, Scalpel, pyFLAG, TASK, The Sleuth Kit, etc, etc don't require a mounted file system to be able to analyze it (some will take care of any prepping themselves). But hey, if you're not inclined to take cues from what's posted then all I can do is shrug and say "only useless things and those for use by skilled operators come without a manual"...
Sorry for seeming like I had not taken your advice about writing to a file. Before you posted at 1:00 PM I had started dd to make a copy of the partition and then went to work. I didn't get to see your comments until after I returned.
If I dump the output to a file with something like:
dd if=/dev/sdc2 of=/mnt/bkup/sdc2.dd
how will I read the file that is created? dd is new to me; I think I have avoided it because of the references to 'disk destroyer'.
If I dump to a file do I need to specify bs= to improve performance?
Regards,
Quote:
Originally Posted by SBFree
Sorry for seeming like I had not taken your advice about writing to a file. Before you posted at 1:00 PM I had started dd to make a copy of the partition and then went to work. I didn't get to see your comments until after I returned.
I see. No need to say sorry then.
Quote:
Originally Posted by SBFree
If I dump the output to a file with something like: dd if=/dev/sdc2 of=/mnt/bkup/sdc2.dd how will I read the file that is created?
Kind of depends. The tools I mentioned before work fine with raw disk image files: just supply the complete path and file name. For tools that don't, like the hfsutils tools, you just loop-mount the partition image file, or let 'kpartx' handle it in the case of a full disk image.
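A short sketch of that loop-mount step. The image name sdc2.dd and mount point /mnt/img are placeholders carried over from the posts above; the demo below uses a throwaway zero-filled file so the runnable part is safe, with the privileged mount/kpartx commands shown as comments:

```shell
# Throwaway stand-in image so this is safe to run as-is.
img=$(mktemp --suffix=.dd)
dd if=/dev/zero of="$img" bs=1024 count=64 2>/dev/null

# 'file' identifies what a raw image holds before you try to mount it.
file "$img"

# With a real HFS+ partition image you would then mount it read-only:
#   sudo mount -t hfsplus -o ro,loop sdc2.dd /mnt/img
# For a whole-disk image, kpartx maps each partition to a device first:
#   sudo kpartx -av disk.dd          # creates /dev/mapper/loopNpM entries
#   sudo mount -o ro /dev/mapper/loop0p2 /mnt/img
```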
Quote:
Originally Posted by SBFree
dd is new for me. I think I have not used it because of the references to 'disk destroyer'
LOL, like Partimage or FSArchiver aren't? ;-p
Quote:
Originally Posted by SBFree
If I dump to a file do I need to specify bs= to improve performance?
The simplest explanation I've read is that you should picture [i,o]bs= as "read(blocksize), write(blocksize), read(blocksize), ...". In short, you don't "need" to unless there's a reason. For example, with a failing disk you may want the smallest possible block size to increase your chances (do read about the difference between ddrescue and dd_rescue), but in common cases (unless the machine has little RAM or the destination optimizes writes differently) you wouldn't be interested in the effects of, say, the system bus, disk controllers, drive caches, partial reads, kernel caching or sysctls, and would just use any block size that is n times the disk block size.
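A quick way to convince yourself that bs= changes speed, not content: image the same source with two block sizes and compare the results. File names here are throwaway; on a real disk you would time dd against /dev/sdX instead:

```shell
# Scratch source file standing in for a partition.
src=$(mktemp)
dd if=/dev/urandom of="$src" bs=1024 count=256 2>/dev/null

# Same data, two very different block sizes.
dd if="$src" of="$src.small" bs=512 2>/dev/null   # many small reads/writes
dd if="$src" of="$src.big"   bs=1M  2>/dev/null   # few large ones

# The images are byte-identical; only the transfer pattern differed.
cmp -s "$src.small" "$src.big" && echo "identical images"
```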
There are lots of opinions (we even have a whole thread devoted to using 'dd' somewhere) but IMHO the only way to know for sure is to test it each time you think it may influence performance. But sometimes you just don't have that luxury.
Just wanted to say thanks for the insights and leads. I recovered 19,000+ photos and movies the teacher had made and not backed up. Now it's up to them to sort through them.
I got over my fear of dd thanks to you. I guess from my registration, I've been using Linux for 8 years and still feel I have a ton to learn.
Scott
Quote:
Originally Posted by SBFree
Just wanted to say thanks for the insights and leads. I recovered 19,000+ photos and movies the teacher had made and not backed up. Now it's up to them to sort through them.
You're welcome. Note that images may store metadata (EXIF), so exiftool may at least help sort them by shot date or display any comments. Thanks for the feedback. Please mark the thread solved. And yes, I learn stuff every day too.
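A hedged sketch of that sorting idea, assuming the exiftool package is installed and that the rescued photos sit in a hypothetical recovered/ directory:

```shell
# 'recovered/' (input) and 'sorted/' (output) are hypothetical names.
mkdir -p recovered sorted

if command -v exiftool >/dev/null 2>&1; then
    # '-Directory<DateTimeOriginal' moves each file into a directory built
    # from its EXIF shot date using the -d format string (Year/Month here).
    exiftool -r '-Directory<DateTimeOriginal' -d sorted/%Y/%m recovered/
else
    echo "install exiftool (e.g. libimage-exiftool-perl) first"
fi
```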