Linux - General
This Linux forum is for general Linux questions and discussion. If it is Linux related and doesn't seem to fit in any other forum, then this is the place.
Here is my problem. I am attempting to create an image of a 20 GB hard drive (via the dd command). However, during the imaging process, dd craps out after 16 GB and gives me a "File size limit exceeded" error. I have tried a number of different things, like reformatting the drive I am writing to and moving partitions around on it, but I get the same error. I have plenty of space available (currently 53 GB of free space that I can write to). I found a post talking about being able to extend this limit using the "ulimit" command, but I was curious whether anyone has run into the same problem and how they went about solving it. Here is a link to the post that I referred to above. Thanks!
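One common workaround, independent of raising any limit, is to pipe dd's output through split so that no single output file ever reaches the filesystem's per-file limit. The sketch below demonstrates the idea on a small scratch file (the file names, sizes, and directory are made up for the demo); on the real system, if= would point at the source device and the pieces would land on the partition with free space:

```shell
# Demonstration on a scratch file; in practice if= would be the
# actual source device (e.g. /dev/hda) and the piece size would be
# something safely under the per-file limit, like 2000m.
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for the 20 GB source disk (1 MB here).
dd if=/dev/zero of=source.disk bs=1024 count=1024 2>/dev/null

# Image the "disk", splitting the stream into 256 KB pieces so no
# single output file can hit a per-file size limit.
dd if=source.disk bs=64k 2>/dev/null | split -b 256k - disk.img.

# Restore: concatenate the pieces back through dd.
cat disk.img.* | dd of=restored.disk bs=64k 2>/dev/null

cmp source.disk restored.disk && echo "image verified"
```

Restoring is just the reverse: cat the pieces in order and pipe them into dd writing to the target device.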
Basically, the metadata for each file (the inode) can only store so many block addresses, so if your file needs more blocks than that, you're sunk. I suggest you reformat that partition with a larger block size so that each file can address more data.
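The arithmetic behind that limit can be sketched. An ext2 inode holds 12 direct block pointers plus one single-, double-, and triple-indirect pointer, and each indirect block holds (blocksize / 4) four-byte pointers. This is a rough calculation of the addressing limit only (other kernel limits can cap it lower), not something from the original thread:

```shell
# Maximum ext2 file size reachable through the inode's block
# pointers, for a given block size: 12 direct blocks, plus the
# single-, double-, and triple-indirect trees, where each indirect
# block holds (blocksize / 4) 4-byte block pointers.
ext2_max_file_size() {
    bs=$1
    ptrs=$(( bs / 4 ))
    blocks=$(( 12 + ptrs + ptrs * ptrs + ptrs * ptrs * ptrs ))
    echo $(( blocks * bs ))
}

# With 1 KB blocks the addressable limit is just over 16 GB,
# which matches dd dying right at the 16 GB mark:
ext2_max_file_size 1024

# With 4 KB blocks the same pointer arithmetic allows far more:
ext2_max_file_size 4096
```

With 1 KB blocks that works out to about 17.2 billion bytes (roughly 16 GB), which lines up with where the original poster's dd run failed; a 4 KB block size pushes the addressing limit into the terabyte range.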
A quick Google search claims that the maximum ext2 file size is 2 GB or 4 GB, depending on the website, but those pages don't mention the block size. If you're already at 16 GB for one file, then those websites' math must not take large block sizes into account. (The 2 GB figure is really the old 32-bit file-offset limit for applications built without large-file support, not a filesystem limit.)
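To check which block size a partition actually uses, tune2fs can dump the superblock; the sketch below formats a small scratch image (so it is safe to run) rather than a real partition, which is where the device name in the comment comes in:

```shell
workdir=$(mktemp -d)

# Create a 4 MB scratch image and format it as ext2 with 4 KB blocks.
# On a real (empty!) partition this would be: mke2fs -b 4096 /dev/hdb1
# -- note that reformatting destroys all existing data on it.
dd if=/dev/zero of="$workdir/scratch.img" bs=1024 count=4096 2>/dev/null
mke2fs -q -F -b 4096 "$workdir/scratch.img"

# Confirm the block size recorded in the ext2 superblock:
tune2fs -l "$workdir/scratch.img" | grep "Block size"
```

The same `tune2fs -l` invocation against the destination partition would show whether it was created with 1 KB blocks, which would explain the 16 GB ceiling.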