Filesystem in a filesystem?
Hi there.
Although I've been reading this forum for some time now, this is my first question, so I'll take this opportunity to introduce myself. My question is a rather uncommon one, with plenty of room for creative answers.

A friend and I have a shared web hosting account, where we run 5 different sites. The hosting company claims the accounts have unlimited disk space, which might be true, as we have 30 GB stored there now with no problems. But there is one glitch they don't advertise: citing possible performance problems with the file system, there is a limit of 50,000 files per account. 50,000 files is a lot of files, but due to the characteristics of the websites and of the web framework we use, each site takes more than 10,000 files, and I've already had to make tarballs of rarely used (but used nonetheless) files, as we got a warning email for having surpassed the file count limit.

So, what do you guys think I could do? Is there some kind of "filesystem in a filesystem" for Linux, so that a lot of files could actually be stored in a single file? The server runs Slackware with ext3, I think, and I can compile whatever I need, but obviously I have no root access. |
You could create an archive filesystem that, on the server, would be a single file. Within that file there would be a filesystem able to store files. Whether this helps depends on how they count the files on your account, and on whether the files inside the image would be counted as well.
To create a filesystem in a filesystem, you could start with a roughly 10 MB file like this:
Code:
dd if=/dev/zero of=/path/to/filesystem/newfilesys bs=1024 count=10000
You could also try to upgrade to a larger total file limit on the server, or figure out an archive server to use. |
dd a file, make a filesystem on the file, mount the file, yada yada yada, just like above. But that might be difficult without root access, and it's not particularly efficient.
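A minimal sketch of those steps, assuming e2fsprogs is installed (the paths and size are made up; the mount step is the part that normally requires root):

```shell
# 1. Create a ~10 MB container file filled with zeros
dd if=/dev/zero of=/tmp/container.img bs=1024 count=10240

# 2. Put an ext3 filesystem inside the regular file
#    (-F: proceed even though the target is not a block device)
mkfs.ext3 -q -F /tmp/container.img

# 3. Mounting it through a loop device needs root, so on shared
#    hosting this last step is usually the showstopper:
#      mkdir -p /tmp/inner
#      mount -o loop /tmp/container.img /tmp/inner
```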
Does the host give you access to MySQL? You could store the files in the database, and the database itself would only use a few files. I'm not sure if it would trigger the same limit or not. But it's a pre-made, reasonably efficient purveyor of data that's readily available and doesn't really require god-like privileges to use. Or a rocket science degree to understand. |
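A minimal sketch of the files-in-a-database idea, using the sqlite3 command-line shell as a stand-in (the table layout and paths are made up; in MySQL you'd use a LONGBLOB column and insert from application code rather than the sqlite3 shell's readfile()/writefile() helpers):

```shell
# One database file holds many "files" as rows in a table
DB=/tmp/filestore.db
printf 'hello from a stored file\n' > /tmp/example.txt

sqlite3 "$DB" <<'SQL'
CREATE TABLE IF NOT EXISTS file_store (
  path    TEXT PRIMARY KEY,
  content BLOB NOT NULL
);
SQL

# Store the file's bytes as a blob, keyed by its path
sqlite3 "$DB" "INSERT OR REPLACE INTO file_store
               VALUES ('/tmp/example.txt', readfile('/tmp/example.txt'));"

# Read it back out to disk on demand
sqlite3 "$DB" "SELECT writefile('/tmp/roundtrip.txt', content)
               FROM file_store WHERE path = '/tmp/example.txt';" > /dev/null
```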
ISPs have different rules for databases because they are much more efficient (in both time and space) than static filesystems. And one could store each database record in compressed form to save space. Your ISP's limit of 50,000 files seems perfectly reasonable. Only something like an online newspaper with a multi-year archive is likely to exceed that limit (at 10 new articles per day, 50,000 files would be about 14 years of storage). And this is why a database makes more sense: the database might contain the minimal amount of information required to reconstruct the original page, using the most efficient content description, and then each record would be compressed before storage. |
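The compress-before-storage step can be sketched like this (the file names are made up; in practice the application would compress the record before the database INSERT and decompress after the SELECT):

```shell
# Compress a page's content before storing it as a record
printf '<html>...article text...</html>\n' > /tmp/article.html
gzip -9 -c /tmp/article.html > /tmp/article.html.gz

# On retrieval, decompress to reconstruct the original page
gunzip -c /tmp/article.html.gz > /tmp/article.restored.html
```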