LinuxQuestions.org
LinuxQuestions.org > Forums > Linux Forums > Linux - Newbie
Linux - Newbie: This Linux forum is for members who are new to Linux.
Just starting out and have a question? If it is not in the man pages or the how-tos, this is the place!

Old 04-21-2014, 04:51 PM   #1
ChiggyDada
LQ Newbie
 
Registered: Apr 2012
Distribution: Mint, CentOS, Ubuntu based
Posts: 24

Rep: Reputation: Disabled
Need 210% space to "merge" data partition-to-partition?


Hi.
I'm running into a problem on my Linux home file server:
Every time I try to copy a chunk of data from one of the SATA partitions to another partition, I run out of space.
All the partitions are ext4.
All register far more than adequate space for the copy (40-50% more than needed).

I can't figure out why this happens. I've got four 1.5-2.0 TB drives in there, plus a 3 TB drive. GParted shows plenty of space, and various other tools agree the drives have plenty of space. Yet I'll log in, set up a copy from one partition to another, come back the next day, and BOOM: there's "not enough space" to do the copy. I check the folders again and again.
I have no, NO, idea where to begin to find out why this keeps happening. It doesn't make any sense, unless the notice is inaccurate and the error is really something other than a lack of space.
Because it's all on the same server, which is on 24/7 with no standby, as far as I'm aware it's not timing out or running into similar issues. Do I need 200% or 210% free space to copy or merge a folder, after which the doubled-up data gets deleted? Or does this have something to do with a swap file?
The error shows up drive-to-drive (sdb to sdc) and partition-to-partition (sdb1 to sdb4, etc.).

Any idea where to begin? This has been going on for weeks and weeks now. I can't organize things or sort out and get rid of duplicates, and I'm getting lost in a nightmare of incomplete copies and out-of-space errors.

The only thing I can think of is buying two or three more 3 TB drives, copying everything on the system over partition by partition, pulling the 3 TB drives, and then wiping and reformatting the whole mess of smaller drives.

I'm completely lost and stumped.

OS: Linux Mint 11, 32-bit.
Partitions: all ext4, none bigger than 750 GB.

Thanks.
 
Old 04-21-2014, 05:05 PM   #2
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,508

Rep: Reputation: 2102
You've provided no details on how you're doing the copy, which makes this more or less impossible to diagnose.

Are you using the GUI? dd? rsync? cp? With what flags? How big are the folders you're copying, and what's the available space at the destination according to df?

Last edited by suicidaleggroll; 04-21-2014 at 05:06 PM.
 
Old 04-21-2014, 06:11 PM   #3
ChiggyDada
LQ Newbie
 
Registered: Apr 2012
Distribution: Mint, CentOS, Ubuntu based
Posts: 24

Original Poster
Rep: Reputation: Disabled
I'm using a GUI file manager (Nautilus); I've not had time to learn rsync or other tools yet.
At most I run sudo nautilus after checking Disk Usage Analyzer and GParted for space.

I just tried copying 100 GB of data into 127 GB of "free space" on another partition, and it ran out of space again at around 88 GB.

What I'm trying to do is organize the data by partition, duplicates included, and then sort it out later with meld as I learn to use it. There's a lot of redundancy, but each copy often contains huge chunks of data that aren't replicated anywhere else on the server. All of it came from various laptops I used over the years; I copied the drives off before formatting or selling them. I made the classic error of figuring I'd sort out the redundancies later. There's probably 500 GB of unique data in total, taking up 5-6 TB.
 
Old 04-21-2014, 06:17 PM   #4
michaelk
Moderator
 
Registered: Aug 2002
Posts: 15,938

Rep: Reputation: 1825
Post the output of the command df -h.
 
1 member found this post helpful.
Old 04-21-2014, 06:31 PM   #5
ChiggyDada
LQ Newbie
 
Registered: Apr 2012
Distribution: Mint, CentOS, Ubuntu based
Posts: 24

Original Poster
Rep: Reputation: Disabled
Code:
$ df -h

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda7              12G  7.1G  4.3G  63% /
none                  492M  712K  491M   1% /dev
none                  501M  204K  501M   1% /dev/shm
none                  501M  6.6M  495M   2% /var/run
none                  501M     0  501M   0% /var/lock
/dev/sdc6             296G  168G  113G  60% /mnt/1.1
/dev/sdc7             522G  325G  171G  66% /mnt/1.2
/dev/sdc8             394G  220G  154G  59% /mnt/1.4
/dev/sdd5             493G  436G   32G  94% /mnt/2.1
/dev/sdd6             640G  541G   67G  90% /mnt/2.2
/dev/sdd7             244G  229G  2.1G 100% /mnt/2.3
/dev/sde5             1.8T  1.7T   91G  95% /mnt/3.1
/dev/sdf5             730G  693G  120K 100% /mnt/4.1
/dev/sdf6             658G  541G   85G  87% /mnt/4.2
/dev/sdf7             446G  366G   59G  87% /mnt/4.3
(P.S. I don't know how to post this the way I've seen it formatted on Linux forums, and I don't even know what it's called so I can ask how to post the output. I tried googling what I could and got results about SSH, Debian, logging in, and whatnot. Anyway, that's the "df -h" command run, copied to a txt file, and pasted here.)
Thank you!

Last edited by ChiggyDada; 04-22-2014 at 12:41 AM. Reason: added code tags from help down further in the thread. Thanks!
 
Old 04-21-2014, 06:39 PM   #6
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,508

Rep: Reputation: 2102
Quote:
Originally Posted by ChiggyDada View Post
I'm using a GUI file manager (Nautilus).
Sorry, I have zero experience with any of the Linux GUI file managers. My guess is there are symbolic links or hard links that are being expanded during the copy, causing the data to take up more space on the destination than on the source. Either that, or you're not reading the used space on the source or the available space on the destination correctly. Remember, you can't just subtract used space from drive size to get available space.
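A sketch of the hard-link effect, using throwaway paths under /tmp: du counts hard-linked data once, while a copier that follows each name separately writes the data out twice at the destination.

```shell
# Create a 10 MB file and a second hard-linked name for it.
mkdir /tmp/linkdemo
dd if=/dev/zero of=/tmp/linkdemo/a bs=1M count=10 2>/dev/null
ln /tmp/linkdemo/a /tmp/linkdemo/b   # second name, same data blocks

du -sh /tmp/linkdemo                 # ~10M: both names share one set of blocks

# A copy that doesn't preserve the link stores the data twice:
mkdir /tmp/linkcopy
cp /tmp/linkdemo/a /tmp/linkdemo/b /tmp/linkcopy/
du -sh /tmp/linkcopy                 # ~20M at the destination
```

So a folder that du reports as 100 GB can legitimately need far more space once links are expanded.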
 
1 member found this post helpful.
Old 04-21-2014, 06:42 PM   #7
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,508

Rep: Reputation: 2102
You need to give us an example here.

Post the output of "df -h" and "du -sh /path/to/source/folder".
Then do the copy.
Then post the output of "df -h" and "du -sh /path/to/destination".
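As a sketch, that diagnostic sequence looks like this; the paths are placeholders to substitute.

```shell
# Placeholder paths; substitute the actual source folder and destination.
SRC=/path/to/source/folder
DST=/path/to/destination

du -sh "$SRC"        # size of the data about to be copied
df -h "$DST"         # free space on the destination beforehand

cp -a "$SRC" "$DST"  # the copy, preserving permissions, timestamps, links

du -sh "$DST"        # how much actually landed
df -h "$DST"         # free space afterwards
```

Comparing the before/after numbers shows whether the data really grew in transit or the free-space reading was wrong to begin with.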
 
1 member found this post helpful.
Old 04-21-2014, 06:51 PM   #8
ChiggyDada
LQ Newbie
 
Registered: Apr 2012
Distribution: Mint, CentOS, Ubuntu based
Posts: 24

Original Poster
Rep: Reputation: Disabled
Dang it.
Well, GParted shows a significant amount of space on most of the drives, far more than enough. There are only a few symbolic links, and I don't think any are on this machine (only on my current laptop), so I think I can rule that out, at least for now.
Conky shows plenty of space.
GParted shows plenty of space.

Do you know of any tutorials for meld or rsync, so I can begin to remove duplicates, somehow start syncing things correctly, and figure out HOW much space I really have? At least so this doesn't happen in the future? Even if I go out and buy another two giant drives, this is a problem partly of my own making: I copy folders into a partition and then something happens (Nautilus crashes and the job never finishes), or the copy runs out of space as above, or my child comes in and hits the keyboard while I'm trying to learn how to solve this. A year later the problems have compounded, because I never quite have enough time to learn a tool without being interrupted or something crashing. I'm hardware and computer savvy with Windows, but I've only begun to switch over to Linux in the last three years.

It's getting to the point where this will be really challenging and time-consuming to fix, but I've GOT the hardware to do it, and finally an OS that can deal with this kind of thing. The data is precious (family pictures and recordings mostly), but it's copied over 10x with no complete set, and there are thousands of subfolders, far more than a human could sit and sort manually. Linux is the tool.

I know I CAN solve it, and that Linux has the tools, but I don't know how to ask the right question to get started, so that I can google answers, follow a tutorial, or somehow get knowledgeable enough to remove the duplicates, copy the data to one partition (which supposedly does have enough space), and then use a good sync tool to keep this from happening again. That's the frustrating new-user issue: what is the question I really need to ask?
*chuckle*

Thank you for your help.

Last edited by ChiggyDada; 04-21-2014 at 06:55 PM.
 
Old 04-21-2014, 06:55 PM   #9
michaelk
Moderator
 
Registered: Aug 2002
Posts: 15,938

Rep: Reputation: 1825
In addition, assuming ext3/4 partitions, by default 5% of each filesystem is reserved for root. Unless you adjust this amount, you can only fill the filesystem to 95%. That is why sdf5 shows 100% while it appears to still have free space.

Ignoring that, sdc7 and sdc8 are the only filesystems with more than 100 GB free space. So where were you trying to copy the 100 GB, sdc6?
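To inspect or adjust that reserve there is tune2fs from e2fsprogs. The device name below is just an example taken from the df output; substitute your own, and note that -m changes a live filesystem setting, so use it with care and only on data-only filesystems.

```shell
# Show the reserved block count on an ext filesystem (example device):
sudo tune2fs -l /dev/sdf5 | grep -i 'reserved block count'

# On a data-only filesystem, shrink the root reserve from 5% to 1%:
sudo tune2fs -m 1 /dev/sdf5
```

On a 730 GB partition like sdf5, dropping from 5% to 1% frees roughly 29 GB of usable space.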
 
1 member found this post helpful.
Old 04-21-2014, 08:08 PM   #10
ChiggyDada
LQ Newbie
 
Registered: Apr 2012
Distribution: Mint, CentOS, Ubuntu based
Posts: 24

Original Poster
Rep: Reputation: Disabled
(Sorry about that; it looks like the forum moved my post to after your response while I was editing it.)

Ok, I'm about to run those commands. I just ran

$ cp -avr /path/to/source /path/to/destination

(adding the verbose output switch was a mistake with 10,000+ files). I had to sudo to get it to work:

$ sudo cp -avr /path/to/source /path/to/destination

As soon as it's done I'll run the du commands and post.
Whatever I'm doing, there are possibly permission errors all throughout the data, from swapping out distros a few years ago and never sorting things out correctly (and from data created on Windows machines and copied over the network).


So now I know I have these questions and need to learn how to...
1) 100% effectively change ALL my data ownership to "me" (at least it's all mounted in /mnt/x.x).
2) find out how much space I actually DO have available per partition, for use for copying
3) use cp correctly after owning my own data
4) use meld to remove redundant copies
5) use rsync to avoid this mess in the future.
+
6) How to output the terminal more effectively for forums (is this how it's done? http://askubuntu.com/questions/15237...le-with-others)
 
Old 04-21-2014, 08:21 PM   #11
michaelk
Moderator
 
Registered: Aug 2002
Posts: 15,938

Rep: Reputation: 1825
Another important point when you have lots of files is inodes. It is possible to have free space but not enough inodes, in which case the copy will fail. Post the output of the command:

df -ih
 
1 member found this post helpful.
Old 04-21-2014, 08:38 PM   #12
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,508

Rep: Reputation: 2102
Quote:
Originally Posted by ChiggyDada View Post
So now I know I have these questions and need to learn how to...
1) 100% effectively change ALL my data ownership to "me" (at least it's all mounted in /mnt/x.x).
2) find out how much space I actually DO have available per partition, for use for copying
3) use cp correctly after owning my own data
4) use meld to remove redundant copies
5) use rsync to avoid this mess in the future.
+
6) How to output the terminal more effectively for forums (is this how it's done? http://askubuntu.com/questions/15237...le-with-others)
1) chown -R user /path/to/files
2) df -h
3) "cp -a" should be all you need
4) no experience with meld
5) switch from "cp -a" to "rsync -a", everything else is the same
6) put your output in [code][\code] tags (change from \code to /code)
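Steps 1, 3, and 5 as a sketch; the username and paths are placeholders, and rsync's -n (--dry-run) lets you preview a transfer's size against df before committing to it.

```shell
# 1) Take ownership of the data (run once per mount point; "youruser"
#    is a placeholder for your actual username):
sudo chown -R youruser /mnt/1.2

# 3) Copy, preserving permissions, timestamps, and links:
cp -a /mnt/1.2/photos /mnt/3.1/

# 5) The rsync equivalent; rerunning it only transfers what changed:
rsync -an --stats /mnt/1.2/photos /mnt/3.1/   # dry run: report what would copy
rsync -a /mnt/1.2/photos /mnt/3.1/            # the real transfer
```

Unlike cp, an interrupted rsync can simply be rerun and it picks up where it left off, which avoids the half-finished copies described earlier in the thread.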
 
1 member found this post helpful.
Old 04-22-2014, 12:37 AM   #13
ChiggyDada
LQ Newbie
 
Registered: Apr 2012
Distribution: Mint, CentOS, Ubuntu based
Posts: 24

Original Poster
Rep: Reputation: Disabled

Quote:
Originally Posted by michaelk View Post
...Ignoring that, sdc7 and sdc8 are the only filesystems with more than 100 GB free space. So where were you trying to copy the 100 GB, sdc6?
Thanks (especially for the note on root reserving space!)

I freed up a bunch on sdc7 (1.2) by copying a large part of it to sde5 (3.1) using the command line (cp -a ...). Without getting into the trivialities of my activity, I just had an error on that operation from the command line that actually makes sense (vs. the GUI tools): I needed 44+ GB but only have 22 GB available on sde5 (3.1), which looks accurate.
However, when I started out I supposedly had 237 GB available for 120 GB of files, if I'm not mistaken... Nope! It says 91G above, from earlier!
So why is df -h accurate while all the GUI tools are so far off on free space? GParted listed 237 GB free. I wrote it down before I began, just to be sure (whoopie, that's a sure accurate way of doing it, eh?). lol.
Code:
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda7              12G  7.1G  4.3G  63% /
none                  492M  712K  491M   1% /dev
none                  501M  384K  501M   1% /dev/shm
none                  501M  6.6M  495M   2% /var/run
none                  501M     0  501M   0% /var/lock
/dev/sdc6             296G  168G  113G  60% /mnt/1.1
/dev/sdc7             522G  302G  194G  61% /mnt/1.2
/dev/sdc8             394G  221G  154G  59% /mnt/1.4
/dev/sdd5             493G  436G   32G  94% /mnt/2.1
/dev/sdd6             640G  541G   67G  90% /mnt/2.2
/dev/sdd7             244G  229G  2.1G 100% /mnt/2.3
/dev/sde5             1.8T  1.7T   23G  99% /mnt/3.1
/dev/sdf5             730G  693G  120K 100% /mnt/4.1
/dev/sdf6             658G  541G   85G  87% /mnt/4.2
/dev/sdf7             446G  366G   59G  87% /mnt/4.3


$ df -ih
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sda7               778K    218K    561K   28% /
none                    123K    1.1K    122K    1% /dev
none                    126K       7    126K    1% /dev/shm
none                    126K      66    126K    1% /var/run
none                    126K       3    126K    1% /var/lock
/dev/sdc6                19M     33K     19M    1% /mnt/1.1
/dev/sdc7                34M    703K     33M    3% /mnt/1.2
/dev/sdc8                25M    1.5K     25M    1% /mnt/1.4
/dev/sdd5                32M    456K     31M    2% /mnt/2.1
/dev/sdd6                41M    116K     41M    1% /mnt/2.2
/dev/sdd7                16M    527K     15M    4% /mnt/2.3
/dev/sde5               117M    2.1M    115M    2% /mnt/3.1
/dev/sdf5                47M    1.2M     46M    3% /mnt/4.1
/dev/sdf6                42M    525K     42M    2% /mnt/4.2
/dev/sdf7                29M    469K     28M    2% /mnt/4.3
...So, is this that "root" reserve issue, or possibly inodes, etc.?
Or just a PEBKAC error (like most)?

Looks like I may have to start removing duplicates, or plug in another drive to make space to sort with, at this point. I don't think I want to remove the root reserve for now, and there are too many nested folders for redundancy checking until I learn how to diff them against less-nested folders; something I hear meld can sort out (or I have some tedious work ahead)...
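For that duplicate hunt, one possible starting point (a sketch assuming GNU md5sum and uniq; the path is an example) is to checksum every file and print only those whose contents repeat:

```shell
# List groups of files with identical content under a directory.
# md5sum output begins with a 32-character hash, so uniq can compare
# on that prefix alone (-w32) and print every repeated line (-D).
find /mnt/1.2 -type f -exec md5sum {} + | sort | uniq -w32 -D
```

This only identifies candidates; which copy of each group to keep is still a manual decision (or one for meld).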

Thank you for the help so far. If some of these problems are that easy to solve... good Lord, I really made a mess by not using rsync (and chowning) from the start. I'll let the chown crunch through all of /mnt/ tonight and see where I am with the commands in the morning.
Thank you for the help!!!!!



p.s. Thanks Suicidal' for the extra direct answers for those questions I figured out a bit more how to ask.

Last edited by ChiggyDada; 04-22-2014 at 12:39 AM. Reason: adding a thank you for all
 
Old 04-22-2014, 07:24 AM   #14
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,702

Rep: Reputation: 1270
You might double-check the units being used.

df -h uses 1024 bytes for a K, and 1024*1024 (1,048,576) for an M, by default.

And if you didn't tune the filesystems for data only (or just "not the system"), 5% will be reserved for root.
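The two conventions are easy to compare directly, since df supports both (the mount point here is just an example):

```shell
df -h /    # powers of 1024: K/M/G/T are KiB/MiB/GiB/TiB
df -H /    # powers of 1000, matching drive-label marketing sizes
```

The gap compounds with size: a drive sold as "3 TB" shows up as roughly 2.7T under the 1024 convention before any filesystem overhead, which can account for a chunk of the "missing" space.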
 
1 member found this post helpful.
Old 04-22-2014, 10:20 AM   #15
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,508

Rep: Reputation: 2102
Quote:
Originally Posted by ChiggyDada View Post
However, when I started out I had 237 gb available supposedly, for 120gb of files, if I'm not mistaken.... Nope! It says 91G above, from earlier-!!!
so why is df -h accurate but all the GUI tools are so off for free space? gparted listed that as 237 free GB
Make sure you're not confusing available space on the partition with unpartitioned space.
I've never used gparted, so I'm not sure about its accuracy with regard to unused space, but I just checked on one of my systems and it seemed fine.

gparted:
Size: 72.75 TiB
Used: 31.91 TiB
Unused: 40.85 TiB

df -h:
Size: 73 TiB
Used: 32 TiB
Avail: 41 TiB

Apart from some rounding, everything seems kosher.
 
1 member found this post helpful.
  

