LinuxQuestions.org
Linux - Server This forum is for the discussion of Linux Software used in a server related context.

Old 03-30-2020, 08:41 AM   #1
lochesistemas
LQ Newbie
 
Registered: Mar 2020
Posts: 8

Rep: Reputation: Disabled
Ubuntu 19 running out of space when there is plenty of it


I installed Ubuntu 19.10 with LVM on the first drive, a 120 GB SSD (
Code:
/dev/mapper/ubuntu--vg-ubuntu--lv
). The server also has a 1 TB software RAID 1 array:

Code:
md0 : active raid1 sdb1[0] sdc1[1]
976629440 blocks super 1.2 [2/2] [UU]
bitmap: 0/8 pages [0KB], 65536KB chunk
I also created a symbolic link (/media/nextcloud/raid1). When I run
Code:
df -h
, I get the following result:

Code:
Filesystem                         Size  Used Avail Use% Mounted on
udev                               1.9G     0  1.9G   0% /dev
tmpfs                              390M  1.4M  389M   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv  109G   57G   48G  55% /
tmpfs                              2.0G     0  2.0G   0% /dev/shm
tmpfs                              5.0M  4.0K  5.0M   1% /run/lock
tmpfs                              2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/loop0                          92M   92M     0 100% /snap/core/8592
/dev/loop1                          92M   92M     0 100% /snap/core/8689
/dev/sda2                          976M  196M  714M  22% /boot
/dev/loop2                          68M   68M     0 100% /snap/lxd/13901
/dev/loop3                          68M   68M     0 100% /snap/lxd/13942
/dev/sda1                          511M  7.8M  504M   2% /boot/efi
tmpfs                              390M  5.7M  385M   2% /run/user/128
/dev/loop4                         218M  218M     0 100% /snap/nextcloud/19299
tmpfs                              390M  4.0K  390M   1% /run/user/1000
First of all, I can't see the array in the list. Also, the LV's free space shrinks when I copy files to the array, so the LV will eventually run out of space. Why is that?
 
Old 03-30-2020, 08:49 AM   #2
ehartman
Senior Member
 
Registered: Jul 2007
Location: Delft, The Netherlands
Distribution: Slackware
Posts: 1,674

Rep: Reputation: 888
Quote:
Originally Posted by lochesistemas View Post
First of all, I can't see the array in the list.
The array seems to be mapped into LVs (or just a single one), so it appears in the mount list under /dev/mapper.
If you had used the array directly, it would appear under a /dev/md (multi-device) path; that is the case for software RAID. Hardware RAID is seen as a single device, because the OS doesn't know it IS a RAID one.
 
Old 03-30-2020, 08:50 AM   #3
berndbausch
LQ Addict
 
Registered: Nov 2013
Location: Tokyo
Distribution: Mostly Ubuntu and Centos
Posts: 6,316

Rep: Reputation: 2002
Quote:
Originally Posted by lochesistemas View Post

First of all, I can't see the array in the list.
You don't see the array because you didn't mount it.

Quote:
And also, the lv size is being reduced when I copy files to the array.
You can't copy files to the array, since you didn't mount it. I suspect you are copying them from one location in the LV to another, thereby occupying more space.
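A quick way to verify where writes to a path actually land (a sketch; /mnt/raid1 is the path from this thread):

```shell
# If the directory is a real mountpoint, writes go to the mounted
# filesystem; otherwise they consume space on the parent filesystem (/).
dir=/mnt/raid1
if mountpoint -q "$dir"; then
    echo "$dir is a mountpoint"
else
    echo "$dir is a plain directory on the parent filesystem"
fi

# Either way, df shows which filesystem actually backs the path:
df -h "$dir"
```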
 
Old 03-30-2020, 09:02 AM   #4
lochesistemas
LQ Newbie
 
Registered: Mar 2020
Posts: 8

Original Poster
Rep: Reputation: Disabled
Here's more info to help you understand the situation:

Code:
# ls /dev/mapper/
control  ubuntu--vg-ubuntu--lv
Code:
# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Mar 20 15:09:56 2020
        Raid Level : raid1
        Array Size : 976629440 (931.39 GiB 1000.07 GB)
     Used Dev Size : 976629440 (931.39 GiB 1000.07 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Fri Mar 20 18:47:40 2020
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : nextcloud:0  (local to host nextcloud)
              UUID : 7287e0e7:190f1d3e:f2ea21fa:de88d788
            Events : 10510

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
The RAID 1 array is mounted under
Code:
/mnt/raid1
and I created a symbolic link to it at
Code:
/media/nextcloud/raid1
 
Old 03-30-2020, 09:53 AM   #5
michaelk
Moderator
 
Registered: Aug 2002
Posts: 25,702

Rep: Reputation: 5895
The posted output of the df command does not show /dev/md0 as mounted.

Post the output of the command
lsblk
 
Old 03-30-2020, 09:55 AM   #6
berndbausch
LQ Addict
 
Registered: Nov 2013
Location: Tokyo
Distribution: Mostly Ubuntu and Centos
Posts: 6,316

Rep: Reputation: 2002
Quote:
Originally Posted by lochesistemas View Post
The raid1 is mounted under
Code:
/mnt/raid1
No it's not. Type mount and be convinced.
 
1 members found this post helpful.
Old 03-30-2020, 10:02 AM   #7
lochesistemas
LQ Newbie
 
Registered: Mar 2020
Posts: 8

Original Poster
Rep: Reputation: Disabled
Code:
/dev/md0 on /mnt/raid1 type ext4 (rw,relatime)
 
Old 03-30-2020, 10:10 AM   #8
lochesistemas
LQ Newbie
 
Registered: Mar 2020
Posts: 8

Original Poster
Rep: Reputation: Disabled
I just rechecked for the millionth time and found out that it was not in fstab. I re-added it, rebooted the server, and now it shows up under df -h:

Code:
# df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               1.9G     0  1.9G   0% /dev
tmpfs                              390M   11M  379M   3% /run
/dev/mapper/ubuntu--vg-ubuntu--lv  109G  109G     0 100% /
tmpfs                              2.0G     0  2.0G   0% /dev/shm
tmpfs                              5.0M  4.0K  5.0M   1% /run/lock
tmpfs                              2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/loop0                          68M   68M     0 100% /snap/lxd/14034
/dev/sda2                          976M  196M  714M  22% /boot
/dev/loop1                         218M  218M     0 100% /snap/nextcloud/19299
/dev/loop2                          92M   92M     0 100% /snap/core/8592
/dev/loop3                          92M   92M     0 100% /snap/core/8689
/dev/loop4                          68M   68M     0 100% /snap/lxd/14066
/dev/md0                           916G   77M  870G   1% /mnt/raid1
/dev/sda1                          511M  7.8M  504M   2% /boot/efi
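For anyone finding this later: a persistent mount like the one re-added above is a single line in /etc/fstab, and referencing the array by UUID (from blkid) is more robust than /dev/md0, since md device numbers can change between boots. The UUID below is an illustrative placeholder, not the real one from this system:

```shell
# Get the filesystem UUID of the array:
blkid /dev/md0
# Then the /etc/fstab line looks like (placeholder UUID; ext4 as shown
# by mount earlier in the thread):
# UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  /mnt/raid1  ext4  defaults  0  2
```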
 
Old 03-30-2020, 10:14 AM   #9
lochesistemas
LQ Newbie
 
Registered: Mar 2020
Posts: 8

Original Poster
Rep: Reputation: Disabled
Now, what is using the 120 GB SSD drive?

Code:
# du -h --max-depth=1 | sort -hr
du: cannot access './proc/2150/task/2150/fd/4': No such file or directory
du: cannot access './proc/2150/task/2150/fdinfo/4': No such file or directory
du: cannot access './proc/2150/fd/3': No such file or directory
du: cannot access './proc/2150/fdinfo/3': No such file or directory
8.9G    .
4.4G    ./usr
2.7G    ./var
1.7G    ./snap
201M    ./boot
16M     ./run
14M     ./etc
100K    ./root
64K     ./home
28K     ./mnt
24K     ./tmp
16K     ./lost+found
8.0K    ./media
4.0K    ./srv
4.0K    ./opt
4.0K    ./cdrom
0       ./sys
0       ./proc
0       ./dev
 
Old 03-30-2020, 10:55 AM   #10
michaelk
Moderator
 
Registered: Aug 2002
Posts: 25,702

Rep: Reputation: 5895
Any files you wrote to /mnt/raid1 while it was not mounted are using space on the / filesystem.

To find which directories are using the most space, run the command below. Note that it will not show files sitting under /mnt/raid1 on the / filesystem while the RAID is mounted on top of them.

du -hs * | sort -rh | head -5
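Files hidden underneath an active mountpoint can also be inspected without unmounting, via a bind mount of / (a sketch; requires root, and /tmp/rootfs is an arbitrary name):

```shell
# A bind mount shows / as it exists on disk, ignoring whatever is
# mounted on top of its subdirectories.
mkdir -p /tmp/rootfs
mount --bind / /tmp/rootfs

# Space consumed on / by files "under" the RAID mountpoint:
du -sh /tmp/rootfs/mnt/raid1

umount /tmp/rootfs
rmdir /tmp/rootfs
```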
 
1 members found this post helpful.
Old 03-30-2020, 11:56 AM   #11
lochesistemas
LQ Newbie
 
Registered: Mar 2020
Posts: 8

Original Poster
Rep: Reputation: Disabled
Code:
/mnt/raid1# ls -la
total 24
drwxr-xr-x 3 root root  4096 Mar 20 15:11 .
drwxr-xr-x 4 root root  4096 Mar 20 15:11 ..
drwx------ 2 root root 16384 Mar 20 15:11 lost+found
Code:
/# du -hs * | sort -rh | head -5
du: cannot access 'proc/3800/task/3800/fd/4': No such file or directory
du: cannot access 'proc/3800/task/3800/fdinfo/4': No such file or directory
du: cannot access 'proc/3800/fd/3': No such file or directory
du: cannot access 'proc/3800/fdinfo/3': No such file or directory
4.4G    usr
2.7G    var
1.7G    snap
201M    boot
41M     run
Code:
/# df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               1.9G     0  1.9G   0% /dev
tmpfs                              390M   41M  350M  11% /run
/dev/mapper/ubuntu--vg-ubuntu--lv  109G  109G     0 100% /
tmpfs                              2.0G     0  2.0G   0% /dev/shm
tmpfs                              5.0M  4.0K  5.0M   1% /run/lock
tmpfs                              2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/loop0                          68M   68M     0 100% /snap/lxd/14034
/dev/sda2                          976M  196M  714M  22% /boot
/dev/loop1                         218M  218M     0 100% /snap/nextcloud/19299
/dev/loop2                          92M   92M     0 100% /snap/core/8592
/dev/loop3                          92M   92M     0 100% /snap/core/8689
/dev/loop4                          68M   68M     0 100% /snap/lxd/14066
/dev/md0                           916G   77M  870G   1% /mnt/raid1
/dev/sda1                          511M  7.8M  504M   2% /boot/efi

How can I free up the space in the logical volume? I can't seem to find the used space anywhere.
 
Old 03-30-2020, 03:21 PM   #12
michaelk
Moderator
 
Registered: Aug 2002
Posts: 25,702

Rep: Reputation: 5895
Unmount /dev/md0 and see what files, if any, are in the /mnt/raid1 directory.
 
1 members found this post helpful.
Old 03-30-2020, 04:18 PM   #13
lochesistemas
LQ Newbie
 
Registered: Mar 2020
Posts: 8

Original Poster
Rep: Reputation: Disabled
Wow... I unmounted it and the files are there. How can I "move" them to the array, mount it once again, and make the files available?
 
Old 03-30-2020, 04:27 PM   #14
michaelk
Moderator
 
Registered: Aug 2002
Posts: 25,702

Rep: Reputation: 5895
Mount md0 on another mount point, then move the files from /mnt/raid1 to that other mount point.

This moves the files from your / partition to /dev/md0.
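The procedure above might look like this as commands (a sketch; requires root, and /mnt/raid1-tmp is an arbitrary temporary mountpoint):

```shell
# 1. Mount the array on a temporary mountpoint
mkdir -p /mnt/raid1-tmp
mount /dev/md0 /mnt/raid1-tmp

# 2. Move the files that were hidden on the / filesystem onto the array
#    (add any dotfiles separately; * does not match them)
mv /mnt/raid1/* /mnt/raid1-tmp/

# 3. Restore the array to its usual mountpoint
umount /mnt/raid1-tmp
mount /dev/md0 /mnt/raid1
rmdir /mnt/raid1-tmp
```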
 
1 members found this post helpful.
Old 03-30-2020, 06:30 PM   #15
lochesistemas
LQ Newbie
 
Registered: Mar 2020
Posts: 8

Original Poster
Rep: Reputation: Disabled
Finally!

Code:
# df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               1.9G     0  1.9G   0% /dev
tmpfs                              390M  1.4M  389M   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv  109G  7.0G   98G   7% /
tmpfs                              2.0G     0  2.0G   0% /dev/shm
tmpfs                              5.0M  4.0K  5.0M   1% /run/lock
tmpfs                              2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/loop0                          68M   68M     0 100% /snap/lxd/14034
/dev/loop1                          68M   68M     0 100% /snap/lxd/14066
/dev/loop2                         218M  218M     0 100% /snap/nextcloud/19299
/dev/sda2                          976M  196M  714M  22% /boot
/dev/loop3                          92M   92M     0 100% /snap/core/8689
/dev/loop4                          92M   92M     0 100% /snap/core/8592
/dev/sda1                          511M  7.8M  504M   2% /boot/efi
/dev/md0                           916G   89G  781G  11% /mnt/raid1
tmpfs                              390M  5.7M  385M   2% /run/user/128
tmpfs                              390M  4.0K  390M   1% /run/user/1000

Thank you so much for all your help!!
 
  

