Linux - Containers
This forum is for the discussion of all topics relating to Linux containers. Docker, LXC, LXD, runC, containerd, CoreOS, Kubernetes, Mesos, rkt, and all other Linux container platforms are welcome.
So I installed Docker on a Rocky Linux VM with a 10GB hard drive (running on Proxmox) for two small Docker applications (rsshub & tinytinyrss). The applications were not taking much more than 2.5GB of hard drive space at the moment of installation, and the VM had plenty of space left in /
Fast forward to today: I keep getting notifications from this VM that the hard drive is nearly full (the VM is configured to send an email notification via cron if a storage layer reaches 80% or more)... df -H reveals:
Examining /, I believe the issue is inside "/var/lib/docker/overlay2/", but to be transparent, googling this issue sent me in so many different directions that I am not sure. I know very little about Docker; I believe it uses virtual FSs for storage, and some users have pointed out that even if overlay2 seems to be the culprit, in reality it could be some other files occupying all the space somewhere else within the Docker virtual FS... Could be bull**** too...
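As an aside, the kind of 80%-threshold cron check described above can be sketched like this (the `check_df` function name, the threshold, and the sample data are illustrative, not the actual script on the VM):

```shell
# Sketch of a cron disk check: parse `df -P` output and warn when any
# filesystem is at or above a usage threshold.
check_df() {
    threshold="$1"
    awk -v t="$threshold" 'NR > 1 {
        use = $5; sub(/%/, "", use)          # strip the trailing % sign
        if (use + 0 >= t) printf "WARNING: %s is at %s%%\n", $6, use
    }'
}

# In a real cron job you would pipe live output: df -P --local | check_df 80
# Sample data shaped like df -P output, for demonstration:
printf '%s\n' \
  'Filesystem 1024-blocks Used    Available Capacity Mounted-on' \
  '/dev/sda1  10188052    8650844 1010824   90%      /' \
  '/dev/sdb1  10188052    2000000 8000000   20%      /data' \
| check_df 80
```

In a real deployment the WARNING lines would be mailed by cron rather than printed.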
Anyways, /var/lib/docker is indeed 5.8GB
Code:
du -hs /var/lib/docker
5.8G /var/lib/docker
Looking at docker's FS usage stats:
Code:
docker system df
TYPE            TOTAL   ACTIVE  SIZE      RECLAIMABLE
Images          4       4       2.657GB   0B (0%)
Containers      4       4       174.4MB   0B (0%)
Local Volumes   2       2       146.1kB   0B (0%)
Build Cache     0       0       0B        0B
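Worth noticing: `docker system df` only itemizes about 2.8GB, while `du` reports 5.8GB for /var/lib/docker, so roughly 3GB lives in data Docker does not account for per object (container logs, for instance). A quick back-of-the-envelope check, using the numbers quoted in this thread:

```shell
# Numbers (in GB) taken from the outputs quoted above; this just computes
# how much of /var/lib/docker is NOT explained by docker system df.
du_total=5.8                 # du -hs /var/lib/docker
images=2.657                 # Images
containers=0.1744            # Containers (174.4MB)
volumes=0.0001461            # Local Volumes (146.1kB)
awk -v t="$du_total" -v i="$images" -v c="$containers" -v v="$volumes" \
    'BEGIN { printf "unaccounted: %.1f GB\n", t - i - c - v }'
```

That unaccounted ~3GB is the part worth hunting for outside the image/container/volume totals.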
Looking at docker's container sizes:
Code:
docker ps --size
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES SIZE
f260f3a94b0f diygod/rsshub "dumb-init -- npm ru…" 2 months ago Up 13 hours 0.0.0.0:1200->1200/tcp, :::1200->1200/tcp rsshub_rsshub_1 174MB (virtual 408MB)
d2e03bd28b6e redis:alpine "docker-entrypoint.s…" 2 months ago Up 13 hours 6379/tcp rsshub_redis_1 0B (virtual 32.4MB)
064474c5af54 browserless/chrome:1.43-chrome-stable "./start.sh" 2 months ago Up 13 hours 3000/tcp rsshub_browserless_1 11B (virtual 2.14GB)
3ebad0761681 portainer/portainer-ce "/portainer" 2 months ago Up 13 hours 8000/tcp, 9443/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp portainer 0B (virtual 252MB)
Finally trying to find the largest folders within /var/lib/docker:
I thought the logs and JSON files were the issue, but looking at the total space used by each of these file types (find . -name "*.log" | xargs du -sch):
- logs: < 519MB
- JSON: < 7.5MB
- DEB: < 139MB
....
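A note on the measurement itself: with many files, `find | xargs du -sch` can print several intermediate totals (xargs batches its arguments) and breaks on filenames containing spaces. A safer sketch for a per-extension total (the `total_by_ext` helper and demo paths are illustrative):

```shell
# total_by_ext: sum the size of all files with a given extension under a root.
# -print0 / -0 handle odd filenames; summing in awk means xargs batching
# cannot split the total across several "total" lines.
total_by_ext() {
    find "$1" -type f -name "*.$2" -print0 \
      | xargs -0 -r du -b \
      | awk -v ext="$2" '{ s += $1 } END { printf "%d bytes in *.%s\n", s, ext }'
}

# Demo on a throwaway directory:
mkdir -p /tmp/ext-demo
head -c 1000 /dev/zero > /tmp/ext-demo/a.log
head -c 500  /dev/zero > "/tmp/ext-demo/b with space.log"
total_by_ext /tmp/ext-demo log
```

Run against /var/lib/docker this gives one reliable number per file type.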
1. What is causing the filesystem to grow out of control?
2. How do I clean up Docker to free up HDD space?
You may be onto something... I have added two more env variables for rsshub as per their wiki (https://docs.rsshub.app/en/install/#docker-image) in the hope the cache will clear itself at some point, but I am not expecting anything at the moment...
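For anyone finding this thread later, a hypothetical docker-compose fragment with the cache-related variables the RSSHub docs describe (assuming these are the two variables meant; the values, in seconds, are illustrative):

```yaml
services:
  rsshub:
    environment:
      CACHE_EXPIRE: '300'            # route cache TTL, seconds
      CACHE_CONTENT_EXPIRE: '3600'   # content cache TTL, seconds
```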
As for the browserless Chrome Docker app, I also believe it may be the root cause... However, I can't find how to clear its cache or limit its growth...
Theoretically you can go inside the Docker container and look around, as if it were a real VM. But that requires some knowledge about them.
you may get some ideas (for example) here: https://phoenixnap.com/kb/how-to-ssh...cker-container
"Unused objects" won't account for this. Something inside the containerized environment must be the culprit in this case.
Fully agree with you. I tried pruning the system, and it deleted a few unused images and cleared up 233MB of space, barely making a dent in the problem.
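Since pruning only removes unused objects, one thing it never touches is the json-file container logs under /var/lib/docker/containers/, and those were ~519MB here. They can be truncated in place without restarting the container; a sketch (the path below is a stand-in, not the actual log file):

```shell
# Stand-in for /var/lib/docker/containers/<id>/<id>-json.log
log=/tmp/demo-json.log
head -c 1048576 /dev/zero > "$log"   # simulate a 1 MiB container log
truncate -s 0 "$log"                 # reclaim the space immediately
wc -c < "$log"
```

Equivalently, `: > "$log"` works in plain sh. To keep logs from growing back, Docker's json-file logging driver supports "max-size" and "max-file" rotation options in /etc/docker/daemon.json.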
"docker image ls" gives:
Code:
REPOSITORY               TAG                  IMAGE ID       CREATED         SIZE
diygod/rsshub            latest               f40e0167073d   5 weeks ago     234MB
portainer/portainer-ce   latest               ad0ecf974589   3 months ago    252MB
redis                    alpine               5c08f13a2b92   3 months ago    32.4MB
browserless/chrome       1.43-chrome-stable   ec7bb30f39c5   15 months ago   2.14GB
clearly the problem is "ec7bb30f39c5".
ssh into the container, I could see that /usr is taking 1.7GB, with /usr/src and /usr/lib taking most of that space...
Am I wrong to say that there's little to do with this unless screwing around in the container trying to save space, when perhaps it's the container that is NOT built intelligently (to clean itself up)?
These days, I've encountered a lot of software (specifically including Chrome) that is extremely wasteful of disk space, probably because the authors figure that it doesn't matter. It is entirely reasonable to presume that the "stock" container definition also did not bother. Really, the only thing that you can do at this point is to enter the container and have a look around: look at the configuration files, zero in on exactly what is in (say) "/opt," and then exactly why.
Incidentally – this has become one of the specific reasons why I have come to dislike "Docker." In my view, it leaves you too far disconnected from what is actually going on. "The price of convenience" may turn out to be too much. However, that's just me . . .
Last edited by sundialsvcs; 03-03-2022 at 05:43 PM.
Am I wrong to say that there's little to do with this unless screwing around in the container trying to save space, when perhaps it's the container that is NOT built intelligently (to clean itself up)?
Yes, you are wrong. First of all you can drop the whole container and start over with a new one.
Next you can use ncdu or other software to see what is in /usr and clean that manually (if possible).
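If ncdu isn't available inside a minimal image, plain du can stand in. A sketch of such a helper, demonstrated on a throwaway tree (inside the real container you would get a shell first with something like `docker exec -it rsshub_browserless_1 sh` and run it against /usr; the `biggest` name and demo paths are illustrative):

```shell
# biggest: list the largest entries one level below a directory, biggest first.
biggest() {
    du -k -d 1 "$1" 2>/dev/null | sort -rn | head -n "${2:-5}"
}

# Demo on a throwaway tree:
mkdir -p /tmp/du-demo/usr/src /tmp/du-demo/usr/lib
head -c 1048576 /dev/zero > /tmp/du-demo/usr/src/big.bin
head -c 1024    /dev/zero > /tmp/du-demo/usr/lib/small.bin
biggest /tmp/du-demo/usr
```

Repeating this while descending into the largest entry quickly pinpoints where the space went.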
It is not the container that is at fault; a container cannot clean itself up, just as a VM or any other OS cannot clean up its own storage. It is the software running inside (Chrome) that is eating up your disk.
Docker (and other container solutions) encapsulates the running process, giving it its own environment and virtually locking out everything else. Docker can also control the resources used, such as RAM or CPU; see for example: https://docs.docker.com/engine/refer...-per-container
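For completeness, a hypothetical compose fragment showing such limits; note these cap RAM and CPU, not disk usage (`mem_limit` and `cpus` are compose file v2 keys, `cpus` from v2.2 on):

```yaml
services:
  browserless:
    image: browserless/chrome:1.43-chrome-stable
    mem_limit: 1g    # cap container RAM at 1 GiB
    cpus: 0.5        # cap at half a CPU core
```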