[SOLVED] How to increase size of my / root partition
Running CentOS 6.3 and need to expand the root partition into the unused space next to it.
Tried booting into single-user mode, but gparted still doesn't allow the (mounted) root partition to be expanded.
I can boot from CD into CentOS 7 SystemRescue, but from there how do I change the size of the partition?
Unfortunately, this old server doesn't seem to like running from any of the live environments (Fedora Live locks up), which rules out doing it from within any of the live Linux distros.
You can move files not needed at boot out of the root partition and turn them into mounts (there is also a newer "bind" mount option). There are partition-moving programs; they are robust, but you should back up first. Whether you use RAID, and which type of filesystem, is an issue: if you have ext2 it's simple, failsafe, and fast, so you're probably OK. Use a highly featured, complex filesystem and it might buck and twist and lose something.
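As a sketch of the bind-mount idea mentioned above (the partition, mount point, and directory names here are hypothetical, not from this thread): after copying a directory onto a spare partition, /etc/fstab entries like these mount the partition and bind the directory back over its old path:

```
# hypothetical /etc/fstab lines: mount the spare partition, then
# bind-mount a directory from it back over the original path
/dev/sda6       /srv/data   ext4   defaults   0 2
/srv/data/var   /var        none   bind       0 0
```

With entries like this, the files physically live on /dev/sda6 but remain reachable at /var, freeing space on "/" without breaking any paths.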
However, the BEST thing to do is:
1) back up
2) make two partitions: #1 for rescue (a full but more minimal install), #2 for the whole system
Also important: your "/" keeps a record of all mounts of all kinds. If you change "/" and the mounts "aren't there yet", your system will no longer be coherent and will no longer work.
Finally, each chroot directory is its own "/", and once the root user is inside it, that user has the same full access to the kernel as under "the real /", so security is out the window on most of today's systems (unless very carefully assembled).
If you can copy the system files into a directory and then type "chroot chroot1/", then it's a chroot: just one of any number of "/" directories the kernel sees.
GParted does have a live CD/USB image. Make sure you back up any and all important data, just in case. Hopefully it is small enough to run on your system.
Yes, but you have to get things exactly right or your machine won't boot and you'll need to boot from something like SystemRescueCD to recover. If you aren't already familiar with the command line partitioning tools, I don't recommend that you try. Just boot from the rescue CD that you're likely to need anyway and run gparted from that.
If you're really insistent on doing it from the command line on the running system, post the output from "fdisk -lu" so that I can see exactly how to proceed or whether you're going to run up against the limitations fdisk has about where the first partition can start. You're using an old version of fdisk that defaults to cylinder units, and that's not exact enough.
Code:
Disk /dev/sda: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00056923

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    31101839    15549896   83  Linux
/dev/sda4        89706960   625137344   267715192+   f  W95 Ext'd (LBA)
/dev/sda5       617056713   625137344     4040316   82  Linux swap / Solaris
/dev/sda6        89707086   617056649   263674782   83  Linux

Partition table entries are not in disk order
I don't need to do this from a running system as I can boot from CD to CentOS 7 SystemRescue. Just checked and I am actually running version 6.7.
First save the current partition table to a file with "sfdisk -d /dev/sda >parts.out". Edit that file and make the size for partition 1 larger. The maximum that will fit in the available space is 89704912 sectors. That would increase the partition size from its current ~15GB to ~44GB.
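Those figures can be sanity-checked with plain shell arithmetic, using the start sectors from the fdisk listing above:

```shell
# Partition 1 starts at sector 2048; the next partition on disk
# (sda4) starts at sector 89706960, so the largest size sda1 can
# grow to without colliding is:
max_sectors=$((89706960 - 2048))
echo "$max_sectors sectors"           # 89704912

# At 512 bytes per sector, that is roughly 44GB
# (42.8 GiB, or 45.9 decimal GB):
echo "$((max_sectors * 512)) bytes"   # 45928914944
```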
Now use sfdisk to repartition /dev/sda from that file and then enlarge the filesystem to fill its new partition:
Code:
sfdisk /dev/sda <parts.out    # write the edited table back to the disk
resize2fs /dev/sda1           # grow the filesystem to fill the partition
Really, you can do all that from the running system. You will just have to include the "--force" option with sfdisk in the repartitioning step and then reboot before resizing the filesystem. The only issue with that is that "--force" will also override any other detectable problems.
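Before touching the real disk, the edit-the-dump step can be rehearsed on a plain text file. The dump lines below are a hypothetical reconstruction of what "sfdisk -d /dev/sda" would print for the table above (the exact format varies between util-linux versions); the start/size sectors match the fdisk listing, with size = end - start + 1:

```shell
# Hypothetical excerpt of parts.out for the disk quoted earlier
cat > parts.out <<'EOF'
/dev/sda1 : start=     2048, size= 31099792, Id=83, bootable
/dev/sda4 : start= 89706960, size=535430385, Id= f
EOF

# Grow partition 1 to the maximum that fits before sda4:
sed -i 's/size= 31099792/size= 89704912/' parts.out

grep sda1 parts.out   # the sda1 line now shows size= 89704912
```

On the real system the edited file is then fed back with "sfdisk /dev/sda <parts.out" exactly as in the steps above; the dry run just lets you make sure your edit changes only the field you intended.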
(I'm really hesitating before hitting "Submit Reply" on this, but here goes ...)
Easily the best thing to do is to convert the system to use LVM = Logical Volume Management. If this had already been in place, then you could have simply added another "physical volume" to the root "storage pool."
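For reference, the LVM route looks roughly like this once a spare partition exists. This is a sketch under assumptions, not commands to paste blindly: the device name /dev/sda6, the volume group name vg0, and the logical volume name root are all hypothetical, and everything here must run as root on a system whose "/" already lives on LVM:

```shell
# Assume /dev/sda6 is a spare partition and "/" is on /dev/vg0/root
pvcreate /dev/sda6          # turn the partition into a physical volume
vgextend vg0 /dev/sda6      # add it to the volume group ("storage pool")

# Grow the root logical volume into the new free space; -r also
# resizes the filesystem inside it in the same step
lvextend -r -l +100%FREE /dev/vg0/root
```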