Linux - Newbie
This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the how-tos, this is the place!
I needed to move my Ubuntu install from a 5 GB partition (/dev/sda6) to a 30 GB partition (/dev/sda3). I never thought I'd use Ubuntu that much, so I put it on a small test partition.
I used partimage to create an image of sda6 and then restored it to sda3.
When I do a df -h, I get the wrong output: df still reports the old size.
I did investigate using tune2fs -m 0 /dev/sda3 to free the space reserved for root, but this did not help.
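partimage restores the filesystem byte-for-byte at its original size, so the restored sda3 still holds a 5 GB filesystem; tune2fs -m only tweaks the reserved-block percentage and cannot change that. A hedged sketch of the usual fix, growing the ext filesystem with e2fsck and resize2fs (device name taken from this post; the dry-run guard is my addition so the snippet is safe to paste):

```shell
# Sketch only: grow the restored filesystem to fill the larger /dev/sda3.
# Dry run by default so nothing destructive happens; set DRY_RUN=0 (and run
# as root, from a live CD if sda3 holds /) to actually apply it.
DEV=${DEV:-/dev/sda3}
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run e2fsck -f "$DEV"     # resize2fs requires a clean filesystem check first
run resize2fs "$DEV"     # with no size argument, grow to fill the partition
```

After this, df -h should report the full 30 GB.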
When you copy a partition at the byte level (with dd or similar tools), you copy the entire filesystem, which stays its original size even inside a new, larger partition. I'm not sure what partition-resizing tools do in that situation. (What "resize command" did you use?)
As for installing GRUB, the shell method you show requires 3 commands:
grub
root
setup
For example, to point GRUB to drive 1, partition 3 for its files, and then install it to the MBR of drive 1:
grub
root (hd0,2)
setup (hd0)
I have the same problem. In my case I am using partimage to move from a 40GB HDD to an 80GB HDD.
When I do a df -h I get the following on the 80GB drive:
Code:
Filesystem Size Used Available Use% Mounted on
/dev/sda3 35.9G 22.5G 11.6G 66% /
whereas with sfdisk -s /dev/sda3 I get the following on the 80GB drive:
Code:
#sfdisk -s /dev/sda3
77352975
Any way out of this?
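The two numbers aren't contradictory once the units are lined up: sfdisk -s reports the partition size in 1 KiB blocks, so 77352975 is roughly 73 GiB of partition, while df is showing the filesystem inside it, which is still only 35.9G. A quick conversion using the figures from the post above:

```shell
# sfdisk -s prints the size of /dev/sda3 in 1 KiB blocks.
kib=77352975
gib=$((kib / 1024 / 1024))
echo "partition: ~${gib} GiB; filesystem per df: 35.9G"
# The gap is the problem: the restored filesystem never grew to fill
# the partition, so it needs the same resize2fs treatment as the OP's.
```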
Quote:
Originally Posted by pobman
Hello,
I needed to move my Ubuntu install from a 5 GB partition (/dev/sda6) to a 30 GB partition (/dev/sda3). I never thought I'd use Ubuntu that much, so I put it on a small test partition.
I used partimage to create an image of sda6 and then restored it to sda3.
When I do a df -h, I get the wrong output: df still reports the old size.
I did investigate using tune2fs -m 0 /dev/sda3 to free the space reserved for root, but this did not help.
In the end I decided to blow away my install and start afresh, but I will keep this thread, as I am sure it will happen again. Now to try a similar thing on my NTFS partition with that bloatware.
Thanks, VinayatLQ - I had the same problem and followed your 8 steps; they worked great. df now reports the same size as GParted.
I'm planning to use the partition as a raid1 mirror with a similar HD so it was important to get it right (I'd resized it to match it to the other HD).
There was only one minor difference in what I did - the partition is new and separate from the ones in use (/, /home, etc), so I could unmount it and work on it without using a rescue CD like Knoppix.
I don't think of myself as a newbie, or of this as a newbie-type question, but I appreciate getting help from people that are more expert than me, and I'll try to do the same.
I know it has been a while since this thread has seen any action. I am only posting because this keeps coming up high in my Google searches and I found a simple solution.
I resized my RAID6 array by adding three new drives, but df kept showing the old size.
After some research I found that I simply needed to tell the OS to resize the filesystem to its maximum size. It took a while, but it did it LIVE, without unmounting the array and with no data loss. In fact, I was even able to write to the disk while it was being resized. I wouldn't try rebooting mid-resize, though, because that sounds downright dangerous.
It took about 3 hours (+/-) to expand my array from 3T to 6T using this command:
Code:
sudo resize2fs /dev/md9
By not specifying the size you are resizing to, the entire available space is used which is exactly what I wanted.
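For the record, the full grow sequence this implies would look something like the sketch below. The array name /dev/md9 is from this post; the member-disk name and new device count are placeholders, and the dry-run guard is my addition so the commands only print unless you opt in:

```shell
# Sketch of growing an mdadm array and then its filesystem.
MD=${MD:-/dev/md9}
NEW_DISK=${NEW_DISK:-/dev/sdX}   # placeholder: repeat --add for each new drive
NEW_COUNT=${NEW_COUNT:-6}        # placeholder: total member count after growing
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run mdadm --add "$MD" "$NEW_DISK"                   # add the new drive as a spare
run mdadm --grow "$MD" --raid-devices="$NEW_COUNT"  # reshape onto the new drives
run resize2fs "$MD"      # then grow the ext filesystem to fill the larger array
```

The reshape is the slow part; resize2fs at the end is what finally makes df show the new size.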