[SOLVED] Can Linux have difficulty with a 2TB drive?
No kidding. Does it automatically check for updates and install them on boot-up? I've been told that Windows does that now whether you like it or not. No wonder the default on Windows is to hibernate rather than shut down.
AFAIK, Linux Mint does not update on boot. There is an error reported in the attached file, which I want to track down to verify whether or not it is the cause of the slow boot. I was told the message should not be relevant, but I am searching the web and seeking more expert input, since the error may have been triggered while verifying the health of three disk drives and a DVD drive. The disk drives are: a 120GB SSD for Windows 10 without user data, a 1TB Seagate dedicated to Linux with 70GB of Linux data, and the 2TB for everything else.
I have a 12TB mechanical disk array on my file server - I guess boot time is not an issue as it is on 24/7, but size is not an issue (insert joke here).
When I moved some of my Linux desktops from mechanical drives to SSDs, we are talking 2 minutes dropping to under 30 seconds. My main desktop went from a minute to 10 seconds, power to desktop. I built a living-room PC for my sister with an M.2 drive which goes power to desktop in under 10 seconds (jealous, wish I did that on mine).
Just giving my anecdotal experience with Linux boot times. And to add that for every single machine that I converted from Windows to Linux, the time was improved massively. We had a Windows 10 machine that took about 10 minutes before it was operational (an expensive laptop) - put Debian on it and it was under a minute on a mechanical drive. Boot times are so variable but at the same time, generally easy to improve.
I know that, in the end, if I want better performance it would be with an SSD, and the best performance with an M.2. Mine is an 8th-gen. motherboard and I saw one unused M.2 slot. A 120GB M.2 drive would do the trick. Thanks for the heads-up.
There is an error reported in the attached file, which I want to track down and verify that it is, or not, the cause of the slow boot. ... But I am searching the web, and seek more expert input, that the error may have been triggered by verifying the health of three disk drive and a DVD drive.
That is not the best way to solve it. You need to analyze your own system: we have no access to it, we cannot see the logs, and we cannot run any tests. Waiting for the perfect solution without providing details is just nonsense.
Anyhoo, your bootinfo.txt shows that you have a whole bunch of drives with NTFS and vfat partitions, and each drive with its own EFI partition it seems, in there.
A hot mess from my (Linux) point of view, sorry.
Also, the 2TB drive in question contains only Microsoft partitions.
Do you have any difficulties with all your NTFS partitions apart from the boot delay?
Thanks for your interest and I am responding in reverse order. I was wrong about the Linux drive size. Linux is installed on the 1TB drive, and Windows data is on the 2TB. Nevertheless, the problem came about after installing the 2TB drive.
There are no boot delays or speed problems with Windows 10. It boots fast and with instant access to the NTFS partitions. Keep in mind that Windows ignores Linux partitions, whereas Linux does not ignore Windows partitions.
Here are the CPU specs. Performance nearly matches that of an i3 of the same (8th) generation.
You misread the Boot-repair report. There is only one EFI partition on the SSD created by Windows. If you see any others, I don't know anything about them.
1. The SSD is dedicated to Windows 10 and its three masked system partitions.
2. A 2TB drive divided into four partitions, all NTFS.
3. A 1TB drive divided into three EXT4 partitions, for system, data and backup, and a 5GB swap partition.
Since I wrote this post, I don't mount the NTFS partitions, but it still makes no difference.
P.S: After booting there is no lag when accessing anything on the EXT4 and NTFS drives.
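For anyone else trying to keep extra NTFS partitions out of the boot path, the usual place to do it is /etc/fstab. This is only a sketch: the UUIDs, mount points, and filesystem driver (ntfs-3g) below are made up for illustration, not taken from the poster's machine.

```
# /etc/fstab -- sketch with hypothetical UUIDs and mount points.
# "noauto" keeps a partition out of the boot sequence entirely;
# "nofail" plus a short device timeout mounts it but won't stall boot.
UUID=1234-ABCD  /mnt/windata  ntfs-3g  noauto,users                        0  0
UUID=5678-EF01  /mnt/archive  ntfs-3g  nofail,x-systemd.device-timeout=5s  0  0
```

With "noauto" the partition still mounts on demand (e.g. from a file manager); with "nofail" a missing or slow drive is skipped instead of blocking the boot.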
Last edited by ineuw; 06-26-2020 at 03:38 AM.
Reason: clarifications
Old thread, but in case anybody is wondering: the LVM developers are nuts. All of their tools (vgs, lvs, lvm) will print that scary-sounding error if called from a program with any open file descriptor other than 0, 1, or 2 (like a shell script writing to a log on fd 63). It is not a problem with you or the calling code; it is a paranoia issue with LVM.
fsck will scan the filesystem if the previous shutdown did not flush everything and unmount the disk cleanly. Make sure your power button is set up to run shutdown and wait, rather than shut off immediately, which will lose data. Also, do not mount an HDD with barrier=0 unless you know what you're doing.
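To check what options a filesystem is actually mounted with (and whether barrier=0 or nobarrier snuck in), findmnt from util-linux shows the live mount table. A sketch; run it as-is or point it at a specific mount point.

```shell
# List every mounted filesystem with its effective mount options.
findmnt -o TARGET,FSTYPE,OPTIONS

# Grep for disabled write barriers; prints a reassurance when none found.
findmnt -o TARGET,OPTIONS | grep -E 'barrier=0|nobarrier' \
    || echo "no barriers disabled"
```

Unlike reading /etc/fstab, this reflects the options in effect right now, including anything a default or a remount changed.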
Much thanks for the clarifications, and thanks to all for contributing to my understanding. I will check the disk configurations and buy an M.2 drive for booting Linux.
^ what's M2 and how would buying more hardware solve the problem?
BTW, I'm not convinced that the error from post #1 is relevant at all.
Quote: Originally Posted by ineuw
Since I wrote this post, I don't mount the NTFS partitions, but it still makes no difference.
Well, then, logically, the delay cannot come from mounting the drives, no?
Anyhow, I think we need some hard data on this:
Code:
systemd-analyze blame
In the end I tend to agree that 75s isn't too bad for a 4-threaded consumer CPU and a machine chock full of spinning NTFS hard drives, and you should probably leave well enough alone.
ondohono, I noticed the "scratching" of M2. :-)
Also came to believe that these are not issues.
The delay is due to the speed of the drive and because of the 12 partitions:
3 ext4
7 ntfs
1 swap
1 vfat
Originally, I started out with 18 partitions and reduced them to 12.
'systemd-analyze blame' sums up to ~84 seconds.
In the previous installation there was a dedicated 120GB SSD, which booted in ~15 seconds.
Which is roughly the time you are bemoaning.
Doesn't help us a bit unless you show us the full output.
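For what it's worth, the per-unit times from 'systemd-analyze blame' can be totaled with a short pipeline. The three sample lines below are made up for illustration, not from the poster's machine, and the sketch only handles plain "Ns" values (lines like "1min 2.3s" would need extra handling).

```shell
# Sum the first column of blame-style output and print the total.
printf '%s\n' \
    '32.100s NetworkManager-wait-online.service' \
    '28.500s apt-daily.service' \
    '14.200s dev-sdb1.device' |
awk '{ sub(/s$/, "", $1); total += $1 } END { printf "%.1fs total\n", total }'
```

On the sample data this prints "74.8s total"; piping real blame output through the awk stage gives the figure being discussed here.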
The increase from 15 to 84 seconds boot time qualifies me to bemoan. I enjoyed your choice of the word, but I'll stop bemoaning. What really surprised me was the speed difference between an electromechanical device and an SSD, and I looked at it as a software defect.
And you're still not showing us the output of 'systemd-analyze blame'???
Sorry for using fancy words, sometimes I just can't help myself.
Rather self-explanatory.
I don't know what fstrim does, but you probably need to tell it to leave NTFS partitions alone.
apt-daily has nothing to do with your hard drives; apt is your package manager.
Waiting for a (wireless) network to become ready can take quite some time and is not necessarily required during boot.
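If 'systemd-analyze blame' shows NetworkManager-wait-online.service eating a big chunk of the boot, one option (a sketch, assuming NetworkManager is the network backend on this Mint install) is a drop-in that caps the wait; the blunter alternative is simply 'systemctl disable NetworkManager-wait-online.service'.

```
# /etc/systemd/system/NetworkManager-wait-online.service.d/override.conf
# Sketch: clear the stock ExecStart, then re-run nm-online with a
# 10-second cap instead of the default wait.
[Service]
ExecStart=
ExecStart=/usr/bin/nm-online -s -q --timeout=10
```

After adding the drop-in, 'systemctl daemon-reload' picks it up on the next boot. Services that genuinely need the network up will still wait, just no longer than the cap.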