CentOS: This forum is for the discussion of CentOS Linux. Note: This forum does not have any official participation.
As an update to my original post, I have found the CentOS 7 installer to be MUCH more stable and easier to use when run from within a live environment. Unlike the standalone installer, when the install process is started from a live session it auto-detects the network interface, sets up NTP automatically, enables the boot loader by default, and the disk partitioner doesn't fatally crash every two minutes.
I've since abandoned the standard DVD ISO and instead use the GNOME Live ISO, installing from within the live session. This says nothing of systemd, GNOME 3, etc., but at least the installation process works correctly this way.
I found the partitioner confusing and network setup impossible. I wanted a static IP address, but the installer would not seem to allow it. I set up the network manually, but the button to complete the settings was always greyed out.
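For what it's worth, a static address can also be set after installation from the command line, since CentOS 7 manages interfaces with NetworkManager. A minimal sketch, assuming the connection is named eth0 (the name and all addresses below are placeholders for your own values):

```shell
# Switch the connection to a manually configured IPv4 address.
nmcli con mod eth0 ipv4.method manual \
    ipv4.addresses 192.168.1.50/24 \
    ipv4.gateway 192.168.1.1 \
    ipv4.dns "192.168.1.1 8.8.8.8"
# Re-activate the connection so the new settings take effect.
nmcli con up eth0
```

`nmcli con show` lists the actual connection names on your system if eth0 isn't one of them.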
I spent all day yesterday getting the system set up the way I wanted it. I feel like I should learn to use Gnome Classic, since it's the standard being forced on us. But I hate Gnome Classic too much: it's too slow, has too many unwanted features, and I can't put icons in the task bar. I use MATE.
This morning I go to boot the system, and it won't work. Every rescue attempt has failed. Yeah, this is just what is needed for the enterprise.
An entire day of work wasted. I will make one more attempt to re-install, then give up. Salix 14.1 looks pretty good.
Over a year after I started using CentOS 7 on some of the servers I admin, my opinion of it has only gotten worse.
I made the mistake of installing it on my work laptop, hoping that it would provide a nice stable work environment that I wouldn't have to reinstall for a long time. Huge disappointment.
All kinds of things are broken. Things that I'd never had an issue with in any other distribution, including Fedora, don't work. The package availability actually seems to have shrunk since CentOS 6.
This is supposed to be the third incremental release of a rock-solid stable distribution, yet it feels like beta software. At least on the desktop. On the server side, the update from 7.0 to 7.1 was so bad that I had to reinstall some of our servers to get them working again. Part of the cause was that the base packages on the install DVD differed from those in the base repository.
From what I've seen on Bugzilla, these issues are present in RHEL as well. What was Red Hat thinking?
My opinion hasn't improved much either. Since setting up my workstation on 7 a little over a year ago, I've had to format and reinstall two separate times.
One was because a sudden power outage corrupted the XFS filesystem beyond repair. Who on earth thought it was a good idea to use XFS as the default filesystem on a server OS? XFS is horrible; I've never seen a more failure-prone filesystem in my life. Any hiccup in the power supply has a very reasonable chance of borking your entire filesystem and forcing you to format and reinstall. Why does it even have a journal? It apparently doesn't use it.
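Before reformatting, it can be worth trying XFS's own repair tool from a rescue environment. A sketch of the usual sequence, with /dev/sda2 standing in for the affected partition (it won't save every filesystem, and the -L option can discard recent writes):

```shell
# The filesystem must be unmounted before repair.
umount /dev/sda2 2>/dev/null

# Normal repair: replays the journal, then checks and fixes metadata.
xfs_repair /dev/sda2

# Only if the journal itself is corrupt and the above refuses to run:
# -L zeroes the log, which may lose the most recent writes.
# xfs_repair -L /dev/sda2
```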
The second reinstall was because stupid me decided to install exFAT driver support so I could access an external drive. This seemingly harmless (and, by the way, successful) driver install locked up the system, and after a reboot the machine wouldn't stop kernel panicking. I tried dropping back to previous kernels; no dice, nothing would boot, the system was hosed. Even uninstalling the driver didn't help.
I have CentOS on a workstation as well, with a few CentOS kernels to choose from when booting, which is normal with updates... and then each new update rendered one of the kernels unbootable (fails to mount /sysroot; oh, what an upgrade). Out of the six kernels on the menu, three are already unbootable.
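One way to prune the dead entries on CentOS 7 is the package-cleanup tool from yum-utils. A sketch, assuming you only want to keep the kernels you know boot (the kernel version shown is a placeholder):

```shell
# package-cleanup ships in the yum-utils package.
yum install -y yum-utils

# Keep only the two most recent installed kernels, remove the rest.
package-cleanup --oldkernels --count=2

# Or remove one specific broken kernel by its exact package name:
# yum remove kernel-3.10.0-514.el7.x86_64
```

Removing a kernel package also drops its GRUB menu entry, so the unbootable choices disappear from the boot screen.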
I don't have a lot of experience with Linux, but I would choose CentOS 6.7 over 7. I tried 7 on an old Athlon PC I was going to use as a server, and when I looked for help with problems, most posts I found were for 5 or 6, which differed too much from 7 to be useful. So when I got the new (old) Dell OptiPlex to replace the Athlon, I installed CentOS 6.7 instead. Never looked back.
To be more specific, I remember a change from iptables to firewalld in CentOS 7 which was harder to find help for because of how new it was. (Although it's probably a bad example as it is inherited from RHEL)
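To illustrate the shift: a classic iptables rule has a fairly direct firewalld counterpart, it's just spelled differently. A sketch of the SSH case on CentOS 7:

```shell
# Old iptables style (allow inbound SSH):
#   iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# firewalld equivalent: add the rule permanently, then reload
# so the permanent configuration becomes the running one.
firewall-cmd --permanent --add-service=ssh
firewall-cmd --reload

# Arbitrary ports work too:
# firewall-cmd --permanent --add-port=8080/tcp
```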
In my opinion it isn't wise to change a less widely used OS too much; people on the new version will have a much harder time finding help.
"One because a sudden power outage corrupted the XFS filesystem beyond repair."
You shouldn't be using XFS at all if you don't have a UPS (Uninterruptible Power Supply). If you simply plug your machine into the wall you should stick with ext4, as it is much safer. I have an APC UPS and use ext4.
"Who on earth thought it was a good idea to use XFS as the default filesystem on a server OS? XFS is horrible, I've never seen a more failure-prone filesystem in my life. Any hiccup in the power supply has a very reasonable chance of borking your entire filesystem and forcing you to format and reinstall. Why does it even have a journal? It apparently doesn't use it."
This is much less of an issue with servers as responsible server owners don't plug their servers into the wall. They use a UPS to provide battery backup. A UPS can even shut down a system cleanly for long running power outages. With servers there is a tradeoff between filesystem performance and reliability. XFS has better performance for large files such as database tables and VM images but requires a UPS and better backup practices.
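As an example of the clean-shutdown part, apcupsd (a common daemon for APC hardware) can be told when to power the machine down. A sketch of the relevant directives from /etc/apcupsd/apcupsd.conf; the values are illustrative, not defaults:

```shell
# Excerpt from /etc/apcupsd/apcupsd.conf
UPSCABLE usb
UPSTYPE usb
BATTERYLEVEL 10    # begin shutdown when battery charge drops to 10%
MINUTES 5          # ...or when estimated runtime falls to 5 minutes
TIMEOUT 0          # 0 = rely on the two thresholds above, not a timer
```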
I was reading the posts earlier in this thread and was shocked at how confused you were about the installer and manual partitioning. I was confused for 10-20 minutes before I figured out that you have to tick the 'Format' checkbox in the lower section to enable using existing partitions.
It is a safety feature of Anaconda that when you select an existing partition it is initially greyed out, so the installer doesn't damage existing partitions. However, if you tick the 'Format' checkbox you give the installer permission to reformat the partition, and it is then displayed in black for reuse and mounting. The checkbox should really be labeled 'Reformat', since it is asking the user for permission to reformat the selected partition.
I installed both CentOS 7 and Fedora 24 to a new SSD about a month ago. I created all of my partitions in GParted in advance and had to tick the 'Format' checkbox to use them.
"This is much less of an issue with servers as responsible server owners don't plug their servers into the wall. They use a UPS to provide battery backup. A UPS can even shut down a system cleanly for long running power outages. With servers there is a tradeoff between filesystem performance and reliability. XFS has better performance for large files such as database tables and VM images but requires a UPS and better backup practices."
So a failure-prone filesystem is acceptable in a server environment because it has somewhat better performance and servers should have better backups anyway? I guess that's why so many servers store the OS and data on RAID 0 arrays. [/sarcasm]
I hope you know that there are many reasons why a server might experience an unclean shutdown; power failure is only one of them. Even something as simple as the OS hanging and forcing a hard shutdown is easily enough to brick an XFS filesystem (yes, that has happened to me as well, and I had to reinstall the OS because of it). I'm not saying XFS shouldn't be available, but it should not be the default on any distro, much less a server-oriented distro.
Quote:
Originally Posted by tofino_surfer
I was reading the posts earlier in this thread and was shocked how confused you were about the installer and manual partitioning. I was confused for 10-20 min before I figured out that you have to check the 'Format' checkbox in the lower section to enable using existing partitions.
In the two years since I made this thread, a lot of changes have been made to the installer. I don't have an old ISO to verify, but I'm fairly sure there was no "Format" checkbox available back then. Based on my experience though, even if it had one, clicking it likely would have seg-faulted the installer anyway.