- Virtually no central authentication. At most there might be a NIS box out there providing central logins to some machines, but usually those machines don't make up even half of the infrastructure.
NIS, LDAP, AD: I have seen all of them in multiple environments.
I have to admit it is not that common, though. I think it really depends on the goal of the project. I believe a lot of sysadmins use SSH with public/private keypairs instead.
Set it up once and it works forever.
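That "set it up once" really is just two commands. A minimal sketch, assuming OpenSSH; the key path and hostname below are made-up examples:

```shell
# Generate an Ed25519 keypair once. -N '' sets an empty passphrase for
# brevity here; in practice an agent-loaded passphrase-protected key is safer.
key="$(mktemp -u /tmp/demo_key.XXXXXX)"
ssh-keygen -t ed25519 -N '' -f "$key" -C "demo" -q

# Push the public key to each box once (uncomment against a real host):
# ssh-copy-id -i "$key.pub" admin@server01.example.com

# From then on, log in without typing a password:
# ssh -i "$key" admin@server01.example.com
```

After `ssh-copy-id` has run against a host, key-based logins keep working until the key is rotated or revoked, which is exactly the "works forever" appeal.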
- Lack of Monitoring – While Windows shops usually have overly expensive products for monitoring, most Unix shops I have experienced are lucky to have anything. Only one position I’ve held had a Nagios setup.
I can't believe this. I have seen Nagios in a lot of places, and expensive monitoring tools as well.
- Bad Naming Schemes: In fact, no naming schemes at all. Rather, machines are named after your favorite cartoon characters, musicians, or superheroes. The names of boxes have no standard and do not communicate location, purpose, or OS.
In smaller companies this is certainly the case. I believe this is mostly because Linux is not used by end users, while Windows hosts are more likely to serve that purpose.
Think of a print server, file server, remote desktop farm, and so on.
In these small environments there is usually only one admin, so he knows all the names.
- DNS: Even with all the “cute” hostnames, virtually none of them end up in DNS, so everyone ends up with huge /etc/hosts files or they pretend they are uber-1337 and just remember what IPs go to what.
DNS is something almost everybody sets up.
Even in a small environment or at home (with maybe 2 or 3 hosts) you find DNS setups.
He's probably working in legacy environments.
- No Centralized patching
This is certainly true.
Linux systems are not patched as often as Windows systems.
On Windows you actually have to do this every month if you don't want to end up in the news (of course that is a bit exaggerated).
Some companies have WSUS on Windows.
Most admins do it manually on Linux:
yum update -y and you are done (and reboot if there was a kernel update).
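That manual routine can be sketched in a few lines. This is a sketch for an RPM-based box; the `yum` line is commented out so it is safe to read anywhere, and the `rpm` query format is an assumption based on typical RHEL/CentOS output:

```shell
# Apply all available updates (commented out; destructive on a real box):
# yum update -y

# Reboot only if the newest installed kernel differs from the running one.
running="$(uname -r)"
newest="$(rpm -q --last kernel 2>/dev/null | head -n1 | awk '{print $1}' | sed 's/^kernel-//')"
if [ -n "$newest" ] && [ "$newest" != "$running" ]; then
    echo "kernel updated: reboot needed"
    # reboot
fi
```

The kernel check matters because, unlike most packages, a new kernel only takes effect after a reboot.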
- No OS standardization - Shops are too often a mishmash of Redhat and Solaris. That, I suppose, I can deal with. Other shops allowed their Admins and/or Devs way too much freedom. You'll stumble across FreeBSD, Debian, Ubuntu, OpenSuse, Fedora, CentOS, and the list goes on. You jump from box to box having to deal with new locations for config files, different ways of managing Apache and other services, different ways to do package management, and thousands of little "gotchas" that happen on one distro but not the others.
I see a lot of companies moving from Solaris to Red Hat, and mixing those two is not that bad. It is just a different operating system.
However, although I love working with Linux, I have to agree that each distro doing its own thing with the config files frustrates me as well.
There is the LSB (Linux Standard Base) for this, and even that does not seem to cover enough to prevent this mess.
That different distros have different package management tools can be explained by the package structure being totally different.
For example Gentoo: this is a source-based distro, so rpm and the like just wouldn't work for it.
- Instead of fixing problems, they are often worked around with crappy perl or bash scripts
Not on my watch.
I want a fix, not a workaround.
Maybe at first I implement a workaround to make sure the system isn't down.
Then I create a ticket and follow up later for the fix.
- Too many people, usually developers who have no understanding of the OS, having root and/or sudo access
- Crappy password policies, if any at all.
I can't recall a lot of people using them. But as I pointed out earlier, Linux machines are mostly not used by end users.
And I only use public/private keypairs, as do a lot of other admins.
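For what it's worth, when you do want a password policy on a PAM-based distro, it is usually only a few lines in /etc/security/pwquality.conf. The values below are illustrative assumptions, not a recommendation:

```
# /etc/security/pwquality.conf (excerpt; example values only)
minlen = 12      # minimum password length
dcredit = -1     # require at least one digit
ucredit = -1     # require at least one uppercase letter
lcredit = -1     # require at least one lowercase letter
```

So the lack of policies is rarely a tooling problem; the mechanism is there, it just goes unused on boxes that end users never log in to.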
- Shoddy Documentation, usually slopped together in some crappy wiki. I love wikis. Lots of very professionally done wikis exist online. But I'll be damned if wikis internal to a company are ever top notch. People just slop pages together, organization via namespaces rarely takes place, and it's always a disorganized mess. What's worse is that wikis usually exist only to supplement what isn't in an endless array of spreadsheets that are either emailed around to everyone or shared off some shitty SharePoint site.
I think this is just a general problem in the IT sector: missing or crappy documentation.
- Little used or no change management/ticketing systems whatsoever, and resistance to using them.
I believe every really technical person loves fixing issues, configuring systems, and so on.
Administration is "overhead" and disliked by most admins.
This is not OS dependent.
- Skimping out on support, even for old hardware that you know is going to fail eventually
Mostly due to budget.
A lot of companies don't spend on proactive work; these companies don't understand its value.
So you don't have the time, and eventually a lot of the things he mentions happen, though not all at once.
- Shoddy backups, and crappy rsync scripts to work around the shoddy backups.
rsync is indeed pretty common for backups.
However, I always use a backup product with central management (Bacula, for example).
- More often than not, cabling is more similar to the image above, than not.
I think this is a good reason to fire someone.
I have seen this in far larger environments and this is a recipe for disaster.
In these cases they have network admins who should do this (or datacenter engineers).
So I don't believe this is typical of one OS.
Maybe in the environments where there is less budget (and for that reason they chose Linux), some of these things are missing.
Even if there is open-source software available, it might take time to implement and fine-tune.
This is a big cost as well.
Or there might be other priorities (probably driven by budget and/or the management).
All in all, he has a point.