Old 08-14-2020, 06:20 AM   #1
bheadmaster
LQ Newbie
 
Registered: Aug 2020
Posts: 2

Rep: Reputation: Disabled
Best practices for remote Linux machines


Let's say you're developing software for some special purposes, and you're deploying it on your own machines, at a remote location (perhaps behind a firewall). You're using [GNU-slash-]Linux as your operating system. You have no physical access to the machines, but you do have remote terminal access like SSH.

What are some best practices for managing and maintaining such machines?

In my experience, there are a lot of things that can make it pretty painful:
- upgrades on popular distributions like Ubuntu can sometimes have unpredictable consequences (one partial workaround I've been trying is sketched below)
- not upgrading your system leaves potential security holes and risks dependency issues when you upgrade your own software
- a lot of stuff happens "under the hood" which implicitly requires unrestricted network access (e.g. NTP time synchronization)
- security breaches and/or system issues can eventually require a reinstall, and therefore physical access
- etc. etc.
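
On the upgrade point, here's roughly what I mean by a partial workaround: on apt-based boxes I've been experimenting with holding the packages my software depends on, so a routine upgrade can't swap them out behind my back. This is just a rough sketch in Python (the package names are placeholders, not my real dependencies):

Code:
#!/usr/bin/env python3
"""Rough sketch: hold selected packages on a Debian/Ubuntu host so that a
routine 'apt upgrade' cannot replace them unexpectedly.
The package names below are placeholders."""
import subprocess

# Hypothetical packages whose versions we want frozen between planned upgrades.
PINNED = ["openssl", "libc6"]

def held_packages():
    """Return the set of packages apt currently marks as 'hold'."""
    out = subprocess.run(["apt-mark", "showhold"],
                         capture_output=True, text=True, check=True)
    return set(out.stdout.split())

def pin_packages(names):
    """Mark any not-yet-held packages so upgrades leave them alone."""
    already_held = held_packages()
    missing = [n for n in names if n not in already_held]
    if missing:
        subprocess.run(["apt-mark", "hold", *missing], check=True)

if __name__ == "__main__":
    pin_packages(PINNED)
    print("currently held:", ", ".join(sorted(held_packages())) or "none")

Releasing a hold later is just "apt-mark unhold". It doesn't solve the unpredictability, but at least the pieces I care about don't move without me deciding so.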

I was thinking that maybe a do-it-yourself distribution like Slackware could make the system much more stable, at the expense of not being able to find dime-a-dozen engineers who are comfortable with the system...

So I'm looking for other people's experience with this.
Whatever's on your mind related to the topic is welcome.

Last edited by bheadmaster; 08-14-2020 at 06:21 AM. Reason: Stallman-correctness
 
Old 08-14-2020, 06:29 AM   #2
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,850

Rep: Reputation: 7309
There is no "best" practices, this always depends on the goal and the infrastructure. For example we have a stable development environment, there is no upgrade, no "under the hood" happenings and in general everything [every change] is controlled.
 
Old 08-14-2020, 06:34 AM   #3
bheadmaster
LQ Newbie
 
Registered: Aug 2020
Posts: 2

Original Poster
Rep: Reputation: Disabled
Well, I'd say that "stable development environment" and "every change is controlled" count as best practices?

I know it might sound simple to you, but how do you actually maintain such a controlled environment? How do you make sure that nothing happens on the remote machine that is not under your control? And so on...
 
Old 08-14-2020, 06:39 AM   #4
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,850

Rep: Reputation: 7309
It is not only me but a group who maintains it, and not only one host but several.
But in different situations there can be different requirements and solutions.

Last edited by pan64; 08-14-2020 at 07:38 AM.
 
Old 08-14-2020, 07:17 AM   #5
wpeckham
LQ Guru
 
Registered: Apr 2010
Location: Continental USA
Distribution: Debian, Ubuntu, RedHat, DSL, Puppy, CentOS, Knoppix, Mint-DE, Sparky, VSIDO, tinycore, Q4OS, Manjaro
Posts: 5,631

Rep: Reputation: 2696
I have helped manage such an environment, although with some differences. The OS was a mix of RHEL and CentOS, we had full control of the network, host hardware, and host software, and we had remote access using host control modules that allowed us to reboot, watch the boot, or remote-load an OS onto new iron easily.

We had two levels of software: packages provided in the OS repository, and our own home-grown application/server suites. Our own software was distributed from a unique repository system completely under our control. It was distribution and OS agnostic because we had clients running their own servers on RHEL, AIX, HP-UX, Solaris, and Windows (where we could not dissuade them, yeah it WAS sad). In this way we controlled the dependency lists and prerequisites, and could move forward or roll back under our full control and at will.
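
To give a flavour of how that worked in principle (a hedged sketch only, not our actual tooling; the package names and versions are made up), the repository side kept a version manifest under our control and a host could be checked against it before anything was rolled forward or back:

Code:
#!/usr/bin/env python3
"""Sketch of the idea: compare what is actually installed on a host against a
version manifest we control, and report drift before rolling forward or back.
Manifest contents are made-up examples."""
import shutil
import subprocess

# Versions blessed for this release (hypothetical names and numbers).
MANIFEST = {"ourapp-core": "2.4.1", "ourapp-agent": "2.4.1"}

def installed_version(pkg):
    """Ask the native package manager for the installed version, or None."""
    if shutil.which("dpkg-query"):
        cmd = ["dpkg-query", "-W", "-f=${Version}", pkg]
    elif shutil.which("rpm"):
        cmd = ["rpm", "-q", "--qf", "%{VERSION}-%{RELEASE}", pkg]
    else:
        raise RuntimeError("no supported package manager found")
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout.strip() if result.returncode == 0 else None

def drift(manifest):
    """Return {package: (wanted, found)} for everything out of line."""
    bad = {}
    for pkg, wanted in manifest.items():
        found = installed_version(pkg)
        if found != wanted:
            bad[pkg] = (wanted, found)
    return bad

if __name__ == "__main__":
    for pkg, (wanted, found) in drift(MANIFEST).items():
        print(f"{pkg}: manifest wants {wanted}, host has {found or 'nothing'}")

The point is not the script, it is that drift from the blessed versions gets noticed before the next change, not after.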

As a network/systems admin I am a bit OCD: if I hold the responsibility then give me the damn keys to the kingdom or go home. I do not need to control everything (and do not WANT to) but I need to SEE everything and be able to control everything I need to do the job.

It has been two years since I had to manage more than a couple of machines remotely in anything like that intense or critical a situation, but at that time I would not have considered Ubuntu suitable. My options were stable, long-term distributions well supported for corporate operations, from the SUSE and RHEL families, with excellent support from the hardware vendor.

If you do not and cannot control your infrastructure, then you can never consider your machines secure. For one thing, physical access trumps every software security measure known to man.

For an application development and distribution situation it is hard to beat configuring your own packaging and distribution to ensure that every prerequisite and requirement is met (or will be met during installation/update) BEFORE any software or configuration changes are deployed.
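
As a rough illustration of that "check BEFORE anything changes" rule (a sketch only; the paths and thresholds are made-up examples, not anyone's real requirements), a pre-deployment gate can be as small as this:

Code:
#!/usr/bin/env python3
"""Hedged sketch of a pre-deployment gate: verify prerequisites on the target
and bail out before anything is changed. Paths and thresholds are
illustrative assumptions."""
import os
import shutil
import sys

def enough_disk(path="/opt", min_free_gb=2):
    """Require at least min_free_gb free where the application will land."""
    return shutil.disk_usage(path).free >= min_free_gb * 1024**3

def config_present(path="/etc/ourapp/ourapp.conf"):
    """The deployment assumes a site config already exists (hypothetical path)."""
    return os.path.isfile(path)

def preflight():
    """Run every check and return the names of the ones that failed."""
    checks = {
        "disk space": enough_disk(),
        "site config": config_present(),
    }
    return [name for name, ok in checks.items() if not ok]

if __name__ == "__main__":
    failed = preflight()
    if failed:
        print("refusing to deploy, unmet prerequisites:", ", ".join(failed))
        sys.exit(1)
    print("preflight OK, safe to deploy")

In a proper packaging system those checks live in the package metadata; the script just shows the shape of "verify first, change second".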

One key point: the term "best practices" is a phrase coined by vendors who want to sell you on THEIR solution. Usually to make a $! You do what works, will work for the longest time without breaking, will satisfy the business and security requirements, and can be controlled and supported without pain going forward. If you have a team, pick their brains and get some level of agreement on requirements, standards, and architecture before you configure or code anything you might have to live with for years. Rushing is tempting, but can lead to disaster.

Finally, do not lean too much on the advice of people who do NOT understand every fine detail of your situation. That includes me. We at LQ can give you lots of good advice, advice that might lead you to the WRONG solution because of things we do not and cannot know. We do not have to live with your solution, but YOU do!

Last edited by wpeckham; 08-14-2020 at 07:20 AM.
 
  

