LinuxQuestions.org
Linux - Containers: This forum is for the discussion of all topics relating to Linux containers. Docker, LXC, LXD, runC, containerd, CoreOS, Kubernetes, Mesos, rkt, and all other Linux container platforms are welcome.

Old 10-31-2017, 01:25 PM   #1
saldon
LQ Newbie
 
Registered: Jun 2008
Location: USA
Distribution: Ubuntu, OpenSUSE, RedHat
Posts: 29

Rep: Reputation: 1
Containerizing Network Services


Has anyone tried placing some of your key network services into a container? e.g. an LDAP server, DNS server, DHCP server, mail server, or wiki server. It seems like using Docker containers for these services could be beneficial. Thoughts?
 
Old 11-01-2017, 02:31 PM   #2
wpeckham
LQ Guru
 
Registered: Apr 2010
Location: Continental USA
Distribution: Debian, Ubuntu, RedHat, DSL, Puppy, CentOS, Knoppix, Mint-DE, Sparky, VSIDO, tinycore, Q4OS,Manjaro
Posts: 5,512

Rep: Reputation: 2657
Quote:
Originally Posted by saldon View Post
Has anyone tried placing some of your key network services into a container? e.g. an LDAP server, DNS server, DHCP server, mail server, or wiki server. It seems like using Docker containers for these services could be beneficial. Thoughts?
I have (BIND, FTP/SFTP, and mail, each in its own container or two), but using OpenVZ containers. Packing that many services securely onto one iron box (or two, for failover) is nowhere near as efficient with any other technology.
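For anyone who has not used OpenVZ, creating such a container looks roughly like this (the container ID, template name, and addresses below are only placeholders):

Code:
# create a container from an OS template and give it an address
vzctl create 101 --ostemplate centos-7-x86_64
vzctl set 101 --hostname ns1.example.com --ipadd 192.168.1.53 --nameserver 192.168.1.1 --save
vzctl start 101

# then install and configure BIND inside it as on any other box
vzctl exec 101 yum -y install bind bind-utils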
 
1 member found this post helpful.
Old 11-03-2017, 09:22 AM   #3
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,610
Blog Entries: 4

Rep: Reputation: 3905
In my view, some of these services, such as DNS or DHCP, are "services of the host," and therefore usually exist outside of the container structure.

Server programs, on the other hand, might reside in a container simply as a means of tightly controlling what they can see and can access.

But ... it has to make sense. Containers aren't magic.
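As a rough illustration of that kind of tight control (the image name and paths here are only placeholders), a containerized wiki might be started along these lines:

Code:
# the service sees only its own filesystem, one published port,
# and a read-only copy of its configuration
docker run -d --name wiki \
  --read-only --tmpfs /tmp \
  --cap-drop ALL \
  -p 8080:8080 \
  -v /srv/wiki/config:/etc/wiki:ro \
  some-wiki-image:latest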
 
Old 11-04-2017, 05:24 AM   #4
wpeckham
LQ Guru
 
Registered: Apr 2010
Location: Continental USA
Distribution: Debian, Ubuntu, RedHat, DSL, Puppy, CentOS, Knoppix, Mint-DE, Sparky, VSIDO, tinycore, Q4OS,Manjaro
Posts: 5,512

Rep: Reputation: 2657
Quote:
Originally Posted by sundialsvcs View Post
In my view, some of these services, such as DNS or DHCP, are "services of the host," and therefore usually exist outside of the container structure.

Server programs, on the other hand, might reside in a container simply as a means of tightly controlling what they can see and can access.

But ... it has to make sense. Containers aren't magic.
Actually, while the software to PROVIDE them runs on a host, these are NETWORK services. The network does not really care where they run, only that they work.

There are two ways the software can run that involve containers:
1. You can run the software in an LXC style process container for security and isolation
2. You can use a server container to run and isolate the software as if you were using full virtualization.

Each has certain advantages, but I prefer #2 for its disaster recovery (DR) and high availability (HA) benefits.
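To make the two approaches concrete (the images and names below are only examples; Docker and LXD are simply the tools most people here have at hand):

Code:
# 1. application (process) container: one service and nothing else
docker run -d --name dns -p 53:53/udp -p 53:53/tcp some-bind9-image

# 2. system container: a full userland you manage like a small server
lxc launch ubuntu:16.04 dns01
lxc exec dns01 -- apt-get install -y bind9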
 
Old 11-06-2017, 07:46 AM   #5
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,610
Blog Entries: 4

Rep: Reputation: 3905
I like to say that "a container is a very-sophisticated pair of Rose-Colored Glasses."

The software "in" the container is actually using a portion of the resources of the host, and is being run directly by that host's operating system. But it cannot see the host clearly. Instead, it sees what it wants to see – and what we want it to see.

Services like DNS and DHCP are often run outside of containers for the same reason that "these services, on your home network, are provided by something 'outside of' your computer(s)." They probably need to see the real world as it actually is. "Putting blinkers on 'em" probably wouldn't make much sense.
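If you did insist on containerizing something like DHCP, you would generally have to hand the real network right back to it, which takes most of the blinkers off anyway. A rough sketch (the image name and paths are only placeholders):

Code:
# DHCP depends on broadcasts and raw sockets on the real segment,
# so the container borrows the host's network stack outright
docker run -d --name dhcp \
  --network host \
  --cap-add NET_ADMIN \
  -v /srv/dhcp/dhcpd.conf:/etc/dhcp/dhcpd.conf:ro \
  some-dhcpd-image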
 
Old 11-06-2017, 06:07 PM   #6
saldon
LQ Newbie
 
Registered: Jun 2008
Location: USA
Distribution: Ubuntu, OpenSUSE, RedHat
Posts: 29

Original Poster
Rep: Reputation: 1
All great points. I'm thinking of running a VM to host several Docker containers. With this solution I'm leaving all the fault tolerance and backups to the hypervisor and using the containers to create, ideally, more isolated and secure network services. This could also make deploying upgrades to these services faster, with an easy roll-back option. Thoughts?
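Concretely, I'm picturing upgrades as something like this (the container and image names and tags below are placeholders I made up):

Code:
# upgrade: pull the new tag and replace the running container
docker pull mywiki:2.1
docker stop wiki && docker rm wiki
docker run -d --name wiki -p 8080:8080 mywiki:2.1

# roll back: start the previous tag again
docker stop wiki && docker rm wiki
docker run -d --name wiki -p 8080:8080 mywiki:2.0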
 
Old 11-07-2017, 05:08 AM   #7
wpeckham
LQ Guru
 
Registered: Apr 2010
Location: Continental USA
Distribution: Debian, Ubuntu, RedHat, DSL, Puppy, CentOS, Knoppix, Mint-DE, Sparky, VSIDO, tinycore, Q4OS,Manjaro
Posts: 5,512

Rep: Reputation: 2657
Quote:
Originally Posted by saldon View Post
All great points. I'm thinking of running a VM to host several Docker containers. With this solution I'm leaving all the fault tolerance and backups to the hypervisor and using the containers to create, ideally, more isolated and secure network services. This could also make deploying upgrades to these services faster, with an easy roll-back option. Thoughts?
Not a new plan; good shops have been doing this for years. The biggest reason is not speed of deployment for services like these, but rather the speed of failover in HA solutions, the backup and recovery (DR) options, and maximizing the use of the host's resources to maximize ROI (return on investment).
 
1 member found this post helpful.
Old 11-08-2017, 08:28 AM   #8
saldon
LQ Newbie
 
Registered: Jun 2008
Location: USA
Distribution: Ubuntu, OpenSUSE, RedHat
Posts: 29

Original Poster
Rep: Reputation: 1
Quote:
Originally Posted by wpeckham View Post
Not a new plan; good shops have been doing this for years. The biggest reason is not speed of deployment for services like these, but rather the speed of failover in HA solutions, the backup and recovery (DR) options, and maximizing the use of the host's resources to maximize ROI (return on investment).
How is container failover better than VM failover? I've always run my hypervisors in redundant clusters, and the failover has always been seamless. DR with VMs is pretty easy too. I'm trying to understand what containers provide that is better than a VM, and whether containers can offer better security for my network services.
 
Old 11-08-2017, 08:41 PM   #9
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,610
Blog Entries: 4

Rep: Reputation: 3905
You should always bear in mind that "containers," vs. "virtual-machine monitors," are entirely different technologies, each with its own advantages and disadvantages – and fundamental characteristics.

With containers, all of the processes actually are "running on the same Linux host." They just don't know it. With virtualization, they're running in an environment which literally undertakes to create an entire machine. (Virtualization relies very heavily on CPU-architecture features provided by modern chips.)

The advantage of containers is that they provide "isolation" – of a certain sort – at much less cost. Processes perceive only the files that we allow them to see, and a user-id/group-id/permissions structure that, to them, appears to be "real." The processes which run in a container are usually ordinary user-land processes ... even if they think that they are running as root.

But – and this is the key(!) point – we're actually accomplishing this feat by means of, shall we say, "a chroot-jail, on steroids." Everything that the containerized process perceives is actually a very-carefully constructed illusion. Almost nothing that "the containerized application thinks is true" really is true. We are performing the entire trick within the auspices of an operating system that is running directly on the (possibly virtual) "real hardware." We're not actually emulating the whole environment: instead, we're very tightly controlling what the process (thinks that it ...) sees, and of course, what it can do. (And we're exploiting robust, kernel-provided features – namespaces, cgroups, and the like – to help us do so.)
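You can watch one small piece of that illusion with nothing more than util-linux, assuming your kernel allows unprivileged user namespaces (exact output will vary by system):

Code:
id -u                                   # an ordinary user, e.g. 1000
unshare --user --map-root-user id -u    # prints 0: "root", but only inside the new user namespace
unshare --user --map-root-user cat /etc/shadow   # still refused: the kernel knows who you really are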

Virtual-machine technology, on the other hand, relies heavily on fairly-exotic hardware support by the CPU, which puts itself into a specially-constructed operating mode. Therefore, it is truly capable of running any operating system. Containers, on the other hand, are purely a software-constructed environment that is peculiar to Linux, and cannot support an environment other than their own. (Other operating systems, such as Windows, today provide similar contrivances, each one peculiar to itself.)

"With containers, we really are 'pulling the wool down over your eyes!'" "But it works!"

Last edited by sundialsvcs; 11-08-2017 at 08:52 PM.
 
Old 11-09-2017, 04:56 AM   #10
wpeckham
LQ Guru
 
Registered: Apr 2010
Location: Continental USA
Distribution: Debian, Ubuntu, RedHat, DSL, Puppy, CentOS, Knoppix, Mint-DE, Sparky, VSIDO, tinycore, Q4OS,Manjaro
Posts: 5,512

Rep: Reputation: 2657
Quote:
Originally Posted by sundialsvcs View Post
... Containers, on the other hand, are purely a software-constructed environment that is peculiar to Linux, and cannot support an environment other than their own. (Other operating systems, such as Windows, today provide similar contrivances, each one peculiar to itself.)
I cannot totally agree, but that is a pretty good high-level description. Actually, containerization is a specific and limited form of virtualization, but run under a kernel and therefore limited to those things that kernel can do. Full virtualization runs additional kernels as sub-processes of the hypervisor (Node 0) kernel. There are many similarities. In full virtualization the host is telling the guest a few more lies, and the process separation is even greater because of that guest kernel. One high-level difference is that with containers, since they share a kernel, you can achieve much greater density; in other words, the same hardware can run many more containers than full virtual guests.
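The shared kernel is easy to verify for yourself (any small image will do; alpine and centos:7 are just convenient examples):

Code:
uname -r                            # the kernel version on the host
docker run --rm alpine uname -r     # identical: the container has no kernel of its own
docker run --rm centos:7 uname -r   # even a "different distro" image reports the host's kernel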

The other issue I have is with the claim that they are specific to Linux: they are not. Windows can be tweaked to perform the same kind of containerization, and the people who originated Virtuozzo did exactly that. Microsoft changed its licensing to remove any financial advantage to doing it in Windows around 2007, I believe. The Parallels products, I believe, once used that technology.
 
Old 11-09-2017, 06:40 AM   #11
jlliagre
Moderator
 
Registered: Feb 2004
Location: Outside Paris
Distribution: Solaris 11.4, Oracle Linux, Mint, Debian/WSL
Posts: 9,789

Rep: Reputation: 492
Quote:
Originally Posted by wpeckham View Post
Actually, containerization is a specific and limited form of virtualization, but run under a kernel and therefore limited to those things that kernel can do.
As if the kernel were limited in the kinds of things it can do ...

Quote:
The other issue I have is with the claim that they are specific to Linux: they are not.
Correct; BSD jails, Solaris Zones, AIX WPARs, and HP-UX Resource Partitions are other examples of such technology.
 
Old 11-09-2017, 08:49 AM   #12
saldon
LQ Newbie
 
Registered: Jun 2008
Location: USA
Distribution: Ubuntu, OpenSUSE, RedHat
Posts: 29

Original Poster
Rep: Reputation: 1
I've worked with virtualization, primarily VMware and KVM, for the past ten years, so I understand that technology pretty well. I've just started looking at Docker. Thus far I'm not seeing any real advantage to using Docker to "containerize" basic network services. I do see where it is really great for application developers. Does anyone here see an advantage to, say, running my LDAP or DNS server in a container versus a virtual machine? Are there perhaps some security advantages I'm not seeing?
 
Old 11-21-2017, 03:33 AM   #13
camp0
Member
 
Registered: Dec 2016
Location: Dublin
Distribution: Fedora
Posts: 70

Rep: Reputation: 4
Hi,

If you are running basic services and you feel happier with VMs, go with them. We use both VMs and Docker: in general, if the services are not elastic, we use VMs; on the other hand, for services that need to grow dynamically, such as our HTTP servers, we use Docker for some of them. We did use VMware in the past and it was fine, if you have the money to pay for the licenses.
It is difficult to evaluate which is best. I think they are complementary technologies and, depending on your use case, one is better than the other. In your case, for an LDAP and a DNS server, I would go with the VM, unless your DNS server is a high-performance server.
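For the elastic case, this is roughly what we mean (the service name "web" is just an example):

Code:
# grow or shrink the stateless HTTP front end as load changes
docker-compose scale web=3
docker-compose scale web=1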

Hope it helps
 
Old 11-22-2017, 09:48 AM   #14
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,610
Blog Entries: 4

Rep: Reputation: 3905
Virtualization uses CPU-specific features which create the basis of a robust virtual-machine environment, calling-out to the hypervisor when a VM attempts to do certain things and when its time-slice ends.

Containers use Linux features to create total isolation between what are actually Linux processes running directly on the host, and to furnish them with the illusions that they expect.

The essential advantage of containers is that they are very lightweight, because they do, in fact, run directly in the host environment. (They just can't take off their rose-colored glasses or get out of their comfy padded room.) It is also trivially-easy to make a new one, or to get rid of it, because "a container basically consists of a set of rules which are applied by the supervisor."
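A quick way to feel how lightweight that is (any small image will do):

Code:
# a whole "machine" created, used, and thrown away in about a second
time docker run --rm alpine echo "hello from a brand-new container"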

So, containers are a great invention if your particular use-case is compatible with both their features and their limitations. When I'm running an installation on a commercial cloud-server, I strongly prefer to have everything in my hands, and not to be too dependent upon the influences of a hypervisor that I can't directly control. I'd rather have their VMware running one beefy virtual-machine that hosts the majority of my entire environment, using files and databases that are local to it. (That being said, I might then have another, separate virtual machine running database-replication and such.)

Last edited by sundialsvcs; 11-22-2017 at 09:55 AM.
 
  



Tags
container, docker, services

