
It's an increasingly common practice to run applications in containerized environments, which provide an abstraction over operating system resources.

There is already plenty written about container overhead in certain cases, and not all of it is critical; some benchmarks find no overhead at all, like this article regarding I/O.

In my experience, in most cases machines are installed with a specific OS version and then provisioned using scripts.
Often, machines are also set up from a pre-configured image of a Unix system, where all libraries and dependencies are well known.

With a robust OS configuration in place, modules or apps can already be easily maintained using a vast set of package managers, which load a set of dependencies from configuration files (requirements.txt, package.json, build.manifest, build.sbt).
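For instance, a Python service's dependencies can be pinned in a requirements file and installed directly on the provisioned VM, with no container in between (the package names and versions here are just illustrative):

```
# requirements.txt -- pinned dependencies, installed straight onto the provisioned OS
flask==1.0.2
requests==2.18.4
```

followed by `pip install -r requirements.txt` in the provisioning script.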

Let's say that one can automate the complete build of a system, Terraform or otherwise provision full architectures, store already-built systems as images, and keep applications updated with git hooks, with no need for an intermediate layer to take care of all that.
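A minimal sketch of that last idea, assuming a bare git repository on the server; the deploy path, branch name, and service name are placeholders for this example:

```
#!/bin/sh
# post-receive hook (sketch): on push, check out the new code into the
# app directory and restart the service -- no container layer involved.
GIT_WORK_TREE=/srv/myapp git checkout -f main
systemctl restart myapp.service
```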

I wonder: in which cases do you consider it optimal to use a containerized solution at the application level?

Evhz

2 Answers


Containers are an extra layer of abstraction between the bare metal and the OS that your application sees.

If you already have a layer of abstraction (say, virtual machines or a runtime environment), then it's hard to justify a container unless there is a flaw in your existing abstraction.

In an 'optimal' solution, all your VMs would be perfectly sized for the applications running on them, or they would be sufficiently stable that multiple applications could run on them without conflicts, and developers would understand the infrastructure well enough to write apps that fit whatever they are run on.

In practice sometimes this isn't the case. A container, if it functions correctly, gives you another out.

I would tend to agree with your apparent sentiment, though. Containers aren't mature enough yet, while other abstraction layers are very mature. In an optimal case, it should always be possible to do without containers.

Ewan
  • Agree that a good configuration for an application container can frequently give you extra help (*such as hardware limits per container*). Maybe this *one more* layer of configuration also needs a better fit in my mindset too :) – Evhz May 10 '18 at 21:16

The article you are referring to measures the time to run a script inside a container. I understand this to mean that the Docker container starts, the script executes, and then the container shuts down. The extra time here is most likely simply the extra time it takes to create the container instance; I highly doubt that, once the script started, it took significantly longer to run.

Running a container on top of an OS adds some overhead, for sure. But if you are running a single container inside each VM instance, there's no real reason to use containers; it would completely miss the point. One of the main reasons to use containers is to lower overhead: instead of having, say, a 'real' machine with 12 VMs, each with its own OS, libraries, etc., you can have one VM with 12 container instances on it. That reduces the number of running OS copies on that machine by 11. It's also possible to run Docker on bare metal, eliminating the VM entirely (note: there are security implications to doing this).
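As a sketch of that consolidation, and of the per-container resource limits it enables (the image name `myapp:latest` is a placeholder):

```
# Twelve instances of the same service on one VM, each capped so that
# no single instance can starve the others of memory or CPU.
for i in $(seq 1 12); do
  docker run -d --name "app$i" --memory=2g --cpus=1 myapp:latest
done
```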

In terms of optimal use cases, there are many other benefits to containerization for many different types of applications. It would be easier to name the situations where it's not useful, such as running a single instance of an app, or lots of short-lived applications like the scripts described in the article.

JimmyJames
  • I see your point about running different container instances on one server. But in real life I have never had a machine running several services; instead, many machines running the same copy of a service. It is also good to run many instances of the same app within the hardware resources available on a machine, as that can add value for fault tolerance. Containers can likewise be used to limit the hardware assigned to the services they run. – Evhz May 10 '18 at 21:21
  • @Evhz Consider the situation of a web service running on Java. It's easy to get a VM with 32, 64, ... GB of RAM, but if you have a major GC cycle on a huge heap like that, you could have a pause of a number of seconds. If you instead run a dozen instances in containers, you can use all of that memory without one giant heap to worry about. – JimmyJames May 11 '18 at 14:55
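To make the trade-off in that last comment concrete, here is a hedged sketch; the image name, heap sizes, and jar path are illustrative, not a definitive recipe:

```
# One big JVM: a single 48 GB heap, where a major GC can pause for seconds.
# java -Xmx48g -jar service.jar

# Alternative: a dozen containers, each capped at 4 GB with a ~3 GB heap,
# so any individual GC cycle works over a much smaller heap.
for i in $(seq 1 12); do
  docker run -d --name "svc$i" --memory=4g myservice:latest \
    java -Xmx3g -jar /app/service.jar
done
```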