It's a growing practice to run applications in containerized environments, which provide an abstraction over operating system resources.
There is already plenty of reading about container overhead in certain cases; not all of it is critical, and some benchmarks measure no overhead at all, like this article regarding I/O.
In my experience, in most cases machines are installed with a specific OS version and then simply provisioned using scripts.
Often, machines are also set up from a pre-configured image of a Unix system, where all libraries and dependencies are well known.
Given a robust OS configuration, modules or apps can already be easily maintained using the vast set of available package managers, which resolve dependencies from configuration files (requirements.txt, package.json, build.manifest, build.sbt), as sketched below.
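For instance, here is a minimal Python sketch of what I mean by dependencies being "well known": it checks the packages installed on the machine against a pinned requirements.txt. It assumes the usual pip conventions (one `name==version` entry per line); the script itself is just an illustration, not part of any tool.

```python
# Sketch: verify the running environment against a pinned requirements.txt.
# Assumes entries of the form "name==version"; other specifiers are skipped.
from importlib.metadata import version, PackageNotFoundError

def check_requirements(path="requirements.txt"):
    mismatches = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue  # skip comments and unpinned entries
            name, _, wanted = line.partition("==")
            try:
                installed = version(name)
            except PackageNotFoundError:
                mismatches.append((name, wanted, "missing"))
                continue
            if installed != wanted:
                mismatches.append((name, wanted, installed))
    return mismatches

if __name__ == "__main__":
    for name, wanted, got in check_requirements():
        print(f"{name}: want {wanted}, got {got}")
```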
Let's say one can automate the complete build of a system, terraform or otherwise configure full architectures, store already-built systems as images, and keep applications updated with git hooks (a minimal hook sketch follows), with no need for an intermediate layer to take care of all of that.
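To make the git-hooks part concrete, here is a rough sketch of a post-receive hook that deploys straight onto the machine, with no container layer in between. Everything in it is hypothetical: the paths, the branch name, and the assumption that the app runs under a systemd unit called "myapp".

```python
#!/usr/bin/env python3
# Hypothetical post-receive hook living in a bare repo on the target machine.
import subprocess

APP_DIR = "/srv/myapp"        # hypothetical checked-out work tree
GIT_DIR = "/srv/myapp.git"    # hypothetical bare repository

def run(*cmd):
    subprocess.run(cmd, check=True)

# Check out the newly pushed revision into the work tree...
run("git", f"--work-tree={APP_DIR}", f"--git-dir={GIT_DIR}",
    "checkout", "-f", "main")
# ...reinstall the pinned dependencies...
run("pip", "install", "-r", f"{APP_DIR}/requirements.txt")
# ...and restart the service so the new code is picked up.
run("systemctl", "restart", "myapp")
```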
I wonder: in which cases do you consider it optimal to use a containerized solution at the application level?