karaf-user mailing list archives

From "Brad Johnson" <bradj...@redhat.com>
Subject RE: Levels of Containerization - focus on Docker and Karaf
Date Wed, 11 Jan 2017 23:04:46 GMT
As you say, the levels of concern between Docker and Karaf are different and
not incompatible.  And I suppose you’ll commonly run multiple Karaf
instances in a single Docker instance.  I’m down at the OSGi/Karaf level so
much that I don’t get up into the containerization of things like Docker,
Kubernetes, etc.

 

I’ve heard about some of the flakiness you mention.  I suspect that sort of
thing will get hammered out over time.  Someone will one day realize why a
thread race or deadlock happens when the moon is full and the tide is low.

 

I think if I were to get more involved with Docker my chief concern would be
the direction of the framework and the contentiousness that’s been there for
some time. Rocket split off in 2014, and 2016 ended with a lot of different
companies and contributors talking about forking the codebase to bring more
stability.  Maintaining a stable branch alongside the bleeding edge seems
like such a mundane and common development practice that the controversy
surprises me, from the way Linux comes out with new releases and bug fixes
to the way Red Hat turns ServiceMix into Fuse.  When I work in Fuse I’m
stepping into yesteryear.  That’s the cost of getting hardened, hammered and
tested code.

 

Why isn’t Docker following a similar path?  If someone wants to use the
older, stable release that’s supported by some organization, then go for it.
If you want to use the bleeding edge and contribute, that’s cool too.  It
seems such a common mechanism that I’m sure I must be missing something.
That’s the problem of being only tangentially aware.

 

Brad

 

 

From: Vincent Zurczak [mailto:vincent.zurczak@linagora.com] 
Sent: Wednesday, January 11, 2017 3:55 PM
To: user@karaf.apache.org
Subject: Re: Levels of Containerization - focus on Docker and Karaf

 

Hi,

 

On 11/01/2017 at 22:18, Brad Johnson wrote:

So I don't know enough about Docker internals/configuration requirements but
I have a question. Is installing a JRE in each Docker container the common
practice as opposed to some mechanism for sharing it? The JRE alone is
180MB. 


Just get or build a Docker image with the JRE inside.
This way, the JRE will only use 180MB on your disk once (plus the OS size),
no matter how many containers you start from it.
You then instantiate this image to create one or several containers. One
definition, several instances, just like classes: one definition in jars,
but several instances in memory.
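For instance (a minimal sketch; the base image is the public openjdk:8-jre
and my-app.jar is a hypothetical application):

    # Dockerfile: one image definition with the JRE baked in
    FROM openjdk:8-jre
    COPY my-app.jar /opt/app/my-app.jar
    CMD ["java", "-jar", "/opt/app/my-app.jar"]

    # Build once, then run several containers from the same image;
    # the image layers (JRE included) are stored only once on disk.
    docker build -t my-app .
    docker run -d --name app1 my-app
    docker run -d --name app2 my-app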



On 11/01/2017 at 21:07, CodeCola wrote:

Not a question but a request for comments, with a focus on Java.
 
Container technology has traditionally been messy with dependencies, with no
easy, failsafe way to package everything until Docker came along to really
pack ALL dependencies (including the JVM) together in one ready-to-ship
image that is faster, more comfortable, and easier to understand than other
container and code-shipping methods out there. The spectrum from (classical)
Java EE containers (e.g. Tomcat, Jetty) --> containerized Java application
servers (Karaf, Wildfly, etc.), application delivery containers (Docker) and
virtualization (VMware, Hyper-V) offers different levels of isolation
with different goals (abstraction, isolation and delivery).
 
What are the choices, how should they play together, and should they be used
in conjunction with each other, given that they offer different kinds of
containerization?


I work on a project that relies on Karaf and that manages cloud
infrastructures (so, VMs) but also Docker containers.
It all depends on what you want. If all your software components are Java
ones, using an OSGi container is enough (my main interest in OSGi is class
loading isolation, even though there is some other pretty good
stuff). If you have heterogeneous components (different languages, different
versions...), Docker containers can help a lot. But they cannot solve all
the problems.
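To illustrate the class loading isolation point (a minimal sketch with
hypothetical bundle and package names): every OSGi bundle declares in its
MANIFEST.MF exactly which packages it exports and imports, with version
ranges, so two bundles can depend on different versions of the same library
without clashing.

    Bundle-ManifestVersion: 2
    Bundle-SymbolicName: com.example.billing
    Bundle-Version: 1.0.0
    Export-Package: com.example.billing.api;version="1.0.0"
    Import-Package: org.osgi.framework;version="[1.8,2)"

Here only com.example.billing.api is visible to other bundles; everything
else inside the bundle stays private to its own class loader.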

First, the host system has limited resources. When you reach a given number
of containers or a given level of resource consumption, new ones will
starve. So, you need new VMs. Second, Docker containers are not totally
isolated. They use the same kernel as the host system, as well as some
shared properties (global namespace). And I have already seen a Docker
container that could not stop (I had to restart the host system... :(). This
is why you also need VM management. This is about architecture and
addressing fail-over, replication, load balancing and so on. Container
solutions like Docker Swarm or Kubernetes generally rely on a cluster of
VMs. And thanks to placement constraints, you create, dispatch and move
Docker containers on this or that node.
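For example, with Docker Swarm (a sketch; the node name and label are
hypothetical):

    # Label a node in the cluster, then pin a service to matching nodes.
    docker node update --label-add zone=front node-1
    docker service create --name web --replicas 3 \
      --constraint 'node.labels.zone==front' nginx

Kubernetes offers the same idea through node selectors and affinity rules.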

Then, I regularly run Docker containers that embed a Karaf server, and the
container itself runs in a VM that was dynamically created in a cloud
infrastructure. They work together, each layer having its own purpose. This
is a matter of design. If you deal with (Java) application development, you
can benefit from Java containers (OSGi, JEE...). If you deal with
architecting a solution made of several applications, you may consider using
VMs (old-school) or Docker containers. When you deal with containers, you
will eventually manage several VMs. And you can also mix containers and VMs.
Indeed, some applications just do not fit well within containers, and those
that manage data, such as databases, are among them.
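As an illustration of the Karaf-in-Docker layer (a minimal sketch; the Karaf
version, ports and paths are just examples):

    FROM openjdk:8-jre
    # ADD auto-extracts a local tar.gz into the image
    ADD apache-karaf-4.0.8.tar.gz /opt/
    # SSH console and RMI ports
    EXPOSE 8101 1099 44444
    # Run Karaf in the foreground so the container stays alive
    CMD ["/opt/apache-karaf-4.0.8/bin/karaf", "server"]

The resulting image can then be instantiated on any VM of the cluster.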

So, it is not that they SHOULD be used together.
They CAN be used together. It is then up to you to determine what you need
and why you make this or that choice. Should I say "as usual"? ;)

Regards,

-- 
Vincent Zurczak
Linagora: www.linagora.com

