karaf-user mailing list archives

From Nick Baker <nba...@pentaho.com>
Subject Re: Levels of Containerization - focus on Docker and Karaf
Date Thu, 12 Jan 2017 16:18:35 GMT
I do agree that an "opinionated" or "prescriptive" stack would help. It shouldn't prohibit
the use of any Karaf feature, of course.


New users gravitate to full-stack solutions. Agnostic platforms with lots of options and no
predefined stack, while obviously having many merits and longer legs (don't be Wicket), just
haven't been winning out in adoption. This applies across the spectrum of computing.

________________________________
From: Brad Johnson <bradjohn@redhat.com>
Sent: Thursday, January 12, 2017 10:46:55 AM
To: user@karaf.apache.org
Subject: RE: Levels of Containerization - focus on Docker and Karaf

Guillaume,

I’d mentioned that in an earlier post as my preferred way to do microservices, and perhaps
a good way of doing a Karaf Boot. I’ve worked with the Karaf 4 profiles and they are great.
I’ve also used your CDI OSGi service. If we could combine the Karaf 4 profiles, the CDI
implementation of OSGi services, and the Camel Java DSL as a standard stack, it would permit
focused development and standardized bundle configuration.
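
To make that stack concrete, a route in it might look roughly like the sketch below: a Camel
Java DSL route declared as a CDI bean via camel-cdi. The context name, endpoint URIs and class
name are just placeholders, and it assumes camel-cdi plus the relevant Camel components (CXF,
ActiveMQ) are installed in the container.

import javax.enterprise.context.ApplicationScoped;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.cdi.ContextName;

// Camel route written in the Java DSL and discovered as a CDI bean.
@ApplicationScoped
@ContextName("orders")                   // hypothetical Camel context name
public class OrderRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Hypothetical endpoints: a CXF service in, a JMS queue out.
        from("cxf:bean:orderEndpoint")
            .log("received ${header.operationName}")
            .to("activemq:queue:orders");
    }
}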

When I created a zip with Karaf, CXF and Camel, the footprint was 30 MB.

I’m not sure that having Karaf Boot support DS, Blueprint, CDI, etc. is the healthiest
move for encouraging adoption.  We need less fragmentation in the OSGi world, not more.  Obviously,
even if Karaf Boot adopts one as the recommended standard, it doesn’t mean that the others
can’t be used.  When reading through the Camel documentation online, for example, the confusion
such fragmentation brings is obvious: one becomes adept at converting Java DSL to Blueprint,
or Blueprint to Java DSL, in one’s mind.

The static profiles work great and will let us create a number of standardized appliances
for a wide variety of topology concerns, not just for microservices.  A “switchboard”
appliance, for example, might be used for orchestrating microservices and managing the APIs.
 A “gateway” appliance might have standard JAAS, web service configuration and a routing
mechanism for calling microservices.  An “AB” appliance could be used for 80/20 testing.
 And so on.  Take the idea of enterprise integration patterns and raise it a level: enterprise
integration patterns built from Karaf appliances.

Many appliances might be “sealed”.  An appliance for AB testing, for example, would have
configuration for two addresses in the configuration file and a percentage of traffic going
to each (see the sketch below).  No need to actually program or re-program the internals any
more than we’d usually re-program a Camel component.  But the source would be there if one
wanted to create a new component or modify how an existing one functioned.
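
As a rough sketch of what such an “AB” appliance’s internals could look like, the route below
uses Camel’s weighted load balancer with the two targets and the split ratio resolved from the
appliance’s .cfg file via the properties component. The property keys, port and context name
are made up for the illustration.

import javax.enterprise.context.ApplicationScoped;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.cdi.ContextName;

@ApplicationScoped
@ContextName("ab-appliance")             // hypothetical context name
public class AbSplitRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // The two target addresses and the split ratio come from the
        // appliance's configuration file; these keys are hypothetical.
        String serviceA = getContext().resolvePropertyPlaceholder("{{ab.serviceA}}");
        String serviceB = getContext().resolvePropertyPlaceholder("{{ab.serviceB}}");
        String ratio    = getContext().resolvePropertyPlaceholder("{{ab.ratio}}"); // e.g. "80,20"

        from("jetty:http://0.0.0.0:8282/ab")
            .loadBalance().weighted(false, ratio)    // weighted random split
                .to(serviceA)
                .to(serviceB)
            .end();
    }
}

Changing the split or the target addresses would then be an edit to the .cfg file, with no
rebuild of the bundle.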

I’d vote for using CDI along with the Camel Java DSL for a couple of reasons.  The first
would be the standardization and portability of both code and skills.  Using CDI would mean
that any Glassfish, JBoss, etc. developer would feel comfortable with the code, and using the
Camel Java DSL would help for the same reason.  It would also give programmers a sense that if
they give Karaf Boot with the static profiles a shot, they aren’t locked in but can easily move
to a different stack if necessary.  In a sense this is the same reason Google chose the Java
language to run on the DVM: it tapped into a large existing skill base instead of trying to
get the world to adopt and learn a new language.

CDI with OSGi extensions also allows developers to use one paradigm for everything from lashing
up internal dependency injection to working with OSGi services.  I believe when you put that
CDI extension out there you used blueprint-style proxies under the covers; as a developer using
the CDI OSGi extension it was transparent to me.  If you later decided to rework that as a DS
service, it would remain transparent, which is very much in the whole spirit of OSGi and its
mechanisms for allowing refactoring and even rewriting without breaking the world.  It also
makes unit testing a snap.  Any of us who have wrestled with Camel Blueprint Test Support can
appreciate that.
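
On the testing point, a CDI-based Camel test might look something like the sketch below, using
camel-test-cdi with JUnit 4. It assumes a route from "direct:start" to "mock:orders" is deployed
alongside the test; the class and endpoint names are placeholders.

import org.apache.camel.EndpointInject;
import org.apache.camel.Produce;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.test.cdi.CamelCdiRunner;
import org.junit.Test;
import org.junit.runner.RunWith;

// Runs the test inside a CDI container with the Camel CDI extension,
// so route builders declared as CDI beans are picked up automatically.
@RunWith(CamelCdiRunner.class)
public class OrderRouteTest {

    @Produce(uri = "direct:start")        // hypothetical test entry point
    ProducerTemplate producer;

    @EndpointInject(uri = "mock:orders")  // hypothetical mock endpoint
    MockEndpoint orders;

    @Test
    public void routesMessage() throws Exception {
        orders.expectedMessageCount(1);
        producer.sendBody("test order");
        orders.assertIsSatisfied();
    }
}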

This would also permit standardization of documentation, of Karaf Boot appliance project
structures, and of Maven plug-in use.  A bit of convention over configuration.  Projects would
have a standard configuration.cfg file that gets deployed via features to the PID’s .cfg, and a
standard features file in the filtered folder.  Those pieces already exist, of course, but they
aren’t as standardized as they could be.

Personally, I think this sort of goal, with CDI, Karaf 4 and its profiles, and the Camel Java
DSL, should be accelerated since Spring Boot is already out there.  Waiting another couple
of years to release this as a standard might be too late.

The pieces are already there, so it isn’t like we’d have to start from scratch.  This would
also play well with larger container concerns like Docker and Kubernetes.

Brad

From: Guillaume Nodet [mailto:gnodet@apache.org]
Sent: Thursday, January 12, 2017 4:55 AM
To: user <user@karaf.apache.org>
Subject: Re: Levels of Containerization - focus on Docker and Karaf

Fwiw, starting with Karaf 4.x, you can build custom distributions which are mostly static
and map more closely to micro-services / docker images.  The "static" distributions are called
that because they remove nearly all of the OSGi dynamism: no feature service, no deploy folder,
read-only Config Admin, and all bundles installed at startup time from etc/startup.properties.
This can easily be done by using the karaf-maven-plugin, configuring startupFeatures and
referencing the static kar, as shown in:
  https://github.com/apache/karaf/blob/master/demos/profiles/static/pom.xml


2017-01-11 21:07 GMT+01:00 CodeCola <prasenjit@rogers.com>:
Not a question but a request for comments. With a focus on Java.

Container technology has traditionally been messy with dependencies, with no easy, failsafe
way to package things until Docker came along and packed ALL dependencies (including the JVM)
together in one ready-to-ship image that was faster, more comfortable, and easier to understand
than other container and code-shipping methods out there. The spectrum from (classical) Java EE
containers (e.g. Tomcat, Jetty) --> containerized Java application servers (Karaf, Wildfly,
etc.) --> application delivery containers (Docker) --> virtualization (VMware, Hyper-V) offers
different levels of isolation with different goals (abstraction, isolation and delivery).

What are the choices, how should they play together, and should they be used in conjunction
with each other, given that they offer different kinds of containerization?

<http://karaf.922171.n3.nabble.com/file/n4049162/Levels_of_Containerization.png>



--
View this message in context: http://karaf.922171.n3.nabble.com/Levels-of-Containerization-focus-on-Docker-and-Karaf-tp4049162.html
Sent from the Karaf - User mailing list archive at Nabble.com.



--
------------------------
Guillaume Nodet
------------------------
Red Hat, Open Source Integration

Email: gnodet@redhat.com
Web: http://fusesource.com
Blog: http://gnodet.blogspot.com/

