karaf-dev mailing list archives

From Krzysztof Sobkowiak <krzys.sobkow...@gmail.com>
Subject Re: [PROPOSAL] Karaf Decanter monitoring
Date Tue, 14 Oct 2014 16:06:24 GMT
+1

I think it's a good idea. It's good to have monitoring functionality
for Karaf. I would prefer to make it a separate subproject like
Cellar, to keep the Karaf code base simple and to give it a separate
release cycle (for the same reason we had plans to extract the
enterprise features into a separate subproject). It could be a Karaf
add-on. Karaf Decanter is a good name.

Regards
Krzysztof 

On 14.10.2014 17:12, Jean-Baptiste Onofré wrote:
> Hi all,
>
> First of all, sorry for this long e-mail ;)
>
> Some weeks ago, I blogged about the usage of ELK
> (Logstash/Elasticsearch/Kibana) with Karaf, Camel, ActiveMQ, etc. to
> provide a monitoring dashboard (to know what's happening in Karaf and
> to be able to store it for a long period):
>
> http://blog.nanthrax.net/2014/03/apache-karaf-cellar-camel-activemq-monitoring-with-elk-elasticsearch-logstash-and-kibana/
>
>
> While this solution works fine, it has some drawbacks:
> - it requires additional middleware on the machines: in addition to
> Karaf itself, we have to install Logstash, Elasticsearch nodes, and
> the Kibana console
> - it's not usable "out of the box": you at least need to configure
> Logstash (with the different input/output plugins) and Kibana (to
> create the dashboards that you need)
> - it doesn't cover all the monitoring needs, especially in terms of
> SLA: we want to be able to raise alerts depending on some events
> (for instance, when a regex matches in the log messages, when a
> feature is uninstalled, when a JMX metric is greater than a given
> value, etc.)
>
> Actually, Karaf (and related projects) already provides most (if not
> all) of the data required for monitoring. However, it would be very
> helpful to have some "glue", ready to use and more user friendly,
> including storage of the metrics/monitoring data.
>
> With this in mind, I started a prototype of a monitoring solution for
> Karaf and the applications running in Karaf.
> The purpose is to be very extensible, flexible, and easy to install
> and use.
>
> In terms of architecture, we can find the following components:
>
> 1/ Collectors & SLA Policies
> The collectors are services responsible for harvesting monitoring data.
> We have two kinds of collectors:
> - the polling collectors are invoked periodically by a scheduler.
> - the event-driven collectors react to events.
> Two collectors are already available:
> - the JMX collector is a polling collector which harvests all MBean
> attributes
> - the Log collector is an event-driven collector, implementing a
> PaxAppender which reacts when a log message occurs
> We can plan the following collectors:
> - a Camel Tracer collector would be an event-driven collector, acting
> as a Camel interceptor. It would allow tracing of any Exchange in Camel.
>
> It's very dynamic (thanks to OSGi services), so it's possible to add a
> new custom collector (user/custom implementation).
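>
> To give an idea, here is a minimal sketch of the collector contract and
> the JMX polling collector (the interface and method names are
> assumptions for illustration, not the actual prototype code):
>
>     // Collector.java - hypothetical contract shared by all collectors
>     import java.util.Map;
>
>     public interface Collector {
>         Map<String, Object> collect() throws Exception;
>     }
>
>     // JmxCollector.java - sketch of the JMX polling collector harvesting
>     // all readable MBean attributes from the platform MBean server
>     import java.lang.management.ManagementFactory;
>     import java.util.HashMap;
>     import java.util.Map;
>     import javax.management.MBeanAttributeInfo;
>     import javax.management.MBeanServer;
>     import javax.management.ObjectName;
>
>     public class JmxCollector implements Collector {
>         public Map<String, Object> collect() throws Exception {
>             MBeanServer server = ManagementFactory.getPlatformMBeanServer();
>             Map<String, Object> data = new HashMap<String, Object>();
>             for (ObjectName name : server.queryNames(null, null)) {
>                 for (MBeanAttributeInfo attr : server.getMBeanInfo(name).getAttributes()) {
>                     if (!attr.isReadable()) {
>                         continue;
>                     }
>                     try {
>                         data.put(name + "." + attr.getName(),
>                                 server.getAttribute(name, attr.getName()));
>                     } catch (Exception e) {
>                         // some attributes cannot be read at runtime; skip them
>                     }
>                 }
>             }
>             return data;
>         }
>     }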
>
> The collectors are also responsible for checking the SLA. As the SLA
> policies are tied to the collected data, it makes sense that the
> collector validates the SLA and delegates alerts to the SLA services.
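>
> For instance, a sketch of a simple threshold policy on the collector
> side (SlaService, raiseAlert() and the metric key are assumptions):
>
>     // SlaService.java - hypothetical alerting service the collector delegates to
>     import java.util.Map;
>
>     public interface SlaService {
>         void raiseAlert(String message, Map<String, Object> data);
>     }
>
>     // ThresholdSlaPolicy.java - alert when a harvested metric exceeds a threshold
>     import java.util.Map;
>
>     public class ThresholdSlaPolicy {
>         private final SlaService slaService;
>         private final String metricKey;
>         private final long threshold;
>
>         public ThresholdSlaPolicy(SlaService slaService, String metricKey, long threshold) {
>             this.slaService = slaService;
>             this.metricKey = metricKey;
>             this.threshold = threshold;
>         }
>
>         // called by the collector after each harvest
>         public void check(Map<String, Object> collected) {
>             Object value = collected.get(metricKey);
>             if (value instanceof Number && ((Number) value).longValue() > threshold) {
>                 slaService.raiseAlert(metricKey + " is greater than " + threshold, collected);
>             }
>         }
>     }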
>
> 2/ Scheduler
> The scheduler service is responsible for calling the polling collectors,
> gathering the harvested data, and delegating it to the dispatcher.
> We already have a simple scheduler (just a thread), but we can plan a
> Quartz scheduler (for advanced cron/trigger configuration), and
> another one leveraging the Karaf scheduler.
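>
> The simple scheduler could be as basic as the following sketch (class
> and constructor names are assumptions, reusing the Collector contract
> sketched above; the Dispatcher is sketched in the next point):
>
>     // SimpleScheduler.java - one thread polling the collectors periodically
>     import java.util.HashMap;
>     import java.util.List;
>     import java.util.Map;
>
>     public class SimpleScheduler extends Thread {
>         private final List<Collector> collectors;
>         private final Dispatcher dispatcher;
>         private final long period;
>         private volatile boolean running = true;
>
>         public SimpleScheduler(List<Collector> collectors, Dispatcher dispatcher, long period) {
>             this.collectors = collectors;
>             this.dispatcher = dispatcher;
>             this.period = period;
>         }
>
>         public void run() {
>             while (running) {
>                 Map<String, Object> data = new HashMap<String, Object>();
>                 for (Collector collector : collectors) {
>                     try {
>                         data.putAll(collector.collect());
>                     } catch (Exception e) {
>                         // a failing collector should not stop the polling loop
>                     }
>                 }
>                 dispatcher.dispatch(data);
>                 try {
>                     Thread.sleep(period);
>                 } catch (InterruptedException e) {
>                     return;
>                 }
>             }
>         }
>
>         public void shutdown() {
>             running = false;
>         }
>     }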
>
> 3/ Dispatcher
> The dispatcher is called by the scheduler or the event-driven
> collectors to dispatch the collected data to the appenders.
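>
> A minimal sketch of that fan-out (the Appender contract is sketched in
> the next point):
>
>     // Dispatcher.java - fans the collected data out to every appender
>     import java.util.List;
>     import java.util.Map;
>
>     public class Dispatcher {
>         private final List<Appender> appenders;
>
>         public Dispatcher(List<Appender> appenders) {
>             this.appenders = appenders;
>         }
>
>         public void dispatch(Map<String, Object> data) {
>             for (Appender appender : appenders) {
>                 try {
>                     appender.append(data);
>                 } catch (Exception e) {
>                     // one failing appender should not block the others
>                 }
>             }
>         }
>     }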
>
> 4/ Appenders
> The appender services are responsible for sending/storing the collected
> data to target systems (see the sketch after the list below).
> For now, we have two appenders:
> - a log appender which just logs the collected data
> - an Elasticsearch appender which sends the collected data to an
> Elasticsearch instance. For now, it uses an "external" Elasticsearch,
> but I'm working on an Elasticsearch feature allowing Elasticsearch to
> be embedded in Karaf (it's mostly done).
> We can plan the following other appenders:
> - redis to send the collected data to the Redis messaging system
> - jdbc to store the collected data in a database
> - jms to send the collected data to a JMS broker (like ActiveMQ)
> - camel to send the collected data to a Camel direct-vm/vm endpoint of
> a route (it would create an internal route)
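>
> As a sketch, the appender contract and the log appender could look like
> this (the names are assumptions, not the actual prototype code):
>
>     // Appender.java - hypothetical contract for all appenders
>     import java.util.Map;
>
>     public interface Appender {
>         void append(Map<String, Object> data) throws Exception;
>     }
>
>     // LogAppender.java - the simplest implementation: just log the data
>     import java.util.Map;
>
>     import org.slf4j.Logger;
>     import org.slf4j.LoggerFactory;
>
>     public class LogAppender implements Appender {
>         private static final Logger LOG = LoggerFactory.getLogger(LogAppender.class);
>
>         public void append(Map<String, Object> data) {
>             LOG.info(data.toString());
>         }
>     }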
>
> 5/ Console/Kibana
> The console is composed of two parts:
> - an AngularJS or Bootstrap layer allowing configuration of the SLA and
> global settings
> - an embedded Kibana instance with pre-configured dashboards (when the
> Elasticsearch appender is used). We will have a set of already-created
> Lucene queries and a kind of "Karaf/Camel/ActiveMQ/CXF" dashboard
> template. The Kibana instance will be embedded in Karaf (not external).
>
> Of course, we have ready-to-use features, allowing us to very easily
> install the modules that we want.
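>
> For instance, something like this (the feature names here are
> illustrative, not final):
>
>     karaf@root()> feature:install decanter-collector-jmx
>     karaf@root()> feature:install decanter-appender-elasticsearch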
>
> I named the prototype Karaf Decanter. I don't have a preference about
> the name or the location of the code (it could be a Karaf subproject
> like Cellar or Cave, or directly in the Karaf codebase).
>
> Thoughts?
>
> Regards
> JB


-- 
Krzysztof Sobkowiak

JEE & OSS Architect | Senior Solution Architect @ Capgemini | Committer
@ ASF
Capgemini <http://www.pl.capgemini.com/> | Software Solutions Center
<http://www.pl.capgemini-sdm.com/> | Wroclaw
e-mail: krzys.sobkowiak@gmail.com | Twitter: @KSobkowiak
Calendar: http://goo.gl/yvsebC
