karaf-dev mailing list archives

From Charles Moulliard <ch0...@gmail.com>
Subject Re: [PROPOSAL] Karaf Decanter monitoring
Date Wed, 15 Oct 2014 09:27:25 GMT
I don't know if you have planned something like this, but it could be
interesting to have a layer to plug in the backend (elasticsearch, ...) in
case we would like to plug in another NoSQL backend (mongodb, influxdb,
...) later on.
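
As an illustration only, such a backend layer could be a small service
interface that each store implements; the names below are hypothetical and
not something that exists in the prototype:

    import java.util.Map;

    // Hypothetical pluggable storage backend for the collected data
    // (timestamp -> key/value map, as described later in this thread).
    public interface StorageBackend {
        void store(Map<Long, Map<String, Object>> data);
    }

    // An elasticsearch, mongodb or influxdb appender would then simply
    // delegate to whichever StorageBackend service is registered in OSGi.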

On Wed, Oct 15, 2014 at 11:20 AM, Jean-Baptiste Onofré <jb@nanthrax.net>
wrote:

> Hi Charles,
>
> Good idea, yes. I will prepare a README.md for the vote.
>
> Regards
> JB
>
>
> On 10/15/2014 11:18 AM, Charles Moulliard wrote:
>
>> Hi Jean-Baptiste,
>>
>> I like the project name ("decanter" = "décanter" in French, I suspect).
>> Can you (when you have the time, of course) add a README.md or
>> README.adoc file to your project (on GitHub) to explain what it does,
>> etc.?
>>
>> Regards,
>>
>> On Wed, Oct 15, 2014 at 11:12 AM, Jean-Baptiste Onofré <jb@nanthrax.net>
>> wrote:
>>
>>> It's the collected data.
>>>
>>> Basically, it's:
>>>
>>> - the timestamp of the data/metric
>>> - a map of key/values (for instance, JMX attribute name => JMX
>>> attribute value)
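>>>
>>> To make the shape concrete, a single harvest could look like this (a
>>> minimal sketch, not code from the prototype; the class and variable
>>> names are invented):
>>>
>>>     import java.util.HashMap;
>>>     import java.util.Map;
>>>
>>>     public class HarvestExample {
>>>         public static void main(String[] args) {
>>>             // One harvest: collection timestamp -> collected key/values.
>>>             Map<String, Object> metrics = new HashMap<>();
>>>             metrics.put("HeapMemoryUsage.used", 123456789L); // JMX attribute -> value
>>>             Map<Long, Map<String, Object>> harvest = new HashMap<>();
>>>             harvest.put(System.currentTimeMillis(), metrics);
>>>             System.out.println(harvest);
>>>         }
>>>     }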
>>>
>>> Regards
>>> JB
>>>
>>>
>>> On 10/15/2014 11:11 AM, Guillaume Nodet wrote:
>>>
>>>> Great, thx!
>>>>
>>>> First technical question: can you explain what the
>>>> Map<Long, Map<String, Object>> in the API interfaces (Collector,
>>>> Appender, etc.) represents?
>>>>
>>>> Guillaume
>>>>
>>>> 2014-10-15 11:08 GMT+02:00 Jean-Baptiste Onofré <jb@nanthrax.net>:
>>>>
>>>>> Oh, by the way, I forgot the GitHub link:
>>>>>
>>>>> https://github.com/jbonofre/karaf-decanter
>>>>>
>>>>> Sorry about that, guys!
>>>>>
>>>>> Regards
>>>>> JB
>>>>>
>>>>>
>>>>> On 10/14/2014 05:12 PM, Jean-Baptiste Onofré wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> First of all, sorry for this long e-mail ;)
>>>>>>
>>>>>> Some weeks ago, I blogged about the usage of ELK
>>>>>> (Logstash/Elasticsearch/Kibana) with Karaf, Camel, ActiveMQ, etc. to
>>>>>> provide a monitoring dashboard (to know what's happening in Karaf and
>>>>>> be able to store it for a long period):
>>>>>>
>>>>>> http://blog.nanthrax.net/2014/03/apache-karaf-cellar-camel-activemq-monitoring-with-elk-elasticsearch-logstash-and-kibana/
>>>>>>
>>>>>>
>>>>>> While this solution works fine, there are some drawbacks:
>>>>>> - it requires additional middleware on the machines: in addition to
>>>>>> Karaf itself, we have to install Logstash, Elasticsearch nodes, and
>>>>>> the Kibana console
>>>>>> - it's not usable "out of the box": you at least need to configure
>>>>>> Logstash (with the different input/output plugins) and Kibana (to
>>>>>> create the dashboards that you need)
>>>>>> - it doesn't cover all the monitoring needs, especially in terms of
>>>>>> SLA: we want to be able to raise alerts depending on some events (for
>>>>>> instance, when a regex matches in the log messages, when a feature is
>>>>>> uninstalled, when a JMX metric is greater than a given value, etc.)
>>>>>>
>>>>>> Actually, Karaf (and related projects) already provides most (if not
>>>>>> all) of the data required for monitoring. However, it would be very
>>>>>> helpful to have a "glue" that is ready to use and more user friendly,
>>>>>> including storage of the metrics/monitoring data.
>>>>>>
>>>>>> Regarding this, I started a prototype of a monitoring solution for
>>>>>> Karaf and the applications running in Karaf.
>>>>>> The purpose is to be very extensible, flexible, and easy to install
>>>>>> and use.
>>>>>>
>>>>>> In terms of architecture, we have the following components:
>>>>>>
>>>>>> 1/ Collectors & SLA Policies
>>>>>> The collectors are services responsible for harvesting monitoring
>>>>>> data. We have two kinds of collectors:
>>>>>> - the polling collectors are invoked periodically by a scheduler.
>>>>>> - the event-driven collectors react to some events.
>>>>>> Two collectors are already available:
>>>>>> - the JMX collector is a polling collector which harvests all MBean
>>>>>> attributes
>>>>>> - the Log collector is an event-driven collector, implementing a
>>>>>> PaxAppender which reacts when a log message occurs
>>>>>> We can plan the following collectors:
>>>>>> - a Camel Tracer collector would be an event-driven collector, acting
>>>>>> as a Camel interceptor, allowing any Camel Exchange to be traced.
>>>>>>
>>>>>> It's very dynamic (thanks to OSGi services), so it's possible to add
>>>>>> a new custom collector (user/custom implementation).
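>>>>>>
>>>>>> For illustration only, a polling collector could be an OSGi service
>>>>>> with a shape roughly like this (the interface and names below are
>>>>>> assumptions for the sketch, not the prototype's actual API):
>>>>>>
>>>>>>     import java.util.HashMap;
>>>>>>     import java.util.Map;
>>>>>>
>>>>>>     // Hypothetical polling collector contract: return one harvest,
>>>>>>     // keyed by the collection timestamp.
>>>>>>     public interface PollingCollector {
>>>>>>         Map<Long, Map<String, Object>> collect();
>>>>>>     }
>>>>>>
>>>>>>     // A trivial custom collector reporting the JVM free memory.
>>>>>>     public class FreeMemoryCollector implements PollingCollector {
>>>>>>         public Map<Long, Map<String, Object>> collect() {
>>>>>>             Map<String, Object> data = new HashMap<>();
>>>>>>             data.put("jvm.freeMemory", Runtime.getRuntime().freeMemory());
>>>>>>             Map<Long, Map<String, Object>> harvest = new HashMap<>();
>>>>>>             harvest.put(System.currentTimeMillis(), data);
>>>>>>             return harvest;
>>>>>>         }
>>>>>>     }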
>>>>>>
>>>>>> The collectors are also responsible for checking the SLA. As the SLA
>>>>>> policies are tied to the collected data, it makes sense that the
>>>>>> collector validates the SLA and calls/delegates the alert to the SLA
>>>>>> services.
>>>>>>
>>>>>> 2/ Scheduler
>>>>>> The scheduler service is responsible for calling the polling
>>>>>> collectors, gathering the harvested data, and delegating it to the
>>>>>> dispatcher.
>>>>>> We already have a simple scheduler (just a thread), but we can plan a
>>>>>> Quartz scheduler (for advanced cron/trigger configuration), and
>>>>>> another one leveraging the Karaf scheduler.
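>>>>>>
>>>>>> As a sketch of the "just a thread" scheduler (simplified; it reuses
>>>>>> the hypothetical PollingCollector above and a Dispatcher like the one
>>>>>> sketched under 3/ below):
>>>>>>
>>>>>>     import java.util.List;
>>>>>>
>>>>>>     public class SimpleScheduler extends Thread {
>>>>>>         private final List<PollingCollector> collectors;
>>>>>>         private final Dispatcher dispatcher;
>>>>>>         private final long periodMillis;
>>>>>>
>>>>>>         public SimpleScheduler(List<PollingCollector> collectors,
>>>>>>                                Dispatcher dispatcher, long periodMillis) {
>>>>>>             this.collectors = collectors;
>>>>>>             this.dispatcher = dispatcher;
>>>>>>             this.periodMillis = periodMillis;
>>>>>>         }
>>>>>>
>>>>>>         public void run() {
>>>>>>             while (!isInterrupted()) {
>>>>>>                 // Poll every collector and hand the harvest over.
>>>>>>                 for (PollingCollector collector : collectors) {
>>>>>>                     dispatcher.dispatch(collector.collect());
>>>>>>                 }
>>>>>>                 try {
>>>>>>                     Thread.sleep(periodMillis);
>>>>>>                 } catch (InterruptedException e) {
>>>>>>                     return;
>>>>>>                 }
>>>>>>             }
>>>>>>         }
>>>>>>     }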
>>>>>>
>>>>>> 3/ Dispatcher
>>>>>> The dispatcher is called by the scheduler or by the event-driven
>>>>>> collectors to dispatch the collected data to the appenders.
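>>>>>>
>>>>>> A sketch of such a dispatcher (again with invented names), simply
>>>>>> fanning the harvested data out to every appender service it knows
>>>>>> about:
>>>>>>
>>>>>>     import java.util.List;
>>>>>>     import java.util.Map;
>>>>>>
>>>>>>     // Hypothetical appender contract (see 4/ below).
>>>>>>     public interface Appender {
>>>>>>         void append(Map<Long, Map<String, Object>> data);
>>>>>>     }
>>>>>>
>>>>>>     public class Dispatcher {
>>>>>>         private final List<Appender> appenders;
>>>>>>
>>>>>>         public Dispatcher(List<Appender> appenders) {
>>>>>>             this.appenders = appenders;
>>>>>>         }
>>>>>>
>>>>>>         // Forward the collected data to every registered appender.
>>>>>>         public void dispatch(Map<Long, Map<String, Object>> data) {
>>>>>>             for (Appender appender : appenders) {
>>>>>>                 appender.append(data);
>>>>>>             }
>>>>>>         }
>>>>>>     }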
>>>>>>
>>>>>> 4/ Appenders
>>>>>> The appender services are responsible for sending/storing the
>>>>>> collected data to target systems.
>>>>>> For now, we have two appenders:
>>>>>> - a log appender which just logs the collected data (a rough sketch
>>>>>> follows below)
>>>>>> - an elasticsearch appender which sends the collected data to an
>>>>>> elasticsearch instance. For now, it uses an "external" elasticsearch,
>>>>>> but I'm working on an elasticsearch feature allowing elasticsearch to
>>>>>> be embedded in Karaf (it's mostly done).
>>>>>> We can plan the following other appenders:
>>>>>> - redis to send the collected data to the Redis messaging system
>>>>>> - jdbc to store the collected data in a database
>>>>>> - jms to send the collected data to a JMS broker (like ActiveMQ)
>>>>>> - camel to send the collected data to a Camel direct-vm/vm endpoint of
>>>>>> a route (it would create an internal route)
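>>>>>>
>>>>>> A minimal sketch of the log appender, reusing the hypothetical
>>>>>> Appender interface from the dispatcher sketch above (the prototype's
>>>>>> actual code may differ):
>>>>>>
>>>>>>     import java.util.Map;
>>>>>>     import org.slf4j.Logger;
>>>>>>     import org.slf4j.LoggerFactory;
>>>>>>
>>>>>>     public class LogAppender implements Appender {
>>>>>>         private static final Logger LOG =
>>>>>>             LoggerFactory.getLogger(LogAppender.class);
>>>>>>
>>>>>>         // Simply log each timestamped set of key/values.
>>>>>>         public void append(Map<Long, Map<String, Object>> data) {
>>>>>>             for (Map.Entry<Long, Map<String, Object>> entry : data.entrySet()) {
>>>>>>                 LOG.info("Collected at {}: {}", entry.getKey(), entry.getValue());
>>>>>>             }
>>>>>>         }
>>>>>>     }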
>>>>>>
>>>>>> 5/ Console/Kibana
>>>>>> The console is composed of two parts:
>>>>>> - an AngularJS or Bootstrap layer allowing the SLA and global settings
>>>>>> to be configured
>>>>>> - an embedded Kibana instance with a pre-configured dashboard (when
>>>>>> the elasticsearch appender is used). We will have a set of already
>>>>>> created Lucene queries and a kind of "Karaf/Camel/ActiveMQ/CXF"
>>>>>> dashboard template. The Kibana instance will be embedded in Karaf (not
>>>>>> external).
>>>>>>
>>>>>> Of course, we have ready-to-use features, allowing us to very easily
>>>>>> install the modules that we want.
>>>>>>
>>>>>> I named the prototype Karaf Decanter. I don't have a preference about
>>>>>> the name or the location of the code (it could be a Karaf subproject
>>>>>> like Cellar or Cave, or live directly in the Karaf codebase).
>>>>>>
>>>>>> Thoughts ?
>>>>>>
>>>>>> Regards
>>>>>> JB
>>>>>>
>>>>>>
>>>>> --
>>>>> Jean-Baptiste Onofré
>>>>> jbonofre@apache.org
>>>>> http://blog.nanthrax.net
>>>>> Talend - http://www.talend.com
>>>>>
>>>>>
>>>>>
>>> --
>>> Jean-Baptiste Onofré
>>> jbonofre@apache.org
>>> http://blog.nanthrax.net
>>> Talend - http://www.talend.com
>>>
>>>
>>
>>
>>
> --
> Jean-Baptiste Onofré
> jbonofre@apache.org
> http://blog.nanthrax.net
> Talend - http://www.talend.com
>



-- 
Charles Moulliard
Apache Committer / Architect @RedHat
Twitter : @cmoulliard | Blog :  http://cmoulliard.github.io
