karaf-dev mailing list archives

From Christian Schneider <ch...@die-schneider.net>
Subject Re: Some ideas for decanter
Date Mon, 09 Mar 2015 11:15:41 GMT
Some answers inline:

On 09.03.2015 11:32, Achim Nierbeck wrote:
> Hi Christian,
> please find my comments inline:
> 2015-03-09 10:45 GMT+01:00 Christian Schneider <chris@die-schneider.net>:
> 1. Structure of the Appender data
> I think the structure of the Appender data is a bit awkward:
>      Map<Long, Map<String, Object>> data
> This means that we cannot transport two events that happen at the same
> timestamp, which in practice could occur. So if we need several events we
> should rather use Collection<Map<String, Object>>
> I am not sure if we really need several events in one call. The Appenders
> could do this internally so I would rather have the data be just a simple
> Map<String, Object>
> which appender did you look at?
> The JMX appender is appropriate, and the timestamp is collected by the
> elasticsearch trigger/appender and transformed to the required timestamp
> field (take a look at the changes I did)
The JMX Appender uses System.currentTimeMillis() as the key for the outer 
map, so a second event created within the same millisecond overwrites the 
first if the system is fast. I think this is broken, and the problem is in 
the Appender API, not your implementation.
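A minimal sketch of the problem (the outer map layout mirrors the signature quoted above; the metric names are purely illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class TimestampKeyCollision {
    public static void main(String[] args) {
        // Outer map keyed by timestamp, as in the current Appender API
        Map<Long, Map<String, Object>> data = new HashMap<>();

        long now = System.currentTimeMillis();

        Map<String, Object> event1 = new HashMap<>();
        event1.put("metric", "heapUsed");

        Map<String, Object> event2 = new HashMap<>();
        event2.put("metric", "threadCount");

        // Two events collected within the same millisecond
        data.put(now, event1);
        data.put(now, event2); // silently overwrites event1

        System.out.println(data.size()); // prints 1 -- one event was lost
    }
}
```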
>> 2. Use of the EventAdmin service
>> If we just need to transport a Map then why not simply use the OSGi
>> EventAdmin. It transports a Map<String, Object> on a topic. The topic is
>> even very similar to logging categories.
>> EventAdmin is standardized so we do not have to invent a new interface. An
>> appender would then simply subscribe to a topic and process the Events.
>> A collector would simply create and send EventAdmin events.
>> The EventAdmin already provides a nice implementation of the Dispatcher.
> I'd rather stick to the JMX appender as the current default one and would
> make sure the others do log accordingly.
There is no JMX appender. What are you talking about?

My proposal is to connect the Collectors and the Appenders using 
EventAdmin instead of a custom interface that looks almost like 
EventAdmin. EventAdmin additionally has a topic name, which would also 
make sense for us.
It has nothing to do with JMX.
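To illustrate the proposal, here is a sketch of both sides using the standard OSGi EventAdmin API. The topic name "decanter/collect/jmx" and all class names are only suggestions, not existing Decanter code:

```java
import java.util.Map;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventAdmin;
import org.osgi.service.event.EventHandler;

// Collector side: publish the collected data as an EventAdmin event.
public class CollectorSketch {
    private final EventAdmin eventAdmin; // injected, e.g. via Declarative Services

    public CollectorSketch(EventAdmin eventAdmin) {
        this.eventAdmin = eventAdmin;
    }

    public void publish(Map<String, Object> data) {
        // postEvent delivers asynchronously; sendEvent would be synchronous
        eventAdmin.postEvent(new Event("decanter/collect/jmx", data));
    }
}

// Appender side: register as an EventHandler service with the property
// event.topics = "decanter/collect/*" and process the incoming events.
class AppenderSketch implements EventHandler {
    @Override
    public void handleEvent(Event event) {
        for (String key : event.getPropertyNames()) {
            Object value = event.getProperty(key);
            // ... hand key/value to the backend (log, elasticsearch, ...)
        }
    }
}
```

The Dispatcher would then come for free from the EventAdmin implementation, as noted above.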
>> 3. Logging of MDC data
>> The current appender.log does not add the MDC map to the log data. I plan
>> to add this to better cover the business logging requirement. One question
>> here is whether we should flatten the MDC values using a prefix on the
>> key or rather put the whole MDC map into the data map under one key "mdc".
>> At least elastic search can process such nested maps.
> make sure the stuff looks similar to the changes I did with the JMX
> appender.
So you mean the change where the attributes are added as a map under one 
value. Yes, that is also how I thought to do it for the MDC.
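To make the two options concrete, here is a small sketch (method names and the "mdc." prefix are illustrative, not existing code) of flattening the MDC with a key prefix versus nesting it under one key:

```java
import java.util.HashMap;
import java.util.Map;

public class MdcMappingSketch {

    // Option A: flatten the MDC entries into the event map with a key prefix
    static Map<String, Object> flatten(Map<String, Object> event, Map<String, String> mdc) {
        Map<String, Object> out = new HashMap<>(event);
        for (Map.Entry<String, String> e : mdc.entrySet()) {
            out.put("mdc." + e.getKey(), e.getValue());
        }
        return out;
    }

    // Option B: nest the whole MDC map under a single "mdc" key
    // (elasticsearch can index such nested objects)
    static Map<String, Object> nest(Map<String, Object> event, Map<String, String> mdc) {
        Map<String, Object> out = new HashMap<>(event);
        out.put("mdc", mdc);
        return out;
    }
}
```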
>> 4. How to send to elastic search.
>> The current appender.elasticsearch uses the elasticsearch API. I propose
>> to use JestClient instead. JestClient uses the HTTP REST interface of
>> elastic search and can also be configured for authentication. It should
>> have a smaller footprint than the full elastic search client.
> it's the one used for the internal one and works appropriately. If the
> elastic search is external, a logging appender with the standard ELK stack
> (elasticsearch, logstash, kibana) should be sufficient.
I am talking about how to send data to elastic search. Of course I could 
use logstash, but why would I need decanter then?
I think we should provide an appender that can be used in production. If 
the elastic search one cannot do any authentication, then I would not 
recommend using it.
Jest at least announces that it can be secured and can talk to 
elasticsearch. I did a small prototype and it looks good.
If we cannot agree on Jest, then maybe we could at least have an 
additional Jest appender, so people can choose.
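As a sketch of what a Jest-based appender could look like (the endpoint, credentials, index name, and the JSON document are placeholders; the calls are from the io.searchbox Jest client API):

```java
import io.searchbox.client.JestClient;
import io.searchbox.client.JestClientFactory;
import io.searchbox.client.config.HttpClientConfig;
import io.searchbox.core.Index;

public class JestAppenderSketch {
    public static void main(String[] args) throws Exception {
        // Configure the HTTP client; Basic Auth credentials can be set here,
        // which is the main advantage over the unauthenticated native client.
        JestClientFactory factory = new JestClientFactory();
        factory.setHttpClientConfig(
            new HttpClientConfig.Builder("http://localhost:9200")
                .defaultCredentials("decanter", "secret") // placeholder credentials
                .multiThreaded(true)
                .build());
        JestClient client = factory.getObject();

        // Index one collected event as a JSON document
        String json = "{\"hostName\":\"karaf1\",\"message\":\"test\"}";
        Index index = new Index.Builder(json)
            .index("decanter")   // placeholder index name
            .type("log")
            .build();
        client.execute(index);
    }
}
```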
>> 5. The decanter feature uses version 1.3 of the features XSD and seems to
>> fail in karaf < 4.
>> So I propose to use the older feature definition to make sure we can also
>> cover at least karaf 3.
> worked like a charm for me on Karaf 4.
I would like it to work in karaf 3 and 4. Of course it works on karaf 4.
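As an illustration (feature name, version, and bundle coordinates are placeholders), a descriptor declaring an older features namespace that Karaf 3.x can still parse might look like:

```xml
<features xmlns="http://karaf.apache.org/xmlns/features/v1.2.0" name="decanter">
  <feature name="decanter-appender-log" version="1.0.0-SNAPSHOT">
    <!-- placeholder bundle coordinates -->
    <bundle>mvn:org.apache.karaf.decanter/org.apache.karaf.decanter.appender.log/1.0.0-SNAPSHOT</bundle>
  </feature>
</features>
```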


Christian Schneider

Open Source Architect
