camel-dev mailing list archives

From Claus Ibsen <>
Subject Re: Component development : guidance required
Date Wed, 03 Aug 2016 07:53:14 GMT

I guess it always depends on the needs different people have. Some
may have high throughput where losing acks is acceptable and
messages can be replayed. Others want a more traditional
transaction-like scenario where each message is acked individually.

I think the camel-kafka component has some of these abilities and
could be a good candidate to look at.
I do think the Kafka Java client has an auto-commit feature now,
where it periodically commits the acks. In older versions it did
not, and we had some "hacky" code to do this on our own.
But maybe check inside the Kafka Java client how it does it.
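For reference, a minimal sketch of enabling that periodic auto-commit via the standard Kafka consumer configuration properties (the broker address, group id, and 5-second interval below are placeholders, not recommendations):

```java
import java.util.Properties;

// Sketch: consumer properties enabling Kafka's periodic offset auto-commit,
// so acks are committed in the background rather than per message.
public class AutoCommitConfig {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "example-group");           // placeholder group
        // Let the client commit offsets periodically instead of per message.
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "5000"); // placeholder interval
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }
}
```

The trade-off is the one described above: a crash between commits can re-deliver messages acked in memory but not yet committed, which is fine when replays are acceptable.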

For traditional JMS there is batch JMS, which can batch X number
of JMS messages into a single TX and aggregate them together.
But then the idea is that you are okay with Camel aggregating X
messages together and routing them as one.
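The batching idea behind that can be illustrated in plain Java, independent of any JMS API (the batch size and the downstream handler are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the batch-JMS idea: collect X messages, then hand the
// whole list downstream as a single unit (one "TX" / one exchange).
public class MessageBatcher {
    private final int batchSize;
    private final Consumer<List<String>> onBatch; // invoked once per full batch
    private final List<String> buffer = new ArrayList<>();

    public MessageBatcher(int batchSize, Consumer<List<String>> onBatch) {
        this.batchSize = batchSize;
        this.onBatch = onBatch;
    }

    public void onMessage(String msg) {
        buffer.add(msg);
        if (buffer.size() >= batchSize) {
            onBatch.accept(new ArrayList<>(buffer)); // route the batch as one
            buffer.clear();
        }
    }
}
```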

On Wed, Aug 3, 2016 at 12:53 AM, Evgeny M <> wrote:
> Good day.
> Working on a camel component for Google PubSub.
> One of the ways to get high throughput is a batched consumer, where a number
> of messages is received in a single API call.
> The obvious choice is to push them further down the line as individual
> exchanges.
> This, however, poses a certain issue with the acknowledgements - as each
> message needs to be ack'ed back to the server.
> Option No 1 - Ack individually.
> Essentially attach a Synchronisation to each exchange. Easy to implement,
> costly to run - each ack is a separate API call.
> Option No 2 - Ack immediately as a batch.
> Efficient as the whole batch gets acked immediately in a single API call.
> Prone to data loss if something goes wrong downstream and the Error Handler
> does not recover the situation.
> Option No 3 - Do not ack at all.
> Instead implement a special type of the producer - Ack producer - to be
> called explicitly within the route. Group exchanges before the call.
> Explicit. Efficient. Would work with failure scenarios as Google PubSub
> resends the message if the ack has not been received in a predefined time.
> Yet the ideal solution from my perspective would be to implement a
> Synchronisation that acts as a facade for the service in the back end that
> batches ack requests and sends them off in a single API.
> Does it sound OK? Is it aligned with the whole Camel approach?
> If so - is there another component I could borrow the implementation details
> of such a batching Synchronisation from? I was thinking that if we used
> something like a thread-safe non-blocking queue, we could potentially spawn
> an executor pool just for the acks.
> Any guidance is appreciated.
> Cheers.
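The queue-plus-executor idea from the question above could be sketched roughly like this (the AckSender interface and the flush interval are hypothetical stand-ins, not part of any Camel or PubSub API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of a batching ack facade: each exchange's Synchronisation
// enqueues its ack id on completion, and a background task drains
// the queue and sends one batched ack API call.
public class BatchingAckFacade {
    // Hypothetical stand-in for the real PubSub ack API.
    public interface AckSender {
        void ackAll(List<String> ackIds); // one API call for many acks
    }

    private final Queue<String> pending = new ConcurrentLinkedQueue<>();
    private final AckSender sender;
    private final ScheduledExecutorService executor =
        Executors.newSingleThreadScheduledExecutor();

    public BatchingAckFacade(AckSender sender, long flushMillis) {
        this.sender = sender;
        executor.scheduleAtFixedRate(this::flush, flushMillis, flushMillis,
            TimeUnit.MILLISECONDS);
    }

    // Called from a Synchronisation's onComplete for each exchange.
    public void enqueue(String ackId) {
        pending.add(ackId);
    }

    // Drain whatever has accumulated and ack it in a single call.
    public void flush() {
        List<String> batch = new ArrayList<>();
        String id;
        while ((id = pending.poll()) != null) {
            batch.add(id);
        }
        if (!batch.isEmpty()) {
            sender.ackAll(batch);
        }
    }

    public void shutdown() {
        executor.shutdown();
        flush(); // final drain so no acks are left behind
    }
}
```

Because PubSub redelivers messages whose acks never arrive within the deadline, a crash that loses queued ack ids degrades to redelivery rather than data loss, which matches the failure behaviour described for Option 3.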

Claus Ibsen
----------------- @davsclaus
Camel in Action 2:
