camel-users mailing list archives

From Sashika <sashik...@gmail.com>
Subject Re: Suggestions on how to cluster Camel?
Date Sat, 23 Jul 2016 13:26:56 GMT
We have used Akka with akka-camel. For consumers you can fail over
seamlessly by using untyped consumer actors together with a cluster
singleton. The Akka cluster singleton makes sure there is only one consumer
in the cluster and starts another one when the active node goes down. We
haven't used this with file or SFTP consumers ourselves, but it should work
as you expect.
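
To make that concrete, a minimal sketch of the consumer side could look like
the following (Java, Akka 2.4.x with akka-camel); the endpoint URI, actor
system name and class names are just placeholders, not our actual code:

import akka.actor.ActorSystem;
import akka.actor.PoisonPill;
import akka.actor.Props;
import akka.camel.CamelMessage;
import akka.camel.javaapi.UntypedConsumerActor;
import akka.cluster.singleton.ClusterSingletonManager;
import akka.cluster.singleton.ClusterSingletonManagerSettings;

public class SingletonSftpConsumer {

  // Consumer actor backed by a Camel endpoint (the URI is a placeholder).
  public static class SftpInboxConsumer extends UntypedConsumerActor {
    @Override
    public String getEndpointUri() {
      return "sftp://user@example.com/inbox?delete=true";
    }

    @Override
    public void onReceive(Object message) {
      if (message instanceof CamelMessage) {
        CamelMessage msg = (CamelMessage) message;
        // Your routing/processing logic goes here.
        System.out.println("Received exchange body: " + msg.body());
      } else {
        unhandled(message);
      }
    }
  }

  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("camel-cluster");
    // Wrap the consumer in a cluster singleton so only one node in the
    // cluster polls the endpoint at any time.
    system.actorOf(
        ClusterSingletonManager.props(
            Props.create(SftpInboxConsumer.class),
            PoisonPill.getInstance(),
            ClusterSingletonManagerSettings.create(system)),
        "sftpInboxConsumer");
  }
}

When the node hosting the singleton goes down, the oldest remaining cluster
node starts a fresh instance of the consumer, so the endpoint keeps being
polled.
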
For file producers you can use Camel's load balancing in conjunction with
Akka clustering to detect dead letters and do the required routing
accordingly.
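
The Camel side of that producer failover can be expressed with the failover
load balancer; a rough sketch with placeholder endpoints (the dead-letter
detection via Akka is separate wiring and not shown):

import org.apache.camel.builder.RouteBuilder;

public class OutboundFailoverRoute extends RouteBuilder {
  @Override
  public void configure() {
    // Send each exchange to the primary outbox; after 3 failed attempts,
    // fail over to the backup target (no round robin, error handler not
    // inherited).
    from("direct:publish")
        .loadBalance().failover(3, false, false)
            .to("file://shares/primary/outbox")
            .to("file://shares/backup/outbox")
        .end();
  }
}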

On Jul 23, 2016 01:49, "Vitalii Tymchyshyn" <vit@tym.im> wrote:

We are using Apache ZooKeeper for cluster coordination. While Camel has a
ZooKeeper module, we wrote our own class to overcome the bundled module's
limitations.
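
Not our actual class, but for illustration the same kind of coordination can
be sketched with Apache Curator's leader latch gating route startup; the
connect string, znode path and the route itself below are placeholders:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class LeaderElectedCamel {
  public static void main(String[] args) throws Exception {
    // Join the ZooKeeper ensemble and take part in leader election; only the
    // elected node starts the Camel routes, the others wait in warm standby.
    CuratorFramework client = CuratorFrameworkFactory.newClient(
        "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
    client.start();

    LeaderLatch latch = new LeaderLatch(client, "/camel/file-router/leader");
    latch.start();
    latch.await();  // blocks until this node becomes the leader

    // Standalone Camel (no container); the route below is a placeholder for
    // the real file/SFTP routes.
    Main camel = new Main();
    camel.addRouteBuilder(new RouteBuilder() {
      @Override
      public void configure() {
        from("file://shares/inbox").to("file://shares/outbox");
      }
    });
    camel.run();  // if this JVM dies, the latch is released and another
                  // node becomes leader and starts its own routes
  }
}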

Best regards, Vitalii Tymchyshyn

On Fri, 22 Jul 2016 at 14:39, David Hoffer <dhoffer6@gmail.com> wrote:

> We have a standalone Camel app (runs as daemon with no container) that we
> need to cluster and I'm looking for options on how to do this.
>
> Our Camel app handles file routing.  All inputs are files so exchanges deal
> with byte arrays and the file name.  Destinations are either file folders
> or web-services where we attach the file and call the service to publish.
> Also currently we use JMX to remote manage and monitor.
>
> So how best to cluster this?  Technically what is most important in the
> cluster feature set is fail-over so we can guarantee high availability but
> it would be nice to get load balancing too.
>
> Our app gets its input via local disk folders (which we can convert to
> network shares (e.g. vnx)) or via external SFTP endpoints.  The app has
> about 100 of these folders/sftp endpoints.  So when clustered all the
> routes would be using the network shares instead of local folders.
>
> I'm assuming that file and sftp endpoints should handle this well as they
> already use a file lock to prevent contention. However we would have to
> have a solution for stale file locks for clustered nodes that failed.  How
> would the other nodes know they can delete the locks for failed nodes (but
> only for failed nodes)?
>
> Also since we would now be processing routes concurrently we would have to
> determine if the receiving webservice endpoints can handle concurrent
> connections.  Ideally I'd like to be able to control/tune the concurrent
> nature of each route (across the cluster) so that if needed we could
> limit/stop concurrent processing of a route but still always have fail-over
> cluster node support.
>
> Then there is the JMX issue, right now we have apps to manage and monitor
> route traffic but somehow this would have to be aggregated across all nodes
> in the cluster.
>
> Are there any techniques or frameworks that could help us implement this?
> Any suggestions on approaches that work and what doesn't work?
>
> Thanks,
> -Dave
>
