camel-users mailing list archives

From greenbean <>
Subject Best Practices For Data Propagation Through Pipeline
Date Thu, 23 Apr 2009 13:41:47 GMT

Can anyone provide a "best practice" for data propagation and manipulations
through a pipeline?

Say I have a bunch of beans, each carrying out a specific type of
processing.  I would like the beans to be generic, in that they take in the
data they require and spit out their results.  However, if each bean takes
the message payload as its input and returns its result as the new payload,
the original input data is lost.

In general I want to take input data from a queue and process it using
several beans, some concurrently, some in a pipeline.  The beans in the
pipeline may require the original data, plus the output from one or more
other beans.  What is the best way to create these types of routes without
making the input and output of the beans directly dependent on each other?
I could keep the original data in the payload and have each bean add its
results as properties, but that requires my generic processing beans to
know that they must put their results in a property.
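One way to keep the beans generic is to move the "where does the result go"
decision out of the beans and into the route (or a small driver), so each
bean remains a plain function of the payload.  Below is a minimal plain-Java
sketch of that idea (not the Camel API; the names `run`, `exchange`, and the
bean keys are hypothetical).  It mimics how Camel exchange properties or
headers could carry each bean's result alongside the untouched original
payload:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch only: generic beans are plain Function<String, String>; the
// driver keeps the original payload and files each bean's result under a
// named key, the way a route could file results into exchange properties.
public class PipelineSketch {

    static Map<String, String> run(String original,
                                   Map<String, Function<String, String>> beans) {
        Map<String, String> exchange = new HashMap<>();
        exchange.put("original", original);  // original payload is preserved
        for (Map.Entry<String, Function<String, String>> e : beans.entrySet()) {
            // each bean sees only the data it needs; the driver, not the
            // bean, decides that the result lands in a named property
            exchange.put(e.getKey(), e.getValue().apply(original));
        }
        return exchange;
    }

    public static void main(String[] args) {
        Map<String, Function<String, String>> beans = new LinkedHashMap<>();
        beans.put("upper", String::toUpperCase);
        beans.put("length", s -> Integer.toString(s.length()));

        Map<String, String> out = run("hello", beans);
        // downstream beans can read the original plus any prior result
        System.out.println(out.get("original") + " "
                + out.get("upper") + " " + out.get("length"));
    }
}
```

In a real route the same shape could be achieved by invoking the bean in a
`setProperty`/`setHeader` expression (or via a multicast with an aggregation
strategy), so the beans themselves never need to know about properties.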
