flume-user mailing list archives

From Wolfgang Hoschek <whosc...@cloudera.com>
Subject Re: Flume workflow design
Date Thu, 18 Jul 2013 22:51:55 GMT
Take a look at these options:

- HBase Sinks (send data into HBase):


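A minimal sketch of an agent wired to the HBase sink might look like the following; the agent name, channel, table, and column family are placeholder values, not from this thread:

```properties
# Hypothetical agent "agent1" with a memory channel feeding the HBase sink.
agent1.channels = ch1
agent1.sinks = hbaseSink

agent1.channels.ch1.type = memory

agent1.sinks.hbaseSink.type = hbase
agent1.sinks.hbaseSink.channel = ch1
# Placeholder table/column family -- adjust to your schema.
agent1.sinks.hbaseSink.table = events
agent1.sinks.hbaseSink.columnFamily = cf1
agent1.sinks.hbaseSink.serializer = org.apache.flume.sink.hbase.SimpleHbaseEventSerializer
```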
- Apache Flume Morphline Solr Sink (for heavy duty ETL processing and ingestion into Solr):


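For the Solr sink, a sketch along these lines (file paths are assumptions) shows the essential wiring; the actual ETL logic lives in the morphline config file:

```properties
# Hypothetical MorphlineSolrSink attached to the same agent/channel as above.
agent1.sinks = solrSink
agent1.sinks.solrSink.type = org.apache.flume.sink.solr.morphline.MorphlineSolrSink
agent1.sinks.solrSink.channel = ch1
# Placeholder path to the morphline definition that parses events
# and maps them to Solr fields.
agent1.sinks.solrSink.morphlineFile = /etc/flume-ng/conf/morphline.conf
```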
- Apache Flume MorphlineInterceptor (for light-weight event annotations and routing): 


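The interceptor is configured on the source rather than a sink; a sketch, again with placeholder names and paths:

```properties
# Hypothetical Avro source with a MorphlineInterceptor for light-weight
# annotation/routing before events reach the channel(s).
agent1.sources = avroSrc
agent1.sources.avroSrc.interceptors = morph
agent1.sources.avroSrc.interceptors.morph.type = org.apache.flume.sink.solr.morphline.MorphlineInterceptor$Builder
agent1.sources.avroSrc.interceptors.morph.morphlineFile = /etc/flume-ng/conf/morphline.conf
```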
- For MapReduce jobs it is typically more straightforward and efficient to send data directly
to the destination, i.e. without going through Flume at all, for example using the
MapReduceIndexerTool when going from HDFS into Solr: 
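An invocation sketch for the HDFS-to-Solr case; the jar path, HDFS paths, ZooKeeper ensemble, and collection name below are all placeholders to adapt to your cluster:

```shell
# Hypothetical MapReduceIndexerTool run: build Solr index shards from
# files under an HDFS input dir, then merge them into a live collection.
hadoop jar /usr/lib/solr/contrib/mr/search-mr-*-job.jar \
  org.apache.solr.hadoop.MapReduceIndexerTool \
  --morphline-file morphline.conf \
  --output-dir hdfs://nn:8020/tmp/outdir \
  --zk-host zk1:2181/solr \
  --collection collection1 \
  --go-live \
  hdfs://nn:8020/user/flavio/indir
```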



On Jul 18, 2013, at 3:37 PM, Flavio Pompermaier wrote:

> Hi to all,
> I'm new to Flume but I'm very excited about it!
> I'd like to use it to gather some data, process the received messages, and then index them.
> Any suggestion about how to do that with Flume?
> I've already tested an Avro source that sends data to HBase,
> but my use case requires those messages to be saved in HBase and also processed and then
indexed in Solr (obviously I also need to convert the object structure along the way).
> I think the first part is quite simple (I just use two sinks: one that stores in HBase
and another that forwards to another Avro instance), right?
> If messages are sent during a map/reduce job, is the Avro source the best option for sending
the documents to index to my sink (i.e. the first part of the flow, which up to now I have
simulated with an Avro source)?
> Best,
> Flavio
