flume-user mailing list archives

From Jagadish Bihani <jagadish.bih...@pubmatic.com>
Subject Re: HDFS sink Bucketwriter working
Date Thu, 27 Sep 2012 06:58:23 GMT

Thanks for the reply Mike.

-- I have been following the user guide.

-- Actually, I didn't get the expected rolling behaviour described there.
Whenever I set the rolling size to 10 MB and the other rolling params to 0,
I would expect all incoming events to go into a single file until it reaches
10 MB, with subsequent events going to the next file, and so on. Instead, it
opens many files simultaneously, which I thought was related to params like
txnEventMax and batchSize.

-- Hence I started going through the source code and came across the
few questions mentioned in the mail below. I had posted the exceptions
I got in other threads, but I think even just understanding the inner
working of the BucketWriter class will help me solve my troubles.
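For reference, a rollSize-only setup along the lines Mike suggests below would look something like this in the agent configuration. (The agent, sink, and channel names here are placeholders for illustration, not taken from the original thread.)

```properties
# Sketch of a size-only rolling policy for the HDFS sink.
# Names "agent", "hdfsSink", and "memChannel" are hypothetical.
agent.sinks.hdfsSink.type = hdfs
agent.sinks.hdfsSink.channel = memChannel
agent.sinks.hdfsSink.hdfs.path = hdfs://namenode/flume/events
agent.sinks.hdfsSink.hdfs.rollSize = 10485760   # 10 MB, in bytes
agent.sinks.hdfsSink.hdfs.rollInterval = 0      # disable time-based rolling
agent.sinks.hdfsSink.hdfs.rollCount = 0         # disable count-based rolling
agent.sinks.hdfsSink.hdfs.batchSize = 1000      # larger batches for throughput
```

Note that if hdfs.path (or hdfs.filePrefix) contains time-based escape sequences, each distinct resolved path gets its own bucket, which can also multiply the number of open files.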


On 09/27/2012 12:19 PM, Mike Percy wrote:
> Jagadish,
> Refer to the user guide here: 
> http://flume.apache.org/FlumeUserGuide.html#hdfs-sink
> Note the defaults for rollInterval, rollSize, and rollCount. If you 
> want to use rollSize only, then you should set the others to 0.
> Also worth mentioning: set batchSize to something larger if you want 
> to maximize performance. I often go with 1000; depending on the 
> application you may want to go lower or higher.
> Regards,
> Mike
> On Wed, Sep 26, 2012 at 8:23 PM, Jagadish Bihani 
> <jagadish.bihani@pubmatic.com <mailto:jagadish.bihani@pubmatic.com>> 
> wrote:
>     Hi
>     I have a few doubts about the HDFS sink BucketWriter:
>     -- How does the HDFS sink's BucketWriter work? What criteria does
>     it use to create another bucket?
>     -- Creation of a file in HDFS is a function of how many parameters?
>     Initially I thought it was a function of only the rolling
>     parameters (interval/size), but apparently it is also a function
>     of 'batchSize' and 'txnEventMax'.
>     -- My requirement is this: I get data from 10 Avro sinks into a
>     single Avro source, and I want to dump it to HDFS in fixed-size
>     (say 64 MB) files. What should I do?
>     Presently, if I set the rolling size to 64 MB, BucketWriter
>     creates many files (I suspect the count equals txnEventMax) and
>     after a while it throws exceptions like 'too many open files'.
>     (I have a limit of 75000 open file descriptors.)
>     Information about the above will be of great help in tuning
>     Flume properly for these requirements.
>     Regards,
>     Jagadish
