hadoop-hdfs-user mailing list archives

From Todd Lipcon <t...@cloudera.com>
Subject Re: Exponential performance decay when inserting large number of blocks
Date Thu, 14 Jan 2010 04:00:12 GMT
Err, ignore that attachment - attached the wrong graph with the right
labels!

Here's the right graph.

-Todd

On Wed, Jan 13, 2010 at 7:53 PM, Todd Lipcon <todd@cloudera.com> wrote:

> On Wed, Jan 13, 2010 at 6:59 PM, Eric Sammer <eric@lifeless.net> wrote:
>
>> On 1/13/10 8:12 PM, Zlatin.Balevsky@barclayscapital.com wrote:
>> > Alex, Dhruba
>> >
>> > I repeated the experiment, increasing the block size to 32k.  Still doing
>> > 8 inserts in parallel; file size is now 512 MB; 11 datanodes.  I was
>> > also running iostat on one of the datanodes.  I did not notice anything
>> > that would explain an exponential slowdown.  There was more activity
>> > while the inserts were active, but far from the limits of the disk
>> > system.
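For scale, here's a quick back-of-the-envelope in Python using the numbers above (512 MB files, 32 KB blocks, 8 parallel writers); it's only meant to show how many block allocations that test pushes through the namenode:

    # Rough block-count arithmetic for the experiment described above.
    file_size = 512 * 1024 * 1024      # 512 MB per file
    block_size = 32 * 1024             # 32 KB blocks
    writers = 8                        # parallel inserts

    blocks_per_file = file_size // block_size
    print(blocks_per_file)             # 16384 blocks per file
    print(blocks_per_file * writers)   # 131072 blocks per batch of 8 files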
>>
>> While creating many blocks, could it be that the replication pipelining
>> is eating up the available handler threads on the datanodes? By
>> increasing the block size you would see better performance because the
>> system spends more time writing data to local disk and less time dealing
>> with things like replication "overhead."  At a small block size, I could
>> imagine you're artificially creating a situation where you saturate the
>> default-sized thread pools, or something weird like that.
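If you want to rule that out, here is a minimal Python sketch that reads the relevant properties out of hdfs-site.xml. The property names and defaults are what I believe the 0.20 line uses (dfs.datanode.max.xcievers really is spelled that way), so treat them as assumptions and verify against your version:

    # Report the datanode thread-related settings mentioned above.
    # Property names/defaults are assumptions based on the 0.20 line;
    # check them against your build before acting on the output.
    import xml.etree.ElementTree as ET

    ASSUMED_DEFAULTS = {
        "dfs.datanode.max.xcievers": "256",   # concurrent block xceiver threads
        "dfs.datanode.handler.count": "3",    # datanode IPC handler threads
    }

    def configured(conf_path="hdfs-site.xml"):
        values = dict(ASSUMED_DEFAULTS)
        root = ET.parse(conf_path).getroot()
        for prop in root.findall("property"):
            name = prop.findtext("name")
            if name in values:
                values[name] = prop.findtext("value")
        return values

    if __name__ == "__main__":
        for name, value in sorted(configured().items()):
            print("%s = %s" % (name, value))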
>>
>> If you're doing 8 inserts in parallel from one machine with 11 nodes,
>> this seems unlikely, but it might be worth looking into. The question is
>> whether testing with an artificially small block size like this is even a
>> viable test. At some point the overhead of talking to the namenode,
>> selecting datanodes for a block, and setting up replication pipelines
>> could become an abnormally high percentage of the run time.
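To put rough numbers on that last point, here's a toy model in Python; the 20 ms per-block setup cost and 40 MB/s pipeline throughput are invented purely for illustration, not measured:

    # Toy model: per-block time = fixed setup cost + data transfer time.
    # Both constants below are made-up illustrative values.
    SETUP_SECONDS = 0.020              # namenode RPC + pipeline setup (assumed)
    THROUGHPUT = 40 * 1024 * 1024      # bytes/sec through the pipeline (assumed)

    for block_size in (32 * 1024, 64 * 1024 * 1024):
        transfer = block_size / float(THROUGHPUT)
        overhead = SETUP_SECONDS / (SETUP_SECONDS + transfer)
        print("block=%10d bytes  overhead=%3.0f%% of per-block time"
              % (block_size, 100 * overhead))

Under those assumed numbers the fixed per-block cost dominates at 32 KB blocks and is noise at the default 64 MB.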
>>
>>
> The concern isn't why the insertion is slow, but rather why the scaling
> curve looks the way it does. Looking at the data, it looks like the
> insertion rate (blocks per second) actually goes as 1/N, where N is the
> number of blocks. Attaching another graph of the same data which I think
> is a little clearer to read.
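To spell out what that shape implies: if the instantaneous rate goes as c/N, then the time to insert the Nth block grows linearly with N, and the cumulative insert time grows roughly as N^2. A small synthetic check in Python (c is an arbitrary constant, not fitted to the real data):

    # If blocks/sec ~ c / N (N = blocks already inserted), then inserting
    # block N takes ~ N / c seconds and the first N blocks take ~ N^2 / (2c).
    c = 1.0e6   # arbitrary constant for illustration only

    def total_time(n_blocks):
        return sum(n / c for n in range(1, n_blocks + 1))

    for n in (100000, 200000, 400000):
        print("N=%7d  total=%8.1f s" % (n, total_time(n)))
    # Doubling N roughly quadruples the total time under this model.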
>
>
>> Also, I wonder if the cluster is trying to rebalance blocks toward the
>> end of your runtime (if the balancer daemon is running) and this is
>> causing additional shuffling of data.
>>
>
> That's certainly one possibility.
>
> Zlatin: here's a test to try. Once the FS holds 400,000 blocks, let the
> cluster sit for a few hours, then come back and start another insertion.
> Is the rate still slow, or does it return to the fast starting speed?
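Something like the following would make that comparison repeatable; it assumes the stock hadoop fs CLI is on the PATH, that your version honors the -D override of dfs.block.size, and that the paths and file count are placeholders to adjust:

    # Time a batch of uploads so the fresh-cluster rate can be compared
    # with the rate after the FS already holds ~400,000 blocks.
    # Paths, file count, and the -D block-size override are assumptions.
    import subprocess
    import time

    def timed_put(local_file, dest_dir, n_files):
        start = time.time()
        for i in range(n_files):
            subprocess.check_call([
                "hadoop", "fs",
                "-D", "dfs.block.size=32768",          # match the 32 KB runs
                "-put", local_file, "%s/probe-%d" % (dest_dir, i),
            ])
        elapsed = time.time() - start
        print("%d files in %.1f s (%.2f files/s)"
              % (n_files, elapsed, n_files / elapsed))

    if __name__ == "__main__":
        timed_put("probe.dat", "/benchmarks/rate-probe", 20)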
>
> -Todd
>
