hadoop-mapreduce-user mailing list archives

From Kiran Dangeti <kirandkumar2...@gmail.com>
Subject Re: Hadoop property precedence
Date Sat, 13 Jul 2013 08:42:16 GMT
Shalish,

The default block size is 64MB, and the value that takes effect is the one configured
at the client end. Make sure the same value is set in the conf on your side as well.
You can increase the block size to 128MB or more; the processing will be faster, but
in the end there may be a chance of losing data.
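
For example, a minimal sketch of setting the block size on the client when writing a
file (assuming the Hadoop Java FileSystem API; the path, the replication factor and
the property name dfs.block.size, called dfs.blocksize in newer releases, are just
for illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeExample {
      public static void main(String[] args) throws Exception {
        // The block size of a new file comes from the client's Configuration
        // (or from an explicit argument to create()), not from the config
        // files on the NameNode or DataNodes.
        Configuration conf = new Configuration();
        conf.setLong("dfs.block.size", 128L * 1024 * 1024); // 128MB, set on the client

        FileSystem fs = FileSystem.get(conf);

        // create() also accepts an explicit per-file block size that
        // overrides the value in conf.
        FSDataOutputStream out = fs.create(
            new Path("/tmp/example.txt"),            // illustrative path
            true,                                    // overwrite
            conf.getInt("io.file.buffer.size", 4096),
            (short) 3,                               // replication
            128L * 1024 * 1024);                     // block size for this file
        out.writeUTF("hello");
        out.close();
      }
    }

The same thing can be done from the shell with the generic -D option, e.g.
hadoop fs -D dfs.block.size=134217728 -put localfile /tmp/example.txt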

Thanks,
Kiran


On Fri, Jul 12, 2013 at 10:20 PM, Shalish VJ <shalishvj@yahoo.com> wrote:

> Hi,
>
>
>     Suppose the block size set in the configuration file at the client side is 64MB,
> the block size set in the configuration file at the name node side is 128MB, and the
> block size set in the configuration file at the datanode side is something else.
> Please advise: if the client is writing a file to HDFS, which property
> would take effect?
>
> Thanks,
> Shalish.
>
