hadoop-user mailing list archives

From: rahul p <rahulpoolancha...@gmail.com>
Subject: Re: fs.local.block.size vs file.blocksize
Date: Thu, 09 Aug 2012 14:28:40 GMT
Hi Tariq,
I am trying to run the WordCount MapReduce example, but I am not sure how
or where to start. I am very new to Java. Can you help me work through
this? Any help will be appreciated.
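
The usual first program is the stock WordCount from the Hadoop MapReduce
tutorial. A minimal version against the Hadoop 2.x mapreduce API looks like
the following reference sketch (essentially the standard tutorial code, not
tested here against any particular CDH release):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Emits (word, 1) for every token in each input line.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Sums the counts for each word; also used as the combiner.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Compile it against the Hadoop client jars, package it as wordcount.jar, and
submit it with: hadoop jar wordcount.jar WordCount <input dir> <output dir>
(the output directory must not already exist).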


Hi All,
Please help me get started with Hadoop on CDH; I have installed it on my
local PC. Any help will be appreciated.
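
As a quick sanity check that a local install is visible from Java, a
snippet like the one below prints the default filesystem and its block
size. This is a hypothetical example: it assumes the Hadoop 2.x client
jars and the cluster config are on the classpath, and HadoopSmokeTest is
just an illustrative name.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HadoopSmokeTest {
  public static void main(String[] args) throws Exception {
    // Picks up core-site.xml etc. from the classpath (HADOOP_CONF_DIR).
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    System.out.println("Default FS:         " + fs.getUri());
    System.out.println("Default block size: " + fs.getDefaultBlockSize(new Path("/")));
  }
}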

On Thu, Aug 9, 2012 at 9:10 PM, Ellis H. Wilson III <ellis@cse.psu.edu> wrote:

> Hi all!
>
> Can someone please briefly explain the difference?  I do not see
> deprecation warnings for fs.local.block.size when I run with it set, and I
> see two copies of RawLocalFileSystem.java (the other is
> local/RawLocalFs.java).
>
> The things I really need to get answers to are:
> 1. Is the default boosted to 64MB from Hadoop 1.0 to Hadoop 2.0?  I
> believe it is, but want validation on that.
> 2. Which one controls shuffle block-size?
> 3. If I have a single-machine, non-distributed instance, and point it at
> file://, do both of these control the persistent data's block size, or
> just one of them?
> 4. Is there any way to run with, say, a 512MB block size for the
> persistent data and the default 64MB block size for the shuffled data?
>
> Thanks!
>
> ellis
>
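
On questions 2 through 4 above: rather than asserting which property wins,
since that is exactly what is being asked, one empirical approach is to set
both keys and ask the local FileSystem what it reports. A hypothetical
Hadoop 2.x sketch follows; both property names are real, but which one a
given version or code path honours is the open question in this thread.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    long halfGig = 512L * 1024 * 1024;
    // Set both candidate keys to 512MB and see which one the local FS reads.
    conf.setLong("file.blocksize", halfGig);       // Hadoop 2.x key (core-default.xml)
    conf.setLong("fs.local.block.size", halfGig);  // older key read by the 1.x local FS
    FileSystem local = FileSystem.getLocal(conf);
    System.out.println("Reported local block size: "
        + local.getDefaultBlockSize(new Path("/tmp")));
  }
}

Toggling each key independently and re-running should show which one the
local filesystem honours; if the two keys are read by different code paths,
that would also be the lever for question 4 (a 512MB block size for the
persistent data with the shuffle-side default left in place).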
