hadoop-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: fs.local.block.size vs file.blocksize
Date Sun, 12 Aug 2012 17:58:32 GMT
Thanks for clarifying, Ellis. I'm sorry I assumed certain things when
replying here.

I looked at it as well and it does absolutely nothing, nor is it referred
to by anything, nor can we do anything with it. We may as well remove it
(the tunable), or document it. Please do file a HADOOP JIRA (once the
Apache JIRA is back up).
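
If you want to double-check that locally, here is a minimal sketch against
the FileSystem API (untested as written; the class name is mine and the
sizes are arbitrary):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;

  public class BlockSizeProbe {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // The tunable that getDefaultBlockSize() actually reads for file:///
      conf.setLong("fs.local.block.size", 128L * 1024 * 1024);
      // The tunable in question; nothing appears to read it
      conf.setLong("file.blocksize", 1L * 1024 * 1024);
      FileSystem localFs = FileSystem.getLocal(conf);
      // Expect 134217728 (128MB): fs.local.block.size takes effect
      // and file.blocksize is ignored
      System.out.println(localFs.getDefaultBlockSize());
    }
  }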

On Sun, Aug 12, 2012 at 11:10 PM, Ellis H. Wilson III <ellis@cse.psu.edu> wrote:
> Many thanks to Eli and Harsh for their responses!  Comments in-line:
>
>
> On 08/12/2012 09:48 AM, Harsh J wrote:
>>
>> Hi Ellis,
>>
>> Note that in Hadoop-land, the term "block size" generally means the
>> chunking size of HDFS writers and readers, which is not the same as the
>> filesystem term "block size" in any way.
>
>
> Yes, I do know that, but I was confused about something else.  More on that
> later in #2.
>
>> On Thu, Aug 9, 2012 at 6:40 PM, Ellis H. Wilson III <ellis@cse.psu.edu>
>> wrote:
>>>
>>> Can someone please briefly explain the difference?  I do not see
>>> deprecation warnings for fs.local.block.size when I run with them set,
>>> and I see two copies of RawLocalFileSystem.java (the other is
>>> local/RawLocalFs.java).
>>
>>
>> The right param still seems to be "fs.local.block.size", when it comes
>> to "getDefaultBlockSize" calls via the file:/// filesystem, or other
>> filesystems that have not overridden the default behavior.
>
>
> This question was more out of curiosity than anything.  My experiments agree
> that "fs.local.block.size" is the right parameter for controlling the
> blocksize of file:///, but I'm still quite perplexed as to where
> file.blocksize is actually used.  I chased it around for a while in Eclipse
> last night, but have yet to see where it is directly referenced (the config
> keys class sets it and suggests FileSystem, RawLocalFileSystem and
> ChecksumFileSystem all use it, but I don't see it being used in any
> practical way).
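>
> For reference, the only live read I could find is in FileSystem.java
> itself, which (paraphrasing from memory, so treat this as a sketch rather
> than an exact copy) boils down to:
>
>   public long getDefaultBlockSize() {
>     // default to 32MB: large enough to minimize the impact of seeks
>     return getConf().getLong("fs.local.block.size", 32 * 1024 * 1024);
>   }
>
> i.e. it reads fs.local.block.size and never file.blocksize.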
>
>
>>> The things I really need to get answers to are:
>>> 1. Is the default boosted to 64MB from Hadoop 1.0 to Hadoop 2.0?  I
>>> believe it is, but want validation on that.
>>
>>
>> The dfs.blocksize, which applies to HDFS, has not changed from its 64
>> MB default.
>
>
> I was referring to RawLocalFileSystem, not DistributedFileSystem.  I am
> fairly certain from my tests and from the code I've dug through that the
> default blocksize is still 32MB at the moment.  Please note that my
> questions here are fairly unconcerned with HDFS, as I'm not using it at all
> in >75% of my tests.
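>
> For what it's worth, the quick check behind that claim (a minimal sketch;
> the class name is just for illustration, and nothing is overridden in my
> core-site.xml):
>
>   import org.apache.hadoop.conf.Configuration;
>   import org.apache.hadoop.fs.FileSystem;
>
>   public class DefaultBlockSizeCheck {
>     public static void main(String[] args) throws Exception {
>       // With no overrides this falls through to the hard-coded
>       // default in FileSystem#getDefaultBlockSize()
>       System.out.println(
>           FileSystem.getLocal(new Configuration()).getDefaultBlockSize());
>     }
>   }
>
> which comes back 33554432 (32MB) for me on both code lines.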
>
>
>>> 2. Which one controls shuffle block-size?
>>
>>
>> There is no "shuffle block-size", as shuffle output goes to local
>> filesystems, which have no block size concept. Can you elaborate on this?
>
>
> This was a plain ol' misconception/mistake on my part, still sticking around
> from when I started working in the Hadoop source just over a year back.
> When I increased file:///'s blocksize I saw performance increases in TeraGen
> but decreases in TeraSort (marked by an elongated shuffle phase), and I took
> that to suggest that the shuffling used the file:/// filesystem as well.  I
> now understand why this can happen, and appreciate you clarifying; my
> digging through the shuffle code has confirmed that indeed, no chunking
> occurs on shuffle.  My apologies for the confusing question, based on errant
> inferences.
>
> Thanks again to both of you!  However, if anyone has better intuition on
> what the file.blocksize parameter does, I'd be happy to hear it.
>
> Best,
>
> ellis



-- 
Harsh J
