Subject: Re: fs.local.block.size vs file.blocksize
From: rahul p <rahulpoolanchalil@gmail.com>
To: user@hadoop.apache.org
Date: Thu, 9 Aug 2012 22:28:40 +0800
In-Reply-To: <5023B6AB.807@cse.psu.edu>

Hi Tariq,

I am trying to start the WordCount MapReduce example, but I am not sure how or where to start; I am very new to Java. Can you help me figure out how to work with this? Any help will be appreciated.

Hi All,

Please help me get started with Hadoop on CDH; I have installed it on my local PC. Any help will be appreciated.
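For reference, the classic WordCount job written against the org.apache.hadoop.mapreduce API looks roughly like the sketch below. The class name and the input/output paths taken from the command line are illustrative, and it assumes compilation against the Hadoop jars that ship with the CDH install before submission with hadoop jar.

    // Rough sketch of the classic WordCount job (org.apache.hadoop.mapreduce API).
    // The class name and the args[0]/args[1] input/output paths are illustrative.
    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Emits (word, 1) for every whitespace-separated token in each input line.
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Sums the 1s emitted for each word.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count"); // Job.getInstance(conf, ...) on newer releases
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not exist yet
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Most distributions also ship a prebuilt examples jar containing this same job, so running that first is a reasonable way to confirm the installation works before compiling anything.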
On Thu, Aug 9, 2012 at 9:10 PM, Ellis H. Wilson III <ellis@cse.psu.edu> wrote:

> Hi all!
>
> Can someone please briefly explain the difference? I do not see deprecated
> warnings for fs.local.block.size when I run with them set, and I see two
> copies of RawLocalFileSystem.java (the other is local/RawLocalFs.java).
>
> The things I really need to get answers to are:
> 1. Is the default boosted to 64MB from Hadoop 1.0 to Hadoop 2.0? I believe
> it is, but want validation on that.
> 2. Which one controls shuffle block-size?
> 3. If I have a single-machine, non-distributed instance, and point it at
> file://, do both of these control the persistent data's block size, or just
> one of them, or what?
> 4. Is there any way to run with, say, a 512MB blocksize for the persistent
> data and the default 64MB blocksize for the shuffled data?
>
> Thanks!
>
> ellis
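One way to get an empirical answer on a single-node, file:// setup is to set both keys and ask the filesystem what block size it reports for a freshly created file. A minimal probe along those lines is sketched below; the two property names are the ones discussed in this thread, while the class name, the 512MB value, and the command-line path are only illustrative, and the result should be verified against the specific Hadoop build in question.

    // Probe which block-size key the local filesystem actually honors.
    // The property names are the two from this thread; the path passed in
    // args[0] and the 512MB value are illustrative.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class LocalBlockSizeProbe {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setLong("fs.local.block.size", 512L * 1024 * 1024); // older-style key
        conf.setLong("file.blocksize",      512L * 1024 * 1024); // newer per-scheme key

        FileSystem fs = FileSystem.getLocal(conf);  // file:// filesystem, no cluster needed
        Path p = new Path(args[0]);                 // e.g. /tmp/blocksize-probe
        fs.create(p, true).close();                 // create an empty file

        // Report the block size the filesystem associates with the new file.
        long reported = fs.getFileStatus(p).getBlockSize();
        System.out.println("Reported block size: " + reported + " bytes");

        // A block size can also be pinned per file, independent of either key:
        // fs.create(p, true, 4096, (short) 1, 512L * 1024 * 1024).close();

        fs.delete(p, false);
      }
    }

The commented-out create() overload takes an explicit per-file block size, which might be one way to approach question 4 for output written through the FileSystem API, though it says nothing about whatever governs the intermediate shuffle data.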