Subject: Re: modify hdfs block size
From: kun yan <yankunhadoop@gmail.com>
To: user@hadoop.apache.org
Date: Tue, 10 Sep 2013 12:27:07 +0800

Thank you very much.

2013/9/10 Harsh J <harsh@cloudera.com>

> You cannot change the block size (i.e. merge or split the blocks) of an
> existing file. You can, however, change it for newer files, and you can
> also download and re-upload older files with the newer block size to
> change them.
>
> On Tue, Sep 10, 2013 at 9:01 AM, kun yan <yankunhadoop@gmail.com> wrote:
> > Hi all
> > Can I change the HDFS data block size to 32MB? I know the default is
> > 64MB.
> > Thanks
> >
> > --
> >
> > In the Hadoop world I am just a novice, exploring the entire Hadoop
> > ecosystem. I hope one day I can contribute code of my own.
> >
> > YanBit
> > yankunhadoop@gmail.com
>
> --
> Harsh J

--

In the Hadoop world I am just a novice, exploring the entire Hadoop
ecosystem. I hope one day I can contribute code of my own.

YanBit
yankunhadoop@gmail.com
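To make Harsh's point concrete, here is a minimal sketch in Java. It is
not from the original thread: it assumes the Hadoop 1.x-era FileSystem
API and the legacy "dfs.block.size" property implied by the 64MB default
mentioned above, and the class name and path are hypothetical. The block
size is fixed per file at creation time, so a new setting only affects
files written afterwards:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Minimal sketch: create a new HDFS file with a 32MB block size.
    // Existing files keep the block size they were written with; to
    // "change" them you re-write the data, as Harsh describes above.
    public class WriteWith32MBBlocks {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Legacy (1.x) property name; 2.x deprecates it in favor
            // of "dfs.blocksize". The value must be a multiple of the
            // checksum chunk size (512 bytes by default).
            conf.setLong("dfs.block.size", 32L * 1024 * 1024); // 32MB
            FileSystem fs = FileSystem.get(conf);
            // Hypothetical path; any file created through this
            // FileSystem instance picks up the configured block size.
            FSDataOutputStream out = fs.create(new Path("/tmp/example-32mb"));
            out.writeBytes("written with a 32MB block size\n");
            out.close();
            fs.close();
        }
    }

The re-upload of an older file can likewise be done from the shell with
the generic -D option (FsShell runs through ToolRunner, so this should
hold for this era of Hadoop):

    hadoop fs -Ddfs.block.size=33554432 -put localcopy /path/to/newfile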