From: Ted Dunning
Date: Sat, 27 Aug 2011 12:42:44 -0700
Subject: Re: set reduced block size for a specific file
To: hdfs-user@hadoop.apache.org
Cc: rbclay@ncsu.edu

There is no way to do this for standard Apache Hadoop.

But other, otherwise Hadoop-compatible systems such as MapR do support this operation. Rather than push commercial systems on this mailing list, I would simply recommend that anybody who is curious email me.

On Sat, Aug 27, 2011 at 12:07 PM, Uma Maheswara Rao G 72686 <maheswara@huawei.com> wrote:

> Hi Ben,
> Currently there is no way to specify the block size from the command line in
> Hadoop.
>
> Why can't you write the file from a Java program?
> Is there any use case for you to write some files only from the command line?
>
> Regards,
> Uma
>
> ----- Original Message -----
> From: Ben Clay <rbclay@ncsu.edu>
> Date: Saturday, August 27, 2011 10:03 pm
> Subject: set reduced block size for a specific file
> To: hdfs-user@hadoop.apache.org
>
> > I'd like to set a lowered block size for a specific file. I.e., if HDFS is
> > configured to use 64 MB blocks, I'd like to use 32 MB blocks for a specific
> > file.
> >
> > Is there a way to do this from the command line, without writing a jar
> > which uses org.apache.hadoop.fs.FileSystem.create()?
> >
> > I tried the following, but it didn't work:
> >
> >     hadoop fs -Ddfs.block.size=1048576 -put /local/path /remote/path
> >
> > I also tried -copyFromLocal. It looks like the -D is being ignored.
> >
> > Thanks.
> > -Ben
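As a rough sketch of the Java route Uma mentions (the class name, paths, and buffer size below are placeholders for illustration, not anything from the thread), a small program that calls org.apache.hadoop.fs.FileSystem.create() with an explicit block size might look something like this:

import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// Sketch: copy a local file into HDFS with a per-file 32 MB block size,
// regardless of the cluster's configured default.
public class PutWithBlockSize {
  public static void main(String[] args) throws Exception {
    String localPath = args[0];   // e.g. /local/path
    String remotePath = args[1];  // e.g. /remote/path

    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    long blockSize = 32L * 1024 * 1024;                // 32 MB instead of the 64 MB default
    short replication = fs.getDefaultReplication();    // keep the cluster's replication factor
    int bufferSize = 4096;

    InputStream in = new FileInputStream(localPath);
    // create(Path, overwrite, bufferSize, replication, blockSize) lets the
    // block size be chosen per file at write time.
    FSDataOutputStream out = fs.create(new Path(remotePath), true,
        bufferSize, replication, blockSize);
    IOUtils.copyBytes(in, out, bufferSize, true);       // closes both streams
  }
}

Packaged into a jar and run with "hadoop jar", this is essentially the "write the file from a Java program" option referred to above; it is just a sketch, not a supported command-line feature.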