Subject: Re: Are hadoop fs commands serial or parallel
From: Mapred Learn <mapred.learn@gmail.com>
Date: Thu, 26 May 2011 22:07:19 -0700
To: common-user@hadoop.apache.org, mapreduce-user@hadoop.apache.org, cdh-user@cloudera.org

Hi guys,

A related question: when you run hadoop fs -copyFromLocal, or call fs.write() through the API, does the client write to the local filesystem first before writing to HDFS? I have read that the client writes to the local filesystem until the block size is reached and only then writes to HDFS. Wouldn't the HDFS client choke on those local writes if several such fs -copyFromLocal commands were running at once? I thought that with fs.write(), at least, if you pass a byte array it should not touch the local filesystem.

Could somebody explain how fs -copyFromLocal and fs.write() actually work? Do they stage data on the local filesystem until the block size is reached and then write it to HDFS, or do they write directly to HDFS?

Thanks in advance,
-JJ
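[For reference, a minimal sketch of the write path being asked about, using the standard FileSystem API; the path and payload are made up for illustration. The API hands back an ordinary output stream, so any local staging or packet buffering is internal to the HDFS client and not visible to the caller.]

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml / hdfs-site.xml from the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        byte[] data = "example payload".getBytes("UTF-8"); // hypothetical data

        // create() returns an FSDataOutputStream; for HDFS the buffering and
        // transfer to the DataNode pipeline happen inside the client library.
        FSDataOutputStream out = fs.create(new Path("/user/jj/example.dat"));
        try {
            out.write(data);
        } finally {
            out.close(); // the file is finalized on close()
        }
    }
}
```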
On Wed, May 18, 2011 at 9:39 AM, Patrick Angeles wrote:
> Kinda clunky, but you could do this via the shell:
>
> for FILE in $LIST_OF_FILES ; do
>   hadoop fs -copyFromLocal $FILE $DEST_PATH &
> done
>
> If doing this via the Java API, then yes, you will have to use multiple
> threads.
>
> On Wed, May 18, 2011 at 1:04 AM, Mapred Learn wrote:
>
> > Thanks Harsh!
> > That means basically both the APIs and the hadoop client commands allow
> > only serial writes. I was wondering what other ways there are to write
> > data to HDFS in parallel, besides using multiple threads.
> >
> > Thanks,
> > JJ
> >
> > Sent from my iPhone
> >
> > On May 17, 2011, at 10:59 PM, Harsh J wrote:
> >
> > > Hello,
> > >
> > > Adding to Joey's response, copyFromLocal's current implementation is
> > > serial given a list of files.
> > >
> > > On Wed, May 18, 2011 at 9:57 AM, Mapred Learn wrote:
> > >> Thanks Joey!
> > >> I will try to find out more about copyFromLocal. Looks like the Hadoop
> > >> APIs write serially, as you pointed out.
> > >>
> > >> Thanks,
> > >> -JJ
> > >>
> > >> On May 17, 2011, at 8:32 PM, Joey Echeverria wrote:
> > >>
> > >>> The sequence file writer definitely does it serially, as you can only
> > >>> ever write to the end of a file in Hadoop.
> > >>>
> > >>> Doing copyFromLocal could write multiple files in parallel (I'm not
> > >>> sure if it does or not), but a single file would be written serially.
> > >>>
> > >>> -Joey
> > >>>
> > >>> On Tue, May 17, 2011 at 5:44 PM, Mapred Learn <mapred.learn@gmail.com> wrote:
> > >>>> Hi,
> > >>>> My question is: when I run a command from an HDFS client, e.g. hadoop fs
> > >>>> -copyFromLocal, or create a sequence file writer in Java code and append
> > >>>> key/values to it through the Hadoop APIs, does it internally transfer/write
> > >>>> data to HDFS serially or in parallel?
> > >>>>
> > >>>> Thanks in advance,
> > >>>> -JJ
> > >>>
> > >>> --
> > >>> Joseph Echeverria
> > >>> Cloudera, Inc.
> > >>> 443.305.9434
> >
> > > --
> > > Harsh J
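[For what it's worth, a rough Java-API equivalent of Patrick's shell loop above, sketched with a fixed thread pool; the destination path is made up. Each thread issues its own copyFromLocalFile() call, so any single file is still written serially, but several files are uploaded concurrently, just like backgrounding the shell commands with '&'.]

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ParallelCopyFromLocal {
    public static void main(String[] args) throws Exception {
        final Configuration conf = new Configuration();
        final FileSystem fs = FileSystem.get(conf);
        final Path destDir = new Path("/user/jj/incoming"); // hypothetical destination

        // Local files to upload, taken from the command line here.
        List<String> localFiles = java.util.Arrays.asList(args);

        // Four uploads at a time; tune to taste.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        for (final String local : localFiles) {
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        // Each file is still written serially to HDFS,
                        // but the files go up concurrently.
                        fs.copyFromLocalFile(new Path(local), destDir);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
        }
        pool.shutdown();
    }
}
```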