From: John Lilley <john.lilley@redpoint.net>
To: user@hadoop.apache.org
Subject: RE: How can a YarnTask read/write local-host HDFS blocks?
Date: Tue, 2 Jul 2013 18:13:07 +0000

Blah blah,

One point you might have missed: multiple tasks cannot all write the same
HDFS file at the same time, so you can't just split an output file into
sections and say "task1, write block1", etc. Typically each task outputs a
separate file, and these file parts are read or merged later.

john

-----Original Message-----
From: Harsh J [mailto:harsh@cloudera.com]
Sent: Saturday, June 22, 2013 5:33 AM
Subject: Re: How can a YarnTask read/write local-host HDFS blocks?

Hi,

On Sat, Jun 22, 2013 at 4:21 PM, blah blah wrote:
> Hi all
>
> Disclaimer
> I am creating a prototype Application Master. I am using an old YARN
> development version: revision 1437315, from 2013-01-23 (3.0.0-SNAPSHOT).
> I cannot update to the current trunk version, as the prototype deadline
> is soon and I don't have time to incorporate the YARN API changes.
>
> My cluster setup is as follows:
> - each computational node acts as a NodeManager and as a DataNode
> - a dedicated single node runs the ResourceManager and NameNode
>
> I have scheduled Containers/Tasks to the hosts which hold the input
> HDFS blocks, to achieve data locality (new
> AMRMClient.ContainerRequest(capability, blocksHosts, racks, pri,
> numContainers)). I know that Task placement is not guaranteed, but
> let's assume the Tasks were scheduled directly to the hosts holding
> their input HDFS blocks.
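(A rough sketch of that request pattern, against the snapshot API quoted
above: one container request per input block, with the block's hosts passed
as a locality hint. The class name, memory size, and priority below are
illustrative, and the ContainerRequest signature and package locations
changed in later YARN versions:)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.AMRMClient;
import org.apache.hadoop.yarn.util.Records;

public class DataLocalRequests {
  // Ask for one container per input block, hinting the hosts that store
  // each block so the scheduler can try to satisfy data locality.
  static void requestContainersForBlocks(AMRMClient amrmClient,
      Configuration conf, Path input) throws Exception {
    FileSystem fs = FileSystem.get(conf);
    FileStatus stat = fs.getFileStatus(input);
    BlockLocation[] blocks =
        fs.getFileBlockLocations(stat, 0, stat.getLen());

    Resource capability = Records.newRecord(Resource.class);
    capability.setMemory(1024);               // illustrative container size
    Priority pri = Records.newRecord(Priority.class);
    pri.setPriority(0);

    for (BlockLocation block : blocks) {
      String[] blocksHosts = block.getHosts(); // DataNodes holding the block
      amrmClient.addContainerRequest(new AMRMClient.ContainerRequest(
          capability, blocksHosts, null /* racks */, pri, 1));
    }
  }
}

(As the post itself notes, the hosts are only a hint; the scheduler may
still place the container elsewhere.)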
> I have 3 questions regarding reading/writing data from HDFS.
>
> 1. How can a Container/Task read a local HDFS block?
> Since the Container/Task was scheduled on the same computational node
> as its input HDFS block, how can I read the local block? Should I use
> LocalFileSystem, since the HDFS block is stored locally? Any code
> snippet or source code reference will be greatly appreciated.

The HDFS client does local reads automatically if there is a local DN
where it is running (and that DN has the block it requests). A developer
needn't concern themselves with explicitly trying to read local data - it
is done automatically by the framework.

> 2. With multiple Containers on the same host, how do I determine which
> local block should be read by which Container/Task?
> In case there are multiple Containers/Tasks scheduled to the same host,
> and different input HDFS blocks are also stored on that host, how can I
> ensure that each Container/Task reads "its" local HDFS block? For
> example: the input consists of 10 blocks, the job uses 5 nodes, 2
> containers were scheduled per node, and each node holds 2 distinct HDFS
> blocks. How can I read Block_A in Container_2_Host_A and Block_B in
> Container_3_Host_A?
> Again, any code snippet or source code reference will be greatly
> appreciated.

You basically have to assign a file region (offset + len) to each
container ID you launch. Each container then has to read its assigned
region alone. You can pass this read info to them via CLI options, a
serialized file, etc.

> 3. How can I write an HDFS block to the local node (not the local file
> system)?
> How can I write the processed HDFS blocks back to HDFS, but store them
> on the same local host? As far as I know (if I am wrong please correct
> me), whenever a Task writes some data to HDFS, HDFS tries to store it
> on the same host, then on the same rack, then as close as possible
> (assuming a replication factor of 3).
> Is this process automated - will a simple hdfs.write() do the trick?
> You know that any code snippet or source code reference will be greatly
> appreciated.

This process is automatic in the same way a local read is automatic. You
needn't write special code for this.

> Thank you for your help in advance.
>
> regards
> tmp

--
Harsh J
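A minimal sketch of the split-assignment pattern from the second answer,
as seen from the container side. The class name and CLI argument layout
are made up for illustration, not a YARN convention; the automatic local
reads from answer 1 and the local-first block placement from answer 3
come for free through the same FileSystem API:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AssignedSplitReader {
  // The AM passes this container's split as: <path> <offset> <length>
  // (an illustrative CLI layout chosen for this sketch).
  public static void main(String[] args) throws IOException {
    Path path = new Path(args[0]);
    long offset = Long.parseLong(args[1]);
    long length = Long.parseLong(args[2]);

    FileSystem fs = FileSystem.get(new Configuration());
    byte[] buf = new byte[64 * 1024];
    long remaining = length;

    try (FSDataInputStream in = fs.open(path)) {
      in.seek(offset);                      // jump to this task's region
      while (remaining > 0) {
        int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
        if (n < 0) break;                   // end of file
        remaining -= n;
        // ... process buf[0..n) here; when a local DataNode holds the
        // block, the HDFS client serves the read locally on its own ...
      }
    }
  }
}

Since record boundaries rarely align with block boundaries, real readers
(e.g. MapReduce's LineRecordReader) typically read a little past the end
of their split to finish the last record, and skip a partial record at
the start.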