From: Harsh J <harsh@cloudera.com>
Date: Tue, 6 Aug 2013 15:36:43 +0530
Subject: Re: setLocalResources() on ContainerLaunchContext
To: user@hadoop.apache.org

To be honest, I've never tried loading an HDFS file onto the LocalResource this way. I usually just pass a local file, and that works just fine. There may be something in the URI transformation that breaks an HDFS source, but try passing a local file - does that fail too? The Shell example uses a local file.
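Roughly what I mean by passing a local file - an untested sketch, where the path is only an example (it needs to be readable from the NodeManager host):

    import java.io.File;
    import org.apache.hadoop.yarn.api.records.LocalResource;
    import org.apache.hadoop.yarn.api.records.LocalResourceType;
    import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
    import org.apache.hadoop.yarn.util.ConverterUtils;
    import org.apache.hadoop.yarn.util.Records;

    // A file on the local filesystem instead of HDFS (path is just an example)
    File script = new File("/home/dsadm/kishore/kk.ksh");

    LocalResource shellRsrc = Records.newRecord(LocalResource.class);
    shellRsrc.setType(LocalResourceType.FILE);
    shellRsrc.setVisibility(LocalResourceVisibility.APPLICATION);
    // file:// URI so the NodeManager localizes from the local filesystem
    shellRsrc.setResource(ConverterUtils.getYarnUrlFromURI(script.toURI()));
    // Use the file's real size and modification time rather than 0
    shellRsrc.setSize(script.length());
    shellRsrc.setTimestamp(script.lastModified());

If the local file localizes fine but the hdfs:// one doesn't, that narrows it down to the HDFS URI handling. (I've also left a small sketch of the map-key aliasing from my earlier mail at the very bottom of this message, below the quoted thread.)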
On Tue, Aug 6, 2013 at 10:54 AM, Krishna Kishore Bonagiri wrote:
> Hi Harsh,
>
> Please see if this is useful; I got this stack trace after the error occurred:
>
> 2013-08-06 00:55:30,559 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: CWD set to /tmp/nm-local-dir/usercache/dsadm/appcache/application_1375716148174_0004 = file:/tmp/nm-local-dir/usercache/dsadm/appcache/application_1375716148174_0004
> 2013-08-06 00:55:31,017 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:dsadm (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: hdfs://isredeng/kishore/kk.ksh
> 2013-08-06 00:55:31,029 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: DEBUG: FAILED { hdfs://isredeng/kishore/kk.ksh, 0, FILE, null }, File does not exist: hdfs://isredeng/kishore/kk.ksh
> 2013-08-06 00:55:31,031 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://isredeng/kishore/kk.ksh transitioned from DOWNLOADING to FAILED
> 2013-08-06 00:55:31,034 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1375716148174_0004_01_000002 transitioned from LOCALIZING to LOCALIZATION_FAILED
> 2013-08-06 00:55:31,035 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalResourcesTrackerImpl: Container container_1375716148174_0004_01_000002 sent RELEASE event on a resource request { hdfs://isredeng/kishore/kk.ksh, 0, FILE, null } not present in cache.
> 2013-08-06 00:55:31,036 WARN org.apache.hadoop.ipc.Client: interrupted waiting to send rpc request to server
> java.lang.InterruptedException
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1290)
>         at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:229)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:94)
>         at org.apache.hadoop.ipc.Client$Connection.sendRpcRequest(Client.java:930)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1285)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1264)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at $Proxy22.heartbeat(Unknown Source)
>         at org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.client.LocalizationProtocolPBClientImpl.heartbeat(LocalizationProtocolPBClientImpl.java:62)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.localizeFiles(ContainerLocalizer.java:249)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.runLocalization(ContainerLocalizer.java:163)
>         at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.startLocalizer(DefaultContainerExecutor.java:106)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:979)
>
> And here is my code snippet:
>
>     ContainerLaunchContext ctx = Records.newRecord(ContainerLaunchContext.class);
>
>     ctx.setEnvironment(oshEnv);
>
>     // Set the local resources
>     Map<String, LocalResource> localResources = new HashMap<String, LocalResource>();
>
>     LocalResource shellRsrc = Records.newRecord(LocalResource.class);
>     shellRsrc.setType(LocalResourceType.FILE);
>     shellRsrc.setVisibility(LocalResourceVisibility.APPLICATION);
>     String shellScriptPath = "hdfs://isredeng//kishore/kk.ksh";
>     try {
>       shellRsrc.setResource(ConverterUtils.getYarnUrlFromURI(new URI(shellScriptPath)));
>     } catch (URISyntaxException e) {
>       LOG.error("Error when trying to use shell script path specified"
>           + " in env, path=" + shellScriptPath);
>       e.printStackTrace();
>     }
>
>     shellRsrc.setTimestamp(0/*shellScriptPathTimestamp*/);
>     shellRsrc.setSize(0/*shellScriptPathLen*/);
>     String ExecShellStringPath = "ExecShellScript.sh";
>     localResources.put(ExecShellStringPath, shellRsrc);
>
>     ctx.setLocalResources(localResources);
>
> Please let me know if you need anything else.
>
> Thanks,
> Kishore
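One thing I notice in the snippet above: the timestamp and size are hardcoded to 0. I can't say that is what causes your FileNotFoundException, but for an HDFS resource these are usually taken from the file's actual status rather than set to 0 - roughly like this (untested sketch; "conf" is assumed to be your client-side Configuration, and shellRsrc is the LocalResource from your snippet):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.yarn.util.ConverterUtils;

    Path scriptPath = new Path("hdfs://isredeng/kishore/kk.ksh");
    FileSystem fs = scriptPath.getFileSystem(conf);   // conf: your Configuration (assumed)
    FileStatus status = fs.getFileStatus(scriptPath); // fails early if the URI can't be resolved

    shellRsrc.setResource(ConverterUtils.getYarnUrlFromPath(scriptPath));
    shellRsrc.setTimestamp(status.getModificationTime());
    shellRsrc.setSize(status.getLen());

The getFileStatus() call also doubles as a quick check that your client can actually resolve that exact URI before the NodeManager tries to.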
> On Tue, Aug 6, 2013 at 12:05 AM, Harsh J wrote:
>>
>> The detail is insufficient to answer why. You should also have gotten
>> a trace after it; can you post that? If possible, also the relevant
>> snippets of code.
>>
>> On Mon, Aug 5, 2013 at 6:36 PM, Krishna Kishore Bonagiri wrote:
>> > Hi Harsh,
>> >   Thanks for the quick and detailed reply, it really helps. I am trying
>> > to use it and am getting this error in the NodeManager's log:
>> >
>> > 2013-08-05 08:57:28,867 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:dsadm (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: hdfs://isredeng/kishore/kk.ksh
>> >
>> > This file is there on the machine named "isredeng"; I could do an ls
>> > on it as below:
>> >
>> > -bash-4.1$ hadoop fs -ls kishore/kk.ksh
>> > 13/08/05 09:01:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>> > Found 1 items
>> > -rw-r--r--   3 dsadm supergroup       1046 2013-08-05 08:48 kishore/kk.ksh
>> >
>> > Note: I am using a single-node cluster
>> >
>> > Thanks,
>> > Kishore
>> >
>> > On Mon, Aug 5, 2013 at 3:00 PM, Harsh J wrote:
>> >>
>> >> The string for each LocalResource in the map can be anything that
>> >> serves as a common identifier name for your application. At execution
>> >> time, the passed resource filename will be aliased to the name you've
>> >> mapped it to, so that the application code need not track special
>> >> names. The behavior is very similar to how you can, in MR, define a
>> >> symlink name for a DistributedCache entry (e.g. foo.jar#bar.jar).
>> >>
>> >> For an example, check out the DistributedShell app sources.
>> >>
>> >> At [1], you can see we take a user-provided file path to a shell
>> >> script. This can be named anything, as it is user-supplied.
>> >> At [2], we define this as a local resource [2.1] and embed it under a
>> >> different name (the string you ask about) [2.2], defined at [3] as
>> >> an application-referenceable constant.
>> >> Note that at [4], we add to the Container arguments the aliased name
>> >> we mapped it to (i.e. [3]) and not the original filename we received
>> >> from the user. The resource is placed on the container with this name
>> >> instead, so that's what we choose to execute.
>> >>
>> >> [1] - https://github.com/apache/hadoop-common/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java#L390
>> >> [2] - [2.1] https://github.com/apache/hadoop-common/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java#L764
>> >>       and [2.2] https://github.com/apache/hadoop-common/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java#L780
>> >> [3] - https://github.com/apache/hadoop-common/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java#L205
>> >> [4] - https://github.com/apache/hadoop-common/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java#L791
>> >>
>> >> On Mon, Aug 5, 2013 at 2:44 PM, Krishna Kishore Bonagiri wrote:
>> >> > Hi,
>> >> >
>> >> > Can someone please tell me what the use of calling setLocalResources()
>> >> > on ContainerLaunchContext is?
>> >> >
>> >> > An example of how to use it would also help...
>> >> >
>> >> > I couldn't guess what the String in the map passed to
>> >> > setLocalResources() is meant to be, as below:
>> >> >
>> >> >     // Set the local resources
>> >> >     Map<String, LocalResource> localResources = new HashMap<String, LocalResource>();
>> >> >
>> >> > Thanks,
>> >> > Kishore
>> >>
>> >> --
>> >> Harsh J
>>
>> --
>> Harsh J

--
Harsh J
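P.S. The sketch mentioned at the top - roughly what the map-key aliasing from my earlier mail looks like in code. Untested, names are only examples, and shellRsrc is a LocalResource built as in the snippets above:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
    import org.apache.hadoop.yarn.api.records.LocalResource;
    import org.apache.hadoop.yarn.util.Records;

    ContainerLaunchContext ctx = Records.newRecord(ContainerLaunchContext.class);

    // The key ("ExecShellScript.sh") is the alias: whatever file shellRsrc points
    // at, the container sees it in its working directory under this name.
    Map<String, LocalResource> localResources = new HashMap<String, LocalResource>();
    localResources.put("ExecShellScript.sh", shellRsrc);
    ctx.setLocalResources(localResources);

    // So the launch command refers to the alias, not the original path or filename:
    List<String> commands = new ArrayList<String>();
    commands.add("/bin/sh ./ExecShellScript.sh");
    ctx.setCommands(commands);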