From: "Fernando O." <fotero@gmail.com>
To: user@hadoop.apache.org
Date: Mon, 20 Apr 2015 08:52:25 -0300
Subject: Re: ResourceLocalizationService: Localizer failed when running pi example

I didn't specify it, so it's using the default value (in /tmp).

On Sun, Apr 19, 2015 at 10:21 PM, Drake 민영근 <drake.min@nexr.com> wrote:

> Hi,
>
> I guess the "yarn.nodemanager.local-dirs" property is the problem. Can you
> provide that part of yarn-site.xml?
>
> Thanks.
>
> Drake 민영근 Ph.D
> kt NexR
>
> On Mon, Apr 20, 2015 at 4:27 AM, Fernando O. <fotero@gmail.com> wrote:
>
>> yeah... there's not much there:
>>
>> -bash-4.1$ cd nm-local-dir/
>> -bash-4.1$ ll *
>> filecache:
>> total 0
>>
>> nmPrivate:
>> total 0
>>
>> usercache:
>> total 0
>>
>> I'm using Open JDK, would that be a problem?
>>
>> More log:
>>
>> STARTUP_MSG:   java = 1.7.0_75
>> ************************************************************/
>> 2015-04-19 14:38:58,168 INFO org.apache.hadoop.yarn.server.nodemanager.NodeManager: registered UNIX signal handlers for [TERM, HUP, INT]
>> 2015-04-19 14:38:58,562 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>> 2015-04-19 14:38:59,018 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher
>> 2015-04-19 14:38:59,020 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher
>> 2015-04-19 14:38:59,021 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.LocalizationEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService
>> 2015-04-19 14:38:59,021 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServicesEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices
>> 2015-04-19 14:38:59,022 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
>> 2015-04-19 14:38:59,023 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncherEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher
>> 2015-04-19 14:38:59,054 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.ContainerManagerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl
>> 2015-04-19 14:38:59,054 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.NodeManagerEventType for class org.apache.hadoop.yarn.server.nodemanager.NodeManager
>> 2015-04-19 14:38:59,109 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
>> 2015-04-19 14:38:59,197 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
>> 2015-04-19 14:38:59,197 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics system started
>> 2015-04-19 14:38:59,217 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.event.LogHandlerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler
>> 2015-04-19 14:38:59,217 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: per directory file limit = 8192
>> 2015-04-19 14:38:59,227 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.LocalizerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker
>> 2015-04-19 14:38:59,248 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: The Auxilurary Service named 'mapreduce_shuffle' in the configuration is for class class org.apache.hadoop.mapred.ShuffleHandler which has a name of 'httpshuffle'. Because these are not the same tools trying to send ServiceData and read Service Meta Data may have issues unless the refer to the name in the config.
>> 2015-04-19 14:38:59,248 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Adding auxiliary service httpshuffle, "mapreduce_shuffle"
>> 2015-04-19 14:38:59,281 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:  Using ResourceCalculatorPlugin : org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin@7fc514a7
>> 2015-04-19 14:38:59,281 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:  Using ResourceCalculatorProcessTree : null
>> 2015-04-19 14:38:59,281 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Physical memory check enabled: true
>> 2015-04-19 14:38:59,281 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Virtual memory check enabled: true
>> 2015-04-19 14:38:59,284 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: NodeManager configured with 14 G physical memory allocated to containers, which is more than 80% of the total physical memory available (14.7 G). Thrashing might happen.
>> 2015-04-19 14:38:59,287 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Initialized nodemanager for null: physical-memory=14336 virtual-memory=30106 virtual-cores=8
>> 2015-04-19 14:38:59,318 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
>> 2015-04-19 14:38:59,334 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 38230
>> 2015-04-19 14:38:59,359 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ContainerManagementProtocolPB to the server
>> 2015-04-19 14:38:59,359 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Blocking new container-requests as container manager rpc server is still starting.
>> 2015-04-19 14:38:59,359 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>> 2015-04-19 14:38:59,359 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 38230: starting
>> 2015-04-19 14:38:59,366 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMContainerTokenSecretManager: Updating node address : ip-10-100-70-199.ec2.internal:38230
>> 2015-04-19 14:38:59,372 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
>> 2015-04-19 14:38:59,373 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8040
>> 2015-04-19 14:38:59,376 INFO org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.nodemanager.api.LocalizationProtocolPB to the server
>> 2015-04-19 14:38:59,376 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>> 2015-04-19 14:38:59,376 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8040: starting
>> 2015-04-19 14:38:59,380 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Localizer started on port 8040
>> 2015-04-19 14:38:59,391 INFO org.apache.hadoop.mapred.IndexCache: IndexCache created with max memory = 10485760
>> 2015-04-19 14:38:59,403 INFO org.apache.hadoop.mapred.ShuffleHandler: httpshuffle listening on port 13562
>> 2015-04-19 14:38:59,405 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: ContainerManager started at datanode-03.prod.com/10.100.70.199:38230
>> 2015-04-19 14:38:59,405 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: ContainerManager bound to 0.0.0.0/0.0.0.0:0
>> 2015-04-19 14:38:59,405 INFO org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer: Instantiating NMWebApp at 0.0.0.0:8042
>> 2015-04-19 14:38:59,471 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
>> 2015-04-19 14:38:59,475 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.nodemanager is not defined
>> 2015-04-19 14:38:59,487 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
>> 2015-04-19 14:38:59,489 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context node
>> 2015-04-19 14:38:59,489 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
>> 2015-04-19 14:38:59,489 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
>> 2015-04-19 14:38:59,493 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /node/*
>> 2015-04-19 14:38:59,493 INFO org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
>> 2015-04-19 14:38:59,505 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 8042
>> 2015-04-19 14:38:59,505 INFO org.mortbay.log: jetty-6.1.26
>> 2015-04-19 14:38:59,545 INFO org.mortbay.log: Extract jar:file:/opt/test/service/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.6.0.jar!/webapps/node to /tmp/Jetty_0_0_0_0_8042_node____19tj0x/webapp
>> 2015-04-19 14:38:59,778 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8042
>> 2015-04-19 14:38:59,778 INFO org.apache.hadoop.yarn.webapp.WebApps: Web app /node started at 8042
>> 2015-04-19 14:39:00,093 INFO org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
>> 2015-04-19 14:39:00,126 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out 0 NM container statuses: []
>> 2015-04-19 14:39:00,131 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Registering with RM using containers :[]
>> 2015-04-19 14:39:00,176 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMContainerTokenSecretManager: Rolling master-key for container-tokens, got key with id -543066608
>> 2015-04-19 14:39:00,178 INFO org.apache.hadoop.yarn.server.nodemanager.security.NMTokenSecretManagerInNM: Rolling master-key for container-tokens, got key with id -1243797706
>> 2015-04-19 14:39:00,179 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Registered with ResourceManager as ip-10-100-70-199.ec2.internal:38230 with total resource of <memory:14336, vCores:8>
>> 2015-04-19 14:39:00,179 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Notifying ContainerManager to unblock new container-requests
>> 2015-04-19 19:22:17,729 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1429450734039_0010_000001 (auth:SIMPLE)
>> 2015-04-19 19:22:17,807 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1429450734039_0010_01_000001 by user nobody
>> 2015-04-19 19:22:17,828 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1429450734039_0010
>> 2015-04-19 19:22:17,834 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=nobody IP=10.100.66.251 OPERATION=Start Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1429450734039_0010 CONTAINERID=container_1429450734039_0010_01_000001
>> 2015-04-19 19:22:17,835 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1429450734039_0010 transitioned from NEW to INITING
>> 2015-04-19 19:22:17,835 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1429450734039_0010_01_000001 to application application_1429450734039_0010
>> 2015-04-19 19:22:17,839 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1429450734039_0010 transitioned from INITING to RUNNING
>> 2015-04-19 19:22:17,843 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1429450734039_0010_01_000001 transitioned from NEW to LOCALIZING
>> 2015-04-19 19:22:17,843 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1429450734039_0010
>> 2015-04-19 19:22:17,876 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://rdcluster:8020/tmp/hadoop-yarn/staging/nobody/.staging/job_1429450734039_0010/job.jar transitioned from INIT to DOWNLOADING
>> 2015-04-19 19:22:17,877 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://rdcluster:8020/tmp/hadoop-yarn/staging/nobody/.staging/job_1429450734039_0010/job.splitmetainfo transitioned from INIT to DOWNLOADING
>> 2015-04-19 19:22:17,877 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://rdcluster:8020/tmp/hadoop-yarn/staging/nobody/.staging/job_1429450734039_0010/job.split transitioned from INIT to DOWNLOADING
>> 2015-04-19 19:22:17,877 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://rdcluster:8020/tmp/hadoop-yarn/staging/nobody/.staging/job_1429450734039_0010/job.xml transitioned from INIT to DOWNLOADING
>> 2015-04-19 19:22:17,877 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1429450734039_0010_01_000001
>> 2015-04-19 19:22:17,880 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Localizer failed
>> java.lang.NullPointerException
>>         at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:268)
>>         at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
>>         at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
>>         at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
>>         at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
>>         at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:420)
>>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1075)
>> 2015-04-19 19:22:17,882 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1429450734039_0010_01_000001 transitioned from LOCALIZING to LOCALIZATION_FAILED
>> 2015-04-19 19:22:17,886 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=nobody OPERATION=Container Finished - Failed TARGET=ContainerImpl RESULT=FAILURE DESCRIPTION=Container failed with state: LOCALIZATION_FAILED APPID=application_1429450734039_0010 CONTAINERID=container_1429450734039_0010_01_000001
>> 2015-04-19 19:22:17,889 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1429450734039_0010_01_000001 transitioned from LOCALIZATION_FAILED to DONE
>> 2015-04-19 19:22:17,889 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1429450734039_0010_01_000001 from application application_1429450734039_0010
>>
>> On Sun, Apr 19, 2015 at 1:16 PM, Brahma Reddy Battula <brahmareddy.battula@hotmail.com> wrote:
>>
>>> As Alexander Alten-Lorenz pointed out, it's mostly a config issue (yarn.nodemanager.local-dirs or mapred.local.dir).
>>>
>>> Can you provide the full logs?
>>>
>>> By the way, the NPE is handled in trunk. Please check HADOOP-8436 for more details.
>>>
>>> ------------------------------
>>> From: wget.null@gmail.com
>>> Subject: Re: ResourceLocalizationService: Localizer failed when running pi example
>>> Date: Sun, 19 Apr 2015 17:59:13 +0200
>>> To: user@hadoop.apache.org
>>>
>>> As you said, that looks like a config issue. I would look at the NM's local scratch dir (yarn.nodemanager.local-dirs).
>>>
>>> But without a complete stack trace, it's a blind call.
>>>
>>> BR,
>>> AL
>>>
>>> --
>>> mapredit.blogspot.com
>>>
>>> On Apr 18, 2015, at 6:24 PM, Fernando O. <fotero@gmail.com> wrote:
>>>
>>> Hey All,
>>>     It's me again with another noob question: I deployed a cluster (HA mode), everything looked good, but when I tried to run the pi example:
>>>
>>>     bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 16 100
>>>
>>> the same error occurs if I try to generate data with teragen 100000000 /test/data
>>>
>>> 2015-04-18 15:49:04,090 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Localizer failed
>>> java.lang.NullPointerException
>>>         at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:268)
>>>         at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
>>>         at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
>>>         at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
>>>         at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
>>>         at org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:420)
>>>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1075)
>>>
>>> I'm guessing it's a configuration issue but I don't know what I'm missing :S
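For reference, the property Drake is asking about normally lives in yarn-site.xml on each NodeManager. Below is a minimal sketch of what an explicit setting could look like; the paths /data/yarn/local and /data/yarn/logs are placeholders, not values taken from this thread.

<!-- yarn-site.xml (sketch): point the NodeManager at dedicated local disks.
     The paths below are placeholders; use real mount points writable by the NM user. -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <!-- comma-separated list, typically one directory per disk -->
  <value>/data/yarn/local</value>
</property>
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>/data/yarn/logs</value>
</property>

When yarn.nodemanager.local-dirs is left unset, yarn-default.xml falls back to ${hadoop.tmp.dir}/nm-local-dir, which matches the nm-local-dir directory under /tmp shown earlier in the thread.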
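A quick sanity check from the shell, again using the placeholder paths from the sketch above and assuming the NodeManager runs as a user named yarn (both are assumptions to adjust to the actual setup):

# Show what yarn-site.xml currently says (no match means the default under hadoop.tmp.dir applies)
grep -A2 'yarn.nodemanager.local-dirs' "$HADOOP_CONF_DIR/yarn-site.xml"

# Confirm the directories exist and are writable by the NodeManager user
ls -ld /data/yarn/local /data/yarn/logs
sudo -u yarn touch /data/yarn/local/.write-test && echo writable
sudo -u yarn rm -f /data/yarn/local/.write-test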