From: Ruhua Jiang <ruhua.jiang@gmail.com>
To: user@hadoop.apache.org
Date: Fri, 16 Jan 2015 15:55:21 -0500
Subject: Start Hadoop, ERROR security.UserGroupInformation: PriviledgedActionException

Hello,

I am quite new to Hadoop. I am trying to run Hadoop on top of an HPC infrastructure using a solution called "myHadoop". Basically, it dynamically allocates some nodes from the HPC cluster and runs Hadoop on them (one node as the NameNode, the others as DataNodes). If anybody is familiar with it, that would be perfect, but I think my problem is mostly on the Hadoop side. I am using Hadoop 1.2.1 due to myHadoop's limited support for newer versions.
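For context, this is roughly the shape of what my myHadoop batch job ends up doing on the allocated nodes. It is heavily simplified: the real configuration is done by myHadoop's own scripts, and the PBS variables, masters/slaves handling, and example WordCount commands below are my simplified guesses at what the generated script does, not the exact myHadoop code.

#!/bin/bash
# Rough sketch only -- myHadoop's own scripts do the real configuration.
export HADOOP_HOME=/home/hpc-ruhua/hadoop-stack/hadoop-1.2.1
export HADOOP_CONF_DIR=/home/hpc-ruhua/hadoop/conf/hadoop-conf.$PBS_JOBID

# The first allocated node acts as master (namenode/jobtracker); every node,
# including the master, is listed as a slave (datanode/tasktracker), which
# matches the cn53..cn56 layout in the logs below.
NODES=($(sort -u "$PBS_NODEFILE"))
echo "${NODES[0]}" > "$HADOOP_CONF_DIR/masters"        # secondary namenode host
printf '%s\n' "${NODES[@]}" > "$HADOOP_CONF_DIR/slaves"

# Start the HDFS and MapReduce daemons across the allocation, then run the job
"$HADOOP_HOME/bin/start-all.sh"
"$HADOOP_HOME/bin/hadoop" fs -mkdir data
"$HADOOP_HOME/bin/hadoop" fs -put pg2701.txt data
"$HADOOP_HOME/bin/hadoop" jar "$HADOOP_HOME/hadoop-examples-1.2.1.jar" wordcount data wordcount-output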
Here is the log:
===
myHadoop: Using HADOOP_HOME=/home/hpc-ruhua/hadoop-stack/hadoop-1.2.1
myHadoop: Using MH_SCRATCH_DIR=/tmp/hpc-ruhua/4128
myHadoop: Using JAVA_HOME=/usr
myHadoop: Generating Hadoop configuration in directory in /home/hpc-ruhua/hadoop/conf/hadoop-conf.4128...
myHadoop: Using directory /home/hpc-ruhua/hadoop/hdfs for persisting HDFS state...
myHadoop: Designating cn53 as master node (namenode, secondary namenode, and jobtracker)
myHadoop: The following nodes will be slaves (datanode, tasktracer):
cn53
cn54
cn55
cn56
Linking /home/hpc-ruhua/hadoop/hdfs/0 to /tmp/hpc-ruhua/4128/hdfs_data on cn53
Linking /home/hpc-ruhua/hadoop/hdfs/1 to /tmp/hpc-ruhua/4128/hdfs_data on cn54
Linking /home/hpc-ruhua/hadoop/hdfs/2 to /tmp/hpc-ruhua/4128/hdfs_data on cn55
Warning: Permanently added 'cn55,192.168.100.55' (RSA) to the list of known hosts.
Linking /home/hpc-ruhua/hadoop/hdfs/3 to /tmp/hpc-ruhua/4128/hdfs_data on cn56
Warning: Permanently added 'cn56,192.168.100.56' (RSA) to the list of known hosts.
starting namenode, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-namenode-cn53.out
cn53: starting datanode, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-datanode-cn53.out
cn54: starting datanode, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-datanode-cn54.out
cn55: starting datanode, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-datanode-cn55.out
cn56: starting datanode, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-datanode-cn56.out
cn53: starting secondarynamenode, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-secondarynamenode-cn53.out
starting jobtracker, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-jobtracker-cn53.out
cn53: starting tasktracker, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-tasktracker-cn53.out
cn56: starting tasktracker, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-tasktracker-cn56.out
cn55: starting tasktracker, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-tasktracker-cn55.out
cn54: starting tasktracker, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-tasktracker-cn54.out
mkdir: cannot create directory data: File exists
put: Target data/pg2701.txt already exists
Found 1 items
-rw-r--r--   3 hpc-ruhua supergroup          0 2015-01-07 00:09 /user/hpc-ruhua/data/pg2701.txt
15/01/14 12:21:08 ERROR security.UserGroupInformation: PriviledgedActionException as:hpc-ruhua cause:org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.mapred.JobTrackerNotYetInitializedException: JobTracker is not yet RUNNING
    at org.apache.hadoop.mapred.JobTracker.checkJobTrackerState(JobTracker.java:5199)
    at org.apache.hadoop.mapred.JobTracker.getNewJobId(JobTracker.java:3543)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.mapred.JobTrackerNotYetInitializedException: JobTracker is not yet RUNNING
    at org.apache.hadoop.mapred.JobTracker.checkJobTrackerState(JobTracker.java:5199)
    at org.apache.hadoop.mapred.JobTracker.getNewJobId(JobTracker.java:3543)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

    at org.apache.hadoop.ipc.Client.call(Client.java:1113)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at org.apache.hadoop.mapred.$Proxy2.getNewJobId(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at org.apache.hadoop.mapred.$Proxy2.getNewJobId(Unknown Source)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:944)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:82)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
ls: Cannot access wordcount-output: No such file or directory.
get: null
stopping jobtracker
cn54: stopping tasktracker
cn55: stopping tasktracker
cn53: stopping tasktracker
cn56: stopping tasktracker
stopping namenode
cn53: no datanode to stop
cn54: no datanode to stop
cn56: no datanode to stop
cn55: no datanode to stop
===

The error is "ERROR security.UserGroupInformation: PriviledgedActionException as:hpc-ruhua cause:org.apache.hadoop.ipc.RemoteException:". Does anybody have an idea of what the problem might be?

That is the result of using "$HADOOP_HOME/bin/start-all.sh". I tried splitting the start phase into:

$HADOOP_HOME/bin/hadoop namenode
$HADOOP_HOME/bin/hadoop datanode
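In full, the manual attempt on the master node (cn53) looks roughly like this. The jps and dfsadmin checks at the end are not part of myHadoop; they are just what I run to see which daemons actually came up:

# Point the Hadoop scripts at the conf dir myHadoop generated for this job
export HADOOP_HOME=/home/hpc-ruhua/hadoop-stack/hadoop-1.2.1
export HADOOP_CONF_DIR=/home/hpc-ruhua/hadoop/conf/hadoop-conf.4178

# Start the NameNode and a DataNode by hand instead of via start-all.sh;
# their startup messages go to the console (that is the log pasted below)
"$HADOOP_HOME/bin/hadoop" namenode &
"$HADOOP_HOME/bin/hadoop" datanode &

# Sanity checks: which Java daemons are running, and how many DataNodes
# have actually registered with the NameNode
jps
"$HADOOP_HOME/bin/hadoop" dfsadmin -report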
Below is the log:
===
myHadoop: Using HADOOP_HOME=/home/hpc-ruhua/hadoop-stack/hadoop-1.2.1
myHadoop: Using MH_SCRATCH_DIR=/tmp/hpc-ruhua/4178
myHadoop: Using JAVA_HOME=/usr
myHadoop: Generating Hadoop configuration in directory in /home/hpc-ruhua/hadoop/conf/hadoop-conf.4178...
myHadoop: Using directory /home/hpc-ruhua/hadoop/hdfs for persisting HDFS state...
myHadoop: Designating cn53 as master node (namenode, secondary namenode, and jobtracker)
myHadoop: The following nodes will be slaves (datanode, tasktracer):
cn53
cn54
cn55
cn56
Linking /home/hpc-ruhua/hadoop/hdfs/0 to /tmp/hpc-ruhua/4178/hdfs_data on cn53
Linking /home/hpc-ruhua/hadoop/hdfs/1 to /tmp/hpc-ruhua/4178/hdfs_data on cn54
Linking /home/hpc-ruhua/hadoop/hdfs/2 to /tmp/hpc-ruhua/4178/hdfs_data on cn55
Linking /home/hpc-ruhua/hadoop/hdfs/3 to /tmp/hpc-ruhua/4178/hdfs_data on cn56
15/01/16 15:35:14 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = cn53/192.168.100.53
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.7.0_71
************************************************************/
15/01/16 15:35:14 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
15/01/16 15:35:14 INFO impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
15/01/16 15:35:14 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
15/01/16 15:35:14 INFO impl.MetricsSystemImpl: NameNode metrics system started
15/01/16 15:35:14 INFO impl.MetricsSourceAdapter: MBean for source ugi registered.
15/01/16 15:35:14 INFO impl.MetricsSourceAdapter: MBean for source jvm registered.
15/01/16 15:35:14 INFO impl.MetricsSourceAdapter: MBean for source NameNode registered.
15/01/16 15:35:14 INFO util.GSet: Computing capacity for map BlocksMap
15/01/16 15:35:14 INFO util.GSet: VM type       = 64-bit
15/01/16 15:35:14 INFO util.GSet: 2.0% max memory = 932184064
15/01/16 15:35:14 INFO util.GSet: capacity      = 2^21 = 2097152 entries
15/01/16 15:35:14 INFO util.GSet: recommended=2097152, actual=2097152
15/01/16 15:35:15 INFO namenode.FSNamesystem: fsOwner=hpc-ruhua
15/01/16 15:35:15 INFO namenode.FSNamesystem: supergroup=supergroup
15/01/16 15:35:15 INFO namenode.FSNamesystem: isPermissionEnabled=true
15/01/16 15:35:15 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
15/01/16 15:35:15 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
15/01/16 15:35:15 INFO namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
15/01/16 15:35:15 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
15/01/16 15:35:15 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/01/16 15:35:15 INFO common.Storage: Start loading image file /tmp/hpc-ruhua/4178/namenode_data/current/fsimage
15/01/16 15:35:15 INFO common.Storage: Number of files = 28
15/01/16 15:35:15 INFO common.Storage: Number of files under construction = 1
15/01/16 15:35:15 INFO common.Storage: Image file /tmp/hpc-ruhua/4178/namenode_data/current/fsimage of size 2996 bytes loaded in 0 seconds.
15/01/16 15:35:15 INFO namenode.FSEditLog: Start loading edits file /tmp/hpc-ruhua/4178/namenode_data/current/edits
15/01/16 15:35:15 INFO namenode.FSEditLog: Invalid opcode, reached end of edit log Number of transactions found: 32.  Bytes read: 2579
15/01/16 15:35:15 INFO namenode.FSEditLog: Start checking end of edit log (/tmp/hpc-ruhua/4178/namenode_data/current/edits) ...
15/01/16 15:35:15 INFO namenode.FSEditLog: Checked the bytes after the end of edit log (/tmp/hpc-ruhua/4178/namenode_data/current/edits):
15/01/16 15:35:15 INFO namenode.FSEditLog:   Padding position  = 2579 (-1 means padding not found)
15/01/16 15:35:15 INFO namenode.FSEditLog:   Edit log length   = 1048580
15/01/16 15:35:15 INFO namenode.FSEditLog:   Read length       = 2579
15/01/16 15:35:15 INFO namenode.FSEditLog:   Corruption length = 0
15/01/16 15:35:15 INFO namenode.FSEditLog:   Toleration length = 0 (= dfs.namenode.edits.toleration.length)
15/01/16 15:35:15 INFO namenode.FSEditLog: Summary: |---------- Read=2579 ----------|-- Corrupt=0 --|-- Pad=1046001 --|
15/01/16 15:35:15 INFO namenode.FSEditLog: Edits file /tmp/hpc-ruhua/4178/namenode_data/current/edits of size 1048580 edits # 32 loaded in 0 seconds.
15/01/16 15:35:15 INFO common.Storage: Image file /tmp/hpc-ruhua/4178/namenode_data/current/fsimage of size 3745 bytes saved in 0 seconds.
15/01/16 15:35:15 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hpc-ruhua/4178/namenode_data/current/edits
15/01/16 15:35:15 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hpc-ruhua/4178/namenode_data/current/edits
15/01/16 15:35:16 INFO namenode.NameCache: initialized with 0 entries 0 lookups
15/01/16 15:35:16 INFO namenode.FSNamesystem: Finished loading FSImage in 1162 msecs
15/01/16 15:35:16 INFO namenode.FSNamesystem: dfs.safemode.threshold.pct          = 0.9990000128746033
15/01/16 15:35:16 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/01/16 15:35:16 INFO namenode.FSNamesystem: dfs.safemode.extension              = 30000
15/01/16 15:35:16 INFO namenode.FSNamesystem: Number of blocks excluded by safe block count: 0 total blocks: 0 and thus the safe blocks: 0
15/01/16 15:35:16 INFO namenode.FSNamesystem: Total number of blocks = 0
15/01/16 15:35:16 INFO namenode.FSNamesystem: Number of invalid blocks = 0
15/01/16 15:35:16 INFO namenode.FSNamesystem: Number of under-replicated blocks = 0
15/01/16 15:35:16 INFO namenode.FSNamesystem: Number of  over-replicated blocks = 0
15/01/16 15:35:16 INFO hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 7 msec
15/01/16 15:35:16 INFO hdfs.StateChange: STATE* Leaving safe mode after 1 secs
15/01/16 15:35:16 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
15/01/16 15:35:16 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
15/01/16 15:35:16 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list
15/01/16 15:35:16 INFO namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
15/01/16 15:35:16 INFO namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
15/01/16 15:35:16 INFO namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
15/01/16 15:35:16 INFO namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
15/01/16 15:35:16 INFO impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
15/01/16 15:35:16 INFO ipc.Server: Starting SocketReader
15/01/16 15:35:16 INFO impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort54310 registered.
15/01/16 15:35:16 INFO impl.MetricsSourceAdapter: MBean for source RpcActivityForPort54310 registered.
15/01/16 15:35:16 INFO namenode.NameNode: Namenode up at: cn53/192.168.100.53:54310
15/01/16 15:35:16 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
15/01/16 15:35:16 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
15/01/16 15:35:16 INFO http.HttpServer: dfs.webhdfs.enabled = false
15/01/16 15:35:16 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
15/01/16 15:35:16 INFO http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
15/01/16 15:35:16 INFO http.HttpServer: Jetty bound to port 50070
15/01/16 15:35:16 INFO mortbay.log: jetty-6.1.26
15/01/16 15:35:16 INFO mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
15/01/16 15:35:16 INFO namenode.NameNode: Web-server up at: 0.0.0.0:50070
15/01/16 15:35:16 INFO ipc.Server: IPC Server Responder: starting
15/01/16 15:35:16 INFO ipc.Server: IPC Server listener on 54310: starting
15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 0 on 54310: starting
15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 1 on 54310: starting
15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 2 on 54310: starting
15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 3 on 54310: starting
15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 4 on 54310: starting
15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 5 on 54310: starting
15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 6 on 54310: starting
15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 8 on 54310: starting
15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 7 on 54310: starting
15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 9 on 54310: starting
===

I can also provide the myHadoop launch script or other system information if that helps. I have been struggling with this problem for quite a long time. Could anyone help?

Best,
Ruhua