From: Ruhua Jiang <ruhua.jiang@gmail.com>
Subject: Re: Start Hadoop, ERROR security.UserGroupInformation: PriviledgedActionException
Date: Sat, 17 Jan 2015 12:10:36 -0500
To: user@hadoop.apache.org

Thanks Ted!

Ruhua

On Jan 16, 2015, at 4:08 PM, Ted Yu <yuzhihong@gmail.com> wrote:

> Have you looked at:
> http://sourceforge.net/p/myhadoop/mailman/?source=navbar
>
> Cheers
>
> On Fri, Jan 16, 2015 at 12:55 PM, Ruhua Jiang <ruhua.jiang@gmail.com> wrote:
> Hello,
>
> I am quite new to Hadoop. I am trying to run Hadoop on top of an HPC infrastructure using a solution called "myHadoop". Basically, it dynamically allocates some nodes from the HPC cluster and runs Hadoop on them (one node as the NameNode, the others as DataNodes). If anybody is familiar with it, that would be perfect, but I think my problem is mostly on the Hadoop side.
> I am using Hadoop 1.2.1 due to the limited support of myHadoop.
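>
> My actual submission script is not included here, but a myHadoop run is driven by a batch wrapper along the lines of the minimal sketch below. The script names myhadoop-configure.sh and myhadoop-cleanup.sh follow the myHadoop documentation, while the flags are assumptions that vary between versions; the paths match the log later in this message:
>
> #!/bin/bash
> #PBS -l nodes=4:ppn=1
> export HADOOP_HOME=$HOME/hadoop-stack/hadoop-1.2.1
> # Per-job config dir; myHadoop suffixes it with the job id
> # (hadoop-conf.4128 in the log below)
> export HADOOP_CONF_DIR=$HOME/hadoop/conf/hadoop-conf.$PBS_JOBID
>
> # Generate a Hadoop configuration for the 4 allocated nodes: the first
> # becomes the namenode/jobtracker, all of them datanodes/tasktrackers
> myhadoop-configure.sh -n 4 -c "$HADOOP_CONF_DIR"
>
> $HADOOP_HOME/bin/start-all.sh
> # ... run MapReduce jobs here ...
> $HADOOP_HOME/bin/stop-all.sh
> myhadoop-cleanup.sh -n 4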
>
> Here is the log:
> ===
> myHadoop: Using HADOOP_HOME=/home/hpc-ruhua/hadoop-stack/hadoop-1.2.1
> myHadoop: Using MH_SCRATCH_DIR=/tmp/hpc-ruhua/4128
> myHadoop: Using JAVA_HOME=/usr
> myHadoop: Generating Hadoop configuration in directory in /home/hpc-ruhua/hadoop/conf/hadoop-conf.4128...
> myHadoop: Using directory /home/hpc-ruhua/hadoop/hdfs for persisting HDFS state...
> myHadoop: Designating cn53 as master node (namenode, secondary namenode, and jobtracker)
> myHadoop: The following nodes will be slaves (datanode, tasktracer):
> cn53
> cn54
> cn55
> cn56
> Linking /home/hpc-ruhua/hadoop/hdfs/0 to /tmp/hpc-ruhua/4128/hdfs_data on cn53
> Linking /home/hpc-ruhua/hadoop/hdfs/1 to /tmp/hpc-ruhua/4128/hdfs_data on cn54
> Linking /home/hpc-ruhua/hadoop/hdfs/2 to /tmp/hpc-ruhua/4128/hdfs_data on cn55
> Warning: Permanently added 'cn55,192.168.100.55' (RSA) to the list of known hosts.
> Linking /home/hpc-ruhua/hadoop/hdfs/3 to /tmp/hpc-ruhua/4128/hdfs_data on cn56
> Warning: Permanently added 'cn56,192.168.100.56' (RSA) to the list of known hosts.
> starting namenode, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-namenode-cn53.out
> cn53: starting datanode, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-datanode-cn53.out
> cn54: starting datanode, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-datanode-cn54.out
> cn55: starting datanode, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-datanode-cn55.out
> cn56: starting datanode, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-datanode-cn56.out
> cn53: starting secondarynamenode, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-secondarynamenode-cn53.out
> starting jobtracker, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-jobtracker-cn53.out
> cn53: starting tasktracker, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-tasktracker-cn53.out
> cn56: starting tasktracker, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-tasktracker-cn56.out
> cn55: starting tasktracker, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-tasktracker-cn55.out
> cn54: starting tasktracker, logging to /tmp/hpc-ruhua/4128/logs/hadoop-hpc-ruhua-tasktracker-cn54.out
> mkdir: cannot create directory data: File exists
> put: Target data/pg2701.txt already exists
> Found 1 items
> -rw-r--r--   3 hpc-ruhua supergroup          0 2015-01-07 00:09 /user/hpc-ruhua/data/pg2701.txt
> 15/01/14 12:21:08 ERROR security.UserGroupInformation: PriviledgedActionException as:hpc-ruhua cause:org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.mapred.JobTrackerNotYetInitializedException: JobTracker is not yet RUNNING
>     at org.apache.hadoop.mapred.JobTracker.checkJobTrackerState(JobTracker.java:5199)
>     at org.apache.hadoop.mapred.JobTracker.getNewJobId(JobTracker.java:3543)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>
> org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.mapred.JobTrackerNotYetInitializedException: JobTracker is not yet RUNNING
>     at org.apache.hadoop.mapred.JobTracker.checkJobTrackerState(JobTracker.java:5199)
>     at org.apache.hadoop.mapred.JobTracker.getNewJobId(JobTracker.java:3543)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1113)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>     at org.apache.hadoop.mapred.$Proxy2.getNewJobId(Unknown Source)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>     at org.apache.hadoop.mapred.$Proxy2.getNewJobId(Unknown Source)
>     at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:944)
>     at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>     at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
>     at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
>     at org.apache.hadoop.examples.WordCount.main(WordCount.java:82)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>     at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>     at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
> ls: Cannot access wordcount-output: No such file or directory.
> get: null
> stopping jobtracker
> cn54: stopping tasktracker
> cn55: stopping tasktracker
> cn53: stopping tasktracker
> cn56: stopping tasktracker
> stopping namenode
> cn53: no datanode to stop
> cn54: no datanode to stop
> cn56: no datanode to stop
> cn55: no datanode to stop
> ===
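>
> For reference, the commands behind that run boil down to the stock wordcount example, roughly as in the sketch below (reconstructed from the output above; the jar name is the one shipped at the root of the Hadoop 1.2.1 tarball, and the exact invocation my script uses may differ):
>
> # Stage the input in HDFS (the mkdir/put complaints above just mean
> # these two steps had already run in an earlier attempt)
> $HADOOP_HOME/bin/hadoop dfs -mkdir data
> $HADOOP_HOME/bin/hadoop dfs -put pg2701.txt data/
> $HADOOP_HOME/bin/hadoop dfs -ls data
>
> # Submit the stock example; this is the step that fails with
> # JobTrackerNotYetInitializedException
> $HADOOP_HOME/bin/hadoop jar "$HADOOP_HOME/hadoop-examples-1.2.1.jar" wordcount data wordcount-output
>
> # Fetch the result; hence the "ls: Cannot access wordcount-output"
> # and "get: null" once the job never ran
> $HADOOP_HOME/bin/hadoop dfs -ls wordcount-output
> $HADOOP_HOME/bin/hadoop dfs -get wordcount-output wordcount-output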
>
> The error is "ERROR security.UserGroupInformation: PriviledgedActionException as:hpc-ruhua cause:org.apache.hadoop.ipc.RemoteException:". Does anybody have an idea of what might be the problem?
> That's the result of using "$HADOOP_HOME/bin/start-all.sh".
>
> I tried to split the start phase into:
> $HADOOP_HOME/bin/hadoop namenode
> $HADOOP_HOME/bin/hadoop datanode
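>
> Since the trace shows the submission hitting a JobTracker that is still initializing (getNewJobId is rejected until the JobTracker state reaches RUNNING), one workaround I am considering is simply retrying the submission until it is ready. A minimal sketch, assuming the example jar from the 1.2.1 distribution:
>
> # Retry for up to ~2 minutes; the submission fails fast with
> # JobTrackerNotYetInitializedException while the JobTracker starts up
> for i in $(seq 1 24); do
>     if $HADOOP_HOME/bin/hadoop jar "$HADOOP_HOME/hadoop-examples-1.2.1.jar" \
>          wordcount data wordcount-output; then
>         break
>     fi
>     sleep 5
> done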
>
> Below is the log from that attempt:
> myHadoop: Using HADOOP_HOME=/home/hpc-ruhua/hadoop-stack/hadoop-1.2.1
> myHadoop: Using MH_SCRATCH_DIR=/tmp/hpc-ruhua/4178
> myHadoop: Using JAVA_HOME=/usr
> myHadoop: Generating Hadoop configuration in directory in /home/hpc-ruhua/hadoop/conf/hadoop-conf.4178...
> myHadoop: Using directory /home/hpc-ruhua/hadoop/hdfs for persisting HDFS state...
> myHadoop: Designating cn53 as master node (namenode, secondary namenode, and jobtracker)
> myHadoop: The following nodes will be slaves (datanode, tasktracer):
> cn53
> cn54
> cn55
> cn56
> Linking /home/hpc-ruhua/hadoop/hdfs/0 to /tmp/hpc-ruhua/4178/hdfs_data on cn53
> Linking /home/hpc-ruhua/hadoop/hdfs/1 to /tmp/hpc-ruhua/4178/hdfs_data on cn54
> Linking /home/hpc-ruhua/hadoop/hdfs/2 to /tmp/hpc-ruhua/4178/hdfs_data on cn55
> Linking /home/hpc-ruhua/hadoop/hdfs/3 to /tmp/hpc-ruhua/4178/hdfs_data on cn56
> 15/01/16 15:35:14 INFO namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = cn53/192.168.100.53
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 1.2.1
> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
> STARTUP_MSG:   java = 1.7.0_71
> ************************************************************/
> 15/01/16 15:35:14 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> 15/01/16 15:35:14 INFO impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
> 15/01/16 15:35:14 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
> 15/01/16 15:35:14 INFO impl.MetricsSystemImpl: NameNode metrics system started
> 15/01/16 15:35:14 INFO impl.MetricsSourceAdapter: MBean for source ugi registered.
> 15/01/16 15:35:14 INFO impl.MetricsSourceAdapter: MBean for source jvm registered.
> 15/01/16 15:35:14 INFO impl.MetricsSourceAdapter: MBean for source NameNode registered.
> 15/01/16 15:35:14 INFO util.GSet: Computing capacity for map BlocksMap
> 15/01/16 15:35:14 INFO util.GSet: VM type       = 64-bit
> 15/01/16 15:35:14 INFO util.GSet: 2.0% max memory = 932184064
> 15/01/16 15:35:14 INFO util.GSet: capacity      = 2^21 = 2097152 entries
> 15/01/16 15:35:14 INFO util.GSet: recommended=2097152, actual=2097152
> 15/01/16 15:35:15 INFO namenode.FSNamesystem: fsOwner=hpc-ruhua
> 15/01/16 15:35:15 INFO namenode.FSNamesystem: supergroup=supergroup
> 15/01/16 15:35:15 INFO namenode.FSNamesystem: isPermissionEnabled=true
> 15/01/16 15:35:15 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
> 15/01/16 15:35:15 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> 15/01/16 15:35:15 INFO namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
> 15/01/16 15:35:15 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
> 15/01/16 15:35:15 INFO namenode.NameNode: Caching file names occuring more than 10 times
> 15/01/16 15:35:15 INFO common.Storage: Start loading image file /tmp/hpc-ruhua/4178/namenode_data/current/fsimage
> 15/01/16 15:35:15 INFO common.Storage: Number of files = 28
> 15/01/16 15:35:15 INFO common.Storage: Number of files under construction = 1
> 15/01/16 15:35:15 INFO common.Storage: Image file /tmp/hpc-ruhua/4178/namenode_data/current/fsimage of size 2996 bytes loaded in 0 seconds.
> 15/01/16 15:35:15 INFO namenode.FSEditLog: Start loading edits file /tmp/hpc-ruhua/4178/namenode_data/current/edits
> 15/01/16 15:35:15 INFO namenode.FSEditLog: Invalid opcode, reached end of edit log Number of transactions found: 32.  Bytes read: 2579
> 15/01/16 15:35:15 INFO namenode.FSEditLog: Start checking end of edit log (/tmp/hpc-ruhua/4178/namenode_data/current/edits) ...
> 15/01/16 15:35:15 INFO namenode.FSEditLog: Checked the bytes after the end of edit log (/tmp/hpc-ruhua/4178/namenode_data/current/edits):
> 15/01/16 15:35:15 INFO namenode.FSEditLog:   Padding position  = 2579 (-1 means padding not found)
> 15/01/16 15:35:15 INFO namenode.FSEditLog:   Edit log length   = 1048580
> 15/01/16 15:35:15 INFO namenode.FSEditLog:   Read length       = 2579
> 15/01/16 15:35:15 INFO namenode.FSEditLog:   Corruption length = 0
> 15/01/16 15:35:15 INFO namenode.FSEditLog:   Toleration length = 0 (= dfs.namenode.edits.toleration.length)
> 15/01/16 15:35:15 INFO namenode.FSEditLog: Summary: |---------- Read=2579 ----------|-- Corrupt=0 --|-- Pad=1046001 --|
> 15/01/16 15:35:15 INFO namenode.FSEditLog: Edits file /tmp/hpc-ruhua/4178/namenode_data/current/edits of size 1048580 edits # 32 loaded in 0 seconds.
> 15/01/16 15:35:15 INFO common.Storage: Image file /tmp/hpc-ruhua/4178/namenode_data/current/fsimage of size 3745 bytes saved in 0 seconds.
> 15/01/16 15:35:15 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hpc-ruhua/4178/namenode_data/current/edits
> 15/01/16 15:35:15 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hpc-ruhua/4178/namenode_data/current/edits
> 15/01/16 15:35:16 INFO namenode.NameCache: initialized with 0 entries 0 lookups
> 15/01/16 15:35:16 INFO namenode.FSNamesystem: Finished loading FSImage in 1162 msecs
> 15/01/16 15:35:16 INFO namenode.FSNamesystem: dfs.safemode.threshold.pct          = 0.9990000128746033
> 15/01/16 15:35:16 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
> 15/01/16 15:35:16 INFO namenode.FSNamesystem: dfs.safemode.extension              = 30000
> 15/01/16 15:35:16 INFO namenode.FSNamesystem: Number of blocks excluded by safe block count: 0 total blocks: 0 and thus the safe blocks: 0
> 15/01/16 15:35:16 INFO namenode.FSNamesystem: Total number of blocks = 0
> 15/01/16 15:35:16 INFO namenode.FSNamesystem: Number of invalid blocks = 0
> 15/01/16 15:35:16 INFO namenode.FSNamesystem: Number of under-replicated blocks = 0
> 15/01/16 15:35:16 INFO namenode.FSNamesystem: Number of  over-replicated blocks = 0
> 15/01/16 15:35:16 INFO hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 7 msec
> 15/01/16 15:35:16 INFO hdfs.StateChange: STATE* Leaving safe mode after 1 secs
> 15/01/16 15:35:16 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
> 15/01/16 15:35:16 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
> 15/01/16 15:35:16 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list
> 15/01/16 15:35:16 INFO namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
> 15/01/16 15:35:16 INFO namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
> 15/01/16 15:35:16 INFO namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
> 15/01/16 15:35:16 INFO namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
> 15/01/16 15:35:16 INFO impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
> 15/01/16 15:35:16 INFO ipc.Server: Starting SocketReader
> 15/01/16 15:35:16 INFO impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort54310 registered.
> 15/01/16 15:35:16 INFO impl.MetricsSourceAdapter: MBean for source RpcActivityForPort54310 registered.
> 15/01/16 15:35:16 INFO namenode.NameNode: Namenode up at: cn53/192.168.100.53:54310
> 15/01/16 15:35:16 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> 15/01/16 15:35:16 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 15/01/16 15:35:16 INFO http.HttpServer: dfs.webhdfs.enabled = false
> 15/01/16 15:35:16 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
> 15/01/16 15:35:16 INFO http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
> 15/01/16 15:35:16 INFO http.HttpServer: Jetty bound to port 50070
> 15/01/16 15:35:16 INFO mortbay.log: jetty-6.1.26
> 15/01/16 15:35:16 INFO mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
> 15/01/16 15:35:16 INFO namenode.NameNode: Web-server up at: 0.0.0.0:50070
> 15/01/16 15:35:16 INFO ipc.Server: IPC Server Responder: starting
> 15/01/16 15:35:16 INFO ipc.Server: IPC Server listener on 54310: starting
> 15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 0 on 54310: starting
> 15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 1 on 54310: starting
> 15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 2 on 54310: starting
> 15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 3 on 54310: starting
> 15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 4 on 54310: starting
> 15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 5 on 54310: starting
> 15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 6 on 54310: starting
> 15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 8 on 54310: starting
> 15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 7 on 54310: starting
> 15/01/16 15:35:16 INFO ipc.Server: IPC Server handler 9 on 54310: starting
>
> ==
> I can also provide the script that runs myHadoop, or other system information, if that helps. I have been struggling with this problem for quite a long time. Could anyone help?
>
> Best,
> Ruhua