Subject: Re: failed to run hama pi example
From: Suraj Menon
To: user@hama.apache.org
Date: Wed, 12 Sep 2012 21:27:49 -0400

Sorry Walker, I am a little late in responding. For further help you can refer to the PDFs here: http://wiki.apache.org/hama/GettingStarted

It has an installation guide and an introduction to the programming model. I am in the process of converting it to an HTML ebook. The document contents are still evolving, but the installation guide has been just enough to get started on Hama. Good luck.

-Suraj

On Wed, Sep 12, 2012 at 9:17 PM, Edward J. Yoon wrote:

No problem, walker. Thanks a lot for your feedback. :-)

On Wed, Sep 12, 2012 at 11:11 PM, 顾荣 wrote:

Hi, Thomas and Edward.

I am sorry, I did not copy the Hadoop jar into the Hama lib folder, so a problem came up when I first used Hadoop 0.20.2 with Hama 0.5. When I used Hadoop 1.0.3, I did not replace the Hadoop jar in the Hama lib either.
However, by default Hama 0.5 ships hadoop-core-1.0.0.jar in its lib, and since Hadoop 1.0.0 probably does not differ much from Hadoop 1.0.3 in its communication protocol, the pi example fortunately passed.

By the way, I have tested that Hama 0.5 really does work well with Hadoop 0.20.2 after replacing the Hadoop jar files in the ${HAMA_HOME}/lib folder. It makes sense: when starting, the Hama bspmaster needs to communicate with the Namenode, so the Hadoop jar it uses must match the version of the running HDFS. That is why the log shows error messages such as "can not connect to the namenode" and "RPC failed".

During installation, I just followed this guide, http://hama.apache.org/getting_started_with_hama.html, and missed its linked page http://wiki.apache.org/hama/CompatibilityTable. Sorry again.

Regards,
Walker
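In concrete terms, the jar swap Walker describes comes down to a few shell commands. This is only a sketch: the jar names assume the stock Hadoop 0.20.2 tarball layout and the Hama 0.5.0 paths mentioned in this thread, and the start/stop script names are assumed to be Hama's standard daemon scripts.

    # remove the bundled client jar that does not match the running cluster
    rm $HAMA_HOME/lib/hadoop-core-1.0.0.jar

    # copy in the core jar of the Hadoop version that is actually running
    cp $HADOOP_HOME/hadoop-0.20.2-core.jar $HAMA_HOME/lib/

    # restart the Hama daemons so the bspmaster picks up the new classpath
    $HAMA_HOME/bin/stop-bspd.sh
    $HAMA_HOME/bin/start-bspd.sh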
2012/9/12 Thomas Jungblut <thomas.jungblut@gmail.com>

Hey walker,

did you copy the Hadoop jar into the Hama lib folder? Otherwise I can't explain this.

2012/9/12 Edward J. Yoon

I use the 0.20.2 and 0.20.2-cdh versions. There's no problem.

Sent from my iPad

On Sep 12, 2012, at 4:59 PM, Thomas Jungblut <thomas.jungblut@gmail.com> wrote:

Has anyone tested the compatibility of Hadoop 0.20.2 with Hama 0.5? [1] says it is compatible.

[1] http://wiki.apache.org/hama/CompatibilityTable

2012/9/12 顾荣

Okay, I'll try Hadoop 1.0.3 with Hama 0.5.0. Thanks Thomas. I can't wait to explore the Hama world now.

walker

2012/9/12 Thomas Jungblut

Oh okay, I'm not sure if 0.5.0 is really compatible with 0.20.2; personally I have installed 1.0.3 and it works fine. Sorry to make you install all the different versions.

2012/9/12 顾荣

Thanks Thomas. The HDFS works well; I even put a file from local to it successfully. It has definitely left safe mode. The namenode startup log is as below:

2012-09-12 15:10:39,002 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = slave021/192.168.1.21
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2012-09-12 15:10:39,092 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=54310
2012-09-12 15:10:39,098 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: slave021/192.168.1.21:54310
2012-09-12 15:10:39,100 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2012-09-12 15:10:39,101 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2012-09-12 15:10:39,143 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop,hadoop_user,wheel
2012-09-12 15:10:39,144 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2012-09-12 15:10:39,144 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2012-09-12 15:10:39,150 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
2012-09-12 15:10:39,151 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2012-09-12 15:10:39,177 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2012-09-12 15:10:39,181 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2012-09-12 15:10:39,181 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 96 loaded in 0 seconds.
2012-09-12 15:10:39,181 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /home/hadoop/gurong/hadoop-0.20.2/hadoop_dir/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2012-09-12 15:10:39,236 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 96 saved in 0 seconds.
2012-09-12 15:10:39,439 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 312 msecs
2012-09-12 15:10:39,441 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2012-09-12 15:10:39,441 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2012-09-12 15:10:39,441 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2012-09-12 15:10:39,441 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
2012-09-12 15:10:39,441 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
2012-09-12 15:10:39,441 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2012-09-12 15:10:39,441 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2012-09-12 15:10:39,554 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2012-09-12 15:10:39,603 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2012-09-12 15:10:39,604 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2012-09-12 15:10:39,604 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2012-09-12 15:10:39,604 INFO org.mortbay.log: jetty-6.1.14
2012-09-12 15:10:48,662 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
2012-09-12 15:10:48,663 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
2012-09-12 15:10:48,666 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 54310: starting
2012-09-12 15:10:48,667 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2012-09-12 15:10:48,668 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 54310: starting
2012-09-12 15:10:48,668 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 54310: starting
2012-09-12 15:10:48,668 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 54310: starting
2012-09-12 15:10:48,668 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 54310: starting
2012-09-12 15:10:48,668 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310: starting
2012-09-12 15:10:48,669 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 54310: starting
2012-09-12 15:10:48,669 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 54310: starting
2012-09-12 15:10:48,669 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310: starting
2012-09-12 15:10:48,669 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 54310: starting
2012-09-12 15:10:48,669 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 54310: starting
2012-09-12 15:10:48,700 INFO org.apache.hadoop.ipc.Server: Error register getProtocolVersion
java.lang.IllegalArgumentException: Duplicate metricsName:getProtocolVersion
    at org.apache.hadoop.metrics.util.MetricsRegistry.add(MetricsRegistry.java:53)
    at org.apache.hadoop.metrics.util.MetricsTimeVaryingRate.<init>(MetricsTimeVaryingRate.java:89)
    at org.apache.hadoop.metrics.util.MetricsTimeVaryingRate.<init>(MetricsTimeVaryingRate.java:99)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
2012-09-12 15:11:05,298 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 192.168.1.21:50010 storage DS-1416037815-192.168.1.21-50010-1347433865293
2012-09-12 15:11:05,300 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.1.21:50010
2012-09-12 15:11:15,069 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=webuser,webgroup ip=/192.168.1.21 cmd=listStatus src=/ dst=null perm=null
2012-09-12 15:12:05,034 WARN org.apache.hadoop.ipc.Server: Incorrect header or version mismatch from 192.168.1.21:56281 got version 4 expected version 3
2012-09-12 15:14:51,535 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop,hadoop_user,wheel ip=/192.168.1.21 cmd=listStatus src=/ dst=null perm=null
2012-09-12 15:15:10,158 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0

I used Hama 0.5 and Hadoop 0.20.2. Has somebody tested whether this combination works well?

thanks very much.

walker
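The "got version 4 expected version 3" warning in that log is the telltale line: an RPC client built against Hadoop 1.0.x (which speaks IPC version 4, and is exactly what the hadoop-core-1.0.0.jar bundled with Hama 0.5 provides) is talking to a 0.20.2 namenode that speaks IPC version 3. A quick way to put the two versions side by side, sketched under the assumption that both installs live where this thread puts them:

    # the version the cluster is actually running
    $HADOOP_HOME/bin/hadoop version

    # the client jar Hama will put on the bspmaster classpath
    ls $HAMA_HOME/lib/ | grep hadoop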
2012/9/12 Thomas Jungblut

Still it says:

    "2012-09-12 14:41:16,218 ERROR org.apache.hama.bsp.BSPMaster: Can't get connection to Hadoop Namenode!"

Can you verify that the namenode is not in safemode and has correctly started up? Have a look into the namenode logs, please!

2012/9/12 顾荣

By the way, the fs.default.name is 192.168.1.21:54310. I checked the HDFS and it works well. I installed and ran both HDFS and Hama using the same Linux account.
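One way to reproduce exactly what the bspmaster experiences is to issue an HDFS listing with the jars Hama ships, rather than with the cluster's own hadoop script. A sketch, reusing the namenode address from this thread and assuming Hama's lib directory carries the client's dependencies; if the client jar is the wrong version, this fails with the same EOFException the bspmaster reports:

    # list HDFS using only the jars on Hama's classpath (Java 6+ wildcard)
    java -cp "$HAMA_HOME/lib/*" org.apache.hadoop.fs.FsShell -ls hdfs://192.168.1.21:54310/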
2012/9/12 顾荣

Thanks so much, Edward.

I followed your suggestion and installed Hadoop 0.20.2 instead for Hama. However, this time when I start Hama, a fatal error happens and the bspmaster daemon cannot start up. The corresponding error message in the bspmaster log file is shown below.

2012-09-12 14:40:38,238 INFO org.apache.hama.BSPMasterRunner: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting BSPMaster
STARTUP_MSG:   host = slave021/192.168.1.21
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1214675; compiled by 'hortonfo' on Fri Dec 16 20:01:27 UTC 2011
************************************************************/
2012-09-12 14:40:38,414 INFO org.apache.hama.bsp.BSPMaster: RPC BSPMaster: host slave021 port 40000
2012-09-12 14:40:38,502 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2012-09-12 14:40:38,542 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2012-09-12 14:40:38,583 INFO org.apache.hama.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 40013
2012-09-12 14:40:38,584 INFO org.apache.hama.http.HttpServer: listener.getLocalPort() returned 40013 webServer.getConnectors()[0].getLocalPort() returned 40013
2012-09-12 14:40:38,584 INFO org.apache.hama.http.HttpServer: Jetty bound to port 40013
2012-09-12 14:40:38,584 INFO org.mortbay.log: jetty-6.1.14
2012-09-12 14:40:38,610 INFO org.mortbay.log: Extract jar:file:/home/hadoop/hama_installs/hama-0.5.0/hama-core-0.5.0.jar!/webapp/bspmaster/ to /tmp/Jetty_slave021_40013_bspmaster____.1tzgsz/webapp
2012-09-12 14:41:16,073 INFO org.mortbay.log: Started SelectChannelConnector@slave021:40013
2012-09-12 14:41:16,218 ERROR org.apache.hama.bsp.BSPMaster: Can't get connection to Hadoop Namenode!
java.io.IOException: Call to /192.168.1.21:54310 failed on local exception: java.io.EOFException
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1103)
    at org.apache.hadoop.ipc.Client.call(Client.java:1071)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy5.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
    at org.apache.hama.bsp.BSPMaster.<init>(BSPMaster.java:299)
    at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:454)
    at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:449)
    at org.apache.hama.BSPMasterRunner.run(BSPMasterRunner.java:46)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
    at org.apache.hama.BSPMasterRunner.main(BSPMasterRunner.java:56)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:800)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:745)
2012-09-12 14:41:16,222 FATAL org.apache.hama.BSPMasterRunner: java.lang.NullPointerException
    at org.apache.hama.bsp.BSPMaster.getSystemDir(BSPMaster.java:862)
    at org.apache.hama.bsp.BSPMaster.<init>(BSPMaster.java:308)
    at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:454)
    at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:449)
    at org.apache.hama.BSPMasterRunner.run(BSPMasterRunner.java:46)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
    at org.apache.hama.BSPMasterRunner.main(BSPMasterRunner.java:56)

2012-09-12 14:41:16,223 INFO org.apache.hama.BSPMasterRunner: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down BSPMaster at slave021/192.168.1.21
************************************************************/

Would you please give me some tips again?

Thanks again.

walker
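The client-side java.io.EOFException here and the namenode-side "Incorrect header or version mismatch" warning seen earlier are two views of the same root cause: the server drops the connection after rejecting the RPC header, so the client's readInt() hits end-of-stream. When diagnosing, it helps to grep both logs. A sketch; the file-name patterns are an assumption based on the usual Hadoop/Hama daemon-log naming conventions:

    # the namenode's view of the rejected connection
    grep "version mismatch" $HADOOP_HOME/logs/hadoop-*-namenode-*.log

    # the bspmaster's view of the same failure
    grep "EOFException" $HAMA_HOME/logs/hama-*-bspmaster-*.log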
2012/9/12 Edward J. Yoon

Unfortunately, we don't support the Hadoop secure version yet.

Instead of 0.20.205, please use the non-secure Hadoop 0.20.2 or 1.0.3 versions.

Thanks.

On Wed, Sep 12, 2012 at 11:25 AM, 顾荣 wrote:

Hi, all.

I set up a Hama cluster of 3 nodes and started Hama successfully. However, when I run the pi example, the job fails with a very strange message, shown below.

    hama jar /home/hadoop/hama_installs/hama-0.5.0/hama-examples-0.5.0.jar pi

org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.protocol.ClientProtocol.create(java.lang.String, org.apache.hadoop.fs.permission.FsPermission, java.lang.String, boolean, boolean, short, long)
    at java.lang.Class.getMethod(Class.java:1605)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

    at org.apache.hadoop.ipc.Client.call(Client.java:1066)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy2.create(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy2.create(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:3245)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:713)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:182)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:555)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:536)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:443)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:229)
    at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1195)
    at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1171)
    at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1143)
    at org.apache.hama.bsp.BSPJobClient.submitJobInternal(BSPJobClient.java:349)
    at org.apache.hama.bsp.BSPJobClient.submitJob(BSPJobClient.java:294)
    at org.apache.hama.bsp.BSPJob.submit(BSPJob.java:218)
    at org.apache.hama.bsp.BSPJob.waitForCompletion(BSPJob.java:225)
    at org.apache.hama.examples.PiEstimator.main(PiEstimator.java:139)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
    at org.apache.hama.examples.ExampleDriver.main(ExampleDriver.java:39)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hama.util.RunJar.main(RunJar.java:147)

My Hama version is 0.5 and my Hadoop version is 0.20.205. This error seems to come from the org.apache.hadoop.hdfs.protocol.ClientProtocol.create method, which is a normal method, so I am kind of confused...

Thanks in advance.

walker
--
Best Regards, Edward J. Yoon
@eddieyoon
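To close the loop for readers who land on this thread: once the Hadoop jar under ${HAMA_HOME}/lib matches the running cluster (see the jar-swap sketch earlier in the thread) and a supported, non-secure Hadoop (0.20.2 or 1.0.3, not 0.20.205) is in place, restart the daemons and rerun the example that started all of this. The jar path and the pi invocation are the ones walker used; the script names are assumed to be Hama's standard ones:

    # restart Hama, then rerun the pi example
    $HAMA_HOME/bin/stop-bspd.sh
    $HAMA_HOME/bin/start-bspd.sh
    $HAMA_HOME/bin/hama jar /home/hadoop/hama_installs/hama-0.5.0/hama-examples-0.5.0.jar pi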