hama-user mailing list archives

From Thomas Jungblut <thomas.jungb...@gmail.com>
Subject Re: failed to run hama pi example
Date Mon, 17 Sep 2012 05:35:25 GMT
Done. Thanks for pointing that out, walker!
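For readers hitting the same confusion about the declaration discussed below: BSPPeer takes five type parameters — input key, input value, output key, output value, and the message type. Here is a standalone sketch using a hypothetical `MiniPeer` class that mirrors those generics (it is NOT the real Hama API, just an illustration that the fifth parameter is the message type and cannot be dropped):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical stand-in mirroring BSPPeer's five generic parameters
// (input key, input value, output key, output value, message type M).
// NOT the real Hama API; only illustrates that the fifth parameter is
// the message type.
class MiniPeer<K1, V1, K2, V2, M> {
    private final Queue<M> inbox = new ArrayDeque<>();

    // Deliver a message to this peer's own inbox (the real BSPPeer sends to
    // a named peer, and messages become visible after sync()).
    void send(M message) { inbox.add(message); }

    // Returns the next message, or null when the inbox is empty.
    M getCurrentMessage() { return inbox.poll(); }
}

public class BspTypeDemo {
    public static void main(String[] args) {
        // Double stands in for DoubleWritable as the fifth (message) type,
        // matching the corrected declaration
        // BSPPeer<NullWritable, NullWritable, Text, DoubleWritable, DoubleWritable>.
        MiniPeer<Void, Void, String, Double, Double> peer = new MiniPeer<>();
        peer.send(3.14);
        System.out.println(peer.getCurrentMessage()); // prints 3.14
    }
}
```

Leaving out the fifth argument at a use site is a compile error in Java, which is the point Thomas makes: the message type is not optional.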

2012/9/17 顾荣 <gurongwalker@gmail.com>

> Hi, Thomas.
>
> If you want to update that information, please also check and update the
> corresponding wiki page here: http://wiki.apache.org/hama/BSPModel
>
> Walker.
>
> 2012/9/17 Thomas Jungblut <thomas.jungblut@gmail.com>
>
> > Hi,
> >
> > type parameters in Java are not optional, and the last one is the message
> > type. I will edit this in our internal doc.
> > Thanks for finding it ;)
> >
> > 2012/9/16 顾荣 <gurongwalker@gmail.com>
> >
> > > Thanks, Yuesheng.
> > > However, I also see the declarations "BSPPeer<LongWritable, Text, KEYOUT,
> > > VALUEOUT>" on page 15 and "BSPPeer<NullWritable, NullWritable, Text,
> > > DoubleWritable>" on page 17.
> > > Is the message type optional, and what is it used for?
> > >
> > > Walker.
> > >
> > >
> > >
> > > 2012/9/16 Yuesheng Hu <yueshenghu@gmail.com>
> > >
> > > > I think there is no error; the fifth parameter is the message type.
> > > > On 2012-9-16 at 11:35 PM, "顾荣" <gurongwalker@gmail.com> wrote:
> > > >
> > > > > Hi, Suraj.
> > > > >
> > > > > I am reading the "BSP Programming Model" PDF (
> > > > > https://issues.apache.org/jira/secure/attachment/12528218/ApacheHamaBSPProgrammingmodel.pdf
> > > > > ) on the Hama wiki that you introduced to me.
> > > > > I suspect there is a type error in this PDF.
> > > > > See the "BSPPeer<NullWritable, NullWritable, Text, DoubleWritable,
> > > > > DoubleWritable> peer" in the code block on page 16. Should it be
> > > > > "BSPPeer<NullWritable, NullWritable, Text, DoubleWritable> peer"?
> > > > >
> > > > > BTW, this PDF file is a wonderful guide for me, thanks.
> > > > >
> > > > > Walker.
> > > > >
> > > > > 2012/9/13 顾荣 <gurongwalker@gmail.com>
> > > > >
> > > > > > Hi, Suraj.
> > > > > >
> > > > > > Thanks so much. I was just looking for introductory material on the
> > > > > > programming model and the execution mechanism in Hama. You guys are
> > > > > > so kind.
> > > > > >
> > > > > > Regards
> > > > > > Walker.
> > > > > >
> > > > > >
> > > > > > 2012/9/13 Suraj Menon <surajsmenon@apache.org>
> > > > > >
> > > > > >> Sorry, Walker, I am a little late in responding. For further help
> > > > > >> you can refer to the PDFs here:
> > > > > >> http://wiki.apache.org/hama/GettingStarted
> > > > > >> They include an installation guide and an introduction to the
> > > > > >> programming model. I am in the process of converting them to an
> > > > > >> HTML ebook.
> > > > > >> The document contents are still evolving, but the installation
> > > > > >> guide has been enough to get started on Hama.
> > > > > >>
> > > > > >> Good luck.
> > > > > >>
> > > > > >> -Suraj
> > > > > >>
> > > > > >> On Wed, Sep 12, 2012 at 9:17 PM, Edward J. Yoon <edwardyoon@apache.org> wrote:
> > > > > >>
> > > > > >> > No problem, walker. Thanks a lot for your feedback. :-)
> > > > > >> >
> > > > > >> > > On Wed, Sep 12, 2012 at 11:11 PM, 顾荣 <gurongwalker@gmail.com> wrote:
> > > > > >> > > Hi, Thomas and Edward.
> > > > > >> > >
> > > > > >> > > I am sorry, I did not copy the Hadoop jar into the Hama lib
> > > > > >> > > folder. That is why a problem appeared when I first used Hadoop
> > > > > >> > > 0.20.2 with Hama 0.5. When I used Hadoop 1.0.3, I did not
> > > > > >> > > replace the Hadoop jar in the Hama lib either; however, by
> > > > > >> > > default Hama 0.5 ships hadoop-core-1.0.0.jar in its lib.
> > > > > >> > > Probably because Hadoop 1.0.0 does not differ much from Hadoop
> > > > > >> > > 1.0.3 in its communication protocol, the pi example happened to
> > > > > >> > > pass.
> > > > > >> > >
> > > > > >> > > By the way, I have verified that Hama 0.5 really does work well
> > > > > >> > > with Hadoop 0.20.2 after replacing the Hadoop jar files in the
> > > > > >> > > ${HAMA_HOME}/lib folder.
> > > > > >> > > It makes sense: when starting, the Hama bspmaster needs to
> > > > > >> > > communicate with the Namenode, so the Hadoop jar it uses must
> > > > > >> > > match the version of the running HDFS. That is why the log
> > > > > >> > > shows error messages such as "cannot connect to the namenode"
> > > > > >> > > and "RPC failed".
> > > > > >> > >
> > > > > >> > > During installation, I just followed this guide,
> > > > > >> > > http://hama.apache.org/getting_started_with_hama.html, and
> > > > > >> > > missed its linked page
> > > > > >> > > http://wiki.apache.org/hama/CompatibilityTable. Sorry again.
> > > > > >> > >
> > > > > >> > > Regards.
> > > > > >> > > Walker.
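To make the jar swap Walker describes concrete, here is a hedged sketch. The real `HAMA_HOME`/`HADOOP_HOME` paths are assumptions (throwaway temp directories stand in so the commands run anywhere), and the jar names follow the usual 0.20.x (`hadoop-0.20.2-core.jar`) and 1.0.x (`hadoop-core-1.0.0.jar`) naming conventions:

```shell
# Stand-ins for the real install directories (assumed paths).
HAMA_HOME=$(mktemp -d)      # e.g. /home/hadoop/hama_installs/hama-0.5.0
HADOOP_HOME=$(mktemp -d)    # e.g. /home/hadoop/gurong/hadoop-0.20.2
mkdir -p "$HAMA_HOME/lib"
touch "$HAMA_HOME/lib/hadoop-core-1.0.0.jar"    # jar bundled with Hama 0.5
touch "$HADOOP_HOME/hadoop-0.20.2-core.jar"     # jar of the running HDFS

# The actual swap: drop the bundled Hadoop jar and copy in the cluster's jar,
# so the BSPMaster speaks the same RPC version as the Namenode.
rm -f "$HAMA_HOME"/lib/hadoop-core-*.jar
cp "$HADOOP_HOME"/hadoop-*-core.jar "$HAMA_HOME/lib/"

ls "$HAMA_HOME/lib"    # prints hadoop-0.20.2-core.jar
```

After the swap, restarting the bspmaster should let it connect to the Namenode, matching what Walker observed.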
> > > > > >> > >
> > > > > >> > > 2012/9/12 Thomas Jungblut <thomas.jungblut@gmail.com>
> > > > > >> > >
> > > > > >> > >> Hey walker,
> > > > > >> > >>
> > > > > >> > >> did you copy the Hadoop jar into the Hama lib folder?
> > > > > >> > >> Otherwise I can't explain this.
> > > > > >> > >>
> > > > > >> > >> 2012/9/12 Edward J. Yoon <edwardyoon@apache.org>
> > > > > >> > >>
> > > > > >> > >> > I use the 0.20.2 and 0.20.2-cdh versions. There's no
> > > > > >> > >> > problem.
> > > > > >> > >> >
> > > > > >> > >> > Sent from my iPad
> > > > > >> > >> >
> > > > > >> > >> > On Sep 12, 2012, at 4:59 PM, Thomas Jungblut <thomas.jungblut@gmail.com> wrote:
> > > > > >> > >> >
> > > > > >> > >> > > Has anyone tested the compatibility of Hama 0.5 with
> > > > > >> > >> > > Hadoop 0.20.2?
> > > > > >> > >> > > [1] says it is compatible.
> > > > > >> > >> > >
> > > > > >> > >> > > [1] http://wiki.apache.org/hama/CompatibilityTable
> > > > > >> > >> > >
> > > > > >> > >> > > 2012/9/12 顾荣 <gurongwalker@gmail.com>
> > > > > >> > >> > >
> > > > > >> > >> > >> Okay, I'll try Hadoop 1.0.3 with Hama 0.5.0.
> > > > > >> > >> > >> Thanks Thomas. I can't wait to explore the Hama world
> > now.
> > > > > >> > >> > >>
> > > > > >> > >> > >> walker.
> > > > > >> > >> > >>
> > > > > >> > >> > >> 2012/9/12 Thomas Jungblut <thomas.jungblut@gmail.com>
> > > > > >> > >> > >>
> > > > > >> > >> > >>> Oh okay. I'm not sure whether 0.5.0 is really
> > > > > >> > >> > >>> compatible with 0.20.2; personally I have installed
> > > > > >> > >> > >>> 1.0.3 and it works fine.
> > > > > >> > >> > >>> Sorry to make you install all the different versions.
> > > > > >> > >> > >>>
> > > > > >> > >> > >>> 2012/9/12 顾荣 <gurongwalker@gmail.com>
> > > > > >> > >> > >>>
> > > > > >> > >> > >>>> Thanks, Thomas. HDFS works well; I even put a file
> > > > > >> > >> > >>>> into it from local successfully. It has definitely
> > > > > >> > >> > >>>> left safemode. The namenode startup log is as below:
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,002 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> > > > > >> > >> > >>>> /************************************************************
> > > > > >> > >> > >>>> STARTUP_MSG: Starting NameNode
> > > > > >> > >> > >>>> STARTUP_MSG:   host = slave021/192.168.1.21
> > > > > >> > >> > >>>> STARTUP_MSG:   args = []
> > > > > >> > >> > >>>> STARTUP_MSG:   version = 0.20.2
> > > > > >> > >> > >>>> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> > > > > >> > >> > >>>> ************************************************************/
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,092 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=54310
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,098 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: slave021/192.168.1.21:54310
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,100 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,101 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,143 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop,hadoop_user,wheel
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,144 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,144 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,150 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,151 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,177 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,181 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,181 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 96 loaded in 0 seconds.
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,181 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /home/hadoop/gurong/hadoop-0.20.2/hadoop_dir/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,236 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 96 saved in 0 seconds.
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,439 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 312 msecs
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,441 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,441 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,441 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,441 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,441 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,441 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,441 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,554 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,603 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,604 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,604 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
> > > > > >> > >> > >>>> 2012-09-12 15:10:39,604 INFO org.mortbay.log: jetty-6.1.14
> > > > > >> > >> > >>>> 2012-09-12 15:10:48,662 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
> > > > > >> > >> > >>>> 2012-09-12 15:10:48,663 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
> > > > > >> > >> > >>>> 2012-09-12 15:10:48,666 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 54310: starting
> > > > > >> > >> > >>>> 2012-09-12 15:10:48,667 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> > > > > >> > >> > >>>> 2012-09-12 15:10:48,668 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 54310: starting
> > > > > >> > >> > >>>> 2012-09-12 15:10:48,668 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 54310: starting
> > > > > >> > >> > >>>> 2012-09-12 15:10:48,668 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 54310: starting
> > > > > >> > >> > >>>> 2012-09-12 15:10:48,668 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 54310: starting
> > > > > >> > >> > >>>> 2012-09-12 15:10:48,668 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310: starting
> > > > > >> > >> > >>>> 2012-09-12 15:10:48,669 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 54310: starting
> > > > > >> > >> > >>>> 2012-09-12 15:10:48,669 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 54310: starting
> > > > > >> > >> > >>>> 2012-09-12 15:10:48,669 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310: starting
> > > > > >> > >> > >>>> 2012-09-12 15:10:48,669 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 54310: starting
> > > > > >> > >> > >>>> 2012-09-12 15:10:48,669 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 54310: starting
> > > > > >> > >> > >>>> 2012-09-12 15:10:48,700 INFO org.apache.hadoop.ipc.Server: Error register getProtocolVersion
> > > > > >> > >> > >>>> java.lang.IllegalArgumentException: Duplicate metricsName:getProtocolVersion
> > > > > >> > >> > >>>>    at org.apache.hadoop.metrics.util.MetricsRegistry.add(MetricsRegistry.java:53)
> > > > > >> > >> > >>>>    at org.apache.hadoop.metrics.util.MetricsTimeVaryingRate.<init>(MetricsTimeVaryingRate.java:89)
> > > > > >> > >> > >>>>    at org.apache.hadoop.metrics.util.MetricsTimeVaryingRate.<init>(MetricsTimeVaryingRate.java:99)
> > > > > >> > >> > >>>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
> > > > > >> > >> > >>>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> > > > > >> > >> > >>>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> > > > > >> > >> > >>>>    at java.security.AccessController.doPrivileged(Native Method)
> > > > > >> > >> > >>>>    at javax.security.auth.Subject.doAs(Subject.java:416)
> > > > > >> > >> > >>>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> > > > > >> > >> > >>>> 2012-09-12 15:11:05,298 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 192.168.1.21:50010 storage DS-1416037815-192.168.1.21-50010-1347433865293
> > > > > >> > >> > >>>> 2012-09-12 15:11:05,300 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.1.21:50010
> > > > > >> > >> > >>>> 2012-09-12 15:11:15,069 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=webuser,webgroup    ip=/192.168.1.21    cmd=listStatus    src=/    dst=null    perm=null
> > > > > >> > >> > >>>> 2012-09-12 15:12:05,034 WARN org.apache.hadoop.ipc.Server: Incorrect header or version mismatch from 192.168.1.21:56281 got version 4 expected version 3
> > > > > >> > >> > >>>> 2012-09-12 15:14:51,535 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop,hadoop_user,wheel    ip=/192.168.1.21    cmd=listStatus    src=/    dst=null    perm=null
> > > > > >> > >> > >>>> 2012-09-12 15:15:10,158 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 2 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>> I used Hama 0.5 and Hadoop 0.20.2. Has somebody tested
> > > > > >> > >> > >>>> whether this combination works well?
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>> Thanks very much.
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>> walker
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>> 2012/9/12 Thomas Jungblut <thomas.jungblut@gmail.com>
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>>> Still it says:
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>>>> " 2012-09-12 14:41:16,218 ERROR org.apache.hama.bsp.BSPMaster: Can't
> > > > > >> > >> > >>>>>> get connection to Hadoop Namenode! "
> > > > > >> > >> > >>>>>
> > > > > >> > >> > >>>>> Can you verify that the namenode is not in safemode and has started
> > > > > >> > >> > >>>>> up correctly?
> > > > > >> > >> > >>>>> Please have a look into the namenode logs!
> > > > > >> > >> > >>>>>
> > > > > >> > >> > >>>>> 2012/9/12 顾荣 <gurongwalker@gmail.com>
> > > > > >> > >> > >>>>>
> > > > > >> > >> > >>>>>> By the way, fs.default.name is 192.168.1.21:54310. I
> > > > > >> > >> > >>>>>> checked HDFS, and it works well. I installed and ran
> > > > > >> > >> > >>>>>> both HDFS and Hama using the same Linux account.
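For reference, fs.default.name is normally set in Hadoop's conf/core-site.xml (on 0.20.x installs). A fragment matching the address above is a sketch only; the hdfs:// scheme and the exact file on this cluster are assumptions:

```xml
<!-- core-site.xml fragment (assumed); points clients at the Namenode -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.1.21:54310</value>
</property>
```

Hama reads the same setting, which is why a version mismatch in the Hadoop jar surfaces as a Namenode connection failure at bspmaster startup.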
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>>>> 2012/9/12 顾荣 <gurongwalker@gmail.com>
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>>>>> Thanks so much, Edward.
> > > > > >> > >> > >>>>>>>
> > > > > >> > >> > >>>>>>> I followed your suggestion and installed Hadoop
> > > > > >> > >> > >>>>>>> 0.20.2 for Hama instead. However, this time when I
> > > > > >> > >> > >>>>>>> start Hama, a fatal error occurs and the bspmaster
> > > > > >> > >> > >>>>>>> daemon cannot start up. The corresponding error
> > > > > >> > >> > >>>>>>> message in the bspmaster log file is shown below.
> > > > > >> > >> > >>>>>>>
> > > > > >> > >> > >>>>>>>
> > > > > >> > >> > >>>>>>> ************************************************************/
> > > > > >> > >> > >>>>>>> 2012-09-12 14:40:38,238 INFO org.apache.hama.BSPMasterRunner: STARTUP_MSG:
> > > > > >> > >> > >>>>>>> /************************************************************
> > > > > >> > >> > >>>>>>> STARTUP_MSG: Starting BSPMaster
> > > > > >> > >> > >>>>>>> STARTUP_MSG:   host = slave021/192.168.1.21
> > > > > >> > >> > >>>>>>> STARTUP_MSG:   args = []
> > > > > >> > >> > >>>>>>> STARTUP_MSG:   version = 1.0.0
> > > > > >> > >> > >>>>>>> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1214675; compiled by 'hortonfo' on Fri Dec 16 20:01:27 UTC 2011
> > > > > >> > >> > >>>>>>> ************************************************************/
> > > > > >> > >> > >>>>>>> 2012-09-12 14:40:38,414 INFO org.apache.hama.bsp.BSPMaster: RPC BSPMaster: host slave021 port 40000
> > > > > >> > >> > >>>>>>> 2012-09-12 14:40:38,502 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
> > > > > >> > >> > >>>>>>> 2012-09-12 14:40:38,542 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> > > > > >> > >> > >>>>>>> 2012-09-12 14:40:38,583 INFO org.apache.hama.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 40013
> > > > > >> > >> > >>>>>>> 2012-09-12 14:40:38,584 INFO org.apache.hama.http.HttpServer: listener.getLocalPort() returned 40013 webServer.getConnectors()[0].getLocalPort() returned 40013
> > > > > >> > >> > >>>>>>> 2012-09-12 14:40:38,584 INFO org.apache.hama.http.HttpServer: Jetty bound to port 40013
> > > > > >> > >> > >>>>>>> 2012-09-12 14:40:38,584 INFO org.mortbay.log: jetty-6.1.14
> > > > > >> > >> > >>>>>>> 2012-09-12 14:40:38,610 INFO org.mortbay.log: Extract jar:file:/home/hadoop/hama_installs/hama-0.5.0/hama-core-0.5.0.jar!/webapp/bspmaster/ to /tmp/Jetty_slave021_40013_bspmaster____.1tzgsz/webapp
> > > > > >> > >> > >>>>>>> 2012-09-12 14:41:16,073 INFO org.mortbay.log: Started SelectChannelConnector@slave021:40013
> > > > > >> > >> > >>>>>>> 2012-09-12 14:41:16,218 ERROR org.apache.hama.bsp.BSPMaster: Can't get connection to Hadoop Namenode!
> > > > > >> > >> > >>>>>>> java.io.IOException: Call to /192.168.1.21:54310 failed on local exception: java.io.EOFException
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1103)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.ipc.Client.call(Client.java:1071)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
> > > > > >> > >> > >>>>>>>    at $Proxy5.getProtocolVersion(Unknown Source)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
> > > > > >> > >> > >>>>>>>    at org.apache.hama.bsp.BSPMaster.<init>(BSPMaster.java:299)
> > > > > >> > >> > >>>>>>>    at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:454)
> > > > > >> > >> > >>>>>>>    at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:449)
> > > > > >> > >> > >>>>>>>    at org.apache.hama.BSPMasterRunner.run(BSPMasterRunner.java:46)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
> > > > > >> > >> > >>>>>>>    at org.apache.hama.BSPMasterRunner.main(BSPMasterRunner.java:56)
> > > > > >> > >> > >>>>>>> Caused by: java.io.EOFException
> > > > > >> > >> > >>>>>>>    at java.io.DataInputStream.readInt(DataInputStream.java:392)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:800)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:745)
> > > > > >> > >> > >>>>>>> 2012-09-12 14:41:16,222 FATAL org.apache.hama.BSPMasterRunner: java.lang.NullPointerException
> > > > > >> > >> > >>>>>>>    at org.apache.hama.bsp.BSPMaster.getSystemDir(BSPMaster.java:862)
> > > > > >> > >> > >>>>>>>    at org.apache.hama.bsp.BSPMaster.<init>(BSPMaster.java:308)
> > > > > >> > >> > >>>>>>>    at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:454)
> > > > > >> > >> > >>>>>>>    at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:449)
> > > > > >> > >> > >>>>>>>    at org.apache.hama.BSPMasterRunner.run(BSPMasterRunner.java:46)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> > > > > >> > >> > >>>>>>>    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
> > > > > >> > >> > >>>>>>>    at org.apache.hama.BSPMasterRunner.main(BSPMasterRunner.java:56)
> > > > > >> > >> > >>>>>>>
> > > > > >> > >> > >>>>>>> 2012-09-12 14:41:16,223 INFO org.apache.hama.BSPMasterRunner: SHUTDOWN_MSG:
> > > > > >> > >> > >>>>>>> /************************************************************
> > > > > >> > >> > >>>>>>> SHUTDOWN_MSG: Shutting down BSPMaster at slave021/192.168.1.21
> > > > > >> > >> > >>>>>>> ************************************************************/
> > > > > >> > >> > >>>>>>>
> > > > > >> > >> > >>>>>>> Would you please give me some tips again?
> > > > > >> > >> > >>>>>>>
> > > > > >> > >> > >>>>>>> Thanks again.
> > > > > >> > >> > >>>>>>>
> > > > > >> > >> > >>>>>>> walker
> > > > > >> > >> > >>>>>>>
> > > > > >> > >> > >>>>>>> 2012/9/12 Edward J. Yoon <edwardyoon@apache.org>
> > > > > >> > >> > >>>>>>>
> > > > > >> > >> > >>>>>>>> Unfortunately we don't support the secure Hadoop version yet.
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>>> Instead of 0.20.205, please use the non-secure
> > > > > >> > >> > >>>>>>>> Hadoop versions 0.20.2 or 1.0.3.
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>>> Thanks.
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>>> On Wed, Sep 12, 2012 at 11:25 AM, 顾荣 <gurongwalker@gmail.com> wrote:
> > > > > >> > >> > >>>>>>>>> Hi, all.
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>>>> I set up a Hama cluster of 3 nodes and started Hama successfully.
> > > > > >> > >> > >>>>>>>>> However, when I run the pi example, the job fails with a very strange
> > > > > >> > >> > >>>>>>>>> message, shown below.
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>>>> hama jar /home/hadoop/hama_installs/hama-0.5.0/hama-examples-0.5.0.jar pi
> > > > > >> > >> > >>>>>>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.NoSuchMethodException:
> > > > > >> > >> > >>>>>>>>> org.apache.hadoop.hdfs.protocol.ClientProtocol.create(java.lang.String,
> > > > > >> > >> > >>>>>>>>> org.apache.hadoop.fs.permission.FsPermission, java.lang.String, boolean,
> > > > > >> > >> > >>>>>>>>> boolean, short, long)
> > > > > >> > >> > >>>>>>>>>    at java.lang.Class.getMethod(Class.java:1605)
> > > > > >> > >> > >>>>>>>>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
> > > > > >> > >> > >>>>>>>>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> > > > > >> > >> > >>>>>>>>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> > > > > >> > >> > >>>>>>>>>    at java.security.AccessController.doPrivileged(Native Method)
> > > > > >> > >> > >>>>>>>>>    at javax.security.auth.Subject.doAs(Subject.java:396)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>
> > > > > >> > >> > >>
> > > > > >> > >> >
> > > > > >> > >>
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>
> > > org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>>>>    at
> > > > > org.apache.hadoop.ipc.Client.call(Client.java:1066)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
> > > > > >> > >> > >>>>>>>>>    at $Proxy2.create(Unknown Source)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> > > > > >> > >> > >>> Method)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>>>
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>
> > > > > >> > >> > >>
> > > > > >> > >> >
> > > > > >> > >>
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>>>
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>
> > > > > >> > >> > >>
> > > > > >> > >> >
> > > > > >> > >>
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >> > >> > >>>>>>>>>    at
> > > > java.lang.reflect.Method.invoke(Method.java:616)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>>>
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>
> > > > > >> > >> > >>
> > > > > >> > >> >
> > > > > >> > >>
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>>>
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>
> > > > > >> > >> > >>
> > > > > >> > >> >
> > > > > >> > >>
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> > > > > >> > >> > >>>>>>>>>    at $Proxy2.create(Unknown Source)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>>>
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>
> > > > > >> > >> > >>
> > > > > >> > >> >
> > > > > >> > >>
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:3245)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>
> > > org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:713)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>>>
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>
> > > > > >> > >> > >>
> > > > > >> > >> >
> > > > > >> > >>
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:182)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>
> > > org.apache.hadoop.fs.FileSystem.create(FileSystem.java:555)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>
> > > org.apache.hadoop.fs.FileSystem.create(FileSystem.java:536)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>
> > > org.apache.hadoop.fs.FileSystem.create(FileSystem.java:443)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:229)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>
> > > > > >> > >>
> > > > > >>
> > > >
> org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1195)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>
> > > > > >> > >>
> > > > > >>
> > > >
> org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1171)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>
> > > > > >> > >>
> > > > > >>
> > > >
> org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1143)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>
> > > > > >> > >> >
> > > > > >> >
> > > > > >>
> > > > >
> > >
> org.apache.hama.bsp.BSPJobClient.submitJobInternal(BSPJobClient.java:349)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>>
> > > > > >> >
> > org.apache.hama.bsp.BSPJobClient.submitJob(BSPJobClient.java:294)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > org.apache.hama.bsp.BSPJob.submit(BSPJob.java:218)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>
> > > > > org.apache.hama.bsp.BSPJob.waitForCompletion(BSPJob.java:225)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>
> > > > > >> org.apache.hama.examples.PiEstimator.main(PiEstimator.java:139)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> > > > > >> > >> > >>> Method)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>>>
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>
> > > > > >> > >> > >>
> > > > > >> > >> >
> > > > > >> > >>
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>>>
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>
> > > > > >> > >> > >>
> > > > > >> > >> >
> > > > > >> > >>
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >> > >> > >>>>>>>>>    at
> > > > java.lang.reflect.Method.invoke(Method.java:616)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>>>
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>
> > > > > >> > >> > >>
> > > > > >> > >> >
> > > > > >> > >>
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>
> > > > > >> >
> > > org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>
> > > > > >>
> org.apache.hama.examples.ExampleDriver.main(ExampleDriver.java:39)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> > > > > >> > >> > >>> Method)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>>>
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>
> > > > > >> > >> > >>
> > > > > >> > >> >
> > > > > >> > >>
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >> > >> > >>>>>>>>>    at
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>>>
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>
> > > > > >> > >> > >>
> > > > > >> > >> >
> > > > > >> > >>
> > > > > >> >
> > > > > >>
> > > > >
> > > >
> > >
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >> > >> > >>>>>>>>>    at
> > > > java.lang.reflect.Method.invoke(Method.java:616)
> > > > > >> > >> > >>>>>>>>>    at
> > > > org.apache.hama.util.RunJar.main(RunJar.java:147)
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>>>> My hama verison is 0.5 and hadoop version is
> > > > 0.20.205.
> > > > > >> This
> > > > > >> > >> > >>> error
> > > > > >> > >> > >>>>>> seems
> > > > > >> > >> > >>>>>>>> to
> > > > > >> > >> > >>>>>>>>> comes from the
> > > > > >> > >> > >>>>>
> > "org.apache.hadoop.hdfs.protocol.ClientProtocol.create"
> > > > > >> > >> > >>>>>>>>> method, this is a normal method. I am kind of
> > > > > confused...
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>>>> Thanks in advance.
> > > > > >> > >> > >>>>>>>>>
> > > > > >> > >> > >>>>>>>>> walker
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>>> --
> > > > > >> > >> > >>>>>>>> Best Regards, Edward J. Yoon
> > > > > >> > >> > >>>>>>>> @eddieyoon
> > > > > >> > >> > >>>>>>>>
> > > > > >> > >> > >>>>>>>
> > > > > >> > >> > >>>>>>>
> > > > > >> > >> > >>>>>>
> > > > > >> > >> > >>>>>
> > > > > >> > >> > >>>>
> > > > > >> > >> > >>>
> > > > > >> > >> > >>
> > > > > >> > >> >
> > > > > >> > >>
> > > > > >> >
> > > > > >> >
> > > > > >> >
> > > > > >> > --
> > > > > >> > Best Regards, Edward J. Yoon
> > > > > >> > @eddieyoon
> > > > > >> >
> > > > > >>
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
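[Editor's note] The fix above boils down to a version match: the NoSuchMethodException on ClientProtocol.create is the typical symptom of Hama's bundled Hadoop client jar calling an RPC interface from a different Hadoop line than the running cluster (secure 0.20.205 vs the non-secure 0.20.2/1.0.3 Hama 0.5 targets). The sketch below is not from the thread; the function names and directory layout (`lib/hadoop-core-<version>.jar`) are assumptions about a typical install, so adjust paths for your setup.

```shell
#!/bin/sh
# Sketch: compare the hadoop-core jar bundled under Hama's lib/ directory
# with the one the cluster runs. A version mismatch is a likely cause of
# RPC NoSuchMethodException errors like the one in this thread.

jar_version() {
    # Print "1.0.3" for a directory containing hadoop-core-1.0.3.jar;
    # print nothing if no such jar exists.
    jar=$(ls "$1"/hadoop-core-*.jar 2>/dev/null | head -n 1)
    [ -n "$jar" ] || return 0
    basename "$jar" | sed 's/^hadoop-core-//; s/\.jar$//'
}

compare_hadoop_jars() {
    # $1: Hama lib directory, $2: directory holding the cluster's jar.
    hama_ver=$(jar_version "$1")
    cluster_ver=$(jar_version "$2")
    if [ -n "$hama_ver" ] && [ "$hama_ver" = "$cluster_ver" ]; then
        echo "OK: both use hadoop-core $hama_ver"
    else
        echo "MISMATCH: Hama lib has '${hama_ver:-none}', cluster has '${cluster_ver:-none}'"
    fi
}

# Hypothetical usage (paths are examples, not from the thread):
# compare_hadoop_jars /home/hadoop/hama_installs/hama-0.5.0/lib /home/hadoop/hadoop-1.0.3
```

On a mismatch, replacing the jar under Hama's lib/ with the cluster's hadoop-core jar (or, as Edward advises, moving the cluster itself to non-secure 0.20.2 or 1.0.3) keeps client and server RPC signatures in agreement.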
