hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4501) GetImage failed
Date Thu, 14 Feb 2013 22:03:13 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578728#comment-13578728 ]

Kihwal Lee commented on HDFS-4501:
----------------------------------

Are you running the SNN on a separate node? If so, is the SNN HTTP address set correctly?
I think it defaults to 0.0.0.0, so when the SNN tells the NN to fetch the image from 0.0.0.0,
the NN can't reach it.
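
As a minimal sketch (assuming a Hadoop 1.x release, where the property is dfs.secondary.http.address; on 2.x it is dfs.namenode.secondary.http-address), you would override the 0.0.0.0 default in the SNN's hdfs-site.xml with an address the NN can actually reach, e.g.:

    <property>
      <name>dfs.secondary.http.address</name>
      <!-- "snn-host" is a placeholder; use the SNN's real hostname, resolvable from the NN -->
      <value>snn-host:50090</value>
    </property>

After changing it, restart the SNN and check that the "machine=" parameter in the posted URL is no longer 0.0.0.0.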

Please direct questions to the user mailing list. JIRA is for bugs.
                
> GetImage failed
> ---------------
>
>                 Key: HDFS-4501
>                 URL: https://issues.apache.org/jira/browse/HDFS-4501
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: janesh mishra
>
> The fsimage and edits log are not updating.
> The logs follow:
> SNN Logs:
> ----------------
> 2013-02-14 17:29:56,975 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
> 2013-02-14 17:29:57,039 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Downloaded file fsimage size 10181 bytes.
> 2013-02-14 17:29:57,042 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Downloaded file edits size 521 bytes.
> 2013-02-14 17:29:57,042 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 64-bit
> 2013-02-14 17:29:57,042 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
> 2013-02-14 17:29:57,042 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^21 = 2097152 entries
> 2013-02-14 17:29:57,042 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
> 2013-02-14 17:29:57,044 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
> 2013-02-14 17:29:57,044 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2013-02-14 17:29:57,044 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
> 2013-02-14 17:29:57,044 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
> 2013-02-14 17:29:57,044 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> 2013-02-14 17:29:57,044 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
> 2013-02-14 17:29:57,045 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 89
> 2013-02-14 17:29:57,059 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
> 2013-02-14 17:29:57,061 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /app/hadoop/tmp/dfs/namesecondary/current/edits of size 521 edits # 7 loaded in 0 seconds.
> 2013-02-14 17:29:57,061 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
> 2013-02-14 17:29:57,121 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 10181 saved in 0 seconds.
> 2013-02-14 17:29:57,673 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 10181 saved in 0 seconds.
> 2013-02-14 17:29:58,121 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Posted URL ramesh:50070putimage=1&port=50090&machine=0.0.0.0&token=-32:1989419481:0:1360842594000:1360842284984
> 2013-02-14 17:29:58,128 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint:
> 2013-02-14 17:29:58,129 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: java.io.FileNotFoundException: http://ramesh:50070/getimage?putimage=1&port=50090&machine=0.0.0.0&token=-32:1989419481:0:1360842594000:1360842284984
>         at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1613)
>         at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:160)
>         at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.putFSImage(SecondaryNameNode.java:377)
>         at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:418)
>         at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:312)
>         at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:275)
>         at java.lang.Thread.run(Thread.java:722)
> NN Logs:
> -----------
> 2013-02-14 18:15:08,127 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser cause:java.net.ConnectException: Connection refused
> 2013-02-14 18:15:08,128 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser cause:java.net.ConnectException: Connection refused
> 2013-02-14 18:15:08,129 WARN org.mortbay.log: /getimage: java.io.IOException: GetImage failed. java.net.ConnectException: Connection refused
> 	at java.net.PlainSocketImpl.socketConnect(Native Method)
> 	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> 	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:198)
> 	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> 	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
> 	at java.net.Socket.connect(Socket.java:579)
> 	at java.net.Socket.connect(Socket.java:528)
> 	at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:378)
> 	at sun.net.www.http.HttpClient.openServer(HttpClient.java:473)
> 	at sun.net.www.http.HttpClient.<init>(HttpClient.java:203)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:290)
> 	at sun.net.www.http.HttpClient.New(HttpClient.java:306)
> 	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:995)
> 	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:931)
> 	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:849)
> 	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1299)
> 	at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:160)
> 	at org.apache.hadoop.hdfs.server.namenode.GetImageServlet$1$1.run(GetImageServlet.java:88)
> 	at org.apache.hadoop.hdfs.server.namenode.GetImageServlet$1$1.run(GetImageServlet.java:85)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
> 	at org.apache.hadoop.hdfs.server.namenode.GetImageServlet$1.run(GetImageServlet.java:85)
> 	at org.apache.hadoop.hdfs.server.namenode.GetImageServlet$1.run(GetImageServlet.java:70)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
> 	at org.apache.hadoop.hdfs.server.namenode.GetImageServlet.doGet(GetImageServlet.java:70)
> 	at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
> 	at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
> 	at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
> 	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
> 	at org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:835)
> 	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> 	at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> 	at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> 	at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> 	at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> 	at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> 	at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> 	at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> 	at org.mortbay.jetty.Server.handle(Server.java:326)
> 	at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> 	at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
> 	at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
> 	at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> 	at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> 	at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
> 	at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> Please help....

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
