hadoop-hdfs-issues mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-720) NPE in BlockReceiver$PacketResponder.run(BlockReceiver.java:923)
Date Fri, 23 Oct 2009 20:33:59 GMT

    [ https://issues.apache.org/jira/browse/HDFS-720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12769416#action_12769416 ]

Hairong Kuang commented on HDFS-720:
------------------------------------

Stack, thanks a lot for your tests and effort on resolving the issue!
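On the question in the description (how getFirst() could yield a null 'pkt'): LinkedList.getFirst() does throw NoSuchElementException on an empty list, but LinkedList also permits null elements, so getFirst() returns null without throwing whenever a null was enqueued. A minimal sketch against plain java.util.LinkedList (not the DataNode code) showing both behaviors:

```java
import java.util.LinkedList;
import java.util.NoSuchElementException;

public class GetFirstDemo {
    public static void main(String[] args) {
        // Empty list: getFirst() throws, as the javadoc promises.
        LinkedList<Object> queue = new LinkedList<Object>();
        boolean threw = false;
        try {
            queue.getFirst();
        } catch (NoSuchElementException e) {
            threw = true;
        }
        System.out.println("empty list threw: " + threw);

        // Non-empty list holding a null element: getFirst()
        // returns that null instead of throwing.
        queue.add(null);
        Object pkt = queue.getFirst();
        System.out.println("pkt == null: " + (pkt == null));
        // Dereferencing pkt here (e.g. pkt.hashCode()) would NPE,
        // the same shape of failure as pkt.seqno at line 923.
    }
}
```

So a null enqueued into ackQueue would be one way (not necessarily the one here) to hit an NPE at pkt.seqno rather than a NoSuchElementException.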

> NPE in BlockReceiver$PacketResponder.run(BlockReceiver.java:923)
> ----------------------------------------------------------------
>
>                 Key: HDFS-720
>                 URL: https://issues.apache.org/jira/browse/HDFS-720
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.21.0
>         Environment: Current branch-0.21 of hdfs, mapreduce, and common.  Here is svn info:
> URL: https://svn.apache.org/repos/asf/hadoop/hdfs/branches/branch-0.21
> Repository Root: https://svn.apache.org/repos/asf
> Repository UUID: 13f79535-47bb-0310-9956-ffa450edef68
> Revision: 827883
> Node Kind: directory
> Schedule: normal
> Last Changed Author: szetszwo
> Last Changed Rev: 826906
> Last Changed Date: 2009-10-20 00:16:25 +0000 (Tue, 20 Oct 2009)
>            Reporter: stack
>             Fix For: 0.21.0
>
>         Attachments: dn.log
>
>
> Running some loadings on hdfs I had one of these on the DN XX.XX.XX.139:51010:
> {code}
> 2009-10-21 04:57:02,755 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_6345892463926159834_1029 src: /XX.XX.XX.140:37890 dest: /XX.XX.XX.139:51010
> 2009-10-21 04:57:02,829 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder blk_6345892463926159834_1029 1 Exception java.lang.NullPointerException
>         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:923)
>         at java.lang.Thread.run(Thread.java:619)
> {code}
> On the XX.XX.XX.140 side, it looks like this:
> {code}
> 2009-10-21 04:57:01,866 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_6345892463926159834_1029 src: /XX.XX.XX.140:37385 dest: /XX.XX.XX.140:51010
> 2009-10-21 04:57:02,836 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 2 for block blk_6345892463926159834_1029 terminating
> 2009-10-21 04:57:02,885 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(XX.XX.XX.140:51010, storageID=DS-1292310101-208.76.44.140-51010-1256100924816, infoPort=51075, ipcPort=51020):Exception writing block blk_6345892463926159834_1029 to mirror XX.XX.XX.139:51010
> java.io.IOException: Connection reset by peer
>     at sun.nio.ch.FileDispatcher.write0(Native Method)
>     at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
>     at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:104)
>     at sun.nio.ch.IOUtil.write(IOUtil.java:75)
>     at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
>     at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
>     at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>     at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
>     at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
>     at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
>     at java.io.DataOutputStream.write(DataOutputStream.java:90)
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:466)
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:434)
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:573)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:352)
>     at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:382)
>     at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:323)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:111)
>     at java.lang.Thread.run(Thread.java:619)
> {code}
> Here is the bit of code inside the run method:
> {code}
>  922                   pkt = ackQueue.getFirst();
>  923                   expected = pkt.seqno;
> {code}
> So 'pkt' is null?  But the LinkedList API says getFirst() throws NoSuchElementException if the list is empty, so you'd think we wouldn't get an NPE here.  What am I missing?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

