hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Issue Comment Edited: (HADOOP-2971) SocketTimeoutException in unit tests
Date Sat, 08 Mar 2008 01:35:46 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12576465#action_12576465 ]

rangadi edited comment on HADOOP-2971 at 3/7/08 5:35 PM:
--------------------------------------------------------------

I thought I could avoid calling System.currentTimeMillis() while waiting and depend on select().
Tough luck.

The attached patch polls in a loop until the timeout passes. It also removes a large block for setting
"channeStr"; we use channel.toString() instead.

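For illustration, a minimal sketch of the poll-until-deadline idea, assuming a hypothetical TimedWait.waitForReadable helper (this is not the code in the attached patch): the remaining time is recomputed from System.currentTimeMillis() on every pass, so an early return from select() just leads to another poll.

{noformat}
import java.io.IOException;
import java.net.SocketTimeoutException;
import java.nio.channels.SelectableChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// Hypothetical helper, not the attached patch: wait until a channel is readable,
// re-checking System.currentTimeMillis() so that a select() that wakes up early
// (with zero ready channels) does not end the wait before the timeout elapses.
class TimedWait {
  static void waitForReadable(SelectableChannel channel, Selector selector,
                              long timeoutMs) throws IOException {
    channel.configureBlocking(false);   // register() requires non-blocking mode
    SelectionKey key = channel.register(selector, SelectionKey.OP_READ);
    long deadline = System.currentTimeMillis() + timeoutMs;
    try {
      while (true) {
        long remaining = deadline - System.currentTimeMillis();
        if (remaining <= 0) {
          throw new SocketTimeoutException(timeoutMs
              + " millis timeout while waiting for " + channel + " to be ready for read");
        }
        // select() may return 0 before 'remaining' ms have passed; poll again if so.
        if (selector.select(remaining) > 0) {
          return;
        }
      }
    } finally {
      key.cancel();
    }
  }
}
{noformat}
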
      was (Author: rangadi):
    I thought I could avoid calling System.currentTimeMillis() while waiting and depend on
select(). Tough luck.

The attached patch polls in a loop until the timeout passes. Also removes a large block for setting
"channeStr" and uses channel.toString() instead.
  
> SocketTimeoutException in unit tests
> ------------------------------------
>
>                 Key: HADOOP-2971
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2971
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: io
>    Affects Versions: 0.17.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>         Attachments: HADOOP-2971.patch
>
>
> TestJobStatusPersistency failed and contained DataNode stacktraces similar to the following:
> {noformat}
> 2008-03-07 21:27:00,410 ERROR dfs.DataNode (DataNode.java:run(976)) - 127.0.0.1:57790:DataXceiver: java.net.SocketTimeoutException: 0 millis timeout while waiting for Unknown Addr (local: /127.0.0.1:57790) to be ready for read
>         at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:188)
>         at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:135)
>         at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:121)
>         at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>         at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>         at java.io.DataInputStream.readInt(DataInputStream.java:370)
>         at org.apache.hadoop.dfs.DataNode$BlockReceiver.receiveBlock(DataNode.java:2434)
>         at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1170)
>         at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:953)
>         at java.lang.Thread.run(Thread.java:619)
> {noformat}
> This is mostly related to HADOOP-2346. The error is strange: socket.getRemoteSocketAddress() returned null, implying this socket is not connected yet. But we have already read a few bytes from it!
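
A hedged illustration of why the log above shows "Unknown" for the remote end (the describe helper below is hypothetical, not DataNode code): Socket.getRemoteSocketAddress() returns null when the socket does not consider itself connected, so any message built from it has to fall back to a placeholder.

{noformat}
import java.net.Socket;
import java.net.SocketAddress;

// Hypothetical sketch: build a "<remote> (local: <local>)" description like the one
// in the exception message. A null remote address shows up as "Unknown".
class RemoteAddr {
  static String describe(Socket socket) {
    SocketAddress remote = socket.getRemoteSocketAddress(); // null => socket claims it is not connected
    return (remote == null ? "Unknown" : remote.toString())
        + " (local: " + socket.getLocalSocketAddress() + ")";
  }
}
{noformat}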

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

