hadoop-hdfs-issues mailing list archives

From "Rushabh S Shah (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9634) webhdfs client side exceptions don't provide enough details
Date Wed, 20 Jan 2016 20:48:39 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15109364#comment-15109364 ]

Rushabh S Shah commented on HDFS-9634:
--------------------------------------

The patch looks good to me.
I ran the test case on trunk and on branch-2.7 multiple times, and it passed every time.
+1 (non-binding).

> webhdfs client side exceptions don't provide enough details
> -----------------------------------------------------------
>
>                 Key: HDFS-9634
>                 URL: https://issues.apache.org/jira/browse/HDFS-9634
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: webhdfs
>    Affects Versions: 3.0.0, 2.8.0, 2.7.1
>            Reporter: Eric Payne
>            Assignee: Eric Payne
>         Attachments: HDFS-9634.001.patch, HDFS-9634.002.patch
>
>
> When a WebHDFS client-side exception (for example, a read timeout) occurs, there are
> no details beyond the fact that a timeout occurred. Ideally it should say which node
> is responsible for the timeout, but failing that, it should at least say which node
> we're talking to so we can examine that node's logs to investigate further.
> {noformat}
> java.net.SocketTimeoutException: Read timed out
>     at java.net.SocketInputStream.socketRead0(Native Method)
>     at java.net.SocketInputStream.read(SocketInputStream.java:150)
>     at java.net.SocketInputStream.read(SocketInputStream.java:121)
>     at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>     at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>     at sun.net.www.MeteredStream.read(MeteredStream.java:134)
>     at java.io.FilterInputStream.read(FilterInputStream.java:133)
>     at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3035)
>     at org.apache.commons.io.input.BoundedInputStream.read(BoundedInputStream.java:121)
>     at org.apache.hadoop.hdfs.web.ByteRangeInputStream.read(ByteRangeInputStream.java:188)
>     at java.io.DataInputStream.read(DataInputStream.java:149)
>     at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>     at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>     at com.yahoo.grid.tools.util.io.ThrottledBufferedInputStream.read(ThrottledBufferedInputStream.java:58)
>     at java.io.FilterInputStream.read(FilterInputStream.java:107)
>     at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.copyBytes(HFTPDistributedCopy.java:495)
>     at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.doCopy(HFTPDistributedCopy.java:440)
>     at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.access$200(HFTPDistributedCopy.java:57)
>     at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy$1.doExecute(HFTPDistributedCopy.java:387)
> ... 12 more
> {noformat}
> There are no clues as to which datanode we're talking to, nor which datanode was
> responsible for the timeout.
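
For illustration only (this is not the attached patch, and all class, field, and host names below are hypothetical): one way a client-side read path could surface the node is to wrap the stream it reads from and rethrow a timeout with the remote host:port appended to the message.

{code:java}
// A minimal sketch, NOT the attached patch: wraps an InputStream so that a
// read timeout is rethrown with the remote node's address in the message.
// All names here are hypothetical.
import java.io.IOException;
import java.io.InputStream;
import java.net.SocketTimeoutException;
import java.net.URL;

class NodeAnnotatingInputStream extends InputStream {
  private final InputStream in;  // e.g. the HTTP response stream
  private final URL nodeUrl;     // hypothetical: URL of the node being read

  NodeAnnotatingInputStream(InputStream in, URL nodeUrl) {
    this.in = in;
    this.nodeUrl = nodeUrl;
  }

  @Override
  public int read() throws IOException {
    try {
      return in.read();
    } catch (SocketTimeoutException e) {
      // Rethrow with host:port so the failing node can be identified in logs.
      SocketTimeoutException annotated = new SocketTimeoutException(
          e.getMessage() + " while reading from "
          + nodeUrl.getHost() + ":" + nodeUrl.getPort());
      annotated.initCause(e);
      throw annotated;
    }
  }

  @Override
  public void close() throws IOException {
    in.close();
  }
}
{code}

With a wrapper along these lines, the trace above would begin with something like "Read timed out while reading from dn1.example.com:50075", which is enough to pick out the right node's logs.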



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
