hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9634) webhdfs client side exceptions don't provide enough details
Date Wed, 20 Jan 2016 18:24:39 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15109067#comment-15109067 ]

Kihwal Lee commented on HDFS-9634:
----------------------------------

When I tried it on 2.7, the new test case failed. It passes in trunk, of course.
{noformat}
TestWebHdfsTimeouts.testReadTimeout:131 expected:<localhost:58086: [Read timed out]> but was:<localhost:58086: [null]>
{noformat}
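
For readers decoding that output: it is JUnit's ComparisonFailure diff, and the bracketed segments mark the part that differs. On 2.7 the exception message surfaces as {{localhost:58086: null}}: the host:port prefix is there, but the underlying "Read timed out" detail is lost. A minimal sketch of the message composition the test appears to expect (illustrative only, not the attached patch; the authority value is taken from the output above):

{code:java}
// Illustrative sketch only, not the attached patch. Shows how the expected
// message in the diff above is composed: the remote host:port is prepended
// to the underlying exception's message.
import java.net.SocketTimeoutException;

public class MessageCompositionSketch {
  public static void main(String[] args) {
    String authority = "localhost:58086";  // remote end, from the test output
    SocketTimeoutException cause = new SocketTimeoutException("Read timed out");

    // What trunk produces: "localhost:58086: Read timed out"
    System.out.println(authority + ": " + cause.getMessage());

    // What 2.7 produces when the underlying message is lost:
    // "localhost:58086: null"
    System.out.println(authority + ": " + new SocketTimeoutException().getMessage());
  }
}
{code}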


> webhdfs client side exceptions don't provide enough details
> -----------------------------------------------------------
>
>                 Key: HDFS-9634
>                 URL: https://issues.apache.org/jira/browse/HDFS-9634
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: webhdfs
>    Affects Versions: 3.0.0, 2.8.0, 2.7.1
>            Reporter: Eric Payne
>            Assignee: Eric Payne
>         Attachments: HDFS-9634.001.patch, HDFS-9634.002.patch
>
>
> When a WebHDFS client-side exception (for example, a read timeout) occurs, there are
> no details beyond the fact that a timeout occurred. Ideally it should say which node is
> responsible for the timeout, but failing that it should at least say which node we're
> talking to, so we can examine that node's logs to investigate further.
> {noformat}
> java.net.SocketTimeoutException: Read timed out
>     at java.net.SocketInputStream.socketRead0(Native Method)
>     at java.net.SocketInputStream.read(SocketInputStream.java:150)
>     at java.net.SocketInputStream.read(SocketInputStream.java:121)
>     at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>     at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>     at sun.net.www.MeteredStream.read(MeteredStream.java:134)
>     at java.io.FilterInputStream.read(FilterInputStream.java:133)
>     at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3035)
>     at org.apache.commons.io.input.BoundedInputStream.read(BoundedInputStream.java:121)
>     at org.apache.hadoop.hdfs.web.ByteRangeInputStream.read(ByteRangeInputStream.java:188)
>     at java.io.DataInputStream.read(DataInputStream.java:149)
>     at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>     at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>     at com.yahoo.grid.tools.util.io.ThrottledBufferedInputStream.read(ThrottledBufferedInputStream.java:58)
>     at java.io.FilterInputStream.read(FilterInputStream.java:107)
>     at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.copyBytes(HFTPDistributedCopy.java:495)
>     at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.doCopy(HFTPDistributedCopy.java:440)
>     at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.access$200(HFTPDistributedCopy.java:57)
>     at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy$1.doExecute(HFTPDistributedCopy.java:387)
> ... 12 more
> {noformat}
> There are no clues as to which datanode we're talking to, nor which datanode was
> responsible for the timeout.
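
One way to get there, sketched below under the assumption that the fix simply prepends the remote authority (which the test output above suggests); this is not necessarily what the attached patches do, and the class and method names are hypothetical. The idea: catch the low-level IOException where the remote URL is known and rethrow it with {{host:port}} prepended, keeping the original as the cause so the stack trace stays intact.

{code:java}
import java.io.IOException;
import java.net.URL;

final class IOExceptionRewrapSketch {
  /**
   * Return a copy of {@code ioe} with "host:port: " prepended to its message,
   * preserving the concrete type (e.g. SocketTimeoutException) where possible
   * so callers that catch specific subclasses keep working.
   */
  static IOException withAuthority(URL url, IOException ioe) {
    String msg = url.getAuthority() + ": " + ioe.getMessage();
    try {
      IOException wrapped =
          ioe.getClass().getConstructor(String.class).newInstance(msg);
      wrapped.initCause(ioe);  // keep the original stack trace reachable
      return wrapped;
    } catch (ReflectiveOperationException e) {
      // Concrete type has no (String) constructor: fall back to plain IOException.
      return new IOException(msg, ioe);
    }
  }
}
{code}

Wrapped this way, {{e.getMessage()}} at the call site reads e.g. {{localhost:58086: Read timed out}}, which matches what the new test asserts on trunk.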



