hadoop-hdfs-issues mailing list archives

From "Junping Du (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-9634) webhdfs client side exceptions don't provide enough details
Date Fri, 06 Jan 2017 01:43:59 GMT

     [ https://issues.apache.org/jira/browse/HDFS-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du updated HDFS-9634:
-----------------------------
    Fix Version/s: 2.8.0

> webhdfs client side exceptions don't provide enough details
> -----------------------------------------------------------
>
>                 Key: HDFS-9634
>                 URL: https://issues.apache.org/jira/browse/HDFS-9634
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: webhdfs
>    Affects Versions: 2.8.0, 2.7.1, 3.0.0-alpha1
>            Reporter: Eric Payne
>            Assignee: Eric Payne
>             Fix For: 2.8.0, 2.7.3, 3.0.0-alpha1
>
>         Attachments: HDFS-9634.001.patch, HDFS-9634.002.patch
>
>
> When a WebHDFS client-side exception (for example, a read timeout) occurs, there are no
> details beyond the fact that a timeout occurred. Ideally the exception should say which node
> is responsible for the timeout, but failing that, it should at least name the node we're
> talking to so we can examine that node's logs to investigate further.
> {noformat}
> java.net.SocketTimeoutException: Read timed out
>     at java.net.SocketInputStream.socketRead0(Native Method)
>     at java.net.SocketInputStream.read(SocketInputStream.java:150)
>     at java.net.SocketInputStream.read(SocketInputStream.java:121)
>     at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>     at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>     at sun.net.www.MeteredStream.read(MeteredStream.java:134)
>     at java.io.FilterInputStream.read(FilterInputStream.java:133)
>     at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3035)
>     at org.apache.commons.io.input.BoundedInputStream.read(BoundedInputStream.java:121)
>     at org.apache.hadoop.hdfs.web.ByteRangeInputStream.read(ByteRangeInputStream.java:188)
>     at java.io.DataInputStream.read(DataInputStream.java:149)
>     at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>     at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>     at com.yahoo.grid.tools.util.io.ThrottledBufferedInputStream.read(ThrottledBufferedInputStream.java:58)
>     at java.io.FilterInputStream.read(FilterInputStream.java:107)
>     at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.copyBytes(HFTPDistributedCopy.java:495)
>     at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.doCopy(HFTPDistributedCopy.java:440)
>     at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy.access$200(HFTPDistributedCopy.java:57)
>     at com.yahoo.grid.replication.distcopy.tasklet.HFTPDistributedCopy$1.doExecute(HFTPDistributedCopy.java:387)
> ... 12 more
> {noformat}
> There are no clues as to which datanode we're talking to or which datanode was responsible
> for the timeout.
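
As an illustration of the kind of fix the issue asks for (this is a minimal sketch, not the
actual HDFS-9634 patch), the client could catch the low-level IOException and rethrow it with
the remote node's address attached. The {{wrapWithNodeAddress}} helper and its
{{InetSocketAddress}} parameter below are hypothetical names, not Hadoop APIs:

{noformat}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.SocketTimeoutException;

/**
 * Hypothetical helper: rethrows an I/O failure with the remote
 * node's host:port in the message, so that client-side timeouts
 * like the stack trace above can be traced to a specific datanode.
 */
public final class ExceptionWrapper {

  private ExceptionWrapper() {}

  public static IOException wrapWithNodeAddress(InetSocketAddress node,
                                                IOException cause) {
    String msg = cause.getMessage() + " while connecting to "
        + node.getHostString() + ":" + node.getPort();
    if (cause instanceof SocketTimeoutException) {
      // Preserve the exception type so callers that catch
      // SocketTimeoutException keep working; chain the original as cause.
      SocketTimeoutException wrapped = new SocketTimeoutException(msg);
      wrapped.initCause(cause);
      return wrapped;
    }
    return new IOException(msg, cause);
  }
}
{noformat}

A read loop that knows which datanode it is streaming from would then catch the IOException
and rethrow {{wrapWithNodeAddress(datanodeAddr, e)}}, so the "Read timed out" message in the
trace above would carry the responsible node's host:port.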



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

