hadoop-hdfs-issues mailing list archives

From "Rod (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8162) Stack trace routed to standard out
Date Mon, 20 Apr 2015 19:42:59 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14503489#comment-14503489
] 

Rod commented on HDFS-8162:
---------------------------

Re-opening for comment: how can a C/C++ program that uses libhdfs, without any direct involvement
with Java or log4j, redirect the console output that libhdfs targets?

This is still a valid issue affecting users of the libhdfs library, and it should at least be
documented. Thank you.

> Stack trace routed to standard out
> ----------------------------------
>
>                 Key: HDFS-8162
>                 URL: https://issues.apache.org/jira/browse/HDFS-8162
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: libhdfs
>    Affects Versions: 2.5.2
>            Reporter: Rod
>            Priority: Minor
>
> Calling hdfsOpenFile() can generate a stack-trace printout to standard out, which can
be problematic for a caller program that is itself making use of standard out. libhdfs stack
traces should be routed to standard error.
> Example of stacktrace:
> WARN  [main] hdfs.BlockReaderFactory (BlockReaderFactory.java:getRemoteBlockReaderFromTcp(693))
- I/O error constructing remote block reader.
> org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for
channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/x.x.x.10:50010]
> 	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
> 	at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3101)
> 	at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:755)
> 	at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:670)
> 	at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:337)
> 	at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:576)
> 	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:800)
> 	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:854)
> 	at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:143)
> 2015-04-16 10:32:13,946 WARN  [main] hdfs.DFSClient (DFSInputStream.java:blockSeekTo(612))
- Failed to connect to /x.x.x.10:50010 for block, add to deadNodes and continue. org.apache.hadoop.net.ConnectTimeoutException:
60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending
remote=/x.x.x.10:50010]
> org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for
channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/x.x.x.10:50010]
> 	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:533)
> 	at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3101)
> 	at org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:755)
> 	at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:670)
> 	at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:337)
> 	at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:576)
> 	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:800)
> 	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:854)
> 	at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:143)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
