hadoop-hdfs-issues mailing list archives

From "Wei-Chiu Chuang (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-13524) Occasional "All datanodes are bad" error in TestLargeBlock#testLargeBlockSize
Date Thu, 03 May 2018 01:22:00 GMT

     [ https://issues.apache.org/jira/browse/HDFS-13524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HDFS-13524:
-----------------------------------
    Description: 
TestLargeBlock#testLargeBlockSize may fail with the following error:
{quote}
All datanodes [DatanodeInfoWithStorage[127.0.0.1:44968,DS-acddd79e-cdf1-4ac5-aac5-e804a2e61600,DISK]] are bad. Aborting...
{quote}

Tracing back, the error is due to the stress placed on the host while it sends a 2GB block, which causes the write pipeline ack read to time out:
{quote}
2017-09-10 22:16:07,285 [DataXceiver for client DFSClient_NONMAPREDUCE_998779779_9 at /127.0.0.1:57794 [Receiving block BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001]] INFO  datanode.DataNode (DataXceiver.java:writeBlock(742)) - Receiving BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001 src: /127.0.0.1:57794 dest: /127.0.0.1:44968
2017-09-10 22:16:50,402 [DataXceiver for client DFSClient_NONMAPREDUCE_998779779_9 at /127.0.0.1:57794 [Receiving block BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001]] WARN  datanode.DataNode (BlockReceiver.java:flushOrSync(434)) - Slow flushOrSync took 5383ms (threshold=300ms), isSync:false, flushTotalNanos=5383638982ns, volume=file:/tmp/tmp.1oS3ZfDCwq/src/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/
2017-09-10 22:17:54,427 [ResponseProcessor for block BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001] WARN  hdfs.DataStreamer (DataStreamer.java:run(1214)) - Exception for BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001
java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/127.0.0.1:57794 remote=/127.0.0.1:44968]
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at java.io.FilterInputStream.read(FilterInputStream.java:83)
	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:434)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1104)
2017-09-10 22:17:54,432 [DataXceiver for client DFSClient_NONMAPREDUCE_998779779_9 at /127.0.0.1:57794 [Receiving block BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001]] INFO  datanode.DataNode (BlockReceiver.java:receiveBlock(1000)) - Exception for BP-682118952-172.26.15.143-1505106964162:blk_1073741825_1001
java.io.IOException: Connection reset by peer
{quote}

Instead of raising the read timeout, I suggest increasing the cluster size from the default of 1 datanode to 3, so that the client has the opportunity to choose a different DN and retry.
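
For illustration, here is a minimal sketch of what that change might look like; the class/method structure and values below are assumed for the sketch, not copied from the actual TestLargeBlock code:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestLargeBlockSketch {
  @Test
  public void testLargeBlockSize() throws Exception {
    final long blockSize = 2L * 1024 * 1024 * 1024;   // illustrative ~2GB block size
    final Configuration conf = new HdfsConfiguration();
    conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);
    // Bring up the mini cluster with 3 datanodes instead of the default single
    // node, so a failed write pipeline can be rebuilt with a different DataNode
    // and the client can retry instead of aborting.
    final MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(3)   // was 1, the MiniDFSCluster default
        .build();
    try {
      final FileSystem fs = cluster.getFileSystem();
      // ... create the file and write/verify the large block as the existing test does ...
    } finally {
      cluster.shutdown();
    }
  }
}
{code}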

I suspect this started failing after HDFS-13103, in Hadoop 2.8/3.0.0-alpha1, when we introduced the client acknowledgement read timeout.


> Occasional "All datanodes are bad" error in TestLargeBlock#testLargeBlockSize
> -----------------------------------------------------------------------------
>
>                 Key: HDFS-13524
>                 URL: https://issues.apache.org/jira/browse/HDFS-13524
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.8.0, 3.0.0-alpha1
>            Reporter: Wei-Chiu Chuang
>            Priority: Major
>



