hadoop-hdfs-issues mailing list archives

From "gao shan (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10586) Erasure Code misfunctions when 3 DataNode down
Date Fri, 01 Jul 2016 09:46:11 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15358715#comment-15358715 ]

gao shan commented on HDFS-10586:
---------------------------------

I checked the log and found the following errors. All of the DataNodes are alive, so what is the
meaning of the WARN "Failed to find datanode"? 172.16.1.85 is the NameNode; the other IPs are
DataNodes. (A small sketch of the condition that seems to trigger this WARN follows the NameNode
log below.)

2016-06-28 10:44:57,995 WARN org.apache.hadoop.net.NetworkTopology: Failed to find datanode (scope="" excludedScope="/default-rack").
2016-06-28 10:44:57,996 WARN org.apache.hadoop.net.NetworkTopology: Failed to find datanode (scope="" excludedScope="/default-rack").
2016-06-28 10:44:57,996 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073744759_7794, replicas=172.16.1.143:9866, 172.16.1.92:9866, 172.16.1.87:9866 for /tmp/hadoop-yarn/staging/root/.staging/job_1467124628054_0001/job.jar
2016-06-28 10:44:58,233 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/root/.staging/job_1467124628054_0001/job.jar is closed by DFSClient_NONMAPREDUCE_1881763906_1
2016-06-28 10:44:58,239 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Increasing replication from 3 to 10 for /tmp/hadoop-yarn/staging/root/.staging/job_1467124628054_0001/job.jar
2016-06-28 10:44:58,365 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Increasing replication from 3 to 10 for /tmp/hadoop-yarn/staging/root/.staging/job_1467124628054_0001/job.split
2016-06-28 10:44:58,368 WARN org.apache.hadoop.net.NetworkTopology: Failed to find datanode (scope="" excludedScope="/default-rack").
2016-06-28 10:44:58,368 WARN org.apache.hadoop.net.NetworkTopology: Failed to find datanode (scope="" excludedScope="/default-rack").
2016-06-28 10:44:58,369 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073744760_7795, replicas=172.16.1.87:9866, 172.16.1.90:9866, 172.16.1.91:9866, 172.16.1.88:9866, 172.16.1.89:9866, 172.16.1.86:9866, 172.16.1.93:9866, 172.16.1.92:9866, 172.16.1.143:9866 for /tmp/hadoop-yarn/staging/root/.staging/job_1467124628054_0001/job.split
2016-06-28 10:44:58,541 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/root/.staging/job_1467124628054_0001/job.split is closed by DFSClient_NONMAPREDUCE_1881763906_1
2016-06-28 10:44:58,548 WARN org.apache.hadoop.net.NetworkTopology: Failed to find datanode (scope="" excludedScope="/default-rack").
2016-06-28 10:44:58,549 WARN org.apache.hadoop.net.NetworkTopology: Failed to find datanode (scope="" excludedScope="/default-rack").
2016-06-28 10:44:58,549 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073744761_7796, replicas=172.16.1.93:9866, 172.16.1.88:9866, 172.16.1.143:9866 for /tmp/hadoop-yarn/staging/root/.staging/job_1467124628054_0001/job.splitmetainfo
2016-06-28 10:44:58,632 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/root/.staging/job_1467124628054_0001/job.splitmetainfo is closed by DFSClient_NONMAPREDUCE_1881763906_1
2016-06-28 10:44:58,773 WARN org.apache.hadoop.net.NetworkTopology: Failed to find datanode (scope="" excludedScope="/default-rack").
2016-06-28 10:44:58,773 WARN org.apache.hadoop.net.NetworkTopology: Failed to find datanode (scope="" excludedScope="/default-rack").
2016-06-28 10:44:58,774 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073744762_7797, replicas=172.16.1.91:9866, 172.16.1.143:9866, 172.16.1.86:9866 for /tmp/hadoop-yarn/staging/root/.staging/job_1467124628054_0001/job.xml
2016-06-28 10:44:58,857 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/root/.staging/job_1467124628054_0001/job.xml is closed by DFSClient_NONMAPREDUCE_1881763906_1
2016-06-28 10:45:06,285 WARN org.apache.hadoop.net.NetworkTopology: Failed to find datanode (scope="" excludedScope="/default-rack").
2016-06-28 10:45:06,285 WARN org.apache.hadoop.net.NetworkTopology: Failed to find datanode (scope="" excludedScope="/default-rack").
2016-06-28 10:45:06,285 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073744763_7798, replicas=172.16.1.90:9866, 172.16.1.86:9866, 172.16.1.91:9866 for /tmp/hadoop-yarn/staging/root/.staging/job_1467124628054_0001/job_1467124628054_0001_1_conf.xml
2016-06-28 10:45:06,353 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /tmp/hadoop-yarn/staging/root/.staging/job_1467124628054_0001/job_1467124628054_0001_1_conf.xml is closed by DFSClient_NONMAPREDUCE_2078921355_1
2016-06-28 10:45:12,227 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 9 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2016-06-28 10:45:12,228 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=9, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2016-06-28 10:45:12,228 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 9 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2016-06-28 10:45:12,228 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_-9223372036854736880_7799, replicas=172.16.1.90:9866, 172.16.1.87:9866, 172.16.1.93:9866, 172.16.1.86:9866, 172.16.1.88:9866, 172.16.1.143:9866, 172.16.1.92:9866, 172.16.1.89:9866 for /gaos/io_data/test_io_12
2016-06-28 10:45:12,541 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: updatePipeline(blk_-9223372036854736880_7799, newGS=7800, newLength=393216, newNodes=[172.16.1.90:9866, 172.16.1.87:9866, 172.16.1.93:9866, 172.16.1.86:9866, 172.16.1.88:9866, 172.16.1.143:9866, 172.16.1.92:9866, 172.16.1.89:9866, null:0], client=DFSClient_attempt_1467124628054_0001_m_000002_0_1932721620_1)
2016-06-28 10:45:12,542 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: updatePipeline(blk_-9223372036854736880_7799 => blk_-9223372036854736880_7800) success
2016-06-28 10:45:14,660 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_-9223372036854736864_7801, replicas=172.16.1.92:9866, 172.16.1.93:9866, 172.16.1.86:9866, 172.16.1.143:9866, 172.16.1.91:9866, 172.16.1.87:9866, 172.16.1.89:9866, 172.16.1.90:9866, 172.16.1.88:9866 for /gaos/io_data/test_io_4
2016-06-28 10:45:14,722 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_-9223372036854736848_7802, replicas=172.16.1.91:9866, 172.16.1.87:9866, 172.16.1.90:9866, 172.16.1.86:9866, 172.16.1.88:9866, 172.16.1.93:9866, 172.16.1.92:9866, 172.16.1.143:9866, 172.16.1.89:9866 for /gaos/io_data/test_io_8
2016-06-28 10:45:14,749 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_-9223372036854736832_7803, replicas=172.16.1.86:9866, 172.16.1.93:9866, 172.16.1.89:9866, 172.16.1.92:9866, 172.16.1.143:9866, 172.16.1.90:9866, 172.16.1.87:9866, 172.16.1.91:9866, 172.16.1.88:9866 for /gaos/io_data/test_io_18
............................
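
About that WARN: it appears to come from NetworkTopology#chooseRandom, which is asked for a node
outside /default-rack but finds no candidate, because every DataNode in this cluster is registered
under that single default rack. Below is a minimal standalone sketch of that condition (my own
illustration, not code or configuration from this cluster; the class name and the three sample IPs
are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.net.NetworkTopology;
import org.apache.hadoop.net.Node;
import org.apache.hadoop.net.NodeBase;

public class SingleRackProbe {
  public static void main(String[] args) {
    NetworkTopology topology = NetworkTopology.getInstance(new Configuration());
    // Register a few DataNodes, all under the same rack, as happens when no
    // topology script is configured and everything maps to /default-rack.
    String[] dataNodes = {"172.16.1.86:9866", "172.16.1.87:9866", "172.16.1.88:9866"};
    for (String dn : dataNodes) {
      topology.add(new NodeBase(dn, "/default-rack"));
    }
    // The "~" prefix asks chooseRandom for a node *outside* the given scope.
    // With only one rack there is no candidate, so null comes back, which
    // matches the "excludedScope=/default-rack" text in the WARN above.
    Node picked = topology.chooseRandom("~/default-rack");
    System.out.println("picked = " + picked);  // prints "picked = null"
  }
}

If that reading is right, the WARN is about rack spreading rather than about any DataNode being dead.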


Also, on the DataNodes (e.g. 172.16.1.143), there are some errors:

java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:204)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:522)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:923)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:846)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:171)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:105)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
	at java.lang.Thread.run(Thread.java:745)
2016-06-28 10:56:44,830 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-257845079-172.16.1.85-1466418599731:blk_-9223372036854736496_7847, type=LAST_IN_PIPELINE: Thread is interrupted.
2016-06-28 10:56:44,830 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-257845079-172.16.1.85-1466418599731:blk_-9223372036854736496_7847, type=LAST_IN_PIPELINE terminating
2016-06-28 10:56:44,830 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-257845079-172.16.1.85-1466418599731:blk_-9223372036854736496_7847 received exception java.io.IOException: Premature EOF from inputStream
2016-06-28 10:56:44,830 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: host-172-16-1-143:9866:DataXceiver error processing WRITE_BLOCK operation  src: /172.16.1.85:8185 dst: /172.16.1.143:9866
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:204)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:522)
.................................

> Erasure Code misfunctions when 3 DataNode down
> ----------------------------------------------
>
>                 Key: HDFS-10586
>                 URL: https://issues.apache.org/jira/browse/HDFS-10586
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: erasure-coding
>    Affects Versions: 3.0.0-alpha1
>         Environment: 9 DataNodes and 1 NameNode. The erasure code policy is set to "6-3".
> When 3 DataNodes go down, erasure coding fails and an exception is thrown.
>            Reporter: gao shan
>
> The following are the steps to reproduce:
> 1) hadoop fs -mkdir /ec
> 2) set the erasure code policy to "6-3"
> 3) "write" data by:
> time hadoop jar /opt/hadoop/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-SNAPSHOT.jar TestDFSIO -D test.build.data=/ec -write -nrFiles 30 -fileSize 12288 -bufferSize 1073741824
> 4) Manually bring down 3 nodes: kill the "datanode" and "nodemanager" processes on 3 of the DataNodes.
> 5) "read" the data back with erasure coding by:
> time hadoop jar /opt/hadoop/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-SNAPSHOT.jar TestDFSIO -D test.build.data=/ec -read -nrFiles 30 -fileSize 12288 -bufferSize 1073741824
> Then the failure occurs and the following exception is thrown:
> INFO mapreduce.Job: Task Id : attempt_1465445965249_0008_m_000034_2, Status : FAILED
> Error: java.io.IOException: 4 missing blocks, the stripe is: Offset=0, length=8388608, fetchedChunksNum=0, missingChunksNum=4
> 	at org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.checkMissingBlocks(DFSStripedInputStream.java:614)
> 	at org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readParityChunks(DFSStripedInputStream.java:647)
> 	at org.apache.hadoop.hdfs.DFSStripedInputStream$StripeReader.readStripe(DFSStripedInputStream.java:762)
> 	at org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:316)
> 	at org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:450)
> 	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:941)
> 	at java.io.DataInputStream.read(DataInputStream.java:149)
> 	at org.apache.hadoop.fs.TestDFSIO$ReadMapper.doIO(TestDFSIO.java:531)
> 	at org.apache.hadoop.fs.TestDFSIO$ReadMapper.doIO(TestDFSIO.java:508)
> 	at org.apache.hadoop.fs.IOMapperBase.map(IOMapperBase.java:134)
> 	at org.apache.hadoop.fs.IOMapperBase.map(IOMapperBase.java:37)
> 	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
> 	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
> 	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
> 	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
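
One more note on the quoted exception: with the "6-3" policy a block group holds 6 data blocks plus
3 parity blocks, so a stripe can still be read as long as at most 3 of its 9 blocks are missing. The
reported missingChunksNum=4 is over that limit, which appears to be why checkMissingBlocks throws.
A tiny self-contained illustration of the arithmetic (my own sketch, not Hadoop code; the class name
is made up):

public class StripeLimit {
  public static void main(String[] args) {
    int dataUnits = 6;          // "6-3" policy: 6 data blocks per block group
    int parityUnits = 3;        // plus 3 parity blocks
    int missingChunksNum = 4;   // value reported in the exception above

    // Any 6 of the 9 blocks are enough to reconstruct the stripe, so at most
    // parityUnits blocks may be missing before a read must fail.
    boolean readable = missingChunksNum <= parityUnits;
    System.out.println("stripe readable: " + readable);  // false -> read fails
  }
}

Given that only 3 DataNodes were taken down, seeing 4 missing chunks in a single stripe seems to be
the surprising part here.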


