hadoop-hdfs-issues mailing list archives

From "Daniel Pol (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8198) Erasure Coding: system test of TeraSort
Date Fri, 10 Nov 2017 14:26:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16247558#comment-16247558
] 

Daniel Pol commented on HDFS-8198:
----------------------------------

[~eddyxu] I have 7 datanodes. I'm new to the JIRA system and I can't seem to find the proper
way to upload the terasort output file. Please let me know how I can do that. The relevant
error from the terasort output is:
17/11/04 09:36:15 INFO mapreduce.Job: Task Id : attempt_1509761319113_0021_m_000002_0, Status : FAILED
Error: java.io.IOException: 3 missing blocks, the stripe is: Offset=77594624, length=1048576,
fetchedChunksNum=1, missingChunksNum=3; locatedBlocks is: LocatedBlocks{  fileLength=5000000000 
underConstruction=false  blocks=[LocatedStripedBlock{BP-260511027-172.30.253.91-1487788944154:blk_-9223372036852841888_5101378;
getBlockSize()=1610612736; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[172.30.253.6:50010,DS-780df34f-44c3-4c67-b7dc-f901bc12a957,DISK],
DatanodeInfoWithStorage[172.30.253.5:50010,DS-c5e33c96-3df3-480b-80aa-fe97a3b8e3b4,DISK],
DatanodeInfoWithStorage[172.30.253.3:50010,DS-4cd5c037-9dcb-488c-81c2-0aa8ff1cbd2f,DISK],
DatanodeInfoWithStorage[172.30.253.4:50010,DS-6bac2c0f-f8c6-4a67-8801-f2a7a74279a6,DISK],
DatanodeInfoWithStorage[172.30.253.7:50010,DS-0ee9e606-db4b-4df6-b180-fedb696c5e4f,DISK]];
indices=[0, 1, 2, 3, 4]}, LocatedStripedBlock{BP-260511027-172.30.253.91-1487788944154:blk_-9223372036852841856_5101380;
getBlockSize()=1610612736; corrupt=false; offset=1610612736; locs=[DatanodeInfoWithStorage[172.30.253.2:50010,DS-f053781f-b2c4-41e9-8960-745b3fe8ef50,DISK],
DatanodeInfoWithStorage[172.30.253.5:50010,DS-4efc46be-5769-4a2f-9cf6-736b3d56edaf,DISK],
DatanodeInfoWithStorage[172.30.253.3:50010,DS-74b0796e-425d-4fa6-9309-247271f63f53,DISK],
DatanodeInfoWithStorage[172.30.253.4:50010,DS-ddfc805a-9ed9-4493-921d-acc169787683,DISK],
DatanodeInfoWithStorage[172.30.253.7:50010,DS-c3be97ce-660a-4c98-9f71-5c2f76236dc4,DISK]];
indices=[0, 1, 2, 3, 4]}, LocatedStripedBlock{BP-260511027-172.30.253.91-1487788944154:blk_-9223372036852841824_5101382;
getBlockSize()=1610612736; corrupt=false; offset=3221225472; locs=[DatanodeInfoWithStorage[172.30.253.1:50010,DS-336c025e-f04b-475f-b051-d7a4d1b7669f,DISK],
DatanodeInfoWithStorage[172.30.253.5:50010,DS-dab6afcd-bf22-4d1d-b878-d52ee0b5bcd9,DISK],
DatanodeInfoWithStorage[172.30.253.7:50010,DS-16ade97a-978c-4a83-aae4-f25e861d63f5,DISK],
DatanodeInfoWithStorage[172.30.253.2:50010,DS-176f2769-3236-4548-94df-74de95171cdd,DISK],
DatanodeInfoWithStorage[172.30.253.3:50010,DS-2350ab83-f4bd-49f1-aa29-f8d4b5de5f78,DISK]];
indices=[0, 1, 2, 3, 4]}, LocatedStripedBlock{BP-260511027-172.30.253.91-1487788944154:blk_-9223372036852841792_5101384;
getBlockSize()=168161792; corrupt=false; offset=4831838208; locs=[DatanodeInfoWithStorage[172.30.253.5:50010,DS-b63b7da0-20b7-4480-b80a-cb0491c4e17f,DISK],
DatanodeInfoWithStorage[172.30.253.2:50010,DS-dcb3d66b-ee0f-4e4d-b5c8-611498227092,DISK],
DatanodeInfoWithStorage[172.30.253.1:50010,DS-bc0b4749-6599-4691-98b6-35623ce8c08d,DISK],
DatanodeInfoWithStorage[172.30.253.7:50010,DS-1029b9e5-abff-4c63-bb9f-7986d1729e03,DISK],
DatanodeInfoWithStorage[172.30.253.4:50010,DS-6fa25607-f980-4a15-8592-d31ef51a48ba,DISK]];
indices=[0, 1, 2, 3, 4]}]  lastLocatedBlock=LocatedStripedBlock{BP-260511027-172.30.253.91-1487788944154:blk_-9223372036852841792_5101384;
getBlockSize()=168161792; corrupt=false; offset=4831838208; locs=[DatanodeInfoWithStorage[172.30.253.5:50010,DS-b63b7da0-20b7-4480-b80a-cb0491c4e17f,DISK],
DatanodeInfoWithStorage[172.30.253.2:50010,DS-dcb3d66b-ee0f-4e4d-b5c8-611498227092,DISK],
DatanodeInfoWithStorage[172.30.253.1:50010,DS-bc0b4749-6599-4691-98b6-35623ce8c08d,DISK],
DatanodeInfoWithStorage[172.30.253.7:50010,DS-1029b9e5-abff-4c63-bb9f-7986d1729e03,DISK],
DatanodeInfoWithStorage[172.30.253.4:50010,DS-6fa25607-f980-4a15-8592-d31ef51a48ba,DISK]];
indices=[0, 1, 2, 3, 4]}  isLastBlockComplete=true}
	at org.apache.hadoop.hdfs.StripeReader.checkMissingBlocks(StripeReader.java:175)
	at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:366)
	at org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:315)
	at org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:388)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:813)
	at java.io.DataInputStream.read(DataInputStream.java:149)
	at org.apache.hadoop.examples.terasort.TeraInputFormat$TeraRecordReader.nextKeyValue(TeraInputFormat.java:257)
	at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:562)
	at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
	at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:793)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:177)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:171)
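For context (editor's note, inferred from the log rather than stated in it): each block group above lists 5 storage locations with indices [0..4], which is consistent with an RS 3+2 erasure coding layout (3 data units, 2 parity units). Under Reed-Solomon RS(k,m), a stripe can be reconstructed from any k of its k+m chunks, so it tolerates at most m missing chunks. With missingChunksNum=3 against only 2 parity units, the read cannot be decoded and StripeReader throws the IOException shown. A minimal sketch of that decodability condition (a hypothetical illustration, not Hadoop's actual code):

```java
// Hypothetical illustration (not Hadoop source): a Reed-Solomon RS(k,m)
// stripe is decodable only while at most m of its k+m chunks are missing.
public class StripeCheck {
    // dataUnits = k, parityUnits = m
    static boolean isDecodable(int dataUnits, int parityUnits, int missingChunks) {
        // Any k of the k+m chunks suffice to reconstruct the stripe,
        // so losing more than m chunks makes the stripe unreadable.
        return missingChunks <= parityUnits;
    }

    public static void main(String[] args) {
        // Values matching the log above, assuming RS-3-2:
        // missingChunksNum=3 exceeds the 2 parity units, so the read fails.
        System.out.println(isDecodable(3, 2, 3)); // false
        // Losing 2 chunks would still have been recoverable:
        System.out.println(isDecodable(3, 2, 2)); // true
    }
}
```

If the cluster really is running RS-3-2 on 7 datanodes, losing 3 chunks of one stripe suggests several replicas of the same block group were unavailable at read time.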

> Erasure Coding: system test of TeraSort
> ---------------------------------------
>
>                 Key: HDFS-8198
>                 URL: https://issues.apache.org/jira/browse/HDFS-8198
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>    Affects Versions: HDFS-7285
>            Reporter: Kai Sasaki
>
> Functional system test of TeraSort on EC files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

