From: "Wei-Chiu Chuang (JIRA)"
To: hdfs-issues@hadoop.apache.org
Date: Thu, 8 Mar 2018 12:40:00 +0000 (UTC)
Subject: [jira] [Comment Edited] (HDFS-13243) Get CorruptBlock because of calling close and sync in same time

[ https://issues.apache.org/jira/browse/HDFS-13243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16391178#comment-16391178 ]

Wei-Chiu Chuang edited comment on HDFS-13243 at 3/8/18 12:39 PM:
-----------------------------------------------------------------

Hi [~gzh1992n], thanks very much for reporting the issue. The NN log is useful too.

I looked at the patch and the log and tried to understand where the problem is. I don't think I understand the problem fully yet, but here are some thoughts I'd like to share with you.

# It seems to me that the root of the problem is that the client calls fsync() with an incorrect length (shorter than what it is supposed to sync). If that's the case, you should fix the client (DFSOutputStream) rather than the NameNode.
# Looking at the log, your minimum replication number is 2 rather than 1. That's very unusual. In my experience, a lot of weird behavior like this can arise from that kind of configuration.
# And why is close() in the picture? IMHO you don't even need to close().
Suppose you block DataNode heartbeats, let the client keep the file open, and then call sync(); the last block's state remains COMMITTED. Would that cause the same behavior?
# Looking at the patch, I would like to ask you to stay away from using reflection. You could refactor FSNamesystem and DFSOutputStream to return a new FSNamesystem/DFSOutputStream object and override them in the test code. That way you don't need to introduce new configurations either, and it will be much cleaner.
# I don't understand the following code.
## The case lastBlockLength <= 0 || lastBlockLength <= b.getNumBytes() is unexpected. You should not just log a debug message and ignore it; it has to be a WARN-level message, and you should log the size of b.getNumBytes() as well. There is also a grammatical error in the log message.
## If your fix is correct, you should update the assertion in FileUnderConstructionFeature#updateLengthOfLastBlock() so that it expects neither COMMITTED nor COMPLETE.
## What should it do when the block state is unexpected? I don't think you should just ignore it.
{code:java}
BlockInfo b = pendingFile.getLastBlock();
if (lastBlockLength <= 0 || lastBlockLength <= b.getNumBytes()) {
  LOG.debug("lastBlockLength(" + lastBlockLength + ") seems wrong, maybe have a bug here?");
  return;
}
if (b.getBlockUCState() != BlockUCState.COMMITTED &&
    b.getBlockUCState() != BlockUCState.COMPLETE) {
{code}

> Get CorruptBlock because of calling close and sync in same time
> ---------------------------------------------------------------
>
>                 Key: HDFS-13243
>                 URL: https://issues.apache.org/jira/browse/HDFS-13243
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.7.2, 3.2.0
>           Reporter: Zephyr Guo
>           Priority: Critical
>             Fix For: 3.2.0
>
>         Attachments: HDFS-13243-v1.patch, HDFS-13243-v2.patch
>
>
> An HDFS file might get broken because of corrupt block(s) produced by calling close and sync at the same time.
> When a close fails, the UC block's state changes to COMMITTED, and if a sync request is then popped from the queue and processed, the sync operation changes the last block's length.
> After that, the DataNodes report all received blocks to the NameNode, which checks the block length of all COMMITTED blocks. Because the length recorded in NameNode memory now differs from the length reported by the DataNode, the last block is marked as corrupted due to the inconsistent length.
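The corrupt-marking described above can be illustrated with a small standalone model. This is not HDFS code: the class, enum, and method names are invented for illustration, and only the lengths (141232 in the block map after the racing fsync, 2054413 as reported by the DataNode) are taken from the log below.

```java
// Toy model of the NameNode check on a replica report for a COMMITTED block:
// if the reported length differs from the length recorded in the block map,
// the replica is marked corrupt. Names here are illustrative, not HDFS APIs.
public class CommittedLengthCheck {
    enum BlockUCState { UNDER_CONSTRUCTION, COMMITTED, COMPLETE }

    // Returns true when a reported replica of a COMMITTED block must be
    // marked corrupt because its length disagrees with the block map.
    static boolean isCorrupt(BlockUCState state, long recordedLen, long reportedLen) {
        return state == BlockUCState.COMMITTED && reportedLen != recordedLen;
    }

    public static void main(String[] args) {
        long recorded = 141232L;  // length set in the block map by the racing fsync
        long reported = 2054413L; // length reported by the DataNode
        System.out.println(isCorrupt(BlockUCState.COMMITTED, recorded, reported)); // true
        // If close() and fsync() agreed on the final length, the report
        // would match the block map and the replica would be accepted.
        System.out.println(isCorrupt(BlockUCState.COMMITTED, reported, reported)); // false
    }
}
```

In the real system the mismatch is only fatal while the block sits in COMMITTED; once the block completes with a consistent length, reports match and nothing is flagged, which is why the race window between close and sync matters.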
>
> {panel:title=Log in my hdfs}
> 2018-03-05 04:05:39,261 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1085498930_11758129\{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW], ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW], ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]} for /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,760 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515 for DFSClient_NONMAPREDUCE_1077513762_1
> 2018-03-05 04:05:39,761 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW], ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW], ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]} is not COMPLETE (ucState = COMMITTED, replication# = 0 < minimum = 2) in file /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.0.0.220:50010 is added to blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW], ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW], ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]} size 2054413
> 2018-03-05 04:05:39,761 INFO BlockStateChange: BLOCK NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 10.0.0.219:50010 by hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com/10.0.0.219 because block is COMMITTED and reported length 2054413 does not match length in block map 141232
> 2018-03-05 04:05:39,762 INFO BlockStateChange: BLOCK NameSystem.addToCorruptReplicasMap: blk_1085498930 added as corrupt on 10.0.0.218:50010 by hb-j5e517al6xib80rkb-004.hbase.rds.aliyuncs.com/10.0.0.218 because block is COMMITTED and reported length 2054413 does not match length in block map 141232
> 2018-03-05 04:05:40,162 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: BLOCK* blk_1085498930_11758129\{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-32c7e479-3845-4a44-adf1-831edec7506b:NORMAL:10.0.0.219:50010|RBW], ReplicaUC[[DISK]DS-a9a5d653-c049-463d-8e4a-d1f0dc14409c:NORMAL:10.0.0.220:50010|RBW], ReplicaUC[[DISK]DS-f2b7c04a-b724-4c69-abbf-d2e416f70706:NORMAL:10.0.0.218:50010|RBW]]} is not COMPLETE (ucState = COMMITTED, replication# = 3 >= minimum = 2) in file /hbase/WALs/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com,16020,1519845790686/hb-j5e517al6xib80rkb-006.hbase.rds.aliyuncs.com%2C16020%2C1519845790686.default.1520193926515
> {panel}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org