hadoop-hdfs-dev mailing list archives

From Kihwal Lee <kih...@yahoo-inc.com.INVALID>
Subject Re: RollingUpgrade Rollback openfiles issue
Date Fri, 26 May 2017 19:59:13 GMT
> So I think it’s better to revisit the RU solution considering these use cases.
Yes, I agree. Any use case that mutates the block content or gen stamp needs to be revisited.

I remember talking about the append case, but cannot find the relevant code. We might not
have actually done it. We could save a copy of the existing block & meta files in the data
storage trash upon append while in a RU.
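To make that concrete, a minimal sketch of such a hook (the method name and paths are hypothetical, not the actual DataNode code): before converting a FINALIZED replica to RBW for an append during a RU, copy the block and meta files aside so a rollback can restore the pre-append state.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    public class AppendTrashSketch {
      // Hypothetical hook: called before a FINALIZED replica is converted
      // to RBW for an append while a rolling upgrade is in progress.
      static void preserveBeforeAppend(Path blockFile, Path metaFile,
          Path trashDir) throws IOException {
        Files.createDirectories(trashDir);
        // Copy, not move, so the append can proceed on the originals;
        // a rollback would restore these copies over the appended files.
        Files.copy(blockFile, trashDir.resolve(blockFile.getFileName()),
            StandardCopyOption.COPY_ATTRIBUTES);
        Files.copy(metaFile, trashDir.resolve(metaFile.getFileName()),
            StandardCopyOption.COPY_ATTRIBUTES);
      }
    }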

What do you think about truncate? Unlike append, it uses the block recovery / commitBlockSynchronization
facility. Then what about regular block recoveries?
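For reference, the client-side contract of truncate (this is the public FileSystem API; the path and length here are made up): it returns false when the last block must first go through block recovery / commitBlockSynchronization, so callers typically poll until the new length is visible.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TruncateSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/tmp/truncate-demo");  // hypothetical path
        // Returns false when the last block has to be recovered
        // (commitBlockSynchronization) before the new length is final.
        boolean done = fs.truncate(file, 1024L);
        while (!done) {
          Thread.sleep(1000);
          done = fs.getFileStatus(file).getLen() == 1024L;
        }
      }
    }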

Kihwal




________________________________
From: Vinayakumar B <vinayakumar.ba@huawei.com>
To: Kihwal Lee <kihwal@yahoo-inc.com>; "hdfs-dev@hadoop.apache.org" <hdfs-dev@hadoop.apache.org>

Sent: Friday, May 26, 2017 5:05 AM
Subject: RE: RollingUpgrade Rollback openfiles issue



Thanks Kihwal,
 
Found another case, which looks more dangerous.
 
1. File written and closed. FINALIZED block in DN.
2. Rolling upgrade started.
3. File re-opened for append, and some bytes appended, but not closed. i.e. the block moved from
FINALIZED to RBW in the DN with an updated genstamp.
4. RU Rollback Done.
 
After rollback, the file will be in CLOSED state, whereas the block with the updated genstamp
will be in RWR state in the DN. So it will be considered CORRUPT by the Namenode.

After rollback, the file will have only corrupted locations to read from.
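Roughly, the client side of the above sequence looks like this (a sketch only; the path is made up, and the admin steps cannot be driven from this code, so they appear as comments):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendDuringRuSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/tmp/ru-append-demo");  // hypothetical path

        // 1. File written and closed -> FINALIZED replica on the DN.
        try (FSDataOutputStream out = fs.create(file)) {
          out.write(new byte[1024]);
        }

        // 2. Operator starts the rolling upgrade, e.g.
        //      hdfs dfsadmin -rollingUpgrade prepare

        // 3. Re-open for append: the replica moves FINALIZED -> RBW with
        //    an updated genstamp. Stream intentionally left open.
        FSDataOutputStream out = fs.append(file);
        out.write(new byte[128]);
        out.hsync();

        // 4. Operator rolls back. The NN restores the pre-upgrade state
        //    (file CLOSED, old genstamp) while the DN reports the replica
        //    as RWR with the newer genstamp -> marked CORRUPT.
      }
    }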
 
So I think it’s better to revisit the RU solution considering these use cases.
 
Any thoughts?
 
-Vinay
From: Kihwal Lee [mailto:kihwal@yahoo-inc.com]
Sent: 25 May 2017 22:16
To: Vinayakumar B <vinayakumar.ba@huawei.com>; hdfs-dev@hadoop.apache.org
Subject: Re: RollingUpgrade Rollback openfiles issue
 
Hi Vinay,
 
If I rephrase the question,
 
Does a RU rollback snapshot provide a consistent snapshot of the distributed file system?
 
I don't think we aimed for it to be a completely consistent snapshot. It is meant to be a safe
place to go back to with the old version of the software. This is normally used as a last resort.
By design, the datanodes will have extra blocks on rollback, which will be invalidated quickly.
But the short presence of blocks with "future" block ids can still interfere with block allocations
after rolling back, if the cluster is used right away. As you pointed out, the under-construction
block length is not recorded either.
 
>But now, extra bytes are seen after rollback. Is this correct?
I think it is a reasonable compromise.  If you can make a general argument against it, we
can revisit the design and try to fix it.
 
Kihwal



________________________________

From: Vinayakumar B <vinayakumar.ba@huawei.com>
To: "hdfs-dev@hadoop.apache.org" <hdfs-dev@hadoop.apache.org> 
Sent: Thursday, May 25, 2017 8:10 AM
Subject: RollingUpgrade Rollback openfiles issue
 
 
Hi all,
 
I have a doubt about the expected behavior in the case of RollingUpgrade and rollback.
 
Scenario:
 
1. A file was being written before the rolling upgrade started; some bytes, say X, were written
with hsync().
2. Rolling upgrade done; the writer continued, added some more bytes, and the file was closed
with X+Y bytes.
3. Now rollback is done.
 
i. Current state of the file is UNDERCONSTRUCTION.
ii. getFileStatus() returns size X, BUT among the replicas there is a FINALIZED replica with
size X+Y.
iii. recoverLease() on the file closes the file with X+Y bytes.
 
Question:
  What should be the size here after rollback + recoverLease()?
  Since the user always writes with hsync(), the application might keep its own count of how many
bytes were written. But now, extra bytes are seen after rollback. Is this correct?
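Lining up the APIs against the scenario (a sketch only — these steps obviously cannot run in one process across a rollback, and the path and sizes are made up):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class RollbackLengthSketch {
      public static void main(String[] args) throws Exception {
        Path file = new Path("/tmp/ru-rollback-demo");  // hypothetical
        DistributedFileSystem dfs =
            (DistributedFileSystem) file.getFileSystem(new Configuration());

        // 1. Write X bytes and hsync() before the rolling upgrade; the
        //    application can now assume X bytes are durable.
        FSDataOutputStream out = dfs.create(file);
        out.write(new byte[4096]);   // X bytes
        out.hsync();

        // 2. Rolling upgrade completes; writer adds Y more bytes, closes.
        out.write(new byte[1024]);   // Y bytes
        out.close();                 // DNs hold a FINALIZED X+Y replica

        // 3. Rollback. The NN is back at the pre-upgrade image: the file
        //    is UNDERCONSTRUCTION and getFileStatus() reports size X.
        System.out.println("len after rollback = "
            + dfs.getFileStatus(file).getLen());   // X

        // recoverLease() triggers block recovery (may need to be retried
        // until it returns true), which adopts the longer FINALIZED
        // replica, so the file ends up CLOSED at X+Y bytes.
        boolean closed = dfs.recoverLease(file);
        System.out.println("closed=" + closed + ", len = "
            + dfs.getFileStatus(file).getLen());   // X+Y
      }
    }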
 
-vinay


