hadoop-hdfs-issues mailing list archives

From "Haohui Mai (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8344) NameNode doesn't recover lease for files with missing blocks
Date Mon, 20 Jul 2015 22:28:05 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14634172#comment-14634172 ]

Haohui Mai commented on HDFS-8344:
----------------------------------

-1. Can you please revert the commit?

I'm concerned about the complexity this commit introduces, as well as the difficulty users
will have choosing the right configuration. The lease limits are an internal implementation
detail and should not be exposed to users whenever possible. We intentionally keep the soft
and hard limits non-configurable to avoid users shooting themselves in the foot.

bq. The datanode might be busy and recovery may fail the first time.

That's exactly what the hard limit and the lease-recovery retries are designed for. Again,
this is only one internal implementation approach to the solution; the details should not be
exposed to the users.
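
For completeness: when a client cannot wait for the limits at all, the existing public API
already lets it force recovery without any new configuration. A minimal sketch using
DistributedFileSystem#recoverLease (the class name and path handling below are illustrative,
not part of this issue):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class RecoverLeaseExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path(args[0]);
    FileSystem fs = FileSystem.get(path.toUri(), conf);
    if (fs instanceof DistributedFileSystem) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      // Asks the NameNode to start lease recovery for the path; returns true
      // once the lease has been released and the last block is finalized.
      boolean recovered = dfs.recoverLease(path);
      System.out.println("recoverLease(" + path + ") -> " + recovered);
    }
  }
}
{code}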

> NameNode doesn't recover lease for files with missing blocks
> ------------------------------------------------------------
>
>                 Key: HDFS-8344
>                 URL: https://issues.apache.org/jira/browse/HDFS-8344
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.7.0
>            Reporter: Ravi Prakash
>            Assignee: Ravi Prakash
>             Fix For: 2.8.0
>
>         Attachments: HDFS-8344.01.patch, HDFS-8344.02.patch, HDFS-8344.03.patch, HDFS-8344.04.patch, HDFS-8344.05.patch, HDFS-8344.06.patch, HDFS-8344.07.patch
>
>
> I found another(?) instance in which the lease is not recovered. This is easily reproducible
> on a pseudo-distributed single-node cluster:
> # Before you start, it helps to lower the lease limits as in the snippet below. This is not
> necessary, but it simply reduces how long you have to wait.
> {code}
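> // Lowered copies of the client lease-period constants (in 2.7 these live in
> // org.apache.hadoop.hdfs.protocol.HdfsConstants; the stock values are 60 s soft and 1 h hard),
> // so that lease expiry kicks in within about a minute for the test.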
>       public static final long LEASE_SOFTLIMIT_PERIOD = 30 * 1000;
>       public static final long LEASE_HARDLIMIT_PERIOD = 2 * LEASE_SOFTLIMIT_PERIOD;
> {code}
> # Client starts to write a file. (It could be less than 1 block, but it is hflushed, so some
> of the data has landed on the datanodes.) (I'm copying the client code I am using; see the
> sketch after this description. I generate a jar and run it using $ hadoop jar TestHadoop.jar.)
> # Client crashes. (I simulate this by kill -9 of the $(hadoop jar TestHadoop.jar) process
> after it has printed "Wrote to the bufferedWriter".)
> # Shoot the datanode. (Since I ran on a pseudo-distributed cluster, there was only 1)
> I believe the lease should be recovered and the block should be marked missing. However,
> this is not happening; the lease is never recovered.
> The effect of this bug for us was that nodes could not be decommissioned cleanly. Although
> we knew that the client had crashed, the Namenode never released the leases (even after
> restarting the Namenode, and even months afterwards). There are actually several other cases
> where we don't consider what happens if ALL the datanodes die while the file is being written,
> but I am going to punt on those for another time.
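
The client code referenced in step 2 is not reproduced in this archived view. Purely as an
illustration, here is a minimal sketch of such a client, assuming a BufferedWriter over the
HDFS output stream, an hflush() after the write, and a hypothetical output path (the class
layout and path are guesses, not taken from the issue):

{code}
import java.io.BufferedWriter;
import java.io.OutputStreamWriter;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TestHadoop {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);            // default FS is HDFS on the test cluster
    Path path = new Path("/tmp/lease-test.txt");     // hypothetical path
    FSDataOutputStream out = fs.create(path);
    BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(out, "UTF-8"));
    writer.write("some data, less than one block\n");
    writer.flush();                                  // push the data out of the BufferedWriter
    out.hflush();                                    // make it visible to the datanode pipeline
    System.out.println("Wrote to the bufferedWriter");
    // Keep the stream open so the lease is still held when the process is killed.
    Thread.sleep(Long.MAX_VALUE);
  }
}
{code}

Killing this process with kill -9 leaves the file open under an active lease, which is the
state the reproduction steps above rely on.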



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
