Date: Mon, 22 Apr 2013 17:19:17 +0000 (UTC)
From: "Ted Yu (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Comment Edited] (HBASE-8389) HBASE-8354 DDoSes Namenode with lease recovery requests

    [ https://issues.apache.org/jira/browse/HBASE-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13638173#comment-13638173 ]

Ted Yu edited comment on HBASE-8389 at 4/22/13 5:18 PM:
--------------------------------------------------------

In HDFS-4525, the following API was added:
{code}
/**
 * Get the close status of a file
 * @param src The path to the file
 *
 * @return return true if file is closed
 * @throws FileNotFoundException if the file does not exist.
 * @throws IOException If an I/O error occurred
 */
public boolean isFileClosed(Path src) throws IOException {
{code}
Since it is not available in hadoop 1.1, I will utilize it, through reflection, for 0.95 / trunk.
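A reflection probe along these lines could be used. This is a minimal sketch under stated assumptions, not the actual HBase patch: the class and method names below are hypothetical, and the helper only looks the method up, degrading gracefully on hadoop 1.1 where HDFS-4525 is absent.

```java
import java.lang.reflect.Method;

// Hypothetical sketch: locate isFileClosed(Path) via reflection so the same
// code compiles and runs against hadoop 1.1, where the method does not exist.
public class IsFileClosedProbe {

    // Returns the Method for isFileClosed if the running Hadoop supports it
    // (HDFS-4525), or null on older versions such as hadoop 1.1.
    public static Method findIsFileClosed(Class<?> fsClass) {
        try {
            Class<?> pathClass = Class.forName("org.apache.hadoop.fs.Path");
            return fsClass.getMethod("isFileClosed", pathClass);
        } catch (ReflectiveOperationException e) {
            return null; // hadoop 1.1: caller falls back to the old heuristic
        }
    }

    // Invokes the probed method; only meaningful when findIsFileClosed
    // returned a non-null Method for the filesystem's class.
    public static boolean isFileClosed(Method m, Object fs, Object path)
            throws Exception {
        return (Boolean) m.invoke(fs, path);
    }
}
```

When the probe returns null, the caller would keep the pre-HDFS-4525 behavior instead of checking the file's close status.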
A new HBASE JIRA would be logged for the above improvement.

was (Author: yuzhihong@gmail.com):
In HDFS-4525, the following API was added:
{code}
/**
 * Get the close status of a file
 * @param src The path to the file
 *
 * @return return true if file is closed
 * @throws FileNotFoundException if the file does not exist.
 * @throws IOException If an I/O error occurred
 */
public boolean isFileClosed(Path src) throws IOException {
{code}
Since it is not available in hadoop 1.1, I will utilize it, through reflection, for 0.95 / trunk. This approach is not ideal: if the primary DataNode selected for the recovery happens to be stale, the check would take some time, and the client has no firm confirmation of whether another lease recovery request should be issued. HDFS-4724 should provide a better mechanism.

A new HBASE JIRA would be logged for the above improvement.

> HBASE-8354 DDoSes Namenode with lease recovery requests
> -------------------------------------------------------
>
>                 Key: HBASE-8389
>                 URL: https://issues.apache.org/jira/browse/HBASE-8389
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Varun Sharma
>            Assignee: Varun Sharma
>            Priority: Critical
>             Fix For: 0.94.8
>
>         Attachments: 8389-0.94.txt, 8389-0.94-v2.txt, 8389-0.94-v3.txt, 8389-0.94-v4.txt, 8389-trunk-v1.txt, 8389-trunk-v2.patch, nn1.log, nn.log, sample.patch
>
>
> We ran hbase 0.94.3 patched with HBASE-8354 and observed too many outstanding lease recoveries because of the short retry interval of 1 second between lease recoveries.
> The namenode gets into the following loop:
> 1) It receives a lease recovery request and initiates recovery, choosing a primary datanode, every second.
> 2) A lease recovery succeeds and the namenode tries to commit the block under recovery as finalized - this takes < 10 seconds in our environment since we run with tight HDFS socket timeouts.
> 3) At step 2), there is a more recent recovery enqueued because of the aggressive retries.
> This causes the committed block to get preempted, and we enter a vicious cycle.
> So we do, --> -->
> This loop is paused after 300 seconds, which is the "hbase.lease.recovery.timeout". Hence the MTTR we are observing is 5 minutes, which is terrible. Our ZK session timeout is 30 seconds and the HDFS stale node detection timeout is 20 seconds.
> Note that before the patch, we did not call recoverLease so aggressively - also, the HDFS namenode is pretty dumb in that it keeps initiating new recoveries for every call. Before the patch, we call recoverLease, assume that the block was recovered, and try to get the file; it has zero length since it's under recovery, so we fail the task and retry until we get a non-zero length. So things just work.
> Fixes:
> 1) Expecting recovery to occur within 1 second is too aggressive. We need a more generous timeout, and it needs to be configurable, since the recovery typically takes as much time as the DFS timeouts: the primary datanode doing the recovery tries to reconcile the blocks and hits those timeouts when it tries to contact the dead node. So the recovery is only as fast as the HDFS timeouts.
> 2) We have another issue, reported in HDFS-4721: the Namenode chooses the stale datanode to perform the recovery (since it's still alive). Hence the first recovery request is bound to fail. So if we want a tight MTTR, we either need something like HDFS-4721 or we need something like this:
> recoverLease(...)
> sleep(1000)
> recoverLease(...)
> sleep(configuredTimeout)
> recoverLease(...)
> sleep(configuredTimeout)
> Where configuredTimeout should be large enough to let the recovery happen, but the first timeout is short so that we get past the moot recovery in step #1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
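The retry schedule proposed in the Fixes section above can be rendered as a small helper; this is a hypothetical sketch (the class and parameter names are illustrative, not from the actual HBASE-8389 patch). The first pause is short so the client gets past the doomed recovery on the stale primary datanode; every later pause uses the generous configured timeout sized to the HDFS socket timeouts.

```java
// Hypothetical sketch of the proposed recoverLease retry schedule:
// recoverLease, sleep(1000), recoverLease, sleep(configuredTimeout), ...
public class LeaseRecoveryBackoff {

    // Pause in milliseconds to wait before the given 1-based recoverLease
    // attempt: no wait before the first call, a short pause before the
    // second, and the configured timeout before every later attempt.
    public static long pauseBeforeAttempt(int attempt, long firstPauseMs,
                                          long configuredTimeoutMs) {
        if (attempt <= 1) {
            return 0L;                   // first recoverLease: call immediately
        }
        if (attempt == 2) {
            return firstPauseMs;         // short pause, e.g. 1000 ms, per the proposal
        }
        return configuredTimeoutMs;      // generous, sized to the HDFS timeouts
    }
}
```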