hadoop-hdfs-issues mailing list archives

From "Aaron T. Myers (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-2713) HA : An alternative approach to clients handling Namenode failover.
Date Thu, 22 Dec 2011 19:23:31 GMT

    [ https://issues.apache.org/jira/browse/HDFS-2713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13174980#comment-13174980 ]

Aaron T. Myers commented on HDFS-2713:

After any DFSClient operation fails due to Namenode unavailability, the most important thing
to do is to detect when the Active Namenode becomes available again.
So the background thread is not doing any unnecessary work; it is doing the high-priority work.

That's not necessarily true. It's only important *if* the DFSClient will indeed be used later
for another client operation after some client operation has timed out. If it's not reused,
then any work the background thread has done will in fact have been unnecessary. I would guess
(pure conjecture) that most client programs will not be structured this way. That is, once
a single call times out for a given DFSClient, that DFSClient will not be reused.

My intention is that, when one client call finds that failover is required but cannot complete
the failover within the wait time, why should it wait until the next call comes to try
again and fail over after the minimum-delay wait?
Even though the first call fails, this background thread will ensure that the active proxy
instance is found. If the next call comes now (in a user thread), it need not wait to connect
and fail over again; it can immediately make use of that proxy instance and go ahead.
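The scheme described above can be sketched roughly as follows. All names here (BackgroundFailover, tryConnect, the Proxy interface) are hypothetical stand-ins, not the actual DFSClient API: a daemon thread keeps probing the configured Namenodes and publishes the active proxy, so the next user call can pick it up without repeating the connect-and-failover work itself.

```java
import java.util.concurrent.atomic.AtomicReference;

public class BackgroundFailover {
    interface Proxy { String invoke(String op); }

    // Proxy to the active Namenode, published by the background thread.
    final AtomicReference<Proxy> activeProxy = new AtomicReference<>();

    // Stub connection attempt: in real code this would be an RPC asking the
    // Namenode whether it is active. Here only "nn2" answers as active.
    static Proxy tryConnect(String addr) {
        return addr.equals("nn2") ? (Proxy) (op -> addr + ":" + op) : null;
    }

    // Daemon thread keeps probing the Namenodes until one is found active,
    // then caches the proxy for subsequent user calls.
    void startBackgroundFailover(String... namenodes) {
        Thread t = new Thread(() -> {
            while (activeProxy.get() == null) {
                for (String nn : namenodes) {
                    Proxy p = tryConnect(nn);
                    if (p != null) { activeProxy.set(p); return; }
                }
                try { Thread.sleep(50); } catch (InterruptedException e) { return; }
            }
        });
        t.setDaemon(true);
        t.start();
    }

    public static void main(String[] args) throws Exception {
        BackgroundFailover bf = new BackgroundFailover();
        bf.startBackgroundFailover("nn1", "nn2");
        // A later user call finds the proxy already cached and proceeds
        // without repeating the failover work.
        while (bf.activeProxy.get() == null) Thread.sleep(10);
        System.out.println(bf.activeProxy.get().invoke("getFileInfo"));
    }
}
```

This is only the caching half of the idea; the trade-off ATM raises below is whether the probing is wasted work when the DFSClient is never reused after the failed call.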

But the delay on the subsequent call should be minimal (less than a second), assuming the
NN is indeed back up. Given that failover of the NN will likely take tens of seconds, this
particular aspect of client failover doesn't seem to me like it's something worth optimizing.

Two things:

# This isn't so much "an alternative approach to clients handling NN failover" as it is a
potential performance improvement for subsequent calls after a failed call due to the active
NN being down. Would you mind changing the JIRA summary and description to better reflect this?
# Given that it's a performance improvement, could you provide some benchmarks? Do you have
an actual workload which benefits from this change?
> HA : An alternative approach to clients handling  Namenode failover.
> --------------------------------------------------------------------
>                 Key: HDFS-2713
>                 URL: https://issues.apache.org/jira/browse/HDFS-2713
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ha, hdfs client
>    Affects Versions: HA branch (HDFS-1623)
>            Reporter: Uma Maheswara Rao G
>            Assignee: Uma Maheswara Rao G
> This is the approach for client failover which we adopted when we developed HA for Hadoop.
I would like to propose this approach for others to review & include in the HA implementation,
if found useful.
> This is similar to the ConfiguredProxyProvider in the sense that it takes the address
of both the Namenodes as the input. The major differences I can see from the current implementation
are:
> 1) During failover, user threads can be controlled very accurately in *the time they
wait for the active namenode* to be available, awaiting the retry. Beyond this, the threads will
not be made to wait; the DFS Client will throw an Exception indicating that the operation has
failed.
> 2) Failover happens in a separate thread, not in the client application threads. The
thread will keep trying to find the Active Namenode until it succeeds. 
> 3) This also means that irrespective of whether the operation's RetryAction is RETRY_FAILOVER
or FAIL, the user thread can trigger the client's failover. 
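Point 1) of the quoted description, the bounded wait, can be sketched as below; the class and member names are illustrative, not the patch's actual code. The background failover thread signals completion through a latch, and a user thread waits at most a configured time before failing the operation with an exception instead of blocking indefinitely.

```java
import java.io.IOException;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class BoundedFailoverWait {
    final AtomicReference<Object> activeProxy = new AtomicReference<>();
    final CountDownLatch failedOver = new CountDownLatch(1);
    final long maxWaitMillis;  // hypothetical per-client configuration

    BoundedFailoverWait(long maxWaitMillis) { this.maxWaitMillis = maxWaitMillis; }

    // Called by the background failover thread once it finds the active NN.
    void onFailoverComplete(Object proxy) {
        activeProxy.set(proxy);
        failedOver.countDown();
    }

    // User thread: wait at most maxWaitMillis for the failover to finish,
    // then fail the operation rather than blocking further.
    Object awaitActiveProxy() throws IOException, InterruptedException {
        if (!failedOver.await(maxWaitMillis, TimeUnit.MILLISECONDS)) {
            throw new IOException("No active Namenode within " + maxWaitMillis + " ms");
        }
        return activeProxy.get();
    }

    public static void main(String[] args) throws Exception {
        BoundedFailoverWait waiter = new BoundedFailoverWait(100);
        try {
            waiter.awaitActiveProxy();  // nobody signals, so this times out
        } catch (IOException e) {
            System.out.println("operation failed: " + e.getMessage());
        }
    }
}
```

Because the latch is independent of any one RPC attempt, the user thread's bound applies regardless of whether the RetryPolicy classified the attempt as RETRY_FAILOVER or FAIL, which is the point 3) the description makes.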

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

