hadoop-hdfs-issues mailing list archives

From "Aaron T. Myers (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6440) Support more than 2 NameNodes
Date Thu, 07 May 2015 19:52:03 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533279#comment-14533279 ]

Aaron T. Myers commented on HDFS-6440:

Hey Jesse,

Thanks a lot for working through my feedback; responses below.

bq. I'm not sure how we would test this when needing to change the structure of the FS to
support more than 2 NNs. Would you recommend (1) recognizing the old layout and then (2) transferring
it into the new layout? The reason this seems silly (to me) is that the layout is only enforced
by the way the minicluster is used/setup, rather than the way things would actually be run.
By moving things into the appropriate directories per-nn, but keeping everything else below
that the same, I think we keep the same upgrade properties but don't need to do the above
contrived/synthetic "upgrade".

I'm specifically thinking about just expanding {{TestRollingUpgrade}} with some tests that
exercise the > 2 NN scenario, e.g. amending or expanding {{testRollingUpgradeWithQJM}}.

bq. Maybe some salesforce terminology leak here.<snip>

Cool, that's what I figured. The new comment looks good to me.

bq. Yes, it's for when there is an error and you want to run the exact sequence of failovers
again in the test. Minor helper, but can be useful when trying to track down ordering dependency
issues (which there shouldn't be, but sometimes these things can creep in).

Sorry, maybe I wasn't clear. I get the point of using the random seed in the first place,
but I'm specifically talking about the fact that in {{doWriteOverFailoverTest}} we change
the value of that variable, log the value, and then never read it again. Doesn't seem like
that's doing anything.
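To illustrate the pattern in question: logging a seed only helps reproduce a failure if the logged value is actually fed back into the {{Random}} that drives the failover order. A minimal, self-contained sketch of that pattern (class and method names are hypothetical, not code from the patch):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class SeededShuffle {

  // Produce a randomized failover order that can be replayed exactly by
  // re-running with the logged seed.
  public static List<Integer> failoverOrder(int numNameNodes, long seed) {
    // Log the seed so a failing run can be reproduced later.
    System.out.println("Using failover seed: " + seed);
    List<Integer> order = new ArrayList<>();
    for (int i = 0; i < numNameNodes; i++) {
      order.add(i);
    }
    // The seed must actually be read here; a variable that is assigned and
    // logged but never passed to Random does nothing for reproducibility.
    Collections.shuffle(order, new Random(seed));
    return order;
  }

  public static void main(String[] args) {
    // Same seed => same order, so a failure can be replayed exactly.
    System.out.println(failoverOrder(3, 42L).equals(failoverOrder(3, 42L))); // prints true
  }
}
```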

bq. It can either be an InterruptedException or an IOException when transferring the checkpoint.
Interrupted ("ie") thrown if we are interrupted while waiting for any checkpoint to complete.
IOE if there is an execution exception when doing the checkpoint.<snip>

Right, I get that, but what I was pointing out was just that in the previous version of the
patch the variable "{{ie}}" was never being assigned to anything but "{{null}}". Here was
the code in that patch, note the 4th-to-last line:
+    InterruptedException ie = null;
+    IOException ioe= null;
+    int i = 0;
+    boolean success = false;
+    for (; i < uploads.size(); i++) {
+      Future<TransferFsImage.TransferResult> upload = uploads.get(i);
+      try {
+        // TODO should there be some smarts here about retries nodes that are not the active
+        if (upload.get() == TransferFsImage.TransferResult.SUCCESS) {
+          success = true;
+          //avoid getting the rest of the results - we don't care since we had a successful
+          break;
+        }
+      } catch (ExecutionException e) {
+        ioe = new IOException("Exception during image upload: " + e.getMessage(),
+            e.getCause());
+        break;
+      } catch (InterruptedException e) {
+        ie = null;
+        break;
+      }
+    }
That's fixed in the latest version of the patch, where the variable "{{ie}}" is assigned to
"{{e}}" when an {{InterruptedException}} occurs, so I think we're good.
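For reference, the fixed shape of that catch block can be sketched in a self-contained way; this is a simplified illustration, not the patch's actual code (the upload loop and IOException wrapping are elided):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

public class CaptureInterrupt {

  // Simplified sketch of the fixed wait: the caught InterruptedException is
  // captured (not replaced with null) so the caller can rethrow it later.
  public static InterruptedException waitForUpload(Future<Boolean> upload) {
    InterruptedException ie = null;
    try {
      upload.get();
    } catch (ExecutionException e) {
      // In the real code the cause is wrapped in an IOException; elided here.
    } catch (InterruptedException e) {
      ie = e; // the fix: keep the exception instead of assigning null
    }
    return ie;
  }

  public static void main(String[] args) throws Exception {
    // A task that is never run, plus a pre-set interrupt flag, makes
    // Future.get() throw InterruptedException deterministically.
    FutureTask<Boolean> pending = new FutureTask<>(() -> true);
    Thread.currentThread().interrupt();
    System.out.println(waitForUpload(pending) != null); // prints true
  }
}
```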

bq. There is {{TestFailoverWithBlockTokensEnabled}}<snip>

Ah, my bad. Yes indeed, that looks good to me. The overlapping range issue is exactly what
I wanted to see tested.

> Support more than 2 NameNodes
> -----------------------------
>                 Key: HDFS-6440
>                 URL: https://issues.apache.org/jira/browse/HDFS-6440
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: auto-failover, ha, namenode
>    Affects Versions: 2.4.0
>            Reporter: Jesse Yates
>            Assignee: Jesse Yates
>             Fix For: 3.0.0
>         Attachments: Multiple-Standby-NameNodes_V1.pdf, hdfs-6440-cdh-4.5-full.patch,
hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v1.patch, hdfs-6440-trunk-v3.patch, hdfs-multiple-snn-trunk-v0.patch
> Most of the work is already done to support more than 2 NameNodes (one active, one standby).
This would be the last bit to support running multiple _standby_ NameNodes; one of the standbys
should be available for fail-over.
> Mostly, this is a matter of updating how we parse configurations, some complexity around
managing the checkpointing, and updating a whole lot of tests.
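As a sketch of the configuration-parsing change described above, supporting more than two NameNodes would presumably just let the existing HA keys list additional entries. A hypothetical hdfs-site.xml fragment (nameservice and hostnames invented for illustration):

```xml
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <!-- Currently limited to two NNs; with this feature the list could grow. -->
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2,nn3</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>host1.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>host2.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn3</name>
    <value>host3.example.com:8020</value>
  </property>
</configuration>
```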

This message was sent by Atlassian JIRA
