hadoop-hdfs-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3077) Quorum-based protocol for reading and writing edit logs
Date Mon, 01 Oct 2012 18:47:12 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13467085#comment-13467085 ]

Todd Lipcon commented on HDFS-3077:
-----------------------------------

bq. The use of the term "master" can be confused with "recovery coordinator". The master JN is the source for the journal synchronization. The section title uses the word "recovery" - this word is also used in section 2.8.
bq. clarify this in the section
bq. use another term: "journal-sync-master(s)" or "journal-sync-source(s)" - I prefer the word "source"

Good call. I changed "master" to "source" throughout.

bq. Q. Is synchronization needed before proceeding when you have a quorum of JNs that have all the transactions? That is, in that case does the "acceptRecovery" operation (section 2.8) force the last log segment in each of the JNs to be consistently finalized? Either way, please clarify this in section 2.9. (Clearly all unsync'ed JNs have to sync with one of the other JNs.) I think you are using the design and code of HDFS-3092, but at times I am not sure when I read this section.

I added the following:

{code}
Note that there may be multiple segments (and respective JournalNodes) that are determined
to be equally good sources by the above rules. For example, if all JournalNodes committed
the most recent transaction and no further transactions were partially proposed, all
JournalNodes would have identical states.

In this case, the current implementation chooses the recovery source arbitrarily between
the equal options. When a JournalNode receives an {\tt acceptRecovery()} RPC for a segment
and sees that it already has an identical segment stored on its disk, it does not waste
any effort in downloading the log from the remote node. So, in such a case that all
JournalNodes have equal segments, no log data need be transferred for recovery.
{code}

The reason why the recovery protocol is still followed when all candidates are equal is that
not all JNs may have responded. So, even if two JNs reply with equal segments, there may be
a third JN (crashed) which has a different segment length. Using a consistent recovery protocol
handles this case without any special-casing, so that a future recovery won't conflict.
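
To make the skip-download path above a bit more concrete, here is a minimal sketch of the check a JournalNode could make inside acceptRecovery(). This is illustrative only, not the real Journal/QJM code; the names SegmentState, fetchLogSegment() and persistAcceptedRecovery() are made up for the example.

{code}
// Illustrative sketch only -- not the actual QuorumJournalManager implementation.
import java.net.URL;

class RecoveryAcceptSketch {

  /** Minimal stand-in for the segment metadata exchanged during recovery. */
  static class SegmentState {
    final long startTxId;
    final long endTxId;

    SegmentState(long startTxId, long endTxId) {
      this.startTxId = startTxId;
      this.endTxId = endTxId;
    }

    boolean identicalTo(SegmentState other) {
      return other != null
          && startTxId == other.startTxId
          && endTxId == other.endTxId;
    }
  }

  private SegmentState localSegment; // what this JN already has on disk

  void acceptRecovery(SegmentState proposed, URL fromUrl) {
    if (!proposed.identicalTo(localSegment)) {
      // Local copy differs (or is missing): pull the segment from the
      // chosen recovery source.
      fetchLogSegment(fromUrl, proposed);
      localSegment = proposed;
    }
    // If the local segment was already identical, no log data needs to be
    // transferred. Either way, durably record the accepted recovery so a
    // later recovery round cannot contradict this decision.
    persistAcceptedRecovery(proposed);
  }

  private void fetchLogSegment(URL fromUrl, SegmentState segment) { /* stub */ }
  private void persistAcceptedRecovery(SegmentState segment) { /* stub */ }
}
{code}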

----

bq. Section 2.10.6
bq. How can JN1 get new transactions (151, 152, 153) till finalization has been achieved on a quorum of JNs?
bq. Or do you mean that finalize succeeded and all JNs created "edits-inprogress-151" and then "edits-inprogress-151" got deleted from JN2 and JN3 because they had no transactions in them as described in 2.10.5?

Yep. The scenario is (see the sketch after this list):
- Call finalizeSegment(1-150) on all JNs, they all succeed
- Call startLogSegment(151) on all JNs, they all succeed
- Call logEdits(151-153), but it only goes to JN1 before crashing
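
For illustration, here is the same call sequence written out against three per-JN logger stubs. The Logger interface below is invented for the example (it is not the real AsyncLogger API); the only point is the ordering of the calls and where the writer crashes.

{code}
// Illustrative call sequence only; the Logger interface is a stub.
class ScenarioSketch {

  interface Logger {
    void finalizeLogSegment(long firstTxId, long lastTxId);
    void startLogSegment(long txId);
    void sendEdits(long firstTxId, long lastTxId);
  }

  static void run(Logger jn1, Logger jn2, Logger jn3) {
    // 1. Segment 1-150 is finalized successfully on all three JNs.
    jn1.finalizeLogSegment(1, 150);
    jn2.finalizeLogSegment(1, 150);
    jn3.finalizeLogSegment(1, 150);

    // 2. A new segment starting at txid 151 is opened on all three JNs,
    //    creating "edits-inprogress-151" everywhere.
    jn1.startLogSegment(151);
    jn2.startLogSegment(151);
    jn3.startLogSegment(151);

    // 3. Edits 151-153 are sent, but only JN1 receives them before the
    //    writer crashes. JN2 and JN3 are left with empty in-progress
    //    segments, which may later be discarded as described in 2.10.5.
    jn1.sendEdits(151, 153);
    // writer crashes here; JN2 and JN3 never see txns 151-153
  }
}
{code}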


bq. At the end of recovery, can we guarantee that a new open segment is created with one no-op transaction in it?

I think this actually complicates things, because then we have more edge conditions to consider
-- we would also have to handle all the possible failures of this additional write. I prefer to
think of recovery as having one job: closing off the latest log segment. At that point, the writer
continues on with writing the next segment using the usual APIs.

If we had the recovery protocol actually insert a no-op transaction on its own, that would break
the abstraction here that the JournalManager is just in charge of storage: it never generates
transactions itself.
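
To illustrate that division of labor, here is a toy sketch of a writer taking over: recovery only finalizes whatever was left in-progress, and only the normal write path opens the next segment and writes the no-op. The interface below is illustrative, loosely modeled on the JournalManager-style calls discussed in this thread, not the actual HDFS API.

{code}
// Toy sketch of the separation of concerns; not the real JournalManager API.
class WriterTakeoverSketch {

  interface JournalSketch {
    void recoverUnfinalizedSegments();     // recovery's one job
    void startLogSegment(long firstTxId);  // normal writer path
    void logEdit(String op);               // normal writer path
  }

  static void becomeActiveWriter(JournalSketch journal, long nextTxId) {
    // 1. Recovery: consistently finalize the previous in-progress segment
    //    on a quorum. No transactions are generated in this step.
    journal.recoverUnfinalizedSegments();

    // 2. Only the writer opens the next segment and writes the usual
    //    no-op through the ordinary APIs.
    journal.startLogSegment(nextTxId);
    journal.logEdit("START_LOG_SEGMENT"); // stands in for the no-op txn
  }
}
{code}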

bq. BTW I thought that with HDFS-1073 each segment has an initial no-op transaction (BTW did we have a similar close-segment transaction in HDFS-1073?); did this change as part of HDFS-3077?

Yes, it does have an initial no-op transaction, but the API is such that there are two separate
calls made on the JournalManager: startLogSegment(), which opens the file, and logEdit(START_LOG_SEGMENT),
which writes that no-op transaction. Really, the startLogSegment() call has no semantic value by itself,
which is why I chose to just roll it back during recovery if the JournalNode has an entirely
empty segment (i.e. it crashed between startLogSegment() and the first no-op transaction).
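
As a rough illustration of that rollback rule: an in-progress segment with zero transactions carries no information, so recovery can simply discard it. The helper names below (countTransactions(), discardSegment()) are hypothetical, not real HDFS methods.

{code}
// Illustrative only; helper names are made up for the example.
import java.io.File;
import java.io.IOException;

class EmptySegmentRule {

  /**
   * During recovery, an in-progress segment that contains zero transactions
   * (the JN crashed between startLogSegment() and the initial no-op) has no
   * semantic value and can simply be rolled back.
   */
  static void maybeDiscard(File inProgressSegment) throws IOException {
    if (countTransactions(inProgressSegment) == 0) {
      discardSegment(inProgressSegment);
    }
  }

  static long countTransactions(File segment) throws IOException {
    return 0; // stub: would scan the edit log file for valid ops
  }

  static void discardSegment(File segment) throws IOException {
    // stub: would delete or rename the empty in-progress file
  }
}
{code}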


                
> Quorum-based protocol for reading and writing edit logs
> -------------------------------------------------------
>
>                 Key: HDFS-3077
>                 URL: https://issues.apache.org/jira/browse/HDFS-3077
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: ha, name-node
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>             Fix For: QuorumJournalManager (HDFS-3077)
>
>         Attachments: hdfs-3077-partial.txt, hdfs-3077-test-merge.txt, hdfs-3077.txt,
>                      hdfs-3077.txt, hdfs-3077.txt, hdfs-3077.txt, hdfs-3077.txt, hdfs-3077.txt, hdfs-3077.txt,
>                      qjournal-design.pdf, qjournal-design.pdf, qjournal-design.pdf, qjournal-design.pdf,
>                      qjournal-design.tex, qjournal-design.tex
>
>
> Currently, one of the weak points of the HA design is that it relies on shared storage
> such as an NFS filer for the shared edit log. One alternative that has been proposed is to
> depend on BookKeeper, a ZooKeeper subproject which provides a highly available replicated
> edit log on commodity hardware. This JIRA is to implement another alternative, based on a
> quorum commit protocol, integrated more tightly in HDFS and with the requirements driven only
> by HDFS's needs rather than more generic use cases. More details to follow.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
