hadoop-common-user mailing list archives

From Ravi Prakash <ravihad...@gmail.com>
Subject Re: Journal nodes , QJM requirement
Date Tue, 28 Feb 2017 19:52:38 GMT
Thanks for the question Amit and your response Surendra!

I think Amit has raised a good question. I can only guess at the
"need" for *journaling* while using a QJM. I'm fairly certain that if you
look through all the comments in
https://issues.apache.org/jira/browse/HDFS-3077 and its subtasks, you are
bound to find the reasoning there. (Or maybe we never thought about it ;-)
and it's worth pursuing.)

Journaling was necessary in the past when there was a single Namenode
because we wanted to be sure to persist any fsedits (changes to the file
system metadata) before actually making those changes in memory. That way,
if the Namenode crashed, we would load up fsimage from disk, and apply the
journalled edits to this state.
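The persist-before-apply discipline described above can be sketched in a few lines. This is an illustrative toy, not Hadoop's actual code; the class and method names are hypothetical stand-ins for the fsimage/fsedits mechanism:

```python
# Minimal sketch of write-ahead journaling: every edit is persisted to an
# append-only log *before* it mutates the in-memory state, so a crash can
# be recovered by loading a checkpoint and replaying the log.
# (Hypothetical names; not Hadoop's real implementation.)

class JournalingStore:
    def __init__(self):
        self.log = []        # stands in for the on-disk edit log (fsedits)
        self.state = {}      # stands in for the in-memory namespace

    def apply(self, key, value):
        self.log.append((key, value))   # 1. persist the edit first
        self.state[key] = value         # 2. then apply it in memory

    def recover(self, checkpoint):
        # Load the last checkpoint (fsimage) and replay journalled edits
        # on top of it to reconstruct the pre-crash state.
        self.state = dict(checkpoint)
        for key, value in self.log:
            self.state[key] = value
        return self.state
```

Because the log entry lands before the in-memory mutation, any edit visible in memory at crash time is guaranteed to be recoverable from the journal.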

Along comes the QJM, where the likelihood of all QJM nodes failing is
reduced (but still non-zero). Furthermore, I'm not sure (and perhaps
someone more knowledgeable about the QJM can answer) whether an individual
JournalNode in a quorum accepts a transaction only after persisting it to
disk, or after applying it to its journal in memory. If it's the latter,
keeping a journal around is still valuable. Perhaps that's the reason?
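The quorum acceptance being discussed can be sketched as majority-ack commit: a transaction counts as committed only once a majority of journal nodes report it persisted. This is a hedged illustration of the general quorum technique, not the QJM's actual protocol; all names here are hypothetical:

```python
# Hypothetical sketch of quorum commit: a transaction is committed only
# when a strict majority of journal nodes acknowledge persisting it.
# Minority failures (here modeled as unhealthy nodes) do not block writes.

def quorum_commit(txn, journal_nodes):
    acks = sum(1 for node in journal_nodes if node.persist(txn))
    return acks > len(journal_nodes) // 2   # strict majority required

class JournalNode:
    def __init__(self, healthy=True):
        self.healthy = healthy
        self.journal = []

    def persist(self, txn):
        if not self.healthy:
            return False          # a down node never acknowledges
        self.journal.append(txn)  # assumed durable append before acking
        return True
```

With three journal nodes, one failure still leaves a majority, so writes proceed; two failures make commits impossible, which is the availability/safety trade-off a quorum buys.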

Or perhaps it was just the software engineering aspect of it. Special-casing
away journaling when a quorum is available would probably have required
large-scale changes to very brittle and important code, and the designers
chose not to increase the maintenance burden, instead working with the
abstraction of the journal?

Good question though. Thanks for bringing it up and making us think about
it.

Cheers
Ravi

On Mon, Feb 27, 2017 at 11:16 PM, surendra lilhore <
surendra.lilhore@huawei.com> wrote:

> Hi Amit,
>
>
>
> 1. Shared storage is used instead of direct writes to the standby so that
> the cluster remains functional even when the standby is not available.
> Shared storage is distributed; it remains functional even if one node
> (standby) fails, so it provides uninterrupted service for the user.
>
>
>
> 2. HDFS uses shared storage or JournalNodes to avoid the “split-brain”
> syndrome, where multiple NameNodes think they’re in charge of the cluster.
> The JournalNodes allow only one active NameNode to write the edit
> log.
>
> For more info you can check the HDFS document:
> https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
>
>
>
> Regards
>
> Surendra
>
>
>
>
>
> *From:* Amit Kabra [mailto:amitkabraiiit@gmail.com]
> *Sent:* 27 February 2017 10:29
> *To:* user@hadoop.apache.org
> *Subject:* Journal nodes , QJM requirement
>
>
>
> Hi Hadoop Users,
>
>
>
> I have a question I couldn't find information about on the internet.
>
>
>
> Why does Hadoop need a journaling system? In order to sync the Active and
> Standby NameNodes, instead of using JournalNodes or any shared system,
> couldn't it do master-slave or multi-master replication, where for any
> write the master writes to the other master/slave as well, and commits /
> accepts the write only once replication is done at the other sites?
>
>
>
> One reason I could think of is that the JournalNodes write data from the
> NameNode in append-only mode, which *might* make it faster than writing to
> a slave / another master for replication, but I am not sure.
>
>
>
> Any pointers ?
>
>
>
> Thanks,
>
> Amit Kabra.
>
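The split-brain avoidance Surendra describes is commonly implemented with epoch-based fencing, which the QJM design (HDFS-3077) also uses: each would-be active NameNode obtains a higher epoch number, and a journal node rejects writes tagged with any stale epoch. A hedged sketch of that idea, with hypothetical names:

```python
# Illustrative sketch of epoch-based fencing (not Hadoop's real code):
# a journal node promises itself to the highest epoch it has seen, and a
# deposed writer with a lower epoch can no longer append edits.

class FencingJournalNode:
    def __init__(self):
        self.promised_epoch = 0
        self.edits = []

    def new_epoch(self, epoch):
        # A NameNode claims writership by proposing a strictly higher epoch.
        if epoch > self.promised_epoch:
            self.promised_epoch = epoch
            return True
        return False

    def journal(self, epoch, edit):
        # Writes tagged with an old epoch are rejected, so a NameNode that
        # wrongly believes it is still active cannot corrupt the log.
        if epoch < self.promised_epoch:
            return False
        self.edits.append(edit)
        return True
```

Since only one NameNode can hold the highest epoch across a majority of journal nodes at a time, at most one writer is ever accepted, which is exactly the single-active-writer guarantee mentioned in point 2 above.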
