hbase-dev mailing list archives

From Stack <st...@duboce.net>
Subject Re: [DISCUSS] No regions on Master node in 2.0
Date Thu, 17 Nov 2016 06:44:50 GMT
On Wed, Nov 16, 2016 at 10:57 AM, Gary Helmling <ghelmling@gmail.com> wrote:

> Only answer to the above observation is demonstration that ITBLL with meta
> not on master is as robust as runs that have master carrying meta.
>
>
> Agree that this is a prerequisite.  Another useful measure might be the
> delay before an assignment under load is visible to clients.
>

> There is still contention between readers and writers over available
> handler threads.  With a local meta region, you don't have assignment
> manager having to contend for handler threads in order to perform writes.
> This is huge for reliability.
>
> Without meta on master, it has not been hard to reproduce scenarios where
> HBase _cannot_ start up from a cold roll with high client traffic.  Region
> assignments just can not complete because master has to compete with all
> the clients attempting to read new region locations from meta and can't get
> in the queue.  With meta on master, this goes away completely.
>
>
>
This latter is a scenario to defend against: all priority handlers are
occupied by clients trying to read hbase:meta, including the Master, which
is trying to come up by first reading the current state of hbase:meta and
subsequently writing it. Seems easy enough to repro and to fix. Along with
the above ITBLL equivalence proofs, let us take on this saturated
hbase:meta scenario as a proofing test.
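To make the saturation scenario concrete, here is a toy model (not HBase code; handler counts and costs are made-up illustration) of a fixed priority-handler pool drained FIFO. When the Master's assignment write shares the pool with a cold-start backlog of client meta reads, it queues behind all of them; with its own handlers (or a local meta region, which removes the RPC entirely), it is served immediately:

```python
from collections import deque

HANDLERS = 10        # priority handler threads serving hbase:meta RPCs
CLIENT_READS = 1000  # backlog of client region-location lookups at cold start

def time_to_serve(queue, target, handlers=HANDLERS):
    """Drain a FIFO call queue, `handlers` calls per time unit.
    Return the time unit in which `target` first completes."""
    pending = deque(queue)
    t = 0
    while pending:
        batch = [pending.popleft() for _ in range(min(handlers, len(pending)))]
        t += 1
        if target in batch:
            return t
    raise ValueError("target not in queue")

# Shared pool: the master's write queues behind every client read.
shared = ["read"] * CLIENT_READS + ["master-write"]
print(time_to_serve(shared, "master-write"))            # -> 101 time units

# Dedicated pool (or local write): served in the first slot.
print(time_to_serve(["master-write"], "master-write"))  # -> 1 time unit
```

The point of the sketch is only that FIFO sharing makes master progress proportional to client backlog; any fix that gives assignment traffic its own lane (dedicated handlers, or locality) breaks that coupling.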



> > > Above is fine except for the bit where we need to be able to field
> > > reads. Let's distribute the data to be read over the cluster rather
> > > than treat meta reads with kid gloves hosted on a 'special' server; let
> > > these 'reads' be like any other read the cluster takes (see next point)
> > >
> > >
> > >
> > In my opinion, the real "special" part here is the master bit -- which I
> > think we should be working to make less special and more just a normal
> > bit of housekeeping spread across nodes -- not the regionserver role. It
> > only looks special right now because the evolution has stopped in the
> > middle. I really don't think enshrining master as a separate process is
> > the right way forward for us.
> >
> >
> I always liked this notion.
>
> To be worked out is how Master and hbase:meta hosting would interplay (The
> RS that is designated Master would also host hbase:meta? Would it be
> exclusively hosting hbase:meta or hbase:meta would move with Master
> function.... [Stuff we've talked about before]).
>
>
> I think meta should be tied to the master function for the reasons
> described above. It's key that the updates to meta be local.  I don't think
> that has to mean that only meta regions are hosted by the regionserver
> serving as master.  There could be other user regions hosted as well, given
> the server has adequate headroom to handle the master functions.
>
>
I don't like introducing a new node type, the meta-carrying-master. Our
cluster form changes (see the BryanB concern above).

Also, some fundamentals are badly broken if we are unable to reliably
maintain a table over RPC even with dedicated priority handlers and a
single writer.

(I'll not repeat other old args on why putting all our meta eggs in the
one-node basket is the wrong direction, IMO.)

Do you folks run the meta-carrying-master form, G?

One way to proceed would be to preserve the master carrying meta as an
option. It'd not be the default. I don't like options like this because my
guess is that the master-carrying-meta option would get no testing (other
than by folks like yourselves, G).
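For reference, the deploy form under discussion is today toggled through the balancer's tables-on-master setting, roughly as below (property name per the 1.x/2.x line; check your release before relying on it):

```xml
<!-- hbase-site.xml: pin hbase:meta to the active master's embedded
     regionserver. Verify the property name against your HBase version. -->
<property>
  <name>hbase.balancer.tablesOnMaster</name>
  <value>hbase:meta</value>
</property>
```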

St.Ack





>
>
> > > >
> > > Is this just because meta had a dedicated server?
> > >
> > >
> > I'm sure that having dedicated resources for meta helps.  But I don't
> > think that's sufficient.  The key is that master writes to meta are
> > local, and do not have to contend with the user requests to meta.
> >
> > It seems premature to be discussing dropping a working implementation
> > which eliminates painful parts of distributed consensus, until we have a
> > complete working alternative to evaluate.  Until then, why are we
> > looking at features that are in use and work well?
> >
> >
> >
> >
> How to move forward here? The Pv2 master is almost done. An ITBLL bakeoff
> of new Pv2 based assign vs a Master that exclusively hosts hbase:meta?
>
>
> I think that's a necessary test for proving out the new AM implementation.
> But remember that we are comparing a feature which is actively supporting
> production workloads with a line of active development.  I think there
> should also be additional testing around situations of high meta load and
> end-to-end assignment latency.
>
