hbase-dev mailing list archives

From Stack <st...@duboce.net>
Subject Re: [DISCUSS] No regions on Master node in 2.0
Date Fri, 11 Nov 2016 18:30:38 GMT
(Reviving an old thread that needs resolving before 2.0.0. Does Master
carry regions in hbase-2.0.0 or not? A strong argument is made below by
one of our biggest users that master hosting hbase:meta can be more
robust when updates are local, and that we can up the throughput of meta
operations if hbase:meta is exclusively hosted by the master.)
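
(For anyone catching up on the deploy question: there is a balancer
property that asks the master to carry system tables. The sketch below
shows the general shape only; the property name is from memory and has
moved around between branches, so verify against your build.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MetaOnMasterConfig {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Ask the balancer to keep hbase:meta (and only hbase:meta) on the active master.
    // Property name is an assumption; check hbase-default.xml for your version.
    conf.set("hbase.balancer.tablesOnMaster", "hbase:meta");
    System.out.println(conf.get("hbase.balancer.tablesOnMaster"));
  }
}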

On Mon, Apr 25, 2016 at 12:35 PM, Gary Helmling <ghelmling@gmail.com> wrote:

> On Mon, Apr 25, 2016 at 11:20 AM Stack <stack@duboce.net> wrote:
>
> > On Fri, Apr 8, 2016 at 1:42 AM, Elliott Clark <eclark@apache.org> wrote:
> >
> > > # Without meta on master, we double assign and lose data.
> > >
> > > That is currently a fact that I have seen over and over on multiple
> > > loaded clusters. Some abstract clean up of deployment vs losing data
> > > is a no-brainer for me. Master assignment, region split, region merge
> > > are all risky, and all places that HBase can lose data. Meta being
> > > hosted on the master makes communication easier and less flakey.
> > > Running ITBLL on a loop that creates a new table every time, and
> > > without meta on master everything will fail pretty reliably in ~2
> > > days. With meta on master things pass MUCH more.
> > >
> > >
>

The only answer to the above observation is a demonstration that ITBLL
runs with meta not on master are as robust as runs where the master
carries meta.
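
(For concreteness, the run meant here is IntegrationTestBigLinkedList
driven in loop mode, roughly as in the sketch below. The loop arguments
and output path are made-up placeholders and their order is from memory,
so adjust for your rig and version.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList;
import org.apache.hadoop.util.ToolRunner;

public class ItbllBakeoff {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Loop mode re-runs generate/verify cycles; run it against a cluster
    // with meta on master and one without, then compare how long each
    // survives without losing linked-list references.
    int rc = ToolRunner.run(conf, new IntegrationTestBigLinkedList(),
        new String[] {"loop", "10", "4", "1000000", "/tmp/itbll", "1"});
    System.exit(rc);
  }
}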



> > The discussion is what to do in 2.0 with the assumption that master
> > state would be done up on procedure v2, making most of the transitions
> > now done over zk and hbase:meta instead local to the master, with only
> > the final state published to a remote meta (an RPC, but if we can't
> > make RPC work reliably in our distributed system, that's a bigger
> > problem).
> >

>
> But making RPC work for assignment here is precisely the problem.  There's
> no reason master should have to contend with user requests to meta in order
> to be able to make updates.  And until clients can actually see the change,
> it doesn't really matter if the master state has been updated or not.
>
>
In hbase-2.0.0, there'll be a new regime: hbase:meta will have a single
writer, the master. No more contention on writes. As for contention on
reads, that is unavoidable.

In hbase-2.0.0, only the final publishing step, the state we want clients
to see, will update hbase:meta. All other transitions will be internal.
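
(To illustrate the client-facing contract: a client acts only on what has
been published to hbase:meta, via the region locator; it never sees the
intermediate transition states. Sketch below; the table name 't1' and the
row key are made-up placeholders.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class PublishedLocation {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("t1"))) {
      // Whatever location was last published to hbase:meta is what the
      // client acts on.
      HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes("row-0"));
      System.out.println(loc.getServerName());
    }
  }
}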


> Sure, we could add more RPC priorities, even more handler pools and
> additional queues for master requests to meta vs. user requests to meta.
> Maybe with that plus adding in regionserver groups we actually start to
> have something that comes close to what we already have today with meta on
> master.  But why should we have to add all that complexity?  None of this
> is an issue if master updates to meta are local and don't have to go
> through RPC.
>
>
(Old arguments: a single server carrying meta doesn't scale, etc.)

A new observation is that no work has been done to follow through on this
recasting of our deploy format, in which the master is inline with
reads/writes and the exclusive host of the hbase:meta region.
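
(Aside, for concreteness: the handler-pool tuning Gary mentions above is
roughly of this shape today; property names are from memory and the
counts arbitrary, so treat this as a sketch.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class HandlerTuning {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // General request handlers vs. the separate priority pool that serves
    // system-table (meta) traffic; isolating meta this way adds knobs
    // rather than removing the contention.
    conf.setInt("hbase.regionserver.handler.count", 60);
    conf.setInt("hbase.regionserver.metahandler.count", 30);
    System.out.println(conf.getInt("hbase.regionserver.metahandler.count", -1));
  }
}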


> > > # Master hosting the system tables locates the system tables as
> > > close as possible to the machine that will be mutating the data.
> > >
> > > Data locality is something that we all work for. Short circuit local
> > > reads, caching blocks in jvm, etc. Bringing data closer to the
> > > interested party has a long history of making things faster and
> > > better. Master is in charge of just about all mutations of all
> > > system tables. It's in charge of changing meta, changing acls,
> > > creating new namespaces, etc. So put the memstore as close as
> > > possible to the system that's going to mutate meta.
> > >
> >
> >
> > Above is fine except for the bit where we need to be able to field
> > reads. Let's distribute the data to be read over the cluster rather
> > than treat meta reads with kid gloves hosted on a 'special' server;
> > let these 'reads' be like any other read the cluster takes (see next
> > point).
> >
> >
> In my opinion, the real "special" part here is the master bit -- which I
> think we should be working to make less special and more just a normal bit
> of housekeeping spread across nodes -- not the regionserver role.  It only
> looks special right now because the evolution has stopped in the middle.  I
> really don't think enshrining master as a separate process is the right way
> forward for us.
>
>
I always liked this notion.

Still to be worked out is how Master and hbase:meta hosting would
interplay (would the RS designated Master also host hbase:meta? Would it
host hbase:meta exclusively, or would hbase:meta move with the Master
function?... [stuff we've talked about before]).



>
> >
> > > # If you want to make meta faster then moving it to other
> > > regionservers makes things worse.
> > >
> > > Meta can get pretty hot. Putting it with other regions that clients
> > > will be trying to access makes everything worse. It means that meta
> > > is competing with user requests. Either meta gets served and other
> > > requests don't, causing more requests to meta; or requests to user
> > > regions get served and other clients get starved.
> > > At FB we've seen read throughput to meta doubled or more by swapping
> > > it to master. Writes to meta are also much faster since there's no
> > > rpc hop, no queueing, no fighting with reads. So far it has been the
> > > single biggest thing to make meta faster.
> > >
> > >
> > Is this just because meta had a dedicated server?
> >
> >
> I'm sure that having dedicated resources for meta helps.  But I don't
> think that's sufficient.  The key is that master writes to meta are
> local, and do not have to contend with user requests to meta.
>
> It seems premature to be discussing dropping a working implementation which
> eliminates painful parts of distributed consensus, until we have a complete
> working alternative to evaluate.  Until then, why are we looking at
> features that are in use and work well?
>
>
>
How do we move forward here? The Pv2 master is almost done. An ITBLL
bakeoff of the new Pv2-based assign vs. a Master that exclusively hosts
hbase:meta?

St.Ack



