hbase-dev mailing list archives

From Gary Helmling <ghelml...@gmail.com>
Subject Re: [DISCUSS] No regions on Master node in 2.0
Date Mon, 25 Apr 2016 19:35:27 GMT
On Mon, Apr 25, 2016 at 11:20 AM Stack <stack@duboce.net> wrote:

> On Fri, Apr 8, 2016 at 1:42 AM, Elliott Clark <eclark@apache.org> wrote:
>
> > # Without meta on master, we double assign and lose data.
> >
> > That is currently a fact that I have seen over and over on multiple
> > loaded clusters. Weighing some abstract deployment clean-up against
> > losing data is a no-brainer for me. Master assignment, region split,
> > and region merge are all risky, and all places where HBase can lose
> > data. Meta being hosted on the master makes communication easier and
> > less flaky. Running ITBLL on a loop that creates a new table every
> > time, without meta on master everything will fail pretty reliably in
> > ~2 days. With meta on master, things pass MUCH more often.
> >
> >
> The above is a problem of branch-1?
>
> The discussion is what to do in 2.0, with the assumption that master
> state would be built on procedure v2, making most of the transitions now
> done over zk and hbase:meta local to the master instead, with only the
> final state published to a remote meta (an RPC, but if we can't make RPC
> work reliably in our distributed system, that's a bigger problem).
>
>
But making RPC work for assignment here is precisely the problem. There's
no reason the master should have to contend with user requests to meta in
order to make its updates. And until clients can actually see the change,
it doesn't really matter whether the master's state has been updated or not.
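
As a concrete illustration, here is a minimal Java sketch of that
client-visibility point. All names are made up for the example; nothing
here is an HBase API. From the client's side, an assignment only takes
effect once the new location is readable from meta, whatever the master's
internal state machine says.

    // Hypothetical sketch: the client-visible "commit point" of an
    // assignment is the meta read, not the master's internal transition.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class AssignmentVisibility {
        // Stand-in for the meta table: region name -> hosting server.
        static final Map<String, String> meta = new ConcurrentHashMap<>();

        // A client locating a region sees only what meta publishes.
        static String locate(String region) {
            return meta.get(region);
        }

        public static void main(String[] args) throws InterruptedException {
            meta.put("region-1", "rs-old");

            // The master may have finished its internal transition, but
            // until this write lands, clients still route to the old server.
            Thread masterWrite = new Thread(() -> meta.put("region-1", "rs-new"));
            masterWrite.start();

            // The client polls until the new location is visible in meta.
            while (!"rs-new".equals(locate("region-1"))) {
                Thread.sleep(1);
            }
            System.out.println("client sees: " + locate("region-1"));
        }
    }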

Sure, we could add more RPC priorities, even more handler pools, and
additional queues for master requests to meta vs. user requests to meta.
Maybe with that, plus regionserver groups, we start to approach something
close to what we already have today with meta on master. But why should we
have to add all that complexity? None of it is an issue if master updates
to meta are local and don't have to go through RPC.
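
To make that concrete, here is a rough sketch of the plumbing described
above, written against plain java.util.concurrent with made-up names (this
is not HBase's actual RPC scheduler): every extra tier is another pool,
another queue, and another set of knobs to tune.

    // Hypothetical sketch of tiered RPC dispatch: master-originated meta
    // writes get a dedicated handler pool so they don't queue behind user
    // traffic. Each new tier adds a pool, a queue, and tuning surface.
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class TieredRpcDispatcher {
        enum Priority { MASTER_META, USER_META, USER }

        private final ExecutorService masterMetaPool = Executors.newFixedThreadPool(2);
        private final ExecutorService userMetaPool = Executors.newFixedThreadPool(4);
        private final ExecutorService userPool = Executors.newFixedThreadPool(16);

        void dispatch(Priority priority, Runnable handler) {
            switch (priority) {
                case MASTER_META: masterMetaPool.execute(handler); break;
                case USER_META:   userMetaPool.execute(handler);   break;
                default:          userPool.execute(handler);       break;
            }
        }
    }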


>
> > # Master hosting the system tables locates the system tables as close as
> > possible to the machine that will be mutating the data.
> >
> > Data locality is something that we all work for: short-circuit local
> > reads, caching blocks in the JVM, etc. Bringing data closer to the
> > interested party has a long history of making things faster and
> > better. Master is in charge of just about all mutations of all system
> > tables. It's in charge of changing meta, changing acls, creating new
> > namespaces, etc. So put the memstore as close as possible to the
> > system that's going to mutate meta.
> >
>
>
> Above is fine, except for the bit where we need to be able to field
> reads. Let's distribute the data to be read over the cluster rather than
> treat meta reads with kid gloves, hosted on a 'special' server; let these
> 'reads' be like any other read the cluster takes (see next point).
>
>
In my opinion, the real "special" part here is the master bit -- which I
think we should be working to make less special and more just a normal bit
of housekeeping spread across nodes -- not the regionserver role.  It only
looks special right now because the evolution has stopped in the middle.  I
really don't think enshrining master as a separate process is the right way
forward for us.


>
> > # If you want to make meta faster then moving it to other regionservers
> > makes things worse.
> >
> > Meta can get pretty hot. Putting it with other regions that clients
> > will be trying to access makes everything worse: it means that meta is
> > competing with user requests. Either meta gets served and other
> > requests don't, or requests to user regions get served and clients of
> > meta get starved.
> > At FB we've seen read throughput to meta double or more by swapping it
> > to master. Writes to meta are also much faster since there's no rpc
> > hop, no queueing, no fighting with reads. So far it has been the
> > single biggest thing to make meta faster.
> >
> >
> Is this just because meta had a dedicated server?
>
>
I'm sure that having dedicated resources for meta helps. But I don't think
that's sufficient. The key is that master writes to meta are local, and do
not have to contend with user requests to meta.
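
A small Java sketch of that contrast, with hypothetical names (this is not
HBase code): with meta hosted on the master, the update is an in-process
call; with meta remote, every master write is serialized into a shared
call queue where it waits behind whatever user requests got there first.

    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.LinkedBlockingQueue;

    public class MetaWritePaths {
        // Stand-in for the meta region's store.
        static final Map<String, String> metaRegion = new ConcurrentHashMap<>();

        // Path 1: meta on master. The update is a local method call; no
        // RPC hop, no queueing, no contention with user reads.
        static void localUpdate(String row, String value) {
            metaRegion.put(row, value);
        }

        // Path 2: meta on a remote regionserver. The update is enqueued
        // behind user requests already waiting for a handler thread.
        static final BlockingQueue<Runnable> sharedCallQueue =
            new LinkedBlockingQueue<>();

        static void remoteUpdate(String row, String value)
                throws InterruptedException {
            sharedCallQueue.put(() -> metaRegion.put(row, value));
            // The write completes only after a handler drains the queue
            // past all earlier user requests.
        }
    }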

It seems premature to be discussing dropping a working implementation which
eliminates painful parts of distributed consensus before we have a complete
working alternative to evaluate. Until then, why are we looking at removing
features that are in use and work well?


