hbase-dev mailing list archives

From Jean-Marc Spaggiari <jean-m...@spaggiari.org>
Subject Re: [DISCUSS] No regions on Master node in 2.0
Date Wed, 16 Nov 2016 12:39:20 GMT
Hi all,

I'm not a committer, but I really love this discussion and I want to chime
in.

I saw 2 very interesting ideas reading between the lines. The first one
was to not have any master, but to have one RS making the master decisions
and therefore not hosting any region (or perhaps just the META). The
balancer might be able to take care of that. That would allow us to move
the master responsibility to any other RS, which would shed its regions and
pick up the META at some point. The concern is data locality when the
master changes to another host: META will not be local anymore until some
compaction occurs, and the same goes for the moved regions. However, from a
user standpoint, that would make everything very easy to manage.
Pros:
- No more master or worker roles. Any RegionServer can act as a master if
asked to (ZooKeeper elected)
- Worker nodes are usually bigger servers than the master. That would let a
bigger machine serve the META, so better performance
- Allows switching the master role anywhere at any time, since the META
does not necessarily have to follow right away.
Cons:
- One "master" (so a RegionServer) will run a DataNode underneath that will
not be used, or used only for the META.
- People with small HBase clusters might not want to dedicate one beefy RS
to serve only the META table... might be a waste given their usage.
- Performance impact (network reads) when switching the master, until META
is compacted back locally.
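The ZooKeeper-elected part of this first idea could be pictured with a toy
model like this (a minimal Python sketch of the election mechanics only;
the class and znode names are invented for illustration, this is not HBase
or ZooKeeper API code):

```python
# Toy model of a ZooKeeper-style leader election among RegionServers:
# each RS creates an ephemeral sequential znode, and the RS owning the
# lowest-numbered surviving znode acts as master.

class MasterElection:
    def __init__(self):
        self.seq = 0
        self.znodes = {}  # znode path -> region server name

    def join(self, rs):
        """Register an RS as a master candidate (ephemeral sequential znode)."""
        path = "/hbase/master-election/n_%010d" % self.seq
        self.seq += 1
        self.znodes[path] = rs
        return path

    def leader(self):
        """The RS holding the lowest-numbered znode is the elected master."""
        return self.znodes[min(self.znodes)]

    def session_expired(self, rs):
        """Ephemeral znodes disappear when the owner's session dies."""
        self.znodes = {p: s for p, s in self.znodes.items() if s != rs}
```

With rs1, rs2 and rs3 joined in that order, rs1 acts as master; if rs1
dies, rs2 takes over automatically. That is what makes "any RS can become
master" cheap on the election side; the expensive part, as noted in the
cons above, is moving the META and regaining locality.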

Another idea I'm reading here (between the lines again) is to have one RS
hosting ONLY the META. We keep the master roles as we have today, but the
balancer takes care of assigning only the META to the RS hosting it, with
all other regions going to other servers. I like this approach a lot
because:
- It would be very easy to implement. We just have to update the balancer.
- It allows small users to disable this feature and still let the META RS
host other regions too.
- It allows big users to separate the META onto a different server to
improve performance.
One con:
- The META is still not on the master, and every operation will have to go
over the network. But is that really an issue?
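The balancer change for this second approach could be sketched like this (a
toy Python model, not the real HBase LoadBalancer interface; the function
and region names are made up for illustration):

```python
# Toy model of the proposed balancer rule: the RS hosting hbase:meta is
# evacuated of user regions, which get spread over the other servers.

def rebalance(assignments, meta_host):
    """assignments: dict of region server name -> list of region names."""
    plan = {rs: list(regions) for rs, regions in assignments.items()}
    # Evict every user region from the meta host; it keeps only hbase:meta.
    evicted = [r for r in plan[meta_host] if r != "hbase:meta"]
    plan[meta_host] = [r for r in plan[meta_host] if r == "hbase:meta"]
    # Spread the evicted user regions round-robin over the other servers.
    others = [rs for rs in plan if rs != meta_host]
    for i, region in enumerate(evicted):
        plan[others[i % len(others)]].append(region)
    return plan
```

Small clusters could simply skip the eviction step and keep today's
behaviour, which would be the "disable this feature" knob mentioned above.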


Overall, I like the idea of losing the master and all servers being able
to take on any of the roles. But I think the 2nd approach might be easier
to implement and to understand.

My 2¢ opinion ;)

JMS


2016-11-16 6:56 GMT-05:00 宾莉金 or binlijin <binlijin@gmail.com>:

> Hosting meta on the machine that doesn't serve user regions would help to
> ensure that updates to meta have higher chance to succeed. But if that
> machine isn't Master, then we'd introduce yet one more service role to the
> deployment. And I'd say that different machine roles/service types required
> for HBase deployment is something we already have enough of.
>
> I think we can just change the balancer, and always move user regions from
> the regionserver hosting the meta region to other regionservers. At some
> points user regions may sit together with the meta region, but that does
> not matter; most of the time there will be only the meta region.
>
> 2016-11-16 19:06 GMT+08:00 Mikhail Antonov <olorinbant@gmail.com>:
>
> > (side note @Yu - FYI there has been a number of fixes/optimizations to
> > the way we cache and invalidate caches of region locations on the client
> > side in 1.3, see for example HBASE-15658, HBASE-15654).
> >
> > On the topic -
> >
> > Hosting meta on the machine that doesn't serve user regions would help to
> > ensure that updates to meta have higher chance to succeed. But if that
> > machine isn't Master, then we'd introduce yet one more service role to
> > the deployment. And I'd say that different machine roles/service types
> > required for HBase deployment is something we already have enough of.
> >
> > I think this discussion is still at the same point as it was back then -
> > it looks like we're essentially comparing (A) an existing feature that
> > works and has practical benefits (as noted above on the thread) to the
> > (B) different way of doing things that's not finalized / released yet
> > (please correct me if I'm wrong)?
> >
> > And assuming B is finalized, I'm not sure that it actually fully
> > addresses the problems that A addresses now. That makes me inclined to
> > think that removing option A before we know that the actual problems it
> > solves now are completely addressed by other means would put us in a bad
> > state.
> >
> > -Mikhail
> >
> > On Wed, Nov 16, 2016 at 2:13 AM, Yu Li <carp84@gmail.com> wrote:
> >
> > > Very late to the party +1 (Smile)
> > >
> > > We also offline discussed a standalone meta server here in Alibaba
> > > since we've observed crazily high QPS on meta caused by online machine
> > > learning workload, and in the discussion we also mentioned pros. and
> > > cons. of serving meta on HMaster. Since quite some pros. were already
> > > mentioned in the thread, I'd like to mention one cons. here: currently
> > > we could switch active master (almost) freely w/o affecting online
> > > service, so we could do some hot-fix on master. But if we carry the
> > > meta region on HMaster, the cost of switching master will increase a
> > > lot and the hot-switch may not be possible any more. Not sure whether
> > > this is an important thing for most users but still a point to share
> > > (Smile).
> > >
> > > And maybe another point for discussion: if not placed on HMaster,
> > > should we have a standalone meta server or at least provide such an
> > > option?
> > >
> > > Thanks.
> > >
> > > Best Regards,
> > > Yu
> > >
> > > On 16 November 2016 at 03:43, <toffer@ymail.com.invalid> wrote:
> > >
> > > > > In the absence of more information, intuition says master carries
> > > > > meta to avoid a whole class of problems.
> > > > Off-hand I think the class of problems we'll eliminate are problems
> > > > that are well understood and being constantly dealt with and hardened
> > > > to this day (i.e. puts to a region).
> > > > > I think we have to evaluate whether the new pv2 master works with
> > > > > remote meta updates and the fact that those updates can fail
> > > > > partially or succeed without the client getting the reply.
> > > > I think failing meta updates need to be dealt with either way. AFAIK
> > > > eventually procedure state will be stored in HDFS which is also a
> > > > distributed system.
> > > >
> > > >
> > > >
> > > >     On Saturday, November 12, 2016 9:45 AM, Andrew Purtell <
> > > > apurtell@apache.org> wrote:
> > > >
> > > >
> > > >  Thanks Stack and Enis. I concur, it's hard to say for those not
> > > > intimate with the new code.
> > > >
> > > > In the absence of more information, intuition says master carries
> > > > meta to avoid a whole class of problems.
> > > >
> > > > On Fri, Nov 11, 2016 at 3:27 PM, Enis Söztutar <enis@apache.org>
> > wrote:
> > > >
> > > > > Thanks Stack for reviving this.
> > > > >
> > > > > > How to move forward here? The Pv2 master is almost done. An
> > > > > > ITBLL bakeoff of new Pv2 based assign vs a Master that
> > > > > > exclusively hosts hbase:meta?
> > > > > >
> > > > > I think we have to evaluate whether the new pv2 master works with
> > > > > remote meta updates and the fact that those updates can fail
> > > > > partially or succeed without the client getting the reply, etc.
> > > > > Sorry, it has been some time since I've looked at the design.
> > > > > Actually what would be very good is to have a design overview /
> > > > > write up of the pv2 in its current / final form so that we can
> > > > > evaluate. Last time I looked there was no detailed design doc at
> > > > > all.
> > > > >
> > > > >
> > > > > > St.Ack
> > > > > >
> > > > > >
> > > >
> > > >
> > > > --
> > > > Best regards,
> > > >
> > > >   - Andy
> > > >
> > > > Problems worthy of attack prove their worth by hitting back. - Piet
> > Hein
> > > > (via Tom White)
> > > >
> > > >
> > > >
> > >
> >
> >
> >
> > --
> > Thanks,
> > Michael Antonov
> >
>
>
>
> --
> *Best Regards,*
>  lijin bin
>
