hbase-dev mailing list archives

From Yu Li <car...@gmail.com>
Subject Re: [DISCUSS] No regions on Master node in 2.0
Date Wed, 16 Nov 2016 16:05:23 GMT
@Mikhail:
Thank you sir for the reference to HBASE-15654, and good to know about the
efforts on optimizing the meta cache. But in our case there may be plenty
of new processes launched and accessing HBase at the same time, such as at
the very beginning of some big YARN batch job, or during the failover of
streaming jobs after several retries, so the cache does not exist yet and
meta will still come under heavy pressure. And as a side effect of
CallQueueTooBigException, more retry requests may be issued to the meta
region, causing a vicious circle. I'll open one or two JIRAs to try to
relieve the pain, but for a thorough solution maybe a standalone meta
server, with all handlers serving the meta region, would be better? And
yes, I do feel the pain of managing multiple roles, so this could be an
optional rather than mandatory choice?
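
As a rough sketch of the client-side mitigations above (the table name and
the retry/pause values below are placeholders, not recommendations), a job
could prewarm its region-location cache once at startup and soften its
retry settings, so a cold cache or a CallQueueTooBigException does not
immediately translate into extra load on meta:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaCacheWarmup {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Soften retries so failed calls don't immediately pile more
        // lookups onto meta (illustrative values only).
        conf.setInt("hbase.client.retries.number", 5);
        conf.setLong("hbase.client.pause", 200);

        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator =
                 conn.getRegionLocator(TableName.valueOf("my_table"))) {
          // One bulk lookup fills this client's region-location cache, so
          // the per-row requests of a freshly launched process don't each
          // have to go back to hbase:meta.
          locator.getAllRegionLocations();
        }
      }
    }

Of course this only helps the cold-start case for a single table, and
doesn't change the server-side handler question above.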

Sorry guys if I've disturbed the main topic here, but I do feel the
"standalone meta server" and "colocating meta region with HMaster" topics
are related. Let me know if any of you would prefer that I open another
thread for the standalone meta server topic. Thanks.

Best Regards,
Yu

On 16 November 2016 at 22:56, Bryan Beaudreault <bbeaudreault@hubspot.com>
wrote:

> I'd like to echo Yu Li. As an operator/user, it is very helpful to be able
> to run the masters separately. This allows for hot fixes, but also
> simplifies operations in times of crisis: if there's any real issue, we can
> restart the masters at will without any fear of impact.
>
> If the plan is to colocate the masters on the regionservers for simplicity,
> I can understand that as a way to make it easier to onboard new users. But
> please make it configurable, as those of us who have been doing it a while
> would probably like to keep the separation.
>
> Honestly, I'd love for other datastores to allow separation, such as Kafka,
> where we have annoyingly been hit by controller bugs a few times but have
> little recourse since there is no separation of controller and broker. So
> I'd rather not see HBase "regress" in this way.
>
> For us, we already have to run zookeeper separately, so we colocate our
> HMasters on our zookeeper nodes. If the HMaster role were taken away, we'd
> still need to run zookeeper, so we would have the same number of servers
> but would have lost the flexibility of running our masters separately from
> the regionservers.
>
> On Wed, Nov 16, 2016 at 8:29 AM Ted Yu <yuzhihong@gmail.com> wrote:
>
> > Gary has a JIRA, HBASE-16025, which would reduce the load on the server
> > hosting hbase:meta.
> >
> > FYI
> >
> > > On Nov 16, 2016, at 2:13 AM, Yu Li <carp84@gmail.com> wrote:
> > >
> > > Very late to the party +1 (Smile)
> > >
> > > We also discussed a standalone meta server offline here at Alibaba,
> > > since we've observed crazily high QPS on meta caused by online machine
> > > learning workloads, and in that discussion we also mentioned the pros
> > > and cons of serving meta on HMaster. Since quite a few pros have
> > > already been mentioned in this thread, I'd like to mention one con
> > > here: currently we can switch the active master (almost) freely w/o
> > > affecting online service, so we can do some hot-fixes on the master.
> > > But if we carry the meta region on HMaster, the cost of switching
> > > masters will increase a lot and the hot-switch may not be possible any
> > > more. Not sure whether this is an important thing for most users, but
> > > still a point to share (Smile).
> > >
> > > And maybe another point for discussion: if not placed on HMaster,
> > > should we have a standalone meta server, or at least provide such an
> > > option?
> > >
> > > Thanks.
> > >
> > > Best Regards,
> > > Yu
> > >
> > > On 16 November 2016 at 03:43, <toffer@ymail.com.invalid> wrote:
> > >
> > >>> In the absence of more information, intuition says master carries
> > >>> meta to avoid a whole class of problems.
> > >> Off-hand I think the class of problems we'll eliminate are problems
> > >> that are well understood, constantly dealt with, and hardened to this
> > >> day (i.e. puts to a region).
> > >>> I think we have to evaluate whether the new pv2 master works with
> > >>> remote meta updates and the fact that those updates can fail
> > >>> partially or succeed without the
> > >> I think failing meta updates need to be dealt with either way. AFAIK
> > >> eventually procedure state will be stored in HDFS, which is also a
> > >> distributed system.
> > >>
> > >>
> > >>
> > >>    On Saturday, November 12, 2016 9:45 AM, Andrew Purtell <
> > >> apurtell@apache.org> wrote:
> > >>
> > >>
> > >> Thanks Stack and Enis. I concur, it's hard to say for those not
> > >> intimate with the new code.
> > >>
> > >> In the absence of more information, intuition says master carries
> > >> meta to avoid a whole class of problems.
> > >>
> > >>> On Fri, Nov 11, 2016 at 3:27 PM, Enis Söztutar <enis@apache.org>
> > >>> wrote:
> > >>>
> > >>> Thanks Stack for reviving this.
> > >>>
> > >>>> How to move forward here? The Pv2 master is almost done. An ITBLL
> > >>>> bakeoff of new Pv2 based assign vs a Master that exclusively hosts
> > >>>> hbase:meta?
> > >>> I think we have to evaluate whether the new pv2 master works with
> > >>> remote meta updates and the fact that those updates can fail
> > >>> partially or succeed without the client getting the reply, etc.
> > >>> Sorry, it has been some time since I've looked at the design.
> > >>> Actually what would be very good is to have a design overview /
> > >>> write-up of the pv2 in its current / final form so that we can
> > >>> evaluate. Last time I looked there was no detailed design doc at all.
> > >>>
> > >>>
> > >>>> St.Ack
> > >>>>
> > >>>>
> > >>>>
> > >>>>>
> > >>
> > >>
> > >>
> > >> --
> > >> Best regards,
> > >>
> > >>  - Andy
> > >>
> > >> Problems worthy of attack prove their worth by hitting back. - Piet
> > >> Hein (via Tom White)
> > >>
> > >>
> > >>
> >
>
