zookeeper-user mailing list archives

From Ted Dunning <ted.dunn...@gmail.com>
Subject Re: Best practice
Date Fri, 21 Mar 2014 07:13:32 GMT
Bcache looks great if you don't lose power.  If you do, I hope they did a
*very* carefully thought out implementation.

See this article for why:
http://www.cse.ohio-state.edu/~qin/pub-papers/SSDFault-FAST13.pdf



On Thu, Mar 20, 2014 at 2:28 PM, Ishaaq Chandy <ishaaq@gmail.com> wrote:

> Interesting. Especially since at work we've been leaning towards using
> bcache for performance reasons to be able to deal with the flood of input
> we get - not so much for ZooKeeper but for Cassandra. Do you have any
> opinions about bcache?
>
> http://bcache.evilpiepirate.org/
>
>
> On 21 March 2014 07:14, Ted Dunning <ted.dunning@gmail.com> wrote:
>
> > SSD's also have the issue that it is common that recently written data is
> > not actually persisted.  Worse, new data might be persisted while slightly
> > older data is not.  These issues differ greatly across different hardware.
> >
> > Disks with write caching disabled are vastly better understood.
> >
> >
> >
> >
> > On Thu, Mar 20, 2014 at 1:09 PM, Patrick Hunt <phunt@apache.org> wrote:
> >
> > > My experience with SSDs has been negative. Write cliff issues
> > > eventually kick in and everything stops (if you put the txnlog on
> > > there). See my earlier messages to the list about this.
> > >
> > > Patrick
> > >
> > > On Thu, Mar 20, 2014 at 10:14 AM, Software Dev
> > > <static.void.dev@gmail.com> wrote:
> > > > I was thinking SSD for zookeeper but traditional for the log directory.
> > > > Memory wouldn't be a problem
> > > >
> > > >
> > > > On Wed, Mar 19, 2014 at 11:14 PM, Michi Mutsuzaki <michi@cs.stanford.edu> wrote:
> > > >
> > > >> It should be fine to consolidate so long as these applications don't
> > > >> overload the ZooKeeper cluster in terms of memory usage and read/write
> > > >> throughput. I would definitely test it first though :)
> > > >>
> > > >> On Wed, Mar 19, 2014 at 9:57 PM, Software Dev <static.void.dev@gmail.com> wrote:
> > > >> > We currently have 4 separate ZK clusters (hbase, kafka, solr cloud,
> > > >> > storm) with either 3 or 5 per cluster. Should we combine all clusters
> > > >> > into one and just serve each one up in their own chroot?
> > > >>
> > >
> >
>
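For the chroot approach raised at the bottom of the thread, here is a minimal
sketch of what the consolidated setup looks like from the client side. It
assumes hypothetical hosts zk1/zk2/zk3 and a /kafka chroot that has already
been created on the shared ensemble; appending a path to the connection string
makes the server resolve every path the client uses under that prefix, so each
application sees its own private namespace:

    import java.util.concurrent.CountDownLatch;

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ChrootedClientSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical ensemble; the "/kafka" suffix is the chroot and
            // must already exist as a znode on the shared cluster.
            String connectString = "zk1:2181,zk2:2181,zk3:2181/kafka";

            final CountDownLatch connected = new CountDownLatch(1);
            ZooKeeper zk = new ZooKeeper(connectString, 30000, new Watcher() {
                @Override
                public void process(WatchedEvent event) {
                    if (event.getState() == Event.KeeperState.SyncConnected) {
                        connected.countDown();
                    }
                }
            });
            connected.await();

            // Paths are resolved under /kafka on the server, so this lists
            // the children of /kafka even though the client asks for "/".
            System.out.println(zk.getChildren("/", false));
            zk.close();
        }
    }

The other applications would be handed the same ensemble with /hbase, /solr
and /storm suffixes. Michi's caveat still applies: the combined read/write
load and memory footprint all land on the one cluster, so test it first.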
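On the point about keeping the transaction log on a traditional disk:
ZooKeeper lets you split the snapshot directory from the txnlog directory via
dataDir and dataLogDir. One possible layout, with purely illustrative mount
points, is snapshots on the SSD and the txnlog on a dedicated spinning disk:

    # zoo.cfg sketch -- the mount points below are illustrative
    tickTime=2000
    initLimit=10
    syncLimit=5
    clientPort=2181

    # Snapshots (and, if dataLogDir is unset, the txnlog too) go here.
    dataDir=/ssd/zookeeper/data

    # Dedicated spinning disk for the transaction log, so fsync latency stays
    # predictable and aims to sidestep the write-cliff behavior Patrick describes.
    dataLogDir=/hdd/zookeeper/txnlog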
