zookeeper-user mailing list archives

From Kapil Thangavleu <kapil.f...@gmail.com>
Subject Re: Use a RAM database to store zookeeper data
Date Sun, 11 Sep 2011 10:37:12 GMT
Indeed, my primary reason for doing so was preserving SSD writes (probably
silly). The actual write-throughput gains against the API are minimal, going
by the total runtime of the test suite.
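
For concreteness, a minimal sketch of a raw-API write check (not the test
suite itself), assuming the standard ZooKeeper Java client; the connect
string, session timeout, payload size, and znode path are made up:

    import java.util.concurrent.CountDownLatch;

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class WriteThroughputCheck {
        public static void main(String[] args) throws Exception {
            final CountDownLatch connected = new CountDownLatch(1);
            // Hypothetical connect string; point it at whichever server you are testing.
            ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, new Watcher() {
                public void process(WatchedEvent event) {
                    if (event.getState() == Event.KeeperState.SyncConnected) {
                        connected.countDown();
                    }
                }
            });
            connected.await();

            byte[] payload = new byte[512]; // small payload, roughly "live"-data sized
            int n = 10000;
            long start = System.nanoTime();
            for (int i = 0; i < n; i++) {
                // Every create is appended to the transaction log before it is acked,
                // so this exercises the dataDir/dataLogDir write path in question.
                zk.create("/bench-", payload, ZooDefs.Ids.OPEN_ACL_UNSAFE,
                        CreateMode.PERSISTENT_SEQUENTIAL);
            }
            long elapsedMs = (System.nanoTime() - start) / 1000000L;
            System.out.printf("%d creates in %d ms (%.0f creates/sec)%n",
                    n, elapsedMs, n * 1000.0 / elapsedMs);
            zk.close();
        }
    }

Running it once with the data dirs on disk and once on tmpfs gives a direct
before/after comparison.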

cheers,

kapil

Excerpts from Ted Dunning's message of Fri Sep 09 09:04:05 -0700 2011:
> Also keep in mind that it probably won't change throughput all that much.
> 
> You should run tests before counting your chickens.
> 
> On Fri, Sep 9, 2011 at 7:37 AM, Kapil Thangavleu <kapil.foss@gmail.com> wrote:
> 
> > Excerpts from PADIOU Pierre-Marie (MORPHO)'s message of Fri Sep 09 04:50:19
> > -0700 2011:
> > > Hello,
> > >
> > > Suppose I've got only "live" data in ZooKeeper, but I want high write
> > > throughput. Is there any issue with using /dev/shm to store ZooKeeper
> > > data? (Assuming datalogs are properly cleaned up)
> > >
> > > Has anyone ever done that?
> > >
> > > Thanks,
> > >
> > > Pierre-Marie
> >
> > I occasionally do something similar for running large test suites: a tmpfs
> > mount for the data dirs, and /dev/null for log4j (roughly the setup
> > sketched after the quoted thread below). Keep in mind ZK is already keeping
> > the data in memory, so RAM usage is at least 2x the data set size with this.
> >
> > cheers,
> >
> > Kapil
> >
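
The setup referenced above ("a tmpfs mount for the data dirs, and /dev/null
for log4j") comes down to roughly the following; the mount point, size, and
paths are illustrative, dataDir/dataLogDir are the standard zoo.cfg keys, and
ROLLINGFILE is the rolling-file appender shipped in ZooKeeper's stock
conf/log4j.properties:

    # Either reuse /dev/shm directly or mount a dedicated tmpfs for the data dirs
    # (illustrative size; it must hold snapshots plus not-yet-purged txn logs):
    mount -t tmpfs -o size=512m tmpfs /mnt/zk-ram

    # zoo.cfg -- point both the snapshot dir and the transaction-log dir at RAM:
    dataDir=/mnt/zk-ram/data
    dataLogDir=/mnt/zk-ram/datalog

    # conf/log4j.properties -- one way to get the "/dev/null for log4j" effect:
    log4j.appender.ROLLINGFILE.File=/dev/null

Everything on the tmpfs disappears at reboot, which is why this only makes
sense for throwaway test runs or genuinely "live" data.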
